test(config): add migration tests for litellm provider
Some checks failed
CI / build-and-test (push) Failing after -32m22s
* Implement tests for migrating configuration from v1 to v2 for the litellm provider.
* Validate the structure and values of the migrated configuration.
* Ensure migration rejects newer versions of the configuration.

fix(validate): enhance AI provider validation logic

* Consolidate provider validation into a dedicated method.
* Ensure at least one provider is specified and validate its type.
* Check for required fields based on provider type.

fix(mcpserver): update tool set to use new enrichment tool

* Replace RetryMetadataTool with RetryEnrichmentTool in the ToolSet.

fix(tools): refactor tools to use embedding and metadata runners

* Update tools to utilize EmbeddingRunner and MetadataRunner instead of Provider.
* Adjust method calls to align with the new runner interfaces.
This commit is contained in:
48
README.md
@@ -244,12 +244,24 @@ Link existing skills and guardrails to a project so they are automatically avail

Config is YAML-driven. Copy `configs/config.example.yaml` and set:

- `database.url` — Postgres connection string
- `auth.mode` — `api_keys` or `oauth_client_credentials`
- `auth.keys` — API keys for MCP access via `x-brain-key` or `Authorization: Bearer <key>` when `auth.mode=api_keys`
- `auth.oauth.clients` — client registry when `auth.mode=oauth_client_credentials`
- `auth.keys` — static API keys for MCP access via `x-brain-key` or `Authorization: Bearer <key>`
- `auth.oauth.clients` — optional OAuth client credentials registry
- `ai.providers` — named provider definitions (`litellm`, `ollama`, `openrouter`)
- `ai.embeddings.primary` / `ai.metadata.primary` — primary role targets (`provider` + `model`)
- `ai.embeddings.fallbacks` / `ai.metadata.fallbacks` — sequential fallback targets
- `mcp.version` is build-generated and should not be set in config

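Taken together, a v2-style config using these keys might look like the following sketch. The provider name `proxy`, the model names, and the URL/key values are illustrative placeholders, not project defaults:

```yaml
database:
  url: "postgres://user:pass@localhost:5432/amcs?sslmode=disable"

auth:
  mode: "api_keys"
  keys:
    - "example-key"

ai:
  providers:
    proxy:
      type: "litellm"
      base_url: "http://localhost:4000"
      api_key: "sk-example"
  embeddings:
    primary:
      provider: "proxy"
      model: "text-embedding-3-small"
    fallbacks:
      - provider: "proxy"
        model: "nomic-embed-text"
  metadata:
    primary:
      provider: "proxy"
      model: "llama3.2"
```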
**OAuth Client Credentials flow** (`auth.mode=oauth_client_credentials`):

Config schema is versioned. Current schema version is `2`.

Use the migration helper to rewrite legacy configs in-place:

```bash
go run ./cmd/amcs-migrate-config --config ./configs/dev.yaml
```

Use `--dry-run` to print migrated YAML without writing.
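Per the commit above, migration rejects configs whose schema version is newer than the tool understands, rather than guessing. A minimal Go sketch of that version gate (function and constant names are illustrative, not the project's API):

```go
package main

import (
	"errors"
	"fmt"
)

const currentSchemaVersion = 2

// migrateVersion sketches the gate: legacy versions step forward one
// schema at a time, the current version is a no-op, and anything newer
// than the tool understands is rejected outright.
func migrateVersion(v int) (int, error) {
	switch {
	case v < 1:
		return 0, errors.New("unknown schema version")
	case v == currentSchemaVersion:
		return v, nil // already current: nothing to do
	case v > currentSchemaVersion:
		return 0, fmt.Errorf("config schema version %d is newer than supported version %d", v, currentSchemaVersion)
	default:
		return migrateVersion(v + 1) // apply the v -> v+1 step, then continue
	}
}

func main() {
	for _, v := range []int{1, 2, 3} {
		got, err := migrateVersion(v)
		fmt.Println(v, got, err)
	}
}
```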

**OAuth Client Credentials flow**:

1. Obtain a token — `POST /oauth/token` (public, no auth required):

```
@@ -267,8 +279,9 @@ Config is YAML-driven. Copy `configs/config.example.yaml` and set:
```

Alternatively, pass `client_id` and `client_secret` as body parameters instead of `Authorization: Basic`. Direct `Authorization: Basic` credential validation on the MCP endpoint is also supported as a fallback (no token required).
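For the `Authorization: Basic` variant, the header value is just the base64 of `client_id:client_secret`, paired with a `grant_type=client_credentials` form body. A Go sketch with hypothetical credentials (helper names are illustrative, not the project's API):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
)

// basicAuth builds the HTTP Basic Authorization header value:
// "Basic " + base64("client_id:client_secret").
func basicAuth(clientID, clientSecret string) string {
	return "Basic " + base64.StdEncoding.EncodeToString([]byte(clientID+":"+clientSecret))
}

// tokenForm builds the form body for the client_credentials grant.
func tokenForm() string {
	return url.Values{"grant_type": {"client_credentials"}}.Encode()
}

func main() {
	// Hypothetical credentials; real ones come from auth.oauth.clients.
	fmt.Println("Authorization:", basicAuth("my-client", "my-secret"))
	fmt.Println("Body:", tokenForm())
}
```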

- `ai.litellm.base_url` and `ai.litellm.api_key` — LiteLLM proxy
- `ai.ollama.base_url` and `ai.ollama.api_key` — Ollama local or remote server
- `AMCS_LITELLM_BASE_URL` / `AMCS_LITELLM_API_KEY` override all configured LiteLLM providers
- `AMCS_OLLAMA_BASE_URL` / `AMCS_OLLAMA_API_KEY` override all configured Ollama providers
- `AMCS_OPENROUTER_API_KEY` overrides all configured OpenRouter providers

See `llm/plan.md` for an audited high-level status summary of the original implementation plan, and `llm/todo.md` for the audited backfill/fallback follow-up status.

@@ -643,27 +656,32 @@ Notes:

## Ollama

Set `ai.provider: "ollama"` to use a local or self-hosted Ollama server through its OpenAI-compatible API.
Set your role targets to an Ollama provider to use a local or self-hosted Ollama server through its OpenAI-compatible API.

Example:

```yaml
 ai:
-  provider: "ollama"
+  providers:
+    local:
+      type: "ollama"
+      base_url: "http://localhost:11434/v1"
+      api_key: "ollama"
+      request_headers: {}
   embeddings:
     model: "nomic-embed-text"
     dimensions: 768
+    primary:
+      provider: "local"
+      model: "nomic-embed-text"
   metadata:
     model: "llama3.2"
     temperature: 0.1
-  ollama:
-    base_url: "http://localhost:11434/v1"
-    api_key: "ollama"
-    request_headers: {}
+    primary:
+      provider: "local"
+      model: "llama3.2"
```

Notes:

- For remote Ollama servers, point `ai.ollama.base_url` at the remote `/v1` endpoint.
- For remote Ollama servers, point `ai.providers.<name>.base_url` at the remote `/v1` endpoint.
- The client always sends Bearer auth; Ollama ignores it locally, so `api_key: "ollama"` is a safe default.
- `ai.embeddings.dimensions` must match the embedding model you actually use, or startup will fail the database vector-dimension check.
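The vector-dimension check mentioned in the last note can be sketched as a startup guard comparing the configured `ai.embeddings.dimensions` against the vector column dimension provisioned in Postgres (names are illustrative, not the project's API):

```go
package main

import "fmt"

// checkDimensions fails fast when the configured embedding dimension
// does not match the database's vector column dimension, since vectors
// of the wrong width cannot be stored or compared.
func checkDimensions(configured, dbColumn int) error {
	if configured != dbColumn {
		return fmt.Errorf("embedding dimensions %d do not match database vector dimension %d", configured, dbColumn)
	}
	return nil
}

func main() {
	fmt.Println(checkDimensions(768, 768))  // nomic-embed-text against a 768-dim column: ok
	fmt.Println(checkDimensions(1536, 768)) // mismatched model: startup fails
}
```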