docs: audit plan and todo status
@@ -262,7 +262,7 @@ Alternatively, pass `client_id` and `client_secret` as body parameters instead o
- `ai.litellm.base_url` and `ai.litellm.api_key` — LiteLLM proxy
- `ai.ollama.base_url` and `ai.ollama.api_key` — Ollama local or remote server

See `llm/plan.md` for full architecture and implementation plan.

See `llm/plan.md` for an audited high-level status summary of the original implementation plan, and `llm/todo.md` for the audited backfill/fallback follow-up status.
## Backfill

llm/plan.md (1901 lines) — file diff suppressed because it is too large.

llm/todo.md (500 lines):
@@ -1,450 +1,126 @@
# AMCS TODO

## Auto Embedding Backfill Tool

## Embedding Backfill and Text-Search Fallback Audit

## Objective

This file originally described the planned `backfill_embeddings` work and semantic-to-text fallback behavior. Most of that work is now implemented. This document now tracks what landed, what still needs verification, and what follow-up work remains.

Add an MCP tool that automatically backfills missing embeddings for existing thoughts so semantic search keeps working after:

* embedding model changes
* earlier capture or update failures
* import or migration of raw thoughts without vectors

The tool should be safe to run repeatedly, should not duplicate work, and should make it easy to restore semantic coverage without rewriting existing thoughts.

For current operator-facing behavior, prefer `README.md`.

---

## Desired outcome

## Status summary

After this work:

### Implemented

* raw thought text remains the source of truth
* embeddings are treated as derived data per model
* search continues to query only embeddings from the active embedding model
* when no embeddings exist for the active model and scope, search falls back to Postgres text search
* operators or MCP clients can trigger a backfill for the current model
* AMCS can optionally auto-run a limited backfill pass on startup or on a schedule later

The main work described in this file is already present in the repo:

- `backfill_embeddings` MCP tool exists
- missing-embedding selection helpers exist in the store layer
- embedding upsert helpers exist in the store layer
- semantic retrieval falls back to Postgres full-text search when the active model has no embeddings in scope
- fallback behavior is wired into the main query-driven tools
- a full-text index migration exists
- an optional automatic backfill runner exists in the config/startup flow
- retry and reparse maintenance tooling also exists around metadata quality

### Still worth checking or improving

The broad feature is done, but some implementation-depth items are still worth tracking:

- test coverage around fallback/backfill behavior
- whether configured backfill batching is used consistently end-to-end
- observability depth beyond logs
- response visibility into which retrieval mode was used

---

## Why this is needed

## What is already implemented

Current search behavior is model-specific:

### Backfill tool

* query text is embedded with the configured provider model
* results are filtered by `embeddings.model`
* thoughts with no embedding for that model are invisible to semantic search

This means a model switch leaves old thoughts searchable only by listing and metadata filters until new embeddings are generated.

Implemented:

- `backfill_embeddings`
- project scoping
- archived-thought filtering
- age filtering
- dry-run mode
- bounded concurrency
- best-effort per-item failure handling
- idempotent embedding upsert behavior

To avoid that dead zone, AMCS should also support a lexical fallback path backed by native Postgres text-search indexing.

### Search fallback

Implemented:

- full-text fallback when no embeddings exist for the active model in scope
- fallback helper shared by query-based tools
- full-text index migration on thought content

### Tools using fallback

Implemented fallback coverage for:

- `search_thoughts`
- `recall_context`
- `get_project_context` when a query is provided
- `summarize_thoughts` when a query is provided
- semantic neighbors in `related_thoughts`

### Optional automatic behavior

Implemented:

- config-gated startup backfill pass
- config-gated periodic backfill loop

---

## Tool proposal

## Remaining follow-ups

### New MCP tool

`backfill_embeddings`

Purpose:

* find thoughts missing an embedding for the active model
* generate embeddings in batches
* write embeddings with upsert semantics
* report counts for scanned, embedded, skipped, and failed thoughts

### 1. Expose retrieval mode in responses

Still outstanding.

Why it matters:

- callers currently benefit from fallback automatically
- but debugging is easier if responses explicitly say whether retrieval was `semantic` or `text`

Suggested shape:

- add a machine-readable field such as `retrieval_mode: semantic|text`
- keep it consistent across all query-based tools that use shared retrieval logic

### Input

```json
{
  "project": "optional project name or id",
  "limit": 100,
  "batch_size": 20,
  "include_archived": false,
  "older_than_days": 0,
  "dry_run": false
}
```

### 2. Verify and improve tests

Still worth auditing.

Notes:

* `project` scopes the backfill to a project when desired
* `limit` caps total thoughts processed in one tool call
* `batch_size` controls provider load
* `include_archived` defaults to `false`
* `older_than_days` is optional and mainly useful to avoid racing with fresh writes
* `dry_run` returns counts and sample IDs without calling the embedding provider

Recommended checks:

- no-embedding scope falls back to text search
- project-scoped fallback only searches within project scope
- archived thoughts remain excluded by default
- `related_thoughts` falls back correctly when semantic vectors are unavailable
- backfill creates embeddings that later restore semantic search

### Output

```json
{
  "model": "openai/text-embedding-3-small",
  "scanned": 100,
  "embedded": 87,
  "skipped": 13,
  "failed": 0,
  "dry_run": false,
  "failures": []
}
```

Optional:

* include a short `next_cursor` later if we add cursor-based paging

### 3. Re-embedding / migration ergonomics

Still optional future work.

Potential additions:

- count missing embeddings by project
- add `missing_embeddings` stats to `thought_stats`
- add a controlled re-embed or reindex flow for model migrations

---

## Backfill behavior

## Notes for maintainers

Do not read this file as an untouched future roadmap item anymore. The repo has already implemented the core work described here.

### Core rules

* Backfill only when a thought is missing an embedding row for the active model.
* Do not recompute embeddings that already exist for that model unless an explicit future `force` flag is added.
* Keep embeddings per model side by side in the existing `embeddings` table.
* Use `insert ... on conflict (thought_id, model) do update` so retries stay idempotent.

### Selection query

Add a store query that returns thoughts where no embedding exists for the requested model.

Shape:

* from `thoughts t`
* left join `embeddings e on e.thought_id = t.guid and e.model = $model`
* filter `e.id is null`
* optional filters for project, archived state, age
* order by `t.created_at asc`
* limit by requested batch

Ordering oldest first is useful because it steadily restores long-tail recall instead of repeatedly revisiting recent writes.

### Processing loop

For each selected thought:

1. read `content`
2. call `provider.Embed(content)`
3. upsert the embedding row for `thought_id + model`
4. continue on per-item failure and collect errors

Use bounded concurrency instead of fully serial processing so large backfills complete in reasonable time without overwhelming the provider.

Recommended first pass:

* one tool invocation handles batches internally
* concurrency defaults to a small fixed number like `4`
* `batch_size` and concurrency are kept as server-side defaults at first, even if only `limit` is exposed in MCP input

If more backfill/fallback work is planned, append it as concrete follow-ups against the current codebase rather than preserving the old speculative rollout order.

---

## Search fallback behavior

## Historical note

The original long-form proposal was replaced during the repo audit because it described work that is now largely complete and was causing issue/document drift.

### Goal

If semantic retrieval cannot run because no embeddings exist for the active model in the selected scope, AMCS should fall back to Postgres text search instead of returning empty semantic results by default.

### Fallback rules

* If embeddings exist for the active model, keep using vector search as the primary path.
* If no embeddings exist for the active model in scope, run Postgres text search against raw thought content.
* Fallback should apply to:
  * `search_thoughts`
  * `recall_context`
  * `get_project_context` when `query` is provided
  * `summarize_thoughts` when `query` is provided
  * semantic neighbors in `related_thoughts`
* Fallback should not mutate data. It is retrieval-only.
* Backfill remains the long-term fix; text search is the immediate safety net.

### Postgres search approach

Add a native full-text index on thought content and query it with a matching text-search configuration.

Recommended first pass:

* add a migration creating a GIN index on `to_tsvector('simple', content)`
* use `websearch_to_tsquery('simple', $query)` for user-entered text
* rank results with `ts_rank_cd(...)`
* continue excluding archived thoughts by default
* continue honoring project scope

Using the `simple` configuration is a safer default for mixed prose, identifiers, and code-ish text than a language-specific stemmer.

### Store additions for fallback

Add store methods such as:

* `HasEmbeddingsForModel(ctx, model string, projectID *uuid.UUID) (bool, error)`
* `SearchThoughtsText(ctx, query string, limit int, projectID *uuid.UUID, excludeID *uuid.UUID) ([]SearchResult, error)`

These should be used by a shared retrieval helper in `internal/tools` so semantic callers degrade consistently.

### Notes on ranking

Text-search scores will not be directly comparable to vector similarity scores.

That is acceptable in v1 because:

* each request will use one retrieval mode at a time
* fallback is only used when semantic search is unavailable
* response payloads can continue to return `similarity` as a generic relevance score

---
## Auto behavior

The user asked for an auto backfill tool, so define two layers:

### Layer 1: explicit MCP tool

Ship `backfill_embeddings` first.

This is the lowest-risk path because:

* it is observable
* it is rate-limited by the caller
* it avoids surprise provider cost on startup

### Layer 2: optional automatic runner

Add a config-gated background runner after the tool exists and is proven stable.

Config sketch:

```yaml
backfill:
  enabled: false
  run_on_startup: false
  interval: "15m"
  batch_size: 20
  max_per_run: 100
  include_archived: false
```

Behavior:

* on startup, if enabled and `run_on_startup=true`, run a small bounded backfill pass
* if `interval` is set, periodically backfill missing embeddings for the active configured model
* log counts and failures, but never block server startup on backfill failure

This keeps the first implementation simple while still giving us a clean path to true automation.

---

## Store changes

Add store methods focused on missing-model coverage.

### New methods

* `ListThoughtsMissingEmbedding(ctx, model string, limit int, projectID *uuid.UUID, includeArchived bool, olderThanDays int) ([]Thought, error)`
* `UpsertEmbedding(ctx, thoughtID uuid.UUID, model string, embedding []float32) error`

### Optional later methods

* `CountThoughtsMissingEmbedding(ctx, model string, projectID *uuid.UUID, includeArchived bool) (int, error)`
* `ListThoughtIDsMissingEmbeddingPage(...)` for cursor-based paging on large datasets

### Why separate `UpsertEmbedding`

`InsertThought` and `UpdateThought` already contain embedding upsert logic, but a dedicated helper will:

* reduce duplication
* let backfill avoid full thought updates
* make future re-embedding jobs cleaner

---

## Tooling changes

### New file

`internal/tools/backfill.go`

Responsibilities:

* parse input
* resolve the project if provided
* select missing thoughts
* run bounded embedding generation
* record per-item failures without aborting the whole batch
* return summary counts

### MCP registration

Add the tool to:

* `internal/mcpserver/server.go`
* `internal/mcpserver/schema.go` and tests if needed
* `internal/app/app.go` wiring

Suggested tool description:

* `Generate missing embeddings for stored thoughts using the active embedding model.`

---

## Config changes

No config is required for the first manual tool beyond the existing embedding provider settings.

For the later automatic runner, add:

* `backfill.enabled`
* `backfill.run_on_startup`
* `backfill.interval`
* `backfill.batch_size`
* `backfill.max_per_run`
* `backfill.include_archived`

Validation rules:

* `batch_size > 0`
* `max_per_run >= batch_size`
* `interval` must parse when provided
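
The validation rules translate directly into a small check; the struct below mirrors the sketched `backfill.*` keys and its field names are assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// BackfillConfig mirrors the sketched backfill.* config keys.
type BackfillConfig struct {
	Enabled         bool
	RunOnStartup    bool
	Interval        string // e.g. "15m"; empty disables the periodic loop
	BatchSize       int
	MaxPerRun       int
	IncludeArchived bool
}

// Validate enforces: batch_size > 0, max_per_run >= batch_size, and
// interval must parse as a duration when provided.
func (c BackfillConfig) Validate() error {
	if c.BatchSize <= 0 {
		return fmt.Errorf("backfill.batch_size must be > 0")
	}
	if c.MaxPerRun < c.BatchSize {
		return fmt.Errorf("backfill.max_per_run must be >= backfill.batch_size")
	}
	if c.Interval != "" {
		if _, err := time.ParseDuration(c.Interval); err != nil {
			return fmt.Errorf("backfill.interval: %w", err)
		}
	}
	return nil
}

func main() {
	ok := BackfillConfig{Interval: "15m", BatchSize: 20, MaxPerRun: 100}
	bad := BackfillConfig{BatchSize: 20, MaxPerRun: 5}
	fmt.Println(ok.Validate() == nil, bad.Validate() != nil)
}
```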

---

## Failure handling

The backfill tool should be best-effort, not all-or-nothing.

Rules:

* one thought failure does not abort the full run
* provider errors are captured and counted
* database upsert failures are captured and counted
* the final tool response includes truncated failure details
* full details go to logs

Failure payloads should avoid returning raw thought content to the caller if that would create noisy or sensitive responses. Prefer thought IDs plus short error strings.
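
A minimal sketch of that payload policy, with illustrative type and field names:

```go
package main

import "fmt"

// itemFailure carries only the thought ID and a short error string —
// never raw thought content.
type itemFailure struct {
	ThoughtID string `json:"thought_id"`
	Error     string `json:"error"`
}

// truncateFailures caps what goes into the tool response; the full list
// is meant for logs.
func truncateFailures(all []itemFailure, max int) []itemFailure {
	if len(all) <= max {
		return all
	}
	return all[:max]
}

func main() {
	all := []itemFailure{
		{"t-1", "provider timeout"},
		{"t-2", "upsert failed"},
		{"t-3", "provider timeout"},
	}
	fmt.Println(len(truncateFailures(all, 2)))
}
```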

---

## Observability

Add structured logs for:

* selected model
* project scope
* scan count
* success count
* failure count
* duration

Later, metrics can include:

* `amcs_backfill_runs_total`
* `amcs_backfill_embeddings_total`
* `amcs_backfill_failures_total`
* `amcs_thoughts_missing_embeddings`

---

## Concurrency and rate limiting

Keep the first version conservative.

Plan:

* use a worker pool with a small fixed concurrency
* keep batch sizes small by default
* stop fetching new work once `limit` is reached
* respect `ctx` cancellation so long backfills can be interrupted cleanly

Do not add provider-specific rate-limit logic in v1 unless real failures show it is needed.

---

## Security and safety

* Reuse existing MCP auth.
* Do not expose a broad `force=true` option in v1.
* Default to non-archived thoughts only.
* Do not mutate raw thought text or metadata during backfill.
* Treat embeddings as derived data that may be regenerated safely.

---

## Testing plan

### Store tests

Add tests for:

* listing thoughts missing embeddings for a model
* project-scoped missing-embedding queries
* archived thought filtering
* idempotent upsert behavior

### Tool tests

Add tests for:

* dry-run mode
* successful batch embedding
* partial provider failures
* empty result set
* project resolution
* context cancellation

### Integration tests

Add a flow covering:

1. create thoughts without embeddings for a target model
2. run `backfill_embeddings`
3. confirm rows exist in `embeddings`
4. confirm `search_thoughts` can now retrieve them when using that model

### Fallback search tests

Add coverage for:

* no embeddings for model -> `search_thoughts` uses Postgres text search
* project-scoped queries only search matching project thoughts
* archived thoughts stay excluded by default
* `related_thoughts` falls back to text-search neighbors when semantic vectors are unavailable
* once embeddings exist, semantic search remains the primary path

---

## Rollout order

1. Add store helpers for missing-embedding selection and embedding upsert.
2. Add a Postgres full-text index migration and text-search store helpers.
3. Add shared semantic-or-text fallback retrieval logic for query-based tools.
4. Add the `backfill_embeddings` MCP tool and wire it into the server.
5. Add unit and integration tests.
6. Document usage in `README.md`.
7. Add an optional background auto-runner behind config.
8. Consider a future `force` or `reindex_model` path only after v1 is stable.

---

## Open questions

* Should the tool expose `batch_size` to clients, or should batching stay internal?
* Should the first version support only the active model, or allow a `model` override for admins?
* Should archived thoughts be backfilled by default during startup jobs but not MCP calls?
* Do we want a separate CLI/admin command for large one-time reindex jobs outside MCP?

Recommended answers for v1:

* keep batching mostly internal
* use only the active configured model
* exclude archived thoughts by default everywhere
* postpone a dedicated CLI until volume justifies it

---

## Nice follow-ups

* add a `missing_embeddings` stat to `thought_stats`
* expose a read-only tool for counting missing embeddings by project
* add a re-embed path for migrating from one model to another in controlled waves
* add metadata extraction backfill as a separate job if imported content often lacks metadata
* expose the retrieval mode in responses for easier debugging of semantic vs text fallback

If needed, recover the older version from git history.