Compare commits: 3 commits, `structured` → `main`

| SHA1 |
|---|
| `a6165a0f2e` |
| `b6e156011f` |
| `4d107cb87e` |

`README.md` (`@@ -1,19 +1,669 @@`)
# Avalon Memory Crystal Server (amcs)

A Go MCP server for capturing and retrieving thoughts, memory, and project context. Exposes tools over Streamable HTTP, backed by Postgres with pgvector for semantic search.

## What it does

- **Capture** thoughts with automatic embedding and metadata extraction
- **Search** thoughts semantically via vector similarity
- **Organise** thoughts into projects and retrieve full project context
- **Summarise** and recall memory across topics and time windows
- **Link** related thoughts and traverse relationships

## Stack

- Go — MCP server over Streamable HTTP
- Postgres + pgvector — storage and vector search
- LiteLLM — primary hosted AI provider (embeddings + metadata extraction)
- OpenRouter — default upstream behind LiteLLM
- Ollama — supported local or self-hosted OpenAI-compatible provider

## Tools

| Tool | Purpose |
|---|---|
| `capture_thought` | Store a thought with embedding and metadata |
| `search_thoughts` | Semantic similarity search |
| `list_thoughts` | Filter thoughts by type, topic, person, date |
| `thought_stats` | Counts and top topics/people |
| `get_thought` | Retrieve a thought by ID |
| `update_thought` | Patch content or metadata |
| `delete_thought` | Hard delete |
| `archive_thought` | Soft delete |
| `create_project` | Register a named project |
| `list_projects` | List projects with thought counts |
| `get_project_context` | Recent + semantic context for a project; uses explicit `project` or the active session project |
| `set_active_project` | Set session project scope; requires a stateful MCP session |
| `get_active_project` | Get current session project |
| `summarize_thoughts` | LLM prose summary over a filtered set |
| `recall_context` | Semantic + recency context block for injection |
| `link_thoughts` | Create a typed relationship between thoughts |
| `related_thoughts` | Explicit links + semantic neighbours |
| `upload_file` | Stage a file from a server-side path or base64 and get an `amcs://files/{id}` resource URI |
| `save_file` | Store a file (base64 or resource URI) and optionally link it to a thought |
| `load_file` | Retrieve a stored file by ID; returns metadata, base64 content, and an embedded MCP binary resource |
| `list_files` | Browse stored files by thought, project, or kind |
| `backfill_embeddings` | Generate missing embeddings for stored thoughts |
| `reparse_thought_metadata` | Re-extract metadata from thought content |
| `retry_failed_metadata` | Retry pending/failed metadata extraction |
| `add_maintenance_task` | Create a recurring or one-time home maintenance task |
| `log_maintenance` | Log completed maintenance; updates next due date |
| `get_upcoming_maintenance` | List maintenance tasks due within the next N days |
| `search_maintenance_history` | Search the maintenance log by task name, category, or date range |
| `save_chat_history` | Save chat messages with optional title, summary, channel, agent, and project |
| `get_chat_history` | Fetch chat history by UUID or session_id |
| `list_chat_histories` | List chat histories; filter by project, channel, agent_id, session_id, or days |
| `delete_chat_history` | Delete a chat history by id |
| `add_skill` | Store an agent skill (instruction or capability prompt) |
| `remove_skill` | Delete an agent skill by id |
| `list_skills` | List all agent skills, optionally filtered by tag |
| `add_guardrail` | Store an agent guardrail (constraint or safety rule) |
| `remove_guardrail` | Delete an agent guardrail by id |
| `list_guardrails` | List all agent guardrails, optionally filtered by tag or severity |
| `add_project_skill` | Link a skill to a project; pass `project` if client is stateless |
| `remove_project_skill` | Unlink a skill from a project; pass `project` if client is stateless |
| `list_project_skills` | Skills for a project; pass `project` if client is stateless |
| `add_project_guardrail` | Link a guardrail to a project; pass `project` if client is stateless |
| `remove_project_guardrail` | Unlink a guardrail from a project; pass `project` if client is stateless |
| `list_project_guardrails` | Guardrails for a project; pass `project` if client is stateless |
| `get_version_info` | Build version, commit, and date |
| `describe_tools` | List all available MCP tools with names, descriptions, categories, and model-authored usage notes; call this at the start of a session to orient yourself |
| `annotate_tool` | Persist your own usage notes for a specific tool; notes are returned by `describe_tools` in future sessions |
## Self-Documenting Tools

AMCS includes a built-in tool directory that models can read and annotate.

**`describe_tools`** returns every registered tool with its name, description, category, and any model-written notes. Call it with no arguments to get the full list, or filter by category:

```json
{ "category": "thoughts" }
```

Available categories: `system`, `thoughts`, `projects`, `files`, `admin`, `maintenance`, `skills`, `chat`, `meta`.

**`annotate_tool`** lets a model write persistent usage notes against a tool name. Notes survive across sessions and are returned by `describe_tools`:

```json
{ "tool_name": "capture_thought", "notes": "Always pass project explicitly — session state is not reliable in this client." }
```

Pass an empty string to clear notes. The intended workflow is:

1. At the start of a session, call `describe_tools` to discover tools and read accumulated notes.
2. As you learn something non-obvious about a tool — a gotcha, a workflow pattern, a required field ordering — call `annotate_tool` to record it.
3. Future sessions receive the annotation automatically via `describe_tools`.
## MCP Error Contract

AMCS returns structured JSON-RPC errors for common MCP failures. Clients should branch on both `error.code` and `error.data.type` instead of parsing the human-readable message.

### Stable error codes

| Code | `data.type` | Meaning |
|---|---|---|
| `-32602` | `invalid_arguments` | MCP argument/schema validation failed before the tool handler ran |
| `-32602` | `invalid_input` | Tool-level input validation failed inside the handler |
| `-32050` | `session_required` | Tool requires a stateful MCP session |
| `-32051` | `project_required` | No explicit `project` was provided and no active session project was available |
| `-32052` | `project_not_found` | The referenced project does not exist |
| `-32053` | `invalid_id` | A UUID-like identifier was malformed |
| `-32054` | `entity_not_found` | A referenced entity such as a thought or contact does not exist |
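Clients that branch on these codes can mirror the table above in a single lookup. A minimal Go sketch; the map, the names, and the `retryable` policy are illustrative, not part of the AMCS API:

```go
package main

import "fmt"

// amcsErrorTypes mirrors the stable error-code table:
// JSON-RPC error code -> possible error.data.type values.
var amcsErrorTypes = map[int][]string{
	-32602: {"invalid_arguments", "invalid_input"},
	-32050: {"session_required"},
	-32051: {"project_required"},
	-32052: {"project_not_found"},
	-32053: {"invalid_id"},
	-32054: {"entity_not_found"},
}

// retryable reports whether the client can recover by supplying
// more context (an example policy, not server behaviour).
func retryable(dataType string) bool {
	switch dataType {
	case "project_required", "session_required":
		return true
	}
	return false
}

func main() {
	fmt.Println(amcsErrorTypes[-32051][0]) // project_required
	fmt.Println(retryable("invalid_id"))   // false
}
```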
### Error data shape

AMCS may include these fields in `error.data`:

- `type` — stable machine-readable error type
- `field` — single argument name such as `name`, `project`, or `thought_id`
- `fields` — multiple argument names for one-of or mutually-exclusive validation
- `value` — offending value when safe to expose
- `detail` — validation detail such as `required`, `invalid`, `one_of_required`, `mutually_exclusive`, or a schema validation message
- `hint` — remediation guidance
- `entity` — entity name for generic not-found errors
Example schema-level error:

```json
{
  "code": -32602,
  "message": "invalid tool arguments",
  "data": {
    "type": "invalid_arguments",
    "field": "name",
    "detail": "validating root: required: missing properties: [\"name\"]",
    "hint": "check the name argument"
  }
}
```

Example tool-level error:

```json
{
  "code": -32051,
  "message": "project is required; pass project explicitly or call set_active_project in this MCP session first",
  "data": {
    "type": "project_required",
    "field": "project",
    "hint": "pass project explicitly or call set_active_project in this MCP session first"
  }
}
```

### Client example

Go client example handling AMCS MCP errors:

```go
// Excerpt; requires encoding/json and errors from the standard library,
// plus the mcp and jsonrpc packages from the MCP Go SDK.
result, err := session.CallTool(ctx, &mcp.CallToolParams{
	Name:      "get_project_context",
	Arguments: map[string]any{},
})
if err != nil {
	var rpcErr *jsonrpc.Error
	if errors.As(err, &rpcErr) {
		var data struct {
			Type  string `json:"type"`
			Field string `json:"field"`
			Hint  string `json:"hint"`
		}
		_ = json.Unmarshal(rpcErr.Data, &data)

		switch {
		case rpcErr.Code == -32051 && data.Type == "project_required":
			// Retry with an explicit project, or call set_active_project first.
		case rpcErr.Code == -32602 && data.Type == "invalid_arguments":
			// Ask the caller to fix the malformed arguments.
		}
	}
}
_ = result
```
## Build Versioning

AMCS embeds build metadata into the binary at build time.

- `version` is generated from the current git tag when building from a tagged commit
- `tag_name` is the repo tag name, for example `v1.0.1`
- `build_date` is the UTC build timestamp in RFC3339 format
- `commit` is the short git commit SHA

For untagged builds, `version` and `tag_name` fall back to `dev`.

Use `get_version_info` to retrieve the runtime build metadata:

```json
{
  "server_name": "amcs",
  "version": "v1.0.1",
  "tag_name": "v1.0.1",
  "commit": "abc1234",
  "build_date": "2026-03-31T14:22:10Z"
}
```
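A client can decode that payload into a small struct and validate the RFC3339 timestamp. A sketch using only the fields shown above; the helper name is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// VersionInfo mirrors the get_version_info payload fields shown above.
type VersionInfo struct {
	ServerName string `json:"server_name"`
	Version    string `json:"version"`
	TagName    string `json:"tag_name"`
	Commit     string `json:"commit"`
	BuildDate  string `json:"build_date"`
}

// parseVersionInfo decodes the payload and checks build_date is RFC3339.
func parseVersionInfo(payload []byte) (VersionInfo, error) {
	var v VersionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		return VersionInfo{}, err
	}
	if _, err := time.Parse(time.RFC3339, v.BuildDate); err != nil {
		return VersionInfo{}, err
	}
	return v, nil
}

func main() {
	payload := []byte(`{"server_name":"amcs","version":"v1.0.1","tag_name":"v1.0.1","commit":"abc1234","build_date":"2026-03-31T14:22:10Z"}`)
	v, err := parseVersionInfo(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Version, v.Commit) // v1.0.1 abc1234
}
```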
## Agent Skills and Guardrails

Skills and guardrails are reusable agent behaviour instructions and constraints that can be attached to projects.

**At the start of every project session, always call `list_project_skills` and `list_project_guardrails` first.** Use the returned skills and guardrails to guide agent behaviour for that project. Only generate or create new skills/guardrails if none are returned. If your MCP client does not preserve sessions across calls, pass `project` explicitly instead of relying on `set_active_project`.

### Skills

A skill is a reusable behavioural instruction or capability prompt — for example, "always respond in structured markdown" or "break complex tasks into numbered steps before starting".

```json
{ "name": "structured-output", "description": "Enforce markdown output format", "content": "Always structure responses using markdown headers and bullet points.", "tags": ["formatting"] }
```

### Guardrails

A guardrail is a constraint or safety rule — for example, "never delete files without explicit confirmation" or "do not expose secrets in output".

```json
{ "name": "no-silent-deletes", "description": "Require confirmation before deletes", "content": "Never delete, drop, or truncate data without first confirming with the user.", "severity": "high", "tags": ["safety"] }
```

Severity levels: `low`, `medium`, `high`, `critical`.

### Project linking

Link existing skills and guardrails to a project so they are automatically available when that project is active:

```json
{ "project": "my-project", "skill_id": "<uuid>" }
{ "project": "my-project", "guardrail_id": "<uuid>" }
```
## Configuration

Config is YAML-driven. Copy `configs/config.example.yaml` and set:

- `database.url` — Postgres connection string
- `auth.mode` — `api_keys` or `oauth_client_credentials`
- `auth.keys` — API keys for MCP access via `x-brain-key` or `Authorization: Bearer <key>` when `auth.mode=api_keys`
- `auth.oauth.clients` — client registry when `auth.mode=oauth_client_credentials`
- `mcp.version` — build-generated; do not set it in config

**OAuth Client Credentials flow** (`auth.mode=oauth_client_credentials`):

1. Obtain a token — `POST /oauth/token` (public, no prior token required):

```
POST /oauth/token
Content-Type: application/x-www-form-urlencoded
Authorization: Basic base64(client_id:client_secret)

grant_type=client_credentials
```

Returns: `{"access_token": "...", "token_type": "bearer", "expires_in": 3600}`
2. Use the token on the MCP endpoint:

```
Authorization: Bearer <access_token>
```

Alternatively, pass `client_id` and `client_secret` as body parameters instead of `Authorization: Basic`. Direct `Authorization: Basic` credential validation on the MCP endpoint is also supported as a fallback (no token required).

Remaining `ai.*` settings:

- `ai.litellm.base_url` and `ai.litellm.api_key` — LiteLLM proxy
- `ai.ollama.base_url` and `ai.ollama.api_key` — Ollama local or remote server

See `llm/plan.md` for an audited high-level status summary of the original implementation plan, and `llm/todo.md` for the audited backfill/fallback follow-up status.
## Backfill

Run `backfill_embeddings` after switching embedding models or importing thoughts without vectors.

```json
{
  "project": "optional-project-name",
  "limit": 100,
  "include_archived": false,
  "older_than_days": 0,
  "dry_run": false
}
```

- `dry_run: true` — report counts without calling the embedding provider
- `limit` — max thoughts per call (default 100)
- Embeddings are generated in parallel (4 workers) and upserted; one failure does not abort the run
## Metadata Reparse

Run `reparse_thought_metadata` to fix stale or inconsistent metadata by re-extracting it from thought content.

```json
{
  "project": "optional-project-name",
  "limit": 100,
  "include_archived": false,
  "older_than_days": 0,
  "dry_run": false
}
```

- `dry_run: true` scans only and does not call metadata extraction or write updates
- If extraction fails for a thought, existing metadata is normalized and written only if it changes
- Metadata reparse runs in parallel (4 workers); one failure does not abort the run
## Failed Metadata Retry

`capture_thought` stores the thought even when metadata extraction times out or fails. Those thoughts are marked with `metadata_status: "pending"` and retried in the background. Use `retry_failed_metadata` to sweep any thoughts still marked `pending` or `failed`.

```json
{
  "project": "optional-project-name",
  "limit": 100,
  "include_archived": false,
  "older_than_days": 1,
  "dry_run": false
}
```

- `dry_run: true` scans only and does not call metadata extraction or write updates
- Successful retries mark the thought metadata as `complete` and clear the last error
- Failed retries update the retry markers so the daily sweep can pick them up again later
## File Storage

Files can optionally be linked to a thought by passing `thought_id`, which also adds an attachment reference to that thought's metadata. AI clients should prefer `save_file` when the goal is to retain the artifact itself, rather than reading or summarizing the file first. Stored files and attachment metadata are not forwarded to the metadata extraction client.

### MCP tools

**Stage a file and get a URI** (`upload_file`) — preferred for large or binary files:

```json
{
  "name": "diagram.png",
  "content_path": "/absolute/path/to/diagram.png"
}
```

Or with base64 for small files (≤10 MB):

```json
{
  "name": "diagram.png",
  "content_base64": "<base64-payload>"
}
```

Returns `{"file": {...}, "uri": "amcs://files/<id>"}`. Pass `thought_id`/`project` to link immediately, or omit them and use the URI in a later `save_file` call.
**Link a staged file to a thought** (`save_file` with `content_uri`):

```json
{
  "name": "meeting-notes.pdf",
  "thought_id": "optional-thought-uuid",
  "content_uri": "amcs://files/<id-from-upload_file>"
}
```

**Save small files inline** (`save_file` with `content_base64`, ≤10 MB):

```json
{
  "name": "meeting-notes.pdf",
  "media_type": "application/pdf",
  "kind": "document",
  "thought_id": "optional-thought-uuid",
  "content_base64": "<base64-payload>"
}
```

`content_base64` and `content_uri` are mutually exclusive in both tools.
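The `<base64-payload>` is the standard base64 encoding of the raw file bytes. A sketch of preparing inline `save_file` arguments in Go; the `saveFileArgs` helper and the file contents are illustrative:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// saveFileArgs builds the argument map for an inline save_file call,
// base64-encoding the raw bytes as the tool expects.
func saveFileArgs(name, mediaType, kind string, raw []byte) map[string]any {
	return map[string]any{
		"name":           name,
		"media_type":     mediaType,
		"kind":           kind,
		"content_base64": base64.StdEncoding.EncodeToString(raw),
	}
}

func main() {
	args := saveFileArgs("notes.txt", "text/plain", "document", []byte("hello"))
	fmt.Println(args["content_base64"]) // aGVsbG8=
}
```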
**Load a file** — returns metadata, base64 content, and an embedded MCP binary resource (`amcs://files/{id}`). The `id` field accepts either the bare stored file UUID or the full `amcs://files/{id}` URI:

```json
{ "id": "stored-file-uuid" }
```
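Since `id` accepts both forms, a client can normalise either input to the bare ID before calling. A minimal sketch; the `bareFileID` helper is illustrative, and the prefix is the documented `amcs://files/` scheme:

```go
package main

import (
	"fmt"
	"strings"
)

// bareFileID strips the amcs://files/ prefix if present, so callers
// can pass either the resource URI or a bare UUID.
func bareFileID(id string) string {
	return strings.TrimPrefix(id, "amcs://files/")
}

func main() {
	fmt.Println(bareFileID("amcs://files/123e4567-e89b-12d3-a456-426614174000"))
	fmt.Println(bareFileID("123e4567-e89b-12d3-a456-426614174000"))
}
```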
**List files** for a thought or project:

```json
{
  "thought_id": "optional-thought-uuid",
  "project": "optional-project-name",
  "kind": "optional-image-document-audio-file",
  "limit": 20
}
```
### MCP resources

Stored files are also exposed as MCP resources at `amcs://files/{id}`. MCP clients can read raw binary content directly via `resources/read` without going through `load_file`.

### HTTP upload and download

Direct HTTP access avoids base64 encoding entirely. The Go server caps `/files` uploads at 100 MB per request. Large uploads are also subject to available memory, Postgres limits, and any reverse proxy or load balancer in front of AMCS.

Multipart upload:

```bash
curl -X POST http://localhost:8080/files \
  -H "x-brain-key: <key>" \
  -F "file=@./diagram.png" \
  -F "project=amcs" \
  -F "kind=image"
```

Raw body upload:

```bash
curl -X POST "http://localhost:8080/files?project=amcs&name=meeting-notes.pdf" \
  -H "x-brain-key: <key>" \
  -H "Content-Type: application/pdf" \
  --data-binary @./meeting-notes.pdf
```

Binary download:

```bash
curl http://localhost:8080/files/<id> \
  -H "x-brain-key: <key>" \
  -o meeting-notes.pdf
```
**Automatic backfill** (optional, config-gated):

```yaml
backfill:
  enabled: true
  run_on_startup: true   # run once on server start
  interval: "15m"        # repeat every 15 minutes
  batch_size: 20
  max_per_run: 100
  include_archived: false
```

```yaml
metadata_retry:
  enabled: true
  run_on_startup: true   # retry failed metadata once on server start
  interval: "24h"        # retry pending/failed metadata daily
  max_per_run: 100
  include_archived: false
```

**Search fallback**: when no embeddings exist for the active model in scope, `search_thoughts`, `recall_context`, `get_project_context`, `summarize_thoughts`, and `related_thoughts` automatically fall back to Postgres full-text search so results are never silently empty.
## Client Setup

### Claude Code

```bash
# API key auth
claude mcp add --transport http amcs http://localhost:8080/mcp --header "x-brain-key: <key>"

# Bearer token auth
claude mcp add --transport http amcs http://localhost:8080/mcp --header "Authorization: Bearer <token>"
```

### OpenAI Codex

Add to `~/.codex/config.toml`:

```toml
[mcp_servers.amcs]
url = "http://localhost:8080/mcp"

[mcp_servers.amcs.headers]
x-brain-key = "<key>"
```

### OpenCode

```bash
# API key auth
opencode mcp add --name amcs --type remote --url http://localhost:8080/mcp --header "x-brain-key=<key>"

# Bearer token auth
opencode mcp add --name amcs --type remote --url http://localhost:8080/mcp --header "Authorization=Bearer <token>"
```

Or add directly to `opencode.json` / `~/.config/opencode/config.json`:

```json
{
  "mcp": {
    "amcs": {
      "type": "remote",
      "url": "http://localhost:8080/mcp",
      "headers": {
        "x-brain-key": "<key>"
      }
    }
  }
}
```
## Apache Proxy

If AMCS is deployed behind Apache HTTP Server, configure the proxy explicitly for larger uploads and longer-running requests.

Example virtual host settings for the current AMCS defaults:

```apache
<VirtualHost *:443>
    ServerName amcs.example.com

    ProxyPreserveHost On
    LimitRequestBody 104857600
    RequestReadTimeout handshake=0 header=20-40,MinRate=500 body=600,MinRate=500
    Timeout 600
    ProxyTimeout 600

    ProxyPass /mcp http://127.0.0.1:8080/mcp connectiontimeout=30 timeout=600
    ProxyPassReverse /mcp http://127.0.0.1:8080/mcp

    ProxyPass /files http://127.0.0.1:8080/files connectiontimeout=30 timeout=600
    ProxyPassReverse /files http://127.0.0.1:8080/files
</VirtualHost>
```

Recommended Apache settings:

- `LimitRequestBody 104857600` matches AMCS's 100 MB `/files` upload cap.
- `RequestReadTimeout ... body=600` gives clients up to 10 minutes to send larger request bodies.
- `ProxyTimeout 600` and `ProxyPass ... timeout=600` give Apache enough time to wait for the Go backend.
- If another proxy or load balancer sits in front of Apache, align its size and timeout settings too.
## CLI

`amcs-cli` is a pre-built CLI client for the AMCS MCP server. Download it from https://git.warky.dev/wdevs/amcs/releases

The primary purpose is to give agents and MCP clients a ready-made bridge to the AMCS server so they do not need to implement their own HTTP MCP client. Configure it once and any stdio-based MCP client can use AMCS immediately.

### Commands

| Command | Purpose |
|---|---|
| `amcs-cli tools` | List all tools available on the remote server |
| `amcs-cli call <tool>` | Call a tool by name with `--arg key=value` flags |
| `amcs-cli stdio` | Start a stdio MCP bridge backed by the remote server |

`stdio` is the main integration point. It connects to the remote HTTP MCP server, discovers all its tools, and re-exposes them over stdio. Register it as a stdio MCP server in your agent config and it proxies every tool call through to AMCS.
### Configuration

Config file: `~/.config/amcs/config.yaml`

```yaml
server: https://your-amcs-server
token: your-bearer-token
```

Env vars override the config file: `AMCS_URL`, `AMCS_TOKEN`. Flags `--server` and `--token` override env vars.
### stdio MCP client setup

#### Claude Code

```bash
claude mcp add --transport stdio amcs amcs-cli stdio
```

With inline credentials (no config file):

```bash
claude mcp add --transport stdio amcs amcs-cli stdio \
  --env AMCS_URL=https://your-amcs-server \
  --env AMCS_TOKEN=your-bearer-token
```

#### Output format

`call` outputs JSON by default. Pass `--output yaml` for YAML.
## Development

Run the SQL migrations against a local database with:

`DATABASE_URL=postgres://... make migrate`

### Backend + embedded UI build

The web UI lives in the top-level `ui/` module and is embedded into the Go binary at build time with `go:embed`.

**Use `pnpm` for all UI work in this repo.**

- `make build` — runs the real UI build first, then compiles the Go server
- `make test` — runs `svelte-check` for the frontend and `go test ./...` for the backend
- `make ui-install` — installs frontend dependencies with `pnpm install --frozen-lockfile`
- `make ui-build` — builds only the frontend bundle
- `make ui-dev` — starts the Vite dev server with hot reload on `http://localhost:5173`
- `make ui-check` — runs the frontend type and Svelte checks

### Local UI workflow

For the normal production-style local flow:

1. Start the backend: `./scripts/run-local.sh configs/dev.yaml`
2. Open `http://localhost:8080`

For frontend iteration with hot reload and no Go rebuilds:

1. Start the backend once: `go run ./cmd/amcs-server --config configs/dev.yaml`
2. In another shell start the UI dev server: `make ui-dev`
3. Open `http://localhost:5173`

The Vite dev server proxies backend routes such as `/api/status`, `/llm`, `/healthz`, `/readyz`, `/files`, `/mcp`, and the OAuth endpoints back to the Go server on `http://127.0.0.1:8080` by default. Override that target with `AMCS_UI_BACKEND` if needed.

The root page (`/`) is the Svelte frontend. It preserves the existing landing-page content and status information by fetching data from `GET /api/status`.

LLM integration instructions are still served at `/llm`.
## Containers

The repo includes a `Dockerfile` and Compose files for running the app with Postgres + pgvector.

1. Set a real LiteLLM key in your shell: `export AMCS_LITELLM_API_KEY=your-key`
2. Start the stack with your runtime:
   - Docker: `docker compose -f docker-compose.yml -f docker-compose.docker.yml up --build`
   - Podman: `podman compose -f docker-compose.yml up --build`
3. Call the service on `http://localhost:8080`

Notes:

- The app uses `configs/docker.yaml` inside the container.
- The local `./configs` directory is mounted into `/app/configs`, so config edits apply without rebuilding the image.
- `AMCS_LITELLM_BASE_URL` overrides the LiteLLM endpoint, so you can retarget it without editing YAML.
- `AMCS_OLLAMA_BASE_URL` overrides the Ollama endpoint for local or remote servers.
- The Compose stack uses a default bridge network named `amcs`.
- The base Compose file uses `host.containers.internal`, which is Podman-friendly.
- The Docker override file adds `host-gateway` aliases so Docker can resolve the same host endpoint.
- Database migrations `001` through `005` run automatically when the Postgres volume is created for the first time.
- `migrations/006_rls_and_grants.sql` is intentionally skipped during container bootstrap because it contains deployment-specific grants for a role named `amcs_user`.
## Ollama

Set `ai.provider: "ollama"` to use a local or self-hosted Ollama server through its OpenAI-compatible API.

Example:

```yaml
ai:
  provider: "ollama"
  embeddings:
    model: "nomic-embed-text"
    dimensions: 768
  metadata:
    model: "llama3.2"
    temperature: 0.1
  ollama:
    base_url: "http://localhost:11434/v1"
    api_key: "ollama"
    request_headers: {}
```

Notes:

- For remote Ollama servers, point `ai.ollama.base_url` at the remote `/v1` endpoint.
- The client always sends Bearer auth; Ollama ignores it locally, so `api_key: "ollama"` is a safe default.
- `ai.embeddings.dimensions` must match the embedding model you actually use, or startup will fail the database vector-dimension check.
|||||||
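The dimension check described in the last note can be illustrated with a tiny standalone sketch. The helper below is hypothetical (not the server's actual validation code); it only shows why the configured `ai.embeddings.dimensions` and the model's real output length must agree before vectors can land in a fixed-width pgvector column:

```go
package main

import "fmt"

// checkDimensions mimics the startup validation idea: the configured
// ai.embeddings.dimensions must equal the length of the vectors the
// model actually returns, or the pgvector column and the provider
// would silently disagree.
func checkDimensions(configured int, vec []float32) error {
	if len(vec) != configured {
		return fmt.Errorf("embedding dimension mismatch: config says %d, model returned %d", configured, len(vec))
	}
	return nil
}

func main() {
	vec := make([]float32, 768) // nomic-embed-text returns 768-dim vectors
	fmt.Println(checkDimensions(768, vec))         // <nil>
	fmt.Println(checkDimensions(1536, vec) != nil) // true
}
```

Swapping the embedding model after data exists requires re-embedding, since stored vectors keep their original width.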
```diff
@@ -162,10 +162,11 @@ func routes(logger *slog.Logger, cfg *config.Config, info buildinfo.Info, db *st
 	oauthEnabled := oauthRegistry != nil && tokenStore != nil
 	authMiddleware := auth.Middleware(cfg.Auth, keyring, oauthRegistry, tokenStore, accessTracker, logger)
 	filesTool := tools.NewFilesTool(db, activeProjects)
-	metadataRetryer := tools.NewMetadataRetryer(context.Background(), db, provider, cfg.Capture, cfg.AI.Metadata.Timeout, activeProjects, logger)
+	enrichmentRetryer := tools.NewEnrichmentRetryer(context.Background(), db, provider, cfg.Capture, cfg.AI.Metadata.Timeout, activeProjects, logger)
+	backfillTool := tools.NewBackfillTool(db, provider, activeProjects, logger)
 
 	toolSet := mcpserver.ToolSet{
-		Capture: tools.NewCaptureTool(db, provider, cfg.Capture, cfg.AI.Metadata.Timeout, activeProjects, metadataRetryer, logger),
+		Capture: tools.NewCaptureTool(db, provider, cfg.Capture, cfg.AI.Metadata.Timeout, activeProjects, enrichmentRetryer, backfillTool, logger),
 		Search: tools.NewSearchTool(db, provider, cfg.Search, activeProjects),
 		List: tools.NewListTool(db, cfg.Search, activeProjects),
 		Stats: tools.NewStatsTool(db),
@@ -180,9 +181,9 @@ func routes(logger *slog.Logger, cfg *config.Config, info buildinfo.Info, db *st
 		Summarize: tools.NewSummarizeTool(db, provider, cfg.Search, activeProjects),
 		Links: tools.NewLinksTool(db, provider, cfg.Search),
 		Files: filesTool,
-		Backfill: tools.NewBackfillTool(db, provider, activeProjects, logger),
+		Backfill: backfillTool,
 		Reparse: tools.NewReparseMetadataTool(db, provider, cfg.Capture, activeProjects, logger),
-		RetryMetadata: tools.NewRetryMetadataTool(metadataRetryer),
+		RetryMetadata: tools.NewRetryEnrichmentTool(enrichmentRetryer),
 		Maintenance: tools.NewMaintenanceTool(db),
 		Skills: tools.NewSkillsTool(db, activeProjects),
 		ChatHistory: tools.NewChatHistoryTool(db, activeProjects),
```

```diff
@@ -58,6 +58,12 @@ func (db *DB) InsertThought(ctx context.Context, thought thoughttypes.Thought, e
 		return thoughttypes.Thought{}, fmt.Errorf("commit thought insert: %w", err)
 	}
 
+	if len(thought.Embedding) > 0 {
+		created.EmbeddingStatus = "done"
+	} else {
+		created.EmbeddingStatus = "pending"
+	}
+
 	return created, nil
 }
```

```diff
@@ -51,6 +51,30 @@ func NewBackfillTool(db *store.DB, provider ai.Provider, sessions *session.Activ
 	return &BackfillTool{store: db, provider: provider, sessions: sessions, logger: logger}
 }
 
+// QueueThought queues a single thought for background embedding generation.
+// It is used by capture when the embedding provider is temporarily unavailable.
+func (t *BackfillTool) QueueThought(ctx context.Context, id uuid.UUID, content string) {
+	go func() {
+		vec, err := t.provider.Embed(ctx, content)
+		if err != nil {
+			t.logger.Warn("background embedding retry failed",
+				slog.String("thought_id", id.String()),
+				slog.String("error", err.Error()),
+			)
+			return
+		}
+		model := t.provider.EmbeddingModel()
+		if err := t.store.UpsertEmbedding(ctx, id, model, vec); err != nil {
+			t.logger.Warn("background embedding upsert failed",
+				slog.String("thought_id", id.String()),
+				slog.String("error", err.Error()),
+			)
+			return
+		}
+		t.logger.Info("background embedding retry succeeded", slog.String("thought_id", id.String()))
+	}()
+}
+
 func (t *BackfillTool) Handle(ctx context.Context, req *mcp.CallToolRequest, in BackfillInput) (*mcp.CallToolResult, BackfillOutput, error) {
 	limit := in.Limit
 	if limit <= 0 {
```

```diff
@@ -6,8 +6,8 @@ import (
 	"strings"
 	"time"
 
+	"github.com/google/uuid"
 	"github.com/modelcontextprotocol/go-sdk/mcp"
-	"golang.org/x/sync/errgroup"
 
 	"git.warky.dev/wdevs/amcs/internal/ai"
 	"git.warky.dev/wdevs/amcs/internal/config"
@@ -17,6 +17,11 @@ import (
 	thoughttypes "git.warky.dev/wdevs/amcs/internal/types"
 )
 
+// EmbeddingQueuer queues a thought for background embedding generation.
+type EmbeddingQueuer interface {
+	QueueThought(ctx context.Context, id uuid.UUID, content string)
+}
+
 type CaptureTool struct {
 	store    *store.DB
 	provider ai.Provider
@@ -24,6 +29,7 @@ type CaptureTool struct {
 	sessions        *session.ActiveProjects
 	metadataTimeout time.Duration
 	retryer         *MetadataRetryer
+	embedRetryer    EmbeddingQueuer
 	log             *slog.Logger
 }
@@ -36,8 +42,8 @@ type CaptureOutput struct {
 	Thought thoughttypes.Thought `json:"thought"`
 }
 
-func NewCaptureTool(db *store.DB, provider ai.Provider, capture config.CaptureConfig, metadataTimeout time.Duration, sessions *session.ActiveProjects, retryer *MetadataRetryer, log *slog.Logger) *CaptureTool {
-	return &CaptureTool{store: db, provider: provider, capture: capture, sessions: sessions, metadataTimeout: metadataTimeout, retryer: retryer, log: log}
+func NewCaptureTool(db *store.DB, provider ai.Provider, capture config.CaptureConfig, metadataTimeout time.Duration, sessions *session.ActiveProjects, retryer *MetadataRetryer, embedRetryer EmbeddingQueuer, log *slog.Logger) *CaptureTool {
+	return &CaptureTool{store: db, provider: provider, capture: capture, sessions: sessions, metadataTimeout: metadataTimeout, retryer: retryer, embedRetryer: embedRetryer, log: log}
 }
 
 func (t *CaptureTool) Handle(ctx context.Context, req *mcp.CallToolRequest, in CaptureInput) (*mcp.CallToolResult, CaptureOutput, error) {
```

```diff
@@ -51,46 +57,10 @@ func (t *CaptureTool) Handle(ctx context.Context, req *mcp.CallToolRequest, in C
 		return nil, CaptureOutput{}, err
 	}
 
-	var embedding []float32
 	rawMetadata := metadata.Fallback(t.capture)
-	metadataNeedsRetry := false
-
-	group, groupCtx := errgroup.WithContext(ctx)
-	group.Go(func() error {
-		vector, err := t.provider.Embed(groupCtx, content)
-		if err != nil {
-			return err
-		}
-		embedding = vector
-		return nil
-	})
-	group.Go(func() error {
-		metaCtx := groupCtx
-		attemptedAt := time.Now().UTC()
-		if t.metadataTimeout > 0 {
-			var cancel context.CancelFunc
-			metaCtx, cancel = context.WithTimeout(groupCtx, t.metadataTimeout)
-			defer cancel()
-		}
-		extracted, err := t.provider.ExtractMetadata(metaCtx, content)
-		if err != nil {
-			t.log.Warn("metadata extraction failed, using fallback", slog.String("provider", t.provider.Name()), slog.String("error", err.Error()))
-			rawMetadata = metadata.MarkMetadataPending(rawMetadata, t.capture, attemptedAt, err)
-			metadataNeedsRetry = true
-			return nil
-		}
-		rawMetadata = metadata.MarkMetadataComplete(extracted, t.capture, attemptedAt)
-		return nil
-	})
-
-	if err := group.Wait(); err != nil {
-		return nil, CaptureOutput{}, err
-	}
-
 	thought := thoughttypes.Thought{
 		Content: content,
-		Embedding: embedding,
-		Metadata: metadata.Normalize(metadata.SanitizeExtracted(rawMetadata), t.capture),
+		Metadata: rawMetadata,
 	}
 	if project != nil {
 		thought.ProjectID = &project.ID
@@ -103,9 +73,57 @@ func (t *CaptureTool) Handle(ctx context.Context, req *mcp.CallToolRequest, in C
 	if project != nil {
 		_ = t.store.TouchProject(ctx, project.ID)
 	}
-	if metadataNeedsRetry && t.retryer != nil {
-		t.retryer.QueueThought(created.ID)
+	if t.retryer != nil || t.embedRetryer != nil {
+		t.launchEnrichment(created.ID, content)
 	}
 
 	return nil, CaptureOutput{Thought: created}, nil
 }
+
+func (t *CaptureTool) launchEnrichment(id uuid.UUID, content string) {
+	go func() {
+		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
+		defer cancel()
+
+		if t.retryer != nil {
+			attemptedAt := time.Now().UTC()
+			rawMetadata := metadata.Fallback(t.capture)
+			extracted, err := t.provider.ExtractMetadata(ctx, content)
+			if err != nil {
+				failed := metadata.MarkMetadataFailed(rawMetadata, t.capture, attemptedAt, err)
+				if _, updateErr := t.store.UpdateThoughtMetadata(ctx, id, failed); updateErr != nil {
+					t.log.Warn("deferred metadata failure could not be persisted",
+						slog.String("thought_id", id.String()),
+						slog.String("error", updateErr.Error()),
+					)
+				}
+				t.log.Warn("deferred metadata extraction failed",
+					slog.String("thought_id", id.String()),
+					slog.String("provider", t.provider.Name()),
+					slog.String("error", err.Error()),
+				)
+				t.retryer.QueueThought(id)
+			} else {
+				completed := metadata.MarkMetadataComplete(extracted, t.capture, attemptedAt)
+				if _, updateErr := t.store.UpdateThoughtMetadata(ctx, id, completed); updateErr != nil {
+					t.log.Warn("deferred metadata completion could not be persisted",
+						slog.String("thought_id", id.String()),
+						slog.String("error", updateErr.Error()),
+					)
+				}
+			}
+		}
+
+		if t.embedRetryer != nil {
+			if _, err := t.provider.Embed(ctx, content); err != nil {
+				t.log.Warn("deferred embedding failed",
+					slog.String("thought_id", id.String()),
+					slog.String("provider", t.provider.Name()),
+					slog.String("error", err.Error()),
+				)
+			}
+			t.embedRetryer.QueueThought(ctx, id, content)
+		}
+	}()
+}
```
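The shape of `launchEnrichment` — return to the caller immediately, then enrich in a goroutine with its own deadline — can be sketched in isolation. All names below are hypothetical; the real code updates Postgres and queues retries rather than sending on a channel:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// enrich stands in for metadata extraction and embedding; it reports its
// result on a channel so the example can observe completion.
func enrich(ctx context.Context, content string, done chan<- string) {
	select {
	case <-ctx.Done():
		done <- "cancelled"
	default:
		done <- "enriched: " + content
	}
}

// capture responds immediately and defers enrichment to a goroutine with
// an independent timeout, mirroring the diff's
// context.WithTimeout(context.Background(), 2*time.Minute).
func capture(content string) <-chan string {
	done := make(chan string, 1)
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		enrich(ctx, content, done)
	}()
	return done
}

func main() {
	done := capture("remember this")
	fmt.Println(<-done) // enriched: remember this
}
```

The key point is that the background context is detached from the request context: the MCP call can finish (and its context be cancelled) without killing the enrichment work.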

New file `internal/tools/enrichment_retry.go` (209 lines):

```go
package tools

import (
	"context"
	"log/slog"
	"sync"
	"time"

	"github.com/google/uuid"
	"github.com/modelcontextprotocol/go-sdk/mcp"
	"golang.org/x/sync/semaphore"

	"git.warky.dev/wdevs/amcs/internal/ai"
	"git.warky.dev/wdevs/amcs/internal/config"
	"git.warky.dev/wdevs/amcs/internal/metadata"
	"git.warky.dev/wdevs/amcs/internal/session"
	"git.warky.dev/wdevs/amcs/internal/store"
	thoughttypes "git.warky.dev/wdevs/amcs/internal/types"
)

const enrichmentRetryConcurrency = 4
const enrichmentRetryMaxAttempts = 5

var enrichmentRetryBackoff = []time.Duration{
	30 * time.Second,
	2 * time.Minute,
	10 * time.Minute,
	30 * time.Minute,
	2 * time.Hour,
}

type EnrichmentRetryer struct {
	backgroundCtx   context.Context
	store           *store.DB
	provider        ai.Provider
	capture         config.CaptureConfig
	sessions        *session.ActiveProjects
	metadataTimeout time.Duration
	logger          *slog.Logger
}

type RetryEnrichmentTool struct {
	retryer *EnrichmentRetryer
}

type RetryEnrichmentInput struct {
	Project         string `json:"project,omitempty" jsonschema:"optional project name or id to scope the retry"`
	Limit           int    `json:"limit,omitempty" jsonschema:"maximum number of thoughts to process in one call; defaults to 100"`
	IncludeArchived bool   `json:"include_archived,omitempty" jsonschema:"whether to include archived thoughts; defaults to false"`
	OlderThanDays   int    `json:"older_than_days,omitempty" jsonschema:"only retry thoughts whose last metadata attempt was at least N days ago; 0 means no restriction"`
	DryRun          bool   `json:"dry_run,omitempty" jsonschema:"report counts without retrying metadata extraction"`
}

type RetryEnrichmentFailure struct {
	ID    string `json:"id"`
	Error string `json:"error"`
}

type RetryEnrichmentOutput struct {
	Scanned  int                      `json:"scanned"`
	Retried  int                      `json:"retried"`
	Updated  int                      `json:"updated"`
	Skipped  int                      `json:"skipped"`
	Failed   int                      `json:"failed"`
	DryRun   bool                     `json:"dry_run"`
	Failures []RetryEnrichmentFailure `json:"failures,omitempty"`
}

func NewEnrichmentRetryer(backgroundCtx context.Context, db *store.DB, provider ai.Provider, capture config.CaptureConfig, metadataTimeout time.Duration, sessions *session.ActiveProjects, logger *slog.Logger) *EnrichmentRetryer {
	if backgroundCtx == nil {
		backgroundCtx = context.Background()
	}
	return &EnrichmentRetryer{
		backgroundCtx:   backgroundCtx,
		store:           db,
		provider:        provider,
		capture:         capture,
		sessions:        sessions,
		metadataTimeout: metadataTimeout,
		logger:          logger,
	}
}

func NewRetryEnrichmentTool(retryer *EnrichmentRetryer) *RetryEnrichmentTool {
	return &RetryEnrichmentTool{retryer: retryer}
}

func (t *RetryEnrichmentTool) Handle(ctx context.Context, req *mcp.CallToolRequest, in RetryEnrichmentInput) (*mcp.CallToolResult, RetryEnrichmentOutput, error) {
	return t.retryer.Handle(ctx, req, in)
}

func (r *EnrichmentRetryer) QueueThought(id uuid.UUID) {
	go func() {
		if _, err := r.retryOne(r.backgroundCtx, id); err != nil {
			r.logger.Warn("background metadata retry failed",
				slog.String("thought_id", id.String()),
				slog.String("error", err.Error()),
			)
		}
	}()
}

func (r *EnrichmentRetryer) Handle(ctx context.Context, req *mcp.CallToolRequest, in RetryEnrichmentInput) (*mcp.CallToolResult, RetryEnrichmentOutput, error) {
	limit := in.Limit
	if limit <= 0 {
		limit = 100
	}

	project, err := resolveProject(ctx, r.store, r.sessions, req, in.Project, false)
	if err != nil {
		return nil, RetryEnrichmentOutput{}, err
	}

	var projectID *uuid.UUID
	if project != nil {
		projectID = &project.ID
	}

	thoughts, err := r.store.ListThoughtsPendingMetadataRetry(ctx, limit, projectID, in.IncludeArchived, in.OlderThanDays)
	if err != nil {
		return nil, RetryEnrichmentOutput{}, err
	}

	out := RetryEnrichmentOutput{Scanned: len(thoughts), DryRun: in.DryRun}
	if in.DryRun || len(thoughts) == 0 {
		return nil, out, nil
	}

	sem := semaphore.NewWeighted(enrichmentRetryConcurrency)
	var mu sync.Mutex
	var wg sync.WaitGroup

	for _, thought := range thoughts {
		if ctx.Err() != nil {
			break
		}
		if err := sem.Acquire(ctx, 1); err != nil {
			break
		}

		wg.Add(1)
		go func(thought thoughttypes.Thought) {
			defer wg.Done()
			defer sem.Release(1)

			mu.Lock()
			out.Retried++
			mu.Unlock()

			updated, err := r.retryOne(ctx, thought.ID)
			if err != nil {
				mu.Lock()
				out.Failures = append(out.Failures, RetryEnrichmentFailure{ID: thought.ID.String(), Error: err.Error()})
				mu.Unlock()
				return
			}
			if updated {
				mu.Lock()
				out.Updated++
				mu.Unlock()
				return
			}

			mu.Lock()
			out.Skipped++
			mu.Unlock()
		}(thought)
	}

	wg.Wait()
	out.Failed = len(out.Failures)

	return nil, out, nil
}

func (r *EnrichmentRetryer) retryOne(ctx context.Context, id uuid.UUID) (bool, error) {
	thought, err := r.store.GetThought(ctx, id)
	if err != nil {
		return false, err
	}
	if thought.Metadata.MetadataStatus == metadata.MetadataStatusComplete {
		return false, nil
	}

	attemptCtx := ctx
	if r.metadataTimeout > 0 {
		var cancel context.CancelFunc
		attemptCtx, cancel = context.WithTimeout(ctx, r.metadataTimeout)
		defer cancel()
	}

	attemptedAt := time.Now().UTC()
	extracted, extractErr := r.provider.ExtractMetadata(attemptCtx, thought.Content)
	if extractErr != nil {
		failedMetadata := metadata.MarkMetadataFailed(thought.Metadata, r.capture, attemptedAt, extractErr)
		if _, updateErr := r.store.UpdateThoughtMetadata(ctx, thought.ID, failedMetadata); updateErr != nil {
			return false, updateErr
		}
		return false, extractErr
	}

	completedMetadata := metadata.MarkMetadataComplete(metadata.SanitizeExtracted(extracted), r.capture, attemptedAt)
	completedMetadata.Attachments = thought.Metadata.Attachments
	if _, updateErr := r.store.UpdateThoughtMetadata(ctx, thought.ID, completedMetadata); updateErr != nil {
		return false, updateErr
	}

	return true, nil
}
```

```diff
@@ -28,12 +28,42 @@ type MetadataRetryer struct {
 	sessions        *session.ActiveProjects
 	metadataTimeout time.Duration
 	logger          *slog.Logger
+	lock            *RetryLocker
 }
 
 type RetryMetadataTool struct {
 	retryer *MetadataRetryer
 }
 
+type RetryLocker struct {
+	mu    sync.Mutex
+	locks map[uuid.UUID]time.Time
+}
+
+func NewRetryLocker() *RetryLocker {
+	return &RetryLocker{locks: map[uuid.UUID]time.Time{}}
+}
+
+func (l *RetryLocker) Acquire(id uuid.UUID, ttl time.Duration) bool {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+	if l.locks == nil {
+		l.locks = map[uuid.UUID]time.Time{}
+	}
+	now := time.Now()
+	if exp, ok := l.locks[id]; ok && exp.After(now) {
+		return false
+	}
+	l.locks[id] = now.Add(ttl)
+	return true
+}
+
+func (l *RetryLocker) Release(id uuid.UUID) {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+	delete(l.locks, id)
+}
+
 type RetryMetadataInput struct {
 	Project string `json:"project,omitempty" jsonschema:"optional project name or id to scope the retry"`
 	Limit   int    `json:"limit,omitempty" jsonschema:"maximum number of thoughts to process in one call; defaults to 100"`
@@ -69,6 +99,7 @@ func NewMetadataRetryer(backgroundCtx context.Context, db *store.DB, provider ai
 		sessions:        sessions,
 		metadataTimeout: metadataTimeout,
 		logger:          logger,
+		lock:            NewRetryLocker(),
 	}
 }
```

```diff
@@ -82,6 +113,10 @@ func (t *RetryMetadataTool) Handle(ctx context.Context, req *mcp.CallToolRequest
 
 func (r *MetadataRetryer) QueueThought(id uuid.UUID) {
 	go func() {
+		if !r.lock.Acquire(id, 15*time.Minute) {
+			return
+		}
+		defer r.lock.Release(id)
 		if _, err := r.retryOne(r.backgroundCtx, id); err != nil {
 			r.logger.Warn("background metadata retry failed", slog.String("thought_id", id.String()), slog.String("error", err.Error()))
 		}
@@ -138,7 +173,14 @@ func (r *MetadataRetryer) Handle(ctx context.Context, req *mcp.CallToolRequest,
 			out.Retried++
 			mu.Unlock()
 
+			if !r.lock.Acquire(thought.ID, 15*time.Minute) {
+				mu.Lock()
+				out.Skipped++
+				mu.Unlock()
+				return
+			}
 			updated, err := r.retryOne(ctx, thought.ID)
+			r.lock.Release(thought.ID)
 			if err != nil {
 				mu.Lock()
 				out.Failures = append(out.Failures, RetryMetadataFailure{ID: thought.ID.String(), Error: err.Error()})
```

```diff
@@ -55,6 +55,7 @@ type Thought struct {
 	ID              uuid.UUID       `json:"id"`
 	Content         string          `json:"content"`
 	Embedding       []float32       `json:"embedding,omitempty"`
+	EmbeddingStatus string          `json:"embedding_status,omitempty"`
 	Metadata        ThoughtMetadata `json:"metadata"`
 	ProjectID       *uuid.UUID      `json:"project_id,omitempty"`
 	ArchivedAt      *time.Time      `json:"archived_at,omitempty"`
```

Deleted file (77 lines):

````diff
@@ -1,77 +0,0 @@
-# Structured Learnings Schema (v1)
-
-## Data Model
-
-| Field | Type | Description |
-|-------|------|-------------|
-| **ID** | string | Stable learning identifier |
-| **Category** | enum | `correction`, `insight`, `knowledge_gap`, `best_practice` |
-| **Area** | enum | `frontend`, `backend`, `infra`, `tests`, `docs`, `config`, `other` |
-| **Status** | enum | `pending`, `in_progress`, `resolved`, `wont_f` |
-| **Priority** | string | e.g., `low`, `medium`, `high` |
-| **Summary** | string | Brief description |
-| **Details** | string | Full description / context |
-| **ProjectID** | string (optional) | Reference to a project |
-| **ThoughtID** | string (optional) | Reference to a thought |
-| **SkillID** | string (optional) | Reference to a skill |
-| **CreatedAt** | timestamp | Creation timestamp |
-| **UpdatedAt** | timestamp | Last update timestamp |
-
-## Suggested SQL Definition
-
-```sql
-CREATE TABLE learnings (
-    id UUID PRIMARY KEY,
-    category TEXT NOT NULL,
-    area TEXT NOT NULL,
-    status TEXT NOT NULL,
-    priority TEXT,
-    summary TEXT,
-    details TEXT,
-    project_id UUID,
-    thought_id UUID,
-    skill_id UUID,
-    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
-    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
-);
-```
-
-## Tool Surface (MCP)
-
-- `create_learning` – insert a new learning record
-- `list_learnings` – query with optional filters (category, area, status, project, etc.)
-- `get_learning` – retrieve a single learning by ID
-- `update_learning` – modify fields (e.g., status, priority) and/or links
-
-## Enums (Go)
-
-```go
-type LearningCategory string
-
-const (
-    LearningCategoryCorrection   LearningCategory = "correction"
-    LearningCategoryInsight      LearningCategory = "insight"
-    LearningCategoryKnowledgeGap LearningCategory = "knowledge_gap"
-    LearningCategoryBestPractice LearningCategory = "best_practice"
-)
-
-type LearningArea string
-
-const (
-    LearningAreaFrontend LearningArea = "frontend"
-    LearningAreaBackend  LearningArea = "backend"
-    LearningAreaInfra    LearningArea = "infra"
-    LearningAreaTests    LearningArea = "tests"
-    LearningAreaDocs     LearningArea = "docs"
-    LearningAreaConfig   LearningArea = "config"
-    LearningAreaOther    LearningArea = "other"
-)
-
-type LearningStatus string
-
-const (
-    LearningStatusPending    LearningStatus = "pending"
-    LearningStatusInProgress LearningStatus = "in_progress"
-    LearningStatusResolved   LearningStatus = "resolved"
-    LearningStatusWontF      LearningStatus = "wont_f"
-)
-```
-
-Let me know if this alignment works or if you’d like any adjustments before I proceed with the implementation.
````

Deleted file (14 lines):

```diff
@@ -1,14 +0,0 @@
-{
-  "id": "123e4567-e89b-12d3-a456-426614174000",
-  "category": "insight",
-  "area": "frontend",
-  "status": "pending",
-  "priority": "high",
-  "summary": "Understanding React hooks lifecycle",
-  "details": "React hooks provide a way to use state and other React features without writing a class. This learning note captures key insights about hooks lifecycle and common pitfalls.",
-  "project_id": "proj-001",
-  "thought_id": "th-001",
-  "skill_id": "skill-001",
-  "created_at": "2026-04-05T19:30:00Z",
-  "updated_at": "2026-04-05T19:30:00Z"
-}
```

Deleted file (7 lines):

```diff
@@ -1,7 +0,0 @@
-# Structured Learnings
-
-This directory is intended to hold structured learning modules and resources.
-
----
-
-*Add your learning materials here.*
```