
Avalon Memory Crystal Server (amcs)


A Go MCP server for capturing and retrieving thoughts, memory, and project context. Exposes tools over Streamable HTTP, backed by Postgres with pgvector for semantic search.

What it does

  • Capture thoughts with automatic embedding and metadata extraction
  • Search thoughts semantically via vector similarity
  • Organise thoughts into projects and retrieve full project context
  • Summarise and recall memory across topics and time windows
  • Link related thoughts and traverse relationships

Stack

  • Go — MCP server over Streamable HTTP
  • Postgres + pgvector — storage and vector search
  • LiteLLM — primary AI provider (embeddings + metadata extraction)
  • OpenRouter — default upstream behind LiteLLM

Tools

| Tool | Purpose |
| --- | --- |
| capture_thought | Store a thought with embedding and metadata |
| search_thoughts | Semantic similarity search |
| list_thoughts | Filter thoughts by type, topic, person, date |
| thought_stats | Counts and top topics/people |
| get_thought | Retrieve a thought by ID |
| update_thought | Patch content or metadata |
| delete_thought | Hard delete |
| archive_thought | Soft delete |
| create_project | Register a named project |
| list_projects | List projects with thought counts |
| get_project_context | Recent + semantic context for a project |
| set_active_project | Set session project scope |
| get_active_project | Get current session project |
| summarize_thoughts | LLM prose summary over a filtered set |
| recall_context | Semantic + recency context block for injection |
| link_thoughts | Create a typed relationship between thoughts |
| related_thoughts | Explicit links + semantic neighbours |
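
Under the hood, the semantic tools lean on pgvector's distance operators. A sketch of the kind of query search_thoughts could issue — the table and column names here are assumptions, not the server's actual schema:

```sql
-- Rank stored embeddings by cosine distance (pgvector's <=> operator)
-- against a query embedding passed as $1.
SELECT id, content, embedding <=> $1 AS distance
FROM thoughts
ORDER BY embedding <=> $1
LIMIT $2;
```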

Configuration

Config is YAML-driven. Copy configs/config.example.yaml and set:

  • database.url — Postgres connection string
  • auth.keys — API keys for MCP endpoint access
  • ai.litellm.base_url and ai.litellm.api_key — LiteLLM proxy
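
A minimal config sketch along those lines — all values are placeholders, and any nesting beyond the three keys above should be checked against configs/config.example.yaml:

```yaml
database:
  url: postgres://amcs:secret@localhost:5432/amcs?sslmode=disable

auth:
  keys:
    - replace-with-a-long-random-key

ai:
  litellm:
    base_url: http://localhost:4000
    api_key: sk-replace-me
```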

See llm/plan.md for full architecture and implementation plan.

Development

Run the SQL migrations against a local database with:

DATABASE_URL=postgres://... make migrate

Containers

The repo now includes a Dockerfile and Compose files for running the app with Postgres + pgvector.

  1. Set a real LiteLLM key in your shell: export OB1_LITELLM_API_KEY=your-key
  2. Start the stack with your container runtime:
     • Docker: docker compose -f docker-compose.yml -f docker-compose.docker.yml up --build
     • Podman: podman compose -f docker-compose.yml up --build
  3. Call the service on http://localhost:8080

Notes:

  • The app uses configs/docker.yaml inside the container.
  • OB1_LITELLM_BASE_URL overrides the LiteLLM endpoint, so you can retarget it without editing YAML.
  • The base Compose file uses host.containers.internal, which is Podman-friendly.
  • The Docker override file adds host-gateway aliases so Docker can resolve the same host endpoint.
  • Database migrations 001 through 005 run automatically when the Postgres volume is created for the first time.
  • migrations/006_rls_and_grants.sql is intentionally skipped during container bootstrap because it contains deployment-specific grants for a role named amcs_user.