
Avalon Memory Crystal Server (amcs)


A Go MCP server for capturing and retrieving thoughts, memory, and project context. Exposes tools over Streamable HTTP, backed by Postgres with pgvector for semantic search.

What it does

  • Capture thoughts with automatic embedding and metadata extraction
  • Search thoughts semantically via vector similarity
  • Organise thoughts into projects and retrieve full project context
  • Summarise and recall memory across topics and time windows
  • Link related thoughts and traverse relationships

Stack

  • Go — MCP server over Streamable HTTP
  • Postgres + pgvector — storage and vector search
  • LiteLLM — primary hosted AI provider (embeddings + metadata extraction)
  • OpenRouter — default upstream behind LiteLLM
  • Ollama — supported local or self-hosted OpenAI-compatible provider

Tools

Tool                 Purpose
capture_thought      Store a thought with embedding and metadata
search_thoughts      Semantic similarity search
list_thoughts        Filter thoughts by type, topic, person, date
thought_stats        Counts and top topics/people
get_thought          Retrieve a thought by ID
update_thought       Patch content or metadata
delete_thought       Hard delete
archive_thought      Soft delete
create_project       Register a named project
list_projects        List projects with thought counts
get_project_context  Recent + semantic context for a project
set_active_project   Set session project scope
get_active_project   Get current session project
summarize_thoughts   LLM prose summary over a filtered set
recall_context       Semantic + recency context block for injection
link_thoughts        Create a typed relationship between thoughts
related_thoughts     Explicit links + semantic neighbours
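
These tools are invoked over the standard MCP tools/call request. As an illustration, a capture_thought call might look like the sketch below; the argument names (content, type, topics) are assumptions for illustration, not confirmed by this README:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "capture_thought",
    "arguments": {
      "content": "Decided to use pgvector for semantic search.",
      "type": "decision",
      "topics": ["architecture"]
    }
  }
}
```

The server would respond with a tool result containing the stored thought's ID and extracted metadata.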

Configuration

Config is YAML-driven. Copy configs/config.example.yaml and set:

  • database.url — Postgres connection string
  • auth.keys — API keys for MCP endpoint access
  • ai.litellm.base_url and ai.litellm.api_key — LiteLLM proxy
  • ai.ollama.base_url and ai.ollama.api_key — Ollama local or remote server

See llm/plan.md for full architecture and implementation plan.

Development

Run the SQL migrations against a local database with:

DATABASE_URL=postgres://... make migrate

LLM integration instructions are served at /llm.

Containers

The repo now includes a Dockerfile and Compose files for running the app with Postgres + pgvector.

  1. Set a real LiteLLM key in your shell: export OB1_LITELLM_API_KEY=your-key
  2. Start the stack with your runtime:
     • Docker: docker compose -f docker-compose.yml -f docker-compose.docker.yml up --build
     • Podman: podman compose -f docker-compose.yml up --build
  3. Call the service on http://localhost:8080

Notes:

  • The app uses configs/docker.yaml inside the container.
  • OB1_LITELLM_BASE_URL overrides the LiteLLM endpoint, so you can retarget it without editing YAML.
  • OB1_OLLAMA_BASE_URL overrides the Ollama endpoint for local or remote servers.
  • The base Compose file uses host.containers.internal, which is Podman-friendly.
  • The Docker override file adds host-gateway aliases so Docker can resolve the same host endpoint.
  • Database migrations 001 through 005 run automatically when the Postgres volume is created for the first time.
  • migrations/006_rls_and_grants.sql is intentionally skipped during container bootstrap because it contains deployment-specific grants for a role named amcs_user.
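
The OB1_* override variables can also be set in a Compose environment block rather than the shell. A hedged sketch (the service name app is assumed, not taken from the repo's Compose files):

```yaml
services:
  app:
    environment:
      OB1_LITELLM_API_KEY: ${OB1_LITELLM_API_KEY}
      OB1_LITELLM_BASE_URL: http://host.containers.internal:4000
```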

Ollama

Set ai.provider: "ollama" to use a local or self-hosted Ollama server through its OpenAI-compatible API.

Example:

ai:
  provider: "ollama"
  embeddings:
    model: "nomic-embed-text"
    dimensions: 768
  metadata:
    model: "llama3.2"
    temperature: 0.1
  ollama:
    base_url: "http://localhost:11434/v1"
    api_key: "ollama"
    request_headers: {}

Notes:

  • For remote Ollama servers, point ai.ollama.base_url at the remote /v1 endpoint.
  • The client always sends Bearer auth; Ollama ignores it locally, so api_key: "ollama" is a safe default.
  • ai.embeddings.dimensions must match the embedding model you actually use, or startup will fail the database vector-dimension check.
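
The dimension check can be pictured with a small sketch; this is illustrative Go, not the actual amcs startup code:

```go
package main

import "fmt"

// checkEmbeddingDimensions sketches the startup validation described above:
// the configured ai.embeddings.dimensions must match the vector size the
// database column was created with, or the server refuses to start.
func checkEmbeddingDimensions(configured, dbColumn int) error {
	if configured != dbColumn {
		return fmt.Errorf("embedding dimensions mismatch: config has %d, database vector column has %d",
			configured, dbColumn)
	}
	return nil
}

func main() {
	// nomic-embed-text produces 768-dimensional vectors, matching the column.
	if err := checkEmbeddingDimensions(768, 768); err != nil {
		panic(err)
	}
	fmt.Println("dimensions ok")

	// A 1536-dimensional model against a 768-dimensional column fails.
	if err := checkEmbeddingDimensions(1536, 768); err != nil {
		fmt.Println(err)
	}
}
```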