Avalon Memory Crystal Server (amcs)
A Go MCP server for capturing and retrieving thoughts, memory, and project context. Exposes tools over Streamable HTTP, backed by Postgres with pgvector for semantic search.
What it does
- Capture thoughts with automatic embedding and metadata extraction
- Search thoughts semantically via vector similarity
- Organise thoughts into projects and retrieve full project context
- Summarise and recall memory across topics and time windows
- Link related thoughts and traverse relationships
Stack
- Go — MCP server over Streamable HTTP
- Postgres + pgvector — storage and vector search
- LiteLLM — primary AI provider (embeddings + metadata extraction)
- OpenRouter — default upstream behind LiteLLM
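To give a feel for the vector-search half of the stack, pgvector stores embeddings in a vector column and provides SQL distance operators (for example <=> for cosine distance), so similarity search is an ORDER BY ... LIMIT query over the embedding column. The sketch below is illustrative only: the thoughts table, its columns, and the connection string are assumptions, not the server's actual schema or code.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver
	"github.com/pgvector/pgvector-go"
)

// Thought is a minimal illustrative row shape, not the server's real model.
type Thought struct {
	ID      string
	Content string
}

// nearestThoughts returns the k stored thoughts closest to queryEmbedding by
// cosine distance. The "thoughts" table and its columns are assumptions.
func nearestThoughts(ctx context.Context, db *sql.DB, queryEmbedding []float32, k int) ([]Thought, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT id, content
		   FROM thoughts
		  ORDER BY embedding <=> $1::vector  -- pgvector cosine-distance operator
		  LIMIT $2`,
		pgvector.NewVector(queryEmbedding), k)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var out []Thought
	for rows.Next() {
		var t Thought
		if err := rows.Scan(&t.ID, &t.Content); err != nil {
			return nil, err
		}
		out = append(out, t)
	}
	return out, rows.Err()
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/amcs?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A real query embedding would come from the LiteLLM embeddings endpoint.
	results, err := nearestThoughts(context.Background(), db, []float32{0.1, 0.2, 0.3}, 5)
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range results {
		fmt.Println(t.ID, t.Content)
	}
}
```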
Tools
| Tool | Purpose |
|---|---|
| capture_thought | Store a thought with embedding and metadata |
| search_thoughts | Semantic similarity search |
| list_thoughts | Filter thoughts by type, topic, person, date |
| thought_stats | Counts and top topics/people |
| get_thought | Retrieve a thought by ID |
| update_thought | Patch content or metadata |
| delete_thought | Hard delete |
| archive_thought | Soft delete |
| create_project | Register a named project |
| list_projects | List projects with thought counts |
| get_project_context | Recent + semantic context for a project |
| set_active_project | Set session project scope |
| get_active_project | Get current session project |
| summarize_thoughts | LLM prose summary over a filtered set |
| recall_context | Semantic + recency context block for injection |
| link_thoughts | Create a typed relationship between thoughts |
| related_thoughts | Explicit links + semantic neighbours |
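As a rough illustration of how a client drives these tools, an MCP client POSTs JSON-RPC tools/call requests to the server's Streamable HTTP endpoint, authenticated with one of the configured API keys. The argument name content below is an assumption; the real input schema for each tool is advertised by the server via tools/list.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "capture_thought",
    "arguments": {
      "content": "Decided to back amcs search with pgvector cosine similarity."
    }
  }
}
```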
Configuration
Config is YAML-driven. Copy configs/config.example.yaml and set:
- database.url — Postgres connection string
- auth.keys — API keys for MCP endpoint access
- ai.litellm.base_url and ai.litellm.api_key — LiteLLM proxy
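A minimal sketch of what the resulting YAML might look like, assuming the dotted key names above map to nested sections (the real configs/config.example.yaml is the authoritative reference):

```yaml
# Illustrative only: the exact key layout may differ from configs/config.example.yaml.
database:
  url: postgres://amcs:password@localhost:5432/amcs?sslmode=disable

auth:
  keys:
    - replace-with-a-strong-api-key

ai:
  litellm:
    base_url: http://localhost:4000   # LiteLLM proxy endpoint
    api_key: sk-your-litellm-key
```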
See llm/plan.md for full architecture and implementation plan.