Open RLM Memory is a local-first memory server for AI agents.
- Backend and frontend are served by one FastAPI app on port 8000.
- Memory data is stored in PostgreSQL with pgvector.
- Search caching uses an app-owned PostgreSQL table: pg_cache.
- Identity and data isolation use X-Memory-Namespace (no OAuth/JWT/Auth0).
- LLM and embedding calls use an OpenAI-compatible endpoint (LM Studio by default).
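As a sketch of how a client might talk to the server: only the port (8000) and the X-Memory-Namespace header come from the list above; the `/memories` path and payload shape are assumptions for illustration, not the actual API.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # backend + frontend on one FastAPI app


def build_memory_request(namespace: str, text: str) -> urllib.request.Request:
    """Build a hypothetical memory-write request.

    Data isolation comes from the X-Memory-Namespace header --
    no OAuth/JWT/Auth0 is involved.
    """
    return urllib.request.Request(
        f"{BASE_URL}/memories",  # assumed endpoint path
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Memory-Namespace": namespace,
        },
        method="POST",
    )


req = build_memory_request("agent-alice", "prefers dark mode")
print(req.full_url)                       # http://localhost:8000/memories
print(req.headers["X-memory-namespace"])  # urllib capitalizes header keys
```

Because the namespace travels as a plain header, each agent (or tenant) just picks a distinct namespace string and the server scopes all reads and writes to it.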