Running gigaevo-core-internal with memory_platform

This is the current end-to-end flow for using gigaevo-memory as the backend for:

  • runtime retrieval with memory_enabled=true
  • final ideas-tracker memory write with ideas_tracker=true

When api.use_api is set to true, gigaevo-core-internal automatically uses gigaevo.memory_platform.

What is used now

  • Persistent memory storage: gigaevo-memory PostgreSQL database
  • Vector search: gigaevo-memory backend search, not local Chroma
  • Query embeddings: generated by gigaevo-memory during search when needed
  • Local checkpoint directory: only runtime/cache artifacts for GAM

namespace is the main selector for which remote memory bank is used. It is a logical partition inside the same backend database, not a separate Postgres instance.
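To make the partition model concrete, here is a minimal sketch: two namespaces share the one Postgres DSN used later in this walkthrough, they are a logical filter inside the backend rather than separate databases. The `bank` helper is purely illustrative, not a real API.

```python
# Hypothetical sketch: namespaces share one Postgres DSN; a namespace is
# a logical partition inside the backend, not a separate instance.
DSN = "postgresql+asyncpg://gigaevo:gigaevo@localhost:5432/gigaevo"

def bank(namespace: str) -> dict:
    """Describe a logical memory bank (illustration only)."""
    return {"dsn": DSN, "namespace": namespace}

exp9 = bank("exp9")
default = bank("default")
assert exp9["dsn"] == default["dsn"]              # same backend instance
assert exp9["namespace"] != default["namespace"]  # different partition
```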

1. Start backend dependencies

Use a Postgres image with pgvector:

docker run -d --name gigaevo-memory-postgres \
  -e POSTGRES_DB=gigaevo \
  -e POSTGRES_USER=gigaevo \
  -e POSTGRES_PASSWORD=gigaevo \
  -p 5432:5432 \
  pgvector/pgvector:pg15

docker run -d --name gigaevo-memory-redis \
  -p 6379:6379 \
  redis:7-alpine

2. Start gigaevo-memory

Use a Python 3.12 env for gigaevo-memory.

conda create -n gigaevo-memory python=3.12 -y
conda activate gigaevo-memory

cd /home/petranokhin/projects/gigaevo_memory/gigaevo-memory
python -m pip install --upgrade pip
python -m pip install -e ./api
python -m pip install sentence-transformers

Export backend env vars:

export POSTGRES_DSN='postgresql+asyncpg://gigaevo:gigaevo@localhost:5432/gigaevo'
export REDIS_URL='redis://localhost:6379/0'
export ENABLE_VECTOR_SEARCH=true
export EMBEDDING_PROVIDER=sentencetransformers
export EMBEDDING_MODEL=all-MiniLM-L6-v2
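Before starting the API, it can be worth sanity-checking that all five variables above are actually exported. The variable names come from this document; the checker itself is just a convenience sketch.

```python
import os

# The five backend variables exported above.
REQUIRED = [
    "POSTGRES_DSN",
    "REDIS_URL",
    "ENABLE_VECTOR_SEARCH",
    "EMBEDDING_PROVIDER",
    "EMBEDDING_MODEL",
]

def missing_backend_vars(env=os.environ) -> list:
    """Return the names from REQUIRED that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example against a fake environment (real usage: missing_backend_vars()).
fake = {
    "POSTGRES_DSN": "postgresql+asyncpg://gigaevo:gigaevo@localhost:5432/gigaevo",
    "REDIS_URL": "redis://localhost:6379/0",
}
assert missing_backend_vars(fake) == [
    "ENABLE_VECTOR_SEARCH",
    "EMBEDDING_PROVIDER",
    "EMBEDDING_MODEL",
]
```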

Run migrations and start the API:

cd /home/petranokhin/projects/gigaevo_memory/gigaevo-memory/api
alembic -c app/db/alembic.ini upgrade head
uvicorn app.main:app --host 0.0.0.0 --port 8000

Health check:

curl http://localhost:8000/health

3. Configure gigaevo-core-internal

In config/memory.yaml, set:

api:
  base_url: http://localhost:8000
  use_api: true
  namespace: exp9
  channel: latest

If you want final memory writing at the end of the run, also keep:

ideas_tracker:
  memory_write_pipeline:
    enabled: true
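A quick sanity check over the two config fragments above. The dict mirrors the YAML verbatim; the assertions encode the conditions this walkthrough relies on, and are only a sketch, not part of gigaevo-core-internal.

```python
# config/memory.yaml fragments from above, mirrored as a dict.
config = {
    "api": {
        "base_url": "http://localhost:8000",
        "use_api": True,
        "namespace": "exp9",
        "channel": "latest",
    },
    "ideas_tracker": {
        "memory_write_pipeline": {"enabled": True},
    },
}

# Conditions this walkthrough relies on: API mode on, a non-default
# namespace selected, and the final write pipeline enabled.
assert config["api"]["use_api"] is True
assert config["api"]["namespace"] != "default"
assert config["ideas_tracker"]["memory_write_pipeline"]["enabled"] is True
```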

4. Run core

Use your normal gigaevo-core-internal env.

If gigaevo_memory is not installed in that env, install the lightweight client:

python -m pip install -e /home/petranokhin/projects/gigaevo_memory/gigaevo-memory/client/python

Then run:

conda activate <your-gigaevo-core-env>
cd /home/petranokhin/projects/gigaevo_memory/gigaevo-core-internal

export MEMORY_API_URL=http://localhost:8000
python run.py problem.name=heilbron memory_enabled=true ideas_tracker=true namespace=exp9 redis.db=1

Notes:

  • namespace=exp9 selects the remote memory bank to read/write
  • memory_enabled=true tests runtime retrieval
  • ideas_tracker=true tests the final write pipeline
  • checkpoint_dir is optional in API mode; it only changes local runtime artifacts
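The run command above passes key=value overrides on the command line (Hydra-style, judging by the dotted keys; that framing is my assumption). A small helper to compose them, purely for illustration — run.py only needs the flat strings:

```python
def overrides(params: dict) -> list:
    """Render {key: value} pairs as key=value override strings."""
    return [f"{key}={value}" for key, value in params.items()]

# The exact overrides from the run command above.
args = overrides({
    "problem.name": "heilbron",
    "memory_enabled": "true",
    "ideas_tracker": "true",
    "namespace": "exp9",
    "redis.db": 1,
})
assert args[0] == "problem.name=heilbron"
assert "namespace=exp9" in args
```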

5. What success looks like

In the core log you should see:

  • Memory namespace override: exp9
  • Using memory backend (class=gigaevo.memory_platform.shared_memory.memory, use_api=True, namespace=exp9, ...)
  • Selected ... memory idea(s) via red agent ...

If you instead see:

  • namespace=default

then the run is reading the wrong remote memory bank.
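One way to confirm which bank a run actually used is to scan the log for namespace values. The log line formats come from this document; the extraction regex is my own sketch.

```python
import re

def namespaces_in_log(lines) -> list:
    """Extract every namespace=<value> occurrence from log lines."""
    found = []
    for line in lines:
        found.extend(re.findall(r"namespace=(\w+)", line))
    return found

# The success log lines quoted above.
log = [
    "Memory namespace override: exp9",
    "Using memory backend (class=gigaevo.memory_platform.shared_memory.memory, "
    "use_api=True, namespace=exp9, ...)",
]
# A healthy run mentions only the namespace you selected.
assert set(namespaces_in_log(log)) == {"exp9"}
```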

6. Verify data in the backend

List cards:

curl "http://localhost:8000/v1/memory-cards?limit=20&offset=0&channel=latest"

Vector search in a namespace:

curl -X POST "http://localhost:8000/v1/search/unified" \
  -H "Content-Type: application/json" \
  -d '{
    "search_type": "vector",
    "query": "test retrieval query",
    "entity_type": "memory_card",
    "namespace": "exp9",
    "channel": "latest",
    "document_kind": "full_card",
    "top_k": 5
  }'

If runtime retrieval is working, the cards returned here should match the IDs reported by the selector logs.
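Equivalently, the search body can be built and posted from Python. The fields mirror the curl payload above exactly; the request itself is only sketched, since sending it needs the backend to be up.

```python
import json
from urllib import request

# The unified-search payload from the curl example above.
payload = {
    "search_type": "vector",
    "query": "test retrieval query",
    "entity_type": "memory_card",
    "namespace": "exp9",
    "channel": "latest",
    "document_kind": "full_card",
    "top_k": 5,
}
body = json.dumps(payload).encode()

req = request.Request(
    "http://localhost:8000/v1/search/unified",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the backend up: request.urlopen(req) returns the matching cards.
assert json.loads(body)["namespace"] == "exp9"
```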

7. Important behavior

  • With api.use_api: true, memory cards are saved in gigaevo-memory, not in the local checkpoint.
  • Memory written by the ideas tracker only becomes available to later runs, because the final write happens after the main evolution loop finishes.
  • namespace and channel are shared by runtime retrieval and final write. There is no separate read/write namespace setting right now.

8. Common failures

alembic says No 'script_location' key found

Use:

alembic -c app/db/alembic.ini upgrade head

Postgres says extension "vector" is not available

Use the pgvector/pgvector:pg15 image, not plain Postgres.

Core says No module named 'gigaevo_memory'

Install the client package in the core env:

python -m pip install -e /home/petranokhin/projects/gigaevo_memory/gigaevo-memory/client/python

Selector uses namespace=default

Pass namespace=... on the run.py command line, or set api.namespace in config/memory.yaml.