This example shows how to use SeevoMap to accelerate experiments on the Parameter Golf challenge.
Goal: Train a language model with ≤16MB parameters in ≤10 minutes on 8×H100, minimizing bits-per-byte (bpb).
Install the package:

```shell
pip install seevomap
```

Get the top 15 most relevant experiences:

```shell
seevomap inject "minimize bits-per-byte for compact language model under 16MB" \
  --top-k 15 > community_context.txt

cat community_context.txt
```

Or use the Python script:
```shell
python inject_context.py
```

The `inject_context.py` script demonstrates how to:
- Fetch community experience via the SDK
- Format it for injection into an evolutionary search prompt
- Combine with your own experiment history
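The steps above can be sketched as follows. This is a minimal illustration, not the actual `inject_context.py`: the `format_context` helper name and the `technique`/`bpb` record fields are assumptions, since the real response shape is defined by the SeevoMap SDK.

```python
def format_context(community_records, own_history, top_k=15):
    """Merge community experience with local experiment history into
    one text block for injection into an evolutionary search prompt.

    Assumes each community record is a dict with 'technique' and 'bpb'
    keys; the real field names depend on the SeevoMap SDK response.
    """
    lines = ["## Community experience (SeevoMap)"]
    for rec in community_records[:top_k]:
        # keep only the top-k records, mirroring the CLI's --top-k flag
        lines.append(f"- {rec['technique']} (best bpb: {rec['bpb']})")
    lines.append("")
    lines.append("## Your experiment history")
    lines.extend(f"- {run}" for run in own_history)
    return "\n".join(lines)
```

The resulting string can be prepended to your search prompt alongside whatever local history you already track.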
After running your experiment, submit the results:
```shell
seevomap submit sample_node.json
```

Or modify `sample_node.json` with your actual results and submit it.
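A submission file might look like the fragment below. Every field name here is an assumption for illustration only; consult the shipped `sample_node.json` for the actual schema.

```json
{
  "task": "parameter-golf",
  "technique": "int6 STE quantization-aware training",
  "metrics": { "bpb": 1.1586 },
  "notes": "3x MLP expansion with compression; Muon optimizer"
}
```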
SeevoMap contains execution records from:
- NanoGPT evolutionary search (3 models × pretraining task = ~2,300 records)
- GRPO post-training (3 models × RL task = ~1,800 records)
- Parameter Golf community (18 SOTA submissions analyzed)
Key techniques found in the graph for parameter-golf:
- int6 STE quantization-aware training (best: 1.1586 bpb)
- 3x MLP expansion with compression
- Sliding window evaluation
- SwiGLU activations with wider hidden dimensions
- Muon optimizer with momentum warmup
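As a rough illustration of the first technique: quantization-aware training fake-quantizes weights to the signed int6 grid on the forward pass, while the straight-through estimator (STE) treats the rounding as identity so gradients reach the underlying float weights. A framework-free sketch of just the fake-quantization math (the function name and `scale` parameter are illustrative assumptions):

```python
def fake_quant_int6(x, scale):
    """Fake-quantize a float to the signed int6 grid [-32, 31].

    During QAT the forward pass uses this quantized value; the STE
    passes gradients through the round/clamp as if it were identity.
    """
    q = round(x / scale)       # snap to the quantization grid
    q = max(-32, min(31, q))   # clamp to the signed 6-bit range
    return q * scale           # dequantize back to float
```

In a real training loop this would run inside the forward pass of each quantized layer, with the clamp bounds fixed by the 6-bit format and `scale` learned or derived per tensor.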