Parameter Golf Example

This example shows how to use SeevoMap to accelerate experiments on the Parameter Golf challenge.

Goal: Train a language model with at most 16 MB of parameters in at most 10 minutes on 8×H100, minimizing bits-per-byte (bpb).

Setup

pip install seevomap

Usage

1. Fetch Community Context

# Get the top 15 most relevant experiences
seevomap inject "minimize bits-per-byte for compact language model under 16MB" \
  --top-k 15 > community_context.txt

cat community_context.txt

Or use the Python script:

python inject_context.py

2. Use in Your Evolutionary Search

The inject_context.py script demonstrates how to:

  • Fetch community experience via the SDK
  • Format it for injection into an evolutionary search prompt
  • Combine with your own experiment history
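The steps above can be sketched as follows. This is a minimal illustration of the formatting-and-combining stage only; the record fields (`technique`, `bpb`, `notes`) and function names are assumptions for illustration, not the actual SeevoMap SDK schema — see inject_context.py for the real calls.

```python
# Hypothetical sketch: merge fetched community records with your own
# experiment history into one prompt section for an evolutionary search.
# Field names are illustrative assumptions, not the real SDK schema.

def format_context(records, own_history):
    """Render community records and local history as a single prompt block."""
    lines = ["## Community experience (via SeevoMap)"]
    for r in records:
        lines.append(f"- {r['technique']}: bpb={r['bpb']:.4f} ({r['notes']})")
    lines.append("")
    lines.append("## Your own experiment history")
    for h in own_history:
        lines.append(f"- {h}")
    return "\n".join(lines)

# Example inputs (values taken from the techniques listed in this README).
community = [
    {"technique": "int6 STE QAT", "bpb": 1.1586, "notes": "best known"},
    {"technique": "SwiGLU + wider hidden dim", "bpb": 1.2100, "notes": "stable"},
]
history = ["run 3: Muon optimizer with momentum warmup"]

prompt_section = format_context(community, history)
print(prompt_section)
```

The resulting string can be pasted (or programmatically injected) into the system prompt that drives your search loop.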

3. Submit Results Back

After running your experiment, submit the results:

seevomap submit sample_node.json

Or modify sample_node.json with your actual results and submit.
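If you build the submission file programmatically, the pattern looks like the sketch below. The field names and metric values here are illustrative placeholders only; match the actual schema shown in sample_node.json.

```python
import json

# Hypothetical sketch of assembling a result record for submission.
# Every field name and value below is a placeholder; use the schema
# from sample_node.json for real submissions.
node = {
    "task": "parameter-golf",
    "description": "int6 STE QAT + SwiGLU, 3x MLP expansion",
    "metrics": {"bpb": 1.1702, "params_mb": 15.8, "train_minutes": 9.5},
    "hardware": "8xH100",
}

with open("my_node.json", "w") as f:
    json.dump(node, f, indent=2)
```

Then submit it with `seevomap submit my_node.json`.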

What's in the Graph

SeevoMap contains execution records from:

  • NanoGPT evolutionary search (3 models × pretraining task = ~2,300 records)
  • GRPO post-training (3 models × RL task = ~1,800 records)
  • Parameter Golf community (18 SOTA submissions analyzed)

Key techniques found in the graph for parameter-golf:

  • int6 STE quantization-aware training (best: 1.1586 bpb)
  • 3x MLP expansion with compression
  • Sliding window evaluation
  • SwiGLU activations with wider hidden dimensions
  • Muon optimizer with momentum warmup
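To make the first technique concrete, here is a minimal pure-Python sketch of the int6 fake-quantization forward pass used in quantization-aware training. The straight-through estimator (STE) refers to the backward pass, where gradients flow through the rounding as if it were the identity; this sketch shows only the forward quantize/dequantize step. A real implementation would operate on tensors with autograd support.

```python
# Minimal sketch of int6 fake quantization (the forward pass of QAT).
# A symmetric int6 grid has 64 levels in [-32, 31].

def fake_quant_int6(weights):
    """Round weights to a symmetric int6 grid and dequantize back to floats."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 31.0                       # map |w|_max onto the grid
    q = [max(-32, min(31, round(w / scale))) for w in weights]
    return [v * scale for v in q]                # dequantized values

w = [0.31, -0.07, 0.29, -0.31]
wq = fake_quant_int6(w)
```

During QAT the model trains against `wq` while gradients update the underlying float weights `w`, so the network learns to tolerate the 6-bit rounding error.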