Every node in SeevoMap represents a real auto-research execution record. Contributing requires just three fields:

```json
{
  "task": {"domain": "pretraining"},
  "idea": {"text": "Replace RMSNorm with LayerNorm for faster convergence"},
  "result": {"metric_name": "val_loss", "metric_value": 3.18}
}
```

Submit via CLI:

```bash
seevomap submit my_experiment.json
```

Or via SDK:

```python
from seevomap import SeevoMap

svm = SeevoMap()
svm.submit({"task": {"domain": "pretraining"}, ...})
```

Submitted nodes go through a review process before appearing in the public graph.
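Since only those three fields are required, a quick pre-submission check can catch missing ones early. A minimal sketch; `validate_record` is a hypothetical helper, not part of the SeevoMap SDK:

```python
def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record meets the minimum schema."""
    problems = []
    if "domain" not in record.get("task", {}):
        problems.append("task.domain is required")
    if "text" not in record.get("idea", {}):
        problems.append("idea.text is required")
    result = record.get("result", {})
    if "metric_name" not in result or "metric_value" not in result:
        problems.append("result.metric_name and result.metric_value are required")
    return problems

record = {
    "task": {"domain": "pretraining"},
    "idea": {"text": "Replace RMSNorm with LayerNorm for faster convergence"},
    "result": {"metric_name": "val_loss", "metric_value": 3.18},
}
print(validate_record(record))  # an empty list means the record is submittable
```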
Providing more detail makes your experience more useful to others:
```json
{
  "task": {
    "domain": "pretraining",
    "name": "nanogpt-speedrun",
    "description": "Minimize val_loss for GPT-2 124M on FineWeb within 1 hour"
  },
  "idea": {
    "text": "[Experiment] Replace RMSNorm with LayerNorm...\n[Code Changes] In model.py, change...\n[End]",
    "method_tags": ["normalization", "architecture"]
  },
  "result": {
    "metric_name": "val_loss",
    "metric_value": 3.18,
    "baseline_value": 3.255,
    "success": true
  },
  "context": {
    "model": "claude-opus-4-6",
    "epoch": 3,
    "source": "my-autoresearch-project",
    "hardware": "4xA100",
    "date": "2026-03-21T10:00:00Z"
  },
  "code_diff": "--- a/model.py\n+++ b/model.py\n@@ ...",
  "config": {"num_layers": 12, "lr": 0.001},
  "analysis": "LayerNorm converges faster in first 500 steps but similar final loss. Main benefit is training stability."
}
```

| Field | Required | Description |
|---|---|---|
| `task.domain` | Yes | Research domain (e.g., `pretraining`, `posttraining`, `chemistry`) |
| `idea.text` | Yes | What was tried. The `[Experiment]...[Code Changes]...[End]` format is recommended |
| `result.metric_name` + `result.metric_value` | Yes | Execution result |
| `idea.method_tags` | Recommended | Method tags for structured search |
| `result.baseline_value` | Recommended | Baseline value for comparison |
| `result.success` | Recommended | Whether the result beat the baseline |
| `code_diff` | Optional | Unified diff; greatly improves reproducibility |
| `config` | Optional | Hyperparameter dict |
| `analysis` | Optional | Why the idea succeeded or failed |
| `context` | Optional | Execution environment info |
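A record like the one above can also be assembled in code and handed to the CLI. A sketch under the schema in the table; the output file name and the lower-is-better assumption for `val_loss` are illustrative:

```python
import json

record = {
    "task": {"domain": "pretraining", "name": "nanogpt-speedrun"},
    "idea": {
        "text": "[Experiment] Replace RMSNorm with LayerNorm...\n"
                "[Code Changes] In model.py, change...\n[End]",
        "method_tags": ["normalization", "architecture"],
    },
    "result": {"metric_name": "val_loss", "metric_value": 3.18,
               "baseline_value": 3.255},
    "config": {"num_layers": 12, "lr": 0.001},
}

# Derive result.success from the baseline (val_loss is a loss, so lower is better)
record["result"]["success"] = (
    record["result"]["metric_value"] < record["result"]["baseline_value"]
)

with open("my_experiment.json", "w") as f:
    json.dump(record, f, indent=2)
# then: seevomap submit my_experiment.json
```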
- Your node joins the graph and gets an embedding computed automatically
- When another agent searches for related experiences, your node may be matched
- The matched node's `idea.text` and `result` are injected into the agent's prompt
- The agent learns from your experience (successes to build on, failures to avoid)
- New execution results become new nodes — the network gets smarter with every use
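The injection step above might look roughly like the following; the exact prompt format SeevoMap uses is not documented here, so `format_experience` and its output layout are purely illustrative:

```python
def format_experience(node: dict) -> str:
    """Render a matched node's idea and result as prompt context (illustrative format)."""
    result = node["result"]
    outcome = "improved on" if result.get("success") else "did not beat"
    return (
        f"Prior experiment: {node['idea']['text']}\n"
        f"Result: {result['metric_name']} = {result['metric_value']} "
        f"({outcome} baseline {result.get('baseline_value', 'n/a')})"
    )

node = {
    "idea": {"text": "Replace RMSNorm with LayerNorm"},
    "result": {"metric_name": "val_loss", "metric_value": 3.18,
               "baseline_value": 3.255, "success": True},
}
print(format_experience(node))
```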
Submit a directory of experiment JSON files:
```bash
seevomap submit --dir ./my_experiments/
```
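The directory form can be mirrored with the SDK by loading each JSON file yourself. A sketch; `load_records` is a hypothetical helper, and it assumes every `*.json` file in the directory is a submittable record:

```python
import json
from pathlib import Path

def load_records(directory: str) -> list[dict]:
    """Load every experiment JSON file in a directory, mirroring what --dir submits."""
    return [json.loads(p.read_text()) for p in sorted(Path(directory).glob("*.json"))]

# With the SDK (assuming svm = SeevoMap() as above), each record would then be submitted:
# for record in load_records("./my_experiments"):
#     svm.submit(record)
```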