The init command bootstraps ExplainThisRepo by creating a local, persistent configuration file.
Running `init` configures:
- your selected LLM provider
- optional GitHub access (for private repos and higher rate limits)
- Prompts you to select an LLM provider:
  - Gemini
  - OpenAI
  - Ollama
  - Anthropic
  - Groq
  - OpenRouter
- Overwrites any existing configuration file
- Collects only the configuration required for that provider
- Optionally prompts for a GitHub token
- Writes a minimal `config.toml` file to the OS-appropriate config directory
- Writes the selected provider as the active provider
- Exits immediately
During setup, you can configure a GitHub token.
This enables:
- Access to private repositories
- Higher API rate limits for public repositories
If skipped:
- Public repositories still work
- Rate limits are lower
- Private repositories will fail
If no token is set in config, ExplainThisRepo will also check the following environment variables:
- `GITHUB_TOKEN`
- `GH_TOKEN`
If both config and environment variables are set, the config value takes precedence.
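The resolution order described above can be sketched in POSIX shell. This is illustrative only, not the tool's actual code; `config_token` is a hypothetical stand-in for the value parsed from `config.toml`:

```shell
#!/bin/sh
# Illustrative token resolution: config value first, then GITHUB_TOKEN, then GH_TOKEN.
config_token=""                # value from config.toml; empty when unset
GITHUB_TOKEN="ghp_from_env"    # example environment value
GH_TOKEN="ghp_fallback"

# ${a:-b} expands to $a when non-empty, otherwise b, giving the precedence chain.
token="${config_token:-${GITHUB_TOKEN:-${GH_TOKEN:-}}}"
echo "$token"
```

With `config_token` empty, the environment value wins; setting `config_token` would take precedence over both environment variables.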
Depending on the provider selected:

- Gemini: prompts for `api_key`
- OpenAI: prompts for `api_key`
- Ollama: prompts for `model` (e.g. `llama3`, `gemma3:4b`, `glm-5:cloud`) and `host` (default: `http://localhost:11434`)
- Anthropic: prompts for `api_key`
- Groq: prompts for `api_key` and model selection
- OpenRouter: prompts for `api_key` and model selection or manual input
Only the selected provider configuration and optional GitHub configuration are written. Empty input for optional fields skips configuration safely.
- No repository analysis
- No model execution
- No API key validation
- No dependency installation
- No environment variable modification
The configuration is written locally only.
- API keys and tokens are read using hidden terminal input
- Characters are not echoed
- Paste works normally
- Ctrl+C exits cleanly without writing partial state
A single authoritative config path is used per OS:

- Windows: `%APPDATA%\ExplainThisRepo\config.toml`
- Linux/macOS: `$XDG_CONFIG_HOME/explainthisrepo/config.toml` (fallback: `~/.config/explainthisrepo/config.toml`)
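The XDG fallback behavior can be sketched in shell. This is an assumption about the lookup, not the tool's actual implementation:

```shell
#!/bin/sh
# Resolve the config directory: prefer $XDG_CONFIG_HOME, fall back to ~/.config.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/explainthisrepo"
config_file="$config_dir/config.toml"
echo "$config_file"
```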
Example `config.toml` fragments:

GitHub:

```toml
[github]
token = "ghp_xxx"
```

Gemini:

```toml
[llm]
provider = "gemini"

[providers.gemini]
api_key = "..."
```

OpenAI:

```toml
[llm]
provider = "openai"

[providers.openai]
api_key = "..."
```

Ollama:

```toml
[llm]
provider = "ollama"

[providers.ollama]
model = "<user-selected>"
host = "http://localhost:11434"
```

Anthropic:

```toml
[llm]
provider = "anthropic"

[providers.anthropic]
api_key = "..."
```

Groq:

```toml
[llm]
provider = "groq"

[providers.groq]
api_key = "..."
model = "<user-selected>"
```

OpenRouter:

```toml
[llm]
provider = "openrouter"

[providers.openrouter]
api_key = "..."
model = "<user-selected>"
```

`init` exists to separate configuration from execution.
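Putting the fragments together, a complete `config.toml` after selecting Ollama and supplying a GitHub token might look like this (illustrative values):

```toml
[github]
token = "ghp_xxx"

[llm]
provider = "ollama"

[providers.ollama]
model = "llama3"
host = "http://localhost:11434"
```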
After initialization:
- All analysis commands run without repeated prompts
- GitHub authentication is resolved automatically from config or environment
- Provider selection can be overridden via `--llm` without re-running `init`
- Only one provider is active at a time
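The runtime override can be sketched as follows. This is assumed logic with hypothetical variable names, not the actual implementation:

```shell
#!/bin/sh
# A --llm flag value, when present, overrides the provider stored in config.toml.
config_provider="gemini"   # from [llm] provider in config.toml
flag_provider=""           # value of --llm; empty when the flag is absent
provider="${flag_provider:-$config_provider}"
echo "$provider"
```

With the flag absent, the configured provider is used; passing `--llm` swaps the provider for that run only, leaving `config.toml` untouched.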
This establishes a stable foundation for:
- Single-provider execution with optional runtime override
- Authenticated GitHub access