
Commit 5e20ac0

stephanj and claude committed
feat: add FAQ, Ollama setup guide, and comparison page
Three new Getting Started pages targeting high-intent search queries:

docs/getting-started/faq.md
- 10 Q&As covering pricing, privacy, LLM support, Agent Mode, MCP, SDD, RAG, Skills, offline usage, and IDE compatibility
- FAQPage JSON-LD schema for Google People Also Ask eligibility
- Accurate cloud provider list: 13 providers including DeepInfra, Kimi, and GLM (Zhipu AI), which were previously missing

docs/getting-started/use-ollama-in-intellij.md
- Step-by-step Ollama setup targeting "ollama intellij idea" queries
- Tabbed model recommendations for chat, coding, Agent Mode (glm-4.7-flash, qwen3), and inline completion
- Hardware performance table and troubleshooting section

docs/getting-started/why-devoxxgenie.md
- Comparison page targeting "github copilot alternative intellij"
- Feature comparison table vs. Copilot, Cursor, JetBrains AI
- Covers BYOK model, privacy, multi-provider flexibility

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
1 parent d00ac39 commit 5e20ac0

4 files changed: 449 additions & 0 deletions

docs/getting-started/faq.md (+196 lines)
---
sidebar_position: 7
title: Frequently Asked Questions - DevoxxGenie
description: Answers to common questions about DevoxxGenie — the free, open-source AI code assistant plugin for IntelliJ IDEA. Covers pricing, privacy, LLM support, Ollama, MCP, Agent Mode, and more.
keywords: [devoxxgenie faq, intellij ai plugin questions, is devoxxgenie free, devoxxgenie ollama, devoxxgenie privacy, devoxxgenie agent mode, devoxxgenie mcp, devoxxgenie vs copilot]
image: /img/devoxxgenie-social-card.jpg
---

import Head from '@docusaurus/Head';

<Head>
  <script type="application/ld+json">{`
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Is DevoxxGenie free?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes, DevoxxGenie itself is completely free and open source. It uses a BYOK (Bring Your Own Keys) model — you supply your own API keys for cloud LLMs, or run local models with Ollama or LM Studio at no API cost. There is no DevoxxGenie subscription fee."
          }
        },
        {
          "@type": "Question",
          "name": "Does DevoxxGenie send my code to the cloud?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Only to the LLM provider you explicitly configure. If you use a cloud provider like OpenAI or Anthropic, your prompts (which may include code) are sent to that provider's API under their privacy policy. If you use a local model via Ollama or LM Studio, nothing leaves your machine. DevoxxGenie itself does not collect or transmit your code."
          }
        },
        {
          "@type": "Question",
          "name": "Which LLMs does DevoxxGenie support?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "DevoxxGenie supports a wide range of LLMs. Cloud providers include OpenAI (GPT-4o, o3, o4-mini), Anthropic (Claude 3.5/4), Google (Gemini 1.5/2.x), Grok (xAI), Mistral, Groq, DeepInfra, DeepSeek (R1, Coder), Kimi (Moonshot AI), GLM (Zhipu AI), OpenRouter, Azure OpenAI, and Amazon Bedrock. Local providers include Ollama, LM Studio, GPT4All, Llama.cpp, and Jan. It also supports any OpenAI-compatible custom endpoint."
          }
        },
        {
          "@type": "Question",
          "name": "How do I use Ollama with DevoxxGenie?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Install Ollama from ollama.com, pull a model (e.g., 'ollama pull llama3.2'), then in DevoxxGenie settings go to Tools > DevoxxGenie > LLM Providers > Ollama, leave the base URL as http://localhost:11434, click Refresh Models, and select your model. See the full guide at https://genie.devoxx.com/docs/getting-started/use-ollama-in-intellij-idea"
          }
        },
        {
          "@type": "Question",
          "name": "What is Agent Mode?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Agent Mode enables the LLM to autonomously explore and modify your codebase using built-in tools — reading files, listing directories, searching for patterns, running tests, and making targeted edits. Instead of you manually providing code context, the agent investigates your project on-demand. It works with both local Ollama models and cloud providers."
          }
        },
        {
          "@type": "Question",
          "name": "What is MCP in DevoxxGenie?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "MCP stands for Model Context Protocol — an open standard that lets LLMs connect to external tools and services. DevoxxGenie includes a built-in MCP Marketplace where you can browse and install MCP servers for filesystem access, web browsing, databases, APIs, and more. Once installed, the LLM can use these tools automatically during conversations."
          }
        },
        {
          "@type": "Question",
          "name": "What is Spec-driven Development (SDD)?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Spec-driven Development is a workflow where you define what needs to be built as structured task specs with acceptance criteria (stored as markdown files), and the LLM agent figures out how to build it. The DevoxxGenie Specs tool window shows tasks in a Kanban board and task list. Clicking 'Implement with Agent' injects the full spec into the LLM prompt, and the agent checks off acceptance criteria as it works."
          }
        },
        {
          "@type": "Question",
          "name": "What is RAG in DevoxxGenie?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "RAG stands for Retrieval-Augmented Generation. DevoxxGenie can index your project's source code into a local ChromaDB vector database (running in Docker) using Ollama embeddings. When you ask a question, it retrieves the most semantically relevant code snippets and includes them in the prompt automatically — giving the LLM accurate project-specific context without you having to manually select files."
          }
        },
        {
          "@type": "Question",
          "name": "Does DevoxxGenie work offline?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes, if you use a local model provider like Ollama. Once you've downloaded a model, DevoxxGenie can run entirely offline — no internet connection is needed. Cloud providers (OpenAI, Anthropic, etc.) require an internet connection."
          }
        },
        {
          "@type": "Question",
          "name": "Which IntelliJ versions does DevoxxGenie support?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "DevoxxGenie requires IntelliJ IDEA 2023.3.4 or later. It works with IntelliJ IDEA Community and Ultimate editions, as well as other JetBrains IDEs built on the IntelliJ platform (like PyCharm, GoLand, WebStorm)."
          }
        }
      ]
    }
  `}</script>
</Head>

# Frequently Asked Questions

## General

### Is DevoxxGenie free?

Yes, DevoxxGenie itself is completely free and open source. It uses a **BYOK (Bring Your Own Keys)** model — you supply your own API keys for cloud LLMs, or run local models with Ollama or LM Studio at no API cost. There is no DevoxxGenie subscription fee.

### Does DevoxxGenie send my code to the cloud?

Only to the LLM provider **you** explicitly configure:

- **Cloud providers** (OpenAI, Anthropic, etc.): your prompts, which may include code, are sent to that provider's API under their own privacy policy.
- **Local providers** (Ollama, LM Studio, etc.): nothing leaves your machine. The model runs locally.

DevoxxGenie itself does not collect, store, or transmit your code.

### Which IntelliJ versions are supported?

DevoxxGenie requires **IntelliJ IDEA 2023.3.4 or later**. It works with the Community and Ultimate editions, and with other JetBrains IDEs on the IntelliJ platform (PyCharm, GoLand, WebStorm, etc.).

### Does DevoxxGenie work offline?

Yes — if you use a local model provider like **Ollama**. Once you've downloaded a model, DevoxxGenie runs entirely offline. Cloud providers (OpenAI, Anthropic, Gemini, etc.) require an internet connection.

---

## LLM Providers

### Which LLMs does DevoxxGenie support?

**Cloud providers**: OpenAI (GPT-4o, o3, o4-mini), Anthropic (Claude 3.5/4), Google (Gemini 1.5/2.x), Grok (xAI), Mistral, Groq, DeepInfra, DeepSeek (R1, Coder), Kimi (Moonshot AI), GLM (Zhipu AI), OpenRouter, Azure OpenAI, Amazon Bedrock

**Local providers**: Ollama, LM Studio, GPT4All, Llama.cpp, Jan, any OpenAI-compatible endpoint

### How do I use Ollama with DevoxxGenie?

See the [full Ollama setup guide](/docs/getting-started/use-ollama-in-intellij-idea). The short version:

1. Install Ollama and pull a model: `ollama pull llama3.2` or `ollama pull llama4`
2. In DevoxxGenie settings → **LLM Providers** → **Ollama**
3. Leave the base URL as `http://localhost:11434`
4. Click **Refresh Models** and select your model
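
If you want to confirm IntelliJ can reach Ollama before configuring the plugin, you can query the Ollama HTTP API directly. A quick sanity check, assuming the default port: `/api/tags` returns every model you've pulled, which is the list the **Refresh Models** step should pick up:

```bash
# List locally pulled models via the Ollama REST API (default port 11434).
curl http://localhost:11434/api/tags
```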

### Can I use my own API endpoint (OpenAI-compatible)?

Yes. DevoxxGenie supports [custom providers](../llm-providers/custom-providers.md) — any endpoint that speaks the OpenAI chat completions API, including self-hosted models, DeepSeek R1, Grok, JLama, and enterprise AI platforms.
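
As a rough illustration of what "speaks the OpenAI chat completions API" means: any server that accepts a request of this shape can be wired up as a custom provider. The URL, key variable, and model name below are placeholders:

```bash
# Placeholder endpoint and model name; substitute your own values.
curl http://my-llm-host:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MY_API_KEY" \
  -d '{
    "model": "my-local-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```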

---

## Features

### What is Agent Mode?

[Agent Mode](../features/agent-mode.md) enables the LLM to **autonomously explore and modify your codebase** using built-in tools — reading files, listing directories, searching for patterns, running tests, and making targeted edits. Instead of manually providing code context, the agent investigates your project on-demand.

It works with both local Ollama models (e.g. Qwen2.5, Llama 4, Mistral Small) and cloud providers.

### What is Spec-driven Development (SDD)?

[Spec-driven Development](../features/spec-driven-development.md) is a workflow where you define **what** needs to be built as structured task specs with acceptance criteria (stored as markdown files), and the LLM agent figures out **how** to build it.

The DevoxxGenie Specs tool window shows tasks in a Kanban board. Click **"Implement with Agent"** and the agent checks off acceptance criteria as it works.

### What is MCP?

[MCP (Model Context Protocol)](../features/mcp_expanded.md) is an open standard that lets LLMs connect to external tools and services. DevoxxGenie includes a built-in **MCP Marketplace** where you can install servers for filesystem access, web browsing, databases, APIs, and more. The LLM uses these tools automatically during conversations.
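
For a sense of what sits behind the Marketplace: many MCP clients describe a stdio server as a command plus arguments, roughly like the sketch below. This is illustrative only; in DevoxxGenie the Marketplace handles server configuration for you, and the exact format may differ:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```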

### What is RAG?

[RAG (Retrieval-Augmented Generation)](../features/rag.md) indexes your project's source code into a local vector database (ChromaDB via Docker) using Ollama embeddings. When you ask a question, the most semantically relevant code snippets are retrieved and included in the prompt automatically — giving the LLM accurate project context without manual file selection.
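
If you want the ChromaDB container up before enabling RAG, a minimal sketch follows; the port and image tag are assumptions, so check the RAG docs for what your DevoxxGenie version expects:

```bash
# Start a local ChromaDB instance in Docker on its default port 8000.
docker run -d --name chromadb -p 8000:8000 chromadb/chroma
```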

### What are Skills?

[Skills](../features/skills.md) are reusable slash commands you define in settings. Type `/explain`, `/test`, `/review`, or any custom command in the prompt input to trigger a predefined prompt template. Built-in skills include `/test`, `/explain`, `/review`, `/find` (RAG search), `/tdg`, and `/init`.

### What is inline code completion?

[Inline completion](../features/inline-completion.md) provides GitHub Copilot-style ghost-text suggestions as you type, powered by Fill-in-the-Middle (FIM) models via Ollama or LM Studio. Enable it in **Settings** → **DevoxxGenie** → **Inline Completion**.

---

## Troubleshooting

### Ollama models aren't showing up in the dropdown

Click **Refresh Models** in DevoxxGenie settings after pulling a new model. Make sure Ollama is running (`ollama serve` or verify `http://localhost:11434` is reachable in a browser).
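
From a terminal, these two checks cover the usual causes, assuming a default local install:

```bash
ollama list                           # do any models exist locally?
curl http://localhost:11434/api/tags  # is the server up and serving them?
```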

### Responses are very slow with local models

Switch to a smaller quantized model. For chat, try `llama3.2:3b` or `llama4:scout`. For inline completion, try `qwen2.5-coder:0.5b`. See the [Ollama performance tips](use-ollama-in-intellij.md#performance-tips).

### Where do I report bugs or request features?

Open an issue on [GitHub](https://github.com/devoxx/DevoxxGenieIDEAPlugin/issues) or start a discussion in [GitHub Discussions](https://github.com/devoxx/DevoxxGenieIDEAPlugin/discussions).
docs/getting-started/use-ollama-in-intellij.md (+170 lines)
---
sidebar_position: 6
title: How to Use Ollama in IntelliJ IDEA with DevoxxGenie
description: Step-by-step guide to running local AI models in IntelliJ IDEA using Ollama and DevoxxGenie. No API keys, no cloud, no cost — run models like Llama, Mistral, and DeepSeek entirely on your machine.
keywords: [ollama intellij idea, ollama intellij plugin, local llm intellij, run ollama intellij, devoxxgenie ollama setup, offline ai coding intellij, free ai coding intellij]
image: /img/devoxxgenie-social-card.jpg
slug: /getting-started/use-ollama-in-intellij-idea
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# How to Use Ollama in IntelliJ IDEA with DevoxxGenie

[Ollama](https://ollama.com) lets you run large language models locally — on your own CPU or GPU — with no API keys, no cloud dependency, and no per-token cost. DevoxxGenie integrates with Ollama out of the box, so you can use powerful models like **Llama 3**, **Mistral**, **DeepSeek Coder**, and **Qwen** directly inside IntelliJ IDEA.

This guide walks you through the full setup in under five minutes.

## Why Use Ollama?

- **Free**: No API costs. Run as many prompts as you want.
- **Private**: Your code never leaves your machine.
- **Offline**: Works without an internet connection after the initial model download.
- **Flexible**: Switch models instantly as new ones are released.

## Step 1: Install Ollama

Download and install Ollama from [ollama.com](https://ollama.com/download) for macOS, Windows, or Linux.

After installation, verify it's running:

```bash
ollama --version
```

Ollama runs as a local HTTP server on `http://localhost:11434` by default.
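
You can also confirm the server is listening with a plain HTTP request; the root endpoint replies with a short status string:

```bash
curl http://localhost:11434
# Expected response: Ollama is running
```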

## Step 2: Pull a Model

Download the model you want to use. For general coding assistance, these are good starting points:

<Tabs>
<TabItem value="general" label="General Purpose" default>

```bash
ollama pull llama3.2     # Meta's Llama 3.2 (3B) — fast, good for chat
ollama pull llama3.1:8b  # Llama 3.1 8B — better quality, needs ~8GB RAM
ollama pull mistral      # Mistral 7B — strong reasoning
ollama pull qwen3:8b     # Alibaba Qwen3 — excellent coding ability
```

</TabItem>
<TabItem value="coding" label="Coding Focused">

```bash
ollama pull deepseek-coder-v2  # DeepSeek Coder V2 — top-tier code model
ollama pull qwen3:8b           # Qwen3 — strong at Java/Kotlin
ollama pull codellama:13b      # Meta Code Llama 13B
```

</TabItem>
<TabItem value="agent" label="Agent Mode">

```bash
ollama pull glm-4.7-flash  # GLM-4.7 Flash — excellent tool-use, fast, great for Agent Mode
ollama pull qwen3:14b      # Qwen3 14B — strong reasoning for agents
ollama pull llama3.1:70b   # Llama 3.1 70B — best quality if your GPU can handle it
```

</TabItem>
<TabItem value="completion" label="Inline Completion (FIM)">

```bash
ollama pull qwen3:0.6b          # Tiny and fast for inline completion
ollama pull starcoder2:3b       # StarCoder2 3B — excellent FIM model
ollama pull deepseek-coder:1.3b # DeepSeek Coder 1.3B — lightweight FIM
```

</TabItem>
</Tabs>

The first pull downloads the model weights (1–20 GB depending on model size). Subsequent runs use the cached version.
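
To see what's cached locally, run `ollama list`. The output below is illustrative; names, IDs, and sizes will reflect whatever you pulled:

```bash
ollama list
# Example output (yours will differ):
# NAME          ID              SIZE      MODIFIED
# llama3.2      a80c4f17acd5    2.0 GB    2 minutes ago
# qwen3:8b      e4b5e8c0f3aa    5.2 GB    10 minutes ago
```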

## Step 3: Install DevoxxGenie

If you haven't already, install DevoxxGenie from the JetBrains Marketplace:

1. Open IntelliJ IDEA
2. Go to **Settings** → **Plugins** → **Marketplace**
3. Search for **DevoxxGenie**
4. Click **Install** and restart the IDE

## Step 4: Configure Ollama in DevoxxGenie

1. Open **Settings** → **Tools** → **DevoxxGenie**
2. In the **LLM Providers** section, find **Ollama**
3. The base URL defaults to `http://localhost:11434` — leave this as-is unless you're running Ollama on a different host or port
4. Click **Refresh Models** — DevoxxGenie queries the Ollama API and populates the model list automatically
5. Select your model from the **Model Name** dropdown
6. Click **Apply**

:::tip Running Ollama on a Different Machine?
If Ollama runs on a server or another computer on your network, change the base URL to `http://your-server-ip:11434`. Make sure the Ollama server is accessible from your development machine.
:::
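
Note that Ollama binds to localhost by default, so a remote setup usually needs the server to listen on all interfaces. A minimal sketch, assuming a Linux or macOS host and the standard `OLLAMA_HOST` variable:

```bash
# On the Ollama server: listen on all interfaces instead of localhost only.
OLLAMA_HOST=0.0.0.0 ollama serve

# From your development machine: verify the server is reachable
# (replace your-server-ip with the real address).
curl http://your-server-ip:11434/api/tags
```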

## Step 5: Start Chatting

Open the DevoxxGenie tool window (the genie lamp icon in the right sidebar) and start asking questions. The response comes from your local Ollama instance — no data leaves your machine.

**Example prompts to try:**
- Select a Java class → ask "Explain this class and its responsibilities"
- Select a method → ask "Write a JUnit 5 test for this method"
- Ask "What are the design patterns used in this codebase?" with relevant files added to context

## Inline Code Completion with Ollama

DevoxxGenie supports Fill-in-the-Middle (FIM) inline completion powered by Ollama. This provides GitHub Copilot-style ghost-text suggestions as you type.

1. Go to **Settings** → **Tools** → **DevoxxGenie** → **Inline Completion**
2. Enable **Inline Completion**
3. Set the provider to **Ollama**
4. Select a FIM-capable model (e.g., `qwen3:0.6b`, `starcoder2:3b`)
5. Click **Apply**

Use a small, fast model (1–3B parameters) for inline completion — responsiveness matters more than raw quality here.

## Agent Mode with Ollama

[Agent Mode](../features/agent-mode.md) lets the LLM autonomously read, edit, and search your codebase. It works with local Ollama models — no cloud API key required.

For Agent Mode, use a model with strong tool-use (function-calling) support:
- `glm-4.7-flash` — excellent tool-use reliability, fast and efficient, great for Agent Mode
- `qwen3:14b` — strong reasoning and code understanding
- `llama3.1:8b` — good all-round choice for lighter setups

Enable Agent Mode in the DevoxxGenie toolbar, select your Ollama model, and the LLM will be able to use built-in tools to explore your project autonomously.

## Performance Tips

| Hardware | Recommended Model Size | Expected Speed |
|----------|------------------------|----------------|
| 8GB RAM, no dedicated GPU | 3–7B (quantized) | 5–15 tokens/sec |
| 16GB RAM / 8GB VRAM | 7–13B | 15–40 tokens/sec |
| 32GB RAM / 16GB VRAM | 13–30B | 20–60 tokens/sec |
| 64GB+ RAM / 24GB VRAM | 70B | 10–30 tokens/sec |

- **Apple Silicon Macs** (M1/M2/M3/M4) run Ollama models very efficiently using unified memory — a 16GB M2 Mac handles 13B models comfortably.
- Prefer **quantized models** (Q4_K_M or Q5_K_M) for the best speed/quality trade-off.
- For inline completion, always use the smallest model that gives acceptable results.
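
Most models in the Ollama library publish explicit quantization tags. Exact tag names vary per model, so treat this as a sketch and confirm against the model's Tags page on ollama.com:

```bash
# Pull a Q4_K_M build of Llama 3.1 8B; the tag name is illustrative.
ollama pull llama3.1:8b-instruct-q4_K_M
```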

## Troubleshooting

**DevoxxGenie says "No models found"**
Ensure Ollama is running (`ollama serve`) and that you've pulled at least one model (`ollama list`).

**Slow responses**
Switch to a smaller or more heavily quantized model. For chat, try `llama3.2:3b`. For completion, try `qwen3:0.6b`.

**Connection refused**
Check that Ollama is running on the correct port. Try opening `http://localhost:11434` in a browser — you should see `Ollama is running`.
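
The same check from a terminal, plus a quick look at whether anything is bound to the port (the `lsof` variant assumes macOS or Linux):

```bash
curl http://localhost:11434   # should print: Ollama is running
lsof -i :11434                # is any process listening on the Ollama port?
```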

**Model not showing in dropdown**
Click **Refresh Models** in DevoxxGenie settings after pulling a new model with `ollama pull`.

## Next Steps

- [Local LLM Providers](../llm-providers/local-models.md) — full reference for all local provider options
- [Agent Mode](../features/agent-mode.md) — autonomous codebase exploration with local models
- [Inline Completion](../features/inline-completion.md) — FIM-based ghost-text suggestions with Ollama
- [Why DevoxxGenie](why-devoxxgenie.md) — how DevoxxGenie compares to subscription-based tools
