---
sidebar_position: 7
title: Frequently Asked Questions - DevoxxGenie
description: Answers to common questions about DevoxxGenie — the free, open-source AI code assistant plugin for IntelliJ IDEA. Covers pricing, privacy, LLM support, Ollama, MCP, Agent Mode, and more.
keywords: [devoxxgenie faq, intellij ai plugin questions, is devoxxgenie free, devoxxgenie ollama, devoxxgenie privacy, devoxxgenie agent mode, devoxxgenie mcp, devoxxgenie vs copilot]
image: /img/devoxxgenie-social-card.jpg
---

import Head from '@docusaurus/Head';

<Head>
  <script type="application/ld+json">{`
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
      {
        "@type": "Question",
        "name": "Is DevoxxGenie free?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Yes, DevoxxGenie itself is completely free and open source. It uses a BYOK (Bring Your Own Keys) model — you supply your own API keys for cloud LLMs, or run local models with Ollama or LM Studio at no API cost. There is no DevoxxGenie subscription fee."
        }
      },
      {
        "@type": "Question",
        "name": "Does DevoxxGenie send my code to the cloud?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Only to the LLM provider you explicitly configure. If you use a cloud provider like OpenAI or Anthropic, your prompts (which may include code) are sent to that provider's API under their privacy policy. If you use a local model via Ollama or LM Studio, nothing leaves your machine. DevoxxGenie itself does not collect or transmit your code."
        }
      },
      {
        "@type": "Question",
        "name": "Which LLMs does DevoxxGenie support?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "DevoxxGenie supports a wide range of LLMs. Cloud providers include OpenAI (GPT-4o, o3, o4-mini), Anthropic (Claude 3.5/4), Google (Gemini 1.5/2.x), Grok (xAI), Mistral, Groq, DeepInfra, DeepSeek (R1, Coder), Kimi (Moonshot AI), GLM (Zhipu AI), OpenRouter, Azure OpenAI, and Amazon Bedrock. Local providers include Ollama, LM Studio, GPT4All, Llama.cpp, and Jan. It also supports any OpenAI-compatible custom endpoint."
        }
      },
      {
        "@type": "Question",
        "name": "How do I use Ollama with DevoxxGenie?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Install Ollama from ollama.com, pull a model (e.g., 'ollama pull llama3.2'), then in DevoxxGenie settings go to Tools > DevoxxGenie > LLM Providers > Ollama, leave the base URL as http://localhost:11434, click Refresh Models, and select your model. See the full guide at https://genie.devoxx.com/docs/getting-started/use-ollama-in-intellij-idea"
        }
      },
      {
        "@type": "Question",
        "name": "What is Agent Mode?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Agent Mode enables the LLM to autonomously explore and modify your codebase using built-in tools — reading files, listing directories, searching for patterns, running tests, and making targeted edits. Instead of you manually providing code context, the agent investigates your project on-demand. It works with both local Ollama models and cloud providers."
        }
      },
      {
        "@type": "Question",
        "name": "What is MCP in DevoxxGenie?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "MCP stands for Model Context Protocol — an open standard that lets LLMs connect to external tools and services. DevoxxGenie includes a built-in MCP Marketplace where you can browse and install MCP servers for filesystem access, web browsing, databases, APIs, and more. Once installed, the LLM can use these tools automatically during conversations."
        }
      },
      {
        "@type": "Question",
        "name": "What is Spec-driven Development (SDD)?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Spec-driven Development is a workflow where you define what needs to be built as structured task specs with acceptance criteria (stored as markdown files), and the LLM agent figures out how to build it. The DevoxxGenie Specs tool window shows tasks in a Kanban board and task list. Clicking 'Implement with Agent' injects the full spec into the LLM prompt, and the agent checks off acceptance criteria as it works."
        }
      },
      {
        "@type": "Question",
        "name": "What is RAG in DevoxxGenie?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "RAG stands for Retrieval-Augmented Generation. DevoxxGenie can index your project's source code into a local ChromaDB vector database (running in Docker) using Ollama embeddings. When you ask a question, it retrieves the most semantically relevant code snippets and includes them in the prompt automatically — giving the LLM accurate project-specific context without you having to manually select files."
        }
      },
      {
        "@type": "Question",
        "name": "Does DevoxxGenie work offline?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Yes, if you use a local model provider like Ollama. Once you've downloaded a model, DevoxxGenie can run entirely offline — no internet connection is needed. Cloud providers (OpenAI, Anthropic, etc.) require an internet connection."
        }
      },
      {
        "@type": "Question",
        "name": "Which IntelliJ versions does DevoxxGenie support?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "DevoxxGenie requires IntelliJ IDEA 2023.3.4 or later. It works with IntelliJ IDEA Community and Ultimate editions, as well as other JetBrains IDEs built on the IntelliJ platform (like PyCharm, GoLand, WebStorm)."
        }
      }
    ]
  }
  `}</script>
</Head>

# Frequently Asked Questions

## General

### Is DevoxxGenie free?

Yes, DevoxxGenie itself is completely free and open source. It uses a **BYOK (Bring Your Own Keys)** model — you supply your own API keys for cloud LLMs, or run local models with Ollama or LM Studio at no API cost. There is no DevoxxGenie subscription fee.

### Does DevoxxGenie send my code to the cloud?

Only to the LLM provider **you** explicitly configure:

- **Cloud providers** (OpenAI, Anthropic, etc.): your prompts, which may include code, are sent to that provider's API under their own privacy policy.
- **Local providers** (Ollama, LM Studio, etc.): nothing leaves your machine. The model runs locally.

DevoxxGenie itself does not collect, store, or transmit your code.

### Which IntelliJ versions are supported?

DevoxxGenie requires **IntelliJ IDEA 2023.3.4 or later**. It works with the Community and Ultimate editions, and with other JetBrains IDEs on the IntelliJ platform (PyCharm, GoLand, WebStorm, etc.).

### Does DevoxxGenie work offline?

Yes — if you use a local model provider like **Ollama**. Once you've downloaded a model, DevoxxGenie runs entirely offline. Cloud providers (OpenAI, Anthropic, Gemini, etc.) require an internet connection.

---

## LLM Providers

### Which LLMs does DevoxxGenie support?

**Cloud providers**: OpenAI (GPT-4o, o3, o4-mini), Anthropic (Claude 3.5/4), Google (Gemini 1.5/2.x), Grok (xAI), Mistral, Groq, DeepInfra, DeepSeek (R1, Coder), Kimi (Moonshot AI), GLM (Zhipu AI), OpenRouter, Azure OpenAI, Amazon Bedrock

**Local providers**: Ollama, LM Studio, GPT4All, Llama.cpp, Jan, any OpenAI-compatible endpoint

### How do I use Ollama with DevoxxGenie?

See the [full Ollama setup guide](/docs/getting-started/use-ollama-in-intellij-idea). The short version:

1. Install Ollama and pull a model: `ollama pull llama3.2` or `ollama pull llama4`
2. Open **Settings** → **Tools** → **DevoxxGenie** → **LLM Providers** → **Ollama**
3. Leave the base URL as `http://localhost:11434`
4. Click **Refresh Models** and select your model

### Can I use my own API endpoint (OpenAI-compatible)?

Yes. DevoxxGenie supports [custom providers](../llm-providers/custom-providers.md) — any endpoint that speaks the OpenAI chat completions API, including self-hosted models, DeepSeek R1, Grok, JLama, and enterprise AI platforms.

---

## Features

### What is Agent Mode?

[Agent Mode](../features/agent-mode.md) enables the LLM to **autonomously explore and modify your codebase** using built-in tools — reading files, listing directories, searching for patterns, running tests, and making targeted edits. Instead of manually providing code context, the agent investigates your project on-demand.

It works with both local Ollama models (e.g. Qwen2.5, Llama 4, Mistral Small) and cloud providers.

### What is Spec-driven Development (SDD)?

[Spec-driven Development](../features/spec-driven-development.md) is a workflow where you define **what** needs to be built as structured task specs with acceptance criteria (stored as markdown files), and the LLM agent figures out **how** to build it.

The DevoxxGenie Specs tool window shows tasks in a Kanban board. Click **"Implement with Agent"** and the agent checks off acceptance criteria as it works.
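
As an illustration only — not the exact schema DevoxxGenie generates — a task spec stored as a markdown file might look like:

```markdown
# Task: Add pagination to the /users endpoint

## Acceptance criteria
- [ ] Requests accept `page` and `pageSize` query parameters
- [ ] Responses include `totalCount` and `hasNext` fields
- [ ] Existing tests still pass; new tests cover both parameters
```

The agent works through the criteria and ticks each checkbox as it verifies it.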

### What is MCP?

[MCP (Model Context Protocol)](../features/mcp_expanded.md) is an open standard that lets LLMs connect to external tools and services. DevoxxGenie includes a built-in **MCP Marketplace** where you can install servers for filesystem access, web browsing, databases, APIs, and more. The LLM uses these tools automatically during conversations.

### What is RAG?

[RAG (Retrieval-Augmented Generation)](../features/rag.md) indexes your project's source code into a local vector database (ChromaDB via Docker) using Ollama embeddings. When you ask a question, the most semantically relevant code snippets are retrieved and included in the prompt automatically — giving the LLM accurate project context without manual file selection.

### What are Skills?

[Skills](../features/skills.md) are reusable slash commands you define in settings. Type `/explain`, `/test`, `/review`, or any custom command in the prompt input to trigger a predefined prompt template. Built-in skills include `/test`, `/explain`, `/review`, `/find` (RAG search), `/tdg`, and `/init`.

### What is inline code completion?

[Inline completion](../features/inline-completion.md) provides GitHub Copilot-style ghost-text suggestions as you type, powered by Fill-in-the-Middle (FIM) models via Ollama or LM Studio. Enable it in **Settings** → **DevoxxGenie** → **Inline Completion**.

---

## Troubleshooting

### Ollama models aren't showing up in the dropdown

Click **Refresh Models** in DevoxxGenie settings after pulling a new model. Also make sure Ollama is actually running: start it with `ollama serve`, or verify that `http://localhost:11434` is reachable in a browser.

### Responses are very slow with local models

Switch to a smaller quantized model. For chat, try `llama3.2:3b` or `llama4:scout`. For inline completion, try `qwen2.5-coder:0.5b`. See the [Ollama performance tips](use-ollama-in-intellij-idea.md#performance-tips).

### Where do I report bugs or request features?

Open an issue on [GitHub](https://github.com/devoxx/DevoxxGenieIDEAPlugin/issues) or start a discussion in [GitHub Discussions](https://github.com/devoxx/DevoxxGenieIDEAPlugin/discussions).