
Commit f2756a3

Core LLM integration infrastructure to allow pgAdmin to connect to AI providers. #9641
* Core infrastructure for LLM integration.
* Add support for a number of different AI-generated reports on security, performance, and schema design on servers, databases, and schemas, as appropriate.
* Add a Natural Language AI assistant to the Query Tool.
* Add an AI Insights panel to the EXPLAIN tool in the Query Tool, to analyse and report on issues in query plans.
1 parent 2715932 commit f2756a3


71 files changed: +14704 / -322 lines

docs/en_US/ai_tools.rst

Lines changed: 242 additions & 0 deletions
@@ -0,0 +1,242 @@
.. _ai_tools:

*******************
`AI Reports`:index:
*******************

**AI Reports** is a feature that provides AI-powered database analysis and insights
using Large Language Models (LLMs). Use the *Tools → AI Reports* menu to access
the various AI-powered reports.

The AI Reports feature allows you to:

* Generate security reports to identify potential security vulnerabilities and configuration issues.

* Create performance reports with optimization recommendations for queries and configurations.

* Perform design reviews to analyze database schema structure and suggest improvements.

**Prerequisites:**

Before using AI Reports, you must:

1. Ensure AI features are enabled in the server configuration (set ``LLM_ENABLED`` to ``True`` in ``config.py``).

2. Configure an LLM provider in :ref:`Preferences → AI <preferences>`.
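Step 1 is a server-side setting. As a sketch, assuming a standard pgAdmin deployment, the option can be enabled in ``config_local.py`` (pgAdmin's conventional file for local overrides) rather than by editing ``config.py`` directly:

```python
# config_local.py -- local overrides, read after config.py.
# LLM_ENABLED is the option named in the prerequisites above;
# placing it here is a sketch of the usual override convention.
LLM_ENABLED = True
```

A restart of the pgAdmin server is typically needed for configuration changes to take effect.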
**Note:**

* AI Reports using cloud providers (Anthropic, OpenAI) require an active internet connection.
  Local providers (Ollama, Docker Model Runner) do not require internet access.

* API usage may incur costs depending on your LLM provider's pricing model.
  Local providers (Ollama, Docker Model Runner) are free to use.

* The quality and accuracy of reports depend on the LLM provider and model configured.


Configuring AI Reports
**********************

To configure AI Reports, navigate to *File → Preferences → AI* (or click the *Settings*
button and select *AI*).

.. image:: images/preferences_ai.png
    :alt: AI preferences
    :align: center
Select your preferred LLM provider from the dropdown:

**Anthropic**
    Use Claude models from Anthropic. Requires an Anthropic API key.

    * **API Key File**: Path to a file containing your Anthropic API key (obtain from https://console.anthropic.com/).
    * **Model**: Select from available Claude models (e.g., claude-sonnet-4-20250514).

**OpenAI**
    Use GPT models from OpenAI. Requires an OpenAI API key.

    * **API Key File**: Path to a file containing your OpenAI API key (obtain from https://platform.openai.com/).
    * **Model**: Select from available GPT models (e.g., gpt-4).

**Ollama**
    Use locally-hosted open-source models via Ollama. Requires a running Ollama instance.

    * **API URL**: The URL of your Ollama server (default: http://localhost:11434).
    * **Model**: Enter the name of the Ollama model to use (e.g., llama2, mistral).

**Docker Model Runner**
    Use models running in Docker Desktop's built-in model runner (available in Docker Desktop 4.40+).
    No API key is required.

    * **API URL**: The URL of the Docker Model Runner API (default: http://localhost:12434).
    * **Model**: Select from available models or enter a custom model name.

After configuring your provider, click *Save* to apply the changes.
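For the cloud providers, the *API Key File* preference points at a plain file that holds only the key. A minimal sketch of creating such a file with owner-only permissions (the path and key value are illustrative, not ones pgAdmin mandates):

```python
import os
import stat

# Illustrative path; any file readable by the pgAdmin server process works.
key_path = os.path.expanduser("~/.pgadmin_anthropic.key")

# Write the API key (placeholder value shown) and restrict it to the owner,
# since the key grants billable access to the provider.
with open(key_path, "w") as f:
    f.write("sk-ant-XXXX")  # replace with your real Anthropic key
os.chmod(key_path, stat.S_IRUSR | stat.S_IWUSR)  # mode 0600
```

Then enter that path in the *API Key File* field for the corresponding provider.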
Security Reports
****************

Security Reports analyze your PostgreSQL server, database, or schema for potential
security vulnerabilities and configuration issues.

To generate a security report:

1. In the *Browser* tree, select a server, database, or schema.

2. Choose *Tools → AI Reports → Security* from the menu, or right-click the
   object and select *Security* from the context menu.

3. The report will be generated and displayed in a new tab.

.. image:: images/ai_security_report.png
    :alt: AI security report
    :align: center

**Security Report Scope:**

* **Server Level**: Analyzes server configuration, authentication settings, roles, and permissions.

* **Database Level**: Reviews database-specific security settings, roles with database access, and object permissions.

* **Schema Level**: Examines schema permissions, object ownership, and access controls.

Each report includes:

* **Security Findings**: Identified vulnerabilities or security concerns.

* **Risk Assessment**: Severity levels for each finding (Critical, High, Medium, Low).

* **Recommendations**: Specific actions to remediate security issues.

* **Best Practices**: General security recommendations for PostgreSQL.
Performance Reports
*******************

Performance Reports analyze query performance and configuration settings, and
provide optimization recommendations.

To generate a performance report:

1. In the *Browser* tree, select a server or database.

2. Choose *Tools → AI Reports → Performance* from the menu, or right-click the
   object and select *Performance* from the context menu.

3. The report will be generated and displayed in a new tab.

**Performance Report Scope:**

* **Server Level**: Analyzes server configuration parameters, resource utilization, and overall server performance metrics.

* **Database Level**: Reviews database-specific configuration, query performance, index usage, and table statistics.

Each report includes:

* **Performance Metrics**: Key performance indicators and statistics.

* **Configuration Analysis**: Review of relevant configuration parameters.

* **Query Optimization**: Recommendations for improving slow queries.

* **Index Recommendations**: Suggestions for adding, removing, or modifying indexes.

* **Capacity Planning**: Resource utilization trends and recommendations.
Design Review Reports
*********************

Design Review Reports analyze your database schema structure and suggest
improvements for normalization, naming conventions, and best practices.

To generate a design review report:

1. In the *Browser* tree, select a database or schema.

2. Choose *Tools → AI Reports → Design* from the menu, or right-click the
   object and select *Design* from the context menu.

3. The report will be generated and displayed in a new tab.

**Design Review Scope:**

* **Database Level**: Reviews overall database structure, schema organization, and cross-schema dependencies.

* **Schema Level**: Analyzes tables, views, functions, and other objects within the schema.

Each report includes:

* **Schema Structure Analysis**: Review of table structures, relationships, and constraints.

* **Normalization Review**: Recommendations for database normalization (1NF, 2NF, 3NF, etc.).

* **Naming Conventions**: Suggestions for consistent naming patterns.

* **Data Type Usage**: Review of data type choices and recommendations.

* **Index Design**: Analysis of indexing strategy.

* **Best Practices**: General PostgreSQL schema design recommendations.
Working with Reports
********************

All AI reports are displayed in a dedicated panel with the following features:

**Report Display**
    Reports are formatted as Markdown and rendered with syntax highlighting for SQL code.

**Toolbar Actions**

* **Stop** - Cancel the current report generation. This is useful if the report
  is taking too long or if you want to change parameters.

* **Regenerate** - Generate a new report for the same object. Useful when you
  want to get a fresh analysis or if data has changed.

* **Download** - Download the report as a Markdown (.md) file. The filename
  includes the report type, object name, and date for easy identification.

**Multiple Reports**
    You can generate and view multiple reports simultaneously. Each report opens in
    a new tab, allowing you to compare reports across different servers, databases,
    or schemas.

**Report Management**
    Each report tab can be closed individually by clicking the *X* in the tab.
    Panel titles show the object name and report type for easy identification.

**Copying Content**
    You can select and copy text from reports to use in documentation or share with
    your team.
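As a sketch of the naming scheme described under *Download* (the exact ordering and separators pgAdmin uses may differ), a downloaded filename combines the three parts like this:

```python
from datetime import date

# Hypothetical reconstruction of the download filename: report type,
# object name, and date, as described above. Separators are an assumption.
report_type = "security"
object_name = "sales_db"
filename = f"{report_type}_report_{object_name}_{date.today().isoformat()}.md"
```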
Troubleshooting
***************

**"AI features are disabled in the server configuration"**
    The administrator has disabled AI features on the server. Contact your
    pgAdmin administrator to enable the ``LLM_ENABLED`` configuration option.

**"Please configure an LLM provider in Preferences"**
    You need to configure an LLM provider before using AI Reports. See *Configuring AI Reports* above.

**"Please connect to the server/database first"**
    You must establish a connection to the server or database before generating reports.

**API Connection Errors**

* Verify your API key is correct (for Anthropic and OpenAI).
* Check your internet connection (for cloud providers).
* For Ollama, ensure the Ollama server is running and accessible.
* For Docker Model Runner, ensure Docker Desktop 4.40+ is running with the model runner enabled.
* Check that your firewall allows connections to the LLM provider's API.

**Report Generation Fails**

* Check the pgAdmin logs for detailed error messages.
* Verify the database connection is still active.
* Ensure the selected model is available for your account/subscription.
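To narrow down connection errors with the local providers, a quick reachability probe against the default URLs from the configuration section can help. This is a minimal sketch: ``/api/tags`` is Ollama's model-listing endpoint, while the Docker Model Runner check only probes the base URL, so any HTTP response simply means something is listening there.

```python
import urllib.request
import urllib.error

def check_endpoint(url, timeout=3):
    """Return True if an HTTP endpoint answers at all, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        # Covers connection refused, DNS failure, timeout, and HTTP errors.
        return False

# Default local-provider URLs from the configuration section above.
print("Ollama reachable:", check_endpoint("http://localhost:11434/api/tags"))
print("Docker Model Runner reachable:", check_endpoint("http://localhost:12434/"))
```

If a probe fails, start (or restart) the provider and re-check before adjusting pgAdmin preferences.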

docs/en_US/developer_tools.rst

Lines changed: 1 addition & 0 deletions
@@ -17,3 +17,4 @@ PL/SQL code.
   schema_diff
   erd_tool
   psql_tool
   ai_tools

docs/en_US/menu_bar.rst

Lines changed: 6 additions & 0 deletions
@@ -156,6 +156,12 @@ Use the *Tools* menu to access the following options (in alphabetical order):
+------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| *Search Objects...* | Click to open the :ref:`Search Objects... <search_objects>` and start searching any kind of objects in a database. |
+------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| *AI Reports* | Click to access a submenu with AI-powered analysis options (requires :ref:`AI configuration <ai_tools>`): |
| | |
| | - *Security Report* - Generate an AI-powered security analysis for the selected server, database, or schema. |
| | - *Performance Report* - Generate an AI-powered performance analysis for the selected server or database. |
| | - *Design Report* - Generate an AI-powered design review for the selected database or schema. |
+------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| *Add named restore point* | Click to open the :ref:`Add named restore point... <add_restore_point_dialog>` dialog to take a point-in-time snapshot of the current |
| | server state. |
+------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+

docs/en_US/preferences.rst

Lines changed: 52 additions & 0 deletions
@@ -27,6 +27,58 @@ The left pane of the *Preferences* tab displays a tree control; each node of
the tree control provides access to options that are related to the node under
which they are displayed.

The AI Node
***********

Use preferences found in the *AI* node of the tree control to configure
AI-powered features and LLM (Large Language Model) providers.

.. image:: images/preferences_ai.png
    :alt: Preferences AI section
    :align: center

**Note:** AI features must be enabled in the server configuration (``LLM_ENABLED = True``
in ``config.py``) for these preferences to be available.

Use the fields on the *AI* panel to configure your LLM provider:

* Use the *Default Provider* drop-down to select your LLM provider. Options include:
  *Anthropic*, *OpenAI*, *Ollama*, or *Docker Model Runner*.

**Anthropic Settings:**

* Use the *API Key File* field to specify the path to a file containing your
  Anthropic API key.

* Use the *Model* field to select from the available Claude models. Click the
  refresh button to fetch the latest available models from Anthropic.

**OpenAI Settings:**

* Use the *API Key File* field to specify the path to a file containing your
  OpenAI API key.

* Use the *Model* field to select from the available GPT models. Click the
  refresh button to fetch the latest available models from OpenAI.

**Ollama Settings:**

* Use the *API URL* field to specify the Ollama server URL
  (default: ``http://localhost:11434``).

* Use the *Model* field to select from the available models or enter a custom
  model name (e.g., ``llama2``, ``mistral``). Click the refresh button to fetch
  the latest available models from your Ollama server.

**Docker Model Runner Settings:**

* Use the *API URL* field to specify the Docker Model Runner API URL
  (default: ``http://localhost:12434``). Available in Docker Desktop 4.40+.

* Use the *Model* field to select from the available models or enter a custom
  model name. Click the refresh button to fetch the latest available models
  from your Docker Model Runner.

The Browser Node
****************
