src/content/docs/articles/running-a-local-ai-inside-obsidian-with-ollama.mdx
13 additions & 13 deletions
@@ -2,8 +2,8 @@
 title: 'Running a Local AI Inside Obsidian with Ollama'
 description: 'Learn to set up Ollama with Obsidian for a completely local AI writing assistant. Step-by-step guide to private, offline AI that works with your notes without sending data to the cloud.'
 date: 2026-02-01
-lastUpdated: 2026-02-01
-excerpt: "**What if your Obsidian notes could think with you, without sending a single word to the cloud?** This article shows you how to build a fully local AI setup inside Obsidian using Ollama and the open Gemma 3 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, and how to connect it to Obsidian using community plugins for a seamless writing experience.<br><br>By the end of this tutorial, you'll have a private AI assistant that can rewrite text, summarize notes, brainstorm ideas, and help draft content all while keeping full control over your data. This setup combines Obsidian's local-first philosophy with Ollama's open source AI capabilities to create a knowledge workspace that's private, hackable, and built to last, proving that powerful tools don't have to come at the cost of control."
+lastUpdated: 2026-04-11
+excerpt: "**What if your Obsidian notes could think with you, without sending a single word to the cloud?** This article shows you how to build a fully local AI setup inside Obsidian using Ollama and the open Gemma 4 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, and how to connect it to Obsidian using community plugins for a seamless writing experience.<br><br>By the end of this tutorial, you'll have a private AI assistant that can rewrite text, summarize notes, brainstorm ideas, and help draft content all while keeping full control over your data. This setup combines Obsidian's local-first philosophy with Ollama's open source AI capabilities to create a knowledge workspace that's private, hackable, and built to last, proving that powerful tools don't have to come at the cost of control."
 tags:
   - AI
   - Tools
@@ -21,7 +21,7 @@ import ContentImage from '../../../components/ContentImage.astro';
-<p class="lead">**What if your notes could think with you, without sending a single word to the cloud?** In this article, we'll build a fully local AI setup inside Obsidian using Ollama and the open Gemma 3 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, how to connect it to Obsidian using community plugins, and how to use it for practical, everyday note-taking workflows, all while keeping full control over your data.</p>
+<p class="lead">**What if your notes could think with you, without sending a single word to the cloud?** In this article, we'll build a fully local AI setup inside Obsidian using Ollama and the open Gemma 4 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, how to connect it to Obsidian using community plugins, and how to use it for practical, everyday note-taking workflows, all while keeping full control over your data.</p>
 
 ---
 
@@ -56,11 +56,11 @@ Together, Obsidian and Ollama create a powerful combination that aligns with ope
 By the end of this article, you'll have:
 
 - Ollama installed and verified from the command line
-- The Gemma 3 model running locally
+- The Gemma 4 model running locally
 - Obsidian configured with community plugins
 - A fully local AI assistant embedded in your notes
 
-Everything described below has been tested using Gemma 3 with Ollama.
+Everything described below has been tested using Gemma 4 with Ollama.
 
 Let's consider that Obsidian is already installed on your machine. If you haven't done so yet, you can download it from [obsidian.md](https://obsidian.md/).
@@ -70,10 +70,10 @@ Let's consider that Obsidian is already installed on your machine. If you haven'
 
 <Steps>
 
-1. Install Ollama from the official site at https://ollama.com, or via Homebrew on macOS via
+1. Install Ollama from the official site at https://ollama.com, or on macOS via
 
    ```sh
-   brew install ollama
+   curl -fsSL https://ollama.com/install.sh | sh
    ```
 
 1. Verify the installation by running in your terminal:
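The install-and-verify step changed in this hunk can be sanity-checked with a short shell snippet. This is a sketch, not part of the article's diff: it assumes Ollama installs an `ollama` binary onto `PATH` (as both the installer script and Homebrew do), and it degrades gracefully when the binary is missing.

```sh
#!/bin/sh
# Sanity-check an Ollama installation from the command line.
# Prints the installed version if the CLI is available, otherwise a notice.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found on PATH - install it first"
fi
```

Either branch prints one line, so the check is safe to drop into a setup script or CI step.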
@@ -84,10 +84,10 @@ Let's consider that Obsidian is already installed on your machine. If you haven'
 
    The output should show the installed version of Ollama.
 
-1. Pull and run the Gemma 3 model with:
+1. Pull and run the Gemma 4 model with:
 
    ```sh
-   ollama run gemma3
+   ollama run gemma4
    ```
 
    Once the model is downloaded, you'll get an interactive prompt. Test it with:
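Beyond the interactive prompt, the same model can be reached over Ollama's local REST API (`/api/generate` on port 11434 by default), which is what the Obsidian plugins use under the hood. A minimal sketch, assuming the server is running and the model tag matches the one pulled in this hunk (`gemma4`); the prompt text is illustrative:

```sh
#!/bin/sh
# One-shot generation against the local Ollama server's REST API.
# "stream": false returns a single JSON response instead of chunks.
curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma4",
  "prompt": "Summarize local-first note-taking in one sentence.",
  "stream": false
}' || echo "Ollama server not reachable on localhost:11434"
```

Because nothing leaves the machine, this call works offline once the model is downloaded.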
@@ -148,7 +148,7 @@ Let's start by installing the **AI Providers** plugin.
 - In the "AI providers" section, click "+" to add a new provider.
 - In the "Add new provider" dialog:
   - Select "Ollama" as the provider type.
-  - Click on the refresh icon of the "Model" field to fetch Gemma 3 from your local Ollama installation.
+  - Click on the refresh icon of the "Model" field to fetch Gemma 4 from your local Ollama installation.
@@ -253,7 +253,7 @@ The model would then generate a description of the image directly in my note.
 
 ## Everyday Use Cases That Actually Feel Useful
 
-**I just scratched the surface of what you can do with this setup, and honestly, it's pretty useful for specific tasks.** Gemma 3 works well for close-to-the-text stuff: cleaning up messy first drafts, turning rambling meeting notes into actual bullet points, or getting a quick summary when I don't want to reread a long note.
+**I just scratched the surface of what you can do with this setup, and honestly, it's pretty useful for specific tasks.** Gemma 4 works well for close-to-the-text stuff: cleaning up messy first drafts, turning rambling meeting notes into actual bullet points, or getting a quick summary when I don't want to reread a long note.
 
 Response times are decent for most tasks. Not instant, but fast enough to keep you in the flow.
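The "summarize a messy note" use case in this hunk also works straight from the terminal, without Obsidian. A hedged sketch: the vault path is illustrative, `gemma4` is the model tag used throughout the diff, and the snippet skips itself cleanly when either the CLI or the note is unavailable.

```sh
#!/bin/sh
# Summarize a note from the command line with a locally running model.
NOTE="$HOME/vault/meeting-notes.md"   # illustrative path - adjust to your vault
if command -v ollama >/dev/null 2>&1 && [ -f "$NOTE" ]; then
  # Pipe the note in as the prompt context for a one-shot summary.
  ollama run gemma4 "Turn these meeting notes into concise bullet points:" < "$NOTE"
else
  echo "skipped: ollama CLI or note file not available"
fi
```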