Commit 62e9d63

feat(articles): update 'Running a Local AI Inside Obsidian with Ollama' with Gemma 4
1 parent 1816edb commit 62e9d63

3 files changed: +13 −13 lines; two image assets updated (−6.36 KB and −4.23 KB)
src/content/docs/articles/running-a-local-ai-inside-obsidian-with-ollama.mdx

Lines changed: 13 additions & 13 deletions
@@ -2,8 +2,8 @@
 title: 'Running a Local AI Inside Obsidian with Ollama'
 description: 'Learn to set up Ollama with Obsidian for a completely local AI writing assistant. Step-by-step guide to private, offline AI that works with your notes without sending data to the cloud.'
 date: 2026-02-01
-lastUpdated: 2026-02-01
-excerpt: "**What if your Obsidian notes could think with you, without sending a single word to the cloud?** This article shows you how to build a fully local AI setup inside Obsidian using Ollama and the open Gemma 3 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, and how to connect it to Obsidian using community plugins for a seamless writing experience.<br><br>By the end of this tutorial, you'll have a private AI assistant that can rewrite text, summarize notes, brainstorm ideas, and help draft content all while keeping full control over your data. This setup combines Obsidian's local-first philosophy with Ollama's open source AI capabilities to create a knowledge workspace that's private, hackable, and built to last, proving that powerful tools don't have to come at the cost of control."
+lastUpdated: 2026-04-11
+excerpt: "**What if your Obsidian notes could think with you, without sending a single word to the cloud?** This article shows you how to build a fully local AI setup inside Obsidian using Ollama and the open Gemma 4 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, and how to connect it to Obsidian using community plugins for a seamless writing experience.<br><br>By the end of this tutorial, you'll have a private AI assistant that can rewrite text, summarize notes, brainstorm ideas, and help draft content all while keeping full control over your data. This setup combines Obsidian's local-first philosophy with Ollama's open source AI capabilities to create a knowledge workspace that's private, hackable, and built to last, proving that powerful tools don't have to come at the cost of control."
 tags:
 - AI
 - Tools
@@ -21,7 +21,7 @@ import ContentImage from '../../../components/ContentImage.astro';
 import ShowcaseGitHubRepo from '../../../components/ShowcaseGitHubRepo.astro';
 
 {/* prettier-ignore */}
-<p class="lead">**What if your notes could think with you, without sending a single word to the cloud?** In this article, we'll build a fully local AI setup inside Obsidian using Ollama and the open Gemma 3 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, how to connect it to Obsidian using community plugins, and how to use it for practical, everyday note-taking workflows, all while keeping full control over your data.</p>
+<p class="lead">**What if your notes could think with you, without sending a single word to the cloud?** In this article, we'll build a fully local AI setup inside Obsidian using Ollama and the open Gemma 4 model. You'll learn why local AI makes sense for knowledge work, how to install and test Ollama from the command line, how to connect it to Obsidian using community plugins, and how to use it for practical, everyday note-taking workflows, all while keeping full control over your data.</p>
 
 ---
 

@@ -56,11 +56,11 @@ Together, Obsidian and Ollama create a powerful combination that aligns with ope
 By the end of this article, you'll have:
 
 - Ollama installed and verified from the command line
-- The Gemma 3 model running locally
+- The Gemma 4 model running locally
 - Obsidian configured with community plugins
 - A fully local AI assistant embedded in your notes
 
-Everything described below has been tested using Gemma 3 with Ollama.
+Everything described below has been tested using Gemma 4 with Ollama.
 
 Let's consider that Obsidian is already installed on your machine. If you haven't done so yet, you can download it from [obsidian.md](https://obsidian.md/).
 
@@ -70,10 +70,10 @@ Let's consider that Obsidian is already installed on your machine. If you haven'
 
 <Steps>
 
-1. Install Ollama from the official site at https://ollama.com, or via Homebrew on macOS via
+1. Install Ollama from the official site at https://ollama.com, or on macOS via
 
    ```sh
-   brew install ollama
+   curl -fsSL https://ollama.com/install.sh | sh
   ```
 
 1. Verify the installation by running in your terminal:
@@ -84,10 +84,10 @@ Let's consider that Obsidian is already installed on your machine. If you haven'
 
    The output should show the installed version of Ollama.
 
-1. Pull and run the Gemma 3 model with:
+1. Pull and run the Gemma 4 model with:
 
    ```sh
-   ollama run gemma3
+   ollama run gemma4
   ```
 
    Once the model is downloaded, you'll get an interactive prompt. Test it with:
@@ -148,7 +148,7 @@ Let's start by installing the **AI Providers** plugin.
 - In the "AI providers" section, click "+" to add a new provider.
 - In the "Add new provider" dialog:
   - Select "Ollama" as the provider type.
-  - Click on the refresh icon of the "Model" field to fetch Gemma 3 from your local Ollama installation.
+  - Click on the refresh icon of the "Model" field to fetch Gemma 4 from your local Ollama installation.
 - Click "Save" to add the provider.
 
 <ContentImage src="/images/running-a-local-ai-inside-obsidian-with-ollama-2.png" alt="" width="420" height="320" />
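If the "Model" refresh in the plugin settings comes up empty, it usually helps to confirm the Ollama side from a terminal first. A minimal sketch, assuming Ollama's default port 11434 (the `gemma4` tag is the one pulled in the steps above):

```sh
# Confirm a local Ollama server is reachable on its default port (11434)
# before pointing the Obsidian plugins at it.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  status="up"
  ollama list   # lists locally pulled models; the gemma4 tag should appear here
else
  status="down"
  echo "Ollama is not reachable on localhost:11434 - start it with 'ollama serve'"
fi
```

`/api/tags` is the endpoint Ollama uses to list local models, which is also what the plugin queries when you hit refresh.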
@@ -168,7 +168,7 @@ Then, install the **Local GPT** plugin.
 1. Click "Install", then "Enable" to enable the plugin.
 
 1. Click "Options" to configure the plugin:
-   - In the "Main AI Provider", "Embedding AI Provider", and "Vision AI Provider" sections, select "Ollama ~ gemma3:latest" from the dropdown.
+   - In the "Main AI Provider", "Embedding AI Provider", and "Vision AI Provider" sections, select "Ollama ~ gemma4:latest" from the dropdown.
 
 <ContentImage src="/images/running-a-local-ai-inside-obsidian-with-ollama-4.png" alt="" width="720" height="320" />
 
@@ -226,7 +226,7 @@ Let's say I've written this article in Obsidian first and wanted to summarize it
 
 <ContentImage src="/images/running-a-local-ai-inside-obsidian-with-ollama-7.png" alt="" width="420" height="320" />
 
-Gemma 3 would then generate a concise summary of the selected content directly at the end of my note (this can be configured).
+Gemma 4 would then generate a concise summary of the selected content directly at the end of my note (this can be configured).
 
 <ContentImage src="/images/running-a-local-ai-inside-obsidian-with-ollama-8.png" alt="" width="420" height="320" />
 
@@ -253,7 +253,7 @@ The model would then generate a description of the image directly in my note.
 
 ## Everyday Use Cases That Actually Feel Useful
 
-**I just scratched the surface of what you can do with this setup, and honestly, it's pretty useful for specific tasks.** Gemma 3 works well for close-to-the-text stuff: cleaning up messy first drafts, turning rambling meeting notes into actual bullet points, or getting a quick summary when I don't want to reread a long note.
+**I just scratched the surface of what you can do with this setup, and honestly, it's pretty useful for specific tasks.** Gemma 4 works well for close-to-the-text stuff: cleaning up messy first drafts, turning rambling meeting notes into actual bullet points, or getting a quick summary when I don't want to reread a long note.
 
 Response times are decent for most tasks. Not instant, but fast enough to keep you in the flow.
 
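Beyond the plugins, the same local model is scriptable over Ollama's HTTP API, which is what the plugins use under the hood. The sketch below only prints the JSON body such a request would carry (port 11434 is Ollama's default; the `gemma4` tag follows the steps in this diff); with the server running you could pipe it to `curl -s http://localhost:11434/api/generate -d @-`:

```sh
# Print the JSON body for a request to Ollama's /api/generate endpoint.
# "stream": false asks for a single JSON response instead of a chunk stream.
cat <<'EOF'
{"model": "gemma4", "prompt": "Summarize this note in three bullet points.", "stream": false}
EOF
```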
0 commit comments