Run a multi-platform AI assistant on your own machine with full privacy (Telegram, Discord, WhatsApp, Slack).
100% Private · Runs Locally · Powered by Open-Source LLMs · Multi-Platform
- 🇺🇸 English
- 🇧🇷 Português
CrustAI is a self-hosted AI assistant that runs 100% locally using Ollama. It integrates with Telegram, Discord, WhatsApp, and Slack so you can chat with your assistant in tools you already use, without sending your conversation data to cloud LLM providers.
Built with Node.js and powered by Ollama (local LLM runtime), CrustAI is designed for developers, privacy enthusiasts, and anyone who wants an AI assistant that truly belongs to them.
| Feature | Description |
|---|---|
| 100% Local & Private | Conversations stay on your machine |
| LLM via Ollama | Use tinyllama, llama3.2, phi3, and more |
| Multi-platform Adapters | Telegram, WhatsApp, Discord, Slack |
| Long-term Memory | Store and retrieve user facts |
| REST API | Integrate CrustAI into external workflows |
| Personality Config | Customize tone, style, and identity |
| Bilingual UX | English + Portuguese support |
Watch the system boot up and connect to the local AI model
The bot responds instantly β running 100% offline
Ask anything β the answer comes from your own machine
/ping – Check if the bot is alive
/help – Show all commands
/model – Show which AI model is running
/remember – Store a fact in long-term memory
/forget – Erase all stored facts
/clear – Clear conversation history
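A command router in the style of `commands.js` can be sketched as below. This is a minimal illustration only; the registry shape, handler names, and the `ctx` object are assumptions, not CrustAI's actual implementation:

```javascript
// Hypothetical command registry sketch. Each handler returns the bot's
// reply text; non-command messages fall through to the LLM (signaled
// here by returning null).
const commands = new Map([
  ["/ping", () => "pong"],
  ["/model", (ctx) => `Running model: ${ctx.model}`],
  ["/help", () => [...commands.keys()].join("\n")],
]);

function handleMessage(text, ctx) {
  // Commands are matched on the first whitespace-separated token.
  const [name] = text.trim().split(/\s+/);
  const handler = commands.get(name);
  return handler ? handler(ctx) : null;
}
```

Keeping the handlers in a lookup table means new commands can be added without touching the dispatch logic.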
Adapters (Telegram / Discord / WhatsApp / Slack)
                   │
                   ▼
         Message Orchestrator
          ┌────────┴────────┐
          ▼                 ▼
    Ollama Client      Memory Store
          │                 │
          └────────┬────────┘
                   ▼
               REST API
Design note: adapter boundaries make it easy to add new channels without changing core conversation logic.
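That boundary can be illustrated with a toy adapter. The class below is a hypothetical sketch of the contract, not CrustAI's real adapter API: the orchestrator hands the adapter a single callback, and the adapter translates platform events into that call and replies back into platform-specific sends.

```javascript
// Illustrative adapter sketch (assumed interface, not the project's actual one).
class EchoAdapter {
  constructor(onMessage) {
    // Orchestrator callback: (userId, text) => Promise<replyText>
    this.onMessage = onMessage;
    this.sent = []; // stands in for the platform's outbound API
  }

  // A real adapter would call this from a platform event (e.g. a Telegram update).
  async receive(userId, text) {
    const reply = await this.onMessage(userId, text);
    this.send(userId, reply);
  }

  // A real adapter would call the platform SDK here instead of buffering.
  send(userId, text) {
    this.sent.push({ userId, text });
  }
}
```

Because the core only ever sees `receive`/`send`, adding a new channel means writing one new adapter class and nothing else.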
- Node.js ≥ 20.0
- Ollama installed and running
- A Telegram Bot Token from @BotFather
# 1. Clone the repository
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI
# 2. Install dependencies
npm install
# 3. Start Ollama and pull a model
ollama serve
ollama pull tinyllama # lightweight (600MB)
# or
ollama pull llama3.2 # more powerful (2GB, needs 8GB RAM)
# 4. Configure the project
cp config/config.example.yml config/config.yml
# Edit config/config.yml with your Telegram token and model
# 5. Run CrustAI
npm start

Edit config/config.yml:
model: tinyllama # or llama3.2, phi3, mistral...
ollama_url: http://localhost:11434
language: pt-BR
telegram:
  enabled: true
  token: YOUR_BOT_TOKEN_HERE
  allowed_user_ids: []   # leave empty to allow all users

discord:
  enabled: false
  token: ""

whatsapp:
  enabled: false

voice:
  enabled: false

port: 8765

| Technology | Purpose |
|---|---|
| Node.js | Runtime environment |
| Ollama | Local LLM inference engine |
| node-telegram-bot-api | Telegram integration |
| @whiskeysockets/baileys | WhatsApp integration |
| discord.js | Discord integration |
| @slack/bolt | Slack integration |
| Fastify | REST API server |
| sql.js | Embedded database for memory |
| yaml | Configuration management |
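Under the hood, talking to Ollama is a plain HTTP call against its local API (the `ollama_url` from the config). The sketch below shows the shape of a non-streaming request to Ollama's `/api/generate` endpoint; the function names are illustrative, not CrustAI's actual `llm.js`:

```javascript
// Build the request for Ollama's /api/generate endpoint.
// stream: false asks Ollama for a single JSON reply instead of chunks.
function buildGenerateRequest(model, prompt) {
  return {
    url: "http://localhost:11434/api/generate",
    body: { model, prompt, stream: false },
  };
}

// Perform the call (requires a running Ollama daemon, so it is not
// invoked here).
async function ask(model, prompt) {
  const { url, body } = buildGenerateRequest(model, prompt);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.response; // generated text
}
```

Everything stays on localhost: the only network hop is to the Ollama process on your own machine.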
crustai/
├── src/
│   ├── core/
│   │   ├── index.js        # Main orchestrator
│   │   ├── llm.js          # Ollama LLM client
│   │   └── commands.js     # Command handler
│   ├── adapters/
│   │   ├── telegram/       # Telegram bot
│   │   ├── discord/        # Discord bot
│   │   ├── whatsapp/       # WhatsApp bot
│   │   └── slack/          # Slack bot
│   ├── memory/
│   │   └── store.js        # Long-term memory
│   ├── personality/
│   │   └── prompt.js       # System prompt builder
│   ├── voice/
│   │   └── server.js       # Voice WebSocket server
│   └── api/
│       └── server.js       # REST API
├── config/
│   ├── config.yml          # Your configuration (git-ignored)
│   ├── config.example.yml  # Template
│   └── personality.yml     # Assistant personality
├── demo/
│   ├── terminal.gif        # Boot demo
│   ├── ping.gif            # Telegram connection demo
│   └── chat.gif            # AI conversation demo
└── data/                   # Local database (git-ignored)
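The long-term memory behind `/remember` and `/forget` lives in `memory/store.js`, persisted with sql.js. As a simplified stand-in, the lifecycle can be modeled with an in-memory map; this sketch only illustrates the behavior and is not the project's real store:

```javascript
// Simplified model of the long-term memory store. The real store.js
// persists facts to an embedded sql.js database; this version keeps
// them in memory purely to show the remember/recall/forget lifecycle.
class MemoryStore {
  constructor() {
    this.facts = new Map(); // userId -> array of fact strings
  }

  // /remember appends a fact for the user.
  remember(userId, fact) {
    const list = this.facts.get(userId) ?? [];
    list.push(fact);
    this.facts.set(userId, list);
  }

  // Facts are retrieved when building the system prompt.
  recall(userId) {
    return this.facts.get(userId) ?? [];
  }

  // /forget erases everything stored for the user.
  forget(userId) {
    this.facts.delete(userId);
  }
}
```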
CrustAI was built with privacy as its core principle:
- All conversations stay on your machine
- No API keys sent to external AI services
- No telemetry or usage tracking
- Open source: inspect every line of code
- Your data, your rules
- Web UI dashboard
- Image understanding (multimodal LLMs)
- Plugin system for custom tools
- Docker one-click deployment
- Mobile app companion
Dave Simoes
- GitHub: @DaveSimoes
- LinkedIn: Dave Simoes
This project is licensed under the MIT License β see the LICENSE file for details.
CrustAI is a fully private, self-hosted AI assistant that runs entirely on your own machine: no data leaves your computer. It connects to popular messaging platforms such as Telegram, WhatsApp, Discord, and Slack, giving you the power of conversational AI without giving up your privacy.
Built with Node.js and powered by Ollama (a local LLM engine), CrustAI is designed for developers, privacy enthusiasts, and anyone who wants an AI assistant that truly belongs to them.
| Feature | Description |
|---|---|
| 100% Private | All data stays on your machine. No cloud |
| Local LLM | Powered by Ollama; supports llama3.2, tinyllama, and more |
| Multi-Platform | Telegram, WhatsApp, Discord, Slack: one bot |
| Long-term Memory | Remembers facts about you across conversations |
| Offline Voice | Speaks and listens without internet (pt-BR) |
| REST API | Built-in API for custom integrations |
| Personality | Configure the assistant's name, tone, and behavior |
The system booting up and connecting to the local AI model
The bot responding instantly, 100% offline
Ask anything: the answer comes from your own machine
# 1. Clone the repository
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI
# 2. Install dependencies
npm install
# 3. Start Ollama and pull a model
ollama serve
ollama pull tinyllama
# 4. Configure the project
cp config/config.example.yml config/config.yml
# Edit config/config.yml with your Telegram token
# 5. Run CrustAI
npm start

Dave Simoes, a developer passionate about AI, privacy, and open source.
- GitHub: @DaveSimoes
- LinkedIn: Dave Simoes
⭐ If this project helped you, leave a star! ⭐
Made with 🦀 and ❤️ by Dave Simoes