pls is a minimalist, blazingly fast Linux command-line utility written in C++. It translates natural language requests into ready-to-use bash/zsh commands using local LLMs via Ollama.
Unlike hundreds of bloated Python wrappers, pls starts instantly (zero overhead), has no runtime dependencies beyond `libcurl`, and is fully aware of your system environment.
- ⚡ Zero-Overhead: Written in raw C++ using `libcurl`. Manual JSON parsing keeps execution times in the microsecond range.
- 🐧 OS Context-Aware: Automatically reads `/etc/os-release` to detect your Linux distribution and preferred package manager (zypper, apt, pacman, dnf).
- 👁️ Directory Context-Aware: The AI "sees" the files in your current directory. Asking it to "delete log files" generates a command specifically for the files in your folder.
- 🚰 UNIX Pipes (`stdin`) Support: Feed logs or errors directly into the AI: `cat error.log | pls "find the issue and fix it"`.
- 📋 Wayland Native: Out-of-the-box clipboard integration via `wl-copy`.
- 🔒 100% Private: Everything runs locally on your hardware. No API keys, no data harvesting.
Dependencies: `libcurl` and `cmake` (e.g., for openSUSE: `sudo zypper in libcurl-devel cmake gcc-c++`).
```sh
git clone https://github.com/YOUR_USERNAME/pls-ai-cli.git
cd pls-ai-cli
mkdir build && cd build
cmake ..
cmake --build .
sudo cp pls /usr/local/bin/
```

First, tell `pls` which local Ollama model to use (we highly recommend `qwen2.5-coder:3b` for the best speed/intelligence ratio):
```sh
pls --set qwen2.5-coder:3b
```

(You can see the active Ollama models via `pls --list`.)
Basic translation:

```sh
pls update my system
```
Execution (`-e`) and clipboard (`-c`) flags:

```sh
pls -e clear docker cache
```
Pipe Magic (stdin):

```sh
ps aux | pls -e "find the command to kill the process consuming the most memory"
```
The LLM generation temperature is hardcoded to 0.0 for maximum determinism and logical accuracy. The active model name is stored in `~/.config/pls_model.txt`. Directory and pipe context sizes are strictly capped to prevent memory overflow and VRAM crashes.