A privacy-focused, offline-capable AI tool for generating comprehensive software test cases. Built with the BLAST protocol (Blueprint, Link, Architect, Stylize, Trigger), this tool leverages a local Ollama model to ensure your sensitive requirements never leave your machine.
The application follows a Decentralized Client-Server architecture where the frontend interacts directly with the local AI service, bypassing the need for a heavy backend server.
```mermaid
graph TD
    subgraph "User Environment (Local Machine)"
        Browser[("🌐 Web Browser (Frontend)")]
        ServeScript[("🐍 Python Server (Port 3002)")]
        Ollama[("🦙 Ollama AI Service (Port 11434)")]
    end
    User((👤 User)) -->|"1. Opens http://localhost:3002"| Browser
    ServeScript -->|"2. Serves HTML/CSS/JS"| Browser
    Browser -->|"3. User Enters Requirements"| Browser
    Browser -->|"4. Sends Prompt (POST)"| Ollama
    Ollama -->|"5. Returns Test Cases (JSON)"| Browser
    Browser -->|"6. Renders Cards & PDF"| User
```
```text
+-------------------+        +----------------------------+
| 👤 User           |        | 💻 Local Machine           |
+-------------------+        |                            |
         |                   |   +------------------+     |
 1. Open Browser             |   | 🌐 Browser (UI)  |     |
         v                   |   |   (index.html)   |     |
+-------------------+        |   +--------+---------+     |
| 🌐 http://localhost| <-----+            |               |
|      :3002        | 2. Serves Assets    | 3. Send Prompt (POST)
+-------------------+        |   +--------v---------+     |
                             |   | 🦙 Ollama API    |     |
                             |   |  (Port 11434)    |     |
                             |   +--------+---------+     |
                             |            |               |
                             |   4. JSON Response         |
                             |            |               |
                             |   +--------v---------+     |
                             |   | 📄 JS PDF Gen    |     |
                             |   +------------------+     |
                             +----------------------------+
```
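Step 4 in the diagrams, the prompt POST, presumably targets Ollama's standard `/api/generate` endpoint. The prompt wording below is an illustrative assumption (the real prompt lives in `main.js`), but the payload fields (`model`, `prompt`, `stream`) are the documented Ollama API shape. A minimal sketch of the request the frontend makes:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # standard Ollama generate endpoint

def build_request(requirement: str) -> urllib.request.Request:
    # The prompt text here is illustrative; main.js defines the real one.
    payload = {
        "model": "tinyllama",
        "prompt": f"Generate 5-7 test cases (functional, edge, negative) for: {requirement}",
        "stream": False,  # request one complete JSON response instead of a token stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires Ollama running locally):
# with urllib.request.urlopen(build_request("User login form")) as resp:
#     print(json.loads(resp.read())["response"])
```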
- Frontend (UI): Vanilla HTML5, CSS3 (Cyberpunk/Neon Theme), and JavaScript. Handles all logic, state management, and PDF generation (via `jsPDF`).
- Server (Host): A simple Python script (`serve.py`) that uses `http.server` to serve static files locally.
- Intelligence (AI): Ollama running the `tinyllama` model (or any compatible model) to process natural language requirements into structured test cases.
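The static host described above needs nothing beyond the Python standard library. A minimal sketch of what a `serve.py` like this might look like (an assumption — the actual `tools/serve.py` may differ in details):

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

PORT = 3002  # the port shown in the architecture diagram

def make_server(port: int = PORT) -> ThreadingHTTPServer:
    # SimpleHTTPRequestHandler serves files from the current working directory,
    # which is why serve.py should be run from the Project Root.
    return ThreadingHTTPServer(("localhost", port), SimpleHTTPRequestHandler)

if __name__ == "__main__":
    with make_server() as httpd:
        print(f"Serving at http://localhost:{PORT}")
        httpd.serve_forever()
```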
Screenshots: Home Page · Generating · Result View · Generated Output.
- 100% Local & Private: No data is sent to the cloud. Perfect for enterprise or sensitive projects.
- Smart PDF Export: Instantly generate timestamped PDF reports (`Test_Cases_Report_MM-DD-YYYY...pdf`).
- Cyberpunk Speed UI: A high-performance, neon-styled interface designed for focus and speed.
- Multi-Vector Generation: Automatically generates 5-7 detailed test cases (Functional, Edge, Negative) from a single prompt.
- Cross-Platform: Runs on any machine with Python installed (no heavy backend required); the setup commands shown use Windows/PowerShell.
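The "Multi-Vector Generation" output can be pictured as a list of structured records, one per test case. The field names below are illustrative assumptions, not the app's actual schema:

```python
# Hypothetical shape of one generated test case (field names are assumptions).
sample_case = {
    "id": "TC-001",
    "type": "Negative",  # one of: Functional | Edge | Negative
    "title": "Reject empty password on login",
    "steps": ["Open login page", "Enter username only", "Click Submit"],
    "expected": "Form shows a validation error; no request is sent",
}

def is_valid_case(case: dict) -> bool:
    # A case is usable for card rendering / PDF export only if every field is present
    # and the type is one of the three generation vectors.
    required = {"id", "type", "title", "steps", "expected"}
    return required <= case.keys() and case["type"] in {"Functional", "Edge", "Negative"}
```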
- Ollama: Download and install from ollama.com.
- TinyLlama Model: Run `ollama run tinyllama` in your terminal to pull the model.
- CORS Configuration: For the browser to talk to Ollama, set the environment variable:
  - PowerShell: `[System.Environment]::SetEnvironmentVariable("OLLAMA_ORIGINS", "*", "User")`
  - Restart your terminal/computer after setting this.
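Ollama reads `OLLAMA_ORIGINS` at startup, so the value must be visible to the process that launches it. A quick way to confirm the variable is set in a fresh session:

```python
import os

def cors_configured() -> bool:
    # "*" allows requests from any browser origin, including http://localhost:3002.
    return os.environ.get("OLLAMA_ORIGINS") == "*"
```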
- Clone/Download this repository.
- Start the Server:
- Ensure you have Python installed.
- Open terminal in project directory.
- Run: `python tools/serve.py`
- Access the App:
- Open your browser and navigate to the address shown in the terminal (e.g., `http://localhost:3002`).
```text
Project Root/
├── tools/
│   └── serve.py                       # Python Web Server
├── architecture/
│   ├── SOP_Generation_Logic.md
│   └── SOP_UI_State.md
├── index.html                         # Main Application Structure
├── style.css                          # Cyberpunk Visual Styles
├── main.js                            # Frontend Logic & API Integration
├── BLAST.md                           # Development Protocol
├── README.md                          # Documentation
└── Test_Case_Generated_Report.pdf     # Sample Output
```
- "Ollama connection failed": Ensure Ollama is running (`ollama serve`) and the `OLLAMA_ORIGINS` variable is set correctly.
- 404 Errors: Ensure you are running the `serve.py` script from the Project Root directory.
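When diagnosing "Ollama connection failed", it helps to separate "service not running" from "CORS misconfigured". Ollama answers a plain GET on its root URL when the service is up, so a reachability check can be sketched as:

```python
import urllib.error
import urllib.request

def ollama_reachable(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    # Returns True if anything answers at the Ollama port; a CORS problem would
    # still show up in the browser even when this check passes.
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, start the service with `ollama serve` before retrying in the browser.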
This project is licensed under the MIT License - see the LICENSE file for details.