StillMe isn't just code - it's an EXPERIMENT in AI-human collaboration.
Built by a solo founder with AI assistance, it now needs YOUR human expertise!
StillMe started as a bold experiment: Can a non-technical founder build a complete AI system from scratch using AI-assisted development?
The answer? Yes, but with a crucial realization: AI tools are incredibly powerful for rapid prototyping, code generation, and documentation. But they need human judgment, strategic thinking, and ethical oversight to create something truly excellent.
Through AI-assisted development, StillMe now has:
- ✅ Vector Database (ChromaDB): Semantic search and knowledge retrieval
- ✅ RAG System: Retrieval-Augmented Generation for context-aware responses
- ✅ Validator Chain: Reduces hallucinations through multiple validation checks (citation, evidence overlap, confidence scoring, ethics validation)
- ✅ Multi-Source Learning: RSS, arXiv, CrossRef, Wikipedia integration
- ✅ Continuum Memory System: Tiered memory architecture (L0-L3)
- ✅ Interactive Dashboard: Complete transparency into learning processes
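To give a feel for the Validator Chain idea above, here is a minimal, hypothetical sketch of running a draft answer through a sequence of independent checks. The function names, the bracket-citation convention, and the word-overlap heuristic are all illustrative simplifications, not StillMe's actual validators:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ValidationReport:
    """Collects the names of validators that passed or failed."""
    passed: List[str] = field(default_factory=list)
    failed: List[str] = field(default_factory=list)

# A validator inspects the draft answer plus its sources and returns True/False.
Validator = Callable[[str, List[str]], bool]

def has_citation(answer: str, sources: List[str]) -> bool:
    # Toy check: the answer contains a bracketed citation marker like "[1]".
    return "[" in answer and "]" in answer

def evidence_overlap(answer: str, sources: List[str]) -> bool:
    # Toy check: at least one word of the answer also appears in the sources.
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(sources).lower().split())
    return len(answer_words & source_words) > 0

def run_chain(answer: str, sources: List[str],
              validators: List[Validator]) -> ValidationReport:
    """Run every validator and record its result; the caller decides policy."""
    report = ValidationReport()
    for validate in validators:
        bucket = report.passed if validate(answer, sources) else report.failed
        bucket.append(validate.__name__)
    return report
```

An answer that cites a source and overlaps with the evidence passes both checks; an unsupported answer fails both, which is the signal a real chain would use to reduce hallucinations.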
This is where you come in.
StillMe proves that vision + AI tools = possibility. But to reach excellence, we need:
- 🧠 Human strategic thinking - Architecture decisions, design patterns, trade-offs
- 👁️ Human code review - Spotting edge cases, security issues, performance bottlenecks
- ⚖️ Human ethical judgment - Ensuring AI learns responsibly and fairly
- 🎨 Human creativity - UI/UX design, user experience, community building
- 📚 Human knowledge - Documentation, tutorials, knowledge sharing
❌ "Either AI writes it or humans write it" - This is a false dichotomy.
❌ "AI will replace developers" - AI augments, humans guide.
❌ "Only senior developers can contribute" - Everyone has something valuable to offer.
✅ "AI + Human collaboration creates the best results" - AI handles rapid prototyping, repetitive coding, documentation. Humans provide strategy, judgment, creativity.
✅ "Learning-focused environment" - Whether you're a beginner or senior, you'll learn and grow here.
✅ "Pro-AI assistance" - We encourage using AI tools to understand the codebase, generate code, and learn. It's how StillMe was born!
Your Human Expertise:
- 🎯 Strategic thinking and architecture decisions
- 🔍 Code review and quality assurance
- ⚖️ Ethical judgment and safety considerations
- 🎨 Creative problem-solving and design
- 📖 Knowledge sharing and mentorship
AI's Strengths (Use Them!):
- ⚡ Rapid prototyping and code generation
- 📝 Documentation and comments
- 🔄 Repetitive coding tasks
- 🧪 Test case generation
- 🔍 Codebase exploration and understanding
What we need:
- 🐛 Bug hunting - Find edge cases, security vulnerabilities, performance issues
- 🏗️ Architecture review - Help design scalable, maintainable systems
- 🧪 Testing - Write comprehensive tests, improve coverage
- 🔧 Code quality - Refactoring, optimization, best practices
- 🔐 Security - Security audits, vulnerability assessments
Your expertise matters: Even if AI generated the initial code, human review catches what AI misses - subtle bugs, security flaws, architectural improvements.
What we need:
- 🎨 UI/UX Design - Make StillMe beautiful and intuitive
- 📚 Documentation - Write clear guides, tutorials, examples
- 🌍 Community Building - Help others get started, answer questions
- 🧪 Testing & Feedback - Use StillMe, report issues, suggest improvements
- 📝 Content Creation - Blog posts, tutorials, case studies
You don't need to code to contribute! Your perspective as a user is invaluable.
Never contributed to open source before? Perfect! StillMe is a great place to start.
We encourage you to use AI tools to:
- 🤖 Understand the codebase - Ask AI to explain functions, classes, architecture
- 📖 Read documentation - Use AI to summarize and explain complex concepts
- 💻 Generate code - Use AI to create initial implementations (then review and refine!)
- 🧪 Write tests - Use AI to generate test cases (then verify they're correct!)
- 🔍 Debug issues - Use AI to help diagnose problems
Examples:
- "Explain how the RAG system works in StillMe"
- "What does the Validator Chain do?"
- "Help me understand the Continuum Memory System"
- "Generate a test for function X"
Then: Review the AI's output with human judgment, test it, refine it, and contribute!
StillMe was built this way. We're not hiding it - we're celebrating it. AI-assisted development is the future, and we're proving it works when combined with human expertise.
Are you a senior developer? We especially need you!
We're experimenting with a new model:
- 🤖 AI generates initial code, documentation, tests
- 👨💻 Human reviews for quality, security, architecture
- 🤝 Together we create better code faster
Your role as a mentor:
- Review AI-generated code with critical eyes
- Guide architectural decisions
- Share knowledge with the community
- Help beginners learn through code review
- Ensure StillMe maintains high quality standards
This is cutting-edge: We're not just building an AI system - we're pioneering a new way of building software.
Perfect if you're new to open source or Python:
- ✨ Add type hints to functions without them
- 📝 Write docstrings for undocumented functions
- 🧪 Add unit tests for existing features
- 📚 Improve documentation - fix typos, clarify explanations
- 🐛 Report bugs - use StillMe and document issues
- 💬 Answer questions - help others in discussions
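As a concrete example of the first two tasks, adding type hints and a docstring changes nothing about behavior, only about clarity. The function here is invented for illustration, not taken from StillMe's codebase:

```python
# Before: no type hints, no docstring
def word_count(text):
    return len(text.split())

# After: typed and documented, identical behavior
def word_count_typed(text: str) -> int:
    """Return the number of whitespace-separated words in text."""
    return len(text.split())
```

A PR like this is small, easy to review, and teaches you the surrounding code as you go.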
How to start:
- Look for issues labeled `good-first-issue` - pick a small task (adding type hints is perfect!)
- Use AI to help understand the code
- Make your changes
- Submit a PR
For developers with some experience:
- 🔧 Refactor code - improve structure, apply design patterns
- 🧠 Implement SPICE - complete SPICE implementation
- 🤖 Add AI models - integrate new AI providers (see guide below)
- 🔍 Improve RAG - enhance retrieval, add new sources
- 🧪 Integration tests - write end-to-end tests
- 🏗️ Architecture improvements - suggest and implement better patterns
For senior developers and architects:
- 🗄️ Database migration - design and implement schema changes
- 🔐 Security audits - comprehensive security reviews
- ⚡ Performance optimization - profiling, caching, scaling
- 🏛️ System architecture - design scalable, maintainable systems
- 🧬 Core algorithms - improve RAG, memory, validation logic
- 🔄 CI/CD improvements - enhance testing, deployment pipelines
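For the profiling part of performance work, Python's built-in `cProfile`/`pstats` modules are a reasonable first tool. This is a generic sketch; `slow_sum` is just a stand-in for whatever hot path you are investigating:

```python
import cProfile
import io
import pstats

def slow_sum(n: int) -> int:
    # Deliberately naive loop, standing in for real work to be profiled.
    total = 0
    for i in range(n):
        total += i
    return total

def profile_call() -> str:
    """Profile one call to slow_sum and return the stats report as text."""
    profiler = cProfile.Profile()
    profiler.enable()
    slow_sum(100_000)
    profiler.disable()
    buf = io.StringIO()
    # Show the five most expensive entries by cumulative time.
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return buf.getvalue()
```

Reading the cumulative-time column usually tells you where a cache or an algorithmic fix would pay off before you touch any code.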
Stuck? Use AI to help! Ask AI tools to explain any step you don't understand.
- Python 3.11 or 3.12
- Git
- (Optional) Docker for containerized development
```bash
# Fork the repository on GitHub, then:
git clone https://github.com/YOUR_USERNAME/StillMe-Learning-AI-System-RAG-Foundation.git
cd StillMe-Learning-AI-System-RAG-Foundation

# Create virtual environment
python -m venv venv

# Activate (Windows)
venv\Scripts\activate

# Activate (Linux/Mac)
source venv/bin/activate

# Install project dependencies
pip install -r requirements.txt

# Install development tools (optional but recommended)
pip install ruff mypy pytest pytest-cov pytest-asyncio

# Copy example env file
cp env.example .env
# Edit .env and add your API keys (at minimum, one of):
# DEEPSEEK_API_KEY=your_key_here
# OPENAI_API_KEY=your_key_here
```

```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=backend --cov-report=html

# Run specific test file
pytest tests/test_router_smoke.py -v
```

```bash
# Check code style with Ruff
ruff check .

# Auto-fix issues
ruff check . --fix

# Check formatting
ruff format --check .

# Auto-format
ruff format .
```

```bash
# Start backend API
python start_backend.py

# Or with uvicorn directly
uvicorn backend.api.main:app --reload --port 8000

# Start dashboard (in another terminal)
streamlit run dashboard.py
```

- Open an issue on GitHub with a clear description
- Include steps to reproduce
- Provide error logs if available
- Use the bug report template if available
Tip: Use AI to help format your bug report or generate reproduction steps!
- Open a discussion or issue
- Explain the use case and benefits
- Be open to feedback and iteration
- Check existing issues/discussions first
Tip: Use AI to help brainstorm and refine your feature ideas!
- Fork the repository on GitHub
- Create a feature branch from `main`:
  ```bash
  git checkout -b feature/your-feature-name
  ```
- Make your changes following code style guidelines
  - Feel free to use AI to generate initial code!
  - Then review and refine with human judgment
- Run tests and linting before committing:
  ```bash
  pytest tests/ -v
  ruff check .
  ```
- Commit with clear messages:
  ```bash
  git commit -m "feat: Add new feature description"
  ```
- Push to your fork:
  ```bash
  git push origin feature/your-feature-name
  ```
- Submit a pull request to the `main` branch
- Ensure CI checks pass (tests, linting)
Looking for a place to start? Check issues labeled:
- `good-first-issue` - Great for newcomers
- `help-wanted` - Community contributions welcome
- `documentation` - Improve docs
Common contribution areas:
- Add type hints to functions
- Refactor to dependency injection
- Complete SPICE implementation
- Add integration tests
- Improve documentation
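"Refactor to dependency injection" typically means passing collaborators into a class instead of constructing them inside it, so that tests can substitute fakes. This is a generic before/after sketch; `VectorStore`, `Retriever`, and `ChromaStore` are illustrative names, not StillMe's actual modules:

```python
from typing import Protocol

class VectorStore(Protocol):
    """Anything with a search method can act as a store."""
    def search(self, query: str) -> list[str]: ...

# Before: the retriever builds its own store, which is hard to test or swap.
# class Retriever:
#     def __init__(self) -> None:
#         self.store = ChromaStore()  # hard-wired dependency

# After: the store is injected, so tests can pass a fake.
class Retriever:
    def __init__(self, store: VectorStore) -> None:
        self.store = store

    def retrieve(self, query: str) -> list[str]:
        return self.store.search(query)

class FakeStore:
    """In-memory stand-in used in unit tests."""
    def search(self, query: str) -> list[str]:
        return [f"result for {query}"]
```

With injection, a unit test exercises `Retriever` against `FakeStore` without starting a real vector database.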
StillMe supports multiple AI providers (DeepSeek, OpenAI, etc.). To add support for a new model:
Create a new function in `backend/api/main.py`:
```python
async def call_[model]_api(prompt: str, api_key: str, detected_lang: str = 'en') -> str:
    """
    Call [Model Name] API

    IMPORTANT: Use build_system_prompt_with_language() to ensure
    output language matches input language.

    Args:
        prompt: User prompt
        api_key: API key or endpoint URL
        detected_lang: Detected language code

    Returns:
        AI-generated response string
    """
    try:
        # ✅ Use centralized system prompt builder
        system_content = build_system_prompt_with_language(detected_lang)

        # Make API call with your model's specific format
        async with httpx.AsyncClient(timeout=60.0) as client:
            response = await client.post(
                "[YOUR_API_ENDPOINT]",
                headers={
                    "Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"
                },
                json={
                    "model": "[model-name]",
                    "system": system_content,  # ✅ Use system prompt
                    "messages": [
                        {"role": "user", "content": prompt}
                    ],
                    "max_tokens": 2000,
                    "temperature": 0.7
                }
            )
            data = response.json()
            # Parse response according to your API's format
            return data["choices"][0]["message"]["content"]
    except Exception as e:
        logger.error(f"[Model] API error: {e}")
        return f"[Model] API error: {str(e)}"
```

In the `generate_ai_response()` function, add your model check:

```python
# Check for API keys (priority order)
[model]_key = os.getenv("[MODEL]_API_KEY")
if [model]_key:
    return await call_[model]_api(prompt, [model]_key, detected_lang=detected_lang)
```

- Add your model to `README.md` under supported models
- Update `env.example` with your API key variable
- Add any model-specific configuration notes
- Test with different languages (English, Vietnamese, etc.)
- Verify language matching works correctly
- Test error handling
```python
# 1. Create function
async def call_claude_api(prompt: str, api_key: str, detected_lang: str = 'en') -> str:
    system_content = build_system_prompt_with_language(detected_lang)
    async with httpx.AsyncClient(timeout=60.0) as client:
        response = await client.post(
            "https://api.anthropic.com/v1/messages",
            headers={
                "x-api-key": api_key,
                "anthropic-version": "2023-06-01",
                "Content-Type": "application/json"
            },
            json={
                "model": "claude-3-opus-20240229",
                "max_tokens": 2000,
                "system": system_content,
                "messages": [{"role": "user", "content": prompt}]
            }
        )
        data = response.json()
        return data["content"][0]["text"]

# 2. Add to router
anthropic_key = os.getenv("ANTHROPIC_API_KEY")
if anthropic_key:
    return await call_claude_api(prompt, anthropic_key, detected_lang=detected_lang)
```

- Consistency: All models use the same language matching logic
- Maintainability: One place to update language instructions
- Community-Friendly: Clear, simple steps for contributors
- Future-Proof: Easy to extend without breaking existing code
- Follow PEP 8 Python style guide
- Use type hints for function parameters and return types
- Add docstrings to all public functions and classes
- Write clear, descriptive variable names
- Run Ruff before committing: `ruff check . --fix`
```python
from typing import Optional, List, Dict, Any

async def process_data(
    items: List[str],
    config: Optional[Dict[str, Any]] = None
) -> Dict[str, int]:
    """Process items and return statistics."""
    # Implementation
    return {"count": len(items)}
```

CRITICAL: StillMe has a unique "voice" that makes it different from other AIs. This voice comes from the Identity Layer in `backend/identity/injector.py`.
StillMe's voice is characterized by:
- Intellectual Humility: Admits uncertainty, doesn't pretend to know everything
- Meta-cognition: Self-questioning, challenges its own answers
- Philosophical Courage: Dares to challenge assumptions, even its own
- Transparency: Honest about AI limitations, not marketing language
- Collaborative: Works with users, not defensive
The Identity Layer (`backend/identity/injector.py`) is a protected zone.
Before modifying Identity Layer:
- Understand why: Read `docs/CONSTITUTION.md` and `docs/IDENTITY_VOICE_MAINTENANCE_ANALYSIS.md`
- Test voice consistency: Run `pytest tests/test_voice_consistency.py -v`
- Propose in issue: Open a GitHub issue explaining the change and rationale
- Get approval: Wait for maintainer review (this is critical code)
- Verify: Ensure voice consistency tests still pass
What you CAN modify:
- ✅ Adding new response patterns (e.g., "Future Questions handling")
- ✅ Refining existing instructions (e.g., replacing "siêu năng lực" with humble alternatives)
- ✅ Adding examples or clarifications
What you SHOULD NOT modify without discussion:
- ❌ Core principles (Intellectual Humility, Meta-cognition, Philosophical Courage)
- ❌ Fundamental tone requirements
- ❌ Identity Check Validator logic (without understanding impact)
Why this matters:
- StillMe's voice is its core differentiator
- Losing this voice = losing what makes StillMe unique
- Small changes can have big impact on voice consistency
Testing Voice Consistency:
```bash
# Run voice consistency tests
pytest tests/test_voice_consistency.py -v

# Test with different LLM providers
pytest tests/test_voice_consistency.py::test_cross_provider -v
```

Resources:
- `docs/CONSTITUTION.md` - StillMe's core identity and principles
- `docs/IDENTITY_VOICE_MAINTENANCE_ANALYSIS.md` - Analysis of voice maintenance strategies
- `tests/test_voice_consistency.py` - Voice consistency test suite
- Add unit tests for new features in the `tests/` directory
- Test error handling and edge cases
- Verify language matching works correctly
- Run voice consistency tests if modifying Identity Layer or response style
- Test with different input languages (English, Vietnamese, etc.)
- Aim for 80%+ coverage for new code
- Audit guides:
  - `docs/AUDIT_GUIDE.md` (audit checklist)
  - `docs/VALIDATION_CHAIN_SPEC.md` (must-pass vs warning validators)
  - `docs/NO_SOURCE_POLICY.md` (no-source refusal rules)
```python
# tests/test_your_feature.py
import pytest
from backend.your_module import your_function

def test_your_function_success():
    """Test successful case."""
    result = your_function("input")
    assert result == "expected_output"

def test_your_function_error():
    """Test error handling."""
    with pytest.raises(ValueError):
        your_function("invalid_input")
```

Before submitting a PR, ensure:
- All tests pass: `pytest tests/ -v`
- Linting passes: `ruff check .`
- Code is formatted: `ruff format .`
- Type hints added (where applicable)
- Docstrings added for public functions
- README/docs updated if needed
- No `# type: ignore` comments (unless absolutely necessary)
- Human review completed - AI-generated code has been reviewed and refined
StillMe proves: Vision + AI Tools = Possibility
Now we need: Your Human Expertise = Excellence
Whether you're:
- 🟢 A beginner learning to code
- 🟡 An intermediate developer looking to grow
- 🔴 A senior developer wanting to mentor
- 🎨 A designer passionate about AI
- 📚 A writer who loves documentation
- 🌍 A community builder
You have something valuable to contribute.
StillMe is more than code - it's a living experiment in AI-human collaboration. Join us in proving that the future of software development is AI-assisted, human-guided.
- 💬 Open a discussion on GitHub
- 🐛 Check existing issues
- 📖 Review the codebase (use AI to help understand it!)
- 🤝 Ask the community - we're friendly and helpful!
Thank you for contributing to StillMe! 🎉
Together, we're building the future of AI-human collaboration. Let's make it transparent, ethical, and excellent.