SQORA is an AI-powered competitive exam preparation platform for JEE/NEET aspirants. It combines a real-time AI mentor, auto-generated math and science animations, mock exams, contest features, and local text-to-speech into one learning workspace.
| Feature | Description |
|---|---|
| AI Mentor | Chat with a Gemini-powered tutor that explains concepts, solves doubts, and maintains context across turns |
| Manim Animations | AI explanations trigger auto-generated animated videos rendered with Manim |
| Mock Exams | Take timed exams with auto-grading and review |
| Contest Arena | Browse upcoming and past contests |
| Context Compaction | Rolling Gemini-based context summarization keeps long chats efficient |
| Text-to-Speech | Local TTS server delivers narrated audio explanations |
| Admin Panel | Configure mentor greetings, voice settings, and platform options |
| 3D Landing Page | React Three Fiber-powered interactive landing page |
| Google Auth | Firebase Google Sign-In: unified sign-in/sign-up, no passwords |
| Multi-User Support | Fully isolated data per Firebase UID on the server |
| Token Auth | Firebase ID tokens verified server-side for strict data isolation |
```
┌──────────────────────────────┐      ┌───────────────────────────────────┐
│         Vercel (CDN)         │      │          IITJ RAID Server         │
│                              │      │                                   │
│  React SPA (static build) ───┼─────▶│  FastAPI Backend (port 8000)      │
│                              │      │  Manim Worker (background)        │
│  VITE_API_URL = raid server  │      │  TTS Server (port 8882)           │
└──────────────────────────────┘      └─────────────────────────┬─────────┘
                                                                │
                                                  ┌─────────────▼─────────────┐
                                                  │  Firebase (Google Cloud)  │
                                                  │  • Authentication (JWT)   │
                                                  └───────────────────────────┘
```
| Service | Stack | Deployment |
|---|---|---|
| Frontend | React 18, Vite, React Router, Three.js, KaTeX | Vercel |
| Backend API | FastAPI, Gemini, Firebase Auth | RAID Server |
| Manim Worker | Python, Manim 0.19, Gemini code gen | RAID Server |
| TTS Server | HeadTTS (Node.js + WebSocket/REST) | RAID Server |
```
user_data/                    ← on RAID server
└── {firebase_uid}/
    ├── chat_history.json
    ├── ai_cache.json
    ├── video_cache.json
    ├── incoming_jobs/        ← Manim job queue
    └── rendered_videos/      ← generated MP4 files
```
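Because every file lives under `user_data/{firebase_uid}/`, the backend only ever touches paths derived from a verified UID. A minimal sketch of how such path scoping might look — the `user_dir` helper is illustrative, not taken from the SQORA codebase:

```python
from pathlib import Path

# Hypothetical helper (not SQORA's actual code): resolve a user's data
# directory from a verified Firebase UID, rejecting path-traversal attempts.
USER_DATA_ROOT = Path("user_data")

def user_dir(uid: str) -> Path:
    """Return user_data/{uid}, refusing uids that escape the root."""
    path = (USER_DATA_ROOT / uid).resolve()
    if USER_DATA_ROOT.resolve() not in path.parents:
        raise ValueError(f"invalid uid: {uid!r}")
    return path

chat_file = user_dir("abc123") / "chat_history.json"
```

The traversal check matters because the UID appears in URL paths (`/api/users/{uid}/...`), so it should never be trusted to be a plain directory name.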
- Node.js 18+
- Python 3.12+
- ffmpeg (required by Manim)
```bash
git clone https://github.com/devlup-labs/sqora.git
cd sqora

# Frontend
cd Frontend && npm install && cd ..

# Backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Copy the example and fill in your values:

```bash
cp .env.example .env
```

Edit `.env`:
```bash
GEMINI_API_KEY=your_api_key
VITE_FIREBASE_API_KEY=...
VITE_FIREBASE_AUTH_DOMAIN=...
VITE_FIREBASE_PROJECT_ID=...
VITE_FIREBASE_STORAGE_BUCKET=...
VITE_FIREBASE_MESSAGING_SENDER_ID=...
VITE_FIREBASE_APP_ID=...
VITE_FIREBASE_MEASUREMENT_ID=...
# For local dev: frontend hits backend on port 8000
VITE_API_URL=http://localhost:8000
```

```bash
# Terminal 1 - Frontend
cd Frontend && npm run dev

# Terminal 2 - Backend API
./start_backend.sh

# Terminal 3 - Manim Worker
./start_manim.sh

# Terminal 4 - HeadTTS Server (optional)
./start_tts.sh
```

- Frontend: http://localhost:5173
- Backend: http://localhost:8000
```bash
git push origin main
```

- Go to vercel.com/new
- Import the sqora repository
- Vercel reads `vercel.json` automatically; no extra config needed

Under Project → Settings → Environment Variables, add:
| Variable | Value |
|---|---|
| `VITE_FIREBASE_API_KEY` | From Firebase Console |
| `VITE_FIREBASE_AUTH_DOMAIN` | `your-project.firebaseapp.com` |
| `VITE_FIREBASE_PROJECT_ID` | Your project ID |
| `VITE_FIREBASE_STORAGE_BUCKET` | `your-project.appspot.com` |
| `VITE_FIREBASE_MESSAGING_SENDER_ID` | Sender ID |
| `VITE_FIREBASE_APP_ID` | App ID |
| `VITE_FIREBASE_MEASUREMENT_ID` | Measurement ID |
| `VITE_API_URL` | Public URL of your RAID server backend, e.g. `https://sqora.iitj.ac.in` |
`VITE_API_URL` is required on Vercel; it tells the frontend where to send API requests.
Firebase Console → Authentication → Authorized domains → add your `*.vercel.app` URL.
```bash
git clone https://github.com/devlup-labs/sqora.git
cd sqora
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```

Same as local dev, but without the `VITE_*` variables (those are frontend-only):

```bash
GEMINI_API_KEY=your_key
```

The frontend is on Vercel and the backend is on the RAID server, so this is a cross-origin setup. The backend already includes CORS middleware. Expose it through Nginx:
```nginx
server {
    listen 80;
    server_name your-raid-domain-or-ip;

    location /api/ {
        proxy_pass http://localhost:8000;
        proxy_set_header Authorization $http_authorization;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Required for SSE streaming
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 300s;

        # Allow Vercel frontend
        add_header 'Access-Control-Allow-Origin' 'https://your-app.vercel.app' always;
        add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type' always;
    }
}
```

For HTTPS (strongly recommended):
```bash
sudo certbot --nginx -d your-domain.com
```
For production security, enable cryptographic token verification:
- Firebase Console → Project Settings → Service Accounts → Generate new private key
- Save as `Unmute/firebase-service-account.json`
- ⚠️ Never commit this file; it is in `.gitignore`
- `pip install firebase-admin`
- Restart the backend
```bash
nohup ./start_backend.sh &
nohup ./start_manim.sh &
nohup ./start_tts.sh &   # optional
```

Or use tmux / systemd for persistent sessions.
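For the systemd route, a unit file per service keeps the processes alive across reboots. A sketch for the backend — the paths and user name below are assumptions, not part of the repo:

```ini
# Hypothetical unit file (paths assumed): /etc/systemd/system/sqora-backend.service
[Unit]
Description=SQORA FastAPI backend
After=network.target

[Service]
User=sqora
WorkingDirectory=/home/sqora/sqora
ExecStart=/home/sqora/sqora/start_backend.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now sqora-backend`; analogous units would cover the Manim worker and TTS server.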
- Users authenticate with Google via Firebase (no passwords)
- Firebase issues a short-lived JWT (ID token) on login
- Every API request carries it as `Authorization: Bearer <token>`
- Backend verifies the token and extracts `uid` to scope all data
- All data lives under `user_data/{uid}/`, so users can only access their own
All chat and user endpoints require `Authorization: Bearer <firebase_id_token>`.
| Method | Endpoint | Auth | Description |
|---|---|---|---|
| POST | `/api/chat` | ✅ | Send a message, get AI reply + `video_id` |
| GET | `/api/chat/stream?message=...` | ✅ | SSE stream of AI response tokens |
| GET | `/api/users/{uid}/chat` | ✅ | Fetch user's chat history |
| POST | `/api/users/{uid}/chat` | ✅ | Save chat history |
| GET | `/api/users/{uid}/videos/{id}/status` | ✅ | Check if a video is rendered |
| GET | `/api/users/{uid}/videos/{id}/ready` | ✅ token param | SSE; fires `ready` when the MP4 is done |
| GET | `/api/users/{uid}/videos/{id}` | ✅ token param | Stream rendered `.mp4` (byte-range) |
| GET | `/api/contests` | ❌ | List contests |
| GET | `/api/exams/{code}` | ❌ | Fetch exam questions |
| GET/PUT | `/api/admin/config` | ✅ | Read/update platform settings |
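A minimal client sketch for the chat endpoint. The base URL and JSON body shape are assumptions inferred from the table above, not a documented contract:

```python
# Illustrative client sketch: build a request to POST /api/chat.
# API_BASE and the body shape {"message": ...} are assumptions.

API_BASE = "https://sqora.iitj.ac.in"

def chat_request(id_token: str, message: str) -> tuple[str, dict, dict]:
    """Return (url, headers, json_body) for a chat call."""
    url = f"{API_BASE}/api/chat"
    headers = {"Authorization": f"Bearer {id_token}"}
    body = {"message": message}
    return url, headers, body

# With the `requests` library installed, the call would be roughly:
#   url, headers, body = chat_request(token, "Explain projectile motion")
#   reply = requests.post(url, headers=headers, json=body).json()
```

The same `Authorization` header works for every endpoint marked ✅; the video endpoints marked "token param" accept the token in the query string instead, since `<video>` tags and `EventSource` cannot set headers.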
Long conversations are summarized before being sent to Gemini:
- Full history is always stored locally and shown in the UI
- The LLM receives: compacted memory + recent turns + new message
- When the estimated token count exceeds `trigger_tokens`, old context is summarized via Gemini
- Configure in `Unmute/config.json` → `llm.context_compaction`
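The trigger logic can be sketched as follows. This is a minimal illustration, assuming the common ~4-characters-per-token estimate; the function names, default values, and exact split policy are not SQORA's implementation:

```python
# Illustrative sketch of rolling context compaction. The 4-chars/token
# estimate and the keep_recent split are assumptions, not SQORA's logic.

def estimate_tokens(text: str) -> int:
    return len(text) // 4

def build_llm_context(memory: str, turns: list[str], new_message: str,
                      trigger_tokens: int = 8000, keep_recent: int = 6):
    """Return (turns_to_summarize, context) where context is what the LLM sees.

    When history grows past trigger_tokens, older turns are earmarked for a
    Gemini summarization pass; only compacted memory + recent turns are sent.
    """
    total = estimate_tokens(memory + "".join(turns) + new_message)
    if total > trigger_tokens and len(turns) > keep_recent:
        to_summarize = turns[:-keep_recent]   # fed to Gemini for a new summary
        recent = turns[-keep_recent:]
    else:
        to_summarize, recent = [], turns
    context = [memory] + recent + [new_message]
    return to_summarize, context
```

The key property is that compaction only changes what the model sees; the full history on disk (and in the UI) is untouched.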
| Layer | Technologies |
|---|---|
| Frontend | React 18, Vite, React Router, Three.js / React Three Fiber, KaTeX, react-markdown |
| Backend | FastAPI, Google Gemini (google-genai SDK), Firebase Admin SDK |
| Animation | Manim 0.19, Gemini code generation, ffmpeg |
| Auth | Firebase Authentication (Google Sign-In) |
| TTS | HeadTTS |
| Deployment | Vercel (frontend), IITJ RAID Server (backend + worker) |
MIT