
AI Intelligent Learning Assistant System

Language: 中文 | English

A full-stack AI learning loop built around material management -> RAG retrieval -> AI summary and question generation -> online practice -> mistake review -> learning analytics -> agent-based study assistant.
This project is suitable as a reference case for AI application practice, RAG engineering, agent tool calling, graduation projects, course projects, and portfolio projects.

AI Intelligent Learning Assistant System Social Preview

Highlights

  • Complete AI learning loop: material parsing, RAG retrieval, AI summaries, AI question generation, online practice, mistake review, and learning analytics.
  • Engineering-oriented RAG: Qdrant-based vector retrieval with page references, similarity scores, section metadata, and Hit@K / Recall@K / MRR evaluation.
  • Agent tool calling: the study assistant supports SSE streaming and can query system context such as materials, question sets, practice records, and task status.
  • Portfolio-friendly full stack: Spring Boot, Vue3, MySQL, Redis/Valkey, Qdrant, Docker, Vercel, Render, and Aiven in one reproducible project.

Contact

If you have questions or suggestions about this project, you can email: 3111471949@qq.com

Note: advertising or promotional messages may not receive a reply. Please include "AI Learning System + your purpose" in the email subject.

Live Demo

Note: access to Vercel, Render, and Qdrant Cloud may be unstable from mainland China. Render free services can cold-start, so the first request may take several seconds.

The production environment is deployed with Vercel + Render + Aiven MySQL + Aiven Valkey + Qdrant Cloud. See the deployment guide: Free Cloud Deployment: Render + Aiven + Qdrant Cloud.

Screenshots

Home / Learning Overview | Agent Study Assistant
Material Management | RAG Evaluation
Learning Analytics | AI Configuration

Core Features

Material-Driven Learning Loop

  • Upload or create learning materials, with parsing support for TXT, PDF, DOCX, and other formats.
  • Generate AI summaries, review outlines, and practice questions from materials.
  • Support online practice, automatic scoring, mistake collection, and knowledge mastery statistics.
  • Summarize accuracy, mistake count, weak knowledge points, and practice trends in the learning analytics dashboard.
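
To make the analytics step above concrete, here is a small sketch of how accuracy and weak knowledge points could be derived from practice records. The `PracticeRecord` shape and the 0.6 threshold are illustrative assumptions, not the project's actual API.

```typescript
// Hypothetical record shape: one answered question per entry.
interface PracticeRecord {
  knowledgePoint: string;
  correct: boolean;
}

// Overall accuracy across all answered questions.
function accuracy(records: PracticeRecord[]): number {
  if (records.length === 0) return 0;
  return records.filter((r) => r.correct).length / records.length;
}

// Knowledge points whose per-point accuracy falls below a threshold count as "weak".
function weakKnowledgePoints(records: PracticeRecord[], threshold = 0.6): string[] {
  const stats = new Map<string, { total: number; correct: number }>();
  for (const r of records) {
    const s = stats.get(r.knowledgePoint) ?? { total: 0, correct: 0 };
    s.total += 1;
    if (r.correct) s.correct += 1;
    stats.set(r.knowledgePoint, s);
  }
  return [...stats.entries()]
    .filter(([, s]) => s.correct / s.total < threshold)
    .map(([kp]) => kp);
}
```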

RAG and Vector Retrieval

  • Split material content into chunks and generate embeddings.
  • Store material vectors in Qdrant and support semantic retrieval.
  • Preview RAG retrieval results with similarity score, page number, paragraph, and section metadata.
  • Manage RAG evaluation datasets and calculate Hit@K, Recall@K, MRR, and latency.
  • Import the CMRC2018 dataset for batch retrieval quality evaluation.
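
The retrieval metrics named above have standard definitions; a minimal sketch, assuming each evaluation case pairs a set of ground-truth chunk IDs with a ranked retrieval result list:

```typescript
interface EvalCase {
  relevant: Set<string>; // ground-truth chunk IDs
  retrieved: string[];   // ranked retrieval results
}

// Hit@K: fraction of queries with at least one relevant chunk in the top K.
function hitAtK(cases: EvalCase[], k: number): number {
  const hits = cases.filter((c) =>
    c.retrieved.slice(0, k).some((id) => c.relevant.has(id))
  ).length;
  return hits / cases.length;
}

// Recall@K: average fraction of each query's relevant chunks found in the top K.
function recallAtK(cases: EvalCase[], k: number): number {
  const sum = cases.reduce((acc, c) => {
    const found = c.retrieved.slice(0, k).filter((id) => c.relevant.has(id)).length;
    return acc + found / c.relevant.size;
  }, 0);
  return sum / cases.length;
}

// MRR: mean of 1 / rank of the first relevant chunk (0 if none was retrieved).
function mrr(cases: EvalCase[]): number {
  const sum = cases.reduce((acc, c) => {
    const idx = c.retrieved.findIndex((id) => c.relevant.has(id));
    return acc + (idx === -1 ? 0 : 1 / (idx + 1));
  }, 0);
  return sum / cases.length;
}
```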

Agent Study Assistant

  • Support general study conversations and SSE streaming responses.
  • Support session lists, new conversations, conversation deletion, and tool-call traces.
  • Query system context such as materials, question sets, practice records, and task status.
  • New conversations do not automatically bind to materials by default. Users can explicitly specify a material title or ID in the message.
  • Display a notice when the current AI configuration is running in mock mode.
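
A streaming client has to reassemble events from the `text/event-stream` format the SSE endpoint emits. A minimal parsing sketch (events are separated by a blank line; an event's payload is the concatenation of its `data:` lines):

```typescript
// Parse a text/event-stream buffer into one payload string per event.
function parseSseEvents(buffer: string): string[] {
  return buffer
    .split("\n\n") // events are delimited by a blank line
    .map((event) =>
      event
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trimStart())
        .join("\n") // multiple data: lines join with a newline
    )
    .filter((data) => data.length > 0);
}
```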

AI Configuration and Task Center

  • Configure chat models and embedding models separately.
  • Support OpenAI-compatible APIs, DeepSeek, Doubao Ark, and similar compatible providers.
  • Support mock mode for demonstrating the workflow without a real model API key.
  • Route long-running operations such as AI summaries, AI question generation, short-answer grading, and embedding into the task center.
  • Track task status, view details, retry failures, redispatch tasks, and delete task records.
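
A client-side sketch of waiting on a long-running task, assuming the task center exposes a status endpoint. The `TaskStatus` values and `fetchStatus` signature are illustrative assumptions; the status fetcher is injected so it can be stubbed.

```typescript
// Hypothetical status values; the real API's enum may differ.
type TaskStatus = "PENDING" | "RUNNING" | "SUCCESS" | "FAILED";

// Poll until the task reaches a terminal state or the attempt budget runs out.
async function waitForTask(
  fetchStatus: (taskId: string) => Promise<TaskStatus>,
  taskId: string,
  { intervalMs = 1000, maxAttempts = 30 } = {}
): Promise<TaskStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus(taskId);
    if (status === "SUCCESS" || status === "FAILED") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Task ${taskId} did not finish within ${maxAttempts} polls`);
}
```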

Tech Stack

Layer Technologies
Frontend Vue 3, TypeScript, Vite, Pinia, Vue Router, Element Plus, Axios
Backend Spring Boot 3.3, Java 17, MyBatis Plus, JWT, Bean Validation
Database MySQL 8, Redis
RAG Qdrant, Embedding API, material chunking, Hit@K / Recall@K / MRR evaluation
AI OpenAI-compatible Chat API, SSE streaming, mock mode
Deployment Docker Compose, Vercel, Render, Aiven MySQL, Aiven Valkey, Qdrant Cloud

Project Structure

AI-Intelligent-Learning-Assistant-System
├─ backend/                  # Spring Boot backend
│  ├─ src/main/java/          # Business code
│  ├─ src/main/resources/     # Configuration files
│  └─ Dockerfile
├─ frontend/                 # Vue 3 frontend
│  ├─ src/api/                # API clients
│  ├─ src/components/         # Shared components
│  ├─ src/views/              # Page views
│  └─ Dockerfile
├─ db/                        # Database schema and migration scripts
├─ docker/mysql/init/         # MySQL container initialization scripts
├─ docs/deploy-render-aiven-qdrant.md  # Free cloud deployment guide
├─ docs/images/               # README screenshots and social preview
├─ runtime/                   # AI runtime config, sensitive values are not committed by default
├─ docker-compose.yml         # One-command local startup
├─ render.yaml                # Render Blueprint config
├─ vercel.json                # Vercel frontend deployment config
├─ README.md                  # Chinese README
└─ README.en.md               # English README

Quick Start

Option 1: Docker Compose

  1. Prepare environment variables:
Copy-Item .env.example .env
  2. Start all services:
docker compose up -d --build
  3. Open services:
  4. View logs:
docker compose logs -f backend
docker compose logs -f frontend
  5. Stop services:
docker compose down

Option 2: Manual Local Startup

Backend:

Set-Location backend
mvn spring-boot:run

Frontend:

Set-Location frontend
npm install
npm run dev

If vector retrieval is needed, start Qdrant as well:

docker run -d --name qdrant -p 6333:6333 -p 6334:6334 -v qdrant_storage:/qdrant/storage qdrant/qdrant

Core Configuration

The backend supports overriding configuration through environment variables:

Variable Description
SPRING_DATASOURCE_URL MySQL connection URL
SPRING_DATASOURCE_USERNAME MySQL username
SPRING_DATASOURCE_PASSWORD MySQL password
SPRING_REDIS_HOST Redis host
SPRING_REDIS_SSL_ENABLED Whether Redis / Valkey TLS is enabled
APP_AI_ENABLED Whether AI capabilities are enabled
APP_AI_MOCK_MODE Whether mock mode is enabled
APP_AI_CHAT_PROVIDER_TYPE Chat model provider
APP_AI_BASE_URL Chat API base URL
APP_AI_CHAT_PATH Chat API path
APP_AI_API_KEY Chat API key
APP_AI_DEFAULT_MODEL Default chat model
APP_AI_EMBEDDING_BASE_URL Embedding API base URL
APP_AI_EMBEDDING_API_KEY Embedding API key
APP_AI_DEFAULT_EMBEDDING_MODEL Default embedding model
APP_QDRANT_BASE_URL Qdrant base URL
APP_QDRANT_API_KEY Qdrant API key
APP_QDRANT_COLLECTION_NAME Qdrant collection name
APP_JWT_SECRET JWT secret
VITE_API_BASE_URL Frontend production API base URL
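
For local runs, the variables above can be set in the `.env` file created during Quick Start. A sketch with placeholder values only (the database name and secrets here are illustrative, not the project's defaults):

```
SPRING_DATASOURCE_URL=jdbc:mysql://localhost:3306/ai_learning
SPRING_DATASOURCE_USERNAME=root
SPRING_DATASOURCE_PASSWORD=change-me
SPRING_REDIS_HOST=localhost
APP_AI_ENABLED=true
APP_AI_MOCK_MODE=true
APP_QDRANT_BASE_URL=http://localhost:6333
APP_JWT_SECRET=change-me-to-a-long-random-string
```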

Runtime AI configuration saved from the AI configuration page is written to:

runtime/ai-config.json

Common APIs

Module Example APIs
Auth POST /api/auth/register, POST /api/auth/login, GET /api/user/profile
Materials POST /api/material/upload, GET /api/material/page, POST /api/material/{id}/parse
AI Tasks POST /api/ai/tasks/material/{materialId}/summary, POST /api/ai/tasks/material/{materialId}/question-set
Practice POST /api/practice/start, POST /api/practice/submit, GET /api/practice/page
Analytics GET /api/wrong-questions/page, GET /api/knowledge-mastery/overview, GET /api/learning-analytics/overview
RAG GET /api/rag/material/{materialId}/retrieve-preview, POST /api/rag-eval/datasets/{datasetId}/run
Agent POST /api/assistant/sessions, POST /api/assistant/sessions/{sessionId}/messages/stream
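
As an illustration of the auth flow above (login, then call JWT-protected endpoints), here is a hedged sketch. The response shapes (`{ token }` from login) are assumptions about the API; the HTTP function is injected so the flow can be exercised without a running server.

```typescript
type FetchJson = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string }
) => Promise<unknown>;

async function loginAndGetProfile(
  fetchJson: FetchJson,
  baseUrl: string,
  username: string,
  password: string
) {
  // Assumption: POST /api/auth/login returns { token: string }.
  const login = (await fetchJson(`${baseUrl}/api/auth/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  })) as { token: string };

  // Subsequent requests send the JWT as a Bearer token.
  return fetchJson(`${baseUrl}/api/user/profile`, {
    headers: { Authorization: `Bearer ${login.token}` },
  });
}
```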

Deployment Notes

Current production deployment:

  • Frontend: Vercel
  • Backend: Render Web Service
  • MySQL: Aiven MySQL Free
  • Redis / Valkey: Aiven Valkey Free
  • Vector database: Qdrant Cloud Free

For the full migration and configuration process, see: Free Cloud Deployment: Render + Aiven + Qdrant Cloud.

Frontend production build:

Set-Location frontend
npm run build

Backend production build:

Set-Location backend
mvn -DskipTests package

Vercel uses the root-level vercel.json and requires this production environment variable:

VITE_API_BASE_URL=https://<RENDER_BACKEND_DOMAIN>/api

vercel.json:

{
  "installCommand": "cd frontend && npm ci",
  "buildCommand": "cd frontend && npm run build",
  "outputDirectory": "frontend/dist"
}

For Render backend deployment, use the root-level render.yaml Blueprint or manually create a Docker Web Service. Make sure the service root points to backend and configure MySQL, Valkey, Qdrant, AI provider, and JWT environment variables.
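
The repository's render.yaml is authoritative; purely as an orientation aid, a minimal Blueprint for a Docker web service rooted at backend could look roughly like this (service name and variable list are placeholders):

```
services:
  - type: web
    name: ai-learning-backend
    runtime: docker
    rootDir: backend
    dockerfilePath: ./Dockerfile
    envVars:
      - key: SPRING_DATASOURCE_URL
        sync: false
      - key: APP_JWT_SECRET
        sync: false
```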

Roadmap

  • Add more complete agent tool orchestration and permission control.
  • Add hybrid retrieval, reranking models, and RAG A/B testing.
  • Add study plans, spaced repetition, and personalized recommendations.
  • Add model call cost statistics and request logs.
  • Add GitHub Actions for automated build and deployment.

License

This project is intended for learning, course projects, and portfolio showcases. For commercial use, please verify the licenses of all dependencies and model providers first.