This file provides guidance to AI agents when working with code in this repository.

## Project Overview

This is a boilerplate for creating AI educational chatbots that integrate with the **Lambda-Feedback** educational platform. It deploys as an AWS Lambda function (containerized via Docker) that receives student chat messages with educational context and returns LLM-powered chatbot responses.

## Commands

**Testing:**

```bash
pytest                                 # Run all unit tests
python tests/manual_agent_run.py       # Test agent locally with example inputs
python tests/manual_agent_requests.py  # Test against a running Docker container
```

**Docker:**

```bash
docker build -t llm_chat .
docker run --env-file .env -p 8080:8080 llm_chat
```

**Manual API test (while Docker is running):**

```bash
curl -X POST http://localhost:8080/2015-03-31/functions/function/invocations \
```
| File | Purpose |
|---|---|
| `src/agent/agent.py` | LangGraph stateful graph; manages message history and summarization |
| `src/agent/prompts.py` | System prompts for tutor behavior, summarization, style detection |
| `src/agent/llm_factory.py` | Factory classes for each LLM provider (OpenAI, Google, Azure, Ollama) |
| `src/agent/context.py` | Converts muEd question/submission context dicts to LLM prompt text |
| `tests/utils.py` | Shared test helpers: `assert_valid_chat_request`, `assert_valid_chat_response` |
| `tests/example_inputs/` | Real muEd payloads used for end-to-end tests |

### Agent Logic (LangGraph)

`BaseAgent` maintains a state graph with two nodes:

- **`call_llm`**: Invokes the LLM with the system prompt + conversation summary + conversational style preference
- **`summarize_conversation`**: Triggered when the message count exceeds ~11; summarizes the history and also extracts the student's preferred conversational style

Messages are trimmed after summarization to keep the context window manageable. The `summary` and `conversationalStyle` fields persist across calls via the `ChatRequest` metadata.
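The trigger-and-trim control flow can be sketched in plain Python. This is a minimal illustration only: the threshold constant, function name, and fake summary text below are assumptions, not the actual node logic in `src/agent/agent.py` (which calls an LLM to produce the summary).

```python
# Sketch of the summarize-then-trim flow described above.
# SUMMARY_TRIGGER and maybe_summarize are illustrative names, not the repo's API.

SUMMARY_TRIGGER = 11  # assumed message-count threshold (~11 per the docs)

def maybe_summarize(messages, summary=""):
    """Summarize and trim history once the conversation grows too long."""
    if len(messages) <= SUMMARY_TRIGGER:
        return messages, summary  # below threshold: nothing to do yet
    # In the real agent an LLM writes the summary; here we fake it with a label.
    summary = f"{summary} [summary of {len(messages) - 2} older messages]".strip()
    # Keep only the most recent messages so the context window stays small.
    return messages[-2:], summary

msgs = [f"msg{i}" for i in range(12)]
trimmed, new_summary = maybe_summarize(msgs)
```

The returned `summary` would then be carried forward in the `ChatRequest` metadata on the next call.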

### muEd API Format

`src/module.py` handles the muEd request format (https://mued.org/). The `context` field in `ChatRequest` contains nested educational data (question parts, student submissions, task info), and the `user` field carries user-specific information (e.g. user type, preferences, task progress); both are parsed into a tutoring prompt via `src/agent/context.py`.
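As a hedged illustration of that parsing step, here is a sketch of flattening a nested context dict into prompt text. The key names (`title`, `parts`, `submissions`) are assumptions for illustration; the real keys are defined by the muEd payloads in `tests/example_inputs/` and the logic in `src/agent/context.py`.

```python
# Hypothetical sketch: flatten a nested muEd-style context dict into prompt
# text. The dict schema here is assumed, not the actual muEd format.

def context_to_prompt(context: dict) -> str:
    lines = [f"Question: {context.get('title', 'unknown')}"]
    for part in context.get("parts", []):          # question parts
        lines.append(f"- Part {part['label']}: {part['text']}")
    for sub in context.get("submissions", []):     # student submissions
        lines.append(f"Student submitted: {sub}")
    return "\n".join(lines)

example = {
    "title": "Integration by parts",
    "parts": [{"label": "a", "text": "Evaluate the integral."}],
    "submissions": ["x*sin(x) + cos(x)"],
}
prompt = context_to_prompt(example)
```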

### LLM Configuration

LLM provider and model are set via environment variables (see `.env.example`); `llm_factory.py` selects the provider at runtime. The Lambda function name/identity is set in `config.json`.
81
+
82
+
The agent uses **two separate LLM instances** — `self.llm` for chat responses and `self.summarisation_llm` for conversation summarisation and style analysis. By default both use the same provider, but you can point them at different models (e.g. a cheaper model for summarisation) by changing the class in `agent.py`.
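The runtime provider selection can be sketched as follows. This is a minimal sketch in the spirit of `src/agent/llm_factory.py`: the `LLM_PROVIDER` variable name and the stand-in builder functions are assumptions, not the repo's real API.

```python
import os

# Hypothetical provider registry; real factories would wrap OpenAI, Google,
# Azure, and Ollama clients, which are omitted here.
def _openai_llm():
    return "openai-client"   # stand-in for a real OpenAI chat model

def _ollama_llm():
    return "ollama-client"   # stand-in for a real Ollama chat model

PROVIDERS = {"openai": _openai_llm, "ollama": _ollama_llm}

def make_llm(default: str = "openai"):
    """Select an LLM builder from the environment at runtime."""
    name = os.environ.get("LLM_PROVIDER", default).lower()  # env var name assumed
    try:
        return PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {name!r}") from None

os.environ["LLM_PROVIDER"] = "ollama"  # normally set via .env / CI secrets
client = make_llm()
```

Failing loudly on an unknown provider name keeps a misconfigured deployment from silently falling back to the wrong model.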

## Deployment

- Pushing to the `dev` branch triggers the dev deployment GitHub Actions workflow
- Pushing to `main` triggers staging deployment, with manual approval required for production
- All environment variables (API keys, model names) are injected via GitHub Actions secrets/variables; do not hardcode them

Also, don't forget to update or delete the Quickstart chapter from the `README.md`.

You can create your own invocation to your own agents hosted anywhere. Copy or update `agent.py` from `src/agent/` and edit it to match your LLM agent requirements. Import the new invocation in `module.py`.

Your agent can be based on an LLM hosted anywhere. OpenAI, Google AI, Azure OpenAI, and Ollama are available out of the box via `src/agent/llm_factory.py`, and you can add your own provider there too.

### Prerequisites

```
├── docs/                        # docs for devs and users
├── src/
│   ├── agent/
│   │   ├── agent.py             # LangGraph stateful agent logic
│   │   ├── context.py           # converts muEd context dicts to LLM prompt text
│   │   ├── llm_factory.py       # factory classes for each LLM provider
│   │   └── prompts.py           # system prompts defining the behaviour of the chatbot
│   └── module.py
└── tests/                       # contains all tests for the chat function
    ├── example_inputs/          # muEd example payloads for end-to-end tests
    ├── manual_agent_requests.py # allows testing of the docker container through API requests
    ├── manual_agent_run.py      # allows testing of any LLM agent on a couple of example inputs
    ├── utils.py                 # shared test helpers
    ├── test_example_inputs.py   # pytests for the example input files
    ├── test_index.py            # pytests
    └── test_module.py           # pytests
```