Commit a99b3c9

Merge branch 'mued_api_adopted' into dev

2 parents a5b8f70 + 9a6cc87

17 files changed: +935 −960 lines

README.md

Lines changed: 3 additions & 2 deletions

@@ -1,6 +1,7 @@
-# Lambda Feedback Chat Function Boilerplate
+# reflectiveChatFunction
 
-This repository contains the code needed to develop a modular chatbot to be used on Lambda-Feedback platform [written in Python].
+This repository contains the code for a modular Socratic chatbot to be used on the Lambda-Feedback platform [written in Python].
+More details about the chatbot's behaviour are in the [User Documentation](docs/user.md).
 
 ## Quickstart

config.json

Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
 {
-    "ChatFunctionName": ""
+    "ChatFunctionName": "reflectiveChatFunction"
 }
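The renamed function can be picked up from this file at runtime. A minimal sketch of reading it back, assuming only that the file is plain JSON (the reader code below is illustrative, not from the repo):

```python
import json
import tempfile

# Write a copy of the config shown above, then read it back the way a
# deployment script might (the temp-file handling is only for this demo;
# in the repo the file lives at the project root as config.json).
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"ChatFunctionName": "reflectiveChatFunction"}, f)
    path = f.name

with open(path) as f:
    config = json.load(f)

print(config["ChatFunctionName"])  # → reflectiveChatFunction
```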

docs/dev.md

Lines changed: 33 additions & 7 deletions

@@ -1,14 +1,40 @@
-# YourFunctionName
-*Brief description of what this chat function does, from the developer perspective*
+# reflectiveChatFunction
+This chatbot aims to respond to all relevant tasks the student requests by emphasising self-reflection through asking the student follow-up questions. The chatbot is aware of the question details, answer, worked solution and guidance from the lecturer.
+
+Some technical details:
+<pre style="white-space: pre-wrap;">
+<code>LLM model: gpt-4o-mini (OpenAI)
+response time (on average): 10 seconds
+
+Helping approach: always responds with a follow-up question
+</code>
+</pre>
 
 ## Inputs
-*Specific input parameters which can be supplied when the calling this chat function.*
+Body:
+```JSON
+{
+    "message": "hi",
+    "params": {
+        "conversation_id": "12345Test",
+        "conversation_history": [{"type": "user", "content": "hi"}],
+        "include_test_data": true
+    }
+}
+```
 
 ## Outputs
-*Output schema/values for this function*
-
-## Examples
-*List of example inputs and outputs for this function, each under a different sub-heading*
+```JSON
+{
+    "chatbotResponse": "hi back",
+    "metadata": {
+        "summary": "",
+        "conversational_style": "",
+        "conversation_history": []
+    },
+    "processing_time": 0
+}
+```
 
 ## Testing the Chat Function

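As a quick sanity check, the input body documented in docs/dev.md can be built and serialised with nothing but the standard library (the field values are the illustrative ones from the example above; the endpoint URL is not part of this sketch):

```python
import json

# Build the request body documented above (values are illustrative).
payload = {
    "message": "hi",
    "params": {
        "conversation_id": "12345Test",
        "conversation_history": [{"type": "user", "content": "hi"}],
        "include_test_data": True,
    },
}

# Serialise it the way an HTTP client would before POSTing to the function.
body = json.dumps(payload)

# Round-trip to confirm the body is valid JSON with the documented keys.
decoded = json.loads(body)
print(decoded["message"])                         # → hi
print(decoded["params"]["conversation_id"])       # → 12345Test
```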
docs/user.md

Lines changed: 11 additions & 2 deletions

@@ -1,3 +1,12 @@
-# YourChatFunctionName
+# Reflective Chatbot
 
-Teacher- & Student-facing documentation for this function.
+This chatbot aims to respond to all relevant tasks the student requests by emphasising self-reflection through asking the student follow-up questions. The chatbot is aware of the question details, answer, worked solution and guidance from the lecturer.
+
+Some technical details:
+<pre style="white-space: pre-wrap;">
+<code>LLM model: gpt-4o-mini (OpenAI)
+response time (on average): 10 seconds
+
+Helping approach: always responds with a follow-up question
+</code>
+</pre>

index.py

Lines changed: 14 additions & 30 deletions

@@ -1,52 +1,36 @@
 import json
+from pydantic import ValidationError
+
+from lf_toolkit.chat import ChatRequest
 from src.module import chat_module
-from src.agent.utils.types import JsonType
 
-def handler(event: JsonType, context):
+
+def handler(event, context):
     """
     Lambda handler function
     """
-    # Log the input event for debugging purposes
-    # print("Received event:", " ".join(json.dumps(event, indent=2).splitlines()))
-
     if "body" in event:
         try:
             event = json.loads(event["body"])
         except json.JSONDecodeError:
             return {
                 "statusCode": 400,
-                "body": "Invalid JSON format in the body or body not found. Please check the input."
+                "body": "Invalid JSON format in the body. Please check the input.",
             }
 
-    if "message" not in event:
-        return {
-            "statusCode": 400,
-            "body": "Missing 'message' key in event. Please confirm the key in the json body."
-        }
-    if "params" not in event:
-        return {
-            "statusCode": 400,
-            "body": "Missing 'params' key in event. Please confirm the key in the json body. Make sure it contains the necessary conversation_id."
-        }
-
-    message = event.get("message")
-    params = event.get("params")
+    try:
+        request = ChatRequest.model_validate(event)
+    except ValidationError as e:
+        return {"statusCode": 400, "body": e.json()}
 
     try:
-        chatbot_response = chat_module(message, params)
+        result = chat_module(request)
     except Exception as e:
         return {
             "statusCode": 500,
-            "body": f"An error occurred within the chat_module(): {str(e)}"
+            "body": f"An error occurred within the chat_module(): {str(e)}",
         }
 
-    # Create a response
-    response = {
-        "statusCode": 200,
-        "body": json.dumps(chatbot_response)
-    }
-
-    # Log the response for debugging purposes
+    response = {"statusCode": 200, "body": result.model_dump_json()}
     print("Returning response:", " ".join(json.dumps(response, indent=2).splitlines()))
-
-    return response
+    return response
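The envelope handling in the new handler can be exercised in isolation. The sketch below re-implements just the body-unwrapping step with the standard library; the `ChatRequest` validation from `lf_toolkit` is deliberately left out, so this is a simplified stand-in, not the production code path:

```python
import json

def parse_event(event):
    """Unwrap an API-Gateway-style 'body' envelope, mirroring the new
    handler; returns a (status, message) pair on failure."""
    if "body" in event:
        try:
            return json.loads(event["body"])
        except json.JSONDecodeError:
            return (400, "Invalid JSON format in the body.")
    return event

# A malformed body short-circuits with a 400, as in the handler.
print(parse_event({"body": "{not json"}))
# → (400, 'Invalid JSON format in the body.')

# A well-formed body is unwrapped into a plain dict, ready for validation.
print(parse_event({"body": '{"message": "hi"}'}))
# → {'message': 'hi'}
```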

src/agent/agent.py

Lines changed: 2 additions & 1 deletion

@@ -181,7 +181,8 @@ def invoke_base_agent(query: str, conversation_history: list, summary: str, conv
     print(f'in invoke_base_agent(), thread_id = {session_id}')
 
     config = {"configurable": {"thread_id": session_id, "summary": summary, "conversational_style": conversationalStyle, "question_response_details": question_response_details}}
-    response_events = agent.app.invoke({"messages": conversation_history, "summary": summary, "conversational_style": conversationalStyle}, config=config, stream_mode="values")  # updates
+    messages = conversation_history + [HumanMessage(content=query)]
+    response_events = agent.app.invoke({"messages": messages, "summary": summary, "conversational_style": conversationalStyle}, config=config, stream_mode="values")  # updates
     pretty_printed_response = agent.pretty_response_value(response_events)  # get last event/ai answer in the response
 
     # Gather Metadata from the agent
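The one-line fix above ensures the latest user query actually reaches the graph state instead of only the prior history. The pattern can be illustrated without LangChain; plain dicts stand in for `HumanMessage`, and `invoke_stub` is a hypothetical stand-in for `agent.app.invoke`:

```python
def invoke_stub(state):
    # Hypothetical stand-in for agent.app.invoke: appends one AI reply.
    return {"messages": state["messages"] + [{"type": "ai", "content": "ack"}]}

conversation_history = [
    {"type": "user", "content": "hi"},
    {"type": "ai", "content": "hello"},
]
query = "What is a derivative?"

# The fix: append the new query before invoking, rather than passing
# conversation_history alone (which would silently drop the new query).
messages = conversation_history + [{"type": "user", "content": query}]
result = invoke_stub({"messages": messages})

print(messages[-1]["content"])   # → What is a derivative?
print(len(result["messages"]))   # → 4
```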

src/agent/prompts.py

Lines changed: 32 additions & 4 deletions

@@ -1,5 +1,5 @@
-#
-# NOTE: Default prompts generated with the help of ChatGPT GPT-4o Nov 2024
+# NOTE:
+# PROMPTS generated with the help of Claude 4
 #
 # Description of the prompts:
 #
@@ -14,8 +14,36 @@
 #
 
 # 1. Role Prompt
-role_prompt = "You are an excellent tutor that aims to provide clear and concise explanations to students. I am the student. Your task is to answer my questions and provide guidance on the topic discussed. Ensure your responses are accurate, informative, and tailored to my level of understanding and conversational preferences. If I seem to be struggling or am frustrated, refer to my progress so far and the time I spent on the question vs the expected guidance. If I ask about a topic that is irrelevant, then say 'I'm not familiar with that topic, but I can help you with the [topic]. You do not need to end your messages with a concluding statement.\n\n"
-
+role_prompt = """You are a Socratic tutor who guides students to discover knowledge through thoughtful questioning rather than direct instruction. Your primary goal is to help students think critically and arrive at understanding through their own reasoning.
+
+**Core Behavior:**
+- ALWAYS end your response with a follow-up question that encourages deeper thinking
+- Guide students to discover answers through strategic questioning rather than providing direct explanations
+- Ask questions that build upon the student's current understanding
+- Use questions to reveal gaps in knowledge or misconceptions
+- Encourage students to explain their reasoning and thought processes
+
+**Question Types to Use:**
+- Clarifying questions: "What do you mean when you say...?"
+- Assumption-probing questions: "What assumptions are you making here?"
+- Evidence-based questions: "What evidence supports your thinking?"
+- Perspective questions: "How might someone who disagrees respond?"
+- Implication questions: "If that's true, what does that imply about...?"
+- Meta-questions: "Why do you think this question is important?"
+
+**Guidelines:**
+- When a student asks a direct question, respond with a counter-question that guides them toward the answer
+- If providing any information, immediately follow with a question that challenges them to apply or extend that knowledge
+- Adapt your questioning style to the student's level and subject matter
+- If a student seems frustrated, ask questions about their thought process to identify where they're getting stuck
+- Never provide complete answers—always leave room for the student to think and respond
+
+**Example Interaction Style:**
+Student: "What's the derivative of x²?"
+Tutor: "Let's think about this step by step. What does a derivative represent in this physics question, and how might we approach finding the rate of change of x²?"
+
+Remember: Your role is to be the question-asker, not the answer-giver. Every response should end with a thoughtful question that moves the student's understanding forward. If the student seems to be struggling or frustrated, refer to their progress so far and the time they spent on the question vs the expected guidance. If they ask about a topic that is irrelevant, then say 'I'm not familiar with that topic, but I can help you with the [topic].' You do not need to end your messages with a concluding statement.
+"""
 # 2. Summary Prompts
 summary_guidelines = """Ensure the summary is:
