
Commit 09c1898

refactoring: simplified folder structure

1 parent: c274bbe
25 files changed: +174 −498 lines

README.md

Lines changed: 34 additions & 21 deletions

````diff
@@ -43,11 +43,11 @@ In GitHub, choose Use this template > Create a new repository in the repository
 
 Choose the owner, and pick a name for the new repository.
 
-> [!IMPORTANT] If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.
+> [!IMPORTANT] If you want to deploy the chat function to Lambda Feedback, make sure to choose the `Lambda Feedback` organization as the owner.
 
-Set the visibility to Public or Private.
+Set the visibility to `Public` or `Private`.
 
-> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to Public.
+> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to `Public`.
 
 Click on Create repository.
 
@@ -78,9 +78,9 @@ Also, don't forget to update or delete the Quickstart chapter from the `README.m
 
 ## Development
 
-You can create your own invocation to your own agents hosted anywhere. Copy or update the `base_agent` from `src/agents/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
+You can create your own invocation to your own agents hosted anywhere. Copy or update `agent.py` from `src/agent/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
 
-You agent can be based on an LLM hosted anywhere, you have available currently OpenAI, AzureOpenAI, and Ollama models but you can introduce your own API call in the `src/agents/llm_factory.py`.
+Your agent can be based on an LLM hosted anywhere. OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agent/utils/llm_factory.py`.
 
 ### Prerequisites
 
@@ -90,23 +90,37 @@ You agent can be based on an LLM hosted anywhere, you have available currently O
 ### Repository Structure
 
 ```bash
-.github/workflows/
-dev.yml # deploys the DEV function to Lambda Feedback
-main.yml # deploys the STAGING function to Lambda Feedback
-test-report.yml # gathers Pytest Report of function tests
-
-docs/ # docs for devs and users
-
-src/module.py # chat_module function implementation
-src/module_test.py # chat_module function tests
-src/agents/ # find all agents developed for the chat functionality
-src/agents/utils/test_prompts.py # allows testing of any LLM agent on a couple of example inputs containing Lambda Feedback Questions and synthetic student conversations
+.
+├── .github/workflows/
+│   ├── dev.yml          # deploys the DEV function to Lambda Feedback
+│   ├── main.yml         # deploys the STAGING and PROD functions to Lambda Feedback
+│   └── test-report.yml  # gathers Pytest Report of function tests
+├── docs/                # docs for devs and users
+├── src/
+│   ├── agent/
+│   │   ├── utils/       # utils for the agent, including the llm_factory
+│   │   ├── agent.py     # the agent logic
+│   │   └── prompts.py   # the system prompts defining the behaviour of the chatbot
+│   └── module.py
+└── tests/               # contains all tests for the chat function
+    ├── manual_agent_requests.py  # allows testing of the docker container through API requests
+    ├── manual_agent_run.py       # allows testing of any LLM agent on a couple of example inputs
+    ├── test_index.py             # pytests
+    └── test_module.py            # pytests
 ```
 
 
 ## Testing the Chat Function
 
-To test your function, you can either call the code directly through a python script, or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+To test your function, you can run the unit tests, call the code directly through a python script, or build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+
+### Run Unit Tests
+
+You can run the unit tests using `pytest`.
+
+```bash
+pytest
+```
 
 ### Run the Chat Script
 
@@ -116,9 +130,9 @@ You can run the Python function itself. Make sure to have a main function in eit
 python src/module.py
 ```
 
-You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
+You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
 ```bash
-python src/agents/utils/testbench_agents.py
+python tests/manual_agent_run.py
 ```
 
 ### Calling the Docker Image Locally
@@ -156,7 +170,7 @@ curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations
 #### Call Docker Container
 ##### A. Call Docker with Python Requests
 
-In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads input files with the expected schema. You can use this to test your curl calls of the chatbot.
+In the `tests/` folder you can find the `manual_agent_requests.py` script that calls the POST URL of the running docker container. It reads input files with the expected schema. You can use this to test your curl calls of the chatbot.
 
 ##### B. Call Docker Container through API request
 
@@ -183,7 +197,6 @@ Body with optional Params:
     "conversational_style":" ",
    "question_response_details": "",
    "include_test_data": true,
-    "agent_type": {agent_name}
  }
 }
 ```
````
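The `manual_agent_requests.py` script referenced in the README diff is not itself shown in this commit. As a rough, hypothetical sketch of what such a request looks like: the invocation URL and the field names under `params` come from the README's example body, while the top-level payload shape (the `message` key) and the use of `urllib` are assumptions, not the repository's actual schema.

```python
import json
import urllib.request

# Invocation URL of the locally running Lambda runtime container (from the README).
URL = "http://localhost:8080/2015-03-31/functions/function/invocations"

# Hypothetical payload: the keys under "params" come from the README's example
# body; the overall shape is an assumption.
payload = {
    "message": "Can you help me with this question?",
    "params": {
        "conversational_style": " ",
        "question_response_details": "",
        "include_test_data": True,
    },
}

def invoke(url: str, body: dict) -> dict:
    """POST a JSON body to the container and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(invoke(URL, payload))
```

The same body works with the README's `curl --location` example against a container started locally.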

index.py

Lines changed: 2 additions & 6 deletions

```diff
@@ -1,10 +1,6 @@
 import json
-try:
-    from .src.module import chat_module
-    from .src.agents.utils.types import JsonType
-except ImportError:
-    from src.module import chat_module
-    from src.agents.utils.types import JsonType
+from src.module import chat_module
+from src.agent.utils.types import JsonType
 
 def handler(event: JsonType, context):
     """
```
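The deleted `try`/`except ImportError` let `index.py` work both as part of a package (relative `.src.module`) and as a top-level script; after the flattening, only the plain `src.…` form remains, so the handler must always run with the repository root on `sys.path`. A minimal sketch of the general fallback idiom being removed — the module names below are stand-ins, not the repository's modules:

```python
import importlib

def resolve(candidates: list[str]):
    """Import the first module name that resolves; mirrors the try/except
    import fallback that this commit deletes from index.py."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {candidates} could be imported")

# The old code tried a package-qualified path first, then the flat path;
# "json" stands in here for the flat `src.module` import.
module = resolve(["package_qualified_form_xyz", "json"])
assert module.__name__ == "json"
```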

reports/pytest.xml

Lines changed: 1 addition & 0 deletions

```diff
@@ -0,0 +1 @@
+<?xml version="1.0" encoding="utf-8"?><testsuites name="pytest tests"><testsuite name="pytest" errors="0" failures="0" skipped="0" tests="6" time="17.072" timestamp="2025-12-08T16:00:44.975696+00:00" hostname="IC-JN2XP6MM9W"><testcase classname="tests.test_index.TestChatIndexFunction" name="test_correct_arguments" time="2.573" /><testcase classname="tests.test_index.TestChatIndexFunction" name="test_correct_response" time="1.150" /><testcase classname="tests.test_index.TestChatIndexFunction" name="test_missing_argument" time="0.001" /><testcase classname="tests.test_module.TestChatModuleFunction" name="test_agent_output" time="1.017" /><testcase classname="tests.test_module.TestChatModuleFunction" name="test_missing_parameters" time="5.527" /><testcase classname="tests.test_module.TestChatModuleFunction" name="test_processing_time_calc" time="2.562" /></testsuite></testsuites>
```

src/__init__.py

Whitespace-only changes.

Lines changed: 7 additions & 13 deletions

```diff
@@ -1,13 +1,7 @@
-try:
-    from ..llm_factory import OpenAILLMs, GoogleAILLMs
-    from .base_prompts import \
-        role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
-    from ..utils.types import InvokeAgentResponseType
-except ImportError:
-    from src.agents.llm_factory import OpenAILLMs, GoogleAILLMs
-    from src.agents.base_agent.base_prompts import \
-        role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
-    from src.agents.utils.types import InvokeAgentResponseType
+from src.agent.utils.llm_factory import OpenAILLMs, GoogleAILLMs
+from src.agent.prompts import \
+    role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
+from src.agent.utils.types import InvokeAgentResponseType
 
 from langgraph.graph import StateGraph, START, END
 from langchain_core.messages import SystemMessage, RemoveMessage, HumanMessage, AIMessage
@@ -62,7 +56,7 @@ def call_model(self, state: State, config: RunnableConfig) -> str:
         system_message = self.role_prompt
 
         # Adding external student progress and question context details from data queries
-        question_response_details = config["configurable"].get("question_response_details", "")
+        question_response_details = config.get("configurable", {}).get("question_response_details", "")
         if question_response_details:
             system_message += f"## Known Question Materials: {question_response_details} \n\n"
 
@@ -98,8 +92,8 @@ def summarize_conversation(self, state: State, config: RunnableConfig) -> dict:
         """Summarize the conversation."""
 
         summary = state.get("summary", "")
-        previous_summary = config["configurable"].get("summary", "")
-        previous_conversationalStyle = config["configurable"].get("conversational_style", "")
+        previous_summary = config.get("configurable", {}).get("summary", "")
+        previous_conversationalStyle = config.get("configurable", {}).get("conversational_style", "")
         if previous_summary:
             summary = previous_summary
 
```
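Beyond the import flattening, the one behavioural change in the agent file is defensive config access: `config["configurable"]` raises `KeyError` when the `"configurable"` section is absent, while the chained `.get` form falls back to defaults. A minimal illustration of the difference — the plain dicts here are simplified stand-ins for LangChain's `RunnableConfig`:

```python
def read_details(config: dict) -> str:
    # New form: tolerates a missing "configurable" section entirely.
    return config.get("configurable", {}).get("question_response_details", "")

with_section = {"configurable": {"question_response_details": "Question 1 materials"}}
without_section = {}  # e.g. the agent invoked without any configurable context

assert read_details(with_section) == "Question 1 materials"
assert read_details(without_section) == ""  # old subscript form would raise KeyError here
```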
Lines changed: 38 additions & 23 deletions

```diff
@@ -1,8 +1,43 @@
-# NOTE:
-# PROMPTS generated with the help of ChatGPT GPT-4o Nov 2024
-
+#
+# NOTE: Default prompts generated with the help of ChatGPT GPT-4o Nov 2024
+#
+# Description of the prompts:
+#
+# 1. role_prompt: Sets the overall role and behaviour of the chatbot.
+#
+# 2. summary_prompt: Used to generate a summary of the conversation.
+# 2. update_summary_prompt: Used to update the conversation summary with new messages.
+# 2. summary_system_prompt: Provides context for the chatbot based on the existing summary.
+#
+# 3. conv_pref_prompt: Used to analyze and extract the student's conversational style and learning preferences.
+# 3. update_conv_pref_prompt: Used to update the conversational style based on new interactions.
+#
+
+# 1. Role Prompt
 role_prompt = "You are an excellent tutor that aims to provide clear and concise explanations to students. I am the student. Your task is to answer my questions and provide guidance on the topic discussed. Ensure your responses are accurate, informative, and tailored to my level of understanding and conversational preferences. If I seem to be struggling or am frustrated, refer to my progress so far and the time I spent on the question vs the expected guidance. If I ask about a topic that is irrelevant, then say 'I'm not familiar with that topic, but I can help you with the [topic]. You do not need to end your messages with a concluding statement.\n\n"
 
+# 2. Summary Prompts
+summary_guidelines = """Ensure the summary is:
+
+Concise: Keep the summary brief while including all essential information.
+Structured: Organize the summary into sections such as 'Topics Discussed' and 'Top 3 Key Detailed Ideas'.
+Neutral and Accurate: Avoid adding interpretations or opinions; focus only on the content shared.
+When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.
+Last messages: Include the most recent 5 messages to provide context for the summary.
+
+Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion."""
+
+summary_prompt = f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion.
+
+{summary_guidelines}"""
+
+update_summary_prompt = f"""Update the summary by taking into account the new messages above.
+
+{summary_guidelines}"""
+
+summary_system_prompt = "You are continuing a tutoring session with the student. Background context: {summary}. Use this context to inform your understanding but do not explicitly restate, refer to, or incorporate the details directly in your responses unless the user brings them up. Respond naturally to the user's current input, assuming prior knowledge from the summary."
+
+# 3. Conversational Preference Prompt
 pref_guidelines = """**Guidelines:**
 - Use concise, objective language.
 - Note the student's educational goals, such as understanding foundational concepts, passing an exam, getting top marks, code implementation, hands-on practice, etc.
@@ -57,23 +92,3 @@
 
 {pref_guidelines}
 """
-
-summary_guidelines = """Ensure the summary is:
-
-Concise: Keep the summary brief while including all essential information.
-Structured: Organize the summary into sections such as 'Topics Discussed' and 'Top 3 Key Detailed Ideas'.
-Neutral and Accurate: Avoid adding interpretations or opinions; focus only on the content shared.
-When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.
-Last messages: Include the most recent 5 messages to provide context for the summary.
-
-Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion."""
-
-summary_prompt = f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion.
-
-{summary_guidelines}"""
-
-update_summary_prompt = f"""Update the summary by taking into account the new messages above.
-
-{summary_guidelines}"""
-
-summary_system_prompt = "You are continuing a tutoring session with the student. Background context: {summary}. Use this context to inform your understanding but do not explicitly restate, refer to, or incorporate the details directly in your responses unless the user brings them up. Respond naturally to the user's current input, assuming prior knowledge from the summary."
```
File renamed without changes.
File renamed without changes.
File renamed without changes.
