`README.md` — 34 additions & 21 deletions
```diff
@@ -43,11 +43,11 @@ In GitHub, choose Use this template > Create a new repository in the repository
 Choose the owner, and pick a name for the new repository.
 
-> [!IMPORTANT] If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.
+> [!IMPORTANT] If you want to deploy the chat function to Lambda Feedback, make sure to choose the `Lambda Feedback` organization as the owner.
 
-Set the visibility to Public or Private.
+Set the visibility to `Public` or `Private`.
 
-> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to Public.
+> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to `Public`.
 
 Click on Create repository.
```
```diff
@@ -78,9 +78,9 @@ Also, don't forget to update or delete the Quickstart chapter from the `README.m
 ## Development
 
-You can create your own invocation to your own agents hosted anywhere. Copy or update the `base_agent` from `src/agents/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
+You can create your own invocation for your own agents hosted anywhere. Copy or update `agent.py` from `src/agent/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
 
-You agent can be based on an LLM hosted anywhere, you have available currently OpenAI, AzureOpenAI, and Ollama models but you can introduce your own API call in the `src/agents/llm_factory.py`.
+Your agent can be based on an LLM hosted anywhere. OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agent/utils/llm_factory.py`.
 
 ### Prerequisites
```
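To illustrate the kind of extension point the new `+` line describes, here is a minimal sketch of a provider-dispatch factory. All names (`LLMConfig`, `get_llm`, the `"my_api"` provider) are assumptions for illustration, not the repository's actual `llm_factory.py` API:

```python
# Hypothetical sketch of an LLM factory with a pluggable provider registry.
# Class and function names are illustrative assumptions, not the repo's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LLMConfig:
    provider: str  # e.g. "openai", "azure", "ollama", or your own backend
    model: str

def _call_my_api(prompt: str, model: str) -> str:
    # Stand-in for a real HTTP call to a self-hosted LLM endpoint.
    return f"[{model}] echo: {prompt}"

# Registry mapping provider names to callables, so a new backend can be
# plugged in without touching the agent logic.
_PROVIDERS: dict[str, Callable[[str, str], str]] = {
    "my_api": _call_my_api,
}

def get_llm(config: LLMConfig) -> Callable[[str], str]:
    try:
        backend = _PROVIDERS[config.provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {config.provider}")
    return lambda prompt: backend(prompt, config.model)
```

Registering a new entry in the dictionary is then the only change needed to add an API call for another host.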
````diff
@@ -90,23 +90,37 @@ You agent can be based on an LLM hosted anywhere, you have available currently O
 ### Repository Structure
 
 ```bash
-.github/workflows/
-    dev.yml # deploys the DEV function to Lambda Feedback
-    main.yml # deploys the STAGING function to Lambda Feedback
-    test-report.yml # gathers Pytest Report of function tests
-
-docs/ # docs for devs and users
-
-src/module.py # chat_module function implementation
-src/module_test.py # chat_module function tests
-src/agents/ # find all agents developed for the chat functionality
-src/agents/utils/test_prompts.py # allows testing of any LLM agent on a couple of example inputs containing Lambda Feedback Questions and synthetic student conversations
+.
+├── .github/workflows/
+│   ├── dev.yml # deploys the DEV function to Lambda Feedback
+│   ├── main.yml # deploys the STAGING and PROD functions to Lambda Feedback
+│   └── test-report.yml # gathers Pytest Report of function tests
+├── docs/ # docs for devs and users
+├── src/
+│   ├── agent/
+│   │   ├── utils/ # utils for the agent, including the llm_factory
+│   │   ├── agent.py # the agent logic
+│   │   └── prompts.py # the system prompts defining the behaviour of the chatbot
+│   └── module.py
+└── tests/ # contains all tests for the chat function
+    ├── manual_agent_requests.py # allows testing of the docker container through API requests
+    ├── manual_agent_run.py # allows testing of any LLM agent on a couple of example inputs
+    ├── test_index.py # pytests
+    └── test_module.py # pytests
 ```
 
 ## Testing the Chat Function
 
-To test your function, you can either call the code directly through a python script. Or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+To test your function, you can run the unit tests, call the code directly through a Python script, or build the chat function docker container locally and call it through an API request. Below you can find details on those processes.
+
+### Run Unit Tests
+
+You can run the unit tests using `pytest`.
+
+```bash
+pytest
+```
 
 ### Run the Chat Script
````
````diff
@@ -116,9 +130,9 @@ You can run the Python function itself. Make sure to have a main function in eit
 python src/module.py
 ```
 
-You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
+You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
````

Further down in the same file:

```diff
-In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads any kind of input files with the expected schema. You can use this to test your curl calls of the chatbot.
+In the `tests/` folder you can find the `manual_agent_requests.py` script that calls the POST URL of the running docker container. It reads any input file with the expected schema. You can use this to test your curl calls of the chatbot.
 
 ##### B. Call Docker Container through API request
```
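As a rough sketch of what posting to the running container looks like, the snippet below builds a JSON payload and sends it over HTTP. The port, path, and payload keys are assumptions for illustration; the actual schema is the one `manual_agent_requests.py` reads from its input files:

```python
# Sketch of calling the running docker container over HTTP, similar in spirit
# to tests/manual_agent_requests.py. The URL, path, and payload shape are
# assumptions — check that script for the schema the function actually expects.
import json
import urllib.request

def build_payload(message: str) -> dict:
    # Hypothetical schema; align the keys with what the chat function expects.
    return {"message": message, "params": {}}

def post_chat(base_url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        base_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires the container to be running locally):
# post_chat("http://localhost:8080/chat", build_payload("Explain eigenvalues"))
```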
`src/agent/prompts.py` — 38 additions & 23 deletions
```diff
@@ -1,8 +1,43 @@
-# NOTE:
-# PROMPTS generated with the help of ChatGPT GPT-4o Nov 2024
-
+#
+# NOTE: Default prompts generated with the help of ChatGPT GPT-4o Nov 2024
+#
+# Description of the prompts:
+#
+# 1. role_prompt: Sets the overall role and behaviour of the chatbot.
+#
+# 2. summary_prompt: Used to generate a summary of the conversation.
+# 2. update_summary_prompt: Used to update the conversation summary with new messages.
+# 2. summary_system_prompt: Provides context for the chatbot based on the existing summary.
+#
+# 3. conv_pref_prompt: Used to analyze and extract the student's conversational style and learning preferences.
+# 3. update_conv_pref_prompt: Used to update the conversational style based on new interactions.
+#
+
+# 1. Role Prompt
 role_prompt="You are an excellent tutor that aims to provide clear and concise explanations to students. I am the student. Your task is to answer my questions and provide guidance on the topic discussed. Ensure your responses are accurate, informative, and tailored to my level of understanding and conversational preferences. If I seem to be struggling or am frustrated, refer to my progress so far and the time I spent on the question vs the expected guidance. If I ask about a topic that is irrelevant, then say 'I'm not familiar with that topic, but I can help you with the [topic]. You do not need to end your messages with a concluding statement.\n\n"
 
+# 2. Summary Prompts
+summary_guidelines="""Ensure the summary is:
+
+Concise: Keep the summary brief while including all essential information.
+Structured: Organize the summary into sections such as 'Topics Discussed' and 'Top 3 Key Detailed Ideas'.
+Neutral and Accurate: Avoid adding interpretations or opinions; focus only on the content shared.
+When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.
+Last messages: Include the most recent 5 messages to provide context for the summary.
+
+Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion."""
+
+summary_prompt=f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion.
+
+{summary_guidelines}"""
+
+update_summary_prompt=f"""Update the summary by taking into account the new messages above.
+
+{summary_guidelines}"""
+
+summary_system_prompt="You are continuing a tutoring session with the student. Background context: {summary}. Use this context to inform your understanding but do not explicitly restate, refer to, or incorporate the details directly in your responses unless the user brings them up. Respond naturally to the user's current input, assuming prior knowledge from the summary."
+
+# 3. Conversational Preference Prompt
 pref_guidelines="""**Guidelines:**
 - Use concise, objective language.
 - Note the student's educational goals, such as understanding foundational concepts, passing an exam, getting top marks, code implementation, hands-on practice, etc.
```
```diff
@@ -57,23 +92,3 @@
 
 {pref_guidelines}
 """
-
-summary_guidelines="""Ensure the summary is:
-
-Concise: Keep the summary brief while including all essential information.
-Structured: Organize the summary into sections such as 'Topics Discussed' and 'Top 3 Key Detailed Ideas'.
-Neutral and Accurate: Avoid adding interpretations or opinions; focus only on the content shared.
-When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.
-Last messages: Include the most recent 5 messages to provide context for the summary.
-
-Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion."""
-
-summary_prompt=f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion.
-
-{summary_guidelines}"""
-
-update_summary_prompt=f"""Update the summary by taking into account the new messages above.
-
-{summary_guidelines}"""
-
-summary_system_prompt="You are continuing a tutoring session with the student. Background context: {summary}. Use this context to inform your understanding but do not explicitly restate, refer to, or incorporate the details directly in your responses unless the user brings them up. Respond naturally to the user's current input, assuming prior knowledge from the summary."
```
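Note that `summary_system_prompt` is a plain string containing a `{summary}` placeholder, unlike the f-strings beside it, so the summary is presumably interpolated later at runtime. A minimal sketch of that assumed usage, with the prompt text abbreviated here for illustration:

```python
# Assumed call site: the agent code fills {summary} via str.format at runtime.
# The prompt text below is an abbreviated copy, for illustration only.
summary_system_prompt = (
    "You are continuing a tutoring session with the student. "
    "Background context: {summary}. Respond naturally to the user's "
    "current input, assuming prior knowledge from the summary."
)

summary = "- Topics Discussed: integration by parts"
system_message = summary_system_prompt.format(summary=summary)
```

Keeping it a plain string avoids evaluating `{summary}` at import time, when no conversation summary exists yet.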