# Developing Chat Agents: Getting Started

## What is a Chat Agent?

A chat agent is a function that calls Large Language Models (LLMs) to respond to a student's messages, given contextual data such as:

- question data
- user data, such as the student's past responses to the problem

Chat agents capture and automate the process of assisting students while they learn outside the classroom.

## Getting Set Up for Development

1. Get the code on your local machine (using GitHub Desktop or the `git` CLI):

   - For new functions: clone the main repo for [lambda-chat](https://github.com/lambda-feedback/lambda-chat) and create a new branch. Then go to `src/agents` and copy the `base_agent` folder.

   - For existing functions: please make your changes on a new, separate branch.
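
   For the clone-and-branch route, the steps above might look like this on the command line (the branch and folder names below are placeholders, not conventions required by the repo):

   ```shell
   # Clone the main repo and create a working branch (branch name is a placeholder)
   git clone https://github.com/lambda-feedback/lambda-chat.git
   cd lambda-chat
   git checkout -b my-new-agent

   # Copy the base_agent template as a starting point for a new agent
   cp -r src/agents/base_agent src/agents/my_new_agent
   ```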

2. _If you are creating a new chatbot agent_, use its name as the folder name in `src/agents` and in its corresponding files.
3. You are now ready to start making changes and implementing features by editing the main function-logic files:

   1. **`src/agents/{base_agent}/{base}_agent.py`**: This file contains the main LLM pipeline, built with [LangGraph](https://langchain-ai.github.io/langgraph/) and [LangChain](https://python.langchain.com/docs/introduction/).

      - The agent expects the following inputs when it is called:

        Body with required params:

        ```json
        {
            "message": "hi",
            "params": {
                "conversation_id": "12345Test",
                "conversation_history": [{"type": "user", "content": "hi"}]
            }
        }
        ```

        Body with optional params (`{agent_name}` is a placeholder for your agent's name):

        ```json
        {
            "message": "hi",
            "params": {
                "conversation_id": "12345Test",
                "conversation_history": [{"type": "user", "content": "hi"}],
                "summary": " ",
                "conversational_style": " ",
                "question_response_details": "",
                "include_test_data": true,
                "agent_type": "{agent_name}"
            }
        }
        ```
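
As a sanity check while developing, the required fields of the bodies above can be validated before the pipeline runs. A minimal sketch using only the standard library (the function name `validate_request` is illustrative, not part of the repo):

```python
def validate_request(body: dict) -> list[str]:
    """Return a list of problems with an incoming request body.

    Field names follow the required-params body shown above.
    """
    problems = []
    if not isinstance(body.get("message"), str):
        problems.append("'message' must be a string")
    params = body.get("params")
    if not isinstance(params, dict):
        problems.append("'params' must be an object")
        return problems
    if not isinstance(params.get("conversation_id"), str):
        problems.append("'params.conversation_id' must be a string")
    history = params.get("conversation_history")
    if not isinstance(history, list) or not all(
        isinstance(turn, dict) and {"type", "content"} <= turn.keys()
        for turn in history
    ):
        problems.append(
            "'params.conversation_history' must be a list of "
            "{'type': ..., 'content': ...} objects"
        )
    return problems

# A body matching the required-params example passes:
ok = {
    "message": "hi",
    "params": {
        "conversation_id": "12345Test",
        "conversation_history": [{"type": "user", "content": "hi"}],
    },
}
assert validate_request(ok) == []
```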

   2. **`src/agents/{base_agent}/{base}_prompts.py`**: This is where you write the system prompts that describe how your AI assistant should behave and respond to the user.
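
A prompts file is typically just named string constants, optionally with a small helper to interpolate per-request context. A hypothetical sketch (all names here are illustrative, not the repo's actual identifiers):

```python
# Hypothetical contents of a {base}_prompts.py file; names are illustrative.
SYSTEM_PROMPT = (
    "You are a patient tutoring assistant. Guide the student towards the "
    "answer with hints; do not reveal the full solution immediately."
)

# Optional per-request context can be filled into a template before each call.
CONTEXT_TEMPLATE = (
    "Question details:\n{question_response_details}\n\n"
    "Conversation summary so far:\n{summary}"
)

def build_context(question_response_details: str, summary: str) -> str:
    """Fill the context template with the per-request data."""
    return CONTEXT_TEMPLATE.format(
        question_response_details=question_response_details,
        summary=summary,
    )
```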

   3. Make sure to add your agent's `invoke()` function to the `module.py` file.
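
One way this wiring is commonly done is a registry mapping the `agent_type` param to each agent's `invoke()` function. A hedged sketch of the idea, not the repo's actual `module.py` (all names below are illustrative):

```python
# Illustrative sketch of registering agents; the real module.py may differ.

def base_invoke(message: str, params: dict) -> dict:
    """Stand-in for base_agent's invoke(); returns a canned reply."""
    return {"reply": f"echo: {message}"}

# Map the 'agent_type' request param to the matching invoke() function.
AGENTS = {
    "base_agent": base_invoke,
}

def dispatch(message: str, params: dict) -> dict:
    """Route a request to the agent named by params['agent_type']."""
    agent_type = params.get("agent_type", "base_agent")
    try:
        invoke = AGENTS[agent_type]
    except KeyError:
        raise ValueError(f"unknown agent_type: {agent_type!r}")
    return invoke(message, params)
```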

   4. Please add a `README.md` file describing the use and behaviour of your agent.

4. Changes can be tested locally by running the pipeline tests:

   ```bash
   pytest src/module_test.py
   ```

   [Running and Testing Agents Locally](local.md){ .md-button }

5. Merge commits into any branch (except `main`) trigger the `dev.yml` workflow, which builds the Docker image and pushes it to a shared `dev` ECR repository, making the function available from the `dev` and `localhost` client apps.

6. To make your new chatbot available on the LambdaFeedback platform, you will have to get in contact with the admins of the platform.