
Commit 3202f4c

zimeg, mwbrooks, lukegalbraithrussell, and srtaalej authored

feat: showcase text generation and thinking steps with tool calls from suggested prompts (#43)

Co-authored-by: Michael Brooks <michael@michaelbrooks.ca>
Co-authored-by: Luke Russell <31357343+lukegalbraithrussell@users.noreply.github.com>
Co-authored-by: Ale Mercado <maria.mercado@salesforce.com>
Co-authored-by: Michael Brooks <mbrooks@slack-corp.com>

1 parent b513a58 commit 3202f4c

File tree

12 files changed: +359 -110 lines changed


.env.sample

Lines changed: 7 additions & 3 deletions

@@ -1,5 +1,9 @@
-SLACK_APP_TOKEN=YOUR_SLACK_APP_TOKEN
-SLACK_BOT_TOKEN=YOUR_SLACK_BOT_TOKEN
+# Optional, uncomment and set when running without the Slack CLI (python3 app.py).
+# SLACK_APP_TOKEN=YOUR_SLACK_APP_TOKEN
+# SLACK_BOT_TOKEN=YOUR_SLACK_BOT_TOKEN
+
+# Optional, uncomment and set when using a custom Slack instance.
 # SLACK_API_URL=YOUR_SLACK_API_URL
-# This template uses OpenAI, but you can use any other provider!
+
+# Required, set your OpenAI API key.
 OPENAI_API_KEY=YOUR_OPENAI_API_KEY
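When running without the Slack CLI (`python3 app.py`, per the comment in the diff above), the same values can be exported directly in the shell instead of a `.env` file. A minimal sketch using the sample placeholders, which must be replaced with real credentials:

```shell
# Placeholder values from .env.sample; substitute real tokens before starting the app
export SLACK_APP_TOKEN=YOUR_SLACK_APP_TOKEN
export SLACK_BOT_TOKEN=YOUR_SLACK_BOT_TOKEN
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY
```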

README.md

Lines changed: 21 additions & 6 deletions

@@ -1,8 +1,15 @@
 # AI Agent App Template (Bolt for Python)
 
-This Bolt for Python template demonstrates how to build [AI Apps](https://docs.slack.dev/ai/) in Slack.
+This Bolt for Python template demonstrates how to build [AI Apps](https://docs.slack.dev/ai/) in Slack, using models from [OpenAI](https://openai.com).
 
-Models from [OpenAI](https://openai.com) are used and can be customized for prompts of all kinds.
+## App overview
+
+Setting up this app can provide you with an AI agent that enables the following flow:
+
+* Users open the assistant side panel in Slack and see suggested prompts.
+* The app calls OpenAI's API when a user selects a prompt or sends a message.
+* The app streams in its response, which can be a combination of text and tasks.
+* Users provide feedback via buttons.
 
 ## Setup
 
@@ -16,7 +23,7 @@ Join the [Slack Developer Program](https://api.slack.com/developer-program) for
 
 Add this app to your workspace using either the Slack CLI or other development tooling, then read ahead to configuring LLM responses in the **[Providers](#providers)** section.
 
-### Using Slack CLI
+<details><summary><strong>Using Slack CLI</strong></summary>
 
 Install the latest version of the Slack CLI for your operating system:
 
@@ -46,7 +53,11 @@ slack install
 
 After the Slack app has been created you're all set to configure the LLM provider!
 
-### Using Terminal
+</details>
+
+<details><summary><strong>Using Terminal</strong></summary>
+
+#### Create Your Slack App
 
 1. Open [https://api.slack.com/apps/new](https://api.slack.com/apps/new) and choose "From an app manifest"
 2. Choose the workspace you want to install the application to
@@ -91,6 +102,8 @@ source .venv/bin/activate # for Windows OS, .\.venv\Scripts\Activate instead sh
 pip install -r requirements.txt
 ```
 
+</details>
+
 ## Providers
 
 ### OpenAI Setup
@@ -150,9 +163,11 @@ Configures the new Slack Assistant features, providing a dedicated side panel UI
 - The `assistant_thread_started.py` file, which responds to new app threads with a list of suggested prompts.
 - The `message.py` file, which responds to user messages sent to app threads or from the **Chat** and **History** tab with an LLM generated response.
 
-### `/ai`
+### `/agent`
+
+The `llm_caller.py` file calls the OpenAI API and streams the generated response into a Slack conversation.
 
-The `llm_caller.py` file, which handles OpenAI API integration and message formatting. It includes the `call_llm()` function that sends conversation threads to OpenAI's models.
+The `tools` directory contains app-specific functions for the LLM to call.
 
 ## App Distribution / OAuth
 
agent/llm_caller.py

Lines changed: 101 additions & 0 deletions

@@ -0,0 +1,101 @@
+import json
+import os
+
+import openai
+from openai.types.responses import ResponseInputParam
+from slack_sdk.models.messages.chunk import TaskUpdateChunk
+from slack_sdk.web.chat_stream import ChatStream
+
+from agent.tools.dice import roll_dice, roll_dice_definition
+
+
+def call_llm(
+    streamer: ChatStream,
+    prompts: ResponseInputParam,
+):
+    """
+    Stream an LLM response to prompts with an example dice rolling function
+
+    https://docs.slack.dev/tools/python-slack-sdk/web#sending-streaming-messages
+    https://platform.openai.com/docs/guides/text
+    https://platform.openai.com/docs/guides/streaming-responses
+    https://platform.openai.com/docs/guides/function-calling
+    """
+    llm = openai.OpenAI(
+        api_key=os.getenv("OPENAI_API_KEY"),
+    )
+    tool_calls = []
+    response = llm.responses.create(
+        model="gpt-4o-mini",
+        input=prompts,
+        tools=[
+            roll_dice_definition,
+        ],
+        stream=True,
+    )
+    for event in response:
+        # Markdown text from the LLM response is streamed in chat as it arrives
+        if event.type == "response.output_text.delta":
+            streamer.append(markdown_text=f"{event.delta}")
+
+        # Function calls are saved for later computation and a new task is shown
+        if event.type == "response.output_item.done":
+            if event.item.type == "function_call":
+                tool_calls.append(event.item)
+                if event.item.name == "roll_dice":
+                    args = json.loads(event.item.arguments)
+                    streamer.append(
+                        chunks=[
+                            TaskUpdateChunk(
+                                id=f"{event.item.call_id}",
+                                title=f"Rolling a {args['count']}d{args['sides']}...",
+                                status="in_progress",
+                            ),
+                        ],
+                    )
+
+    # Tool calls are performed and tasks are marked as completed in Slack
+    if tool_calls:
+        for call in tool_calls:
+            if call.name == "roll_dice":
+                args = json.loads(call.arguments)
+                prompts.append(
+                    {
+                        "id": call.id,
+                        "call_id": call.call_id,
+                        "type": "function_call",
+                        "name": "roll_dice",
+                        "arguments": call.arguments,
+                    }
+                )
+                result = roll_dice(**args)
+                prompts.append(
+                    {
+                        "type": "function_call_output",
+                        "call_id": call.call_id,
+                        "output": json.dumps(result),
+                    }
+                )
+                if result.get("error") is not None:
+                    streamer.append(
+                        chunks=[
+                            TaskUpdateChunk(
+                                id=f"{call.call_id}",
+                                title=f"{result['error']}",
+                                status="error",
+                            ),
+                        ],
+                    )
+                else:
+                    streamer.append(
+                        chunks=[
+                            TaskUpdateChunk(
+                                id=f"{call.call_id}",
+                                title=f"{result['description']}",
+                                status="complete",
+                            ),
+                        ],
+                    )
+
+        # Complete the LLM response after making tool calls
+        call_llm(streamer, prompts)
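The tool-call handoff in `call_llm` follows the OpenAI Responses function-calling shape: each completed call is echoed back into the prompt list as a `function_call` item followed by a `function_call_output` item before the model is invoked again. A minimal standalone sketch of that bookkeeping, using a hypothetical `FakeCall` stand-in for the streamed API item so it runs without the OpenAI SDK:

```python
import json
from dataclasses import dataclass


@dataclass
class FakeCall:
    # Hypothetical stand-in for the function_call item streamed by the API
    id: str
    call_id: str
    name: str
    arguments: str


def append_tool_exchange(prompts: list, call: FakeCall, result: dict) -> None:
    # Mirrors the bookkeeping in call_llm: echo the call, then append its output
    prompts.append(
        {
            "id": call.id,
            "call_id": call.call_id,
            "type": "function_call",
            "name": call.name,
            "arguments": call.arguments,
        }
    )
    prompts.append(
        {
            "type": "function_call_output",
            "call_id": call.call_id,
            "output": json.dumps(result),
        }
    )


prompts = [{"role": "user", "content": "Roll 2d12"}]
call = FakeCall(
    id="fc_1", call_id="call_1", name="roll_dice",
    arguments='{"sides": 12, "count": 2}',
)
append_tool_exchange(prompts, call, {"rolls": [7, 9], "total": 16})
```

With both items appended, re-running the model on `prompts` lets it compose a final answer that references the tool result; pairing the output with the original `call_id` is what ties the two together.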

agent/tools/__init__.py

Whitespace-only changes.

agent/tools/dice.py

Lines changed: 56 additions & 0 deletions

@@ -0,0 +1,56 @@
+import random
+
+from openai.types.responses import FunctionToolParam
+
+
+def roll_dice(sides: int = 6, count: int = 1) -> dict:
+    if sides < 2:
+        return {
+            "error": "A die must have at least 2 sides",
+            "rolls": [],
+            "total": 0,
+        }
+
+    if count < 1:
+        return {
+            "error": "Must roll at least 1 die",
+            "rolls": [],
+            "total": 0,
+        }
+
+    # Roll the dice and calculate the total
+    rolls = [random.randint(1, sides) for _ in range(count)]
+    total = sum(rolls)
+
+    return {
+        "rolls": rolls,
+        "total": total,
+        "description": f"Rolled a {count}d{sides} to total {total}",
+    }
+
+
+# Tool definition for OpenAI API
+#
+# https://platform.openai.com/docs/guides/function-calling
+roll_dice_definition: FunctionToolParam = {
+    "type": "function",
+    "name": "roll_dice",
+    "description": "Roll one or more dice with a specified number of sides. Use this when the user wants to roll dice or generate random numbers within a range.",
+    "parameters": {
+        "type": "object",
+        "properties": {
+            "sides": {
+                "type": "integer",
+                "description": "The number of sides on the die (e.g., 6 for a standard die, 20 for a d20)",
+                "default": 6,
+            },
+            "count": {
+                "type": "integer",
+                "description": "The number of dice to roll",
+                "default": 1,
+            },
+        },
+        "required": ["sides", "count"],
+    },
+    "strict": False,
+}
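A quick standalone check of the `roll_dice` behavior added above. The function body is reproduced here (minus the `FunctionToolParam` tool definition) so the snippet runs without the OpenAI SDK installed:

```python
import random


def roll_dice(sides: int = 6, count: int = 1) -> dict:
    # Reproduced from agent/tools/dice.py for a self-contained check
    if sides < 2:
        return {"error": "A die must have at least 2 sides", "rolls": [], "total": 0}
    if count < 1:
        return {"error": "Must roll at least 1 die", "rolls": [], "total": 0}
    rolls = [random.randint(1, sides) for _ in range(count)]
    return {
        "rolls": rolls,
        "total": sum(rolls),
        "description": f"Rolled a {count}d{sides} to total {sum(rolls)}",
    }


result = roll_dice(sides=12, count=2)  # two rolls, each between 1 and 12
bad = roll_dice(sides=1)               # invalid: dice need at least 2 sides
```

Note that invalid inputs return an `error` key rather than raising, which is what lets `call_llm` map errors onto a `TaskUpdateChunk` with `status="error"`.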

ai/llm_caller.py

Lines changed: 0 additions & 27 deletions
This file was deleted.

app.py

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,6 @@
 import os
 
 from dotenv import load_dotenv
-
 from slack_bolt import App
 from slack_bolt.adapter.socket_mode import SocketModeHandler
 from slack_sdk import WebClient
@@ -22,6 +21,7 @@
         token=os.environ.get("SLACK_BOT_TOKEN"),
     ),
 )
+
 # Register Listeners
 register_listeners(app)
 

assistant_thread_started.py

Lines changed: 13 additions & 19 deletions

@@ -1,5 +1,4 @@
 from logging import Logger
-from typing import Dict, List
 
 from slack_bolt import Say, SetSuggestedPrompts
 
@@ -18,24 +17,19 @@ def assistant_thread_started(
         logger: Logger instance for error tracking
     """
     try:
-        say("How can I help you?")
-
-        prompts: List[Dict[str, str]] = [
-            {
-                "title": "What does Slack stand for?",
-                "message": "Slack, a business communication service, was named after an acronym. Can you guess what it stands for?",
-            },
-            {
-                "title": "Write a draft announcement",
-                "message": "Can you write a draft announcement about a new feature my team just released? It must include how impactful it is.",
-            },
-            {
-                "title": "Suggest names for my Slack app",
-                "message": "Can you suggest a few names for my Slack app? The app helps my teammates better organize information and plan priorities and action items.",
-            },
-        ]
-
-        set_suggested_prompts(prompts=prompts)
+        say("What would you like to do today?")
+        set_suggested_prompts(
+            prompts=[
+                {
+                    "title": "Prompt a task with thinking steps",
+                    "message": "Wonder a few deep thoughts.",
+                },
+                {
+                    "title": "Roll dice for a random number",
+                    "message": "Roll two 12-sided dice and three 6-sided dice for a pseudo-random score.",
+                },
+            ]
+        )
     except Exception as e:
         logger.exception(f"Failed to handle an assistant_thread_started event: {e}", e)
         say(f":warning: Something went wrong! ({e})")
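Each suggested prompt passed to `set_suggested_prompts` is a plain dict carrying a short `title` shown as the button label and a `message` sent on the user's behalf when clicked. A minimal shape check of the entries used above (the two-key requirement is assumed from the listener code, not from a schema):

```python
# Prompt entries copied from the assistant_thread_started listener
prompts = [
    {
        "title": "Prompt a task with thinking steps",
        "message": "Wonder a few deep thoughts.",
    },
    {
        "title": "Roll dice for a random number",
        "message": "Roll two 12-sided dice and three 6-sided dice for a pseudo-random score.",
    },
]

# Every entry should carry both keys with non-empty string values
valid = all(
    {"title", "message"} <= entry.keys()
    and all(isinstance(entry[k], str) and entry[k] for k in ("title", "message"))
    for entry in prompts
)
```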
