Merged
Changes from 9 commits
2 changes: 2 additions & 0 deletions .slack/.gitignore
@@ -0,0 +1,2 @@
apps.dev.json
cache/
6 changes: 6 additions & 0 deletions .slack/config.json
@@ -0,0 +1,6 @@
{
"manifest": {
"source": "remote"
},
"project_id": "22e2b5e7-ef8f-4fbf-8026-a62ae0623037"
}
5 changes: 5 additions & 0 deletions .slack/hooks.json
@@ -0,0 +1,5 @@
{
"hooks": {
"get-hooks": "python3 -m slack_cli_hooks.hooks.get_hooks"
}
}
14 changes: 13 additions & 1 deletion README.md
@@ -72,7 +72,19 @@ black .

### `/listeners`

Every incoming request is routed to a "listener". Inside this directory, we group each listener based on the Slack Platform feature used, so `/listeners/events` handles incoming events, `/listeners/shortcuts` would handle incoming [Shortcuts](https://api.slack.com/interactivity/shortcuts) requests, and so on.
Every incoming request is routed to a "listener". This directory groups each listener based on the Slack Platform feature used, so `/listeners/events` handles incoming events, `/listeners/shortcuts` would handle incoming [Shortcuts](https://docs.slack.dev/interactivity/implementing-shortcuts/) requests, and so on.

Comment on lines +76 to +79
Contributor


Suggested change
**Note**: The `listeners/events` folder is purely educational and demonstrates alternative implementation approaches. These listeners are **not registered** and are not used in the actual application. For the working implementation, refer to `listeners/assistant.py`.
:::info[The `listeners/events` folder is purely educational and demonstrates alternative approaches to implementation]
These listeners are **not registered** and are not used in the actual application. For the working implementation, refer to `listeners/assistant.py`.
:::

this makes it look like a little callout card on the docs! anytime i think about using "note:" i just replace it with a callout card.

Member


📝 note: Added in 42cf39f.

Member


🗣️ thought: I'm not sure that this renders as expected with GitHub markdown?

🔗 https://github.com/srtaalej/bolt-python-assistant-template/tree/main?tab=readme-ov-file#listeners

We might want to revert this or use different syntax?

🔗 https://github.com/orgs/community/discussions/16925

Contributor


yeah idk why i suggested to use docs syntax in a README i think i got confused where i was oof
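For a README rendered on GitHub, the closest equivalent to a docs callout card is GitHub's alert syntax: a blockquote whose first line is a bracketed keyword such as `[!NOTE]`. A sketch of how the same note could be written with that syntax (wording taken from the suggestion in this thread):

```markdown
> [!NOTE]
> The `listeners/events` folder is purely educational and demonstrates alternative approaches to implementation.
> These listeners are **not registered** and are not used in the actual application.
> For the working implementation, refer to `listeners/assistant.py`.
```

Unlike `:::info` (a Docusaurus admonition), this renders as a highlighted callout in GitHub's own Markdown.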

:::info[The `listeners/events` folder is purely educational and demonstrates alternative approaches to implementation]
These listeners are **not registered** and are not used in the actual application. For the working implementation, refer to `listeners/assistant.py`.
**`/listeners/assistant`**

Configures the new Slack Assistant features, providing a dedicated side panel UI for users to interact with the AI chatbot. This module includes:

`assistant.py`, which contains two listeners:
* The `@assistant.thread_started` listener receives an event when a user starts a new app thread.
* The `@assistant.user_message` listener processes user messages in app threads or from the app **Chat** and **History** tabs.

`llm_caller.py`, which handles OpenAI API integration and message formatting. It includes the `call_llm()` function that sends conversation threads to OpenAI's models.
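Before `call_llm()` is invoked, the thread's replies have to be labeled with OpenAI-style chat roles. A stdlib-only sketch of that mapping, assuming the same convention as this repo's `assistant.py` (messages with a `bot_id` are the assistant's; the field names mirror the `conversations.replies` payload, and the sample data is invented):

```python
def thread_to_messages(replies):
    """Map Slack thread replies to OpenAI-style chat messages."""
    messages = []
    for message in replies:
        # Messages without a bot_id came from a human user.
        role = "user" if message.get("bot_id") is None else "assistant"
        messages.append({"role": role, "content": message["text"]})
    return messages

# Invented sample thread: one human message, one bot reply.
sample = [
    {"user": "U123", "text": "What is Bolt for Python?"},
    {"bot_id": "B456", "text": "A framework for building Slack apps."},
]
print(thread_to_messages(sample))
```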

## App Distribution / OAuth

2 changes: 1 addition & 1 deletion listeners/__init__.py
@@ -1,4 +1,4 @@
from .assistant import assistant
from listeners.assistant import assistant


def register_listeners(app):
3 changes: 3 additions & 0 deletions listeners/assistant/__init__.py
@@ -0,0 +1,3 @@
from .assistant import assistant

__all__ = ["assistant"]
63 changes: 31 additions & 32 deletions listeners/assistant.py → listeners/assistant/assistant.py
@@ -1,11 +1,10 @@
import logging
from typing import List, Dict
from slack_bolt import Assistant, BoltContext, Say, SetSuggestedPrompts, SetStatus
from slack_bolt import Assistant, BoltContext, Say, SetSuggestedPrompts
from slack_bolt.context.get_thread_context import GetThreadContext
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

from .llm_caller import call_llm
from ..llm_caller import call_llm

# Refer to https://tools.slack.dev/bolt-python/concepts/assistant/ for more details
assistant = Assistant()
@@ -57,39 +56,20 @@ def respond_in_assistant_thread(
payload: dict,
logger: logging.Logger,
context: BoltContext,
set_status: SetStatus,
get_thread_context: GetThreadContext,
client: WebClient,
say: Say,
):
try:
user_message = payload["text"]
set_status("is typing...")

if user_message == "Can you generate a brief summary of the referred channel?":
# the logic here requires the additional bot scopes:
# channels:join, channels:history, groups:history
thread_context = get_thread_context()
referred_channel_id = thread_context.get("channel_id")
try:
channel_history = client.conversations_history(channel=referred_channel_id, limit=50)
except SlackApiError as e:
if e.response["error"] == "not_in_channel":
# If this app's bot user is not in the public channel,
# we'll try joining the channel and then calling the same API again
client.conversations_join(channel=referred_channel_id)
channel_history = client.conversations_history(channel=referred_channel_id, limit=50)
else:
raise e
channel_id = payload["channel"]
thread_ts = payload["thread_ts"]

prompt = f"Can you generate a brief summary of these messages in a Slack channel <#{referred_channel_id}>?\n\n"
for message in reversed(channel_history.get("messages")):
if message.get("user") is not None:
prompt += f"\n<@{message['user']}> says: {message['text']}\n"
messages_in_thread = [{"role": "user", "content": prompt}]
returned_message = call_llm(messages_in_thread)
say(returned_message)
return
loading_messages = [
"Teaching the hamsters to type faster…",
"Untangling the internet cables…",
"Consulting the office goldfish…",
"Polishing up the response just for you…",
"Convincing the AI to stop overthinking…",
]

replies = client.conversations_replies(
channel=context.channel_id,
@@ -101,8 +81,27 @@ def respond_in_assistant_thread(
for message in replies["messages"]:
role = "user" if message.get("bot_id") is None else "assistant"
messages_in_thread.append({"role": role, "content": message["text"]})

returned_message = call_llm(messages_in_thread)
say(returned_message)
client.assistant_threads_setStatus(
channel_id=channel_id, thread_ts=thread_ts, status="Bolt is typing", loading_messages=loading_messages
)
stream_response = client.chat_startStream(
channel=channel_id,
thread_ts=thread_ts,
)
stream_ts = stream_response["ts"]
# iterating events this way is specific to the OpenAI Responses API streaming format
for event in returned_message:
if event.type == "response.output_text.delta":
client.chat_appendStream(channel=channel_id, ts=stream_ts, markdown_text=f"{event.delta}")
else:
continue

client.chat_stopStream(
channel=channel_id,
ts=stream_ts,
)

except Exception as e:
logger.exception(f"Failed to handle a user message event: {e}")
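The streaming loop in this diff filters Responses API events by `type` and forwards only the text deltas to `chat.appendStream`, skipping lifecycle events. That filtering step can be sketched with stand-in event objects, no Slack or OpenAI client required (the event types mirror the Responses API; the sample events are invented):

```python
from types import SimpleNamespace

# Stand-in for an OpenAI Responses API event stream (invented sample data).
events = [
    SimpleNamespace(type="response.created", delta=None),
    SimpleNamespace(type="response.output_text.delta", delta="Hello"),
    SimpleNamespace(type="response.output_text.delta", delta=", world"),
    SimpleNamespace(type="response.completed", delta=None),
]

# Mirrors the diff's loop: only output_text deltas are appended;
# every other event type is ignored.
streamed = ""
for event in events:
    if event.type == "response.output_text.delta":
        streamed += event.delta
print(streamed)  # Hello, world
```

In the real listener, each delta is sent to `chat.appendStream` as it arrives, and `chat.stopStream` closes the message once the stream is exhausted.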
45 changes: 6 additions & 39 deletions listeners/llm_caller.py
@@ -1,8 +1,10 @@
import os
import re
from typing import List, Dict

import openai
from openai import Stream
from openai.types.responses import ResponseStreamEvent


DEFAULT_SYSTEM_CONTENT = """
You're an assistant in a Slack workspace.
@@ -16,44 +18,9 @@
def call_llm(
messages_in_thread: List[Dict[str, str]],
system_content: str = DEFAULT_SYSTEM_CONTENT,
) -> str:
) -> Stream[ResponseStreamEvent]:
openai_client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
messages = [{"role": "system", "content": system_content}]
messages.extend(messages_in_thread)
response = openai_client.chat.completions.create(
model="gpt-4o-mini",
n=1,
messages=messages,
max_tokens=16384,
)
return markdown_to_slack(response.choices[0].message.content)


# Conversion from OpenAI markdown to Slack mrkdwn
# See also: https://api.slack.com/reference/surfaces/formatting#basics
def markdown_to_slack(content: str) -> str:
# Split the input string into parts based on code blocks and inline code
parts = re.split(r"(?s)(```.+?```|`[^`\n]+?`)", content)

# Apply the bold, italic, and strikethrough formatting to text not within code
result = ""
for part in parts:
if part.startswith("```") or part.startswith("`"):
result += part
else:
for o, n in [
(
r"\*\*\*(?!\s)([^\*\n]+?)(?<!\s)\*\*\*",
r"_*\1*_",
), # ***bold italic*** to *_bold italic_*
(
r"(?<![\*_])\*(?!\s)([^\*\n]+?)(?<!\s)\*(?![\*_])",
r"_\1_",
), # *italic* to _italic_
(r"\*\*(?!\s)([^\*\n]+?)(?<!\s)\*\*", r"*\1*"), # **bold** to *bold*
(r"__(?!\s)([^_\n]+?)(?<!\s)__", r"*\1*"), # __bold__ to *bold*
(r"~~(?!\s)([^~\n]+?)(?<!\s)~~", r"~\1~"), # ~~strike~~ to ~strike~
]:
part = re.sub(o, n, part)
result += part
return result
response = openai_client.responses.create(model="gpt-4o-mini", input=messages, stream=True)
return response
5 changes: 4 additions & 1 deletion requirements.txt
@@ -1,5 +1,8 @@
--extra-index-url=https://test.pypi.org/simple/
slack_sdk==3.36.0.dev0
slack-bolt>=1.21,<2
slack-sdk>=3.33.1,<4
slack-cli-hooks<1.0.0

# If you use a different LLM vendor, replace this dependency
openai
