Replies: 5 comments 3 replies
-
In this scenario, are you running the LLM calls on the server and providing a UI in a web/mobile frontend for a Rails app, or is it more like an interactive GUI/CLI Ruby app?
-
@swinner2 have you tried asking for permission via Turbo in the …
-
I think the way to do this would be to have some sort of database model, then split up the prompts. Trying to do it in-memory seems like it would get crazy (I've thought a lot about it recently for what I'm working on).
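A gem-free sketch of that idea, in case it helps the discussion. The names here (`PendingToolCall`, `defer_tool_call`) are hypothetical; in a Rails app `PendingToolCall` would be an ActiveRecord model and `store` a real table:

```ruby
# Hypothetical sketch: persist a risky tool call instead of executing it
# in-memory. PendingToolCall stands in for a real database model.
PendingToolCall = Struct.new(:id, :name, :arguments, :status, keyword_init: true) do
  def approve!
    self.status = :approved
  end

  def reject!
    self.status = :rejected
  end
end

# Instead of running the tool, record it as pending and end the turn;
# a later request (e.g. a button click) approves or rejects it.
def defer_tool_call(store, name:, arguments:)
  call = PendingToolCall.new(id: store.size + 1, name: name,
                             arguments: arguments, status: :pending)
  store << call
  call
end

store = [] # stands in for a database table
call = defer_tool_call(store, name: "grant_permission", arguments: { user_id: 42 })
call.approve!
```

Splitting the prompts then becomes a matter of starting the resumed conversation from the persisted record rather than from anything held in process memory.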
-
@crmne I'm having a similar HITL issue as above, but mine is more about input gathering for a richer, more precise experience. For example, if the user says "how do I give XYZ permission to another user?", the assistant could come back with "Pick the user and I'll do it for you" along with a dropdown of users to select from. What I'm doing now is using a no-op tool call that halts. However, I have to jump through a few hoops to make it work:
Is there anything I'm missing? I can't keep a connection open until the user selects something, so I need a flow like this. Is this something you'd be interested in supporting if I came up with an approach and submitted a PR? At a high level, I'm envisioning:
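The no-op-tool trick described above might look roughly like this as a gem-free sketch. `AskUser` mimics a tool class with an `execute` method, and `Halt` is a hypothetical marker the chat loop would check to stop the turn and render UI instead of sending a tool result back to the model (none of these names come from RubyLLM itself):

```ruby
# Gem-free sketch of a no-op tool call that halts the turn. Halt is a
# hypothetical marker object; the real integration point depends on the
# library's tool API.
Halt = Struct.new(:question, :choices, keyword_init: true)

class AskUser
  DESCRIPTION = "Ask the human to pick one of a fixed set of choices."

  def execute(question:, choices:)
    # No-op: don't answer the model; stash the question and choices so the
    # frontend can render a dropdown, then stop the tool loop.
    Halt.new(question: question, choices: choices)
  end
end

result = AskUser.new.execute(
  question: "Which user should receive the permission?",
  choices: %w[alice bob]
)
```

The frontend would render `result.choices` as the dropdown, and the user's selection would arrive in a later request that resumes the conversation.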
-
I saw a feature request in this direction, but I'm also exploring this. Given the async nature of this flow (the model wants a tool call but needs to wait for input from the user), I don't think it's feasible for the code to block waiting for user input. For web apps, many would run the model interaction in background jobs, which have no guarantee of being allowed to run for a long time. I'm currently exploring two avenues:

1. State restoration and resume

What I believe is currently missing in RubyLLM is the ability to break the flow when approval is required and the ability to resume the flow after restoration. While it sounds easy at first sight, it might not be. Should it look only at the last message from the assistant, or at all pending-approval tool calls? What if there are multiple tool calls requiring approval? If it were up to me, I wouldn't put this in RubyLLM, as it's more application/implementation territory than library territory. I'm curious whether this can be solved in a clean manner.

2. Tool-approval tool

This is an interesting approach (mentioned in #503), but I'm not sure it's doable. Conceptually it would work like this:

For this to work, RubyLLM would need to:

I'm going to explore avenue 2. First I'll try (well... not exactly only me :) ) to implement it in my app, and if it works well I'll extract it to RubyLLM. I'd love to hear other ideas from people: how have you approached this?
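For what it's worth, avenue 2 might be sketched like this in gem-free Ruby. `ApprovalBroker`, `request_approval`, and `resume` are all hypothetical names, and the in-memory hash stands in for a database table; the real version would have to hook into however the library exposes tool calls:

```ruby
# Sketch of the approval-tool avenue: the risky tool call is parked in a
# broker, the background job ends, and a later request resumes the chat by
# injecting the human's verdict as the tool result.
class ApprovalBroker
  def initialize
    @pending = {} # tool_call_id => { name:, args: }
  end

  # Called from the tool-call handler: park the call and break the loop.
  def request_approval(tool_call_id, name, args)
    @pending[tool_call_id] = { name: name, args: args }
    :halted
  end

  # Called later (e.g. a controller action after the user clicks
  # Approve/Reject): build the tool-result message to append to the
  # restored chat before asking the model to continue.
  def resume(tool_call_id, approved:)
    call = @pending.delete(tool_call_id)
    raise KeyError, "unknown tool call #{tool_call_id}" unless call

    { role: :tool, tool_call_id: tool_call_id,
      content: approved ? "user approved #{call[:name]}" : "rejected by user" }
  end
end

broker = ApprovalBroker.new
broker.request_approval("tc_1", "delete_repo", { name: "demo" })
message = broker.resume("tc_1", approved: false)
```

This sidesteps the long-running-job problem because no process ever waits: the first job ends after parking the call, and the verdict arrives in a fresh request.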
-
Context:
I'm exploring how to best implement the "Human-in-the-Loop" pattern, specifically scenarios where explicit human confirmation is required before proceeding with potentially risky or impactful tool calls (such as purchases, executing code, publishing, etc.).
Use Case:
I want my application to request explicit permission from the user before proceeding with certain tool executions initiated by an LLM. Ideally, the workflow would look something like:
Potential Solutions (half-baked):

- We could use a new chat with a system prompt specifically about approve/reject, with structured output that lets us check whether the LLM thinks the user accepted or rejected.
- We could use an "Accept" UI button whose state persists (db/cache), indicating that the tool has been approved and can proceed.
For the discussion's sake, how would we want to implement "Human-in-the-Loop" for this simple tool?