- Changed the LLM model from "groq/llama-3.3-70b-versatile" to "groq/llama-3.2-90b-vision-preview" in `groq-mcp.py` for improved performance.
- Added a new example script `ollama-python.py` for stock price retrieval using the Ollama model.
- Incremented version number to 0.0.70 in `pyproject.toml` to reflect recent changes.
- Updated Python version requirement in `uv.lock` to support Python 3.11 and above for better compatibility.
- Enhanced the LLM class to handle tool calls specifically for Ollama models, ensuring proper processing of tool results.
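The new `ollama-python.py` script itself is not reproduced in this commit summary; a rough sketch of how a stock-price lookup wired through Ollama's local HTTP API might look (the `get_stock_price` stub, its canned quote values, and the `llama3.2` model name are assumptions, not the script's actual contents):

```python
import json
import urllib.request

def get_stock_price(ticker: str) -> float:
    """Stub tool: a real script would query a market-data API here."""
    quotes = {"AAPL": 189.50, "MSFT": 410.20}  # canned demo values (assumed)
    return quotes.get(ticker.upper(), 0.0)

def ask_ollama(prompt: str, model: str = "llama3.2") -> str:
    """POST a single-turn, non-streaming chat to a local Ollama server."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    # Fetch a (stubbed) quote, then ask the model to phrase it for the user
    price = get_stock_price("AAPL")
    print(ask_ollama(f"The current price of AAPL is {price}. Summarize in one line."))
```

The network call is kept behind the `__main__` guard so the helpers can be imported and exercised without a running Ollama server.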
Synchronous path:

```python
# For Ollama models, we need to explicitly ask the model to process the tool results
# First, check if the response is just a JSON tool call
try:
    # If the response_text is a valid JSON that looks like a tool call,
    # we need to make a follow-up call to process the results
    json_response = json.loads(response_text.strip())
    if ('name' in json_response or 'function' in json_response) and not any(
        word in response_text.lower() for word in ['summary', 'option', 'result', 'found']
    ):
        logging.debug("Detected Ollama returning only tool call JSON, making follow-up call to process results")

        # Create a prompt that asks the model to process the tool results
        follow_up_prompt = f"I've searched for apartments and found these results. Please analyze them and provide a summary of the best options:\n\n{json.dumps(tool_result, indent=2)}\n\nPlease format your response as a nice summary with the top options."
```
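The guard above can be factored into a small, testable predicate; a sketch (the name `looks_like_bare_tool_call` is hypothetical, not from the diff, and the `isinstance` check is an added safety net):

```python
import json

# Words whose presence suggests the model already summarized the results
SUMMARY_MARKERS = ['summary', 'option', 'result', 'found']

def looks_like_bare_tool_call(response_text: str) -> bool:
    """Return True when the model replied with only a tool-call JSON object:
    valid JSON carrying a 'name' or 'function' key, with none of the words
    that would indicate a human-readable summary."""
    try:
        parsed = json.loads(response_text.strip())
    except (json.JSONDecodeError, TypeError):
        return False
    if not isinstance(parsed, dict):
        return False
    has_call_keys = 'name' in parsed or 'function' in parsed
    has_summary_words = any(word in response_text.lower() for word in SUMMARY_MARKERS)
    return has_call_keys and not has_summary_words
```

For example, `looks_like_bare_tool_call('{"name": "search_apartments", "arguments": {}}')` is true, while plain prose or JSON that already mentions results is not.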
Async path:

```python
# For Ollama models, we need to explicitly ask the model to process the tool results
# First, check if the response is just a JSON tool call
try:
    # If the response_text is a valid JSON that looks like a tool call,
    # we need to make a follow-up call to process the results
    json_response = json.loads(response_text.strip())
    if ('name' in json_response or 'function' in json_response) and not any(
        word in response_text.lower() for word in ['summary', 'option', 'result', 'found']
    ):
        logging.debug("Detected Ollama returning only tool call JSON in async mode, making follow-up call to process results")

        # Create a prompt that asks the model to process the tool results
        follow_up_prompt = f"I've searched for apartments and found these results. Please analyze them and provide a summary of the best options:\n\n{json.dumps(tool_result, indent=2)}\n\nPlease format your response as a nice summary with the top options."
```
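In the async path the follow-up call is awaited rather than blocking; a minimal sketch of that round trip, with `acomplete` standing in for whatever async completion method the LLM class exposes (an assumption, shown here with a stub):

```python
import asyncio
import json

async def process_tool_results(acomplete, tool_result: dict) -> str:
    """Send the raw tool output back to the model and await a readable summary."""
    follow_up_prompt = (
        "I've searched for apartments and found these results. "
        "Please analyze them and provide a summary of the best options:\n\n"
        f"{json.dumps(tool_result, indent=2)}\n\n"
        "Please format your response as a nice summary with the top options."
    )
    return await acomplete(follow_up_prompt)

# Usage with a stubbed model call (no server needed):
async def fake_acomplete(prompt: str) -> str:
    # Count how many listings' "rent" fields made it into the prompt
    return f"Summary of {prompt.count('rent')} listings."

summary = asyncio.run(process_tool_results(fake_acomplete, {"listings": [{"rent": 1200}]}))
```

Keeping the follow-up as its own coroutine means the sync and async code paths can share the prompt construction while differing only in how the completion is invoked.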