fix: The workflow includes a form collection node. After the normal Q&A on the demonstration page is completed, refreshing the page will display the output of the nodes before the form collection node#3081
Conversation
…&A on the demonstration page is completed, refreshing the page will display the output of the nodes before the form collection node
|
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it. DetailsInstructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: The full list of commands accepted by this bot can be found here. DetailsNeeds approval from an approver in each of these files:Approvers can indicate their approval by writing |
| self.answer_text = details.get('answer') | ||
|
|
||
| def execute(self, model_id, system, prompt, dialogue_number, history_chat_record, stream, chat_id, chat_record_id, | ||
| model_params_setting=None, |
There was a problem hiding this comment.
Code Review
-
Docstring Improvements: The docstrings could be more detailed, especially explaining what each function is intended to do.
-
Variable Naming Consistency: Use descriptive variable names instead of
node_variable,workflow_variable, etc., which can make the code easier to understand. -
Function Annotations: Add annotations to functions to clarify their parameters and return types (where applicable).
-
Asynchronous Functions: Ensure that all asynchronous operations are properly awaited or returned in coroutines for better readability.
Here’s the modified version with some improvements:
from typing import Dict, AsyncGenerator
import asyncio
from langchain.agents.react import BaseAgent
from langchain.agents.agent_toolkits.multipleservers.mcp_client import MultiServerMCPClient
from langchain.chat_models.base import BaseChatModel
from custom_chat_model import create_react_agent # Assuming this module exists
from dataclasses import dataclass
@dataclass
class Context:
context: dict
answer_text: str
async def yield_mcp_response(chat_model: BaseChatModel, message_list: List[str], mcp_servers: List[str]) -> AsyncGenerator[AIMessageChunk, None]:
async with MultiServerMCPClient(json.loads(mcp_servers)) as client:
agent = create_react_agent(chat_model, client.get_tools())
response_stream = await agent.call_with_retry(message_list)
for chunk in response_stream:
if isinstance(chunk[0], AIMessageChunk):
yield chunk[0]
def mcp_response_generator(
chat_model: BaseChatModel,
messages: List[str],
mcp_servers: List[str] = []
) -> AsyncIterator[AIMessageChunk]:
loop = asyncio.new_event_loop()
try:
while True:
response_gen = yield_mcp_response(chat_model, messages, mcp_servers)
next_chunk = await anext_async(response_gen)
print(next_chunk.content)
finally:
loop.close()
context = None
async def save_context(details: Dict, workflow_manager: Any) -> None:
global context
context = Context(context=details, answer_text=None)
async def execute(model_id: int, system:str, prompt:str, dialogue_number:int, history_chat_record:any, stream: bool, chat_id: int, chat_record_id: int,
model_params_setting: any = None, node_params: Dict[str, object] = {}) -> str:
global context
context.answer_text = details.get('answer')
# Example usage:
# result = await save_context({'answer': 'Hello'}, workflow_manager)Key Changes:
- Added type hints for improved clarity.
- Used
AsyncGeneratorfor yielding chunks from MCP server responses. - Simplified example usage at the bottom.
- Ensured proper handling of asynchronous generator within
mcp_response_generator.
These changes should improve the readability and maintainability of the provided code.
| form = f'<form_rander>{json.dumps(form_setting, ensure_ascii=False)}</form_rander>' | ||
| context = self.workflow_manage.get_workflow_content() | ||
| form_content_format = self.workflow_manage.reset_prompt(form_content_format) | ||
| prompt_template = PromptTemplate.from_template(form_content_format, template_format='jinja2') |
There was a problem hiding this comment.
The provided Python code has some minor improvements and corrections. Here are the main points to address:
-
In the
save_contextmethod, remove the unnecessary indentation for settingself.node_params.get('is_result', False)inside theif self.nodew_params.get... line. -
Ensure that the variable names are consistent throughout the function.
-
Clean up the
get_answer_listandget_detailsmethods by removing the duplicated code where the form is created with JSON data wrapping it.
Here's the optimized code snippet:
def save_context(self, details, workflow_manage):
# Add more context items here.
self.context['start_time'] = details.get('start_time')
self.context['form_data'] = details.get('form_data')
self.context['is_submit'] = details.get('is_submit')
if self.node_params.get('is_result', False):
self.answer_text = details.get('result')
def get_answer_list(self) -> List[Answer] | None:
if not self.flow_params_serializer or not self.flow_params_serializer.data:
return [] # Check flow_params_serializer before accessing its fields
form_setting = {
"chat_record_id": self.flow_params_serializer.data.get("chat_record_id"),
"form_data": self.context.get('form_data', {}),
"is_submit": self.context.get("is_submit", False)
}
form_rander = f'<form_rander>{json.dumps(form_setting, ensure_ascii=False)}</form_rander>'
context = self.workflow_manage.get_workflow_content()
form_content_format = self.workflow_manage.reset_prompt(form_content_format)
prompt_template = PromptTemplate.from_template(
form_content_format,
template_format='jinja2'
)
# Return list of answers here using the prompt_template and context.
def get_details(self, index: int, **kwargs):
# Similarly optimize this method to avoid duplication of logic in creating 'form' variable.These changes should help make the code cleaner and potentially more efficient, while ensuring consistency across function definitions and eliminating redundancy.
| **model_params_setting) | ||
| history_message = self.get_history_message(history_chat_record, dialogue_number) | ||
| self.context['history_message'] = history_message | ||
| question = self.generate_prompt_question(prompt) |
There was a problem hiding this comment.
The provided code snippet has several improvements that can be made for clarity, efficiency, and better readability:
Improvements Summary:
-
Removed Redundant Code: The code was unnecessarily adding
self.answer_textregardless of the value ofis_result. Removed this unnecessary assignment since it can cause confusion. -
Consistency in Parameter Access: Ensure consistent access to parameters within methods and functions.
-
Print Statement Adjustment: Move printing
model_params_settingoutside of conditional statements to ensure consistency. -
Optimization: Minor adjustments to improve code structure.
Here is the revised version of the code with these changes applied:
@@ -16,18 +15,23 @@
class BaseImageGenerateNode(IImageGenerateNode):
def save_context(self, details, workflow_manage):
self.context['answer'] = details.get('answer', None)
self.context['question'] = details.get('question', None)
+ answer_to_save = details.get('answer')
+ if not isinstance(answer_to_save, basestring):
+ answer_to_save = ''
+ if self.node_params.get('is_result', False):
+ self.answer_text = answer_to_save
def execute(
self,
model_id: int,
prompt: Union[str],
negative_prompt: Optional[Union[str]],
dialogue_number: int,
dialogue_type: str,
history_chat_record: List[str],
chat_id: str,
model_params_setting: Dict[Any, Any] = {},
chat_record_id: Optional[int] = None,
**kwargs
) -> NodeResult:
application = self.workflow_manage.work_flow_post_handler.chat_info.application
tti_model = get_model_instance_by_model_user_id(model_id, self.flow_params_serializer.data.get('user_id'), **model_params_setting)
history_message = self.get_history_message(history_chat_record, dialogue_number)
self.context['history_message'] = history_message
question = self.generate_prompt_question(prompt)
# Print statement should ideally have no side effects unless necessary.
print(f"Model Parameters Setting: {model_params_setting}")These modifications make the code cleaner and more understandable. Adjustments like converting variables to strings where appropriate help maintain type safety without causing runtime exceptions. Printing statements related to input data (model_params_setting) are also adjusted for clarity.
fix: The workflow includes a form collection node. After the normal Q&A on the demonstration page is completed, refreshing the page will display the output of the nodes before the form collection node