Allow to configure reflection prompt #641
Conversation
WalkthroughThe Changes
Poem
✨ Finishing Touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. 🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
SupportNeed help? Create a ticket on our support page for assistance with any issues or questions. Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
CodeRabbit Configuration File (
|
There was a problem hiding this comment.
Summary of Changes
Hello @MervinPraison, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request enhances the agent's self-reflection capability by allowing users to customize the prompt used for reflection. It also includes a detailed docstring for the agent's initialization method to improve code documentation.
Highlights
- Custom Reflection Prompt: Introduced a new
reflect_promptparameter to theAgentconstructor, allowing users to provide a custom prompt template for the agent's self-reflection process. - Docstring Update: Added a comprehensive docstring to the
Agent.__init__method, detailing all parameters and their purpose. - Reflection Logic Update: Modified the internal reflection prompt generation logic in the
chatmethod to utilize the providedreflect_promptif available, falling back to the default reflection prompt otherwise.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review |
Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary |
Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help |
Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
-
Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configureGemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
PR Reviewer Guide 🔍Here are some key observations to aid the review process:
|
PR Code Suggestions ✨No code suggestions found for the PR. |
There was a problem hiding this comment.
Code Review
This pull request introduces a new reflect_prompt parameter to the Agent class, allowing for customization of the self-reflection process. Additionally, comprehensive docstrings have been added to the Agent constructor, significantly improving documentation for its numerous parameters.
The core logic for adding and utilizing the reflect_prompt appears correct. My main feedback point concerns a minor inaccuracy in the docstring for the reflect_llm parameter, where its default behavior could be described more precisely to align with the implementation. Overall, these changes enhance the agent's flexibility and developer experience through better documentation.
| reflect_llm (Optional[str], optional): Dedicated language model for self-reflection | ||
| operations. If None, uses the main llm parameter. Defaults to None. |
There was a problem hiding this comment.
The description of the default behavior for reflect_llm could be more precise.
The current docstring states: "If None, uses the main llm parameter." However, the actual initialization logic is self.reflect_llm = reflect_llm or os.getenv('OPENAI_MODEL_NAME', 'gpt-4o').
This means if reflect_llm is None, it defaults to the OPENAI_MODEL_NAME environment variable or 'gpt-4o', which is the same fallback as the main llm if the main llm itself was None and not a custom instance. It doesn't directly use the self.llm value if self.llm was, for example, set to a different model string like "claude-2".
Consider rephrasing to accurately reflect this fallback mechanism, similar to how the llm parameter's default is described.
| reflect_llm (Optional[str], optional): Dedicated language model for self-reflection | |
| operations. If None, uses the main llm parameter. Defaults to None. | |
| reflect_llm (Optional[str], optional): Dedicated language model for self-reflection | |
| operations. If `None`, it defaults to the value of the `OPENAI_MODEL_NAME` environment variable, or 'gpt-4o' if the environment variable is not set. Defaults to `None` (as a parameter). |
CI Feedback 🧐(Feedback updated until commit 4b913a7)A test triggered by this PR failed. Here is an AI-generated analysis of the failure:
|
CI Feedback 🧐A test triggered by this PR failed. Here is an AI-generated analysis of the failure:
|
Code ReviewThis pull request introduces a configurable The implementation within the provided diffs is clear and follows Python best practices (PEP 8). However, there are a few areas that need attention:
Addressing these points will significantly improve the completeness and robustness of this enhancement. @claude review above changes and make sure it’s all correct and fix or optimise if required in this pull request also minimal code change |
|
Claude finished @MervinPraison's task —— View job PR Review and Enhancement Complete ✅I've analyzed the configurable reflection prompt feature and implemented the necessary fixes to ensure consistency between synchronous and asynchronous chat methods. Todo List:
Analysis Results:
Solution Implemented:✅ Added complete self-reflection support to
Key Changes:
The PR now has consistent behavior between |
There was a problem hiding this comment.
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/praisonai-agents/praisonaiagents/agent/agent.py (2)
377-461: Docstring is overly verbose—consider refactoring.The expanded constructor docstring covers every parameter exhaustively, which may hinder quick scanning. Consider:
- Summarizing the class purpose in a single line.
- Moving detailed parameter descriptions to an external reference or using Sphinx/Google style sections.
1318-1325: Refactor reflection prompt construction to avoid unintended indentation.The f-string’s leading whitespace will be included in the prompt. Consider dedenting to produce a clean prompt, for example:
-import textwrap reflection_prompt = f""" -Reflect on your previous response: '{response_text}'. -{self.reflect_prompt if self.reflect_prompt else "Identify any flaws, improvements, or actions."} -Provide a "satisfactory" status ('yes' or 'no'). -Output MUST be JSON with 'reflection' and 'satisfactory'. - """ +import textwrap +reflection_prompt = textwrap.dedent(f"""\ +Reflect on your previous response: '{response_text}' +{self.reflect_prompt or "Identify any flaws, improvements, or actions."} +Provide a "satisfactory" status ('yes' or 'no'). +Output MUST be JSON with 'reflection' and 'satisfactory'. +""")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/praisonai-agents/praisonaiagents/agent/agent.py(3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: quick-test
- GitHub Check: build (3.10)
- GitHub Check: build (3.11)
- GitHub Check: quick-test
🔇 Additional comments (2)
src/praisonai-agents/praisonaiagents/agent/agent.py (2)
371-371: Addition ofreflect_promptparameter is clear and backward‐compatible.The new optional
reflect_promptargument defaults toNoneand aligns with existing patterns, preserving backward compatibility.
559-559: Assignment ofself.reflect_promptis correctly placed.You’ve stored the passed prompt template on the instance before using it downstream.
…ethod - Implement complete self-reflection logic in achat method to match chat method - Support configurable reflect_prompt parameter in async operations - Add reflection loop with min/max limits and error handling - Update method docstring to reflect new self-reflection capability - Ensure consistent behavior between chat and achat methods Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
There was a problem hiding this comment.
Actionable comments posted: 0
🔭 Outside diff range comments (1)
src/praisonai-agents/praisonaiagents/agent/agent.py (1)
1586-1663:⚠️ Potential issueAsync path: post-reflection branch misses history / guardrail handling
After the inner reflection loop breaks, the method immediately returns
response_text(line 1663) without:
- Persisting the user / assistant turns to
self.chat_history- Emitting
display_interaction(...)for verbose mode- Running
_apply_guardrail_with_retrylike the syncchatpathThis creates inconsistent behaviour between
chatandachatand can silently drop conversation context for subsequent calls.@@ - if logging.getLogger().getEffectiveLevel() == logging.DEBUG: - total_time = time.time() - start_time - logging.debug(f"Agent.achat completed in {total_time:.2f} seconds") - return response_text + # ─── Persist & post-process like the sync version ─── + self.chat_history.append({"role": "user", "content": original_prompt}) + self.chat_history.append({"role": "assistant", "content": response_text}) + + if self.verbose: + display_interaction( + original_prompt, + response_text, + markdown=self.markdown, + generation_time=time.time() - start_time, + console=self.console, + ) + + try: + validated = self._apply_guardrail_with_retry( + response_text, original_prompt, temperature, tools + ) + if logging.getLogger().getEffectiveLevel() == logging.DEBUG: + total_time = time.time() - start_time + logging.debug(f"Agent.achat completed in {total_time:.2f} seconds") + return validated + except Exception as e: + logging.error(f"Agent {self.name}: Guardrail validation failed: {e}") + return None🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 1561-1663: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
♻️ Duplicate comments (1)
src/praisonai-agents/praisonaiagents/agent/agent.py (1)
444-447: Docstring still mis-describesreflect_llmfallbackPrevious review (see “gemini-code-assist” comment) pointed out that
reflect_llmdoes not default to “the main llm parameter” – it falls back toos.getenv('OPENAI_MODEL_NAME', 'gpt-4o'). The docstring still contains the old wording.- reflect_llm (Optional[str], optional): Dedicated language model for self-reflection - operations. If None, uses the main llm parameter. Defaults to None. + reflect_llm (Optional[str], optional): Dedicated language model for self-reflection + operations. If None, defaults to the `OPENAI_MODEL_NAME` env-var or "gpt-4o". + Defaults to None.
🧹 Nitpick comments (2)
src/praisonai-agents/praisonaiagents/agent/agent.py (2)
1318-1323: Reflection-prompt construction duplicated – extract helper to avoid driftThe literal block that builds the reflection prompt now appears verbatim in both
chatandachat. Any future tweak (e.g., new metadata fields) will have to be done twice.def _build_reflection_prompt(self, prev_response: str) -> str: base = self.reflect_prompt or "Identify any flaws, improvements, or actions." return ( f"Reflect on your previous response: '{prev_response}'.\n" f"{base}\n" "Provide a \"satisfactory\" status ('yes' or 'no').\n" "Output MUST be JSON with 'reflection' and 'satisfactory'." )Then call
_build_reflection_prompt(response_text)in both places.
1596-1604: Minor: redundantreflection_count = 0inside already-zeroed scope
reflection_countis set to0twice (1591 and 1596). Harmless but noisy.- if self.self_reflect: - reflection_count = 0 - + if self.self_reflect:
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/praisonai-agents/praisonaiagents/agent/agent.py(5 hunks)
🧰 Additional context used
🪛 Pylint (3.3.7)
src/praisonai-agents/praisonaiagents/agent/agent.py
[refactor] 1561-1663: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: quick-test
- GitHub Check: build (3.11)
- GitHub Check: build (3.10)
- GitHub Check: quick-test
🔇 Additional comments (1)
src/praisonai-agents/praisonaiagents/agent/agent.py (1)
370-372:reflect_promptis wired-through correctly – 👍The constructor now takes
reflect_promptand persists it on the instance; the later usages (chat&achat) dereference it safely with a truthiness check, so an empty string re-falls back to the default. No functional issues spotted here.Also applies to: 558-560
Allow to configure reflection prompt
User description
I currently struggle to get any use from the self-reflection feature. During reflection, LLM always reports that the response is satisfactory. I want to provide clear requirements for reflection through the additional parameter
reflect_prompt.This change was useful in my own project, so it will probably be useful for someone else.
PR Type
Enhancement
Description
• Add configurable
reflect_promptparameter to Agent class• Replace hardcoded reflection prompt with customizable option
• Enable users to provide specific reflection requirements
• Maintain backward compatibility with default prompt fallback
Changes walkthrough 📝
agent.py
Add configurable reflection prompt parametersrc/praisonai-agents/praisonaiagents/agent/agent.py
• Added
reflect_promptparameter to Agentinitmethod• Stored
reflect_promptas instance variable• Modified reflection logic to use
custom prompt when provided
• Added fallback to default prompt for
backward compatibility
Summary by CodeRabbit
New Features
Documentation