Merged
104 changes: 104 additions & 0 deletions src/praisonai-ts/src/cli/features/external-agents.ts
@@ -19,6 +19,11 @@ export interface ExternalAgentResult {
duration: number;
}

export type StreamEvent =
| { type: 'text'; content: string }
| { type: 'json'; data: unknown }
| { type: 'error'; error: string };
Comment on lines +22 to +25
⚠️ Potential issue | 🟠 Major

StreamEvent advertises an error variant that the implementations never emit.

Right now failures reject the async generator instead of yielding { type: 'error' }, so consumers branching on event.type cannot rely on the exported contract. Either emit the error variant before terminating, or remove it from StreamEvent and document that stream errors are surfaced via throw.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-ts/src/cli/features/external-agents.ts` around lines 22 - 25,
The StreamEvent union currently includes an { type: 'error'; error: string }
variant but the async generator implementation rejects on failures instead of
yielding that variant; pick one approach and make it consistent: either (A)
update the generator(s) that produce StreamEvent to catch errors and yield {
type: 'error', error: string } before returning/closing (ensure functions that
consume the generator handle this sentinel and stop iterating), or (B) remove
the error variant from the exported StreamEvent type and update
documentation/comments for the generator-producing functions to state that
errors are surfaced via thrown exceptions (then update any consumers that
currently branch on event.type to handle thrown errors). Reference StreamEvent
and the async generator producers/consumers when making the change.
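A minimal sketch of option (A) from the finding above — wrapping a stream so that a rejection surfaces as a terminal error event. The wrapper name and shape are illustrative, not part of the repository:

```typescript
type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'json'; data: unknown }
  | { type: 'error'; error: string };

// Wrap a source stream so failures surface as a final { type: 'error' }
// event instead of rejecting the generator.
async function* withErrorEvents(
  source: AsyncGenerator<StreamEvent, void, unknown>
): AsyncGenerator<StreamEvent, void, unknown> {
  try {
    yield* source;
  } catch (err) {
    yield { type: 'error', error: err instanceof Error ? err.message : String(err) };
  }
}
```

Consumers branching on `event.type` then see the advertised variant, and iteration ends cleanly instead of throwing mid-loop.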


/**
* Base class for external agent integrations
*/
@@ -42,6 +47,11 @@ export abstract class BaseExternalAgent {
*/
abstract execute(prompt: string): Promise<ExternalAgentResult>;

/**
* Stream output from the external agent
*/
abstract stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown>;
Comment on lines +50 to +53
⚠️ Potential issue | 🟠 Major

Avoid making stream() a new abstract requirement on the exported base class.

Line 53 turns stream() into a mandatory member for every downstream BaseExternalAgent subclass, which is a source-breaking change despite the PR’s backward-compatibility goal. A concrete fallback in the base class keeps existing custom agents compiling and still lets built-ins override for true streaming.

Suggested non-breaking shape
-  abstract stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown>;
+  /**
+   * Stream output from the external agent.
+   * `@param` prompt Prompt to send to the external agent.
+   * `@returns` Async stream of external-agent events.
+   * `@example`
+   * for await (const event of agent.stream("hello")) {
+   *   console.log(event);
+   * }
+   */
+  async *stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown> {
+    const result = await this.execute(prompt);
+    if (!result.success) {
+      throw new Error(result.error || `${this.getName()} execution failed`);
+    }
+    yield { type: 'text', content: result.output };
+  }

As per coding guidelines, "All public APIs must include JSDoc comments with @param, @returns, and @example sections showing usage".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-ts/src/cli/features/external-agents.ts` around lines 50 - 53,
Change stream() on BaseExternalAgent from an abstract method to a concrete async
generator that provides a safe non-breaking default (so subclasses aren’t forced
to implement it); implement a simple fallback AsyncGenerator that emits no
streaming updates or a single completion StreamEvent and then returns, and keep
the method signature using StreamEvent. Update the exported
BaseExternalAgent.stream JSDoc to include `@param` prompt, `@returns`
AsyncGenerator<StreamEvent, void, unknown>, and an `@example` showing basic usage
so public API docs are complete; built-in agents can still override stream() for
real streaming.


/**
* Get the agent name
*/
@@ -97,6 +107,65 @@ export abstract class BaseExternalAgent {
});
}

/**
* Stream command output line by line
*/
protected async *streamCommand(args: string[]): AsyncGenerator<StreamEvent, void, unknown> {
const { spawn } = await import('child_process');

const proc = spawn(this.config.command, args, {
cwd: this.config.cwd || process.cwd(),
env: { ...process.env, ...this.config.env },
Copilot AI Apr 16, 2026

streamCommand() spawns a long-running process but does not pass timeout: this.config.timeout (unlike runCommand). This can cause streaming calls to hang indefinitely if the external CLI stalls. Consider wiring in the same timeout behavior (or supporting an AbortSignal) for streaming executions.

Suggested change
env: { ...process.env, ...this.config.env },
env: { ...process.env, ...this.config.env },
timeout: this.config.timeout,

timeout: this.config.timeout,
stdio: ['pipe', 'pipe', 'pipe']
});
Comment on lines +116 to +121
high

The spawn call is missing an error listener. If the command is not found or fails to start, an error event will be emitted. If there are no listeners for the error event, the Node.js process will throw an unhandled exception and crash. Adding a simple error listener prevents this.

    const proc = spawn(this.config.command, args, {
      cwd: this.config.cwd || process.cwd(),
      env: { ...process.env, ...this.config.env },
      stdio: ['pipe', 'pipe', 'pipe']
    });

    proc.on('error', () => {}); // Prevent unhandled exception if command not found


if (!proc.stdout) {
throw new Error('Failed to create stdout stream');
}

let stderr = '';
proc.stderr?.on('data', (chunk) => {
stderr += chunk.toString();
});

const exit = new Promise<number | null>((resolve, reject) => {
proc.once('error', reject);
proc.once('close', resolve);
});

const readline = await import('readline');
const rl = readline.createInterface({
input: proc.stdout,
crlfDelay: Infinity
});
Comment on lines +116 to +141
Copilot AI Apr 16, 2026

streamCommand() only reads from proc.stdout and never consumes proc.stderr. If the child process writes enough to stderr, the buffer can fill and block the process, stalling the stream. Drain stderr (e.g., with a separate listener/interface) or redirect it to stdout when streaming.


try {
for await (const line of rl) {
if (line.trim()) {
// Try to parse as JSON first
try {
const event = JSON.parse(line);
yield { type: 'json', data: event };
} catch {
// If not JSON, treat as text
yield { type: 'text', content: line };
}
}
}
Comment on lines +144 to +155
⚠️ Potential issue | 🟡 Minor

Preserve blank lines in streamed text output.

The if (line.trim()) guard drops intentional empty lines, which means markdown paragraphs and code blocks cannot be reconstructed faithfully by callers consuming Claude/Gemini/Codex/Generic streams. Please preserve empty text chunks here and let the consumer decide whether to ignore them.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-ts/src/cli/features/external-agents.ts` around lines 144 - 155,
Remove the "if (line.trim())" guard so blank lines are not dropped; inside the
loop that iterates "for await (const line of rl)" still attempt JSON.parse in
the existing try/catch and on error yield the text variant (yield { type:
'text', content: line }) — this ensures empty lines (line === "" or whitespace)
are preserved and emitted for consumers to handle while keeping the existing
JSON parsing behavior.


const exitCode = await exit;
if (exitCode !== 0) {
throw new Error(stderr || `${this.config.command} exited with code ${exitCode}`);
}
} finally {
rl.close();
if (!proc.killed) {
proc.kill();
}
}
Comment on lines +161 to +166
Copilot AI Apr 16, 2026

streamCommand() always proc.kill() in finally and never checks the subprocess exit code. This can terminate a still-running process prematurely and also makes failures (non-zero exit) invisible to callers. Consider awaiting the close/exit event, surfacing non-zero codes (throw or emit an error/exit event), and only killing the process on cancellation/timeouts.

}
coderabbitai[bot] marked this conversation as resolved.
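The streaming findings above (missing timeout, no 'error' listener, undrained stderr, dropped blank lines, unchecked exit code) can be addressed together. The following is an illustrative sketch under those suggestions, not the repository's implementation; config plumbing is omitted:

```typescript
import { spawn } from 'child_process';
import * as readline from 'readline';

type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'json'; data: unknown };

// Hardened sketch: wires in a timeout, listens for spawn errors,
// drains stderr, preserves blank lines, and surfaces non-zero exits.
async function* streamCommand(
  command: string,
  args: string[],
  timeoutMs?: number
): AsyncGenerator<StreamEvent, void, unknown> {
  const proc = spawn(command, args, {
    timeout: timeoutMs,
    stdio: ['pipe', 'pipe', 'pipe'],
  });

  let stderr = '';
  proc.stderr?.on('data', (chunk) => { stderr += chunk.toString(); }); // drain stderr

  const exit = new Promise<number | null>((resolve, reject) => {
    proc.once('error', reject); // e.g. command not found — no unhandled 'error' event
    proc.once('close', resolve);
  });
  exit.catch(() => {}); // avoid an unhandled rejection while stdout is still being read

  const rl = readline.createInterface({ input: proc.stdout!, crlfDelay: Infinity });
  try {
    for await (const line of rl) {
      try {
        yield { type: 'json', data: JSON.parse(line) }; // JSON lines
      } catch {
        yield { type: 'text', content: line }; // plain text; blank lines included
      }
    }
    const code = await exit;
    if (code !== 0) {
      throw new Error(stderr || `${command} exited with code ${code}`);
    }
  } finally {
    rl.close();
    if (!proc.killed) proc.kill(); // only relevant if iteration stopped early
  }
}
```

Awaiting the 'close' promise after the read loop means a normal completion is never killed mid-run; the `finally` kill only fires when the consumer abandons the stream early.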

/**
* Check if a command exists
*/
@@ -131,6 +200,10 @@ export class ClaudeCodeAgent extends BaseExternalAgent {
return this.runCommand(['--print', prompt]);
}

async *stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown> {
yield* this.streamCommand(['--print', '--output-format', 'stream-json', prompt]);
}

async executeWithSession(prompt: string, sessionId?: string): Promise<ExternalAgentResult> {
const args = ['--print'];
if (sessionId) {
@@ -163,6 +236,10 @@ export class GeminiCliAgent extends BaseExternalAgent {
async execute(prompt: string): Promise<ExternalAgentResult> {
return this.runCommand(['-m', this.model, prompt]);
}

async *stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown> {
yield* this.streamCommand(['-m', this.model, '--json', prompt]);
}
}

/**
@@ -184,6 +261,10 @@ export class CodexCliAgent extends BaseExternalAgent {
async execute(prompt: string): Promise<ExternalAgentResult> {
return this.runCommand(['exec', '--full-auto', prompt]);
}

async *stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown> {
yield* this.streamCommand(['exec', '--full-auto', '--json', prompt]);
}
}

/**
@@ -205,6 +286,19 @@ export class AiderAgent extends BaseExternalAgent {
async execute(prompt: string): Promise<ExternalAgentResult> {
return this.runCommand(['--message', prompt, '--yes']);
}

async *stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown> {
// Aider doesn't support JSON streaming, so just yield text events
const result = await this.execute(prompt);
if (!result.success) {
throw new Error(result.error || 'Aider execution failed');
}
for (const line of result.output.split('\n')) {
if (line.trim()) {
yield { type: 'text', content: line };
}
Comment on lines +296 to +299
⚠️ Potential issue | 🟡 Minor

Aider’s fallback stream also strips meaningful blank lines.

Line 297 applies the same trim() filter, so streamed aider output loses paragraph and code-block separation too. This path should preserve empty lines just like the shared streaming helper.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-ts/src/cli/features/external-agents.ts` around lines 296 - 299,
The loop over result.output is trimming and skipping blank lines (if
(line.trim())), which removes meaningful empty lines; change it to preserve
empty lines like the shared streaming helper by removing the trim-based filter
and yield every line (including empty strings) — update the loop that iterates
result.output and the yield { type: 'text', content: line } so it no longer uses
line.trim() to skip lines.

}
}
Comment on lines +290 to +301
medium

This implementation of stream is not actually streaming; it waits for the entire command to finish before yielding any output. Since streamCommand already handles non-JSON output by yielding text events, you can use it here to provide real-time streaming of Aider's output.

  async *stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown> {
    yield* this.streamCommand(['--message', prompt, '--yes']);
  }

coderabbitai[bot] marked this conversation as resolved.
}

/**
@@ -231,6 +325,16 @@ export class GenericExternalAgent extends BaseExternalAgent {
}
return this.runCommand(args);
}

async *stream(prompt: string): AsyncGenerator<StreamEvent, void, unknown> {
const args = [...(this.config.args || [])];
if (this.promptArg) {
args.push(this.promptArg, prompt);
} else {
args.push(prompt);
}
yield* this.streamCommand(args);
}
}

/**
16 changes: 16 additions & 0 deletions src/praisonai/praisonai/integrations/__init__.py
@@ -41,6 +41,10 @@
'ManagedAgentIntegration', # backward compat alias
'ManagedBackendConfig', # backward compat alias
'get_available_integrations',
'ExternalAgentRegistry',
'get_registry',
'register_integration',
'create_integration',
]


@@ -79,4 +83,16 @@ def __getattr__(name):
elif name == 'get_available_integrations':
from .base import get_available_integrations
return get_available_integrations
elif name == 'ExternalAgentRegistry':
from .registry import ExternalAgentRegistry
return ExternalAgentRegistry
elif name == 'get_registry':
from .registry import get_registry
return get_registry
elif name == 'register_integration':
from .registry import register_integration
return register_integration
elif name == 'create_integration':
from .registry import create_integration
return create_integration
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
52 changes: 40 additions & 12 deletions src/praisonai/praisonai/integrations/base.py
@@ -285,19 +285,47 @@ def get_available_integrations() -> Dict[str, bool]:
"""
Get a dictionary of all integrations and their availability status.

Backward compatibility wrapper. Use ExternalAgentRegistry for new code.

Returns:
dict: Mapping of integration name to availability (True/False)
"""
from .claude_code import ClaudeCodeIntegration
from .gemini_cli import GeminiCLIIntegration
from .codex_cli import CodexCLIIntegration
from .cursor_cli import CursorCLIIntegration

integrations = {
'claude': ClaudeCodeIntegration(),
'gemini': GeminiCLIIntegration(),
'codex': CodexCLIIntegration(),
'cursor': CursorCLIIntegration(),
}
# Import here to avoid circular imports
try:
from .registry import get_registry
import asyncio

registry = get_registry()

# Handle async call in sync context
try:
# Try to get existing event loop
loop = asyncio.get_event_loop()
if loop.is_running():
# We're in an async context, use create_task
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(asyncio.run, registry.get_available())
return future.result()
else:
# No running loop, safe to use asyncio.run
return asyncio.run(registry.get_available())
except RuntimeError:
# No event loop, safe to use asyncio.run
return asyncio.run(registry.get_available())
Comment on lines +300 to +315
medium

This block introduces a complex and potentially fragile mechanism to call an asynchronous method from a synchronous context. If ExternalAgentRegistry.get_available is changed to be synchronous (as it doesn't perform any actual async work), this entire block can be simplified to a direct call, improving performance and maintainability.

Suggested change
# Handle async call in sync context
try:
# Try to get existing event loop
loop = asyncio.get_event_loop()
if loop.is_running():
# We're in an async context, use create_task
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(asyncio.run, registry.get_available())
return future.result()
else:
# No running loop, safe to use asyncio.run
return asyncio.run(registry.get_available())
except RuntimeError:
# No event loop, safe to use asyncio.run
return asyncio.run(registry.get_available())
return registry.get_available()


return {name: integration.is_available for name, integration in integrations.items()}
except ImportError:
# Fallback to original implementation
from .claude_code import ClaudeCodeIntegration
from .gemini_cli import GeminiCLIIntegration
from .codex_cli import CodexCLIIntegration
from .cursor_cli import CursorCLIIntegration

integrations = {
'claude': ClaudeCodeIntegration(),
'gemini': GeminiCLIIntegration(),
'codex': CodexCLIIntegration(),
'cursor': CursorCLIIntegration(),
}

return {name: integration.is_available for name, integration in integrations.items()}
37 changes: 32 additions & 5 deletions src/praisonai/praisonai/integrations/codex_cli.py
@@ -45,35 +45,55 @@ class CodexCLIIntegration(BaseCLIIntegration):
output_schema: Path to JSON schema for structured output
"""

VALID_APPROVAL_MODES = {"suggest", "auto-edit", "full-auto"}

def __init__(
self,
workspace: str = ".",
timeout: int = 300,
full_auto: bool = False,
approval_mode: str = "suggest", # suggest, auto-edit, full-auto
sandbox: str = "default",
json_output: bool = False,
output_schema: Optional[str] = None,
output_file: Optional[str] = None,
provider: Optional[str] = None, # OpenAI, OpenRouter, Azure, Gemini, etc.
# Backward compatibility
full_auto: Optional[bool] = None,
):
Comment on lines +54 to 62
Copilot AI Apr 16, 2026

There are existing unit tests for CodexCLIIntegration (see tests/unit/integrations/test_codex_cli.py), but the newly added options (approval_mode and provider) are not covered. Please add tests asserting that approval_mode="auto-edit" adds --auto-edit, approval_mode="full-auto" adds --full-auto, and provider adds --provider <value>, while full_auto= continues to work.

"""
Initialize Codex CLI integration.

Args:
workspace: Working directory for CLI execution
timeout: Timeout in seconds for CLI execution
full_auto: Whether to allow file modifications (--full-auto)
approval_mode: Approval mode ("suggest", "auto-edit", "full-auto")
sandbox: Sandbox mode ("default", "danger-full-access")
json_output: Whether to use JSON streaming output (--json)
output_schema: Path to JSON schema for structured output
output_file: Path to save the final output (-o)
provider: Model provider ("openai", "openrouter", "azure", "gemini", "ollama", etc.)
"""
super().__init__(workspace=workspace, timeout=timeout)

self.full_auto = full_auto
# Handle backward compatibility
if full_auto is not None:
approval_mode = "full-auto" if full_auto else "suggest"

if approval_mode not in self.VALID_APPROVAL_MODES:
raise ValueError(
f"Invalid approval_mode: '{approval_mode}'. "
f"Must be one of: {', '.join(sorted(self.VALID_APPROVAL_MODES))}"
)

self.approval_mode = approval_mode
coderabbitai[bot] marked this conversation as resolved.
self.sandbox = sandbox
self.json_output = json_output
self.output_schema = output_schema
self.output_file = output_file
self.provider = provider

# Backward compatibility
self.full_auto = approval_mode == "full-auto"

@property
def cli_command(self) -> str:
@@ -99,9 +119,12 @@ def _build_command(self, task: str, **options) -> List[str]:
# Add task
cmd.append(task)

# Add full auto flag if enabled
if self.full_auto:
# Add approval mode
if self.approval_mode == "full-auto":
cmd.append("--full-auto")
elif self.approval_mode == "auto-edit":
cmd.append("--auto-edit")
# suggest is the default, no flag needed
Comment on lines +122 to +127
Copilot AI Apr 16, 2026

approval_mode is accepted as a free-form string, but _build_command() only handles "suggest", "auto-edit", and "full-auto". Any other value will silently fall back to suggest (no flag), which can hide misconfiguration. Consider validating approval_mode in __init__ and raising a clear ValueError for unsupported values.


# Add sandbox mode if not default
if self.sandbox and self.sandbox != "default":
Expand All @@ -119,6 +142,10 @@ def _build_command(self, task: str, **options) -> List[str]:
if self.output_file:
cmd.extend(["-o", self.output_file])

# Add provider if specified
if self.provider:
cmd.extend(["--provider", self.provider])

return cmd

async def execute(self, prompt: str, **options) -> str: