
Chat command should use LLM to converse as the agent #69

@DeveshParagiri

Description

Current Behavior

The extropy chat command is currently a stub that dumps the agent's context snapshot on every message:

chat> hello
Agent `agent_01` context snapshot:
- Position: actively_oppose_ban
- Sentiment: 1.000
...

It doesn't actually have a conversation with the agent.

Expected Behavior

The chat command should use an LLM to roleplay AS the agent, grounded in their:

  • Persona attributes
  • Simulation state (position, sentiment, conviction)
  • Memory traces / reasoning history
  • Conversation history from simulation
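
Grounding could be done by folding those sources into a single system prompt. A minimal sketch, assuming a plain dict-shaped agent record (the field names below are illustrative, not extropy's actual schema):

```python
# Sketch: build a system prompt that grounds the LLM in the agent's
# persona, simulation state, and memory traces.
# NOTE: the agent fields and memory format here are illustrative
# assumptions, not extropy's actual schema.

def build_system_prompt(agent: dict) -> str:
    persona = "\n".join(f"- {k}: {v}" for k, v in agent["persona"].items())
    state = agent["state"]
    memories = "\n".join(f"- {m}" for m in agent.get("memories", []))
    return (
        f"You are roleplaying as agent {agent['id']}. Stay in character.\n\n"
        f"Persona:\n{persona}\n\n"
        f"Current stance: {state['position']} "
        f"(sentiment {state['sentiment']:.3f}, conviction {state['conviction']:.3f})\n\n"
        f"Recent memories:\n{memories}\n\n"
        "Answer in first person, consistent with the persona and stance above."
    )

prompt = build_system_prompt({
    "id": "agent_01",
    "persona": {"age": 41, "occupation": "teacher"},
    "state": {"position": "actively_oppose_ban", "sentiment": 1.0, "conviction": 0.8},
    "memories": ["Attended last board meeting", "Read the banned-books list"],
})
```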

Implementation

  1. Build system prompt with agent persona + current state
  2. Include memory traces as context
  3. Send user messages to LLM (use simple_call_async)
  4. Return responses as if the agent is speaking
  5. Store conversation turns in chat_messages table (already exists)
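
One turn through steps 3–5 might look like the sketch below. `llm_call` stands in for `simple_call_async` (its exact signature is assumed here), and the in-memory `history` list stands in for rows in the existing chat_messages table:

```python
import asyncio

# Sketch of a single chat turn. `llm_call` is a stand-in for
# simple_call_async (signature assumed); `history` is a stand-in for
# the chat_messages table.

async def chat_turn(llm_call, system_prompt: str,
                    history: list[dict], user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})       # persist user turn
    messages = [{"role": "system", "content": system_prompt}] + history
    reply = await llm_call(messages)                            # send to the LLM
    history.append({"role": "assistant", "content": reply})     # persist agent reply
    return reply                                                # agent "speaks"

# Usage with a stubbed LLM call:
async def fake_llm(messages):
    return f"(in character, after {len(messages)} messages)"

history: list[dict] = []
reply = asyncio.run(chat_turn(fake_llm, "You are agent_01.", history, "Hey!"))
```

Keeping the system prompt out of `history` means the persisted rows map cleanly onto user/assistant turns, while the grounding prompt can be rebuilt from fresh agent state on every call.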

Example

chat> Hey, what do you think about the book ban?
I think it's outrageous censorship. The school board made a heavy-handed 
decision without any transparent review process. I'm planning to attend 
the next board meeting to push back publicly.

chat> But don't you think some books aren't appropriate for kids?
Sure, some content isn't age-appropriate — I'm not naive about that. But 
a blanket ban isn't the answer. We need grade-level review and parental 
opt-out, not district-wide removals that override teachers and parents.

Files

  • extropy/cli/commands/chat.py — replace _summarize_context with LLM call
