---
title: Chat Command
description: Interactive command-line chat with your AI agent
---

## Overview

The `lua chat` command provides an interactive command-line interface for conversing with your Lua AI agent in both sandbox and production environments.

```bash
lua chat
```

**Want to test on real channels?** You can also test your agent on WhatsApp, Facebook, Instagram, Email, and Slack without setting up your own channels. See [Quick Testing Channels](/channels/quick-testing) for instant testing on any platform.

## Features

- Test with local skill overrides and persona customizations
- Chat with your live production agent
- Continuous conversation until exit
- All skills automatically included in sandbox

## Prerequisites

1. Configure authentication:

   ```bash
   lua auth configure
   ```

2. Initialize your project (ensures `lua.skill.yaml` exists):

   ```bash
   lua init
   ```

3. Push your skills (required for sandbox testing):

   ```bash
   lua push
   ```

## How It Works

1. **Authentication.** Validates your API key and retrieves user data:

   ```
   ✅ Authenticated
   ```

2. **Environment selection.** Choose between sandbox (testing) or production (live):

   ```
   ? Select environment:
   🔧 Sandbox (with skill overrides)
   🚀 Production
   ```

3. **Sandbox setup.** Compiles and deploys your local skills to sandbox:

   ```
   🔄 Compiling skill...
   🔄 Pushing skills to sandbox...
   ✅ Pushed 2 skills to sandbox
   ```

4. **Chat session.** Interactive conversation begins:

   ```
   ============================================================
   💬 Lua Chat Interface
   Environment: 🔧 Sandbox
   Press Ctrl+C to exit
   ============================================================

   🤖 Assistant: Hi there! How can I help you today?

   👤 You:
   ```

## Sandbox vs Production

### Sandbox (Development & Testing)
```bash
$ lua chat
? Select environment: 🔧 Sandbox
```

**Features:**
- ✅ Local skill overrides
- ✅ Persona customization
- ✅ Environment variables from `.env`
- ✅ Test before deploying

**Setup time:** ~10-30 seconds (includes compilation)

**Use when:**
- Developing new features
- Testing skill changes
- Iterating on persona
- Before pushing to production
### Production (Validation)
```bash
$ lua chat
? Select environment: 🚀 Production
```

**Features:**
- ✅ Production skills only
- ✅ Live production persona
- ✅ Real environment variables
- ✅ Verify deployed changes

**Setup time:** ~1-2 seconds

**Use when:**
- Validating deployed changes
- Testing production experience
- Verifying skill interactions

## Example Session

### Sandbox Mode

```
$ lua chat
✅ Authenticated
? Select environment: 🔧 Sandbox (with skill overrides)
🔄 Setting up sandbox environment...
🔄 Compiling skill...
✅ Skill compiled successfully - 3 tools bundled
🔄 Pushing skills to sandbox...
✅ Pushed 2 skills to sandbox

============================================================
💬 Lua Chat Interface
Environment: 🔧 Sandbox
Press Ctrl+C to exit
============================================================

🤖 Assistant: Hi there! How can I help you today?

👤 You: What's the weather in London?
🤖 Assistant: ...
🤖 Assistant: The current weather in London is 15°C and cloudy with light wind.

👤 You: Search for laptops
🤖 Assistant: ...
🤖 Assistant: I found 5 laptops in our catalog:
1. MacBook Pro - $1999
2. Dell XPS 13 - $1299
3. ThinkPad X1 - $1499
...

👤 You: ^C

👋 Goodbye!
```

## Persona Override

### Configuration

In `lua.skill.yaml`:

```yaml
agent:
  agentId: "agent_abc123"
  organizationId: "org_xyz789"
  persona: |
    You are a helpful customer service assistant.
    You help users with product inquiries and order management.
    Be friendly, professional, and concise.
```

**In Sandbox Mode:**

- Persona is automatically loaded and sent with each request
- Test different persona variations
- Iterate quickly

**In Production Mode:**

- Uses production persona (from server)
- No local override

## Skill Override

### How It Works

Sandbox mode automatically:

  1. Compiles all skills in your project
  2. Pushes to sandbox environment
  3. Gets sandbox IDs for each skill
  4. Includes all sandbox IDs in chat requests

Example override:

```json
{
  "skillOverride": [
    {
      "skillId": "skill_abc123",
      "sandboxId": "sandbox_def456"
    }
  ]
}
```

The AI uses your local sandbox versions instead of production versions.

## Thread Isolation

By default, lua chat uses your agent's shared conversation context. Use the --thread flag to scope a session to an isolated thread — useful for running consecutive tests without state leaking between runs.

### Usage

```bash
# Scope to an explicit thread ID
lua chat --thread my-test-scenario

# Auto-generate a fresh thread ID (printed at session start)
lua chat --thread

# Reuse a thread across multiple non-interactive messages
lua chat -m "step 1" --thread test-flow
lua chat -m "step 2" --thread test-flow

# Isolated test: scoped thread, auto-cleared on exit
lua chat -m "run test" -t --clear

# Explicit thread with auto-clear
lua chat -m "run test" -t my-test --clear
```

### Flags

| Flag | Description |
|------|-------------|
| `-t, --thread [id]` | Scope to a thread. Omit the ID to auto-generate a UUID. The active thread ID is always printed at session start. |
| `--clear, --clear-thread` | Clear the thread's history when the session ends (interactive: on exit; non-interactive: after the response). Requires `--thread`. |

### Clearing a Thread Manually

```bash
lua chat clear --thread my-test-scenario
```

### Testing Workflow Example

Run 10 isolated test cases against your agent, each with a clean slate:

```bash
for i in $(seq 1 10); do
  lua chat -m "test scenario $i" -t "test-run-$i" --clear -e sandbox
done
```
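Building on that loop, you can tally results by exit code. The sketch below assumes the CLI exits non-zero when a run fails (verify that behavior for your version); the scenario messages are illustrative:

```shell
# Run each scenario message on its own auto-cleared sandbox thread
# and count pass/fail by exit status. Chat output is suppressed so
# only the summary line is printed.
run_scenarios() {
  local i=0 pass=0 fail=0
  for msg in "$@"; do
    i=$((i + 1))
    if lua chat -m "$msg" -t "scenario-$i" --clear -e sandbox > /dev/null; then
      pass=$((pass + 1))
    else
      fail=$((fail + 1))
    fi
  done
  echo "passed=$pass failed=$fail"
}

# Usage:
# run_scenarios "ask for a refund" "search for laptops" "report a bug"
```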

## File Attachments

You can attach files to any message using `@<path>` syntax, in both interactive and non-interactive mode.

```
# Attach an image
@/path/to/screenshot.png what's wrong with this UI?

# Attach a document
@report.pdf summarize this

# Mix text and attachment
check @screenshot.png and tell me what you see
```

The `@` must appear at the start of your message or after a space. Email addresses (`user@example.com`) are never treated as attachments.
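As a rough shell approximation of that rule (hypothetical; the CLI's actual parser may differ), a pattern that only matches `@` at the start of the message or after a space naturally skips email addresses:

```shell
# Hypothetical sketch of the attachment-detection rule:
# an @ token counts only at message start or after a space,
# so email addresses like user@example.com are left alone.
extract_attachments() {
  grep -oE '(^| )@[^[:space:]]+' <<<"$1" | tr -d ' '
}

extract_attachments "check @screenshot.png and mail user@example.com"
# -> @screenshot.png
```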

### Supported File Types

**Images** (sent natively to vision-capable models):
| Extension | Type |
|-----------|------|
| `.png` | PNG image |
| `.jpg`, `.jpeg` | JPEG image |
| `.gif` | GIF image |
| `.webp` | WebP image |
| `.bmp` | Bitmap image |
| `.tiff`, `.tif` | TIFF image |
| `.heic` | HEIC image |
**Documents** (sent as-is if the model natively supports the format; otherwise converted to an LLM-friendly format automatically):
| Extension | Type |
|-----------|------|
| `.pdf` | PDF document |
| `.doc`, `.docx` | Word document |
| `.xls`, `.xlsx` | Excel spreadsheet |
| `.ppt`, `.pptx` | PowerPoint presentation |
| `.odt` | OpenDocument text |
| `.epub` | E-book |
**Text formats:**

| Extension | Type |
|-----------|------|
| `.txt` | Plain text |
| `.md` | Markdown |
| `.csv` | Comma-separated values |
| `.tsv` | Tab-separated values |
| `.html`, `.htm` | HTML |
| `.xml` | XML |
| `.json` | JSON |
| `.rtf` | Rich text |
| `.rst` | reStructuredText |
| `.org` | Org-mode |

**Email:**

| Extension | Type |
|-----------|------|
| `.eml` | Email message |
| `.msg` | Outlook message |

Files with unsupported extensions are left as plain text in your message — they are never silently stripped.

**Model support required.** Attachment support depends on the model configured for your agent. If the model doesn't support vision or file inputs, attachments won't be processed — even if the CLI sends them successfully. Check your agent's model configuration if attachments aren't being picked up.

### Limits

- Maximum file size: 10 MB per attachment
- Multiple attachments per message are supported
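If you script attachments, a quick pre-flight size check can save a failed request. This is a sketch assuming the 10 MB limit above; the filename in the usage comment is illustrative:

```shell
# Returns success if the file fits the (assumed) 10 MB per-attachment limit.
check_attachment_size() {
  local max_bytes=$((10 * 1024 * 1024))
  [ "$(wc -c < "$1")" -le "$max_bytes" ]
}

# Usage (hypothetical file):
# check_attachment_size report.pdf && lua chat -m "@report.pdf summarize this"
```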

### Non-Interactive Mode

The same `@<path>` syntax works in `-m` / `--message` flags:

```bash
lua chat -m "@screenshot.png what do you see?" -e production
lua chat -m "review @report.pdf and @notes.txt" -e sandbox
```

## Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| Enter | Send message |
| Ctrl+C | Exit chat |
| Ctrl+D | Exit chat (alternative) |

## Best Practices

**Development workflow:**

1. Make changes to your skills
2. Run `lua chat` in sandbox mode
3. Test changes interactively
4. Iterate until satisfied
5. Run `lua push` to deploy
6. Test again in production mode
7. Deploy with `lua deploy`

**Start with sandbox:**

- Test happy paths
- Test error cases
- Test edge cases
- Test multi-step flows
**Validate with production:**
- Verify deployed changes work
- Test with production data
- Confirm no regressions
**Persona iteration:**

1. Update `persona` in your `LuaAgent` code (`src/index.ts`)
2. Run `lua chat` in sandbox
3. Test conversation style
4. Refine persona
5. Repeat until satisfied
6. Deploy to production with `lua push persona`

## Active log probe (`agent_error` after every turn)

After every `lua chat` turn (interactive or `-m` non-interactive), the CLI runs a quiet `lua logs --type agent_error` probe scoped to the current session. The probe checks for server-side pipeline errors that don't always surface in the chat response itself: billing failures, schema validation failures, LLM provider errors, and post-processor errors.

When the probe is silent, no agent errors fired during your turn — you're clean.

When new errors fire, the CLI prints a one-line warning at the end of the turn:

```
⚠️  2 new agent error(s) during this turn — run `lua logs --type agent_error --limit 2` to inspect.
```

**Empty response:** If the agent returns no text (empty stream), the CLI warns `⚠️ The agent returned an empty response` and points you at `agent_error` and runtime logs so you can see what failed on the server.

Run the suggested command verbatim to inspect the errors.

To opt out (for example in CI that captures only command output), set `LUA_NO_HINTS=1`:

```bash
LUA_NO_HINTS=1 lua chat -m "test" -e production
```

### Debugging recipe

```bash
# Send one isolated test, then check for pipeline errors
lua chat -m "test" -t debug-1 --clear && lua logs --type agent_error --limit 5
```

See [Debugging your agent — the post-deploy loop](/cli/debugging) for the complete flow.

## Troubleshooting

**Debugging after a failed chat:** If a chat turn returns a wrong, empty, or error response, the CLI automatically surfaces a count of `agent_error` logs that fired during that turn. To inspect them, run the suggested `lua logs --type agent_error --limit N` command. See [Debugging your agent — the post-deploy loop](/cli/debugging) for the canonical loop.

**Error:**
```
❌ No API key found
```
**Solution:**
```bash
lua auth configure
```
**Error:**
```
❌ No agent ID found in lua.skill.yaml
```
**Solution:**
```bash
lua init
```
**Error:**
```
❌ Compilation failed
```
**Solution:**
- Fix TypeScript errors in your code
- Check `src/index.ts` for syntax errors
- Verify all imports are correct
**Error:**
```
❌ Failed to push skills to sandbox
```
**Solution:**
```bash
lua push  # Deploy skills first
lua chat  # Then try again
```
**Issue:** Long wait times in sandbox
**Causes:**
- First request after compilation
- Large skill bundles
- Network latency

**Solution:** Subsequent messages will be faster
**Issue:** Persona override not being applied
**Check:**
- Using sandbox mode (not production)
- `agent.persona` exists in `lua.skill.yaml`
- Persona is properly formatted YAML

## Related Commands

- Canonical push → test → check flow + the active log probe
- Inspect `agent_error` and tool execution logs
- Test individual tools with specific inputs
- Deploy skills to server

## Next Steps

- Test tools one at a time
- Push your skills live