---
title: Chat Command
description: Interactive command-line chat with your AI agent
---
The `lua chat` command provides an interactive command-line interface for conversing with your Lua AI agent in both sandbox and production environments.
Running `lua chat`:

- Ensures `lua.skill.yaml` exists (required for sandbox testing)
- Validates your API key and retrieves user data
- Prompts you to choose between sandbox (testing) and production (live)
- Compiles and deploys your local skills to sandbox (sandbox mode only)
```
🤖 Assistant: Hi there! How can I help you today?
👤 You:
```

The interactive conversation then begins.
**Sandbox mode:**

```bash
$ lua chat
? Select environment: 🔧 Sandbox
```
**Features:**
- ✅ Local skill overrides
- ✅ Persona customization
- ✅ Environment variables from `.env`
- ✅ Test before deploying
**Setup time:** ~10-30 seconds (includes compilation)
**Use when:**
- Developing new features
- Testing skill changes
- Iterating on persona
- Before pushing to production
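Since sandbox mode loads environment variables from `.env`, you can point your skills at test credentials there. A minimal sketch (the variable names below are illustrative examples, not keys the CLI requires):

```shell
# .env (picked up in sandbox mode; names are examples, not required keys)
WEATHER_API_KEY=sk-test-123
CATALOG_BASE_URL=https://staging.example.com
```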
**Production mode:**

```bash
$ lua chat
? Select environment: 🚀 Production
```
**Features:**
- ✅ Production skills only
- ✅ Live production persona
- ✅ Real environment variables
- ✅ Verify deployed changes
**Setup time:** ~1-2 seconds
**Use when:**
- Validating deployed changes
- Testing production experience
- Verifying skill interactions
A full sandbox session looks like this:

```
$ lua chat
✅ Authenticated
? Select environment: 🔧 Sandbox (with skill overrides)
🔄 Setting up sandbox environment...
🔄 Compiling skill...
✅ Skill compiled successfully - 3 tools bundled
🔄 Pushing skills to sandbox...
✅ Pushed 2 skills to sandbox
============================================================
💬 Lua Chat Interface
Environment: 🔧 Sandbox
Press Ctrl+C to exit
============================================================
🤖 Assistant: Hi there! How can I help you today?
👤 You: What's the weather in London?
🤖 Assistant: ...
🤖 Assistant: The current weather in London is 15°C and cloudy with light wind.
👤 You: Search for laptops
🤖 Assistant: ...
🤖 Assistant: I found 5 laptops in our catalog:
1. MacBook Pro - $1999
2. Dell XPS 13 - $1299
3. ThinkPad X1 - $1499
...
👤 You: ^C
👋 Goodbye!
```

In `lua.skill.yaml`:
```yaml
agent:
  agentId: "agent_abc123"
  organizationId: "org_xyz789"
  persona: |
    You are a helpful customer service assistant.
    You help users with product inquiries and order management.
    Be friendly, professional, and concise.
```

**In Sandbox Mode:**
- Persona is automatically loaded and sent with each request
- Test different persona variations
- Iterate quickly
**In Production Mode:**
- Uses production persona (from server)
- No local override
Sandbox mode automatically:
- Compiles all skills in your project
- Pushes to sandbox environment
- Gets sandbox IDs for each skill
- Includes all sandbox IDs in chat requests
Example override:

```json
{
  "skillOverride": [
    {
      "skillId": "skill_abc123",
      "sandboxId": "sandbox_def456"
    }
  ]
}
```

The AI uses your local sandbox versions instead of production versions.
By default, `lua chat` uses your agent's shared conversation context. Use the `--thread` flag to scope a session to an isolated thread — useful for running consecutive tests without state leaking between runs.
```bash
# Scope to an explicit thread ID
lua chat --thread my-test-scenario

# Auto-generate a fresh thread ID (printed at session start)
lua chat --thread

# Reuse a thread across multiple non-interactive messages
lua chat -m "step 1" --thread test-flow
lua chat -m "step 2" --thread test-flow

# Isolated test: scoped thread, auto-cleared on exit
lua chat -m "run test" -t --clear

# Explicit thread with auto-clear
lua chat -m "run test" -t my-test --clear
```

| Flag | Description |
|---|---|
| `-t, --thread [id]` | Scope to a thread. Omit the ID to auto-generate a UUID. The active thread ID is always printed at session start. |
| `--clear, --clear-thread` | Clear the thread's history when the session ends (interactive: on exit; non-interactive: after response). Requires `--thread`. |
You can also clear a thread's history explicitly:

```bash
lua chat clear --thread my-test-scenario
```

Run 10 isolated test cases against your agent, each with a clean slate:
```bash
for i in $(seq 1 10); do
  lua chat -m "test scenario $i" -t "test-run-$i" --clear -e sandbox
done
```

You can attach files to any message using `@<path>` syntax — in both interactive and non-interactive mode.
```
# Attach an image
@/path/to/screenshot.png what's wrong with this UI?

# Attach a document
@report.pdf summarize this

# Mix text and attachment
check @screenshot.png and tell me what you see
```

The `@` must appear at the start of your message or after a space. Email addresses (`user@example.com`) are never treated as attachments.
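The detection rule can be sketched as a shell check. The regex below is illustrative, not the CLI's actual parser: an `@` counts only at the start of the message or after whitespace, which is why email addresses are skipped.

```shell
# Hypothetical helper mirroring the rule: @ at start or after whitespace.
has_attachment() {
  printf '%s' "$1" | grep -Eq '(^|[[:space:]])@[^[:space:]]+'
}

has_attachment '@report.pdf summarize this'     && echo "attachment"
has_attachment 'mail user@example.com about it' || echo "plain text"
```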
**Images:**

| Extension | Type |
|-----------|------|
| `.png` | PNG image |
| `.jpg`, `.jpeg` | JPEG image |
| `.gif` | GIF image |
| `.webp` | WebP image |
| `.bmp` | Bitmap image |
| `.tiff`, `.tif` | TIFF image |
| `.heic` | HEIC image |
**Documents:**

| Extension | Type |
|-----------|------|
| `.pdf` | PDF document |
| `.doc`, `.docx` | Word document |
| `.xls`, `.xlsx` | Excel spreadsheet |
| `.ppt`, `.pptx` | PowerPoint presentation |
| `.odt` | OpenDocument text |
| `.epub` | E-book |
Files with unsupported extensions are left as plain text in your message — they are never silently stripped.
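The two tables above boil down to an extension check. A sketch of the same mapping (the helper is illustrative, not part of the CLI, and case-insensitive matching is an assumption):

```shell
# Classify a path by the attachment types listed above.
attachment_type() {
  case "$(printf '%s' "${1##*.}" | tr '[:upper:]' '[:lower:]')" in
    png|jpg|jpeg|gif|webp|bmp|tiff|tif|heic) echo "image" ;;
    pdf|doc|docx|xls|xlsx|ppt|pptx|odt|epub) echo "document" ;;
    *) echo "plain text" ;;
  esac
}

attachment_type screenshot.png   # image
attachment_type report.pdf       # document
attachment_type notes.txt        # plain text
```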
**Model support required.** Attachment support depends on the model configured for your agent. If the model doesn't support vision or file inputs, attachments won't be processed — even if the CLI sends them successfully. Check your agent's model configuration if attachments aren't being picked up.

- Maximum file size: 10 MB per attachment
- Multiple attachments per message are supported
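Given the 10 MB limit, a pre-flight size check before attaching large files can save a failed turn. A sketch (the `under_limit` helper is hypothetical, not a CLI feature):

```shell
# Hypothetical guard: succeeds when the file fits the 10 MB attachment limit.
under_limit() {
  max=$((10 * 1024 * 1024))
  size=$(wc -c < "$1")
  [ "$size" -le "$max" ]
}

# Usage sketch:
#   under_limit report.pdf && lua chat -m "@report.pdf summarize this"
```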
The same `@<path>` syntax works with the `-m` / `--message` flag:

```bash
lua chat -m "@screenshot.png what do you see?" -e production
lua chat -m "review @report.pdf and @notes.txt" -e sandbox
```

| Shortcut | Action |
|---|---|
| `Enter` | Send message |
| `Ctrl+C` | Exit chat |
| `Ctrl+D` | Exit chat (alternative) |
**Validate with production:**
- Verify deployed changes work
- Test with production data
- Confirm no regressions
After every `lua chat` turn — interactive or non-interactive via `-m` — the CLI runs a quiet `lua logs --type agent_error` probe scoped to the current session. The probe checks for server-side pipeline errors that don't always surface in the chat response itself (billing failures, schema validation failures, LLM provider errors, post-processor errors).
When the probe is silent, no agent errors fired during your turn — you're clean.
When new errors fire, the CLI prints a one-line warning at the end of the turn:

```
⚠️ 2 new agent error(s) during this turn — run `lua logs --type agent_error --limit 2` to inspect.
```
**Empty response:** If the agent returns no text (empty stream), the CLI warns `⚠️ The agent returned an empty response` and points you at `agent_error` and runtime logs so you can see what failed on the server.
Run the suggested command verbatim to inspect the errors.
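If you capture chat output in CI, that warning line is easy to screen for. A sketch that extracts the error count from the warning format shown above (the parsing is illustrative, and the format could change):

```shell
# Pull the error count out of a captured warning line.
line='⚠️ 2 new agent error(s) during this turn — run `lua logs --type agent_error --limit 2` to inspect.'
count=$(printf '%s\n' "$line" | sed -n 's/.*⚠️ \([0-9][0-9]*\) new agent error(s).*/\1/p')

if [ -n "$count" ] && [ "$count" -gt 0 ]; then
  echo "found $count agent error(s)"
fi
```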
To opt out (for example in CI that captures only command output), set `LUA_NO_HINTS=1`:

```bash
LUA_NO_HINTS=1 lua chat -m "test" -e production
```

```bash
# Send one isolated test, then check for pipeline errors
lua chat -m "test" -t debug-1 --clear && lua logs --type agent_error --limit 5
```

See [Debugging your agent — the post-deploy loop](/cli/debugging) for the complete flow.
**Debugging after a failed chat:** If a chat turn returns a wrong, empty, or error response, the CLI automatically surfaces a count of `agent_error` logs that fired during that turn. To inspect them, run the suggested `lua logs --type agent_error --limit N` command. See [Debugging your agent — the post-deploy loop](/cli/debugging) for the canonical loop.

**Error:**

```
❌ No API key found
```

**Solution:**
```bash
lua auth configure
```
**Missing `lua.skill.yaml`:**

**Solution:**
```bash
lua init
```
**Skill compilation fails:**

**Solution:**
- Fix TypeScript errors in your code
- Check `src/index.ts` for syntax errors
- Verify all imports are correct
**Skills not deployed:**

**Solution:**
```bash
lua push # Deploy skills first
lua chat # Then try again
```
**Slow first response:**

**Causes:**
- First request after compilation
- Large skill bundles
- Network latency
**Solution:** Subsequent messages will be faster
**Persona not applied:**

**Check:**
- Using sandbox mode (not production)
- `agent.persona` exists in `lua.skill.yaml`
- Persona is properly formatted YAML