---
title: "Install & Run"
description: "Install the Atmosphere CLI and have a running app in 30 seconds"
sidebar:
  order: 0
---

Get a running Atmosphere app in seconds — no Maven, no project setup, no boilerplate.

## Install the CLI

```bash
curl -fsSL https://raw.githubusercontent.com/Atmosphere/atmosphere/main/cli/install.sh | sh
```

Or with Homebrew:

```bash
brew install Atmosphere/tap/atmosphere
```

The installer checks for Java 21+, downloads the `atmosphere` script to `/usr/local/bin`, and creates `~/.atmosphere/` for caching.
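
The Java check amounts to parsing the major version out of `java -version` output. A minimal sketch, assuming typical version strings (the real `install.sh` may parse differently):

```bash
# Illustrative sketch of the installer's "Java 21+" gate -- not the real install.sh.
require_java() {
  local raw="$1"   # e.g. the first line of `java -version` output
  local major
  major=$(printf '%s' "$raw" | sed -n 's/.*version "\([0-9]*\).*/\1/p')
  if [ "${major:-0}" -ge 21 ]; then
    echo "OK: Java $major"
  else
    echo "ERROR: Java 21+ required" >&2
    return 1
  fi
}

require_java 'openjdk version "21.0.2" 2024-01-16'
```

You never run this yourself; it only illustrates why the installer insists on Java 21 or newer before downloading anything.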

## Run Your First App

```bash
atmosphere run spring-boot-chat
```

That's it. The CLI downloads a pre-built JAR from GitHub Releases, caches it in `~/.atmosphere/cache/`, and starts a WebSocket chat app on `http://localhost:8080`. Open it in your browser — you have a working real-time chat.
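
The download-once, reuse-afterwards behavior can be sketched in a few lines of shell. The paths and naming below are illustrative assumptions, not the CLI's actual internals:

```bash
# Toy model of `atmosphere run` caching: the second run never re-downloads.
CACHE_DIR="${TMPDIR:-/tmp}/atmosphere-cache-demo"

fetch_sample() {
  local name="$1"
  local jar="${CACHE_DIR}/${name}.jar"
  mkdir -p "$CACHE_DIR"
  if [ -f "$jar" ]; then
    echo "cache hit: ${name}.jar"
  else
    : > "$jar"   # stand-in for the real GitHub Releases download
    echo "downloaded: ${name}.jar"
  fi
}

fetch_sample spring-boot-chat   # first run fetches the JAR
fetch_sample spring-boot-chat   # later runs reuse the cached copy
```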

### Try an AI Chat

```bash
atmosphere run spring-boot-ai-chat --env LLM_API_KEY=your-key
```

This starts an AI streaming chat that connects to Gemini, GPT, Claude, or Ollama — configured via environment variables. Without an API key, it runs in demo mode with simulated streaming.

### Browse All 18 Samples

```bash
atmosphere install
```

The interactive picker shows every sample grouped by category. Pick one, then choose to **run it** or **install its source code** into your current directory:

```
  Atmosphere Samples (18 available)

  CHAT
   1) spring-boot-chat              Real-time WebSocket chat with Spring Boot
   2) quarkus-chat                  Real-time WebSocket chat with Quarkus
   3) embedded-jetty-ws-chat        Embedded Jetty WebSocket chat (no framework)

  AI
   4) spring-boot-ai-chat           AI streaming with conversation memory and structured events
   5) spring-boot-ai-classroom      Multiple clients share streaming AI responses
   6) spring-boot-adk-chat          AI chat with Google ADK
   7) spring-boot-langchain4j-chat  AI chat with LangChain4j
  ...

  Pick a sample [1-18]:
```

If [fzf](https://github.com/junegunn/fzf) is installed, you get fuzzy-search instead of numbered menus.
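
That fallback is simple to picture: use `fzf` when available, otherwise print a numbered menu. A deterministic toy sketch (the real picker's flags and layout will differ; `HAS_FZF` stands in for the `command -v fzf` check so the demo never blocks on an interactive UI):

```bash
# Sketch of the picker's fzf-or-numbered-menu fallback (illustrative only).
HAS_FZF=false   # simulates `command -v fzf >/dev/null 2>&1`

pick_sample() {
  if [ "$HAS_FZF" = true ]; then
    fzf --prompt 'Pick a sample: '           # fuzzy-search path
  else
    awk '{ printf "%2d) %s\n", NR, $0 }'     # numbered-menu fallback
  fi
}

printf 'spring-boot-chat\nquarkus-chat\n' | pick_sample
```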

### Filter by Category or Tag

```bash
atmosphere install --tag ai            # AI samples only
atmosphere install --category tools    # Tool-calling samples
atmosphere list                        # List all samples without the picker
atmosphere info spring-boot-ai-chat    # Show details about a specific sample
```
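
Conceptually, `--tag` just matches against each sample's tag list. A toy sketch over an invented `name|tags` index (the CLI's real index format is not shown on this page):

```bash
# Toy tag filter -- the "name|tag,tag" index format is invented for illustration.
INDEX='spring-boot-chat|chat
spring-boot-ai-chat|ai,chat
spring-boot-ai-tools|ai,tools
spring-boot-mcp-server|infrastructure'

filter_by_tag() {
  printf '%s\n' "$INDEX" | awk -F'|' -v tag="$1" '
    { n = split($2, tags, ",")
      for (i = 1; i <= n; i++) if (tags[i] == tag) print $1 }'
}

filter_by_tag ai   # prints only the samples carrying the "ai" tag
```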

## Create a New Project

When you're ready to write your own code:

```bash
atmosphere new my-app
atmosphere new my-ai-app --template ai-chat
atmosphere new my-rag-app --template rag
```

This scaffolds a complete Spring Boot + Atmosphere project with `pom.xml`, source code, and a frontend. Run it with `./mvnw spring-boot:run`.
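
Assuming the standard Maven layout, the generated tree looks roughly like this (illustrative; the exact files depend on the template):

```
my-app/
├── pom.xml            # Atmosphere + framework dependencies
├── mvnw               # Maven wrapper used by `./mvnw spring-boot:run`
└── src/main/
    ├── java/          # application class and Atmosphere endpoint
    └── resources/
        └── static/    # the frontend
```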

### Available Templates

| Template | What you get |
|----------|-------------|
| `chat` | WebSocket chat with Broadcaster and `@ManagedService` |
| `ai-chat` | `@AiEndpoint` with streaming, conversation memory, and configurable LLM backend |
| `ai-tools` | AI tool calling with `@AiTool` and LangChain4j |
| `rag` | RAG chat with Spring AI vector store and embeddings |
| `quarkus-chat` | Real-time chat on Quarkus instead of Spring Boot |

### npx Alternative (Zero Install)

No Java CLI needed — scaffold from npm:

```bash
npx create-atmosphere-app my-chat-app
npx create-atmosphere-app my-ai-app --template ai-chat
```

### JBang Generator (Advanced)

For full control over the generated project:

```bash
jbang generator/AtmosphereInit.java --name my-app --handler ai-chat --ai spring-ai --tools
```

Options: `--handler` (chat, ai-chat, mcp-server), `--ai` (builtin, spring-ai, langchain4j, adk), `--tools` (include `@AiTool` methods). See [generator/README.md](https://github.com/Atmosphere/atmosphere/tree/main/generator) for details.

## Sample Catalog

### Chat

| Sample | Description | Port |
|--------|-------------|------|
| `spring-boot-chat` | Real-time WebSocket chat with Spring Boot | 8080 |
| `quarkus-chat` | Real-time WebSocket chat with Quarkus | 8080 |
| `embedded-jetty-websocket-chat` | Embedded Jetty, no framework | 8080 |

### AI Streaming

| Sample | Description | Port |
|--------|-------------|------|
| `spring-boot-ai-chat` | Conversation memory, structured events, capability validation | 8080 |
| `spring-boot-ai-classroom` | Multiple clients share streaming AI responses | 8080 |
| `spring-boot-adk-chat` | Google ADK agent chat | 8080 |
| `spring-boot-langchain4j-chat` | LangChain4j streaming | 8081 |
| `spring-boot-embabel-chat` | Embabel agent chat | 8082 |
| `spring-boot-spring-ai-chat` | Spring AI ChatClient | 8083 |

### Tool Calling

| Sample | Description | Port |
|--------|-------------|------|
| `spring-boot-ai-tools` | Framework-agnostic `@AiTool` with cost metering | 8090 |
| `spring-boot-adk-tools` | Google ADK tool calling with caching | 8087 |
| `spring-boot-langchain4j-tools` | LangChain4j tools with PII redaction | 8086 |
| `spring-boot-spring-ai-routing` | Spring AI routing with content safety | 8088 |
| `spring-boot-embabel-horoscope` | Multi-step Embabel agent with progress tracking | 8089 |

### Infrastructure

| Sample | Description | Port |
|--------|-------------|------|
| `spring-boot-mcp-server` | MCP tools, resources, and prompts for AI agents | 8083 |
| `spring-boot-otel-chat` | OpenTelemetry tracing with Jaeger | 8084 |
| `spring-boot-durable-sessions` | Session persistence with SQLite | 8080 |
| `spring-boot-rag-chat` | RAG with Spring AI vector store | 8080 |

## Environment Variables

All AI samples accept the same environment variables:

| Variable | Description | Default |
|----------|-------------|---------|
| `LLM_API_KEY` | API key for your LLM provider | (none — runs in demo mode) |
| `LLM_MODEL` | Model name | `gemini-2.5-flash` |
| `LLM_BASE_URL` | Override API endpoint | (auto-detected from model name) |
| `LLM_MODE` | `remote` or `local` (Ollama) | `remote` |

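The "auto-detected from model name" default presumably keys off the model's name prefix. A sketch of that idea; the mappings below are assumptions for illustration, not the samples' actual detection logic:

```bash
# Guess a provider endpoint from the model name prefix (assumed mapping, not the real rules).
base_url_for() {
  case "$1" in
    gemini-*) echo 'https://generativelanguage.googleapis.com' ;;
    gpt-*)    echo 'https://api.openai.com' ;;
    claude-*) echo 'https://api.anthropic.com' ;;
    *)        echo 'http://localhost:11434' ;;   # e.g. fall back to a local Ollama
  esac
}

base_url_for gemini-2.5-flash
```

Setting `LLM_BASE_URL` explicitly bypasses any such detection.
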
Pass them via `--env`:

```bash
atmosphere run spring-boot-ai-chat \
  --env LLM_API_KEY=your-key \
  --env LLM_MODEL=gpt-4o
```

## Requirements

| Requirement | Version | How to install |
|-------------|---------|---------------|
| Java | 21+ | `brew install openjdk@21` or [SDKMAN](https://sdkman.io) |
| JBang | (optional) | `brew install jbang` — only for `atmosphere new` with full templates |
| fzf | (optional) | `brew install fzf` — for fuzzy-search sample picker |

## What's Next

Once you have a running sample, you're ready to understand the code. Start with [Chapter 1: Introduction](/docs/tutorial/01-introduction/) for the architecture, or jump straight to [Chapter 2: Getting Started](/docs/tutorial/02-getting-started/) to build your first app from scratch.