
1. Install API Key Management Software (HashiCorp Vault)

Step 1:

# First-time setup: run the install script
cd scripts
chmod +x install_vault.sh
./install_vault.sh /mnt/agent-framework/<your user path>/myapp

# If Vault is already installed, start the service directly instead:
vault server -config=/mnt/agent-framework/<your user path>/myapp/vault/config/vault.hcl > /mnt/agent-framework/<your user path>/myapp/vault/vault.log 2>&1 &

# The service listens at http://127.0.0.1:8200 by default.
# VSCode's remote port forwarding maps the port automatically,
# so just click the VSCode popup window to open http://127.0.0.1:8200 in the browser.

Step 2: Set the number of login verification keys to 1

Set Key shares to 1 and Key threshold to 1, then click Initialize.

Step 3:

You will see two keys: the Initial root token and a login verification key, key1. Make sure to record them! You can also download them by clicking Download Keys, which saves them locally as a JSON file.

It is recommended to put the Initial root token in the .env file at the project root:

VAULT_ADDR='http://127.0.0.1:8200'
VAULT_TOKEN=<Initial root token>
SECRET_ENGINE_PATH='cubbyhole/env'
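
As a quick sanity check, the .env contents above can be verified with a minimal sketch that uses only the Python standard library (the project itself may load the file differently, e.g. with python-dotenv):

```python
# Sketch: check that .env contains the three variables the project expects.
from pathlib import Path

REQUIRED = {"VAULT_ADDR", "VAULT_TOKEN", "SECRET_ENGINE_PATH"}

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, stripping optional surrounding quotes."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env

if __name__ == "__main__":
    path = Path(".env")
    if path.exists():
        missing = REQUIRED - parse_env(path.read_text()).keys()
        print("missing:", ", ".join(sorted(missing)) or "none")
    else:
        print(".env not found")
```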


Then click Continue to Unseal.

Step 4:

Enter key1 as the unseal key.

Step 5:

Sign in by entering the Initial root token.

Step 6:

After logging in, you will see a secrets engine named cubbyhole/. Click View.

Step 7:

Click Create secret and set the path to env; this corresponds to SECRET_ENGINE_PATH='cubbyhole/env' in the .env file.

Step 8:

Enter the key-value pairs and click Save to finish the configuration.

The required keys are:

OPENROUTER_API_BASE=...
OPENROUTER_API_KEY=...
OPENAI_API_BASE=...
OPENAI_API_KEY=...
NEWAPI_API_BASE=...
NEWAPI_API_KEY=...

Step 9: Verify the configuration

export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='your Initial root token'
vault kv get -field=OPENROUTER_API_KEY cubbyhole/env

# Expected output — your OPENROUTER_API_KEY value:
abcabc...
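
The same check can also be made from Python over Vault's HTTP API; a sketch using only the standard library (the secret path follows the configuration above):

```python
# Sketch: read the cubbyhole/env secret over Vault's HTTP API.
import json
import os
import urllib.request

def build_request(vault_addr: str, token: str, path: str = "cubbyhole/env"):
    """Vault expects the token in the X-Vault-Token request header."""
    return urllib.request.Request(
        f"{vault_addr}/v1/{path}",
        headers={"X-Vault-Token": token},
    )

def extract_secret(response_body: str, key: str) -> str:
    """Cubbyhole responses wrap the key-value pairs under a top-level 'data'."""
    return json.loads(response_body)["data"][key]

if __name__ == "__main__":
    addr, token = os.environ.get("VAULT_ADDR"), os.environ.get("VAULT_TOKEN")
    if addr and token:
        try:
            with urllib.request.urlopen(build_request(addr, token)) as resp:
                print(extract_secret(resp.read().decode(), "OPENROUTER_API_KEY"))
        except OSError as exc:
            print("request failed:", exc)
    else:
        print("Set VAULT_ADDR and VAULT_TOKEN first.")
```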

2. Install opencode

If you have already installed opencode, just make sure the opencode.json file is correct:

vim /mnt/agent-framework/<your user path>/myapp/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "bash": "allow",
    "edit": "allow",
    "write": "allow"
  },
  "provider": {
    "openrouter": {
      "options": {
        "baseURL": "${OPENROUTER_API_BASE}",
        "apiKey": "${OPENROUTER_API_KEY}"
      },
      "models": {
        "anthropic/claude-opus-4.6": {}
      }
    },

    "newapi": {
      "options": {
        "baseURL": "${NEWAPI_API_BASE}",
        "apiKey": "${NEWAPI_API_KEY}"
      },
      "models": {
        "claude-opus-4-6": {}
      }
    },

    "openai": {
      "options": {
        "baseURL": "${OPENAI_API_BASE}",
        "apiKey": "${OPENAI_API_KEY}"
      },
      "models": {
        "gpt-5.4": {},
        "gpt-5.4-pro": {}
      }
    }
  },
  "model": "openrouter/anthropic/claude-opus-4.6"
}
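
Before running opencode, it can help to confirm that every ${VAR} placeholder in opencode.json has a matching environment variable. A minimal sketch (the file path is an assumption based on the example above; adjust it to your install):

```python
# Sketch: find ${VAR} placeholders in opencode.json and report any unset ones.
import json
import os
import re

PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def unset_placeholders(config_text: str, env) -> set:
    """Names referenced as ${NAME} in the config but missing from env."""
    return {name for name in PLACEHOLDER.findall(config_text) if name not in env}

if __name__ == "__main__":
    path = "opencode/opencode.json"  # assumed location; adjust as needed
    if os.path.exists(path):
        text = open(path).read()
        json.loads(text)  # raises if the file is not valid JSON
        print("unset:", unset_placeholders(text, os.environ) or "none")
    else:
        print(path, "not found")
```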

Step 1:

cd scripts
chmod +x install_opencode.sh 
./install_opencode.sh /mnt/agent-framework/<your user path>/myapp

Step 2: Verify the configuration

opencode run 'hello world'

# If you see output similar to the following, the setup is successful:

> build · anthropic/claude-opus-4.6

Hello! How can I help you today? If you need assistance with a software engineering task, feel free to describe what you're working on.

3. Set Up Python Environment

Step 1:

conda create -n agentos python=3.11
conda activate agentos
pip install -r requirements.txt

Step 2:

Ensure your .env file contains the following:

VAULT_ADDR='http://127.0.0.1:8200'
VAULT_TOKEN=abcabcabc
SECRET_ENGINE_PATH='cubbyhole/env'

Step 3: Run a quick test

# Download HLE dataset
cd datasets
git clone https://huggingface.co/datasets/cais/hle
cd ..

# Run example
python examples/run_hle.py

Parameter Reference

| Parameter | Type | Default | Description |
|---|---|---|---|
| `--config` | str | `configs/bus.py` | Path to the configuration file |
| `--model-name` | str | `openrouter/gemini-3.1-pro-preview` | Model name used for direct inference mode |
| `--use-bus` | flag | False | Enable the AgentBus full agent pipeline (default: direct LLM inference) |
| `--max-concurrency` | int | 4 | Maximum number of concurrent tasks |
| `--max-rounds` | int | 10 | Maximum planning rounds per task in Bus mode |
| `--start` | int | None | Start index (inclusive) of the HLE dataset subset |
| `--end` | int | None | End index (exclusive) of the HLE dataset subset |
| `--cfg-options` | list | - | Override config entries in key=value format |
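
The `--cfg-options` key=value override behavior can be sketched as follows (hypothetical: the actual parser in run_hle.py may differ, e.g. in how it handles types; values stay strings in this sketch):

```python
# Sketch of key=value config overrides; nested keys use dots (e.g. model.name=x).
def apply_overrides(config: dict, options: list) -> dict:
    for opt in options:
        key, _, value = opt.partition("=")
        node = config
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value  # note: values are kept as strings here
    return config

cfg = {"model": {"name": "openrouter/gemini-3.1-pro-preview"}, "max_rounds": 10}
apply_overrides(cfg, ["model.name=openai/gpt-5.4", "max_rounds=5"])
print(cfg["model"]["name"])  # openai/gpt-5.4
```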

4. Miscellaneous (Optional)

# 1. Test model API calls
curl -X POST "https://xxx/v1/responses" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer xxx" \
  -d '{
    "model": "gpt-5.4-pro",    
    "input": "hello",
    "max_output_tokens": 2048
  }'

curl -X POST "https://xxx/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer xxx" \
  -d '{
  "model": "openai/gpt-5.4",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 2048
}'

# 2. Install gcloud
curl https://sdk.cloud.google.com | bash
gcloud init
gcloud auth application-default login
gcloud config set project YOUR_PROJECT_ID # Get your project ID from https://console.cloud.google.com/
gcloud auth application-default set-quota-project YOUR_PROJECT_ID

# 3. Install Playwright
pip install playwright
playwright install

# 4. Install browser-use
pip install browser-use
browser-use install
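
The /v1/chat/completions curl test above can also be run from Python; a minimal sketch using only the standard library (the xxx endpoint and key placeholders are kept as in the curl example and must be replaced):

```python
# Sketch: POST to an OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

def chat_payload(model: str, content: str, temperature: float = 0.7,
                 max_tokens: int = 2048) -> dict:
    """Build the same JSON body as the curl example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    api_base = "https://xxx"  # replace with your real API base
    api_key = "xxx"           # replace with your real key
    if api_base != "https://xxx":
        req = urllib.request.Request(
            f"{api_base}/v1/chat/completions",
            data=json.dumps(chat_payload("openai/gpt-5.4", "Hello")).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {api_key}"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["choices"][0]["message"]["content"])
```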