## Integrating n8n Workflows with OCI Generative AI

### Prerequisites
1. **OCI Account and Permissions:**
   * Create or use an active OCI tenancy with Generative AI enabled.
   * Set up IAM policies: grant your user/group access to `generative-ai-family` in a sandbox compartment (e.g., `allow group <group-name> to manage generative-ai-family in compartment <compartment-name>`).
   * Gather your compartment OCID (from OCI Console > Identity & Security > Compartments).
   * Docs: [OCI Generative AI Getting Started](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm) and [IAM Policies](https://docs.oracle.com/en-us/iaas/Content/Identity/Concepts/policygetstarted.htm).
2. **[Set up OCI API key](https://docs.oracle.com/en-us/iaas/Content/generative-ai/setup-oci-api-auth.htm) authentication locally.**
3. n8n installed and running (self-hosted or cloud; version 1.0+ recommended for the AI nodes).
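After setting up API key authentication, your local `~/.oci/config` should look roughly like the sketch below; all values are placeholders, not real OCIDs, so substitute your own from the OCI Console:

```ini
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=<your-api-key-fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=<your-region, e.g. us-chicago-1>
```

The gateway in Step 1 reads this file to authenticate against OCI, which is why n8n itself never needs a real API key.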

## Step 1: Launch the OCI GenAI Gateway
The gateway runs a local server (port 8088) that mimics the OpenAI API, allowing n8n to interact with the models available on OCI Generative AI (for example, OpenAI, Meta Llama, and xAI Grok models).

### Option 1: Run with Uvicorn (Local Development)
From the repo root:

1. Navigate to `./app` and install dependencies: `pip install -r requirements.txt`.
2. Start the server:
   ```bash
   cd app
   uvicorn app:app --host 0.0.0.0 --port 8088 --reload
   ```
   For production (Linux only, using Gunicorn to manage workers):
   ```bash
   gunicorn app:app --workers 16 --worker-class uvicorn.workers.UvicornWorker --timeout 600 --bind 0.0.0.0:8088
   ```

### Option 2: Run with Podman (Containerized Deployment)

1. Install Podman:
   * Linux: `sudo apt install podman` (Ubuntu) or `sudo dnf install podman` (Fedora).
   * macOS: `brew install podman`.
   * Windows: follow the [Podman for Windows guide](https://podman.io/docs/installation#windows).
2. Ensure `~/.oci/config` is set up (see prerequisites).
3. Build and run:
   ```bash
   podman build -t oci_genai_gateway .
   podman run -p 8088:8088 \
     -v ~/.oci:/root/.oci:Z \
     -it --name oci_genai_gateway oci_genai_gateway
   ```
4. Verify: open `http://localhost:8088` in a browser (you should see a health check or the API docs). Check the logs with `podman logs oci_genai_gateway`.

**Gateway Endpoint:** The server exposes an OpenAI-compatible `/v1/chat/completions` endpoint at `http://localhost:8088`.

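To sanity-check the gateway outside of n8n, here is a minimal Python sketch (standard library only) that builds an OpenAI-compatible chat request for it. The URL and model name are assumptions taken from the setup above; adjust them to your deployment:

```python
import json
import urllib.request

GATEWAY_URL = "http://localhost:8088/v1/chat/completions"  # assumed local gateway

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for the gateway."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        # The gateway authenticates via your local OCI config, so a
        # placeholder bearer token is enough to satisfy OpenAI-client
        # conventions.
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer dummy"},
        method="POST",
    )

# To actually send the request, start the gateway from Step 1 first:
# with urllib.request.urlopen(
#         build_chat_request("meta.llama-3.3-70b-instruct", "Say hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this round-trip works from the command line, any failure inside n8n is a configuration issue in the node rather than in the gateway.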
## Step 2: Configure n8n to Use the OCI Gateway
In n8n, use the **OpenAI** node but point it at your local gateway as a custom endpoint. This lets you select OCI models seamlessly.

1. In n8n, create a new workflow.
2. Add an **OpenAI** node (under AI > Chat Models).
3. Configure credentials:
   * **API Key:** Leave blank or use a dummy value (the gateway authenticates via your OCI config).
   * **Base URL:** `http://host.docker.internal:8088/v1` (when n8n itself runs in a Podman/Docker container; use `http://localhost:8088/v1` if n8n runs natively).
   * **Model:** Select or enter an OCI model (list available models via the gateway docs or the OCI Console).

n8n will now route requests through the gateway to OCI GenAI.

## Step 3: Simple n8n Workflow Example – AI-Powered Text Summarization
This example creates a workflow that triggers on a webhook (e.g., incoming email or form data), summarizes the text using an OCI LLM, and sends the result via email. It demonstrates calling the LLM with minimal code.

### Workflow Overview
* **Trigger:** Webhook (receives the input text).
* **AI Node:** OpenAI (calls OCI via the gateway for summarization).
* **Output:** Email node (sends the summary).

### Step-by-Step Setup in n8n

1. **Add Webhook Trigger:**
   * Drag in a **Webhook** node.
   * Set Method: POST.
   * Path: `/summarize` (test URL: `http://your-n8n-instance/webhook/summarize`).
   * This receives a JSON payload like `{"text": "Long article content here..."}`.
2. **Add OpenAI Node for Summarization:**
   * Connect it after the Webhook node.
   * Operation: Chat.
   * Credentials: Use the custom setup from Step 2 (Base URL: `http://localhost:8088/v1`).
   * Model: `meta.llama-3.3-70b-instruct` (or your preferred OCI model).
   * Messages:
     * Role: System – Prompt: `You are a helpful summarizer. Provide a concise 3-sentence summary.`
     * Role: User – Prompt: `{{ $json.text }}` (references the webhook input).
   * Options: Temperature: 0.3 (for consistent output); Max Tokens: 200.
   **Effective API Call (Behind the Scenes):**
   The node sends a request like this to `POST http://localhost:8088/v1/chat/completions`:
   ```json
   {
     "model": "meta.llama-3.3-70b-instruct",
     "messages": [
       {"role": "system", "content": "You are a helpful summarizer. Provide a concise 3-sentence summary."},
       {"role": "user", "content": "{{input_text}}"}
     ],
     "temperature": 0.3,
     "max_tokens": 200
   }
   ```
   The gateway forwards this to OCI GenAI and returns a response like:
   ```json
   {
     "choices": [
       {
         "message": {
           "role": "assistant",
           "content": "Summary sentence 1. Sentence 2. Sentence 3."
         }
       }
     ]
   }
   ```
3. **Add Email Output Node:**
   * Connect it after the OpenAI node.
   * Node: Send Email (or Gmail/SMTP).
   * To: `recipient@example.com`.
   * Subject: `AI Summary of Input`.
   * Body: `{{ $json.choices[0].message.content }}` (extracts the summary from the AI response).
4. **Activate and Test:**
   * Save and activate the workflow.
   * Send a POST request to the webhook (e.g., via curl):
     ```bash
     curl -X POST http://your-n8n-instance/webhook/summarize \
       -H "Content-Type: application/json" \
       -d '{"text": "Oracle OCI GenAI enables powerful automations. n8n connects apps seamlessly. Together, they boost productivity."}'
     ```
   * Check n8n executions: the workflow should summarize the text and email it.
   * View the logs in n8n for details (e.g., AI response parsing).

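The `{{ $json.choices[0].message.content }}` expression used in the Email node performs the same extraction as this small Python helper, shown here against a sample response in the shape the gateway returns (the summary text is illustrative):

```python
def extract_summary(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style chat response,
    mirroring the n8n expression {{ $json.choices[0].message.content }}."""
    return response["choices"][0]["message"]["content"]

sample_response = {  # shape matches the gateway response shown earlier
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Summary sentence 1. Sentence 2. Sentence 3."}}
    ]
}

print(extract_summary(sample_response))
# → Summary sentence 1. Sentence 2. Sentence 3.
```

If the Email node sends an empty body, inspect the OpenAI node's raw output in the execution log: the path to the content must match this structure exactly.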
## Troubleshooting & Tips
* **Authentication Errors:** Verify `~/.oci/config` permissions and OCI policies (see prerequisites). Test with `oci` CLI commands such as `oci os ns get`.
* **Connection Issues:** Ensure port 8088 is open (firewall/OCI security lists). For Podman, use `--network=host` if needed.
* **Model Not Found:** List OCI models in the Console under **Analytics & AI > Generative AI**. Ensure your tenancy has the required permissions for `generative-ai-family`.
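For connection issues, a quick way to confirm that something is listening on the gateway port, without involving n8n at all, is a plain TCP check. The host and port below match the defaults used throughout this guide:

```python
import socket

def gateway_reachable(host: str = "localhost", port: int = 8088,
                      timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the gateway port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# gateway_reachable() should return True once the Step 1 server is up.
```

If this returns `False` while the gateway process is running, the problem is networking (firewall, security list, or container port mapping) rather than the gateway or n8n configuration.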