| layout | default |
|---|---|
| title | n8n AI Tutorial - Chapter 2: AI Nodes |
| nav_order | 2 |
| has_children | false |
| parent | n8n AI Tutorial |
Welcome to Chapter 2: AI Nodes and LLM Integration. In this part of n8n AI Tutorial: Workflow Automation with AI, you will build an intuitive mental model first, then move into concrete implementation details and practical production trade-offs. By the end you will be able to configure and use different AI providers, manage credentials, and build multi-model workflows.
n8n provides dedicated nodes for various AI providers, each with specific capabilities and configuration options.
OpenAI chat completion node:

```json
{
  "parameters": {
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "=You are a helpful assistant specialized in {{ $json.domain }}."
      },
      {
        "role": "user",
        "content": "={{ $json.question }}"
      }
    ],
    "temperature": 0.7,
    "maxTokens": 1000,
    "topP": 0.9,
    "frequencyPenalty": 0.0,
    "presencePenalty": 0.0,
    "responseFormat": "text"
  },
  "name": "OpenAI Chat",
  "type": "@n8n/n8n-nodes-langchain.openAi",
  "credentials": {
    "openAiApi": "openai-api"
  }
}
```

OpenAI embeddings node:

```json
{
  "parameters": {
    "model": "text-embedding-ada-002",
    "input": "={{ $json.texts }}",
    "encodingFormat": "float"
  },
  "name": "OpenAI Embeddings",
  "type": "@n8n/n8n-nodes-langchain.openAi",
  "credentials": {
    "openAiApi": "openai-api"
  }
}
```

OpenAI function-calling node:

```json
{
  "parameters": {
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "=Check the weather in {{ $json.location }}"
      }
    ],
    "functions": [
      {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City and country"
            }
          },
          "required": ["location"]
        }
      }
    ]
  },
  "name": "OpenAI Functions",
  "type": "@n8n/n8n-nodes-langchain.openAi"
}
```

Anthropic (Claude) chat node:

```json
{
  "parameters": {
    "model": "claude-3-sonnet-20240229",
    "prompt": "=Human: {{ $json.question }}\n\nAssistant:",
    "maxTokensToSample": 1000,
    "temperature": 0.7,
    "topP": 0.9,
    "topK": 250
  },
  "name": "Claude Chat",
  "type": "@n8n/n8n-nodes-langchain.anthropic",
  "credentials": {
    "anthropicApi": "anthropic-api"
  }
}
```

Ollama node for locally hosted models:

```json
{
  "parameters": {
    "baseUrl": "http://localhost:11434",
    "model": "llama2:13b",
    "prompt": "={{ $json.prompt }}",
    "options": {
      "temperature": 0.7,
      "top_p": 0.9,
      "num_predict": 500,
      "stop": ["Human:", "\n\n"]
    }
  },
  "name": "Ollama Chat",
  "type": "@n8n/n8n-nodes-langchain.ollama",
  "typeVersion": 1
}
```

Installing and running Ollama locally:

```shell
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull models
ollama pull llama2
ollama pull codellama
ollama pull mistral

# Start Ollama service
ollama serve
```

Hugging Face inference node:

```json
{
  "parameters": {
    "model": "gpt2",
    "inputs": "={{ $json.text }}",
    "parameters": {
      "max_new_tokens": 100,
      "temperature": 0.7,
      "do_sample": true,
      "pad_token_id": 50256
    },
    "options": {
      "use_gpu": false,
      "device": "cpu"
    }
  },
  "name": "HuggingFace Text Gen",
  "type": "@n8n/n8n-nodes-langchain.huggingFaceInference",
  "credentials": {
    "huggingFaceApi": "huggingface-api"
  }
}
```

AI agent node with tool definitions:

```json
{
  "parameters": {
    "model": "gpt-4o",
    "prompt": "=You are a helpful AI assistant. Use the available tools to answer questions.\n\nAvailable tools:\n{{ $json.tools }}\n\nQuestion: {{ $json.question }}",
    "maxIterations": 5,
    "returnIntermediateSteps": false,
    "memory": "buffer",
    "tools": [
      {
        "name": "web_search",
        "description": "Search the web for information",
        "parameters": {
          "query": "string"
        }
      },
      {
        "name": "calculator",
        "description": "Perform mathematical calculations",
        "parameters": {
          "expression": "string"
        }
      }
    ]
  },
  "name": "AI Agent",
  "type": "@n8n/n8n-nodes-langchain.agent",
  "credentials": {
    "openAiApi": "openai-api"
  }
}
```

Pinecone vector upsert node:

```json
{
  "parameters": {
    "operation": "upsert",
    "pineconeIndex": "my-index",
    "items": [
      {
        "id": "={{ $json.id }}",
        "values": "={{ $json.embedding }}",
        "metadata": {
          "text": "={{ $json.text }}",
          "source": "={{ $json.source }}"
        }
      }
    ]
  },
  "name": "Pinecone Upsert",
  "type": "@n8n/n8n-nodes-langchain.pinecone",
  "credentials": {
    "pineconeApi": "pinecone-api"
  }
}
```

Pinecone similarity search node:

```json
{
  "parameters": {
    "operation": "getMany",
    "pineconeIndex": "my-index",
    "query": "={{ $json.embedding }}",
    "numberOfResults": 5,
    "includeValues": false,
    "includeMetadata": true
  },
  "name": "Pinecone Search",
  "type": "@n8n/n8n-nodes-langchain.pinecone",
  "credentials": {
    "pineconeApi": "pinecone-api"
  }
}
```

To set up credentials:

- Go to Settings → Credentials in n8n UI
- Click "Add Credential"
- Select credential type (OpenAI, Anthropic, etc.)
- Enter API keys and other required information
- Test connection
- Save credential
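Wherever a key must be referenced outside the credential store (for example, in a Code node that calls a provider directly), read it from the environment rather than pasting it into workflow JSON. A minimal sketch; the variable name `OPENAI_API_KEY` is a common convention, not something n8n requires:

```javascript
// Resolve an API key from the environment instead of hardcoding it.
// Throwing early makes a missing key fail loudly at startup,
// not halfway through a workflow run.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: export OPENAI_API_KEY=... in the shell (or container env),
// then read it at runtime:
// const apiKey = requireEnv("OPENAI_API_KEY");
```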
Example credential structures (key values shown are placeholders):

OpenAI:

```json
{
  "name": "OpenAI API",
  "type": "openAiApi",
  "data": {
    "apiKey": "sk-..."
  }
}
```

Anthropic:

```json
{
  "name": "Anthropic API",
  "type": "anthropicApi",
  "data": {
    "apiKey": "sk-ant-..."
  }
}
```

Pinecone:

```json
{
  "name": "Pinecone API",
  "type": "pineconeApi",
  "data": {
    "apiKey": "...",
    "environment": "us-east-1-aws"
  }
}
```

A multi-model fallback workflow: the primary GPT-4 node runs with `continueOnFail` enabled, an IF node checks whether it errored, and the false branch routes to Claude:

```json
{
  "nodes": [
    {
      "parameters": {
        "model": "gpt-4o",
        "messages": [{ "role": "user", "content": "={{ $json.question }}" }]
      },
      "name": "Primary AI (GPT-4)",
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "continueOnFail": true
    },
    {
      "parameters": {
        "model": "claude-3-sonnet-20240229",
        "prompt": "=Human: {{ $json.question }}\n\nAssistant:"
      },
      "name": "Fallback AI (Claude)",
      "type": "@n8n/n8n-nodes-langchain.anthropic"
    },
    {
      "parameters": {
        "conditions": {
          "string": [
            {
              "value1": "={{ $node['Primary AI (GPT-4)'].error }}",
              "operation": "isEmpty"
            }
          ]
        }
      },
      "name": "Check Primary Success",
      "type": "n8n-nodes-base.if"
    }
  ],
  "connections": {
    "Primary AI (GPT-4)": {
      "main": [
        [
          {
            "node": "Check Primary Success",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Check Primary Success": {
      "main": [
        [],
        [
          {
            "node": "Fallback AI (Claude)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}
```

Selecting a model dynamically based on input complexity:

```json
{
  "nodes": [
    {
      "parameters": {
        "values": {
          "string": [
            {
              "name": "model_choice",
              "value": "={{ $json.complexity === 'high' ? 'gpt-4o' : 'gpt-3.5-turbo' }}"
            }
          ]
        }
      },
      "name": "Model Selector",
      "type": "n8n-nodes-base.set"
    },
    {
      "parameters": {
        "model": "={{ $json.model_choice }}",
        "messages": [{ "role": "user", "content": "={{ $json.question }}" }]
      },
      "name": "Dynamic AI",
      "type": "@n8n/n8n-nodes-langchain.openAi"
    }
  ]
}
```

Custom AI processing in a Code node (the `$node.openAi.default.sendMessage` helper is illustrative; in practice you would call the provider's API directly or wire the AI node in the canvas):

```javascript
// Custom AI processing (illustrative Code-node sketch)
const response = await $node.openAi.default.sendMessage({
  model: 'gpt-4o',
  messages: [
    {
      role: 'system',
      content: 'You are a data analyst. Provide insights in JSON format.'
    },
    {
      role: 'user',
      content: $input.item.json.data
    }
  ]
});

// Parse and structure the response
const insights = JSON.parse(response.choices[0].message.content);

return [{
  json: {
    insights,
    timestamp: new Date().toISOString(),
    model: response.model
  }
}];
```

Calling a custom AI API through the HTTP Request node:

```json
{
  "parameters": {
    "method": "POST",
    "url": "https://api.custom-ai.com/generate",
    "sendBody": true,
    "bodyContentType": "json",
    "bodyParameters": {
      "parameters": [
        {
          "name": "prompt",
          "value": "={{ $json.prompt }}"
        },
        {
          "name": "model",
          "value": "custom-model-v1"
        }
      ]
    },
    "options": {}
  },
  "name": "Custom AI API",
  "type": "n8n-nodes-base.httpRequest"
}
```

Throttling request volume with Split In Batches:

```json
{
  "parameters": {
    "mode": "queue",
    "batchSize": 1,
    "concurrency": 1,
    "options": {
      "reset": false
    }
  },
  "name": "Rate Limiter",
  "type": "n8n-nodes-base.splitInBatches"
}
```

Tracking usage and estimating cost in a Code node:

```javascript
// Track API cost and latency (illustrative Code-node sketch)
const startTime = new Date();

const response = await $node.openAi.default.sendMessage({
  model: 'gpt-4o',
  messages: $input.item.json.messages
});

const endTime = new Date();
const duration = endTime - startTime;

// Estimate cost (rough calculation; check current provider pricing)
const inputTokens = response.usage.prompt_tokens;
const outputTokens = response.usage.completion_tokens;
const estimatedCost = inputTokens * 0.00003 + outputTokens * 0.00006;

return [{
  json: {
    response: response.choices[0].message.content,
    usage: response.usage,
    estimated_cost: estimatedCost,
    processing_time: duration,
    model: response.model
  }
}];
```

Retrying failed calls:

```json
{
  "parameters": {
    "mode": "retry",
    "retryCount": 3,
    "retryInterval": 1000,
    "continueOnFail": true
  },
  "name": "Retry on Error",
  "type": "n8n-nodes-base.errorTrigger"
}
```

Catching errors and degrading to a smaller model:

```json
{
  "nodes": [
    {
      "parameters": {
        "errorsToCatch": "all",
        "resume": "withDifferentBranch"
      },
      "name": "Catch Errors",
      "type": "n8n-nodes-base.errorTrigger"
    },
    {
      "parameters": {
        "model": "gpt-3.5-turbo",
        "messages": [
          {
            "role": "user",
            "content": "=Simplify this request for a smaller model: {{ $json.original_question }}"
          }
        ]
      },
      "name": "Fallback Model",
      "type": "@n8n/n8n-nodes-langchain.openAi"
    }
  ]
}
```

Batching requests to reduce per-call overhead:

```json
{
  "parameters": {
    "batchSize": 10,
    "options": {
      "merge": false
    }
  },
  "name": "Batch AI Requests",
  "type": "n8n-nodes-base.splitInBatches"
}
```

Caching responses to avoid repeated calls for the same question:

```json
{
  "parameters": {
    "dataToSave": {
      "question": "={{ $json.question }}",
      "answer": "={{ $json.answer }}",
      "timestamp": "={{ new Date() }}"
    },
    "keys": {
      "question": "={{ $json.question }}"
    },
    "ttl": 86400
  },
  "name": "Cache Responses",
  "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow"
}
```

Best practices:

- Credential Security: Store API keys securely, never in workflow JSON
- Error Handling: Implement comprehensive error handling and fallbacks
- Rate Limiting: Respect API limits and implement queuing
- Cost Monitoring: Track usage and set budget alerts
- Model Selection: Choose appropriate models based on task complexity
- Caching: Cache frequent queries to reduce API calls
- Testing: Thoroughly test workflows before production deployment
- Documentation: Document complex workflows and custom logic
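The retry and fallback practices above can be sketched outside n8n as a plain wrapper; `callPrimary` and `callFallback` are hypothetical stand-ins for real provider calls:

```javascript
// Retry a primary model call with exponential backoff; if every
// attempt fails, fall back to a secondary (e.g. cheaper) model.
// callPrimary/callFallback are placeholders for actual API calls.
async function withRetryAndFallback(callPrimary, callFallback, maxRetries = 3, baseDelayMs = 100) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return { source: "primary", result: await callPrimary() };
    } catch (err) {
      if (attempt < maxRetries - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  return { source: "fallback", result: await callFallback() };
}
```

Tagging the result with its `source` makes it easy to log how often the fallback path fires, which is a useful health signal for the primary provider.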
These AI nodes provide powerful capabilities for building intelligent automations. The next chapter will explore document processing with AI.
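The cost-tracking pattern shown earlier can be factored into a small reusable helper. The per-token prices below are illustrative assumptions, not current provider rates:

```javascript
// Estimate request cost from token usage. Prices are illustrative
// per-token rates; always check your provider's current pricing.
const PRICES = {
  "gpt-4o": { input: 0.00003, output: 0.00006 },
  "gpt-3.5-turbo": { input: 0.0000015, output: 0.000002 },
};

function estimateCost(model, promptTokens, completionTokens) {
  const p = PRICES[model];
  if (!p) {
    throw new Error(`No pricing configured for model: ${model}`);
  }
  return promptTokens * p.input + completionTokens * p.output;
}
```

Keeping the rate table in one place means a price change is a one-line edit instead of a hunt through every Code node that computes cost.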
Most teams struggle here because the hard part is not writing more code, but drawing clear boundaries around a node's `name`, its `json` data, and its `parameters` so behavior stays predictable as complexity grows.
In practical terms, this chapter helps you avoid three common failures:
- coupling core logic too tightly to one implementation path
- missing the handoff boundaries between setup, execution, and validation
- shipping changes without clear rollback or observability strategy
After working through this chapter, you should be able to reason about Chapter 2: AI Nodes and LLM Integration as an operating subsystem inside n8n AI Tutorial: Workflow Automation with AI, with explicit contracts for inputs, state transitions, and outputs.
Use the implementation notes around `nodes`, `model`, and `langchain` as your checklist when adapting these patterns to your own repository.
Under the hood, Chapter 2: AI Nodes and LLM Integration usually follows a repeatable control path:
- Context bootstrap: initialize runtime config and prerequisites for `name`.
- Input normalization: shape incoming data so `json` receives stable contracts.
- Core execution: run the main logic branch and propagate intermediate state through `parameters`.
- Policy and safety checks: enforce limits, auth scopes, and failure boundaries.
- Output composition: return canonical result payloads for downstream consumers.
- Operational telemetry: emit logs/metrics needed for debugging and performance tuning.
When debugging, walk this sequence in order and confirm each stage has explicit success/failure conditions.
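That control path can be made concrete as a tiny stage runner: each stage reports success or failure explicitly, so a debugging walk maps one-to-one onto the list above. The stage names and checks here are illustrative:

```javascript
// Run named stages in order; stop at the first failure and report
// which stage failed, mirroring the control path above.
function runControlPath(stages, input) {
  let state = input;
  for (const { name, run } of stages) {
    try {
      state = run(state);
    } catch (err) {
      return { ok: false, failedStage: name, error: String(err) };
    }
  }
  return { ok: true, result: state };
}

// Illustrative stages for a single AI node call.
const stages = [
  { name: "context-bootstrap", run: (s) => ({ ...s, config: { model: "gpt-4o" } }) },
  { name: "input-normalization", run: (s) => {
      if (typeof s.question !== "string") throw new Error("question must be a string");
      return { ...s, question: s.question.trim() };
    } },
  { name: "policy-checks", run: (s) => {
      if (s.question.length > 4000) throw new Error("prompt too long");
      return s;
    } },
  { name: "output-composition", run: (s) => ({ model: s.config.model, prompt: s.question }) },
];
```

Because a failure names its stage, "which boundary broke" is answered directly by the return value instead of by stepping through logs.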
Use the following upstream sources to verify implementation details while reading this chapter:
- View Repo (github.com) — authoritative upstream reference for the implementation details discussed here.
- Awesome Code Docs (github.com) — authoritative reference for the broader documentation series.
Suggested trace strategy:
- search upstream code for `name` and `json` to map concrete implementation paths
- compare docs claims against actual runtime/config code before reusing patterns in production