
Build a private AI automation pipeline with n8n and Ollama. Self-hosted workflows for RSS summarization, email processing, and smart home automation.
Every time you send a prompt to ChatGPT, you're paying per token and sharing your data with OpenAI. For home server enthusiasts who value privacy and want unlimited AI usage, there's a better way: run your own AI automation pipeline with n8n and Ollama.
This guide shows you how to build a completely self-hosted AI automation stack that costs nothing per inference, keeps your data local, and integrates with hundreds of services.

The community is moving toward local AI stacks. As one user on r/n8n put it:
"We've all hit that point: You build an incredible AI agent in n8n, but then you look at your OpenAI bill or worry about sending sensitive client data to the cloud. The 'pay-per-token' model is a tax on your curiosity and scale."

| Factor | Cloud AI (OpenAI/Claude) | Self-Hosted (Ollama + n8n) |
|---|---|---|
| Cost | $0.002-0.06 per 1K tokens | $0 after hardware |
| Privacy | Data sent to third party | Everything stays local |
| Rate Limits | API throttling | Unlimited |
| Uptime | Dependent on provider | You control |
| Latency | Network round-trip | Local inference |
| Model Choice | What provider offers | Any open-source model |
According to n8n's local LLM guide, local LLMs offer a cost-effective and secure alternative to cloud-based options. By running models on your own hardware, you can avoid recurring API costs and keep sensitive data within your own infrastructure.

The "Holy Trinity" of self-hosted AI automation consists of three components:
n8n is a fair-code workflow automation platform with 400+ integrations and native AI capabilities. Think Zapier, but self-hosted and far more powerful.
Ollama makes running open-source LLMs dead simple: one command to install, one command to run any model from its library, including Llama 3.2, Mistral, Phi-3, Gemma, and Qwen.
For RAG (Retrieval-Augmented Generation) workflows, add a vector database such as Qdrant, which the Docker Compose setup below includes.
The n8n Self-Hosted AI Starter Kit bundles everything you need:
```bash
# Clone the starter kit
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit

# Start everything with Docker Compose
docker-compose up -d
```
This gives you n8n, Ollama, Qdrant, and PostgreSQL, preconfigured to talk to each other over a shared Docker network.
For more control, set up each component individually:
```yaml
# docker-compose.yml
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=your-secure-password
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - GENERIC_TIMEZONE=America/Los_Angeles
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    # Remove this deploy block on CPU-only hosts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  n8n_data:
  ollama_data:
  qdrant_data:
```
```bash
# Start the stack
docker-compose up -d

# Pull a model to Ollama
docker exec -it ollama ollama pull llama3.2

# Verify Ollama is running
curl http://localhost:11434/api/tags
```
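Under the hood, n8n's Ollama nodes talk to this same REST API. If you ever need to call it directly, say from an n8n Code node or a cron script, a minimal sketch using only Python's standard library looks like this; the base URL and model name are whatever you configured above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # use http://ollama:11434 from inside Docker

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send one prompt to a local Ollama model and return the response text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance with llama3.2 pulled):
# print(generate("llama3.2", "Summarize: n8n is a workflow automation tool."))
```

The same endpoint is what n8n hits when you wire up the Ollama credential in the next step.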
According to Hostinger's integration guide:
"Since both the n8n instance and the Ollama instance are running as containers, the communication between them needs to happen through the Docker network. You can select http://ollama:11434 as the Ollama base URL."
In n8n, point the Ollama credential at http://ollama:11434 (Docker) or http://localhost:11434 (same machine), then select the model you pulled (e.g. llama3.2).

Automatically classify and summarize incoming emails using a proven n8n template:
The workflow: an IMAP trigger watches your inbox, a local Ollama model classifies and summarizes each message, and n8n labels or routes it based on the result. Because inference happens locally, sensitive correspondence never leaves your server.
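The classification step boils down to prompt construction plus a little defensive parsing of the model's reply. A sketch, with illustrative category names rather than ones from a specific n8n template:

```python
# Sketch of the LLM step in an email-triage workflow.
# The categories and prompt wording below are illustrative assumptions.
CATEGORIES = ["urgent", "newsletter", "receipt", "personal", "spam"]

def build_email_prompt(subject: str, body: str) -> str:
    """Ask the model to classify an email into exactly one known category."""
    return (
        f"Classify this email as one of: {', '.join(CATEGORIES)}.\n"
        "Reply with the category only.\n\n"
        f"Subject: {subject}\n\n{body[:2000]}"  # truncate long bodies to fit context
    )

def parse_category(model_reply: str) -> str:
    """Normalize the model's reply; fall back to 'personal' if unrecognized."""
    reply = model_reply.strip().lower().rstrip(".")
    return reply if reply in CATEGORIES else "personal"
```

The fallback matters: small local models occasionally answer in a sentence instead of a single word, and a workflow should degrade gracefully rather than error out.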
Turn your favorite RSS feeds into a daily AI-curated digest using the RSS + AI template:
The workflow: a Schedule trigger fires each morning, the RSS Read node fetches your feeds, Ollama summarizes the new items, and n8n sends you a single digest. You keep up with dozens of feeds without the endless scrolling.
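The feed-parsing and prompt-building halves of that workflow can be sketched with the standard library alone; in n8n, the RSS Read node replaces the parsing half. A minimal version:

```python
# Minimal RSS handling for a digest workflow, using only the standard library.
import xml.etree.ElementTree as ET

def extract_items(rss_xml: str, limit: int = 10) -> list[dict]:
    """Pull title/description pairs out of an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in list(root.iter("item"))[:limit]:
        items.append({
            "title": item.findtext("title", default=""),
            "summary": item.findtext("description", default=""),
        })
    return items

def build_digest_prompt(items: list[dict]) -> str:
    """Prompt asking a local model to condense today's headlines."""
    bullets = "\n".join(f"- {i['title']}: {i['summary']}" for i in items)
    return f"Summarize these articles into a short daily digest:\n{bullets}"
```

Feeding the resulting prompt to Ollama and mailing the reply is one HTTP Request node and one Send Email node away.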
Build a knowledge base assistant that answers questions from your own documents:
Workflow components: a document loader pulls files in, an embeddings model converts them to vectors, Qdrant stores those vectors, and a chat chain retrieves the most relevant chunks to ground each answer.
According to n8n's workflow templates:
"Documents from Google Drive are downloaded, processed into embeddings, and stored in the vector store for retrieval."
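Behind the vector store, retrieval is just nearest-neighbor search over embeddings. Qdrant does this efficiently at scale, but a toy cosine-similarity version makes the mechanics concrete (the vectors here are stand-ins, not real embedding-model output):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], store: list[tuple], k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are closest to the query.
    `store` is a list of (chunk_text, embedding) pairs."""
    ranked = sorted(store, key=lambda pair: cosine(query_vec, pair[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved chunks get pasted into the chat prompt, which is why the quality of the embedding model matters as much as the chat model.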
The r/homeassistant community discovered n8n's potential for smart home AI:
"After playing with N8N for a few days, I realized how cool it would be to use it with Assist in Home Assistant..."
Use cases include wiring n8n workflows into Home Assistant's Assist so you can control devices and query your home in natural language.
As noted on r/LocalLLaMA:
"A 7B parameter model runs reasonably on consumer CPUs. Larger models benefit from GPUs but don't strictly require them."
| Setup | Idle | Inference Load |
|---|---|---|
| Intel N100 (CPU-only) | 6-10W | 15-25W |
| Ryzen 5600G (iGPU) | 25-35W | 65-95W |
| RTX 3060 system | 45-60W | 180-220W |
| RTX 4090 system | 80-100W | 400-500W |
For 24/7 operation with occasional AI tasks, an N100-based system with a 7B model offers excellent efficiency.
From r/n8n:
"The secret weapon? Coolify. If you aren't using Coolify yet, it's basically a self-hosted Vercel/Heroku. Here is the 'Holy Trinity' stack I'm running: Ollama (on Coolify), n8n, and Supabase. The Cost: $0.00 per token."
One community member built AI LaunchKit with 50+ pre-configured tools:
"I got tired of manually setting up n8n + all the AI tools I need, so I packaged everything into one installer... n8n pre-configured with 50+ AI and automation tools that integrate seamlessly."
The Clara project combines everything into a unified interface:
"Imagine building your own workspace for AI — with local tools, agents, automations, and image generation... fully offline, fully modular."
n8n supports sophisticated AI agent workflows that go beyond simple prompts:
n8n's beginner guide and blog walk through practical agent examples: an LLM paired with tools and memory decides which steps to run, rather than following a fixed chain.
Problem: AI responses take 30+ seconds
Solutions: use a smaller or more heavily quantized model, enable GPU acceleration if available, and stop Ollama from juggling multiple models at once:

```bash
export OLLAMA_MAX_LOADED_MODELS=1
export OLLAMA_NUM_PARALLEL=1
```
Problem: Ollama crashes with OOM
Solutions: pull a smaller or 4-bit quantized variant of the model, shorten the context window, or add RAM (VRAM if you're on GPU).
Problem: n8n can't reach Ollama
Solutions: inside the Docker network, use http://ollama:11434; from a container that needs to reach a service on the host, use http://host.docker.internal:11434. Confirm both containers are attached to the same Docker network.

Problem: Complex workflows time out
Solutions: raise n8n's execution timeout (the EXECUTIONS_TIMEOUT environment variable) or break long chains into sub-workflows.
Running a self-hosted AI stack requires attention to security: keep n8n behind authentication (as in the Docker Compose above), never expose Ollama's port 11434 to the internet, and put a reverse proxy with HTTPS in front of anything you access remotely.
| Setup | Initial Cost | Monthly Power Cost |
|---|---|---|
| Intel N100 mini PC | $150-200 | $2-4 |
| Used workstation + GPU | $400-600 | $10-20 |
| Dedicated ML server | $1,000-2,000 | $20-40 |
At $0.002 per 1K tokens (the low end of the cloud pricing above):
| Monthly Tokens | Cloud Cost | Self-Hosted |
|---|---|---|
| 1M tokens | $2 | $0 |
| 10M tokens | $20 | $0 |
| 100M tokens | $200 | $0 |
| 1B tokens | $2,000 | $0 |
Break-even point: at 10M tokens per month (about $20 of cloud spend), a $200 mini PC pays for itself in roughly a year; at higher volumes, within a few months.
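The arithmetic behind that break-even claim is simple enough to sketch; the default price and power figures below are taken from this article's own estimates:

```python
def break_even_months(hardware_cost: float, monthly_tokens_m: float,
                      price_per_1k: float = 0.002,
                      monthly_power: float = 4.0) -> float:
    """Months until self-hosted hardware beats cloud API spend.

    Cloud cost is tokens * price; after purchase, self-hosting costs only power.
    """
    cloud_monthly = monthly_tokens_m * 1000 * price_per_1k  # 1M tokens = 1000 x 1K
    saved_per_month = cloud_monthly - monthly_power
    if saved_per_month <= 0:
        return float("inf")  # light usage never pays back the hardware
    return hardware_cost / saved_per_month

# A $200 N100 mini PC at 10M tokens/month:
# cloud = 10 * 1000 * 0.002 = $20/mo, savings = $16/mo -> 12.5 months
```

Note that power costs eat into the savings at low volumes, which is why very light users may genuinely be better off on a pay-per-token plan.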
Building a private AI automation pipeline with n8n and Ollama gives you zero marginal cost per request, full data privacy, no rate limits, and complete control over models and uptime.
The self-hosted AI movement is growing rapidly, with communities on r/LocalLLaMA, r/n8n, and r/selfhosted sharing configurations, templates, and troubleshooting tips daily.
Start with the n8n Self-Hosted AI Starter Kit, pull a 7B model, and build your first email summarizer. From there, the only limit is your imagination.
Already running Ollama for AI inference? Check out our guide on Self-Hosted AI with Ollama and Open WebUI for a complementary chat interface.
