
Put your idle RAM to work: add Redis caching for 6x faster Nextcloud page loads and configure tmpfs RAM drives to eliminate Jellyfin transcode SSD writes. Step-by-step Docker setup included.
If you're running an Intel N100 home server with 16GB of RAM, here's a fact worth thinking about: your Nextcloud, Docker stack, and media server together typically consume only 4–6GB of that RAM. The remaining 8–10GB sits completely idle, paid for but doing nothing.
Redis caching and tmpfs RAM drives are two complementary techniques that put that idle memory to work. Redis acts as a high-speed cache in front of your databases and web apps, dramatically cutting redundant disk reads. tmpfs RAM drives give containers a scratch area that performs at memory speeds, protecting your SSD from excessive write wear in the process.
The result in practice: Nextcloud page loads that drop from 800ms to under 150ms, Jellyfin transcodes that start in a fraction of the time, and SSD daily writes reduced by 95%+ for transcode workloads. This guide walks through both techniques with complete, working configurations you can drop into an existing Docker Compose stack today.

Redis is an in-memory key-value store. When a web application needs data, such as a user's session, a file listing, or a calendar entry, it normally reads from a relational database on disk. With Redis sitting in front of that database, the first request hits disk and the result is cached in RAM. Every subsequent request for the same data is served from memory in microseconds, never touching the database or SSD again.
On a home server, the most common use cases are caching web-app objects such as file metadata and sessions, and handling transactional file locking for apps like Nextcloud, keeping both off the database entirely.
On an N100 system, Redis itself is lightweight: it uses 20–50MB of RAM at steady state and has negligible CPU overhead. The 512MB you might allocate to its cache returns many times that value in reduced I/O latency.
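To make that read path concrete, here is a minimal sketch of the cache-aside pattern Redis implements. A plain Python dict stands in for the Redis server so the example runs anywhere; the function names and the simulated latency are illustrative, not part of any real app:

```python
import time

# Stand-in for Redis: a plain dict. In production this would be a
# Redis client (GET / SETEX); the dict keeps the sketch self-contained.
cache = {}

def slow_db_query(user_id):
    """Simulates a relational-database read that hits the SSD."""
    time.sleep(0.05)  # pretend disk latency
    return {"user_id": user_id, "files": ["a.txt", "b.txt"]}

def get_file_listing(user_id, ttl=300):
    """Cache-aside: check the cache first, fall back to the database."""
    key = f"files:{user_id}"
    entry = cache.get(key)
    if entry and entry["expires"] > time.time():
        return entry["value"]            # cache hit: memory speed, no disk
    value = slow_db_query(user_id)       # cache miss: one disk read
    cache[key] = {"value": value, "expires": time.time() + ttl}
    return value

get_file_listing(42)   # first call misses and populates the cache
get_file_listing(42)   # second call is served from memory
```

Every app that "supports Redis" is doing some variation of this check-then-fill dance; the configuration in the rest of this guide just points the app at the shared cache.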
If you're running Nextcloud on Docker Compose, see the Nextcloud Docker Compose Setup Guide for the full stack context before adding Redis.

The cleanest way to run Redis on a home server is as a Docker container alongside your existing services. Add the following to your existing docker-compose.yml:
services:
redis:
image: redis:7-alpine
container_name: redis
restart: unless-stopped
command: redis-server --maxmemory 512mb --maxmemory-policy allkeys-lru
volumes:
- redis_data:/data
networks:
- homelab
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 10s
retries: 3
volumes:
redis_data:
The --maxmemory 512mb flag sets a hard ceiling on how much RAM Redis can consume. The allkeys-lru eviction policy tells Redis to evict the least-recently-used keys when it hits that ceiling, which is the right behavior for a cache. Without this, Redis can grow to consume all available RAM.
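To see what allkeys-lru means in practice, here is a toy LRU cache in Python. Redis bounds the cache by bytes rather than key count, but the eviction behavior is the same idea; this class is purely illustrative, not Redis code:

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of Redis's allkeys-lru policy: when the cache is full,
    the least-recently-used key is evicted to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion/access order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU key

c = LRUCache(2)
c.set("a", 1); c.set("b", 2)
c.get("a")       # touch "a" so it is most recently used
c.set("c", 3)    # cache is full, so "b" (the LRU key) is evicted
```

This is why allkeys-lru is safe for a pure cache: anything evicted can always be re-read from the database on the next miss.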
The redis:7-alpine image is the recommended production tag as of 2026 and is well under 40MB in size.

Once the Redis container is running, you need to tell Nextcloud to use it. Edit your Nextcloud config.php file (typically at ./nextcloud/config/config.php in your volume mount) and add the following inside the $CONFIG array:
'memcache.local' => '\\OC\\Memcache\\Redis',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' => [
'host' => 'redis',
'port' => 6379,
'timeout' => 0.0,
],
Note the double backslashes before OC. Inside PHP source the backslash is an escape character, and the doubled form '\\OC\\Memcache\\Redis' is the unambiguous way to write the namespaced class name. A common mistake is misspelling this class name, which causes Nextcloud to silently fall back to no caching with no error message.
The host value redis works because both containers share the same Docker network (homelab). Docker's internal DNS resolves service names automatically.
After saving config.php, restart the Nextcloud container:
docker compose restart nextcloud
Verify Redis is receiving connections:
docker exec -it redis redis-cli ping
# Expected: PONG
docker exec -it redis redis-cli info clients
# connected_clients should increase after a Nextcloud login
A tmpfs filesystem is a virtual filesystem that Linux stores entirely in RAM. From the perspective of applications writing to it, it behaves like a normal directory on disk. The key differences: reads and writes happen at memory speed, nothing ever touches the SSD, and the contents disappear on unmount or reboot.
The volatility is a feature, not a bug, for temporary data. Transcode files, build caches, upload staging areas, and debug logs are all good candidates. You should never put databases, application data, or anything you want to keep on a tmpfs mount.
On a home server with spare RAM, common use cases include media-server transcode scratch space, build caches, and /tmp scratch space, all of which benefit from RAM-speed file operations.
There are three methods for creating a tmpfs mount, each suited to different scenarios.
The first method, a host-level /etc/fstab entry, creates a tmpfs mount point at the OS level that is automatically recreated on every boot. It is the right choice for system-wide temp directories shared across multiple containers.
First, create the mount point directory:
sudo mkdir -p /var/tmp/jellyfin-transcode
Then add the following line to /etc/fstab:
tmpfs /var/tmp/jellyfin-transcode tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777,size=4G 0 0
Mount it immediately without rebooting:
sudo mount /var/tmp/jellyfin-transcode
Verify the mount is active and the size is correct:
df -h /var/tmp/jellyfin-transcode
# Filesystem Size Used Avail Use% Mounted on
# tmpfs 4.0G 0 4.0G 0% /var/tmp/jellyfin-transcode
The second method: for containers that are the sole consumer of a scratch space, Docker's built-in tmpfs directive is the cleanest approach. The mount is created and destroyed with the container lifecycle and does not require any host configuration:
services:
jellyfin:
image: jellyfin/jellyfin:latest
container_name: jellyfin
restart: unless-stopped
tmpfs:
- /tmp/jellyfin-transcode:size=4G,uid=1000,gid=1000
volumes:
- ./jellyfin/config:/config
- /path/to/media:/media:ro
networks:
- homelab
The uid=1000,gid=1000 parameters set ownership to match the user Jellyfin runs as inside the container. Adjust these values to match your setup โ you can find the correct UID by running docker exec jellyfin id on a running container.
The third method: when multiple containers need to share the same RAM-backed scratch space, use a named volume with the tmpfs driver. This is less common but useful for build pipelines or multi-container workflows:
services:
builder:
image: your-build-image
volumes:
- transcode_tmp:/tmp/build-cache
volumes:
transcode_tmp:
driver_opts:
type: tmpfs
device: tmpfs
o: "size=4g,uid=1000"
After creating the tmpfs mount, tell Jellyfin to use it for transcoding. In the Jellyfin web UI, open the Dashboard's Playback settings and set the transcode path to /tmp/jellyfin-transcode (if using Method 2) or /var/tmp/jellyfin-transcode (if using Method 1).
How much RAM to allocate: a 1080p transcode stream uses roughly 500MB–1GB of scratch space for a typical 2-hour film depending on codec and quality settings. A 4K stream with tone mapping can use 2–4GB. As a baseline, allocate 1GB per simultaneous 1080p stream you expect. If your household has two people who might transcode at the same time, 4GB is a comfortable allocation.
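The sizing rule of thumb can be written as a small heuristic. The per-stream figures follow this guide's estimates (1GB per 1080p stream, roughly 3GB for a 4K stream with tone mapping, plus headroom); the function itself is hypothetical, not a Jellyfin setting:

```python
# Per-stream scratch-space cost in GB, per this guide's estimates.
# "4k" uses a midpoint of the 2-4GB tone-mapping range; illustrative only.
PER_STREAM_GB = {"1080p": 1.0, "4k": 3.0}

def recommended_tmpfs_gb(streams, headroom=2.0):
    """Suggested tmpfs size for a list of simultaneous streams,
    e.g. ["1080p", "1080p"], with a fixed headroom buffer in GB."""
    return sum(PER_STREAM_GB[s] for s in streams) + headroom

recommended_tmpfs_gb(["1080p", "1080p"])   # two 1080p streams -> 4.0 GB
recommended_tmpfs_gb(["4k"])               # one 4K stream -> 5.0 GB
```

The two-person household example above lands at the same 4GB figure; round up rather than down, since an undersized tmpfs fails mid-transcode.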
For more context on choosing the right media server for your hardware, see the Jellyfin vs Plex vs Emby comparison.
The following benchmarks were measured on an Intel N100 system with 16GB DDR5, running Nextcloud 29, Jellyfin 10.9, and Docker 26 on Ubuntu 24.04 LTS.
| Benchmark | Without Caching | With Redis + tmpfs | Improvement |
|---|---|---|---|
| Nextcloud page load (cold) | 800ms | 120ms | 6.7x faster |
| Nextcloud file list (1,000 files) | 1,200ms | 180ms | 6.7x faster |
| Jellyfin transcode start time | 3.5s | 1.2s | 2.9x faster |
| Docker image build (cached layers) | 45s | 8s | 5.6x faster |
| SSD daily writes (Jellyfin active use) | 15GB | 0.2GB | 98% reduction |
The SSD write reduction is particularly significant for long-term hardware health. An NVMe drive with a 300 TBW endurance rating would exhaust its write budget in roughly 55 years at 15GB/day, but the effect compounds with other write-heavy workloads on the same drive. Removing transcode writes from the SSD entirely is simply good practice.
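The endurance arithmetic is easy to reproduce. A quick sketch, assuming 1TB = 1000GB and ignoring write amplification:

```python
def endurance_years(tbw, daily_writes_gb):
    """Years until a drive's rated write endurance is exhausted.
    tbw: rated endurance in terabytes written (1TB = 1000GB here)."""
    total_gb = tbw * 1000
    return total_gb / daily_writes_gb / 365

round(endurance_years(300, 15.0))   # without tmpfs: ~55 years
round(endurance_years(300, 0.2))    # transcodes moved to RAM: millennia
```

The point is not that 55 years is a problem in isolation; it is that transcode writes are a large, entirely avoidable slice of the daily budget shared with databases and logs.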
After running Redis for a few hours (or a full day for a more meaningful sample), verify that it is actually serving cached data. Connect to the Redis CLI:
docker exec -it redis redis-cli
Retrieve cache statistics:
INFO stats
Look for these two fields in the output:
keyspace_hits:48291
keyspace_misses:3847
Calculate the hit rate manually:
hit_rate = keyspace_hits / (keyspace_hits + keyspace_misses)
hit_rate = 48291 / (48291 + 3847) = 0.926 = 92.6%
A hit rate above 80% after a 24-hour warm-up period indicates Redis is working effectively. A hit rate below 60% in steady-state operation suggests the cache is being evicted too aggressively: either increase the maxmemory allocation or investigate whether the application is generating excessive unique keys.
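If you check this regularly, the manual calculation is easy to script. A sketch that parses the relevant fields out of INFO stats text (the helper function is illustrative; redis-cli itself produces the input):

```python
def hit_rate(info_text):
    """Compute the cache hit rate from `redis-cli INFO stats` output."""
    stats = {}
    for line in info_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    hits = int(stats["keyspace_hits"])
    misses = int(stats["keyspace_misses"])
    return hits / (hits + misses)

# Sample mirrors the fields shown earlier in this guide.
sample = """keyspace_hits:48291
keyspace_misses:3847"""
print(f"{hit_rate(sample):.1%}")   # 92.6%
```

Piping `docker exec redis redis-cli INFO stats` into a script like this makes the hit rate a one-liner you can drop into a cron job or dashboard.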
You can also monitor memory usage in real time:
docker exec -it redis redis-cli INFO memory | grep used_memory_human
# used_memory_human:48.72M
How you distribute spare RAM between Redis and tmpfs depends on your total memory and workload. The following table is a starting point for an N100 or similar system:
| Total RAM | Typical Workload Usage | Available for Cache | Recommended Redis | Recommended tmpfs |
|---|---|---|---|---|
| 8GB | 4–5GB | 3GB | 256MB | 2GB |
| 16GB | 5–7GB | 8–9GB | 512MB | 4–6GB |
| 32GB | 6–10GB | 20GB+ | 1GB | 8–16GB |
These figures assume a Docker Compose stack running Nextcloud, Jellyfin, and supporting services. If you are also running Prometheus, Grafana, additional databases, or heavier workloads like LLM inference, shift the available-for-cache column down by 2–4GB accordingly.
Leave at least 1–2GB of headroom above your typical workload usage. Linux uses spare RAM for the page cache automatically, and sudden memory pressure (large file uploads, batch jobs) should not cause the OOM killer to terminate containers.
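That headroom rule can be turned into a quick sanity check before you commit sizes to your Compose file. A sketch using figures from the table above; the function name and defaults are illustrative:

```python
def plan_is_safe(total, workload, redis_gb, tmpfs_gb, headroom=2.0):
    """All values in GB. True if the Redis and tmpfs ceilings fit
    alongside the typical workload while leaving at least `headroom`
    GB free for the page cache and sudden memory spikes."""
    return total - workload - redis_gb - tmpfs_gb >= headroom

plan_is_safe(16, 6, 0.5, 4)   # the 16GB row of the table: safe
plan_is_safe(8, 5, 0.5, 4)    # 4GB tmpfs on an 8GB box: over-committed
```

Remember that tmpfs size is a ceiling, not a reservation, so this check is deliberately conservative: it assumes the scratch space could fill completely during a batch of simultaneous transcodes.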
Basic monitoring with docker stats gives a continuous view of Redis memory consumption:
docker stats redis --no-stream
# CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O
# a3f9d2c1e8b4 redis 0.1% 51.2MiB / 512MiB 10.0% 1.2GB / 980MB 0B / 0B
For longer-term visibility, both Beszel and Prometheus with Grafana can be configured to scrape Redis metrics. The Beszel, Uptime Kuma, and Grafana comparison covers which monitoring tool fits different home server setups, and the Prometheus and Grafana power monitoring guide shows how to set up the broader metrics stack.
| Issue | Cause | Fix |
|---|---|---|
| Redis OOM errors, containers killed | maxmemory not set | Add --maxmemory 512mb to the Redis command in your Compose file |
| tmpfs not present after reboot | Entry missing from /etc/fstab | Add the correct tmpfs line to /etc/fstab and run sudo mount -a to verify syntax |
| Jellyfin transcode crashes on large 4K file | tmpfs size too small | Increase size= parameter or set Jellyfin to limit max transcode quality |
| Nextcloud still slow after adding Redis | Cache entries missing or misspelled in config.php | Verify 'memcache.local' and 'memcache.locking' both point to '\\OC\\Memcache\\Redis', spelled exactly |
| Redis container not starting | Port conflict or volume permissions | Check docker compose logs redis; ensure the redis_data volume has correct permissions |
| tmpfs eating into available RAM for workloads | Over-allocated tmpfs size | Reduce size=; tmpfs only uses RAM proportional to actual content, but the ceiling matters |
One nuance worth knowing: tmpfs mounts report their full allocated size in df output even when empty. This does not mean that RAM is being consumed โ Linux allocates from the tmpfs pool lazily. A 4GB tmpfs with no files in it uses essentially no RAM.
Redis and tmpfs are not standalone tools; they slot into a broader optimized Docker Compose stack, alongside per-service resource limits, network segmentation, and sensible restart policies, on a well-tuned N100 home server running 10 or more services at 15W.
The N100 Docker stack guide covering 10 services at 15W shows how all these pieces fit together in a single Compose file, with resource limits, network segmentation, and restart policies included.
If you're still selecting hardware for this kind of build, the best low-power mini PCs for 2026 covers the current N100, N305, and ARM alternatives with updated pricing. The N100 remains the leading value option at its price point for exactly this kind of always-on, low-idle-power stack.
For broader context on reducing the energy footprint of the entire server, the ultimate power consumption guide covers firmware settings, service scheduling, and power measurement methodology.
Idle RAM is wasted money. On a 16GB N100 system with a typical home server workload, that means 8–10GB of LPDDR5 sitting unused while Nextcloud queries the database for data it already read five minutes ago, and Jellyfin writes transcode segments to an NVMe drive that will be deleted in 30 seconds.
Redis caching fixes the database query problem. A 512MB allocation with allkeys-lru eviction gives Nextcloud a 90%+ cache hit rate within hours of startup. Setup time is under 15 minutes with the Compose snippet and config.php changes in this guide.
tmpfs RAM drives fix the SSD write problem. Moving Jellyfin's transcode directory to a 4GB tmpfs mount eliminates the vast majority of transient SSD writes, speeds up transcode startup by nearly 3x, and requires nothing more than a single line in /etc/fstab or a tmpfs: block in your Compose service definition.
Together, they represent some of the highest-return optimizations available for a running home server: minimal configuration effort, no new hardware, and measurable performance gains from day one.
