n8n Backs Up n8n: A Love Story About Not Losing Your Work
It’s a Tuesday morning. You’re three coffees deep, tweaking your most complex n8n workflow — the one that took two weekends, a tangle of Stack Overflow tabs, and a fair amount of profanity to build. You’re dragging nodes around, cleaning things up, making it pretty. Then you accidentally delete a connection. No — worse. You accidentally delete a node. The one with the 47-line Code block you never copied anywhere.
Ctrl+Z? Doesn’t work the way you think it does. Version history? Not in community edition. Git integration? That’s an Enterprise feature. Your workflow is gone, and the only copy that ever existed was the one you just destroyed.
This isn’t hypothetical. I’ve been there. And I’m willing to bet if you’ve used n8n for more than a month, you’ve had that stomach-drop moment too.
This article is the cure.
Sequel alert: This is a direct follow-up to n8n: A Homelab Superpower, But Handle With Care. In that article, I ended with a promise: “I will share the exact n8n workflow I use to back up all of my other workflows automatically.” This is me keeping that promise. If you haven’t read it, go do that first — it covers the security and development methodology you’ll need before building anything serious in n8n.
Just want the workflow? Skip straight to the download section at the bottom. Import it, configure your credentials, and you’re done in 10 minutes. But stick around — the why is more interesting than the how.
$ ./quick_install.sh
Quick Install: 5 Steps, 10 Minutes
Prerequisites: A running n8n instance (community edition is fine), SSH access to your Docker host, and an n8n API key (Settings → API).
Step 1: Deploy the File-Writer Service
n8n community edition can’t write files to disk directly (the fs module is blocked). This tiny HTTP service bridges the gap.
# On your Docker host (not inside the container):
sudo vi /opt/n8n/file_writer_service.py
# Paste the file-writer script (see "The File-Writer Hack" section below)
sudo vi /etc/systemd/system/n8n-file-writer.service
# Paste the systemd unit file
sudo systemctl daemon-reload
sudo systemctl enable --now n8n-file-writer
Verify it’s running: curl -s http://127.0.0.1:8765 should return a response (even an error is fine — it means the service is listening).
Step 2: Create the Backup Directory
sudo mkdir -p /opt/n8n/backups/workflows
sudo chown -R $(whoami):$(whoami) /opt/n8n/backups
Step 3: Import the Workflow
Download N8n_Complete_Backup_with_GCS_SANITIZED.json, then in n8n: Workflows → Import from File → select the JSON.
Step 4: Update Credentials
Two credential types need updating (the workflow will show warnings on the nodes that need them):
| Credential | Nodes That Use It | How to Get It |
|---|---|---|
| n8n API | “Get All Workflows”, “Get Workflow Details” | Settings → API → Create API Key |
| SSH Private Key | “Create Backup Directory”, “Cleanup Old Local Backups”, “Cleanup Old Tmp Files” | Your existing SSH key to the Docker host |
Step 5: Activate and Verify
- Toggle the workflow to Active
- Click Test Workflow to run it immediately
- Check your backup directory:
ls /opt/n8n/backups/workflows/
You should see a timestamped folder with a JSON file for every workflow.
That’s it. Tomorrow at 2 AM, it’ll run automatically. Every day, forever, for free.
No GCS? The workflow works without cloud backup. Just ignore the GCS references in the manifest — your local backups are the main event. Add cloud later if you want off-site DR.
$ cat results.log
What You Get: The End Result
Before I explain how it works, let me show you what “enterprise-grade DR on a homelab budget” actually looks like.
Every morning at 2 AM, while you’re sleeping, this workflow silently:
- Queries the n8n API for every workflow in your instance
- Fetches the complete definition of each one (not just the summary — every node, every connection, every setting)
- Saves each workflow as an individual, human-readable JSON file in a timestamped directory
- Generates a backup manifest cataloging everything it saved
- Uploads the whole thing to Google Cloud Storage for off-site disaster recovery
- Cleans up local backups older than 30 days so your disk doesn’t fill up
Here’s what the backup directory looks like on a typical morning:
/opt/n8n/backups/workflows/
├── 2026-02-10_020003/
│ ├── _backup_manifest.json
│ ├── Duplicati_Backup_Monitor_I80zE5ptBHVt44zZ.json
│ ├── N8n_Complete_Backup_with_GCS__Daily_DR__mWzpgzamOF0XpdvP.json
│ ├── PBS_Backup_Notification_Monitor_yA9iPkyJhsHfvubx.json
│ ├── Pinball_Service_Monitor_yg7dzcbnBBosFQej.json
│ ├── RSS_Feed_with_Gen_AI_Summary_abc123.json
│ └── ... (every workflow, every day)
├── 2026-02-11_020002/
│ ├── _backup_manifest.json
│ └── ... (same set, one day later)
└── 2026-02-12_020001/
└── ...
Notice the filenames: Workflow_Name_WorkflowID.json. Human-readable and machine-parseable. You can tell what’s what without opening a single file.
But the real magic? This:
diff 2026-02-10_020003/RSS_Feed_with_Gen_AI_Summary_abc123.json \
2026-02-11_020002/RSS_Feed_with_Gen_AI_Summary_abc123.json
< "expression": "Summarize these articles with a focus on security implications"
> "expression": "Summarize these articles with a focus on security implications and practical takeaways"
That’s it. That’s the exact change I made on Monday afternoon. Line 47. One sentence added to a Gemini prompt. When something breaks next week and I can’t remember what I changed, I don’t have to guess. I diff yesterday against last week and the answer is right there.
That’s version control for a GUI tool. That’s what separates “I hope my workflows are fine” from “I know exactly what changed and when.”
$ tree pipeline/
The Architecture: 10 Nodes, Zero Drama
Here’s the full pipeline at a glance:
Schedule (2 AM cron)
│
▼
Fetch All Workflows (n8n API)
│
▼
Set Backup Metadata (timestamp, count, array)
│
▼
Create Backup Directory (SSH → mkdir)
│
▼
Split Workflows for Processing (Code node)
│
▼
Get Full Details for Each (n8n API, per-workflow)
│
▼
Prepare File Write (sanitize names, base64 encode)
│
▼
Loop & Write (batch write via file-writer service)
│
▼
Generate Manifest + Upload to GCS
│
▼
Cleanup Old Backups (30+ days local, 7+ days tmp)
Ten nodes. No exotic plugins. No third-party services (besides GCS, which is optional). Just the n8n API talking to itself, some JavaScript, and a file-writing trick I’ll explain in a moment.
Let me walk through each stage.
$ init --stages 1-4
The Foundation: Trigger, Fetch, Metadata, Directory
The first four nodes set the stage. Nothing fancy here — just solid plumbing.
Cron: 0 2 * * * — 2 AM, every day. No exceptions, no manual intervention. I picked 2 AM because nothing else is running — no RSS fetches, no monitoring sweeps, no backup jobs competing for resources.
// Why not export manually? Because you won’t. You’ll do it for a week, maybe two, then you’ll forget. Automate it or accept the risk.
Hits http://localhost:5678/api/v1/workflows using n8n’s own API credentials. Returns every workflow in the instance — active, inactive, all of them.
// Important: this endpoint returns a summary list — names, IDs, active status — but NOT full node definitions. That’s why Stage 5 exists.
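As a rough sketch of those two calls in Python — the workflow itself uses HTTP Request nodes, not scripts, and the default port 5678 plus an API key in hand are assumptions here:

```python
import json
import urllib.request

API_BASE = "http://localhost:5678/api/v1"  # assumption: default n8n port


def api_get(path: str, api_key: str) -> dict:
    """GET an n8n API endpoint, authenticating with the X-N8N-API-KEY header."""
    req = urllib.request.Request(API_BASE + path, headers={"X-N8N-API-KEY": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def workflow_ids(list_response: dict) -> list:
    """Pull IDs out of the summary list; full definitions still need a per-ID fetch."""
    return [wf["id"] for wf in list_response.get("data", [])]


# Usage against a live instance:
#   summaries = api_get("/workflows", api_key)
#   full = [api_get(f"/workflows/{wid}", api_key) for wid in workflow_ids(summaries)]
```

The two-step fetch is the whole point: the list response is an inventory, and only the per-ID calls give you something you can restore from.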
A simple Set node that captures three things:
- backupTimestamp — a 2026-02-12_020001-style stamp, used for directory names and the manifest
- workflowCount — how many workflows exist (for the summary)
- workflowsArray — the full list from Stage 2 (passed downstream for splitting)
// Nothing clever. Just organizing data for the pipeline.
An SSH node runs mkdir -p /opt/n8n/backups/workflows/{timestamp} on the Docker host. The n8n container can’t create directories on the host’s /opt path from inside a Code node; SSH bridges that gap.
$ ./process_each_workflow.sh
The Core: Split, Fetch, and Prepare
This is three nodes working together, and it’s the heart of the whole workflow.
Split Workflows for Processing
A Code node that takes the array from Stage 3 and fans it out into individual items. Each item carries its workflow metadata plus the backup timestamp.
Get Workflow Details
For each split item, an HTTP Request node hits http://localhost:5678/api/v1/workflows/{workflowId}. This is where we get the full definition: every node, every connection, every parameter, every credential reference.
Prepare File Write
Another Code node that does the heavy lifting:
- Sanitizes the workflow name for use as a filename (replaces special characters with underscores)
- Appends the workflow ID for uniqueness
- Builds the full filepath: /opt/n8n/backups/workflows/{timestamp}/{WorkflowName}_{WorkflowID}.json
- Pretty-prints the JSON (JSON.stringify(data, null, 2)) so it’s human-readable and diff-friendly
- Base64-encodes the whole thing for transport
That last step — base64 encoding — exists because of the file-writer service, which deserves its own section.
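If that Code node’s logic were written as standalone Python — the real node is JavaScript, and the exact sanitization regex here is my assumption — it would look roughly like:

```python
import base64
import json
import re

BACKUP_ROOT = "/opt/n8n/backups/workflows"


def prepare_file_write(workflow: dict, timestamp: str) -> dict:
    # Sanitize the name: any run of non-alphanumeric characters becomes one underscore
    safe_name = re.sub(r"[^A-Za-z0-9]+", "_", workflow["name"]).strip("_")
    filepath = f"{BACKUP_ROOT}/{timestamp}/{safe_name}_{workflow['id']}.json"
    # Pretty-print for diff-friendliness, like JSON.stringify(data, null, 2)
    pretty = json.dumps(workflow, indent=2)
    # Base64 makes the payload transport-safe for the HTTP file-writer
    payload = base64.b64encode(pretty.encode()).decode()
    return {"filepath": filepath, "base64Data": payload}
```

Feeding it a workflow named “Pinball Service Monitor” produces exactly the Workflow_Name_WorkflowID.json filenames you saw in the directory listing earlier.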
$ python3 file_writer_service.py
The File-Writer Hack (or: When n8n Won’t Let You Write Files)
Here’s a fun problem I ran into.
n8n community edition blocks the fs (filesystem) module in Code nodes. It’s a security decision — they don’t want arbitrary workflows writing files to the server. Fair enough. But I need to write files to the server.
My first attempt: SSH from the n8n container back to the Docker host and pipe the file content through. Result? ECONNRESET errors after a few files. SSH doesn’t love being hammered with rapid sequential connections from inside a container.
My solution: a 30-line Python HTTP service running on the Docker host.
# Listens on 127.0.0.1:8765 (localhost only)
# Accepts POST with {"filepath": "...", "base64Data": "..."}
# Decodes the base64, writes the file
# Only allows paths under /opt/n8n/backups/workflows/
It’s a systemd service (n8n-file-writer.service) that starts on boot and does exactly one thing: accept base64-encoded files over HTTP and write them to disk. Localhost only, path-restricted, no authentication needed because it’s not exposed to the network.
Is it elegant? Not particularly. Does it work flawlessly every single day at 2 AM without complaint? Yes. That’s homelab engineering.
Why not just use SSH for the writes too? SSH works fine for a one-off mkdir. But when you’re writing 15+ JSON files in rapid succession through a loop, SSH connections from inside a Docker container start timing out with ECONNRESET. The HTTP file-writer handles it without breaking a sweat.
The Loop Over Items node batches the writes, sending each workflow JSON to http://127.0.0.1:8765 one at a time. Clean, reliable, boring. Boring is good in backup systems.
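A minimal client for that loop, assuming the request shape the file-writer accepts ({"filepath": ..., "base64Data": ...}), might look like:

```python
import json
import urllib.request

WRITER_URL = "http://127.0.0.1:8765"  # the localhost-only file-writer service


def writer_body(filepath: str, base64_data: str) -> bytes:
    """Build the JSON body the file-writer expects."""
    return json.dumps({"filepath": filepath, "base64Data": base64_data}).encode()


def write_file(filepath: str, base64_data: str) -> bool:
    """POST one file to the writer; True on HTTP 200."""
    req = urllib.request.Request(
        WRITER_URL,
        data=writer_body(filepath, base64_data),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

One small POST per workflow, no connection churn — which is exactly why this beats hammering SSH from inside the container.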
$ cat _backup_manifest.json
The Backup Manifest
After all workflows are written, the pipeline generates a _backup_manifest.json in the same timestamped directory:
{
"backupDate": "2026-02-12 02:00:15",
"totalWorkflows": 15,
"backupLocation": "/opt/n8n/backups/workflows/2026-02-12_020001/",
"status": "SUCCESS",
"gcsFullBackupPath": "gs://your-bucket/full-backups/2026-02-12_020001/...",
"gcsWorkflowsPath": "gs://your-bucket/workflows/2026-02-12_020001/",
"workflows": [
{ "id": "abc123", "name": "My Important Workflow", "active": true },
{ "id": "def456", "name": "Another Workflow", "active": false }
]
}
This is your table of contents. At a glance you can see: how many workflows were backed up, whether the backup succeeded, where to find it locally and in the cloud, and which workflows are active vs. inactive.
It sounds like a small thing, but when you have 15+ workflows and you’re trying to find the right backup at 11 PM on a Friday, a manifest saves you from opening every file.
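Because the manifest is plain JSON, scripting against it is trivial. A quick sketch using the field names from the example above (the directory layout is the one this article describes):

```python
import json
from pathlib import Path


def summarize_manifest(manifest: dict) -> str:
    """One-line health summary from a backup manifest."""
    active = sum(1 for wf in manifest.get("workflows", []) if wf.get("active"))
    return (f"{manifest.get('backupDate', '?')}: {manifest.get('status', '?')}, "
            f"{manifest.get('totalWorkflows', 0)} workflows ({active} active)")


def latest_manifest(root: str = "/opt/n8n/backups/workflows") -> dict:
    """Load the manifest from the newest timestamped directory."""
    newest = sorted(p for p in Path(root).iterdir() if p.is_dir())[-1]
    return json.loads((newest / "_backup_manifest.json").read_text())
```

Because the directory names sort lexicographically by date, “newest” is just the last entry after a plain sort — no date parsing needed.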
$ gsutil rsync
Off-Site to Google Cloud Storage
Local backups protect against mistakes. Cloud backups protect against your server catching fire.
The workflow uploads the complete backup to a Google Cloud Storage bucket. This gives you the full 3-2-1 backup picture:
| Layer | Protects Against | Location |
|---|---|---|
| n8n instance | Nothing (it IS the risk) | Docker container |
| Local JSON backups | Accidental deletions, bad edits | Host filesystem |
| GCS cloud backup | Hardware failure, fire, theft | Google Cloud |
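In my setup the upload happens from inside n8n, but if you’d rather mirror backups from the host instead, a gsutil command builder might look like this — the bucket name and gs:// layout are assumptions, so match them to the paths your manifest records:

```python
def gcs_sync_cmd(local_dir: str, bucket: str, timestamp: str) -> list:
    """Build a gsutil rsync command mirroring one timestamped backup dir to GCS."""
    # -m parallelizes, -r recurses; run it via subprocess.run(cmd, check=True)
    # (assumes gsutil is installed and authenticated on the host)
    return ["gsutil", "-m", "rsync", "-r", local_dir,
            f"gs://{bucket}/workflows/{timestamp}/"]
```

Dropping that into a cron job gives you the same off-site layer without touching the workflow at all.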
What Does This Cost?
| Resource | Cost |
|---|---|
| GCS Standard Storage | ~$0.020/GB/month |
| Typical backup size (15 workflows) | ~500 KB |
| 30 days of daily backups | ~15 MB |
| Monthly cost | < $0.01 |
That’s not a typo. A penny. Maybe. Google doesn’t even bother to charge you for storage this small. Enterprise-grade disaster recovery for the price of literally nothing.
Don’t have GCS? The workflow works perfectly fine without the cloud upload. You still get local timestamped backups, the manifest, the cleanup — all of it. The GCS piece is the cherry on top, not a requirement.
$ find . -mtime +30 -delete
Cleanup (Because Hoarding Isn’t a Backup Strategy)
Two cleanup steps run at the end:
- Local backups older than 30 days — find /opt/n8n/backups/workflows -type d -mtime +30 -exec rm -rf {} +
- Temp files older than 7 days — find /tmp -name 'n8n-*-backup-*.tar.gz' -mtime +7 -delete
Both use continueOnFail: true so a cleanup failure doesn’t tank the whole backup. Cleanup is important but not critical.
30 days of local history means you can go back an entire month. Combined with GCS, you can go back… well, as far as your cloud retention policy allows.
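The find one-liners above are what the workflow actually runs. If you ever want to test the retention policy before pointing it at real directories, the decision boils down to a pure function like this (a sketch, not part of the workflow):

```python
def dirs_to_prune(dir_ages: dict, keep_days: int = 30) -> list:
    """Given {dir_name: age_in_days}, return directories past retention, oldest first."""
    stale = [name for name, age in dir_ages.items() if age > keep_days]
    return sorted(stale, key=lambda name: dir_ages[name], reverse=True)
```

Note the strict `>` comparison: a backup that is exactly 30 days old survives, matching find’s `-mtime +30` semantics of “strictly more than 30 days.”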
$ ls *backup*backup*
The Beautiful Recursion
Here’s my favorite detail about this whole system: the backup workflow backs up itself.
N8n_Complete_Backup_with_GCS__Daily_DR__mWzpgzamOF0XpdvP.json appears in every timestamped backup directory. If you break the backup workflow while editing it, yesterday’s backup has a working copy. Import it, and you’re back in business.
It’s turtles all the way down, and it’s exactly the kind of safety net that lets you develop fearlessly.
$ diff --color tuesday/ today/
How to Restore (Because a Backup You Can’t Restore Is Just Wasted Disk Space)
Let’s say it’s Friday. Something’s broken. You need to roll back your “Pinball Service Monitor” workflow to what it looked like on Tuesday.
ls /opt/n8n/backups/workflows/
# 2026-02-10_020003/
# 2026-02-11_020002/
# 2026-02-12_020001/
Tuesday’s backup is 2026-02-10_020003.
diff /opt/n8n/backups/workflows/2026-02-10_020003/Pinball_Service_Monitor_yg7dzcbnBBosFQej.json \
/opt/n8n/backups/workflows/2026-02-12_020001/Pinball_Service_Monitor_yg7dzcbnBBosFQej.json
This shows you exactly what changed between Tuesday and today. Maybe it’s obvious. Maybe it’s a single parameter you accidentally nudged. Either way, you know now.
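If raw diff output gets noisy on big workflows, a few lines of Python can narrow it down to which nodes changed — assuming the standard n8n export shape, where nodes is a list of objects each carrying a name key:

```python
import json


def changed_nodes(old: dict, new: dict) -> list:
    """Return the names of nodes whose definitions differ between two workflow exports."""
    old_nodes = {n["name"]: n for n in old.get("nodes", [])}
    new_nodes = {n["name"]: n for n in new.get("nodes", [])}
    names = set(old_nodes) | set(new_nodes)
    # A node counts as changed if it was added, removed, or has any differing field
    return sorted(n for n in names if old_nodes.get(n) != new_nodes.get(n))


# Usage:
#   old = json.loads(open("tuesday/Pinball_Service_Monitor_yg7dzcbnBBosFQej.json").read())
#   new = json.loads(open("today/Pinball_Service_Monitor_yg7dzcbnBBosFQej.json").read())
#   print(changed_nodes(old, new))
```

That turns “something in this 400-line file changed” into “the Gemini prompt node changed,” which is usually all you need to know before deciding between a full restore and a surgical fix.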
Option A: Full restore via n8n UI
- Open n8n
- Go to Workflows → Import from File
- Select Pinball_Service_Monitor_yg7dzcbnBBosFQej.json from Tuesday’s backup
- It imports as a new workflow. Verify it looks right.
- Deactivate the broken version, activate the restored one
Option B: Surgical fix
- Open the diff output
- Find the changed line
- Fix it manually in the n8n editor
- Less drama, same result
Either way, you’re back to a known-good state in under 5 minutes. No guessing, no recreating from memory, no panic.
$ ./check_requirements.sh
Prerequisites: What You Need Before Importing
Before you import the downloadable workflow, you’ll need a few things in place:
| Prerequisite | What It Is | Why |
|---|---|---|
| n8n API credential | An API key configured in n8n Settings → API | The workflow queries itself |
| SSH credential | Private key with access to your Docker host | Creates directories, runs cleanup |
| File-writer service | Python HTTP service on port 8765 | Writes JSON files from inside the container |
| GCS bucket (optional) | Google Cloud Storage bucket | Off-site backup destination |
The File-Writer Service
This is the one piece that doesn’t come with n8n. You’ll need to set up a small systemd service on your Docker host:
file_writer_service.py — the full Python script
# /opt/n8n/file_writer_service.py
from http.server import HTTPServer, BaseHTTPRequestHandler
import json, base64, os

ALLOWED_PREFIX = "/opt/n8n/backups/workflows/"

class FileWriter(BaseHTTPRequestHandler):
    def do_POST(self):
        data = json.loads(self.rfile.read(int(self.headers['Content-Length'])))
        # Normalize the path first so "../" tricks can't escape the allowed tree
        filepath = os.path.realpath(data['filepath'])
        if not filepath.startswith(ALLOWED_PREFIX):
            self.send_response(403)
            self.end_headers()
            return
        os.makedirs(os.path.dirname(filepath), exist_ok=True)
        with open(filepath, 'wb') as f:
            f.write(base64.b64decode(data['base64Data']))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

HTTPServer(('127.0.0.1', 8765), FileWriter).serve_forever()
n8n-file-writer.service — the systemd unit file
# /etc/systemd/system/n8n-file-writer.service
[Unit]
Description=n8n File Writer Service
After=network.target
[Service]
ExecStart=/usr/bin/python3 /opt/n8n/file_writer_service.py
Restart=always
User=root
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable --now n8n-file-writer
Localhost only. Path-restricted. Does one thing well.
$ cat /var/log/lessons.log
Lessons Learned
Six Things I Learned the Hard Way
- The list endpoint lies to you (sort of). The /api/v1/workflows endpoint returns workflow metadata — names, IDs, active status. It does not return full node definitions. If you back up just this response, you have a nice inventory and zero ability to restore. Always fetch individual workflow details via /api/v1/workflows/{id}.
- SSH from containers is flaky. Running sequential SSH commands from inside a Docker container to the host works… until it doesn’t. After about 10 rapid connections, I started getting ECONNRESET timeouts. The HTTP file-writer service solved this completely.
- Pretty-print your JSON. JSON.stringify(data, null, 2) adds whitespace and indentation. It makes the files larger (barely) but makes diff output actually readable. Minified JSON diffs are useless.
- continueOnFail is your friend. The cleanup nodes use continueOnFail: true so a failed cleanup doesn’t mark the entire backup as failed. The backup itself is the critical path. Cleanup is housekeeping.
- Name your files predictably. WorkflowName_WorkflowID.json gives you both human readability (you know what it is) and machine uniqueness (the ID prevents collisions if two workflows have similar names).
- The manifest isn’t optional. When you have 15+ workflows and you need to find the right file at 11 PM on a Friday, you’ll be glad you have a table of contents instead of opening files one by one.
$ wget
Download the Workflow
Ready to set it up? The sanitized workflow JSON is below. All credential IDs, instance IDs, and bucket names have been replaced with placeholders.
After importing:
- Update the n8n API credential references (2 nodes use it)
- Update the SSH credential references (3 nodes use it)
- Update the GCS bucket name in the Prepare Manifest Code node (or remove the GCS reference if you don’t need cloud backup)
- Set up the file-writer service on your Docker host (see Prerequisites above)
- Activate the workflow
$ exit
Wrapping Up
Here’s the thing about backups: nobody cares about them until they need one. And by then it’s too late.
This workflow took me an afternoon to build and has been running silently every night for months. It costs nothing. It requires zero maintenance. And it has already saved me twice — once when I accidentally broke a complex workflow, and once when I needed to see exactly what I’d changed three weeks ago.
If you’re running n8n in your homelab, you already know it’s powerful. This is how you make it reliable. Enterprise-grade disaster recovery, timestamped version history, and off-site cloud backup — all running on the same tool that almost lost your work in the first place.
The irony isn’t lost on me. But that’s n8n for you: powerful enough to automate its own safety net.
Now go back up your workflows. Then go build something you’re not afraid to break.
