
In this post I describe how and why I radically overhauled my backup strategy over the past few years – and which alternatives I consciously ruled out. It is written as the journey it was, which hopefully gives you useful insight into whether my decisions could also be yours or not (they work for me 🤓). I added takeaway sections for more general insights.
TL;DR – too long; didn’t read… the final setup:
- external USB 3.2 Gen 2×2 RAID enclosure (20 Gb/s) with two WD Red NVMe SSDs in hardware RAID 1, formatted as EXT4.
- initial backup (~1.3 TB) taken directly on the Mac via ExtFS (Paragon).
- continuous service on a Raspberry Pi 4 B (8 GB RAM, Ubuntu):
  - enclosure attached over USB 3.0 (5 Gb/s).
  - SuperSpeed USB 3.1 USB-C → USB 3.0 (Type A) cable, since the Pi has no USB-C.
  - Mac → Pi is capped by Gigabit Ethernet (1 Gb/s).
- BorgBackup:
  - via local mount for the initial backup, and
  - via SSH for continuous service
  - → incremental, chunk-based deduplication before encryption; only deltas are transferred.
From NAS to Cloud – and back again
Early NAS
All data once lived unencrypted on a local NAS, but over time almost every feature drifted to the cloud:
- Music → Apple Music
- Photos/Videos → iCloud Photos
- Documents → iCloud Drive
- …
iCloud
I upgraded to the 2 TB plan with end‑to‑end encryption. Because iCloud deletes items across all devices when you trash them (a 30‑day grace period for Photos), I still needed an at‑rest backup.
OneDrive
A Microsoft 365 Family promo (€5/month) gave every family member 1 TB on OneDrive plus the Office apps – it looked perfect at first glance. In early 2025 Microsoft triple-billed me; the PayPal refunds killed the discount and doubled the price. OneDrive also flagged my Cryptomator vaults as ransomware, shunting them to the recycle bin and deleting them soon after. I learned the hard way that this is not reliable and that you depend on Microsoft’s goodwill. I was out.
AWS S3
I tried AWS S3 with client‑side encryption. The tooling is nerd‑friendly but billing is opaque: you pay for storage and every mutation. Restoring from Deep Archive can take days just to compute deltas. Incremental backups become slow, pricey, and error‑prone – all inside a cloud silo. Not future‑proof.
Takeaway
Do not use one of the big Silicon Valley corps for personal backup.
A brief NAS relapse …
Hence, in early 2025 I tried the cost-efficient route and reactivated a five-year-old Linux NAS (2.5″ SSDs in RAID 1) over Gigabit Ethernet. It fizzled: the initial RAID sync burned two days and the SMB/SSH link kept dropping – probably age-related hardware issues. I bailed quickly.
The new hardware strategy
- Enclosure: compact USB 3.2 Gen 2×2 RAID case (~€140) with active cooling (fan + heatsink) for 24/7 duty
- SSDs: 2 × WD Red SN700 NVMe M.2 in RAID 1
Why WD Red? Higher TBW (Total Bytes Written) endurance and RAID-tuned firmware for constant use, unlike WD Black (faster, lower endurance) or WD Blue (mainstream, low endurance).
- Throughput: measured 980 MB/s read & write on the Mac (20 Gb/s)
- Filesystem: EXT4 (journaled)
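For completeness, preparing the EXT4 volume on Linux looks roughly like this – a minimal sketch; the device name, label and mount point are assumptions, so check lsblk first:

```bash
# Minimal sketch – the hardware RAID shows up as a single block device.
# /dev/sda, the label "backup-raid" and /mnt/raid are placeholders, not my exact setup.
sudo mkfs.ext4 -L backup-raid /dev/sda        # journaled EXT4 across the whole RAID volume
sudo mkdir -p /mnt/raid
sudo mount LABEL=backup-raid /mnt/raid        # mount by label, independent of device order
# optional /etc/fstab entry for boot-time mounting:
# LABEL=backup-raid  /mnt/raid  ext4  defaults,noatime  0  2
```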
Workflow
- Initial backup
  - Enclosure attached via USB 3.2 directly to the Mac (macOS).
  - ExtFS (Paragon) mounts EXT4 read-write.
  - One-off copy of ~1.3 TB.
- Continuous operation (see the sketch after this list)
  - Enclosure moves to the Raspberry Pi 4 B (8 GB RAM, Ubuntu) over USB 3.0 (5 Gb/s).
  - The Pi makes the EXT4 volume reachable over SSH (borg serve).
  - BorgBackup on the Mac runs scheduled jobs:
    - Chunk-based deduplication before compression & encryption.
    - Only changed data is sent (incremental).
    - The local chunk cache is rebuilt from the repository if needed.
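A minimal sketch of both phases with Borg – the repository location (/Volumes/raid on the Mac, /mnt/raid on the Pi), the hostname and the backed-up folders are placeholders, not my literal job definition:

```bash
# Phase 1 – initial backup, enclosure mounted locally on the Mac via ExtFS (assumed mount point).
borg init --encryption=repokey-blake2 /Volumes/raid/borg
borg create --stats --compression zstd,6 \
    /Volumes/raid/borg::mac-{now:%Y-%m-%d} \
    ~/Documents ~/Pictures

# Phase 2 – continuous service: the same repository, now reached via SSH on the Pi
# (Borg must also be installed on the Pi; the client starts "borg serve" there).
borg create --stats --compression zstd,6 \
    ssh://pi@raspberrypi/mnt/raid/borg::mac-{now:%Y-%m-%d} \
    ~/Documents ~/Pictures
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12 \
    ssh://pi@raspberrypi/mnt/raid/borg
```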
Software RAID vs. Hardware RAID
Linux and macOS software RAID sets are incompatible (see the initial vs. continuous phases above). A Parallels Desktop VM workaround would be pricey, convoluted, and would break UNIX permissions on shared folders. Hardware RAID sidesteps every trap: the mirroring happens inside the enclosure, and both operating systems just see one plain disk.
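A quick way to see this in practice – purely illustrative commands, no RAID tooling required on either host:

```bash
lsblk -f                  # Linux / Raspberry Pi: a single block device with an ext4 filesystem
# diskutil list external  # macOS: the same enclosure appears as one external disk
```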
Takeaway
Instead of cloud silos and incompatible software RAIDs I now rely on a fast, portable hardware RAID 1 paired with battle‑tested open‑source backup software. It delivers speed, full data sovereignty, encrypted off‑site copies, and granular incremental backups – exactly the balance I was after.
Addendum
During this overhaul of my backup strategy I also struggled with “USB” itself, and I rethought how I make incremental backups of raw binary images. I had written two articles about my strategy at the time on this blog – let me revisit both topics here.
Cables, Adapters & Dongles – the hidden bottleneck
USB looks like a single standard, but it splinters into sub-standards (USB 2.0 → USB 4) plus Intel’s Thunderbolt branch.
Here is the USB standard hell at a glance:
| Standard | Year | Max Speed | Notes |
|---|---|---|---|
| USB 2.0 | 2000 | 480 Mbit/s (High Speed) | Introduction of Mini/Micro-USB ports |
| USB 3.0 | 2008 | 5 Gbit/s (SuperSpeed) | New connectors (USB-A, USB-B, Micro-B) |
| USB 3.1 Gen 1 | 2013 | 5 Gbit/s (SuperSpeed) | Renamed USB 3.0 |
| USB 3.1 Gen 2 | 2013 | 10 Gbit/s (SuperSpeed+) | New protocol, improved PHY |
| USB 3.2 Gen 1×1 | 2017 | 5 Gbit/s (SuperSpeed) | Another rename of USB 3.0 / 3.1 Gen 1 |
| USB 3.2 Gen 2×1 | 2017 | 10 Gbit/s (SuperSpeed+) | Former USB 3.1 Gen 2 |
| USB 3.2 Gen 1×2 | 2017 | 10 Gbit/s (SuperSpeed) | Dual-lane 5 Gbit/s over USB-C |
| USB 3.2 Gen 2×2 | 2017 | 20 Gbit/s (SuperSpeed++) | Dual-lane 10 Gbit/s over USB-C |
| USB4 | 2019 | 20–40 Gbit/s | Thunderbolt-compatible, always USB-C |
The new USB-C plug merely unifies the connector; it doesn’t end the chaos. You can mate almost anything with anything and nothing will fry – the link simply falls back to the slowest, mutually supported mode – yet figuring out which mode you actually get is the trick.
Why it’s messy
- USB vs. Thunderbolt. Both share the USB-C connector but differ in chipsets and feature sets (e.g. daisy chaining).
- Host limits. My Raspberry Pi 4 B exposes two USB 2.0 (480 Mb/s) and two USB 3.0 (5 Gb/s) USB-A ports – no USB-C at all.
- Cable reality.
  - USB-C → USB-A cables are usually only USB 2.0 unless clearly labeled SuperSpeed.
  - Passive cables longer than ≈ 0.7 m cannot sustain Thunderbolt 4 / USB4’s 40 Gb/s; they downshift automatically.
  - Every extra dongle drops the chain to the slowest spec in the path.
  - Long cables may even lead to stuttering or freezing transfers, or unreliable device detection.
- Adapters bite. A premium Thunderbolt-certified C-to-C cable throttles to USB 2.0 if you slap on a cheap C-to-A dongle.
Verify, don’t trust
You can check whether the link actually negotiates the mode you expect like this:
| Platform | Quick check | Speed test |
|---|---|---|
| Linux | lsusb -t | dd if=/dev/zero of=/mnt/raid/test.bin bs=1G count=5 oflag=direct |
| macOS | Menu → About This Mac → System Report → USB | Use a disk benchmark or dd in Terminal |
The System Report on macOS is static – close & reopen after changes.
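On Linux the negotiated speed can also be read straight from sysfs – a small sketch using only standard paths:

```bash
# USB topology with the negotiated speed per device (480M, 5000M, 10000M, 20000M).
lsusb -t

# Raw negotiated speeds in Mbit/s: 480 = USB 2.0, 5000 = 5 Gb/s, 10000 = 10 Gb/s, 20000 = 20 Gb/s.
cat /sys/bus/usb/devices/*/speed 2>/dev/null | sort -un
```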
Takeaway
Before blaming drives or backup tools, sanity-check the wire. Ensure every cable, hub and adapter is rated for the speed you expect, keep high-speed runs short, and verify with lsusb
or a quick read & write test. In the USB jungle, the weakest link always wins.
Incremental Backups for Raw Images – rsync-batch vs. Borg, revisited
I store my day-to-day data with BorgBackup. For huge binary blobs (full SD-card images, VM disks) I originally cobbled together an rsync batch-diff pipeline. Time to compare both approaches head-to-head (a sketch of the rsync mechanism follows the table).
| | rsync-batch chain | BorgBackup |
|---|---|---|
| What’s saved | Latest full image + reverse delta patches | Content-defined chunks, deduped repo-wide |
| Restore speed | Apply n patches to reach the target snapshot | Instant extract of any archive (O(1)) |
| Encryption | External (Cryptomator / GPG) | Built-in AES-CTR + HMAC |
| Integrity checks | Ad-hoc MD5 scripts | borg check (BLAKE2 / CRC) |
| Retention | DIY rotate script | borg prune policies |
| Memory footprint | rsync scales badly on 10+ GB images | Chunk size ≤ 2 MiB, constant RAM |
| Transport efficiency | Plain copy of patch files | Delta-aware borg serve over SSH |
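For context, the old chain was built on rsync’s batch mode. The snippet below is not my original reverse-delta script, just a simplified forward-delta sketch of the mechanism; every path is a placeholder:

```bash
# Record what would have to change in the stored copy to match today's image,
# without touching the copy itself (--only-write-batch). --no-whole-file forces
# rsync's delta algorithm even for local-to-local paths.
rsync --only-write-batch=/mnt/raid/patches/sdcard-$(date +%Y%m%d).batch \
      --no-whole-file --checksum \
      /backups/today/sdcard.img /mnt/raid/latest/sdcard.img

# Later: roll the stored copy forward by replaying a recorded batch onto it.
rsync --read-batch=/mnt/raid/patches/sdcard-20250215.batch /mnt/raid/latest/sdcard.img
```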
Achieving the same semantics with Borg
# 1· Create / init encrypted repo (local or via SSH)
borg init --encryption=repokey-blake2 /mnt/raid/borg
# 2· First full volume snapshot (streamed from dd)
sudo dd if=/dev/mmcblk0 bs=1M | \
  borg create --stdin-name sdcard.img --compression zstd,6 \
  /mnt/raid/borg::vol-{now:%Y%m%d} -
# 3· Subsequent incrementals – Borg deduplicates on chunk level
sudo dd if=/dev/mmcblk0 bs=1M | \
  borg create --stdin-name sdcard.img --compression zstd,6 \
  /mnt/raid/borg::vol-{now:%Y%m%d} -
# 4· Retention: keep 7 daily, 4 weekly, 12 monthly
borg prune /mnt/raid/borg \
--keep-daily 7 --keep-weekly 4 --keep-monthly 12
# 5· Verify repository health
borg check --verify-data /mnt/raid/borg
# 6· Restore any snapshot at wire speed
borg extract --stdout /mnt/raid/borg::vol-20250215 | \
sudo dd of=/dev/mmcblk0 bs=1M
Takeaway
My rsync script chain still works for one-off jobs on machines without Borg installed. For everything else, Borg’s chunk-level deduplication, integrated crypto, and one-command snapshots make it the clear winner for big binary images as well.
Exciting 🤓.
This article has been written with help from an LLM, which may make errors (like humans do 😇).