The Core Method
Rsync can replicate macOS Time Machine's snapshot behavior on Linux using hardlinks, which are additional directory entries pointing at the same file data rather than duplicate copies. The approach uses rsync -avPh --delete --link-dest to compare new data against the most recent backup, creating timestamped snapshots that look like full backups but share the data of unchanged files.
A bash script automates this: rsync compares the source directory against the last snapshot, creates a new timestamped directory containing fresh copies of only the changed files (everything else is hardlinked), then updates a 'current' symlink to the newest snapshot. Running it via cron (say, 5am daily) produces browsable snapshots at minimal space cost. One example showed 497MB of total storage after deletions, because the hardlinks point to the same underlying data.
What This Means in Practice
For teams running Linux infrastructure, this offers:
- No licensing costs or vendor lock-in
- Full snapshot browsing without storage multiplication
- Standard Unix tools, no proprietary formats
- Scriptable integration with existing backup workflows
Open-source implementations of this pattern exist: linux-timemachine (POSIX-compliant, with SSH support) and rsync-time-backup (cross-platform, with flexible exclusions) both build on it, and Python wrappers add orchestration for multi-system deployments.
The Fine Print Matters Here
Hardlinks save space but introduce fragility: if the snapshot the 'current' symlink (and thus --link-dest) points at is removed before the next run, rsync silently falls back to copying everything in full. Cron failures go unnoticed without logging. The Arch Linux community stresses the point: the only way to verify a backup is to try using it.
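The logging gap is cheap to close with a thin cron wrapper that records exit status; the script and log paths here are assumptions for illustration:

```shell
#!/bin/sh
# Cron wrapper: keep a log and make failures visible (paths illustrative).
LOG=/var/log/snapshot.log

/usr/local/bin/snapshot.sh >>"$LOG" 2>&1
status=$?
if [ "$status" -ne 0 ]; then
    echo "$(date) snapshot FAILED (exit $status)" >>"$LOG"
    # Hook point for alerting: mail, a webhook, or a dead-man-switch ping.
fi
```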
Enterprise-scale deployments should pair rsync with:
- Checksums for bit rot detection
- Regular restore testing
- Monitoring for cron job failures
- Scrubbing for redundancy at scale
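For the checksum item, one low-tech sketch is a per-snapshot sha256sum manifest; the snapshot and manifest paths are assumptions:

```shell
# Record checksums when the snapshot is made, verify them later
# (snapshot path and manifest location are illustrative).
SNAP=/mnt/backup/current

# Build a manifest of every file's SHA-256 alongside the snapshot...
(cd "$SNAP" && find . -type f -print0 | xargs -0 sha256sum) > "$SNAP.sha256"

# ...then, on a schedule, re-verify; a nonzero exit means silent corruption.
(cd "$SNAP" && sha256sum --check --quiet "$SNAP.sha256")
```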
Hacker News threads from 2018 still surface in 2026 discussions - the method works, but it's basic. Teams needing deduplication across multiple sources or automated rotation often layer tools like rsnapshot on top, or move to purpose-built solutions.
The real question: is the lightweight simplicity worth the manual verification overhead? For teams comfortable with Unix tooling and disciplined testing, yes. For hands-off enterprise backup, probably not.