
Time Machine-style Backups with rsync (2018)

Ha. That's a throwback.

I did the same thing, but with a more detailed writeup, in 2009: https://nuxx.net/blog/2009/12/06/time-machine-for-freebsd/

It was really handy, but I now use borg as it just works better.

5 hours ago | c0nsumer

HA! It was your post I read back then, and I have been using it ever since. So thank you. I guess I need to check out borg ;)

3 hours ago | human_llm

I've used bontmia since forever; it's based on the same rsync feature (--link-dest, for creating hard links to files unmodified since the last backup). It also supports backup rotation, and I think it's quite solid/reliable after all these years: https://github.com/hcartiaux/bontmia

an hour ago | hcartiaux

"borg" has basically solved backups permanently. It's deduplicated, even across snapshots, compressed, and end-to-end encrypted. I'm surprised it's not more well known.

2 hours ago | LeoPanthera

> "borg" has basically solved backups permanently. It's deduplicated, even across snapshots, compressed, and end-to-end encrypted.

Deduplication helps minimize space, but isn't it a major liability in backups? I mean, what happens when you try to restore your backups but a lone sector holding a file from way back in the past happens to not be recoverable? Doesn't it mean that no matter how frequent your backups are, your data is lost?

2 hours ago | locknitpicker

If you are storing your data on a filesystem like ZFS, the risk is lower, since you'll know you have a bad sector long before you attempt to deduplicate. But I otherwise share your concerns and am eager to hear from others how to mitigate this. I've lost a volume due to poor practice while encrypting, so I'm understandably reluctant to overcomplicate my backup strategy in the name of space compression or privacy.

an hour ago | stuxnet79

It doesn't really hurt in practice because it's only one part of the full backup procedure. Deduplicate to save space; re-duplicate the (smaller) backups to separate media for redundancy; scrub regularly for bit rot and rotate your media. A proper backup system requires all but the first of those anyway.
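One hedged sketch of the "scrub regularly" step, using nothing more than a checksum manifest (the backup directory here is a temp dir for illustration; real scrubs would run against the actual backup media):

```shell
set -eu
BACKUP=$(mktemp -d)                       # stand-in for a real backup volume
echo "precious data" > "$BACKUP/file.bin"

# At backup time, record a checksum for every file...
(cd "$BACKUP" && find . -type f ! -name MANIFEST -exec sha256sum {} + > MANIFEST)

# ...and on each scrub pass, verify them; silent bit rot shows up as a FAILED line.
(cd "$BACKUP" && sha256sum -c --quiet MANIFEST)
echo "scrub clean"
```

A mismatch is your cue to re-duplicate that file from one of the other copies before the rot spreads to all of them.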

an hour ago | haradion

In that case you are supposed to use your /other/ backup. Which you have.

an hour ago | NSUserDefaults

The original post that introduced this idea to the general public: http://www.mikerubel.org/computers/rsync_snapshots/

I'm sure others will chime in that they used hard links like this before then; however, as noted on that page, it's the one that made the approach popular enough that rsync was updated to support the idea natively.

6 hours ago | orev

Seems similar to https://rsnapshot.org/

6 hours ago | nightshift1

I've been using rsnapshot for backups for so many years that I forget when I started. I keep the last 14 snapshots.

4 hours ago | pmontra

This is the more robust way to go. Uses rsync under the hood.

6 hours ago | kunjanshah

If we were talking about a Linux box, one that prudently ran ZFS, or XFS on top of LVM, it would be possible to take a snapshot before the diffing and sending, so that the backup would be a true point-in-time copy. IDK if whatever macOS uses for its filesystem supports snapshots.
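A sketch of that snapshot-then-rsync idea on ZFS; the dataset name tank/home and the destination /mnt/backup/ are hypothetical, and the script just reports and moves on when ZFS isn't present:

```shell
set -eu
if ! command -v zfs >/dev/null 2>&1; then
    echo "zfs not installed; nothing to do"       # harmless on non-ZFS systems
else
    STAMP=$(date +%Y%m%d-%H%M%S)
    zfs snapshot "tank/home@backup-$STAMP"        # freeze a point-in-time view
    # Back up from the read-only snapshot directory, not the live filesystem.
    rsync -a "/tank/home/.zfs/snapshot/backup-$STAMP/" /mnt/backup/
    zfs destroy "tank/home@backup-$STAMP"         # drop the snapshot afterwards
fi
DONE=1                                            # marks the end of the sketch
```

The point is that rsync walks a frozen view of the data, so files can't change underneath it mid-transfer.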

Otherwise, I think, restic or kopia are better for proper backups, and Syncthing for keeping a mirror copy. But the simplicity of this script is charming.

6 hours ago | nine_k

Sounds similar to rdiff-backup (https://rdiff-backup.net).

I know some folks that have been using that for a very long time as well.

an hour ago | nocman

Need to combine this with LVM or BTRFS or similar to get a true snapshot. rsnapshot supports LVM snapshots pretty well.

3 hours ago | theteapot

restic to S3 has been very reliable - haven't tried borg.

an hour ago | mattbillenstein

Isn’t restic better for backups overall?

Anyone have a good script for macOS triggered by launchd, ideally something that uses FSEvents to check for directory changes?
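Not a full answer, but launchd can watch directories natively via its WatchPaths key (which rides on FSEvents), so a minimal agent might look like this. The label, script path, and watched directory are all hypothetical placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.rsync-backup</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/backup.sh</string>
    </array>
    <key>WatchPaths</key>
    <array>
        <string>/Users/me/Documents</string>
    </array>
    <key>ThrottleInterval</key>
    <integer>300</integer>
</dict>
</plist>
```

Dropped into ~/Library/LaunchAgents and loaded with launchctl, the job runs whenever the watched directory changes; ThrottleInterval keeps rapid-fire changes from triggering back-to-back runs.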

6 hours ago | mrtesthah

I use restic, too, and I am very pleased with it, although I run it on Windows, which works but isn't perfect. So, you use it regularly?

3 minutes ago | movetheworld

Yes, it is; it's among a few other great cross-platform FOSS tools that are built just for backups, and they do it really well. But most of them do periodic scans (as opposed to file-change-triggered backup runs, which I guess is what you're looking for, judging by your second sentence).

6 hours ago | crossroadsguy

Why make hard links when you can use rsync or Syncthing to just make an actual copy on an external hard drive, e.g. via wifi or just remotely?

6 hours ago | EGreg

Hard links are file-level dedupe.

And then once all references to the inode are removed (by rotating out backups), it's freed. So there's no maintenance of the deduping needed; it's all just part of how the filesystem and --link-dest work together.
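That lifecycle can be seen with plain ln and stat (temp files for illustration; stat -c is GNU syntax, so use stat -f %l on macOS):

```shell
set -eu
DIR=$(mktemp -d)
echo "some file data" > "$DIR/in-snap1"
ln "$DIR/in-snap1" "$DIR/in-snap2"   # a newer backup hard-links the same inode

stat -c %h "$DIR/in-snap1"           # prints 2: both snapshots reference it
rm "$DIR/in-snap1"                   # rotate the old backup out
stat -c %h "$DIR/in-snap2"           # prints 1: data intact, nothing to clean up
# Removing in-snap2 as well would drop the last reference and free the blocks.
```

Deleting a snapshot only ever decrements link counts; the filesystem frees the data blocks itself once the last link goes, with no dedup bookkeeping on top.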

5 hours ago | c0nsumer

The hard links point to the most recent backup before the one the script is currently making, so that you aren't storing full copies of files that haven't changed between backups.