Among other things, I’m running a small Nextcloud instance on my home server, and over time data piles up (especially photos). My main storage is sufficiently sized & redundant for now, but I wonder how I’m supposed to do a serious backup: the moment will come when the backup’s size exceeds any reasonably priced single drive. What then?
Of course I can just buy another disk and distribute the chunks, but that’s manual work - or is it not? At least rsync has no built-in option for that.
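As far as I can tell, rsync can at least consume an explicit list of paths via --files-from, so the manual part would boil down to maintaining one list per disk. A minimal sketch of what I mean (the paths are made up, just to illustrate):

```python
import subprocess

# Made-up paths, only to illustrate the idea.
SOURCE = "/srv/nextcloud/data"   # data to back up
FILE_LIST = "drive1.list"        # paths relative to SOURCE, one per line
DEST = "/mnt/backup-drive-1"     # mount point of one backup disk

# --files-from makes rsync copy only the listed paths; it implies --relative,
# so the directory layout under SOURCE is preserved on the destination.
subprocess.run(
    ["rsync", "-a", f"--files-from={FILE_LIST}", f"{SOURCE}/", f"{DEST}/"],
    check=True,
)
```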
Using a virtual, larger file system spanning multiple drives looks like the easiest option, but I doubt it’s a good idea for a reliable backup - if one disk fails, all of the backup is gone. On the other hand, that’s true for the distributed chunks as well: whatever sits on the failed disk is lost either way.
What do you people do? Right now I don’t have to bother since my data easily fits on a single device, but I wonder what the best solution is, in theory and in practice.
Regards!
Ok, but if I have to back up, say, 5 TByte worth of data, I’d have to plug in several disks and reinstantiate my pool. ^^
What I am rather looking for is a script or something that splits the backup files into folders of, let’s say, 3 TByte each, which I can then rsync to different drives. By the looks of it, I’ll have to write that myself. That’s not an impossible task, but I wonder how well that works together with duplicity/borg.
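What I have in mind is roughly something like the sketch below: a greedy splitter that assigns top-level folders to size-capped groups and writes one file list per drive. The source path and the 3 TB limit are placeholders, and it only splits at top-level-folder granularity, so a single folder bigger than the limit still won’t fit on one drive.

```python
#!/usr/bin/env python3
"""Greedy splitter: assign top-level folders to size-capped groups,
one group per backup drive. SOURCE and LIMIT are placeholders."""
import os
from pathlib import Path

SOURCE = Path("/srv/nextcloud/data")   # tree to back up (placeholder)
LIMIT = 3 * 10**12                     # ~3 TB per drive, in bytes (placeholder)

def tree_size(path: Path) -> int:
    """Total size of all regular files below path (symlinks skipped)."""
    if path.is_file():
        return path.stat().st_size
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Measure every top-level entry once, then place largest-first into the
# first group that still has room (first-fit-decreasing bin packing).
entries = sorted(((tree_size(p), p) for p in SOURCE.iterdir()),
                 key=lambda t: t[0], reverse=True)

groups = []   # list of [used_bytes, [paths]] - one group per drive
for size, path in entries:
    for group in groups:
        if group[0] + size <= LIMIT:
            group[0] += size
            group[1].append(path)
            break
    else:
        groups.append([size, [path]])

# One file list per drive, usable as input for rsync --files-from.
for i, (used, members) in enumerate(groups, start=1):
    Path(f"drive{i}.list").write_text("\n".join(m.name for m in members) + "\n")
    print(f"drive{i}.list: {used / 1e12:.2f} TB in {len(members)} top-level entries")
```

Each of those lists could then be rsynced to its own disk with --files-from as above; what the sketch doesn’t handle is remembering which list ended up on which physical drive.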