Will talk excessively about metal guitar and functional programming if not stopped

Also visit me on Mastodon

  • 2 Posts
  • 6 Comments
Joined 3Y ago
Cake day: Nov 14, 2021


I thought of something similar, but again, that doesn’t save me from having to plug in the disks one by one. I don’t know what I expected though, because you can’t make a hard drive suddenly beep or turn a light on. ^^


Ok, but if I have to back up, say, 5 TByte worth of data, I’d have to plug in several disks and reinstantiate my pool. ^^

What I am rather looking for is a script or something that would split the backup files into folders of, let’s say, 3 TByte each, which I can rsync to different drives. By the looks of it, I’ll have to write that myself. That’s not an impossible task, but I wonder how well that works with duplicity/borg.
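Something along these lines is what I have in mind - just a rough first-fit sketch, where the source path, the 3 TByte limit and the `bucket-*.list` names are placeholders I made up:

```python
#!/usr/bin/env python3
"""Rough sketch: split a backup tree into buckets of at most MAX_BYTES,
writing one file list per bucket that can be fed to `rsync --files-from`.
Paths and the 3 TByte limit are placeholders."""
import os

SOURCE = "/srv/nextcloud/data"   # hypothetical source directory
MAX_BYTES = 3 * 10**12           # ~3 TByte per target drive

def iter_files(root):
    """Yield (relative path, size) for every regular file under root."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            yield os.path.relpath(full, root), os.path.getsize(full)

buckets = [[]]   # one list of relative paths per target drive
used = [0]       # bytes already assigned to each bucket

for rel, size in iter_files(SOURCE):
    # first-fit: drop the file into the first bucket with enough room left
    for i in range(len(buckets)):
        if used[i] + size <= MAX_BYTES:
            buckets[i].append(rel)
            used[i] += size
            break
    else:
        buckets.append([rel])
        used.append(size)

for i, files in enumerate(buckets):
    with open(f"bucket-{i}.list", "w") as fh:
        fh.write("\n".join(files) + "\n")
    print(f"bucket {i}: {used[i] / 1e12:.2f} TByte, {len(files)} files")

# afterwards, one drive per bucket (target mount is a placeholder):
#   rsync -a --files-from=bucket-0.list /srv/nextcloud/data/ /mnt/backup0/
```

First-fit isn’t an optimal packing, but for “does it roughly fit on one drive” it should be good enough; whether duplicity/borg play along with per-bucket repositories is exactly the part I’m unsure about.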



That’s precisely what I’m planning to do. Sadly, the disk ids are not printed on the outside. 😉


How do you identify individual disks in larger arrays?
Today the disks for my new ZFS NAS arrived, rejoice! 😍 Now I ask myself: If some day one of the drives fails, how am I supposed to know which of the physical ones it is? My preliminary plan is to plug them into the disk container one by one, writing down the newly appearing blkids and labeling the corresponding drive. This is somewhat time consuming, so do you folks have a better idea? Cheers!
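For reference, a minimal sketch of the “write down the ids” step, assuming the links under /dev/disk/by-id carry the serial numbers (they usually do):

```python
#!/usr/bin/env python3
"""Minimal sketch: dump the /dev/disk/by-id entries together with the
device node they point to, so the serial in the id can be copied onto
the label of the matching physical drive."""
import os

BY_ID = "/dev/disk/by-id"

for entry in sorted(os.listdir(BY_ID)):
    # skip partition links, only whole disks are interesting here
    if "-part" in entry:
        continue
    link = os.path.join(BY_ID, entry)
    target = os.path.realpath(link)   # e.g. /dev/sda
    print(f"{target}  <-  {entry}")
```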

I may have forgotten to point out that my server’s storage is my NAS. ;)


Backing up (kind of) large amounts of data
Among other things, I'm running a small Nextcloud instance on my home server, and over time data somewhat piles up (especially photos). My main storage is sufficiently sized & redundant for now, but I wonder how I am supposed to do a serious backup: The moment will come when the backup's size exceeds any reasonably priced single drive. What then?

Of course I can just buy another disk and distribute the chunks, but that's manual work - or is it? At least `rsync` has no builtin option for that. Using a virtual, larger file system spanning multiple drives looks like the easiest option, but I doubt it's a good idea for a reliable backup - if one disk fails, all of the backup is gone. On the other hand: that's true for the distributed chunks as well.

What do you people do? Right now I don't have to bother, as my data easily fits on a single device, but I wonder what the best solution is in theory & practice. Regards!

I’d say one of the primary reasons is compatibility. There is a lot of software in the Unix world that expects an environment that behaves similarly to bash - imagine, for instance, the bazillions of startup scripts that exist around certain tools. You’d have to be 100% backwards compatible with the bash language if you were to invent something to replace it, otherwise all those things wouldn’t work in your shell.