Today the disks for my new ZFS NAS arrived, rejoice! 😍
Now I ask myself: if one of the drives fails some day, how am I supposed to know which physical drive it is? My preliminary plan is to plug them into the disk enclosure one by one, writing down the newly appearing blkids and labeling the corresponding drive. This is somewhat time consuming, so do you folks have a better idea?
Cheers!
One super-easy way to identify disks on the fly is just to do a `cat </dev/sdX >/dev/null` and see which disk activity light stays on.

What I do is figure out which names in `/dev/disk/by-path` correspond to which disks. The by-path names are stable, even if you replace the disks (as long as the cabling doesn't change). Then I set up aliases in `/etc/zfs/vdev_id.conf` to give the disks names that correspond to the external labels on the enclosure.

For example, the disk `/dev/disk/by-path/pci-0000:06:08.0-sas-0x5842b2b2167fc188-lun-0` might be the disk in slot zero in the array I've designated as "array0", so `/etc/zfs/vdev_id.conf` would get an `alias` entry mapping that path to a matching name.

Then I create the pool with the `/dev/disk/by-vdev` names so I can tell immediately what each disk is. (If you've already created the pool, you can export it and then use `zpool import -d /dev/disk/by-vdev` to switch to the vdev names.)

In theory, you can use some other settings in `/etc/zfs/vdev_id.conf` to get the system to enumerate the disks itself, rather than working out the aliases by hand. In my case, my enclosures don't have stable numbering that the automatic settings can work with.

A rather more sophisticated way to identify a disk, if it's in an enclosure that has ID LEDs, is to use `sg_ses`.
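As a sketch of what the manual aliases look like (the `array0-N` naming is just my own convention, and only the first by-path string here is real):

```
# /etc/zfs/vdev_id.conf — one alias line per physical slot.
# Name format "array0-0" = enclosure "array0", slot 0 (my convention, not required).
alias array0-0 /dev/disk/by-path/pci-0000:06:08.0-sas-0x5842b2b2167fc188-lun-0
alias array0-1 /dev/disk/by-path/...   # and so on, one line per slot
```

After editing the file, retrigger udev (`udevadm trigger`, or just reboot) and the alias names should show up under `/dev/disk/by-vdev/`, ready for `zpool create` or `zpool import -d`.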
The rough process for that is:
1. `lsscsi -g` to get the generic SCSI device (`/dev/sgN`) for the enclosure.
2. `lsscsi -t` to get the SAS address for a disk. (Not sure whether this will work if it's a SATA enclosure; all of mine are SAS.)
3. `sg_ses -p aes /dev/sgN | less`, where `/dev/sgN` is the enclosure's generic SCSI device. Look through the output to find the SAS address and, from that, get the index number of the disk.
4. `sg_ses --set ident --index I /dev/sgN`, where `I` is the disk index number and `/dev/sgN` is the enclosure's device. This will turn on the ID LED for the disk.
5. `sg_ses --clear ident --index I /dev/sgN` to turn the LED off.

You can also use `fault` instead of `ident` to turn on the "drive fault" LED, in case the enclosure has those but not ID LEDs.
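Putting those steps together, a session might look roughly like this (the device node `/dev/sg5` and index `3` are made up for illustration; your `lsscsi`/`sg_ses` output will differ):

```shell
# 1. Find the enclosure's generic SCSI device (look for type "enclosu")
lsscsi -g
# 2. Find the SAS address of the disk you care about
lsscsi -t
# 3. Page through the enclosure's element status, find that SAS
#    address, and note the element index it appears under
sg_ses -p aes /dev/sg5 | less
# 4. Light the ID LED on that slot (index 3 here, purely illustrative)
sg_ses --set ident --index 3 /dev/sg5
# ...go label or pull the blinking disk...
# 5. Turn the LED back off
sg_ses --clear ident --index 3 /dev/sg5
```

This only works against real enclosure hardware, so treat it as a walkthrough of the steps above rather than something to paste verbatim.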