One super-easy way to identify disks on the fly is just to do a cat </dev/sdX >/dev/null and see which disk activity light stays on.
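For instance, to find the physical disk behind /dev/sdb (a placeholder name; substitute whichever device you’re hunting for), run something like the following as root. The timeout is just so it doesn’t sit there reading the entire disk:

    # Read from the disk and discard the data; its activity LED should
    # stay lit while this runs.  timeout(1) kills it after 30 seconds.
    timeout 30 cat </dev/sdb >/dev/null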
What I do is figure out which names in /dev/disk/by-path correspond to which disks. The by-path names are stable, even if you replace the disks (as long as the cabling doesn’t change). Then I set up aliases in /etc/zfs/vdev_id.conf to give the disks names that correspond to the external labels on the enclosure.
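Listing that directory shows which kernel device each stable path currently points at; the output below is illustrative, and your path names will differ:

    # Each by-path entry is a symlink to the current sdX name of the
    # disk on that controller/port.
    ls -l /dev/disk/by-path/
    # pci-0000:06:08.0-sas-0x5842b2b2167fc188-lun-0 -> ../../sdc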
For example, the disk /dev/disk/by-path/pci-0000:06:08.0-sas-0x5842b2b2167fc188-lun-0 might be the disk in slot zero of the array I’ve designated as “array0”. So /etc/zfs/vdev_id.conf would have:

    alias array0-0 pci-0000:06:08.0-sas-0x5842b2b2167fc188-lun-0
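A fuller version of that file might look like the sketch below (the second and third SAS addresses are invented for illustration). After editing the file, something like udevadm trigger should regenerate the /dev/disk/by-vdev links:

    # /etc/zfs/vdev_id.conf -- one alias per enclosure slot
    alias array0-0 pci-0000:06:08.0-sas-0x5842b2b2167fc188-lun-0
    alias array0-1 pci-0000:06:08.0-sas-0x5842b2b2167fc189-lun-0
    alias array0-2 pci-0000:06:08.0-sas-0x5842b2b2167fc18a-lun-0

    udevadm trigger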
Then I create the pool with the /dev/disk/by-vdev names so I can tell immediately what each disk is. (If you’ve already created the pool, you can export it and then use zpool import -d /dev/disk/by-vdev to switch to the vdev names.)
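Concretely, with the aliases above, creating or converting a pool might look like this (the pool name and layout are just for illustration):

    # Create a new pool using the by-vdev names:
    zpool create tank mirror array0-0 array0-1

    # Or switch an existing pool over to them:
    zpool export tank
    zpool import -d /dev/disk/by-vdev tank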
In theory, you can use some other settings in /etc/zfs/vdev_id.conf to get the system to enumerate the disks itself, rather than working out the aliases by hand. In my case, my enclosures don’t have stable numbering that the automatic settings can work with.
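For reference, the automatic style from the vdev_id.conf man page looks roughly like this; the PCI address and channel names here are invented, and it only helps if the HBA and enclosure report stable port and slot numbers:

    # Name disks by HBA channel and enclosure bay instead of
    # spelling out an alias for each one.
    multipath     no
    topology      sas_direct
    phys_per_port 4
    # channel <pci_slot> <port> <channel name prefix>
    channel 85:00.0 1 A
    channel 85:00.0 0 B
    slot    bay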
A rather more sophisticated way to identify a disk, if it’s in an enclosure that has ID LEDs, is to use sg_ses. The rough process for that is:

1. lsscsi -g to get the generic SCSI device (/dev/sgN) for the enclosure.
2. lsscsi -t to get the SAS address for a disk. (Not sure whether this will work if it’s a SATA enclosure; all of mine are SAS.)
3. sg_ses -p aes /dev/sgN | less, where /dev/sgN is the enclosure’s generic SCSI device. Look through the output to find the SAS address and, from that, get the index number of the disk.
4. sg_ses --set ident --index I /dev/sgN, where I is the disk index number and /dev/sgN is the enclosure’s device. This will turn on the ID LED for the disk.
5. sg_ses --clear ident --index I /dev/sgN to turn the LED off.

You can also use fault instead of ident to turn on the “drive fault” LED, in case the enclosure has those but not ID LEDs.
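Putting those steps together, a session might look roughly like this; every device name, SAS address, and index number below is made up:

    lsscsi -g
    # [0:0:8:0]  enclosu  LSI  SAS2X36  ...  -  /dev/sg9
    lsscsi -t
    # [0:0:3:0]  disk  sas:0x5000c500a1b2c3d4  /dev/sdd
    sg_ses -p aes /dev/sg9 | less
    # ...look for "SAS address: 0x5000c500a1b2c3d4" and note the
    # element index it appears under (say, 5)...
    sg_ses --set ident --index 5 /dev/sg9     # ID LED on
    sg_ses --clear ident --index 5 /dev/sg9   # ID LED off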