Inside my home server were three hard drives combined into a RAID 5. On top of this large volume I then created three logical volumes with LVM (the Logical Volume Manager). As time went by, space was running out on the large storage volume, so I bought a new hard drive. Here are the required steps; maybe they help others (and a future me) who want to do the same.
Note: All commands must be run as the superuser, so either log in as root or prefix them with sudo.
Another note: Everything is done online with the file system mounted, but it is slower than doing it with the file system unmounted.
Add hard drive to RAID
First, add the hard drive to the RAID and let it reshape. For my 3 TB disk the reshaping took roughly two days, so be patient. It runs in the background, though.
$ mdadm --add /dev/md0 /dev/sde1
$ mdadm --grow --raid-devices=4 /dev/md0
You can check the progress with
$ mdadm --detail /dev/md0
[...]
    Update Time : Sun Jun 23 21:35:02 2013
          State : clean, reshaping
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
 Reshape Status : 56% complete
  Delta Devices : 1, (3->4)
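If you only need a quick look at the progress, the kernel also exposes the reshape status in /proc/mdstat. A minimal sketch (the refresh interval is arbitrary):

```shell
# Show the current state of all md arrays, including a progress
# bar and an ETA for the running reshape
cat /proc/mdstat

# Or refresh the view every five seconds until the reshape is done
watch -n 5 cat /proc/mdstat
```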
A step I forgot at first, which led to errors during booting: adding the new hard disk to the RAID configuration file. Open /etc/mdadm/mdadm.conf with an editor of your choice.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sd[bcd]1
[...]
In the DEVICE line, add the new disk (sde1):

DEVICE /dev/sd[bcde]1
Now it should also work after rebooting.
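Instead of editing the DEVICE line by hand, the array definition can also be regenerated from the running system. A sketch, assuming a Debian-style setup (like mine, with the config under /etc/mdadm/) where the initramfs caches a copy of mdadm.conf:

```shell
# Append the definition of the currently running arrays
# to the configuration file
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# On Debian/Ubuntu, rebuild the initramfs so the boot
# environment picks up the new configuration as well
update-initramfs -u
```

Afterwards, check the file and remove any duplicate ARRAY lines.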
Enlarge the physical volume on the RAID
We now have a larger virtual hard drive in the RAID array. Instead of just applying a file system to it, I first added another layer of structure, the LVM. So we have to tell the physical volume (comparable to a hard drive) that more space is available now (pvresize):
$ pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/md0   raid lvm2 a-   5.46t    0
$ pvresize -v /dev/md0
    Using physical volume(s) on command line
    Archiving volume group "raid" metadata (seqno 6).
    Resizing physical volume /dev/md0 from 1430729 to 2146094 extents.
    Resizing volume "/dev/md0" to 17580802048 sectors.
    Updating physical volume "/dev/md0"
    Creating volume group backup "/etc/lvm/backup/raid" (seqno 7).
  Physical volume "/dev/md0" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
$ pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/md0   raid lvm2 a-   8.19t 2.73t
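To double-check where the new space ended up, the volume group summary is handy; a quick sketch (my volume group is called "raid"):

```shell
# Show size and free extents of the volume group;
# VFree should now list the newly gained space
vgs raid

# Or, more verbose:
vgdisplay raid
```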
Enlarge the logical volume
Now the LVM knows that more space is available (like having a larger hard disk). The next step is to enlarge the partition on this virtual disk, the logical volume. The procedure is pretty much the same:
$ lvs
  LV           VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  backup_user1 raid -wi-ao 500.00g
  backup_user2 raid -wi-ao 250.00g
  storage      raid -wi-ao   4.73t
$ lvresize --extents +100%FREE /dev/raid/storage
  Extending logical volume storage to 7.45 TiB
  Logical volume storage successfully resized
$ lvs
  LV           VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  backup_user1 raid -wi-ao 500.00g
  backup_user2 raid -wi-ao 250.00g
  storage      raid -wi-ao   7.45t
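As a side note, lvresize can also grow the file system in the same step via its --resizefs flag, which saves the separate resize2fs call below. A sketch; check lvresize(8) on your system before relying on it:

```shell
# Grow the logical volume by all free extents in the volume
# group and resize the file system on it in one go
lvresize --extents +100%FREE --resizefs /dev/raid/storage
```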
Enlarge the file system
Last part in the Inception-like volume chain: the file system (ext4 in my case). Resizing it takes a while (similar to the growing of the RAID), but it doesn't run in the background. So if you are not working on the PC directly, use a screen session.
$ df -h | grep storage
/dev/mapper/raid-storage  4.7T  4.6T  118G  98% /srv/storage
$ resize2fs /dev/raid/storage
$ df -h | grep storage
/dev/mapper/raid-storage  7,4T  4,6T  2,9T  62% /srv/storage
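For the screen session mentioned above, a minimal sketch (the session name "resize" is arbitrary):

```shell
# Start a named screen session and run the resize inside it
screen -S resize
resize2fs /dev/raid/storage

# Detach with Ctrl-a d; the resize keeps running.
# Reattach later to check on it:
screen -r resize
```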
After doing this you should check the file system for errors. I believe you are asked to do so after the resize process, but unfortunately I forgot to look and just powered off the PC. To start a check manually, run
$ fsck.ext4 -pf /dev/mapper/raid-storage
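One caveat: fsck should not be run on a mounted file system, so unmount the volume first. A sketch (the mount point is my /srv/storage; remounting by mount point assumes an fstab entry):

```shell
# Unmount the volume -- checking a mounted ext4
# file system can corrupt it
umount /srv/storage

# -p: automatically fix what is safe to fix, -f: force a
# check even if the file system looks clean
fsck.ext4 -pf /dev/mapper/raid-storage

# Mount it again afterwards
mount /srv/storage
```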