Channel: Linux Device Hacking

[tutorial] Making a RAID1 rootfs (3 replies)

This works fine on my NSA325v2 with bodhi's 2015 u-boot (the latest at the time of writing), but it should also work with anything else, as this is mostly a Debian-side configuration.

My setup leaves u-boot untouched because it relies on disk labels: I label the RAID1 array "rootfs", so u-boot does not need to be altered in any way. If you use the automagic "search-and-boot" configuration bodhi supplies, it will be fine.

It should THEORETICALLY also work with stock u-boot, but you will have to use ext2 or ext3 instead of ext4, or make a separate RAID boot partition.

Start by booting a SINGLE disk as normal from bodhi's kernel/rootfs thread, with the second disk partitioned the same as the first, but empty.

In this example, /dev/sda is the disk we are booting from and /dev/sdb is the partitioned disk we are preparing.
/dev/sda1 and /dev/sdb1 are the rootfs partitions.

Installing the needed tools (run as root or with sudo):

apt-get install mdadm rsync initramfs-tools
mdadm will pop up some configuration dialogs while you install it and also print some warnings. That's OK; we will set it up and rerun its configuration dialogs later.

Good, now we start by creating a RAID1 array with only the second hard drive (the empty one).

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb1

Note the "missing": it tells mdadm that this is a "degraded" array with only one of its two drives present.

Note the --metadata option. It limits the array to a maximum size of 2 TB, BUT it lets bootloaders read and boot from the partition even if they don't understand RAID (the 0.90 superblock lives at the end of the partition, so the filesystem looks like a plain partition to the bootloader).
u-boot does not understand RAID, so we need this.
Since rootfs isn't likely going to be more than a dozen GBs, the max size isn't an issue.

The data RAID will be created with a similar command without the --metadata option, so it will not have any size limitation and won't be readable by u-boot.
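For reference, such a data array would look something like this (a sketch only; /dev/sdb2 and the "data" label are assumed names for your data partition and filesystem, adjust to your layout):

```shell
# Hypothetical data array: default metadata, no 2 TB limit,
# not readable by u-boot -- fine, since u-boot only needs the rootfs.
# Same degraded-then-add approach as the rootfs array.
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mkfs.ext4 -L data /dev/md1
```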

In my setup I use SnapRaid, so I don't need a data RAID.

I'm doing without a swap partition, as I use a swapfile on the rootfs partition.
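For completeness, setting up such a swapfile goes roughly like this (a sketch; the 512 MB size and the /swapfile path are my assumptions, pick your own):

```shell
# Create a 512 MB swapfile on the rootfs (run as root).
dd if=/dev/zero of=/swapfile bs=1M count=512
chmod 600 /swapfile
# Write the swap signature and activate it.
mkswap /swapfile
swapon /swapfile
# Make it permanent across reboots.
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```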

Format the array as ext4 and give it the "rootfs" label:
mkfs.ext4 -L rootfs /dev/md0

Open the mdadm configuration file:
nano /etc/mdadm/mdadm.conf

And in the DEVICE section add:
DEVICE /dev/sd?*
This tells mdadm to scan all drives/partitions for RAID signatures.

Save and close the file, then append the last line of that config file (the RAID signature of the arrays running at the moment) like pros do:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Feel free to open the same file again with the command above and check that there is now a line looking like this:
ARRAY /dev/md0 metadata=0.90 UUID=66a8c96d:ac6a5da3:9d4deba6:47ca997f
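You can also check that the kernel sees the running (degraded) array:

```shell
# The md driver exports the state of all arrays here; a degraded RAID1
# shows up roughly as "md0 : active raid1 sdb1[1]" with "[2/1] [_U]"
# marking the missing first disk.
cat /proc/mdstat
```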

Now we need to configure mdadm to start the array from inside the initramfs; if it isn't started there, the kernel cannot find the rootfs to boot from.

dpkg-reconfigure mdadm

The settings are self-explanatory; I left "all" in the first one, anyway.

You will see a line saying that the initramfs is being updated; mdadm and its configuration are getting added to it.

In case you have more than one initramfs, or you want to trigger a rebuild manually, run:

update-initramfs -u -k all
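To double-check that mdadm and its config actually made it into the image, initramfs-tools ships lsinitramfs (the initrd file name below is an assumption based on the running kernel; use the one in your /boot):

```shell
# List the initramfs contents and look for the mdadm bits.
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm
```

You should see the mdadm binary and /etc/mdadm/mdadm.conf in the output.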


This is cool, but we boot with u-boot, which works with uImage and uInitrd, not directly with the kernel and initramfs.

So we need to rebuild them. Let's go into the /boot folder and see what we have in there:

cd /boot && ls

Now we rebuild uImage. Please adjust the file names to the ones in your /boot folder (version numbers will probably differ):
mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n Linux-3.18.5-kirkwood-tld-1 -d vmlinuz-3.18.5-kirkwood-tld-1 uImage

Rebuilding uInitrd, same as above; change the names according to the ones you have:
mkimage -A arm -O linux -T ramdisk -C gzip -a 0x00000000 -e 0x00000000 -n initramfs-3.18.5-kirkwood-tld-1 -d initrd.img-3.18.5-kirkwood-tld-1 uInitrd

Then we mount the array and clone the rootfs from the booted partition:
mkdir /tmp/mnt
mount /dev/md0 /tmp/mnt
rsync -auHxv --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* /* /tmp/mnt
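If you want to preview the copy before writing anything, the same rsync command with -n (--dry-run) added only lists what would be transferred:

```shell
# Dry run: identical to the real command, plus -n so nothing is copied.
rsync -auHxvn --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* /* /tmp/mnt
```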

Open the fstab of the new drive and adjust it:

nano /tmp/mnt/etc/fstab

My fstab has this line for the root filesystem:

/dev/md0      /               auto   noatime,errors=remount-ro 0 1

Save and close.

Now power down your device:
poweroff

Disconnect the drive we booted from, leave only the drive we just prepared.

Power up the device, and see what happens.

You should see mdadm coming up and initializing the RAID array a bit after second 3 of the kernel boot, and booting will continue just fine until login.

Now log in as normal and run:
mount

This is my output. See the line? That's the root filesystem mounted on the RAID array:
root@debian:/boot# mount
-----removed stuff------------
/dev/md0 on / type ext4 (rw,noatime,errors=remount-ro,data=ordered)
-----other removed stuff-------

Now connect the first drive, check that it is there, and simply add it to the array.

Since the device had only one drive when it booted, the newly connected drive will be /dev/sdb again (so its rootfs partition is /dev/sdb1), while the drive we prepared is now /dev/sda, and /dev/md0 is using /dev/sda1.

WARNING: DATA IN PARTITION /dev/sdb1 WILL BE ERASED. If you screw up the command and ask it to add /dev/sda1, it will just error out, so that's not an issue.

mdadm --add /dev/md0 /dev/sdb1

Nice. Now let it settle a bit; check the rebuilding progress with:

mdadm --detail /dev/md0
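/proc/mdstat shows the same thing with a progress bar; rerun it now and then until the resync finishes:

```shell
# While the mirror rebuilds you'll see something like:
#   [=>...................]  recovery =  8.2% (...)
cat /proc/mdstat
```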


And finally, a link to a useful cheat sheet with the most common mdadm commands: http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/
