FastTrak
Converting a single disk IDE system to a mirrored RAID 1 array.
IDE RAID controller: Promise 202376 MBFastTrak133 Lite on a Gigabyte GA-7VX motherboard
Introduction
After a disk failure of my aged server, I wanted more data security in the future. I had backups, but not of everything, and they would be fiddly to restore. A RAID 1 arrangement would not only survive an entire disk being covered with olive oil, but would also boost read performance. I understand there is a slight concomitant penalty for disk writes, since the IDE controller will only consider a write complete when both drives have written the data. This is one reason why an identical pair of drives is considered ideal. Posh setups physically synchronise the rotation of the drives, so the write time is guaranteed to be the same no matter how many drives are involved. These issues only really matter when there are numerous drives in the array; for a two-disk array, I'll be happy with the data security.
First, I backed up the system. No, really. This is bound to go wrong.
Secondly, I read the instructions very carefully. These were in the booklet distributed with the motherboard. They said that with a Windows installation, the drivers should be installed on it before doing anything else. My family occasionally uses Windows, so I obeyed.
My BIOS allowed the RAID function to be switched off, and initially it was. When it is enabled, it takes about 30 seconds to scan the two IDE ports before offering the configuration page via Ctrl-F.
Both drives should be jumpered as MASTER and have their own IDE cable. This doesn't have to, but probably should, correspond to the connector at the far end of the IDE cable, not the 40-pin connector a few inches down, which should generally be used for a SLAVE device. I don't know whether this controller would be able to handle four drives, say, with a master and slave on each IDE cable. Anyway, I'd need a new power supply if I wanted to try that, as I even had to unplug the DVD drive to get power for the new little fellow.
My first problem is that my second drive is only very close to the same spec, being from a different manufacturer. Both are around 80GB and have the same speeds, according to the documentation. I do have an identical pair, but I would need a DVD writer or a lot of patience to back the odd one up. The beauty of RAID 1 is that I can make the switch in the future, simply by removing the odd drive and using the RAID BIOS to mirror the data onto the new one.
So, in the meantime, flouting the first recommendation of having two matching drives, I pressed on. The first concern was that I would not be able to distinguish the drives, and would therefore be in danger of mirroring a blank drive over my data. I plugged the drives into the RAID IDE slots, marked IDE3 and IDE4, with the original data drive on IDE3. I then booted and hoped for the best.
I was frightened to see that, after the boot-time detection process, it thought I had just one drive of a striped array. It then told me it was rebooting, and in a flurry the screen cleared. Well, I hoped it hadn't written anything to my disk. It hadn't, and the second time around it detected both drives, but still thought there was a striped array. I 'deleted' this array (not the data) and was able to use Auto Configure - Security to set up the mirror. I asked for data to be moved from the source to the target drive - distinguishable by manufacturer ID as well as by IDE slot position (1 and 2 of the RAID pair). Fatal error - array state critical - press any key to reboot. Not what I wanted to hear. This had been accompanied by the sound of a drive spinning up, so I thought that maybe the controller hadn't anticipated me taking so long, the drive had powered down, and the controller 'forgot' to repower it before trying to write.
It was rebooting anyway, so I crossed my fingers, safe in the knowledge that my backups would take days to restore. I deleted and recreated the array again, this time quickly, and commenced the write process, which seemed to work. It is not very rapid: I'm not sure why it should take over an hour to copy 80GB of data, but presumably this is done with the utmost care.
The next step is to get Debian Linux to boot. I have a Knoppix CD handy, anticipating a problem with no longer having /dev/hda, and instead /dev/hde or something more exotic as the device. As I booted Knoppix, it occurred to me that the recent
mount /dev/ataraid/d0p5 /mnt/system
mount /dev/ataraid/d0p6 /mnt/system/usr
... etcetera
mount --bind /proc /mnt/system/proc
then
chroot /mnt/system
vi /etc/lilo.conf
I added
disk=/dev/hde
    inaccessible
disk=/dev/hdg
    inaccessible
to instruct lilo to ignore the independent drives, and so treat them as a single device.
/sbin/lilo
or
chroot /mnt/system /sbin/lilo
or
/sbin/lilo -C /mnt/system/etc/lilo.conf -r /mnt/system
First, I tried to boot Linux, which failed with a kernel panic. The actual error was:
VFS: Cannot open root device ataraid/d0p5 on unknown-block(0,0)
Some searching on the internet gave me clues suggesting that this was a fairly fundamental problem: the /dev/ataraid device was simply not known to the boot loader. I went back into Knoppix and remounted my system as above. Instead of changing directory to the place where I was going to chroot and typing 'chroot .' as I had done before, I ran lilo directly with chroot, thinking there may have been some issue with chrooting to the current directory which left the devices not present and correct. For good measure, I also ran
chroot /mnt/system /sbin/lilo -M /dev/ataraid/d0
to ensure the master boot record was stored on the array device. This probably needs to change, as a different IDE device is now the boot device. I'm not sure exactly what's inside the MBR, but I imagine it should change.
Reboot:
VFS: Cannot open root device "7205" on unknown-block(114,5)
At least this time it seems that the device is registered: the "7205" looks like the device number in hex (0x72 = 114, 0x05 = 5), which matches the major and minor numbers 114,5 in the brackets (if that is what they are!).
Windows, I'm embarrassed to report, booted fine. After much playing with different kernel compilations (using different settings for 'do off-board controllers first', etc.), I took a look at the actual kernel code for the controller. I only see one very irrelevant use of the word RAID, so I suspect that the driver only supports normal IDE, and not the RAID feature. Therefore, I'm backtracking to use a 2.4 kernel with the Promise Linux drivers available from their site (http://www.promise.com/support/download/download2_eng.asp?productId=8&category=driver&os=4)
This article made me bang my head on the wall: http://www.linuxquestions.org/questions/archive/18/2004/02/1/122328
After more detailed reading, it transpires that the Promise controller is in fact not much more than a disk BIOS. It presents two IDE disks, and it is then up to the kernel software to manage these. What is the difference between this and a normal pair of IDE devices? I can't even see how I could use the BIOS disk facilities, because the 'md' software RAID tools will take charge. I'm hoping that I don't have to format new partitions on one disk, make a degraded array out of that one disk, copy over from the original disk by hand, and then finally add the original disk to the array (sketched below for reference). I have already used the disk BIOS to copy everything to the second disk.
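For the record, that degraded-array route would look roughly like this. This is only a sketch, untested by me at this point; I'm assuming hde5 holds the existing data, hdg5 is the matching partition on the spare disk, the filesystem is ext3, and /mnt/old and /mnt/new are just scratch mount points:

mkdir -p /mnt/old /mnt/new
mdadm --create /dev/md5 --level=raid1 -n2 missing /dev/hdg5   # degraded array on the spare partition only
mkfs.ext3 /dev/md5                                            # fresh filesystem on the array
mount /dev/hde5 /mnt/old
mount /dev/md5 /mnt/new
cp -ax /mnt/old/. /mnt/new/                                   # copy the data across by hand
umount /mnt/old
mdadm /dev/md5 --add /dev/hde5                                # only now let the original disk be mirrored over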
One point: at least this way, I can use two swap areas, which the kernel uses intelligently. Swap does not hold data across reboots, so the mirroring requirement is no longer there.
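The two swap partitions then just get their own fstab lines; giving them equal priority should, as I understand it, make the kernel stripe across them. The partition numbers below are placeholders rather than my actual layout:

/dev/hde2   none   swap   sw,pri=1   0   0
/dev/hdg2   none   swap   sw,pri=1   0   0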
Using Knoppix (and distcc) I recompiled my 2.6.4 kernel with the RAID functions compiled in. This is necessary so the kernel can mount the filesystem as a RAID array at boot time. I don't use the initrd system, which seems like a waste of time and only useful for stock kernels. If you compile your own, you almost certainly don't need it; the exception is if you require an external module at boot time.
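For reference, the relevant options (found under Device Drivers -> Multi-device support (RAID and LVM)) need to be built in, not as modules:

CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y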
Using Knoppix, I began creating the arrays. I already had /dev/md* in my real root, as well as in the Knoppix root file system. Confusingly, ataraid is still autodetected, as well as the partitions on each of the individual disks. I ignored this and waded straight into mdadm, the software RAID admin tool (in the Debian repositories): apt-get install mdadm.
mdadm -v --create /dev/md5 --level=raid1 -n2 missing /dev/hdg5
I want the md* numbers to match my previous partition numbering scheme - I don't think this is an issue, although some people use md0 for root and then count upwards, regardless of the individual partition numbers. Anyway, this didn't seem to do much. Indeed, cat /proc/devices didn't even show that md was active. I thought a reboot into Knoppix was required, which I did, then I realised I should have modprobed the appropriate modules. Not sure why, but Knoppix was having trouble using modprobe, so I went to the actual modules in /lib/modules/2.4.21-xfs/devices.... and then
insmod md.o
insmod raid1.o
insmod lvm-mod.o (for good measure - don't think this is required, though)
I then re-executed the mdadm command above, and this time, the array was activated. However,
mdadm -Q /dev/hdg
reported that it was not an md array, which I rather hoped it would have become. However,
mdadm -Q /dev/hdg5
did report that it was device 1 of 2 using RAID 1.
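Had I wanted more detail, I believe these two would show the md superblock stored on the component partition and the state of the assembled array respectively:

mdadm -E /dev/hdg5    # examine the RAID superblock written on the partition itself
mdadm -D /dev/md5     # detailed report on the assembled array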
I then thought I would boot into the degraded array to check it worked, before mirroring it back, but I changed my mind. I instead thought I would assign both the drives straight into the array and go for it. I was slightly concerned that I had written to one partition and not the other (small changes in /etc/fstab and lilo.conf), but I unmounted everything then issued the mdadm command with both partitions mentioned.
mdadm -v --create /dev/md5 --level=raid1 -n2 /dev/hde5 /dev/hdg5
It failed, but after rebooting Knoppix it ran successfully. Immediately, the raid1d daemon began syncing the two disks - what exactly it is doing I don't know. Checking everything, I hope, and making few changes. I presume there is a master disk and any differences between it and the other are overwritten from the master. However, does the system know which is my master? It was referring to disk 1 as being hdg before, but was this because I had omitted the other disk when creating the degraded array? If I hadn't made a change to fstab and lilo.conf, would it have accepted the array without having to check every bit?
Anyway, the computer became very unresponsive and a
cat /proc/mdstat
showed a very slow rate of progress, while 'top' showed 100% system (i.e. IO wait)
I devilishly stuck in some hdparm values to speed things up
hdparm -c1 -d1 -u1 /dev/hde
and the same for the other drive. Changing the multcount didn't help. NB: this is extremely high risk, but it brought the scan down from about 140 minutes to 5 minutes. It is also much faster than the Promise BIOS disk duplication, which presumably plays it safe and avoids using funky disk settings.
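A less hair-raising knob, which I note here but didn't try, is md's own resync speed limit in /proc; raising the minimum is supposed to make the resync run faster even when the machine is busy:

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max   # current limits, in KB/s per device
echo 50000 > /proc/sys/dev/raid/speed_limit_min                             # push the resync harder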
I mounted the /dev/md5 device and checked the modified lilo and fstab files. They had reverted, so indeed the true master disk (in the first of the two IDE slots of the RAID controller) is treated as the master by the software. This seemed to be confirmed by another cat /proc/mdstat, which shows my master disk numbered as 0, while the other disk is numbered as 1.
I performed the same mdadm commands for the other partitions (/boot, /usr and /var), mounted them all together with proc bound in, chrooted into the fully RAID-enabled system, updated lilo.conf and ran lilo. My lilo.conf contains, amongst the other bits and pieces:
boot=/dev/md2
root=/dev/md5
raid-extra-boot=mbr
Before, the boot line had a device, not a partition, listed. The change here is that lilo will write the boot records to a partition and, because of the raid-extra-boot line, will also write the MBR for the array, doing this on each disk. There are a couple of other ways of fine-tuning for RAID systems, which can be read about in 'man lilo.conf'. That document says that it is not necessary to install the MBR on the device for parallel (RAID 1 in this case) arrays, but with an awkward DOS partition, I thought I'd do this too. The actual RAID 1 device used in the boot= line doesn't seem to matter, but I used the boot partition for good measure.
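Pulling the pieces together, the relevant chunk of lilo.conf ends up looking something like this (the kernel image path and label below are only placeholders, not necessarily what is on my system):

boot=/dev/md2
root=/dev/md5
raid-extra-boot=mbr
image=/boot/vmlinuz
    label=linux
    read-only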
I commented out references to the DOS partition in lilo.conf and fstab to avoid any conflict. I don't know whether mdadm will handle this correctly, so I will need to do some research before recklessly doing it anyway. From Knoppix, running lilo in a chroot, the /dev/ataraid/d0p1 entry in lilo.conf for my DOS partition worked, and Windows XP would boot. Under kernel 2.6, the ataraid device doesn't exist (superseded by the Promise driver and md), so will this be okay to set up under a 2.6 kernel (not Knoppix)? Maybe a permanent, though undesirable, solution would be to always run lilo from a chroot on Knoppix, thereby giving me the ataraid device which I know works for Windows.
I run lilo and get some warnings:
Warning: /dev/md2 is not on the first disk
The boot record of /dev/md2 has been updated
Warning: /dev/hde is not on the first disk
The Master boot record of /dev/hde has been updated
Warning: /dev/hdg is not on the first disk
The Master boot record of /dev/hdg has been updated
Which looks okay. I left in
other=/dev/ataraid/d0p1
label=dos
because at boot time lilo doesn't care about the /dev/ name; lilo stores a device number for the partition. If I can get that number and insert it directly into the lilo config, I will be able to run it under kernel 2.6 without having to use an ataraid device.
I rebooted and got a kernel panic:
EXT3-fs: unable to read superblock
EXT2-fs: unable to read superblock
FAT: unable to read boot sector
VFS: Cannot open root device "905" or md5
Please append a correct "root=" boot option
Kernel panic: VFS: unable to mount root fs on md5
I remember reading in one of the references that lilo didn't always recognise the presence of md partitions correctly, and needed a jolt in the form of a partition= line in lilo.conf. Above the kernel panic output, I can see the md kernel feature kicking into action:
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
Which looks like it didn't do anything.
http://www.sol-linux.com/Content/Documentation/sw-raid-install
I need to change the partition type from Linux (ext2/3) to Linux RAID autodetect, WITHOUT erasing any data. I should probably back up the partition table before playing, but as I have a whole spare disk, I'll risk it. The disks will have different time stamps, so md will probably check them again, but hey ho. Or will it? The partition table is not explicitly mirrored, only the partitions are, but it should be identical on both disks. Hmmm. I'll try to use cfdisk to change this without unmounting the RAID partitions. I started with an experiment on the secondary drive, hdg, and just the boot partition, which is doubly backed up. The reply was: "Wrote partition table, but re-read table failed. Reboot to update table." I unmounted and mdadm --stopped the raid partitions and repeated, but had the same result. So, reboot, and see if the md autodetect works during the boot sequence.
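For the record, the partition table backup and the type change can be done like this (a sketch only; the output file name is arbitrary, and the partition number is given at fdisk's prompt):

sfdisk -d /dev/hdg > hdg-partition-table.txt    # dump the table in a form sfdisk can restore later
fdisk /dev/hdg                                  # then: t, choose the partition, type fd (Linux raid autodetect), w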
Still much fannying around and little progress. I've been looking at using lilo -t -v2, which simulates a very verbose lilo run. This shows that although it talks about device md5 for root as device number 905, in reality the device codes are 2100 and 2200. Confusingly, the Knoppix-originating ataraid device is taking BIOS device 80, while my real IDE devices are on 81 and 82. However, it correctly states that device 81 will be the boot disk. The lilo HOWTO was worth reading, as were the docs in /usr/share/doc/lilo.
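Run from inside the same chroot as before, that is:

chroot /mnt/system /sbin/lilo -t -v2    # test mode: report what would be written, change nothing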
http://www.sol-linux.com/Content/Documentation/sw-raid-install - is this kernel 2.6 or not? suggests /dev/md/0 instead of /dev/md0 : is this relevant?
http://linas.org/linux/raid.html
http://www.spinics.net/lists/raid/msg04583.html - closely related to my original problem.
http://www.linuxquestions.org/questions/archive/2/2001/06/1/3034 - discusses problems of booting into RAID and a bit about autodetect
http://willert.dk/geek/raid.html - includes good config quotes and autodetect settings but relies on formatting
http://www.beaglebros.com/empeg/raid1/ - mentions autodetect kernel support but on an embedded platform I think.
http://lists.debian.org/debian-user/2004/debian-user-200403/msg00507.html - recommends having hda as boot disk and a non-RAID boot partition.
http://linuxtoday.com/news_story.php3?ltsn=2002-08-01-015-26-PS-HW-HL&tbovrmode=3
http://www.schwarzvogel.de/od2md.shtml
http://users.pandora.be/TheBlackUnicorn/linux/
http://www.infocopter.com/know-how/PROMISE_FastTrak_Linux_Driver_RedHat.htm
http://www.debianhelp.org howtos: boot root raid ext3 - swears by reversing the order of IDE so the first device is the boot device.
https://listman.redhat.com/archives/ataraid-list/
http://freshmeat.net/projects/debpromise/
http://freshmeat.net/projects/ataraid/
http://ttul.org/~rrsadler/linux-promise/
http://www.linuxquestions.org/questions/archive/1/2003/06/4/68620 - this guy got further than me - autodetect could at least see the devices here.
http://www.faqs.org/docs/Linux-HOWTO/ATA-RAID-HOWTO.html
http://www.linuxjournal.com/article.php?sid=2866
http://www.antgel.co.uk/compsci/linux/promise_raid.shtml
http://www.linuxquestions.org/questions/archive/18/2004/02/1/122328
explanation of why Promise is software RAID and its relationship with kernel 2.6:
http://www.ussg.iu.edu/hypermail/linux/kernel/0312.3/0605.html
http://www.linuxquestions.org/questions/showthread.php?s=&threadid=159956&highlight=software+raid+kernel+.6
HOWTOs
http://www.tldp.org/HOWTO/LILO-6.html - a detailed but a bit out-of-date LILO HOWTO
http://linas.org/linux/Software-RAID/Software-RAID.html
http://www.tldp.org/HOWTO/Software-RAID-HOWTO-7.html#ss7.5 - this HOWTO says the boot= entry in lilo MUST NOT BE a RAID device...
http://fy.chalmers.se/~appro/linux/HOWTO-mirror-root.html - whereas this one says it can be boot=/dev/md0
http://www.morphix.org/modules/newbb/ - suggests that the boot= line should be absent for RAID (at the bottom of the page)
http://www.rot13.org/~dpavlin/md-raid1.html
http://linuxquestions.org/questions/history/157610
http://www.experts-exchange.com/Operating_Systems/Linux/Q_20846518.html