[SOLVED] Install bootloader failed Incompatible Raid version


[SOLVED] Install bootloader failed Incompatible Raid version

Postby hankivy » Jun 28th, '14, 21:14

To install Mageia 4, I bought two identical disk drives to install my file systems as RAID1 mirrored file systems.

Out of the box, the disk drives already had a partition label on them.
DiskDrake said the disk had "Partition table type: table::dos".
The Install did not seem to give me the option of setting up a different partition table type.
I set up the first partition to start at sector 2048, and added the rest after that.
I set up partitions for /boot, swap, / (aka root), /usr, /tmp, /var, /home, and /shares.
All of the partitions are Linux RAID1, except swap.

FYI: /dev/md0 is the raid device for the /boot partition.

The installation ran well, except at the end it failed to install the boot loader.
Here are the error messages:

Installation of boot loader failed. The following error occurred:
:Fatal: Incompatible Raid version information on /dev/md0 (RV=0.90 GAI=1.2)

Booting the hard drives displayed a blank screen instead of a menu.

I have four backups of all of my personal data, so I can start the installation process all over. I do NOT need ANY of the data on these two drives; I will do anything to them, and restore my personal data later.

Do you suggest I change the disk partition table type? If so, which application do you recommend? Is there a hidden Expert mode in DiskDrake that will get the job done?

I can add any partitions, or empty space that you suggest.
Last edited by hankivy on Jul 15th, '15, 06:53, edited 3 times in total.
hankivy
 
Posts: 128
Joined: May 19th, '14, 20:36

Re: Install bootloader failed Incompatible Raid version

Postby doktor5000 » Jun 28th, '14, 22:22

Which bootloader did you choose?
Cauldron is not for the faint of heart!
Caution: Hot, bubbling magic inside. May explode or cook your kittens!
----
Disclaimer: Beware of allergic reactions in answer to unconstructive complaint-type posts
doktor5000
 
Posts: 18018
Joined: Jun 4th, '11, 10:10
Location: Leipzig, Germany

Bootloader choice, LILO, and RAID metadata version

Postby hankivy » Jun 28th, '14, 23:00

The install process gave me only one choice of bootloader: LILO.

Another Mageia user on the Mageia IRC pointed out code in the install process that does not allow GRUB when the boot partition is on RAID.
Another post I found pointed out that LILO supports a boot partition on RAID only if the RAID metadata is version 0.90, NOT 1.x.
I am going to boot my system with Mageia 4 Live, un-RAID the boot partition, recreate the RAID for the boot partitions using version 0.90 metadata, and then get LILO installed as the bootloader.
I might also try booting in rescue mode.
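For reference, here is a rough sketch of the commands I expect to need to dismantle the existing array from the Live DVD before recreating it (my devices are /dev/md0, built from /dev/sda1 and /dev/sdb1; double-check yours before running anything destructive):
Code: Select all
umount /boot                       # only if it happens to be mounted
mdadm --stop /dev/md0              # stop the running array
mdadm --zero-superblock /dev/sda1  # wipe the version 1.2 RAID metadata
mdadm --zero-superblock /dev/sdb1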
hankivy
 
Posts: 128
Joined: May 19th, '14, 20:36

raid1 /boot partition compatible with LILO

Postby hankivy » Jun 29th, '14, 07:37

DiskDrake will create RAID arrays with RAID metadata format version 1.2, and does not offer the option to use any other format.

The install process offers two choices of boot loader: LILO and GRUB (Legacy).

If the /boot partition is on RAID, the installer correctly refuses to offer GRUB (Legacy), since GRUB (Legacy) does not support that combination.
But LILO does not support a /boot partition with RAID metadata format 1.2; it only supports RAID metadata format 0.90.

So the Mageia 4 Install DVD, by itself, will not let you create a bootable system with a RAID1 /boot partition.

I had to use Mageia 4 Live to delete and recreate the partitions for my Linux RAID partitions on the disk.
Then I added the partitions to RAID using the command line, rather than DiskDrake.

Here is the command I used.
Code: Select all
mdadm --create /dev/md128 --metadata=0.90 --level=raid1 --raid-devices=2 /dev/sd[ab]1

/dev/md128 is just an unused device name of the correct kind.
My two disk partitions were /dev/sda1, and /dev/sdb1.
It is OK to use DiskDrake for the other partitions.

P.S. I used the following commands to just look at the status of devices, etc.
Code: Select all
ls -l /dev/md*
mdadm --examine /dev/sda1
mdadm --query --detail /dev/md128
hankivy
 
Posts: 128
Joined: May 19th, '14, 20:36

Re: Install bootloader failed Incompatible Raid version [SOLVED]

Postby doktor5000 » Jun 29th, '14, 10:19

Seems this has already been reported as a bug: https://bugs.mageia.org/show_bug.cgi?id=9524
But thanks for your workaround :)
Cauldron is not for the faint of heart!
Caution: Hot, bubbling magic inside. May explode or cook your kittens!
----
Disclaimer: Beware of allergic reactions in answer to unconstructive complaint-type posts
doktor5000
 
Posts: 18018
Joined: Jun 4th, '11, 10:10
Location: Leipzig, Germany

Better Workaround [SOLVED]

Postby hankivy » Jun 30th, '14, 03:59

I have four backups of my data on the old system.
And two brand new identical disk drives.
I want all of my data, including the /boot partition, to be mirrored as RAID1.
I want Mageia 4, and I want to get there using only Mageia utilities.
I have the "Mageia 4 Live" DVD, and the "Mageia 4" DVD.

Boot Mageia 4 Live, pick language, etc.
Launch DiskDrake, and enter Expert mode.

Repeat the following paragraph for both disk drives:
Clear all partitions. Now both drives have only an empty partition table.
DiskDrake says "Partition table type: table::dos".

Create /boot partition, first sector 2048, type Linux RAID, size 8GB, (Pick your own size, PYOS.) [The device names of the /boot partitions will be sda1, and sdb1. DiskDrake will only know the partitions by their device names, until we add Mount Points. Optionally, we could add labels.] (I am leaving space at the beginning of the disk for a future GUID Partition Table, GPT; or GRUB2.)

Create partitions, type Linux RAID, PYOS, for / (aka root).
Create swap partition, type SWAP, size 8GB, PYOS.
Create partitions, type Linux RAID, PYOS, for /usr, /var, /tmp, /home, etc.

[For legacy reasons, I put the /boot and / partitions at the beginning of the disks.]
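(As an aside, for anyone who prefers the command line, roughly the same layout could be created with parted instead of DiskDrake. This is only a sketch under my assumptions: the sizes past /boot are placeholders, and it destroys everything on the disk. Repeat for /dev/sdb.)
Code: Select all
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary 2048s 8GiB              # /boot
parted -s /dev/sda set 1 raid on                          # type Linux RAID
parted -s /dev/sda mkpart primary 8GiB 40GiB              # / (root), size is a placeholder
parted -s /dev/sda set 2 raid on
parted -s /dev/sda mkpart primary linux-swap 40GiB 48GiB  # swap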

Select the / partition on one drive, and click "Add to RAID".
(The system will ask "Do you want to install the mdadm package?" Say yes.)
Put the partition into a new md device.
Select the / partition on the other drive, and click "Add to RAID".
Put this partition into the md device that was just created.
Select the newly created md device on the RAID tab. Label it, pick a filesystem type like ext4, and format it.
[The label just helps to bring order to potential chaos, and document our method. We will not tell the system the mount points until later. The labels remind us which mount point we meant to use for the partition.]

Repeat the paragraph above for all of the other pairs of partitions, EXCEPT /boot, and SWAP.
(We could have used RAID1 for the swap partitions too, but I chose not to. The command-line equivalent of "Add to RAID" is sketched below.)
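(For, say, the root pair, "Add to RAID" amounts to something like the line below; /dev/md1 and the sda2/sdb2 partition names are just illustrative. mdadm's default here is metadata version 1.2, which is fine for every array except /boot.)
Code: Select all
mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sda2 /dev/sdb2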

[The "Add to RAID" button action created RAID metadata, version 1.20 in the pairs so far.
We need RAID metadata, version 0.90 in the /boot partitions.]
Close the DiskDrake window.

Launch a console window, aka Konsole, or Command Line Interface, or terminal.

If you need to be the superuser (root), use the command "su root":
$ su root
# ls -l /dev/md*
[Pick an unused md device number similar to the listing above; like /dev/md6, or /dev/md126]
# mdadm --create /dev/md6 --metadata=0.90 --raid-devices=2 --level=raid1 /dev/sda1 /dev/sdb1
# sync ; sync
[The command line above would force all pending writes to disk to complete on some legacy systems. I do not know a better way in Mageia.]
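[Before moving on, it is worth confirming that the new array really carries the 0.90 metadata. This just reuses the status commands from my earlier post:]
# mdadm --examine /dev/sda1 | grep -i version   # should report 0.90
# mdadm --query --detail /dev/md6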

Launch DiskDrake, and enter Expert mode.
Select the RAID tab, and then select the /boot device.
Set the type of the /boot partition.
Label the /boot partition as desired.
Format the /boot partition.
Verify that the partitions all look as desired.
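[If DiskDrake gives any trouble at this step, the formatting and labelling can also be done from the console. A sketch, assuming ext4 and the label "boot", with /dev/md6 as created above:]
Code: Select all
mkfs.ext4 -L boot /dev/md6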

Shutdown Mageia 4 Live.

Boot "Install Mageia 4" from the Mageia 4 DVD.
Go through the install process.
When you are asked about disk partitioning, select "Custom Partitioning".
All of the partitions should be like you want them.
Select the raid tab to see all of the raid partitions.
Set the mount points of the partitions.
Select "Done".
Finish the install process.
hankivy
 
Posts: 128
Joined: May 19th, '14, 20:36

Re: Install bootloader failed Incompatible Raid version [SOLVED]

Postby pernel » Apr 5th, '15, 12:27

Would a similar workaround be possible for a RAID 5 3-disk configuration?
pernel
 
Posts: 66
Joined: Mar 21st, '12, 20:13

Install bootloader Raid5 [SOLVED]

Postby hankivy » Apr 5th, '15, 20:05

The answer is Yes, and No. Another Mageia user and I have installed most of the partitions as RAID5. It was his system. I just consulted.

See https://forums.mageia.org/en/viewtopic.php?f=8&t=8309 for details.

The short version is that the /boot partition must be RAID1. The image of the kernel must be intact on whichever disk the kernel is actually booted from.
The other partitions (file systems) can be a three-disk RAID5.

You could have three identical RAID1 copies of the /boot partition on three different disks. This is a small price to pay, since the /boot partition is fairly small compared to today's disks. You will have to look at how to set up your boot loaders on the disks as desired. You may need to install the boot loader (LILO, GRUB, whatever) three times, once for each drive.
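A rough sketch of what that might look like with mdadm (the device names are illustrative, and the 0.90 metadata on the /boot array is still required for LILO):
Code: Select all
# /boot mirrored across all three disks, LILO-compatible metadata
mdadm --create /dev/md0 --metadata=0.90 --level=raid1 --raid-devices=3 /dev/sd[abc]1
# everything else as a three-disk RAID5 (the default metadata is fine here)
mdadm --create /dev/md1 --level=raid5 --raid-devices=3 /dev/sd[abc]2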
hankivy
 
Posts: 128
Joined: May 19th, '14, 20:36

Installing the boot loader for a two disk RAID1 configuration

Postby hankivy » Jun 5th, '15, 20:17

My current policy on my two disk RAID1 system is to install the boot loader on both disks every time I change the default boot kernel.
The post at https://forums.mageia.org/en/viewtopic.php?f=41&t=9824#p56755 describes this process very well.
At a minimum, I needed to install the boot loader on both disks at least once, so that I can boot from either disk.
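For LILO specifically, the raid-extra-boot option in /etc/lilo.conf is one way to write the boot loader to both disks in a single run of lilo. A sketch, assuming my two-disk layout; check man lilo.conf before relying on it:
Code: Select all
# fragment of /etc/lilo.conf
boot=/dev/md0
raid-extra-boot=/dev/sda,/dev/sdb   # also write the boot record to both underlying disks
With that in place, a single run of the lilo command should update both drives.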

Reasons for policy:
At least one time, I installed the boot loader on the second drive, but not the first, with the new kernel as the boot default.
Then when I booted from the first drive, the old kernel was flagged as the default.
I do not understand how this could happen. :?: I did not test it further.
hankivy
 
Posts: 128
Joined: May 19th, '14, 20:36

