Setting up software RAID1 on a live Ubuntu 16.04 / Debian 8/9 system without downtime

This guide explains how to set up software RAID1 on an already running Debian 9 system. The GRUB2 bootloader will be configured so that the system can still boot if either of the hard drives fails (no matter which one).

Preliminary Note

In this tutorial I am using a Debian 9 system with two disks, /dev/vda and /dev/vdb, which are identical in size.
/dev/vdb is currently unused, and /dev/vda holds the existing partitions:

After completing this guide I will have the following situation:

The current situation:

Disk /dev/vdb doesn’t contain a valid partition table
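The state of both disks can be inspected at any time; a minimal sketch (the device names /dev/vda and /dev/vdb come from the setup above):

```shell
# Show the partition tables of both disks; /dev/vdb should report
# no valid partition table at this point
fdisk -l /dev/vda /dev/vdb
```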

Installing mdadm

First of all, install the md tools:
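On Debian/Ubuntu the package is mdadm:

```shell
# Install mdadm, the Linux software RAID management tool
apt-get update
apt-get install -y mdadm
```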

To avoid a reboot, let's load a few kernel modules:
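The personalities listed later in /proc/mdstat suggest the following modules; loading them makes the RAID levels available to the running kernel:

```shell
# Load the RAID personalities so no reboot is needed
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

# Verify: /proc/mdstat should now list the loaded personalities
cat /proc/mdstat
```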


Preparing the second disk

To create a software RAID1 on a running system, we have to prepare the second disk added to the system (in this case /dev/vdb) for RAID1, then copy the contents from the first disk (/dev/vda) to it, and finally add the first disk to the RAID1 array.

Let's copy the partition table from /dev/vda to /dev/vdb so that both disks have exactly the same layout.
For MBR disks:
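A sketch of the usual sfdisk invocation for cloning an MBR partition table:

```shell
# Dump /dev/vda's partition table and write it to /dev/vdb
sfdisk -d /dev/vda | sfdisk --force /dev/vdb
```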

For GPT disks:
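For GPT, sgdisk can replicate the table; note that the replicated disk must then get fresh GUIDs:

```shell
# Replicate /dev/vda's GPT onto /dev/vdb, then randomize GUIDs on /dev/vdb
sgdisk /dev/vda -R /dev/vdb
sgdisk -G /dev/vdb
```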

And the output of the command:

Change the type of the partitions on /dev/vdb to Linux raid autodetect:
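A non-interactive sketch (the partition numbers 1 and 5 are taken from the /proc/mdstat output later in this guide; fdisk's interactive `t` command achieves the same):

```shell
# Set MBR type "fd" (Linux raid autodetect) on both partitions of /dev/vdb
sfdisk --part-type /dev/vdb 1 fd
sfdisk --part-type /dev/vdb 5 fd
```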

To make sure that there are no remains from previous RAID installations on /dev/vdb, we run the following commands:
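The commands in question zero any old md superblocks; a sketch assuming the partitions /dev/vdb1 and /dev/vdb5:

```shell
# Wipe any leftover RAID superblocks from the partitions on /dev/vdb
mdadm --zero-superblock /dev/vdb1
mdadm --zero-superblock /dev/vdb5
```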

If you receive error messages from these commands, it simply means there are no remains from previous RAID installations, which is nothing to worry about:

Creating RAID arrays

Now use mdadm to create the RAID arrays. We mark the first drive (vda) as "missing" so that the existing data on it is not wiped out:
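A sketch matching the array layout shown in /proc/mdstat below (md0 on partition 1, md1 on partition 5):

```shell
# Create both RAID1 arrays in degraded mode; "missing" reserves a slot
# for the corresponding /dev/vda partition, which still holds the live data
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/vdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/vdb5
```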

Check the status:

root@debian9:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 vdb5[1]
1045952 blocks super 1.2 [2/1] [_U]

md0 : active raid1 vdb1[1]
19905408 blocks super 1.2 [2/1] [_U]

unused devices: <none>

The output above means that we have two degraded arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok).

Create the filesystems on the RAID arrays (ext4 on /dev/md0 and swap on /dev/md1):
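The corresponding commands, as a sketch:

```shell
# ext4 for the future root filesystem, swap on the second array
mkfs.ext4 /dev/md0
mkswap /dev/md1
```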


Adjust the mdadm configuration file, which doesn't contain any information about the RAID arrays yet:
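The usual approach is to keep a backup and append the scanned array definitions:

```shell
# Back up the stock config, then append the detected array definitions
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
```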

Display the content of /etc/mdadm/mdadm.conf:

Adjusting The System To RAID1

Let’s mount /dev/md0:
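The mount point /mnt/md0 seen in the output below has to be created first; a sketch:

```shell
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0
```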

root@debian9:~# mount
/dev/vda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
/dev/md0 on /mnt/md0 type ext4 (rw)

Replace the device names in /etc/fstab with the UUID values returned by blkid:
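A sketch; the fstab lines in the comment are placeholders showing the expected form, not literal values:

```shell
# Print the UUIDs of the new arrays
blkid /dev/md0 /dev/md1
# /etc/fstab should then contain lines of the form (UUIDs are placeholders):
#   UUID=<uuid-of-md0>  /     ext4  errors=remount-ro  0  1
#   UUID=<uuid-of-md1>  none  swap  sw                 0  0
```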

After changing the UUID values the /etc/fstab should look as follows:

Next, replace /dev/vda1 with /dev/md0 in /etc/mtab:
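A sketch; note that on newer systems /etc/mtab is a symlink to /proc/self/mounts and needs no manual change:

```shell
# Only applies where /etc/mtab is a regular file
sed -i 's|/dev/vda1|/dev/md0|g' /etc/mtab
```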

root@debian9:~# cat /etc/mtab
/dev/md0 / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
udev /dev devtmpfs rw,mode=0755 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
none /run/shm tmpfs rw,nosuid,nodev 0 0
none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0
/dev/md0 /mnt/md0 ext4 rw 0 0

Set up the GRUB2 boot loader.

Create the file /etc/grub.d/09_swraid1_setup as follows:
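A minimal sketch of such a file; the kernel version 4.9.0-3-amd64 is an assumption here, so substitute whatever uname -r reports, and make the file executable afterwards (chmod +x /etc/grub.d/09_swraid1_setup):

```shell
#!/bin/sh
exec tail -n +3 $0
menuentry 'Debian GNU/Linux, with Linux 4.9.0-3-amd64 (RAID)' {
    insmod mdraid1x
    linux   /boot/vmlinuz-4.9.0-3-amd64 root=/dev/md0 ro quiet
    initrd  /boot/initrd.img-4.9.0-3-amd64
}
```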

Make sure you use the correct kernel version in the menuentry (in the linux and initrd lines).

root@debian9:~# uname -r

Update the GRUB configuration and adjust our ramdisk to the new situation:

Add the line:
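A sketch of these steps; the "line" to add is presumably the raid1 module (an assumption, since the original snippet is missing), which ensures the module ends up in the initramfs:

```shell
# Assumption: make the raid1 module load at boot / get into the initramfs
echo raid1 >> /etc/modules

update-grub
update-initramfs -u
```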


Copy files to the new disk

Copy the files from the first disk (/dev/vda) to the second one (/dev/vdb):
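With /dev/md0 mounted on /mnt/md0, a sketch of the copy:

```shell
# Copy the running system onto the mounted array:
# -d preserve links, -p preserve attributes, -R recursive,
# -x stay on this filesystem (skips /proc, /sys, /mnt, etc.)
cp -dpRx / /mnt/md0
```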


Preparing GRUB2 (Part 1)

Install the GRUB2 boot loader on both disks (/dev/vda and /dev/vdb):
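A sketch:

```shell
grub-install /dev/vda
grub-install /dev/vdb
```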


Now we reboot the system and hope that it boots fine from our RAID arrays:

Preparing /dev/vda

If everything went well, you should now find /dev/md0 in the output of:

The output of:
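The elided checks are presumably along these lines:

```shell
# / should now be on /dev/md0
df -h
# Both arrays should be active (still degraded at this point)
cat /proc/mdstat
```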


Change the type of the partitions on /dev/vda to Linux raid autodetect:
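The same type change as before, now on the original disk (partition numbers 1 and 5 assumed from the layout shown earlier):

```shell
# Set type "fd" (Linux raid autodetect) on the original disk's partitions
sfdisk --part-type /dev/vda 1 fd
sfdisk --part-type /dev/vda 5 fd
```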


Now we can add /dev/vda1 and /dev/vda5 to the respective RAID arrays:
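Given that md1's existing member is partition 5 (per the /proc/mdstat output above), a sketch:

```shell
mdadm --add /dev/md0 /dev/vda1
mdadm --add /dev/md1 /dev/vda5
```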

Take a look at:
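The arrays now resynchronize in the background; progress can be watched with:

```shell
watch -n 5 cat /proc/mdstat
```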

Then adjust /etc/mdadm/mdadm.conf to the new situation:
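On Debian the config can be regenerated from scratch; a sketch (mkconf is Debian's helper script for this):

```shell
# Regenerate the array definitions now that both disks are members
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
```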

Display the content of /etc/mdadm/mdadm.conf:

Preparing GRUB2 (Part 2)

Now it's safe to delete /etc/grub.d/09_swraid1_setup.

Update our GRUB2 bootloader configuration and install it again on both disks (/dev/vda and /dev/vdb):
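A sketch of these final steps:

```shell
rm -f /etc/grub.d/09_swraid1_setup
update-grub
update-initramfs -u
grub-install /dev/vda
grub-install /dev/vdb
```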

Reboot the machine.

Repairing software RAID1 on Debian 8/9 / Ubuntu 16.04

Check the md status:
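As above:

```shell
# A failed member shows as [_U] or [U_] instead of [UU]
cat /proc/mdstat
```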

Add the replacement disk to the RAID array:
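A sketch, assuming the failed disk was replaced by a blank /dev/vdb and the surviving disk is /dev/vda (swap the device names if it's the other way round):

```shell
# Copy the partition table from the healthy disk to the replacement,
# then re-add the partitions to the arrays
sfdisk -d /dev/vda | sfdisk --force /dev/vdb
mdadm --add /dev/md0 /dev/vdb1
mdadm --add /dev/md1 /dev/vdb5
```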

Check the sync status:
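As before:

```shell
watch -n 5 cat /proc/mdstat
```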


Wait for the sync to finish.

