Assuming you have added the DEVICE and ARRAY entries to /etc/mdadm.conf, starting the array can be accomplished with a single command:
root@localhost / # mdadm --assemble --scan
mdadm: /dev/md0 has been started with 6 drives.
root@localhost / #
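For reference, a minimal /etc/mdadm.conf that supports scan-based assembly might look like the sketch below. The UUID is the one used elsewhere in this article; your own arrays will have different values, and "DEVICE partitions" is a common catch-all that scans every partition listed in /proc/partitions.

```
# Scan all partitions listed in /proc/partitions for superblocks
DEVICE partitions
# One ARRAY line per array; the UUID identifies the member devices
ARRAY /dev/md0 UUID=5c718acf:61182c6e:0aced717:e4d9d364
```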
If you have not added the array configuration information to the /etc/mdadm.conf file, there is still hope. Starting the array without an /etc/mdadm.conf file is a two-step process: first find the UUID of the array you wish to start, then start the array with the UUID qualifier.
root@localhost / # mdadm --examine /dev/sdb2
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 5c718acf:61182c6e:0aced717:e4d9d364
           Name : rescue:1  (local to host rescue)
  Creation Time : Wed Apr 25 16:52:27 2012
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 1048552 (512.07 MiB 536.86 MB)
     Array Size : 1048552 (512.07 MiB 536.86 MB)
    Data Offset : 24 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 299b2460:a79d36a8:b6992772:2379959b
    Update Time : Thu Jun 13 14:22:51 2013
       Checksum : af046248 - correct
         Events : 269
    Device Role : Active device 1
    Array State : .A ('A' == active, '.' == missing)
root@localhost / #
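If you want to capture just the UUID rather than reading the full --examine dump, the field can be pulled out with awk. A small sketch, where the sample line mirrors the "Array UUID" line of the output above:

```shell
# Hypothetical captured line from `mdadm --examine /dev/sdb2`
line="     Array UUID : 5c718acf:61182c6e:0aced717:e4d9d364"
# Split on " : " and keep the value field
uuid=$(printf '%s\n' "$line" | awk -F' : ' '/Array UUID/ {print $2}')
echo "$uuid"   # 5c718acf:61182c6e:0aced717:e4d9d364
```

On a live system you would pipe `mdadm --examine /dev/sdb2` itself into the same awk filter instead of using a saved line.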
Use the UUID to start the array:
root@localhost / # mdadm --assemble /dev/md0 --run -u 5c718acf:61182c6e:0aced717:e4d9d364
mdadm: /dev/md0 has been started with 1 drive (out of 2).
root@localhost / #
This is an example of a degraded array: only 1 of the 2 member drives was available, so the --run flag was needed to force mdadm to start the array anyway.
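When an array starts degraded like this, /proc/mdstat flags it with markers such as [2/1] and [_U]. A sketch of checking for that state, using a hypothetical mdstat snippet rather than a live system:

```shell
# Hypothetical /proc/mdstat content for a degraded 2-disk RAID1
mdstat='md0 : active raid1 sdb2[1]
      1048552 blocks super 1.2 [2/1] [_U]'
# [2/1] = 2 configured devices, 1 active; "_" marks the missing member
if printf '%s\n' "$mdstat" | grep -q '\[_U\]'; then
    echo "md0 is degraded"
fi
```

On a real system you would read /proc/mdstat directly instead of the saved snippet.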
Add the array configuration information to /etc/mdadm.conf so you do not have to reference this KB again 😉
root@localhost / # mdadm --detail --scan >> /etc/mdadm.conf
root@localhost / #
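The line appended by --detail --scan is an ARRAY entry; for the example array in this article it would look roughly like the following (the name= value depends on the host, so treat this as illustrative):

```
ARRAY /dev/md0 metadata=1.2 name=rescue:1 UUID=5c718acf:61182c6e:0aced717:e4d9d364
```

Note that --detail --scan emits only ARRAY lines; if you also want a DEVICE line (for example, "DEVICE partitions"), add it to /etc/mdadm.conf yourself.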