
HowTo: Fix inactive Linux md RAID array state

This applies when you can assemble a previously created RAID array, but for some reason it comes up in the inactive state and/or all the drives in the md array are marked with (S), which means SPARE DRIVE.

In my case there was a RAID5 array of 4 drives with one failed drive. At first the array ran successfully with 3 out of the 4 drives, but after some terabytes of data had been copied, one of the remaining drives started to report errors, finally degraded, and the array stopped (in my case it was /dev/md2).

So what to do:

First, investigate:

dmesg
cat /proc/mdstat
mdadm --examine -v /dev/md2
mdadm --detail -v /dev/md2
mdadm --examine -v /dev/sd[b,c,d]3
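
A quick way to narrow that output down (a generic sketch; the field names below apply to version 1.x superblocks, and the device names are the ones from my setup) is to pull out only the fields that matter for deciding whether the members are still in sync:

mdadm --examine /dev/sd[b,c,d]3 | grep -E 'Update Time|Events|Array State'  # members whose Events count and Update Time match are still in sync with each other

If only the drive that dropped out disagrees, a forced assembly (see below) usually brings the array back.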

Then try to assemble all the arrays (there were 3 arrays built from partitions on those 3 working hard drives):

mdadm --assemble --scan  # it will scan all drives for md magic numbers and will try to run raid arrays found.
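
To see what the scan actually assembled, you can also dump one ARRAY line per running array (a generic check, nothing specific to this setup):

mdadm --detail --scan  # prints an ARRAY line with UUID and metadata version for every assembled array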

Then see what's what:

cat /proc/mdstat
mdadm --detail /dev/md2
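
For reference, an inactive md array in /proc/mdstat looks roughly like the snippet below (illustrative only: device names, member indexes and the block count are placeholders, not my real output):

md2 : inactive sdd3[3](S) sdc3[2](S) sdb3[1](S)
      2930276352 blocks super 1.2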

For /dev/md2 I see that the array state is inactive and all 4 drives are marked with (S). So, here is what to do to bring this array back to life:

mdadm --stop /dev/md2
mdadm -A --force /dev/md2

Voila!
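
If the array is not listed in mdadm.conf, the same forced assembly can be done by naming the member partitions explicitly (the devices below are the ones from my setup; adjust them to your layout):

mdadm --stop /dev/md2
mdadm --assemble --force /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3  # bypasses the config file by listing the members directly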

You can run fsck before mounting the partition.
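
For example, assuming an ext3/ext4 filesystem on the array and /mnt/data as the mount point (both assumptions, not from my actual setup):

fsck -n /dev/md2  # dry run: opens the filesystem read-only and answers "no" to every repair prompt
fsck /dev/md2     # real repair, only after the dry run looks sane
mount /dev/md2 /mnt/data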

NOTE: in my case, the partition which caused the array failure was reported with a wrong timestamp (in the output of the mdadm ... -v commands). That showed me that no data was out of sync, which is why I bravely used --force to activate the array.

2 responses to “HowTo: Fix inactive Linux md RAID array state”

  1. Hello, on my QNAP TS253:

    mdadm -A --force /dev/md1
    mdadm: /dev/md1 not identified in config file.

    I assume md1 exists:

    mdadm --detail /dev/md1
    /dev/md1:
    Version : 1.0
    Creation Time : Fri Feb 25 12:47:00 2022
    Raid Level : raid1
    Array Size : 235159296 (224.27 GiB 240.80 GB)
    Used Dev Size : 235159296 (224.27 GiB 240.80 GB)
    Raid Devices : 1
    Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Wed Mar 2 19:23:16 2022
    State : active, Rescue
    Active Devices : 1
    Working Devices : 1
    Failed Devices : 0
    Spare Devices : 0

    Name : 1
    UUID : 317202ef:a621fbff:725ba63c:b555eab0
    Events : 489

    Number Major Minor RaidDevice State
    0 8 3 0 active sync /dev/sda3

    Have you any idea? Please help.


    1. It seems /dev/md1 is missing from your mdadm.conf file.
      Can you list the contents of /etc/mdadm.conf?
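
      If it really is missing, here is a sketch of two common workarounds (assuming a standard /etc/mdadm.conf path, which may differ on QNAP firmware):

      mdadm --examine --scan                       # rebuild ARRAY lines from the on-disk superblocks
      mdadm --examine --scan >> /etc/mdadm.conf    # append them to the config, then retry the assemble
      mdadm --assemble --force /dev/md1 /dev/sda3  # or skip the config by naming the member from your --detail output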

