
Argon EON NAS Removed

Well, something didn’t look quite right: I was checking OpenMediaVault and it showed /dev/md0 as “clean, degraded”.

I asked ChatGPT, my new assistant, what it meant. It said that one of the drives is no longer part of the RAID 1 array (which means it isn’t really RAID 1 any more).
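A quick way to confirm this from a shell, without opening the web UI, is /proc/mdstat: for a two-disk RAID 1, a healthy array shows [2/2] [UU], while a degraded one shows [2/1] [_U] (one of the two members missing). The same notation turns up again in the rebuild output further down.

$ cat /proc/mdstat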

OK, how do we find out?

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0  3.6T  0 disk
sdb           8:16   0  3.6T  0 disk
└─md0         9:0    0  3.6T  0 raid1 /srv/dev-disk-by-uuid-a56113ad-0c32-4ec7-a2f8-92392525f83f
mmcblk0     179:0    0 29.7G  0 disk
├─mmcblk0p1 179:1    0  512M  0 part  /boot/firmware
└─mmcblk0p2 179:2    0 29.2G  0 part  /var/folder2ram/var/cache/samba
                                      /var/folder2ram/var/lib/monit
                                      /var/folder2ram/var/lib/rrdcached
                                      /var/folder2ram/var/spool
                                      /var/folder2ram/var/lib/openmediavault/rrd
                                      /var/folder2ram/var/tmp
                                      /var/folder2ram/var/log
                                      /

OK, both sda and sdb are listed, but only sdb shows up as part of md0.
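Another cheap check at this point: does sda still carry the array’s RAID superblock? mdadm can read the per-device metadata directly:

$ sudo mdadm --examine /dev/sda

If it reports the same Array UUID as md0, the drive was a member and has simply dropped out; if it reports no md superblock at all, something more serious has happened to it.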

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Nov 20 22:40:56 2024
        Raid Level : raid1
        Array Size : 3906886464 (3.64 TiB 4.00 TB)
     Used Dev Size : 3906886464 (3.64 TiB 4.00 TB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Feb  4 11:31:05 2025
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : argon:0  (local to host argon)
              UUID : fada8382:82aae7d0:bd4b0af6:6847b4e5
            Events : 12047

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       16        1      active sync   /dev/sdb

That’s not good. One drive is shown as removed, the other (/dev/sdb) as active sync.
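Before re-adding a drive that has fallen out of an array, it’s worth a quick look at why it fell out. Two cheap sanity checks (smartctl comes from the smartmontools package, which may need installing first):

# kernel messages mentioning the drive (I/O errors, link resets, ...)
$ sudo dmesg | grep -i sda

# SMART overall health verdict for the drive
$ sudo smartctl -H /dev/sda

If the drive itself looks healthy, re-adding it is reasonable; if it’s logging errors, it’s time for a replacement instead.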

Back to ChatGPT then. How do I restore this?

Since the /dev/md0 device was created like this:

$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

To restore:

$ sudo mdadm --manage /dev/md0 --add /dev/sda

Then to monitor the progress:

$ watch cat /proc/mdstat

And when it’s done, run this again:

$ sudo mdadm --detail /dev/md0
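That’s the sequence for re-adding the original drive. If the drive had actually died and been swapped for a new one, the steps would be almost the same (a sketch, assuming the replacement also shows up as /dev/sda): wipe any stale RAID metadata from the new disk, then add it and let mdadm run a full rebuild.

# only needed if the replacement disk was previously part of another array
$ sudo mdadm --zero-superblock /dev/sda

$ sudo mdadm --manage /dev/md0 --add /dev/sda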

Let’s give it a go; what have I got to lose, other than 70GB of files I really need?

It looks like this:

Every 2.0s: cat /proc/mdstat                                                                                argon: Tue Feb  4 22:05:07 2025

Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda[0] sdb[1]
      3906886464 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.2% (10583488/3906886464) finish=580.2min speed=111909K/sec
      bitmap: 8/30 pages [32KB], 65536KB chunk

unused devices: <none>

That will take some time.

Update: It didn’t take much time after all, about 5-10 minutes; presumably the internal write-intent bitmap meant only the blocks that changed since sda dropped out had to be resynced.

Now:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Nov 20 22:40:56 2024
        Raid Level : raid1
        Array Size : 3906886464 (3.64 TiB 4.00 TB)
     Used Dev Size : 3906886464 (3.64 TiB 4.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Feb  4 22:09:12 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : argon:0  (local to host argon)
              UUID : fada8382:82aae7d0:bd4b0af6:6847b4e5
            Events : 12106

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb

And OpenMediaVault says ‘clean’, so I’m happy again.
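One possible follow-up, so a degraded array doesn’t go unnoticed next time: mdadm can send an email the moment an array degrades. On Debian-based systems (OpenMediaVault is Debian-based) the monitor daemon takes the address from the MAILADDR line in /etc/mdadm/mdadm.conf, and a one-off test alert confirms that mail actually gets delivered. A sketch, with you@example.com standing in for a real address; OpenMediaVault’s own notification settings may already cover this.

# in /etc/mdadm/mdadm.conf
MAILADDR you@example.com

# send a test alert for every array to verify delivery
$ sudo mdadm --monitor --scan --oneshot --test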