Background
Converting a oneprovider server's software RAID (mdadm) from RAID1 to RAID0.
# VPS

Reference: https://www.taterli.com/8394/

cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda3[0] sdb3[1]
      998933504 blocks super 1.2 [2/2] [UU]
      [===========>.........]  resync = 58.6% (585393536/998933504) finish=34.4min speed=200051K/sec
      bitmap: 5/8 pages [20KB], 65536KB chunk

md0 : active raid1 sda2[0] sdb2[1]
      613376 blocks super 1.2 [2/2] [UU]

unused devices: <none>
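The `finish=` estimate in the resync line is just the remaining kibibytes divided by the current speed. A quick awk sketch with the numbers above confirms it:

```shell
# Sanity-check mdstat's ETA: remaining KiB / speed (KiB/s), in minutes.
# Numbers copied from the resync line above.
awk 'BEGIN {
    total = 998933504; done = 585393536; speed = 200051
    printf "%.1f min\n", (total - done) / speed / 60
}'
# -> 34.5 min (mdstat shows finish=34.4min; the speed sample fluctuates)
```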

mdadm /dev/md1 --fail /dev/sda3
mdadm: set /dev/sda3 faulty in /dev/md1
mdadm /dev/md1 --remove /dev/sda3
mdadm: hot removed /dev/sda3 from /dev/md1
wipefs -a /dev/sda3
/dev/sda3: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb3[1]
      998933504 blocks super 1.2 [2/1] [_U]
      bitmap: 6/8 pages [24KB], 65536KB chunk

md0 : active raid1 sda2[0] sdb2[1]
      613376 blocks super 1.2 [2/2] [UU]

unused devices: <none>
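In that output, `[2/1] [_U]` is the degraded marker: two configured slots, one active, with `_` flagging the missing member. A small grep sketch (assuming this standard mdstat layout) that flags degraded arrays:

```shell
# Flag degraded md arrays: any [UU]-style status containing '_' is degraded.
# Demo uses a copy of the excerpt above; on a real box, grep /proc/mdstat.
mdstat='md1 : active raid1 sdb3[1]
      998933504 blocks super 1.2 [2/1] [_U]'
echo "$mdstat" | grep -E '\[[U_]*_[U_]*\]' && echo 'degraded array found'
```

A healthy `[2/2] [UU]` line contains no underscore and is not matched.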

mdadm --detail /dev/md1

/dev/md1:
           Version : 1.2
     Creation Time : Sat Aug 13 14:05:32 2022
        Raid Level : raid1
        Array Size : 998933504 (952.66 GiB 1022.91 GB)
     Used Dev Size : 998933504 (952.66 GiB 1022.91 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Aug 13 15:02:06 2022
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : debian:1
              UUID : 81cd0062:214021da:f76ccc0a:829afc8e
            Events : 1436

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       19        1      active sync   /dev/sdb3

mdadm --grow /dev/md1 --level=0
mdadm: level of /dev/md1 changed to raid0
mdadm --misc --detail /dev/md1

/dev/md1:
           Version : 1.2
     Creation Time : Sat Aug 13 14:05:32 2022
        Raid Level : raid0
        Array Size : 998933504 (952.66 GiB 1022.91 GB)
      Raid Devices : 1
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sat Aug 13 15:03:23 2022
             State : clean
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 64K

Consistency Policy : none

              Name : debian:1
              UUID : 81cd0062:214021da:f76ccc0a:829afc8e
            Events : 1442

    Number   Major   Minor   RaidDevice State
       1       8       19        0      active sync   /dev/sdb3
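mdadm counts the array size in 1 KiB blocks; the dual GiB/GB figure it prints is just two unit conversions. A quick awk check of the numbers above:

```shell
# Reproduce mdadm's "(952.66 GiB 1022.91 GB)" from its 1 KiB block count.
awk 'BEGIN {
    blocks = 998933504                       # array size in 1 KiB blocks
    printf "%.2f GiB %.2f GB\n", blocks / 1024^2, blocks * 1024 / 1000^3
}'
# -> 952.66 GiB 1022.91 GB
```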

mdadm --add /dev/md1 /dev/sda3

P.S. If the add is refused, try: mdadm /dev/md1 --grow -l 0 --raid-devices=2 -a /dev/sda3

mdadm: level of /dev/md1 changed to raid4
mdadm: added /dev/sda3

(The switch to raid4 is expected: mdadm grows a RAID0 by temporarily converting it to a degraded RAID4, reshaping the data across the new member, and dropping back to RAID0 afterwards.)
mdadm --misc --detail /dev/md1

/dev/md1:
           Version : 1.2
     Creation Time : Sat Aug 13 14:05:32 2022
        Raid Level : raid4
        Array Size : 998933504 (952.66 GiB 1022.91 GB)
     Used Dev Size : 998933504 (952.66 GiB 1022.91 GB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Aug 13 15:07:32 2022
             State : active, FAILED, reshaping
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

        Chunk Size : 64K

Consistency Policy : resync

    Reshape Status : 0% complete
     Delta Devices : 1, (2->3)

              Name : debian:1
              UUID : 81cd0062:214021da:f76ccc0a:829afc8e
            Events : 1488

    Number   Major   Minor   RaidDevice State
       1       8       19        0      active sync   /dev/sdb3
       2       8        3        1      spare rebuilding   /dev/sda3
       -       0        0        2      removed
watch cat /proc/mdstat

Every 2.0s: cat /proc/mdstat                    op1: Sat Aug 13 15:08:46 2022

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid4 sda3[2] sdb3[1]
      998933504 blocks super 1.2 level 4, 64k chunk, algorithm 5 [3/2] [U__]
      [>....................]  reshape =  1.1% (11306924/998933504) finish=201.1min speed=81843K/sec

md0 : active raid1 sda2[0] sdb2[1]
      613376 blocks super 1.2 [2/2] [UU]

unused devices: <none>
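The three-plus-hour estimate is again just remaining blocks over current speed; the same arithmetic with the reshape numbers:

```shell
# Reshape ETA: remaining KiB / speed (KiB/s), in minutes.
awk 'BEGIN {
    total = 998933504; done = 11306924; speed = 81843
    printf "%.1f min\n", (total - done) / speed / 60
}'
# -> 201.1 min, matching finish=201.1min
```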

The reshape then grinds on; after the long wait and a reboot, the array comes up as RAID0.

After the reboot, confirm the array is now RAID0, then grow the filesystem to use the added capacity:

mdadm --misc --detail /dev/md1

resize2fs /dev/md1

df -h
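For reference, a RAID0's capacity is the sum of its members (minus a little metadata), so the two ~953 GiB partitions should yield roughly:

```shell
# RAID0 capacity = sum of member sizes (superblock overhead ignored).
# Two 998933504 KiB partitions, as in the array above:
awk 'BEGIN {
    member = 998933504                       # KiB per partition
    printf "%.2f TiB\n", 2 * member / 1024^3
}'
# -> 1.86 TiB (df -h will show somewhat less after filesystem overhead)
```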