Software RAID10 error: duplicate PVs
Hi, I hope someone can help. After upgrading from Debian 8 to 9, my software RAID is broken. The system can't bring it up and gives this error:
root@HomeServer:~# vgdisplay
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
WARNING: PV 9nPKqU-AQ08-d0Tz-8Ld1-Ryxi-bjnB-2TRsgv on /dev/sdc was already found on /dev/sdb.
WARNING: PV 9nPKqU-AQ08-d0Tz-8Ld1-Ryxi-bjnB-2TRsgv on /dev/md126 was already found on /dev/sdb.
WARNING: PV 9nPKqU-AQ08-d0Tz-8Ld1-Ryxi-bjnB-2TRsgv prefers device /dev/sdb because device was seen first.
WARNING: PV 9nPKqU-AQ08-d0Tz-8Ld1-Ryxi-bjnB-2TRsgv prefers device /dev/md126 because device size is correct.
--- Volume group ---
VG Name               system21211
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  53
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                8
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               5.46 TiB
PE Size               4.00 MiB
Total PE              1430783
Alloc PE / Size       1331200 / 5.08 TiB
Free PE / Size        99583 / 389.00 GiB
VG UUID               HT1nHy-FBjW-lZSM-yMoh-Wxwz-0OGR-zFCdlZ
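If I understand the warnings correctly, LVM is scanning the raw member disks (/dev/sdb through /dev/sde) as well as the assembled array, so it sees the same PV UUID several times. From what I have read, one possible fix (untested on my part, just a sketch based on the lvm.conf man page) would be a global_filter in /etc/lvm/lvm.conf that only accepts the md device and rejects the raw members:

devices {
    # sketch: accept the assembled array, reject its member disks, accept the rest
    global_filter = [ "a|^/dev/md126$|", "r|^/dev/sd[b-e]|", "a|.*|" ]
}

But I am not sure whether a filter is the right approach here, or whether the vgimportclone route mentioned in the warning applies instead, so I have not changed anything yet.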
The status of the RAID looks like this:
root@HomeServer:~# cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md126 : active raid10 sdb[3] sdc[2] sdd[1] sde[0]
5860491264 blocks super external:/md127/0 64K chunks 2 near-copies [4/4] [UUUU]
md127 : inactive sde[3](S) sdc[2](S) sdd[1](S) sdb[0](S)
12612 blocks super external:imsm
unused devices: <none>
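The array itself assembles cleanly ([UUUU]), so the RAID side looks healthy to me; the problem seems to be on the LVM scanning side. If it helps, I can also post which device LVM currently picks for the PV, e.g. from:

pvs -o pv_name,pv_uuid,vg_name,dev_size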
root@HomeServer:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid10
Array Size : 5860491264 (5589.00 GiB 6001.14 GB)
Used Dev Size : 2930245760 (2794.50 GiB 3000.57 GB)
Raid Devices : 4
Total Devices : 4
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 64K
UUID : 672b4cb5:5ce8b308:2875b2b8:58889b5d
Number  Major  Minor  RaidDevice  State
3       8      16     0           active sync set-A   /dev/sdb
2       8      32     1           active sync set-B   /dev/sdc
1       8      48     2           active sync set-A   /dev/sdd
0       8      64     3           active sync set-B   /dev/sde
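My rough plan, once the duplicates are resolved (e.g. with the filter sketched above), was simply to do what the warning itself suggests, something like:

pvscan --cache
vgchange -ay system21211
lvs system21211

but I would like a second opinion before touching anything.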
Does anyone have an idea?
Thomas