Strange problem when migrating a RAID array from QNAP to OMV

  • Hello,


    I am trying to migrate my existing RAID5 array, created on a QNAP NAS, to my OMV install.


    I have plugged in the drives, and OMV detects the RAID array in the 'RAID Management' tab, where it appears as device /dev/md124.


    However, when I go to the 'File Systems' tab to mount /dev/md124, it doesn't appear in the list. The tab shows /dev/md125 and /dev/md127, which I can mount successfully, but these are not my RAID array, and mounting them does not give me access to my data.


    I have attached screenshots of my RAID Management and File Systems tabs.


    Any ideas? Thank you in advance for your help!

  • ryecoaaron

    • Official Post

    What is the output of: sudo blkid
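
    For reference, a few read-only commands that show what the kernel sees (assuming the array really is /dev/md124, as in your screenshot; adjust the device name if yours differs):

        sudo blkid                      # list block devices and any filesystem signatures
        cat /proc/mdstat                # show assembled md arrays and their member disks
        sudo mdadm --detail /dev/md124  # show the array's level, state and members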

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks for your reply! But I can't even log in to the CLI to run the command (I've been managing everything through the web interface, which I can log in to without any problem).


    When I try to log in, I enter my username and password, and I get the following screen. After that, it takes me back to the login prompt:


    [ OK ] Stopped Getty on tty1.

    [ OK ] Started Getty on tty1. 

    • Official Post

    When I try to log in, I enter my username and password, and I get the following screen. After that, it takes me back to the login prompt:

    Add your user to the ssh and sudo groups in the web interface and then log in via ssh (use PuTTY or another terminal client).
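
    For example, from another machine on the network (the user name and address below are placeholders; substitute your own OMV user and the box's IP):

        ssh youruser@192.168.1.50       # placeholder user and IP; replace with your own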


    • Official Post

    it was impossible for me to copy-paste the output.

    That is why PuTTY is a better choice.


    I see array members but no filesystem on the array. This is why OMV isn't showing anything in the Filesystems tab. Did you run any mdadm commands to re-assemble or fix the array when you added it to OMV?
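
    If you want to double-check that yourself, these read-only probes report any filesystem signature on the assembled array (again assuming /dev/md124 is the array):

        lsblk -f /dev/md124             # the FSTYPE column stays empty if no filesystem is detected
        sudo file -s /dev/md124         # reads the first blocks and names any known signature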


  • No, I didn't! I did my due diligence research via Google before migrating my RAID array, but forum posts and such were scarce. The only one I found was about someone who plugged in their RAID array and OMV managed to read it straight away, and they were able to mount it immediately.


    Can you point me in the direction of where I can learn about mdadm commands, and what I need to run?


    Thanks again

    • Official Post

    The only one I found was about someone who plugged in their RAID array and OMV managed to read it straight away, and they were able to mount it immediately.

    QNAP runs Linux and uses mdadm just like OMV, but you don't say which model. The ext3 filesystems make me think it was an old one.

    Can you point me in the direction of where I can learn about mdadm commands, and what I need to run?

    The problem is that the array is assembled and clean, so there is nothing to "fix" with mdadm. What kind of system is this on? 32TB is a big filesystem, and your system may not support one that large.
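
    A couple of quick checks you could post, assuming the array device is still /dev/md124 (the second one only prints something if an ext2/3/4 superblock is actually there):

        uname -m                        # CPU architecture; 32-bit kernels cap ext filesystems around 16TB
        sudo dumpe2fs -h /dev/md124     # dump the ext superblock header, including size and feature flags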


    • Official Post

    I had a TS-451.

    Not as old as I thought the system might be, and the new system should be fine. I can't explain what happened to the array. What is the output of: wipefs -n /dev/md124 (no, this command will not wipe the disk, and it never could without sudo anyway)
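
    To be explicit about why that is safe: -n is short for --no-act, so wipefs only reads and reports here; nothing is written to the device:

        wipefs -n /dev/md124            # -n / --no-act: list signatures without modifying anything
        sudo wipefs -n /dev/md124       # sudo may still be needed just to read the device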


    • Official Post

    And stuff like this is why I stay away from these threads...


    Is util-linux installed? If not: sudo apt-get install util-linux

    After that, does /usr/sbin/wipefs -n /dev/md124 work?
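
    If the shell said "command not found" before, it may just be a PATH issue, since on Debian wipefs is shipped by util-linux and /usr/sbin is not always in a regular user's PATH. These checks show where the binary lives:

        command -v wipefs               # prints the path the shell would use, if any
        dpkg -S /usr/sbin/wipefs        # names the package that owns the file (should be util-linux)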

