Posts by tschensie

OK, I'm downloading the Debian 7.5 DVD ISOs at the moment.
    What do I have to install from them to get OMV Kralizec running ?
    I want to keep the system as small as possible and not install packages I don't need for OMV, Plex and SABnzbd.
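    For what it's worth, a minimal sketch of what that could look like (assuming the standard OMV apt repository serves Kralizec; the exact repo line is an assumption on my part):


    Code
    # on a minimal Debian install (base system + SSH only):
    echo "deb http://packages.openmediavault.org/public kralizec main" > /etc/apt/sources.list.d/openmediavault.list
    apt-get update
    apt-get install openmediavault-keyring openmediavault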

I also thought about installing Kralizec.
    But I want to have a stable NAS.
    It takes too much time to copy about 18 TB of data from backup to the NAS in case of a failure (my system only has USB 2.0 ports and no space to install a 3.0 card).
    Building my Plex media database has also taken a lot of time, so I would like to keep it in the newly installed system (if possible).

Hi.
    I'm running OMV 0.4.38 with an 8 × 3 TB RAID5.
    After a disk failure I can't recover my RAID, because at about 50% of every recovery OMV marks a second disk as a faulty spare and the RAID is degraded.
    I can force a reassembly and access my data again, but the recovery fails again at about 50%.
    I have a backup of all my data, so I can delete the RAID and build a new one (with 2 new disks instead of the faulty one).
    But this time I'll use a RAID6.
    So, since I have to redo everything anyway, I can also upgrade to OMV 0.5.
    But what is the best way ?
    I only use Plex Media Server and SABnzbd on the server.
    I would prefer a fresh install, but is it possible to keep my Plex database and my SAB settings ?
    The Plex database is located on the data disks of my RAID. Can I back up the Plex Media Server folder and put it back after the new install ?
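    A rough sketch of what backing it up could look like (the exact path on the data disks is an assumption, and Plex should be stopped first so the database is consistent):


    Code
    service plexmediaserver stop
    tar czf /root/plexdb-backup.tar.gz -C /media/<raid-uuid> plexmediaserver
    service plexmediaserver start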


    regards


    Jens

    OK, I tried something.


    I did "mdadm --assemble --force /dev/md127 /dev/sd[abdefghi]
    The superblock on drive sda was repaired, the raid started and i saw it again in the webgui.
    But the raid only had 7 of 8 drives and 1 spare (sdi).
    I wasn't able to recover the the raid, beacause omv found no extra drive. omv also did it not automatic.
    so i deleted drive sdi, reboot the server and the raid was found with 7 drives, 1 missing.
    Now i was able to recover the raid by choosing the deleted drive.
    The recovery is still running, my filesystem is still there and my data also.
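    For reference, the equivalent CLI steps would look roughly like this (whether the GUI delete also wipes the superblock is an assumption on my part):


    Code
    mdadm --manage /dev/md127 --remove /dev/sdi   # drop the stuck spare
    mdadm --zero-superblock /dev/sdi              # wipe its stale RAID metadata
    mdadm --manage /dev/md127 --add /dev/sdi      # re-add it so recovery can start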

I need some help.
    My RAID5 is gone (8 × 3 TB).
    Yesterday my OMV sent a mail about an mdadm event.
    So I looked into the web GUI and found 1 drive dead, RAID5 degraded.
    I replaced the dead drive with a new one and started a recovery.
    Then I got a new mail about an mdadm event.
    A look at the web GUI showed RAID5 - degraded - failed.
    I checked the drives: all 9 (system + 8 RAID drives) are online.


    Here are the outputs:


    mdadm.conf:


It seems that OMV put the sda device into a new RAID md126 (with one of the 8 disks), and md127 has only 6 disks + 1 spare.


Is there a way to repair this without losing my data ?
    When I looked at the recovery status this morning it was at 59%.
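    What eventually worked (see the follow-up above) was roughly this; stopping the stray md126 first is my assumption:


    Code
    mdadm --stop /dev/md126                                # stop the stray array
    mdadm --assemble --force /dev/md127 /dev/sd[abdefghi]  # reassemble the original one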


    regards


    Jens


P.S.: Sorry, I forgot to say that I run OMV 0.4.38.

OK, wiping worked.
    But I can't shrink the ext4 with GParted.
    The partition editor says it can't find the filesystem superblock, so some options are not available (e.g. changing the filesystem size).


    So I wanted to try your option:


    1. fsck.ext4 -f /dev/md127
    2. resize2fs /dev/md127 xxTB


But OMV warns not to run fsck.ext4 because md127 is mounted and the filesystem would be damaged.
    I can't unmount the filesystem via the GUI.


    So, how can I shrink my ext4 partition ?
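    A sketch of doing it from the shell instead (the target size is just an example, and anything using the filesystem, e.g. Plex and SABnzbd, has to be stopped before the unmount succeeds):


    Code
    service plexmediaserver stop    # stop anything holding the filesystem open
    umount /dev/md127
    fsck.ext4 -f /dev/md127
    resize2fs /dev/md127 8000G      # ~8 TB; target size is an example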

Ok, shrinking worked.


    But I can't create a new array with the removed drives.
    After shrinking the array, the 3 removed drives were shown as spare drives,
    so I removed them manually with mdadm --manage /dev/md127 --remove /dev/sdX.


    After this the drives were no longer shown in the old array, but I also couldn't create a new array, because the removed drives don't show up in the GUI.
    After a reboot the 3 removed drives are shown as spare drives again.
    How can I remove the drives from array 1 and create a second array ?
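    Presumably the removed drives still carry their old RAID superblock, which is why mdadm grabs them as spares again at boot. A sketch of clearing it (drive letters are placeholders):


    Code
    mdadm --manage /dev/md127 --remove /dev/sdX   # per removed drive
    mdadm --zero-superblock /dev/sdX              # wipe the old RAID metadata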

Ok, thx.
    I'll give it a try next week when I'm back from holiday.


    In your example, are you sure about the new array size of 23G ? Or did you mean 32G (4 disks, 8 GB each) ?


    So, in my case mdadm --detail /dev/md127 shows the following:



The resize will be done with GParted.
    So I have to do the following 3 steps:


    1. mdadm /dev/md127 --grow --array-size=8790795264 (3 × used dev size of 2930265088)
    2. mdadm /dev/md127 --grow --raid-devices=4 --backup-file=/tmp/backup
    3. wait :-)


    Is this correct ?
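    While the reshape is running, its progress can be watched with:


    Code
    cat /proc/mdstat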

    Quote from "davidh2k"

If just under 7 TB of the 10.72 TB is/was in use, can't you take one more disk out of the RAID?


Yes, but then the new RAID5 array would only be 6 TB, and thus too small to move all the data in one go.


But I think I'll buy another HDD next week after my holiday and then set up a second RAID with 4 disks.
    How do I actually remove the disks from the RAID array again (after I've shrunk the filesystem with GParted) ?

No, the RAID is not completely full.
    Originally just under 7 TB of the 10.72 TB was in use.
    Then I added 2 more disks to the RAID, but unfortunately I now can't grow the ext4 to 16.37 TB.
    To create a second array I would first have to remove the disks from the RAID again and buy one more disk.
    Hence my plan now:


    1. Shrink the ext4 partition to about 8 TB
    2. Additionally create an XFS partition of slightly more than 8 TB
    3. Move all data from ext4 to XFS (see the sketch below)
    4. Delete the ext4 partition
    5. Grow the XFS partition to 16.37 TB.


    Hopefully this will work ?
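    For step 3, the move could be a plain rsync (the mount points are placeholders):


    Code
    rsync -aH --progress /media/<ext4-uuid>/ /media/<xfs-uuid>/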


Regards


    Tschensie

I'll continue in German now, then I don't have to think so hard.


Is it not possible to create a second filesystem on the same RAID ?
    My RAID is now 16.37 TB, the ext4 filesystem 10.74 TB.
    When I try to create an XFS filesystem with the RAID's remaining capacity, the dialog window offers no choice under "drive".


How am I supposed to start migrating my data from ext4 to XFS without offloading everything to external disks again ?


Regards


    Tschensie

Hmmm, OK.
    Is there a way to switch to XFS without losing data ?
    Or can I "shrink" the 10.72 TB ext4 filesystem to 8 TB, so that I can create an 8 TB XFS filesystem to move the data to ?
    And after that, delete the ext4 and grow the XFS.
    Or is it planned to support >16 TB filesystems in future OMV versions ?


    Thx


    Tschensie

Hi,
    my OMV was running with 5 × 3 TB as RAID5.
    Now I've added 2 more 3 TB HDDs and grown the RAID.
    After that I tried to resize the ext4 filesystem, but got the following error:


    Code
    Failed to execute command 'sudo resize2fs /dev/md127':
    Fehler #4000:
    exception 'OMVException' with message 'Failed to execute command 'sudo resize2fs /dev/md127': ' in /var/www/openmediavault/rpc/filesystemmgmt.inc:574
    Stack trace:
    #0 [internal function]: FileSystemMgmtRpc->resize(Array)
    #1 /usr/share/php/openmediavault/rpc.inc(265): call_user_func_array(Array, Array)
    #2 /usr/share/php/openmediavault/rpc.inc(98): OMVRpc::exec('FileSystemMgmt', 'resize', Array)
    #3 /var/www/openmediavault/rpc.php(44): OMVJsonRpcServer->handle()
    #4 {main}


I also tried to execute the command in an SSH shell, but got the following error:

    Code
    root@nas:~# sudo resize2fs /dev/md127
    resize2fs 1.41.12 (17-May-2010)
    resize2fs: Die Datei ist zu groß beim Bestimmen der Dateisystemgröße
    root@nas:~#


Has anyone got an idea how to grow my filesystem ?
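    Presumably the grown array is now past the 16 TiB limit that ext4 without the 64bit feature (and the old resize2fs 1.41) can handle. A sketch for comparing the array and filesystem sizes (standard tools):


    Code
    mdadm --detail /dev/md127 | grep 'Array Size'   # array size after the grow
    tune2fs -l /dev/md127 | grep 'Block count'      # current filesystem size in blocks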

    Quote from "jhmiller"


Looks like you have another sab init script, do a


    Code
    ls /etc/init.d/


    and paste the output here; it would be better to remove the other sab init file (if there is one) than to use the rc.local file.


Yes, you were right.
    There was a SABnzbd and a sabnzbdplus init script.
    After deleting the sabnzbdplus one and running update-rc.d SABnzbd defaults, SABnzbd starts automatically after a reboot.
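    For reference, the fix as shell commands (the exact script name is an assumption based on the insserv message):


    Code
    rm /etc/init.d/sabnzbdplus      # remove the conflicting init script
    update-rc.d SABnzbd defaults    # register the remaining one for boot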


    Thx !!!

Removing and re-adding shows this:


    Code
    root@nas:~# update-rc.d -f SABnzbd remove
    update-rc.d: using dependency based boot sequencing
    root@nas:~# update-rc.d SABnzbd defaults
    update-rc.d: using dependency based boot sequencing
    insserv: script SABnzbd: service sabnzbdplus already provided!
    insserv: exiting now!
    update-rc.d: error: insserv rejected the script header


The rc.local file is empty.
    I'll try to insert the string this evening; OMV is busy at the moment (my children are watching a movie...). :-)

Hi.
    I've installed SABnzbd with your script and it worked fine.
    But after a reboot the service doesn't start automatically.
    I always have to log on with PuTTY and start it by typing "service SABnzbd start".
    I've tried reinstalling several times, but with no success.


    So, how can I get the service to start automatically after a reboot ?
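    A quick check whether the service is registered for boot at all (a sketch; runlevel 2 is Debian's default):


    Code
    ls /etc/rc2.d/ | grep -i sab    # an S* symlink means it should start at boot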

    Hi.


I've got a new server and want to set up a new OMV.
    Is it possible to take my RAID5 with all the data to the new OMV ?
    Will I see the RAID, the filesystem and the folders there ?
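    In general, mdadm arrays carry their metadata on the member disks themselves, so a sketch for checking on the new system would be:


    Code
    mdadm --assemble --scan   # assemble all arrays found on the attached disks
    cat /proc/mdstat          # verify the array came up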


Is there a way to back up / copy the config (users, shared folders etc.) from the old OMV to the new one ?
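    OMV keeps its settings in /etc/openmediavault/config.xml; whether it can simply be dropped onto a fresh install is doubtful, but copying it over as a reference is cheap (the hostname is a placeholder):


    Code
    scp /etc/openmediavault/config.xml root@new-omv:/root/old-config.xml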


    Thx.

    Plan D sounds good to me.


    Let's see if I get it right:


3 of my 3 TB disks are in USB cases at the moment.
    So: I plug 2 disks into the free bays, connect 3 disks via USB, build a RAID5 and a filesystem with these 5 × 3 TB drives, copy or move the files, remove the old filesystem and RAID, shut down the server, remove the 2 TB disks, put all the 3 TB disks into the bays, boot the server, and I should be lucky ? :?::D


    This would be the easiest way...
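    If the GUI route fails, the array/filesystem step from the shell would look roughly like this (device names are placeholders):


    Code
    mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[b-f]
    mkfs.ext4 /dev/md1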