Posts by yayaya

    Hello,

    I recently upgraded my system from OMV 4 to 5.

    I noticed an error when trying to access Snapraid in the WebGUI:

    Quote

    The property 'rule-folder' does not exist in the model 'conf.service.snapraid.rule'.


    And here are more details:


    Quote

    Erreur #0: OMV\AssertException: The property 'rule-folder' does not exist in the model 'conf.service.snapraid.rule'. in /usr/share/php/openmediavault/config/configobject.inc:71
    Stack trace:
    #0 /usr/share/php/openmediavault/config/configobject.inc(186): OMV\Config\ConfigObject->assertExists('rule-folder')
    #1 /usr/share/php/openmediavault/config/configobject.inc(271): OMV\Config\ConfigObject->set('rule-folder', '/srv/4140fd8a-1...', false)
    #2 /usr/share/php/openmediavault/config/configobject.inc(233): OMV\Config\ConfigObject->setFlatAssoc(Array, false, false)
    #3 /usr/share/php/openmediavault/config/database.inc(85): OMV\Config\ConfigObject->setAssoc(Array, false)
    #4 /usr/share/php/openmediavault/config/database.inc(96): OMV\Config\Database->get('conf.service.sn...', NULL)
    #5 /usr/share/openmediavault/engined/rpc/snapraid.inc(182): OMV\Config\Database->getAssoc('conf.service.sn...')
    #6 [internal function]: OMVRpcServiceSnapRaid->getRuleList(Array, Array)
    #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getRuleList', Array, Array)
    #9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('SnapRaid', 'getRuleList', Array, Array, 1)
    #10 {main}
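    In case it helps with the diagnosis, this is roughly how the stored SnapRAID rules can be inspected from the shell; I'm assuming omv-confdbadm and the conf.service.snapraid data model ID (taken from the trace above) are the right ones, and that the plugin keeps its rules under a <snapraid> tag in config.xml.

    Code
    # Dump what the config database currently holds for the SnapRAID plugin
    # (data model ID taken from the error message above; adjust if it differs)
    $ sudo omv-confdbadm read conf.service.snapraid

    # The same data is stored as plain XML in the main config file, so the
    # raw rule entries can be checked there too (tag name is my assumption)
    $ sudo grep -A5 '<snapraid>' /etc/openmediavault/config.xml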


    Thanks in advance for your help !

    Hello,


    I just moved to a new location with a different internet provider.


    I cannot access my OMV server.


    I plugged it into a display; the only thing I know so far is its IP address, but the server seems invisible on my local network.
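    If it helps, this is roughly what I can check from the attached display; nothing here is specific to my setup.

    Code
    # Show the addresses the server actually has on its interfaces
    $ sudo ip addr show

    # Check whether the routing still matches the new LAN; a static IP left
    # over from the old provider's subnet would make the box unreachable
    $ sudo ip route show

    # OMV ships a console helper that can reconfigure the network interface
    $ sudo omv-firstaid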


    Thanks for your help!

    I tried to figure out what was happening, but it was a nightmare (as far as my knowledge goes).


    I had a spare SSD around, so I used it to test a fresh install of OMV 4.x.


    Once done, I rebooted with data and parity drives connected, but couldn't do a thing as errors popped up.


    I disconnected the drives, booted again and installed the SnapRAID and MergerFS plugins. I reconnected the drives, rebooted fine this time, then recreated the pool and the SnapRAID configuration (took me <1 min). My data & parity disks were recognised just as on my previous OMV 3.x install.
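    For anyone redoing the same steps, a quick way to confirm the recreated configuration really sees the existing data and parity (the pool mount point below is only an example, yours will differ):

    Code
    # Confirm the MergerFS pool is mounted and shows the combined capacity
    $ df -h /srv/mergerfs/pool

    # Ask SnapRAID what it thinks of the array; with the old content and
    # parity files still in place it should report the existing files
    $ sudo snapraid status
    $ sudo snapraid diff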


    This operation didn't touch my data in the process.


    And Voilà, all is up and running!


    In the end, I would be curious to know what caused the issue in the first place...

    Hi,


    I wanted to make the upgrade from OMV 3.x to 4.x.


    I first checked that my plugins were compatible with OMV 4.x, then upgraded.


    See attached a screenshot of the end of the operation.



    I saw the Python issue and tried to fix it by editing the following lines (a sketch of the edit is below):
    - line 109: replace "def remove(wr, selfref=ref(self)):" with "def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):"
    - line 117: replace "_remove_dead_weakref(d, wr.key)" with "_atomic_removal(d, wr.key)"
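    For context, this is how the two replacements can be applied from the shell. I'm assuming the file in question is /usr/lib/python2.7/weakref.py (the line numbers above refer to it); adjust the path if yours differs.

    Code
    # Back up the file first, then apply the two replacements listed above
    $ sudo cp /usr/lib/python2.7/weakref.py /usr/lib/python2.7/weakref.py.bak
    $ sudo sed -i 's/def remove(wr, selfref=ref(self)):/def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):/' /usr/lib/python2.7/weakref.py
    $ sudo sed -i 's/_remove_dead_weakref(d, wr.key)/_atomic_removal(d, wr.key)/' /usr/lib/python2.7/weakref.py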



    I was trying to find some clues after that, but I had to shut down the server for a while (electrical maintenance in the neighborhood).
    Anyway, after powering it back on, I can see that:
    - Transmission is working, as well as Samba and SSH
    - when trying to connect to the WebGUI, I get "502 Bad Gateway" from nginx (see the sketch just below)
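    Since a 502 from nginx usually means the backend behind it isn't answering, this is roughly what I can check next; the service names are my assumption for a Debian 9 / OMV 4.x box, so adjust them to whatever 'systemctl list-units' shows.

    Code
    # nginx only proxies the WebGUI; a 502 usually points at PHP-FPM or the
    # OMV engine daemon behind it being down
    $ sudo systemctl status nginx php7.0-fpm openmediavault-engined

    # Recent errors from the engine daemon, if that is the one failing
    $ sudo journalctl -u openmediavault-engined --since today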


    The only "unusual" thing about this setup is that I run Transmission this way. No idea if it could have interfere in the upgrading process.


    I don't mind starting over with a fresh install of OMV 4.x if that's the best way to go, but I want to keep my current configuration based on a MergerFS pool & SnapRAID.


    Thanks!

    First off, thanks gderf for trying to point me in the right direction!



    I think this time I found the solution (see attached).


    In short, if you are in the same situation, this command seems to be the one: snapraid sync -R



    So basically, according to 'ls -al', I saw no change after unmounting and remounting the parity drive.


    I decided to wipe the disk and rebuild the parity. This didn't fix anything: SnapRAID immediately allocated the exact same amount of space on this drive when using the sync command.


    Then, once rebuilt, I did a test: a simple transfer of data to the NAS. While the data disks' available space decreased as expected, the parity one didn't change at all.
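    For anyone wanting to reproduce that check from the shell, something like this works; the mount points and the parity file name are placeholders for my actual labels.

    Code
    # Compare used space on the data disks vs. the parity disk before and
    # after copying files (the mount points are placeholders, use your own)
    $ df -h /srv/dev-disk-by-label-data1 /srv/dev-disk-by-label-data2 /srv/dev-disk-by-label-parity

    # The parity file itself can also be watched directly
    # (file name may differ depending on your configuration)
    $ ls -lh /srv/dev-disk-by-label-parity/snapraid.parity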


    In the end, I found a topic where someone made the following statement to another fellow who had the same issue: 'You had more data on the data disk in the past (in which case the parity space will be reused when you later add new data files)'



    The solution to get back to normal: snapraid sync -R


    SnapRAID immediately decreased the used space on my parity drive, and it is now syncing.


    I still have to wait 5 hours or so before making a final check, but things look promising!


    ---
    About the snapraid sync -R command, according to the manual:
    '
    -R, --force-realloc

    In "sync" forces a full reallocation of files and rebuild of the parity.
    This option can be used to completely reallocate all the files removing
    the fragmentation, but reusing the hashes present in the content file
    to validate data. Compared to -F, --force-full, this option reallocates
    all the parity not having data protection during the operation. This
    option can be used only with "sync".
    '
    ---
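    For reference, a minimal run-through of the command with a status check afterwards:

    Code
    # Reallocate all files and rebuild the parity from scratch; note the
    # array is not protected while this runs (see the manual excerpt above)
    $ sudo snapraid sync -R

    # Afterwards, confirm everything is in sync
    $ sudo snapraid status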

    Ok, did a sync, nothing changed...


    Thanks. Here is what I get:


    :~# ls -al
    total 32
    drwx------ 3 root root 4096 janv. 24 2018 .
    drwxr-xr-x 26 root root 4096 août 29 14:44 ..
    -rw------- 1 root root 1145 août 31 19:21 .bash_history
    -rw-r--r-- 1 root root 570 janv. 31 2010 .bashrc
    -rw------- 1 root root 0 janv. 24 2018 dead.letter
    -rw-r--r-- 1 root root 268 janv. 10 2018 .inputrc
    -rw------- 1 root root 26 août 29 16:54 .nano_history
    -rw-r--r-- 1 root root 140 nov. 19 2007 .profile
    drwx------ 2 root root 4096 janv. 10 2018 .ssh
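    (Rereading this, the listing above was taken in /root; the check probably makes more sense against the parity drive's mount point, something like the following, where the path is only a placeholder for my label.)

    Code
    # List the parity drive's mount point instead (placeholder path)
    $ ls -al /srv/dev-disk-by-label-parity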

    Hello,


    I might be in need of some help here!


    I just added a new 4 TB disk to my current pool.


    Until then I had:
    - 1x 4 TB parity disk
    - 2x 4 TB data disks
    - 1x SSD for the OS


    Using MergerFS & SnapRAID, I'd like to use the mergerfs.balance tool to spread the data across the disks, as the other two data disks are pretty much full.



    Problem is: I'm a bit confused on how to proceed.



    According to this thread, one way to install it would be through the command: wget raw.githubusercontent.com/trap…4c87/src/mergerfs.balance


    The thing is, I get an "error 400: bad request"... Anyway, I'm stuck at this point.
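    In case it helps, here is the kind of thing I was trying. The full URL is my assumption based on the trapexit/mergerfs-tools repository (the one above is truncated by the forum), and the install path and pool mount point are just examples.

    Code
    # Fetch the balance script; the full URL is an assumption on my part,
    # based on the trapexit/mergerfs-tools repository
    $ sudo wget -O /usr/local/bin/mergerfs.balance https://raw.githubusercontent.com/trapexit/mergerfs-tools/master/src/mergerfs.balance
    $ sudo chmod +x /usr/local/bin/mergerfs.balance

    # Then point it at the pool mount point (placeholder path)
    $ sudo mergerfs.balance /srv/mergerfs/pool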



    Thanks in advance!

    Hi all,


    A few days ago I got a new HDD and wanted to create a RAID1 from my original OMV setup, which only included one drive (4 TB WD Blue). To do so, I followed this thread: Create RAID 1 with existing Data Disk


    All went pretty smoothly, except for one thing: two entries under File Systems don't seem to be in order.



    - the 1st one appeared once I wiped the original disk before including it in the RAID: as a result, it shows "N/A" and "Missing" status.


    - the 2nd one shows "linux_raid_member" and "N/A"; it points to the original HDD, which is now part of the RAID.



    Here is the output of the "fdisk -l" command:



    From this I gather there's an issue with the partition table, but I don't know how to fix it.
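    In the meantime, here is some additional, non-destructive information I can pull from the shell; the device name is only an example from my setup.

    Code
    # Overview of disks, partitions and what sits on them
    $ sudo lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

    # List (without erasing) any leftover filesystem or RAID signatures
    # on the disk in question; the device name is an example
    $ sudo wipefs --no-act /dev/sdc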


    Many thanks in advance for your help!

    Edit: output from the following commands


    - cat /proc/mdstat command:

    Code
    $ sudo cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[2] sdb1[1]
          3906886272 blocks super 1.2 [2/2] [UU]
    
    
    unused devices: <none>


    - blkid command:

    Code
    $ sudo blkid
    /dev/sda1: UUID="a9e79f7b-2764-4388-a69a-b9c9de5de7e4" TYPE="ext4"
    /dev/sda5: UUID="34f08ca6-0e01-4e06-ad33-b5dbcd3402ff" TYPE="swap"
    /dev/sdb1: UUID="b01f8082-be76-9175-127b-400da1aff51b" UUID_SUB="f3d835c8-a2c1-3ce3-b031-be6fc00f3366" LABEL="ServeurMaison:0" TYPE="linux_raid_member"
    /dev/md0: UUID="89460968-4bc6-4ce1-9958-5621dbb13270" TYPE="ext4"
    /dev/sdc1: LABEL="rd1" UUID="5e8305ea-1f49-41ec-9600-906a0394c686" TYPE="ext4"


    - mdadm --detail --scan --verbose command:

    Code
    $ sudo mdadm --detail --scan --verbose
    ARRAY /dev/md/0 level=raid1 num-devices=2 metadata=1.2 name=ServeurMaison:0 UUID=b01f8082:be769175:127b400d:a1aff51b
       devices=/dev/sdc1,/dev/sdb1