I ended up doing a fresh install; all is now running smoothly!
Posts by yayaya
-
-
Hello,
I recently upgraded my system from OMV 4 to 5.
I noticed an error when trying to access Snapraid in the WebGUI:

Quote: The property 'rule-folder' does not exist in the model 'conf.service.snapraid.rule'.

And here are more details:

Quote: Error #0: OMV\AssertException: The property 'rule-folder' does not exist in the model 'conf.service.snapraid.rule'. in /usr/share/php/openmediavault/config/configobject.inc:71
Stack trace:
#0 /usr/share/php/openmediavault/config/configobject.inc(186): OMV\Config\ConfigObject->assertExists('rule-folder')
#1 /usr/share/php/openmediavault/config/configobject.inc(271): OMV\Config\ConfigObject->set('rule-folder', '/srv/4140fd8a-1...', false)
#2 /usr/share/php/openmediavault/config/configobject.inc(233): OMV\Config\ConfigObject->setFlatAssoc(Array, false, false)
#3 /usr/share/php/openmediavault/config/database.inc(85): OMV\Config\ConfigObject->setAssoc(Array, false)
#4 /usr/share/php/openmediavault/config/database.inc(96): OMV\Config\Database->get('conf.service.sn...', NULL)
#5 /usr/share/openmediavault/engined/rpc/snapraid.inc(182): OMV\Config\Database->getAssoc('conf.service.sn...')
#6 [internal function]: OMVRpcServiceSnapRaid->getRuleList(Array, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getRuleList', Array, Array)
#9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('SnapRaid', 'getRuleList', Array, Array, 1)
#10 {main}
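This kind of error usually means the upgraded config database still carries a property that the new plugin model no longer declares. A minimal sketch of how one might locate the stale entries, assuming the standard OMV config file at /etc/openmediavault/config.xml; the XML fragment below is invented for illustration and is not the real schema:

```shell
# On the server itself (back the file up before editing anything):
#   grep -n 'rule-folder' /etc/openmediavault/config.xml

# Self-contained illustration against a made-up fragment:
cat > /tmp/config-sample.xml <<'EOF'
<snapraid>
  <rules>
    <rule>
      <rule-folder>/srv/example-disk/share</rule-folder>
    </rule>
  </rules>
</snapraid>
EOF
grep -n 'rule-folder' /tmp/config-sample.xml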
Thanks in advance for your help!
-
Hello,
I just moved to a new location with a different internet provider.
I cannot access my OMV server.
I plugged it into a display; so far the only thing I know is its IP address, but the server seems invisible on my local network.
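Since the machine is reachable from its own console, a couple of standard commands can confirm its address and help find it from another machine; the 192.168.1.0/24 subnet below is only an assumption, so substitute whatever 'ip route' reports on the new network:

```shell
# On the server console: list interfaces and their current addresses
ip -br addr show

# From another machine on the LAN, a ping scan can reveal the server
# (requires nmap; subnet is an assumption):
#   sudo nmap -sn 192.168.1.0/24
```

A common gotcha after moving: the new router hands out a different subnet, so a static IP configured for the old network leaves the server unreachable.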
Thanks for your help!
-
I tried to figure out what was happening, but it was a nightmare (as far as my knowledge goes).
I had a spare SSD around so I used it to test a fresh install of OMV 4.x on it.
Once done, I rebooted with data and parity drives connected, but couldn't do a thing as errors popped up.
I disconnected the drives, booted again and installed the Snapraid and Mergerfs plugins. I reconnected the drives, and this time it rebooted fine; I then recreated the pool and the Snapraid configuration (took me <1 min). My data & parity disks were recognised just as on my previous OMV 3.x install.
This operation didn't touch my data in the process.
And Voilà, all is up and running!
In the end, I would be curious to know what caused the issue in the first place...
-
Hi,
I wanted to make the upgrade from OMV 3.x to 4.x.
I first checked that my plugins were compatible with OMV 4.x, then upgraded.
See attached a screenshot of the end of the operation.
I saw the Python issue and tried to fix it by editing the following lines:
- line 109: replace def remove(wr, selfref=ref(self)): with: def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):
- line 117: replace _remove_dead_weakref(d, wr.key) with: _atomic_removal(d, wr.key)
I was trying to find some clues after that, but I had to shut down the server for a while (electrical maintenance in the neighborhood).
Anyway, after powering it back on, I can see that:
- Transmission is working, as well as Samba and SSH
- when trying to connect to the WebGUI, I get: 502 Bad Gateway | nginx
The only "unusual" thing about this setup is that I run Transmission this way. No idea if it could have interfered with the upgrade process.
I don't mind starting over on a fresh OMV 4.x install if that's the best way to go, but I want to keep my current configuration based on a MergerFS pool & Snapraid.
Thanks!
-
Thanks!
Sync went well, all is up and running
-
First off, thanks gderf for trying to point me in the right direction!
I think this time I found the solution (see attached).
In short, if you are in the same situation, this command seems to be the one: snapraid sync -R
So basically, the 'ls -al' output showed no change after unmounting and remounting the parity drive.
I then decided to wipe the disk and rebuild the parity. That didn't fix anything: Snapraid immediately allocated the exact same amount of space for this drive when running the sync command.
Then, once rebuilt, I ran a test: a simple data transfer to the NAS. While the data disks' available space decreased as expected, the parity one didn't change at all.
In the end, I found a topic where someone made the following statement to another fellow who had the same issue: 'You had more data on the data disk in the past (in which case the parity space will be reused when you later add new data files)'
The solution to get back to normal: snapraid sync -R
Snapraid immediately decreased the used space on my parity drive, and it is now syncing.
I still have to wait 5 hours or so before making a final check, but things look promising!
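For anyone wanting to watch the effect, here is a sketch; the label path is the one appearing elsewhere in this thread, so adjust it to your own parity mount, and the sparse-file demo at the end is a general illustration of why du and ls can disagree, not something verified on this NAS:

```shell
# Repeatedly report the parity file's allocated size while sync runs:
#   watch -n 60 du -h /srv/dev-disk-by-label-ParityDiskA/snapraid.parity

# du reports allocated blocks, ls -l reports apparent length; on a sparse
# file the two differ, as this self-contained demo shows:
truncate -s 1G /tmp/sparse-demo   # 1 GiB apparent length, ~0 allocated
ls -l /tmp/sparse-demo            # size column shows 1073741824
du -k /tmp/sparse-demo            # allocated KiB: 0 (or a few)
```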
---
About the snapraid sync -R command, according to the manual:
'
-R, --force-realloc
In "sync" forces a full reallocation of files and rebuild of the parity.
This option can be used to completely reallocate all the files removing
the fragmentation, but reusing the hashes present in the content file
to validate data. Compared to -F, --force-full, this option reallocates
all the parity not having data protection during the operation. This
option can be used only with "sync".
'
---
-
Ok will do, but how can I unmount the parity drive?
Is there a way to do so without removing it from the Snapraid config?
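For what it's worth, unmounting only detaches the filesystem and leaves both the Snapraid configuration and the parity file on disk untouched. A sketch using the mount point named elsewhere in this thread (adjust to yours); the commands are shown commented out because they act on a live system:

```shell
# Detach the parity filesystem:
#   sudo umount /srv/dev-disk-by-label-ParityDiskA

# If umount reports "target is busy", see what still has files open there:
#   sudo lsof +f -- /srv/dev-disk-by-label-ParityDiskA

# Reattach later; OMV keeps the entry in /etc/fstab:
#   sudo mount /srv/dev-disk-by-label-ParityDiskA
```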
-
Ok, did a sync, nothing changed...
$ sudo snapraid sync
Self test...
Loading state from /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
Scanning disk SnapRaid1...
Scanning disk SnapRaid2...
Scanning disk SnapRaid3...
Using 917 MiB of memory for the FileSystem.
Initializing...
Resizing...
Saving state to /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
Saving state to /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
Saving state to /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
Verifying /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
Verifying /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
Verifying /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
Syncing...
Using 32 MiB of memory for 32 blocks of IO cache.
100% completed, 930 MB accessed in 0:00

SnapRaid1 3% | *
SnapRaid2 37% | *********************
SnapRaid3 24% | **************
parity 0% |
raid 2% | *
hash 2% | *
sched 32% | ******************
misc 0% |
|___________________________________________________________
wait time (total, less is better)

Everything OK
Saving state to /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
Saving state to /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
Saving state to /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
Verifying /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
Verifying /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
Verifying /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
-
Maybe in that one?
ls -al /srv/dev-disk-by-label-ParityDiskA/snapraid.parity
-rw------- 1 root root 3691169447936 août 30 23:06 /srv/dev-disk-by-label-ParityDiskA/snapraid.parity
-
Thanks. Here is what I get:
:~# ls -al
total 32
drwx------ 3 root root 4096 janv. 24 2018 .
drwxr-xr-x 26 root root 4096 août 29 14:44 ..
-rw------- 1 root root 1145 août 31 19:21 .bash_history
-rw-r--r-- 1 root root 570 janv. 31 2010 .bashrc
-rw------- 1 root root 0 janv. 24 2018 dead.letter
-rw-r--r-- 1 root root 268 janv. 10 2018 .inputrc
-rw------- 1 root root 26 août 29 16:54 .nano_history
-rw-r--r-- 1 root root 140 nov. 19 2007 .profile
drwx------ 2 root root 4096 janv. 10 2018 .ssh
-
The parity disk isn't in the mergerfs pool.
See attached.
-
Thanks, that did the trick!
As expected, after using the mergerfs.balance tool, all the disks have pretty much the same amount of data.
Except for the parity one: it's almost full... Any idea why? The data disks have 1/3 of empty space.
I did a sync in snapraid but that didn't help.
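One thing worth checking is how big the parity file actually is, since the thread's figures are raw byte counts. GNU coreutils' numfmt converts them; using the 3691169447936-byte size reported elsewhere in this thread:

```shell
# Convert a raw byte count to a human-readable IEC size:
numfmt --to=iec 3691169447936   # prints 3.4T
```

On a 4TB (roughly 3.6 TiB) drive, a 3.4 TiB parity file is indeed "almost full", which matches the symptom described above.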
-
Hello,
I might be in need of some help here!
I just added a new 4TB disk to my current pool.
Until then I had:
- 1x 4TB parity disk
- 2x 4TB data disks
- 1x SSD for the OS
Using MergerFS & Snapraid, I'd like to use the mergerfs.balance tool to balance the data across the disks, as the two data disks are pretty much full.
Problem is: I'm a bit confused on how to proceed.
According to this thread, one way to install it would be through the command: wget raw.githubusercontent.com/trap…4c87/src/mergerfs.balance
The thing is, I get an "error 400: bad request"... Anyway, I'm stuck at this point.
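The 400 is likely down to the URL being mangled when copied. The balance script is hosted in trapexit's mergerfs-tools repository on GitHub; a sketch, assuming it still lives at src/mergerfs.balance on the master branch (verify in the repo first):

```shell
# Fetch the script to a directory on $PATH and make it executable
# (it is a Python 3 script, so python3 must be installed):
wget -O /usr/local/bin/mergerfs.balance \
  "https://raw.githubusercontent.com/trapexit/mergerfs-tools/master/src/mergerfs.balance"
chmod +x /usr/local/bin/mergerfs.balance

# Show usage before pointing it at the pool:
mergerfs.balance --help
```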
Thanks in advance!
-
Cheers ryecoaaron, I spent some time in the config.xml file and all seems to work flawlessly!
-
Hi,
Anyone to help me out regarding this issue?
Thanks!
-
Hi all,
A few days ago I got a new HDD and wanted to create a RAID1 from my original OMV setup, which only included one drive (4TB WD Blue). To do so, I followed this thread: Create RAID 1 with existing Data Disk
All went pretty smoothly,
=> except for one thing: 2 lines in the File Systems section don't seem to be in order.
- the 1st one appeared once I wiped the original disk before including it in the RAID: it now shows "N/A" and "Missing" status.
- the 2nd one shows "linux_raid_member" and "N/A", and points to the original HDD now that it's in the RAID.
Here is the output of the "fdisk -l" command:
Code
Disk /dev/sda: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00099074

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   239855615   119926784   83  Linux
/dev/sda2       239857662   250068991     5105665    5  Extended
/dev/sda5       239857664   250068991     5105664   82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/md0: 4000.7 GB, 4000651542528 bytes
2 heads, 4 sectors/track, 976721568 cylinders, total 7813772544 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table
From this I gather there's an issue with the partition table, but I don't know how to fix it.
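Two of the messages in that output are expected rather than signs of damage, assuming the array was built as described. A sketch of the usual checks, using parted since this fdisk predates GPT support; the commands are commented out because they act on a live system:

```shell
# parted understands GPT, unlike this (old) fdisk:
#   sudo parted /dev/sdb print
#   sudo parted /dev/sdc print

# "Disk /dev/md0 doesn't contain a valid partition table" is normal here:
# the ext4 filesystem sits directly on the md array, not inside a partition.
#   sudo blkid /dev/md0
```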
Many thanks in advance for your help!
Edit: more results from the following commands.
- cat /proc/mdstat:
Code
$ sudo cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2] sdb1[1]
      3906886272 blocks super 1.2 [2/2] [UU]

unused devices: <none>
- blkid command:
Code
$ sudo blkid
/dev/sda1: UUID="a9e79f7b-2764-4388-a69a-b9c9de5de7e4" TYPE="ext4"
/dev/sda5: UUID="34f08ca6-0e01-4e06-ad33-b5dbcd3402ff" TYPE="swap"
/dev/sdb1: UUID="b01f8082-be76-9175-127b-400da1aff51b" UUID_SUB="f3d835c8-a2c1-3ce3-b031-be6fc00f3366" LABEL="ServeurMaison:0" TYPE="linux_raid_member"
/dev/md0: UUID="89460968-4bc6-4ce1-9958-5621dbb13270" TYPE="ext4"
/dev/sdc1: LABEL="rd1" UUID="5e8305ea-1f49-41ec-9600-906a0394c686" TYPE="ext4"
- mdadm --detail --scan --verbose command: