Posts by ddavis1086
-
Thank you for the help. I ran that but still have errors. I will try some other options.
Code
mergerfs.balance -e aquota.group -e listaquota.group /srv/501bfd9a-bebf-48e5-805c-91ca31fa6f0f/
file: snapraid.content.tmp
from: /srv/dev-disk-by-uuid-042bcc15-b27e-4750-9b93-67a81b9b6fd1
to: /srv/dev-disk-by-uuid-a9ed4e06-21e1-41dd-8601-0bbdfed14773
rsync -avlHAXWE --relative --progress --remove-source-files /srv/dev-disk-by-uuid-042bcc15-b27e-4750-9b93-67a81b9b6fd1/./snapraid.content.tmp /srv/dev-disk-by-uuid-a9ed4e06-21e1-41dd-8601-0bbdfed14773/
sending incremental file list
snapraid.content.tmp
              0 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)
sent 106 bytes  received 43 bytes  298.00 bytes/sec
total size is 0  speedup is 0.00
file: aquota.user
from: /srv/dev-disk-by-uuid-042bcc15-b27e-4750-9b93-67a81b9b6fd1
to: /srv/dev-disk-by-uuid-a9ed4e06-21e1-41dd-8601-0bbdfed14773
rsync -avlHAXWE --relative --progress --remove-source-files /srv/dev-disk-by-uuid-042bcc15-b27e-4750-9b93-67a81b9b6fd1/./aquota.user /srv/dev-disk-by-uuid-a9ed4e06-21e1-41dd-8601-0bbdfed14773/
sending incremental file list
aquota.user
          7,168 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)
rsync: rename "/srv/dev-disk-by-uuid-a9ed4e06-21e1-41dd-8601-0bbdfed14773/.aquota.user.PPOFCC" -> "aquota.user": Operation not permitted (1)
sent 7,268 bytes  received 180 bytes  14,896.00 bytes/sec
total size is 7,168  speedup is 0.96
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
ERROR - exited with exit code: 23
Branches within 2.0% range:
 * /srv/dev-disk-by-uuid-3508d6d2-600b-4aeb-9e39-94a526b3fdce: 68.60% free
 * /srv/dev-disk-by-uuid-cd3e2c82-6290-4419-9ea6-0b5099ae0a9f: 98.17% free
 * /srv/dev-disk-by-uuid-a9ed4e06-21e1-41dd-8601-0bbdfed14773: 98.25% free
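One option I may still try: the only file now failing is aquota.user, and quota files apparently cannot be renamed while quotas are active, so disabling quotas for the duration of the balance might work. quotaoff/quotaon are the standard Linux tools; that quotas are the cause here is my assumption.
Code
# assumption: the rename of aquota.user fails because disk quotas are
# enabled on the target filesystem; turn them off, balance, restore
quotaoff -a
mergerfs.balance -e aquota.group -e listaquota.group /srv/501bfd9a-bebf-48e5-805c-91ca31fa6f0f/
quotaon -a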
I will post if I get it to run. Thanks again,
Dennis
-
Same error when I ran it.
Dennis
-
I ran it and got an error, and I'm not sure what it means:
Code
root@ravenwood:/srv# mergerfs.balance /srv/501bfd9a-bebf-48e5-805c-91ca31fa6f0f/
file: aquota.group
from: /srv/dev-disk-by-uuid-042bcc15-b27e-4750-9b93-67a81b9b6fd1
to: /srv/dev-disk-by-uuid-cd3e2c82-6290-4419-9ea6-0b5099ae0a9f
rsync -avlHAXWE --relative --progress --remove-source-files /srv/dev-disk-by-uuid-042bcc15-b27e-4750-9b93-67a81b9b6fd1/./aquota.group /srv/dev-disk-by-uuid-cd3e2c82-6290-4419-9ea6-0b5099ae0a9f/
sending incremental file list
aquota.group
          6,144 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)
rsync: rename "/srv/dev-disk-by-uuid-cd3e2c82-6290-4419-9ea6-0b5099ae0a9f/.aquota.group.tjaMfE" -> "aquota.group": Operation not permitted (1)
sent 6,245 bytes  received 182 bytes  12,854.00 bytes/sec
total size is 6,144  speedup is 0.96
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
ERROR - exited with exit code: 23
Branches within 2.0% range:
 * /srv/dev-disk-by-uuid-3508d6d2-600b-4aeb-9e39-94a526b3fdce: 68.60% free
 * /srv/dev-disk-by-uuid-a9ed4e06-21e1-41dd-8601-0bbdfed14773: 98.56% free
 * /srv/dev-disk-by-uuid-cd3e2c82-6290-4419-9ea6-0b5099ae0a9f: 98.59% free
Any ideas on what this means or what I need to do?
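My only guess so far: aquota.group looks like a disk-quota bookkeeping file rather than my data, so maybe the balancer just needs to skip it. Something like the sketch below might work; using -e to exclude by name, and treating aquota.user the same way, are both my assumptions.
Code
# sketch: skip the quota files so only regular data gets moved
mergerfs.balance -e aquota.group -e aquota.user /srv/501bfd9a-bebf-48e5-805c-91ca31fa6f0f/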
Thanks again for the help,
Dennis
-
Hello all. I am using OMV 5 with mergerFS. When I set this up I set the policy to "Existing path, most free space". I am guessing you can see what happened. One disk filled up before any of the others. I have set the policy to "Most free space" now and would like to run "mergerfs.balance", which I have downloaded and installed.
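For anyone following along, this is how I understand the OMV policy labels map onto mergerfs create policies. The option names are from my reading of the mergerfs docs, and the OMV GUI sets them for you, so the block below is only illustrative.
Code
# "Existing path, most free space" -> category.create=epmfs
#   (only writes to branches that already hold the parent path,
#    which is why one disk filled up first)
# "Most free space"                -> category.create=mfs
#   (always picks the emptiest branch for new files)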
I found the id of the disk that has filled up:
I tried to run mergerfs.balance like this:
And I get this error:
What am I doing wrong please?
Any and all help would be appreciated.
Thank you,
Dennis
-
Yea! It's all good and I can't thank you enough.
Now I have a better understanding of the steps to take.
Dennis
-
Wonderful, thank you.
Dennis
-
Hi again, it's your pest.
I ran GParted and copied the old disk to the new disk. I put it back together and ran "snapraid status" and then "snapraid diff". The diff shows a difference.
What would be my next step to correct this please?
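My guess at the next step, for the record (assuming a plain sync is the right follow-up after cloning a data disk, which may be wrong):
Code
snapraid diff    # review what changed since the last sync
snapraid sync    # update parity to match the copied disk
snapraid status  # confirm the array is clean afterwards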
Thanks again,
Dennis
-
This is great. I have just downloaded GParted and made a bootable USB drive using my Mac. I confirmed that the GParted USB boots my OMV machine. Next I will unplug all drives except the old and new drives and give it a try.
Very much appreciate all of your help. Is there any way for me to buy you a cup of coffee for your time and help?
You are the only person who responded to my help request, both here and on the SnapRAID forum.
Thanks again,
Dennis
-
Thank you, your help is very much appreciated.
Now my next project is to upgrade one of the 2 TB data drives to a new 4 TB drive. One question about that, please: would Clonezilla or rsync be better for copying all of the files from the old drive to the new one? Is there a step-by-step tutorial you would recommend I read for replacing the data drives?
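To be concrete, this is the kind of file-level copy I have in mind if rsync is the answer. The mount paths are placeholders, not my real ones.
Code
# sketch: copy everything from the old data disk to the new one,
# preserving hard links (-H), ACLs (-A), and extended attributes (-X)
rsync -avHAX --progress /srv/dev-disk-by-uuid-OLD/ /srv/dev-disk-by-uuid-NEW/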
I am learning a lot and really like OMV.
Thanks again,
Dennis
-
Good morning gderf. The sync ran all night and is now finished. The good news is that all data is intact and there are no reported errors now. Yea!
I attached an image of the status; it shows the array is 100% not scrubbed. I did run a scrub, but that did not seem to do anything or change the scrub percentage. Should I be concerned about this, or will it eventually be corrected by file adds and the weekly sync and scrub jobs?
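For what it's worth, the weekly jobs I have in mind look like the sketch below, in cron style; the exact schedule, and whether to use OMV's scheduled jobs instead, is still an open question for me.
Code
# example schedule only: sync parity early Saturday, scrub on Sunday
0 3 * * 6  snapraid sync
0 3 * * 0  snapraid scrub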
Thanks again for all of your help,
Dennis
-
Thank you. I will try that and let you know.
Dennis
-
Hello. Thanks for the clarification on the parity disk size.
I ran the scrub, sync, and -e fix again. The errors went back up to 1082047. It seems like it's just going in circles and not getting better. I did check the SMART data, and all disks seem to be OK and are marked green.
Is there a way to force a parity rebuild? I have backups, just in case something goes wrong while that is in progress.
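What I have in mind, if forcing is even advisable (the -F/--force-full flag is my reading of the SnapRAID manual, so please confirm before I run it):
Code
# assumption: -F (--force-full) makes sync recompute all parity from
# scratch instead of incrementally
snapraid -F sync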
Thanks,
Dennis
-
Thanks again. I will run it again and hopefully will get fewer or no errors.
One additional question, please. The physical size of the parity drive(s) needs to be larger than the "pooled" data amount, correct? For example, I have a couple of 2 TB and 4 TB disks in the array, which gives me a data size of 6.2 TB. So the physical size of the parity drive(s) needs to be bigger than or equal to 6.2 TB, correct?
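To show my arithmetic (the second reading below is what the SnapRAID manual appears to say, so my premise above may be off):
Code
# reading 1 (mine): parity >= total pooled data
#   ~6.2 TB of data across the pool -> parity >= 6.2 TB
# reading 2 (the manual, as I read it): parity >= largest single data disk
#   max(2 TB, 2 TB, 4 TB) = 4 TB -> one 4 TB parity disk would suffice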
Just trying to understand and learn more about this.
Thanks again for your help,
Dennis
-
Hi. I ran "snapraid scrub" and have attached the result. It looks like a lot fewer errors, but still errors. What would be the next step, please?
Thanks,
Dennis
-
Oh, maybe I didn't do that. I will try again and let you know.
Thanks, sorry for being a newbie.
Dennis
-
Hi. I did run this command: snapraid scrub
It showed:
1081560 errors
1081560 recovered errors
0 unrecoverable errors
Should there be a flag to go with the command that I am missing?
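For reference, these are the flags I can find for scrub in the manual; this is my reading, so I may be missing the point entirely.
Code
# -p N  scrub N percent of the array per run (a partial pass by default)
# -o D  only touch blocks older than D days
snapraid scrub -p 100 -o 0   # my guess at forcing a full check of everything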
Thanks
-
Hello all. I ran the following commands in this order:
1. snapraid -e fix
2. snapraid sync
3. snapraid status
I am still getting errors, and they seem to have tripled. I'm not sure what steps to take now.
It seems that all of my data is intact, as I can get to the shared folders and files. I also have a backup of my data, as I use the "usb plugin", which seems to work well.
Attached are the screenshots of the above steps. Please help if you have any idea what might be going on; I would really appreciate it.
Thanks for your time and help,
Dennis
-
Thank you for your help and for taking the time to respond. I will run the "snapraid -e fix" command, then sync, and then scrub, as spelled out below. I will post what I find, and thanks again.
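Spelled out, the sequence I plan to run (as I understand the advice):
Code
snapraid -e fix   # repair the files/blocks marked as bad
snapraid sync     # update parity afterwards
snapraid scrub    # then verify the array again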
Dennis