Posts by hubertes

    Hi,


    After reading a lot of news about ransomware, and because my kids are growing up, I would like a guide or some advice on setting permissions.


    I have a shared folder with an ACL giving "users" RWE, but "others" get RWE too. I think that's not such a good idea (my OMV is not open to the internet, though).


    No user or group is set (except plex).


    I don't need a folder for each user, just an easy way to prevent deletions or ransomware attacks.
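

    Something like this is what I have in mind (a minimal sketch; the share path is a placeholder, not my real one):


    Code
    # Remove all access for "others" on the share, keeping it for owner and group
    # (the path below is a placeholder, not my actual share):
    chmod -R o-rwx /srv/dev-disk-by-label-ZettaFiles/shared
    # Or with ACLs: make the "users" group read-only so files cannot be
    # deleted or encrypted from a normal user account:
    setfacl -R -m g:users:rx,o::--- /srv/dev-disk-by-label-ZettaFiles/shared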


    thanks

    Hi,


    after a bunch of bad things (RAID6: clean, degraded), I went to the linux-raid IRC channel, where someone told me to use write-intent bitmaps for better recovery when a disk drops out of an array (if I understood that correctly), or in case of a power failure, etc.


    I read some trackers, like this one: http://bugtracker.openmediavault.org/view.php?id=669 and this post: http://blog.liw.fi/posts/write-intent-bitmaps/


    The thing is, if you specify the bitmap chunk size, it appears that the impact on speed is not so big.


    So do we need to manually run mdadm --grow --bitmap=internal --bitmap-chunk=256M /dev/md0 via the CLI, or is it done by default? Or is it not recommended?
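

    For reference, this is what I understand the on-the-fly test would look like (a sketch based on what I was told, assuming the array is /dev/md0):


    Code
    # Add an internal write-intent bitmap with a large (256 MiB) chunk:
    mdadm --grow --bitmap=internal --bitmap-chunk=256M /dev/md0
    # Check that it took effect:
    mdadm --detail /dev/md0 | grep -i bitmap
    # It can be removed again on the fly if the write penalty is too big:
    mdadm --grow --bitmap=none /dev/md0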


    Thanks

    Thanks ryecoaaron :)


    I know about backups; I have some of the data on another backup, but not "all" of it.


    I just fast-wiped /dev/sdg; the RAID tab is still only showing Recover, not Grow. (The size of the array is still the same, so I don't understand why I would grow it.)


    I did reboot after the wipe, still the same. Here are the details:





    Code
    root@Zetta:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] 
    md0 : active raid6 sdf[0] sdb[4] sde[3] sda[2] sdc[1]
          5860548608 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
    
    unused devices: <none>


    Code
    root@Zetta:~# blkid
    /dev/sda: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="70983681-b5c2-9f70-1801-948b1b7c97d1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdc: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="2873612e-8481-0c22-4d4f-9183d2bf6a6d" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdb: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="66843575-1a9c-30a6-8172-29f3ded468dc" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdd1: UUID="b364f189-c9af-4774-92ba-a49307966cf7" TYPE="ext4" 
    /dev/sdd5: UUID="c78b6a64-1caa-4290-bc1a-eebe67731ca5" TYPE="swap" 
    /dev/sdf: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="b2460066-00b1-5070-cfe3-7ac67aae96c1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sde: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="ee7eab8f-dc90-e3be-146d-a4e09d104418" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/md0: LABEL="ZettaFiles" UUID="76c7546c-5ae6-4884-ac9f-3ecda0f473bc" TYPE="ext4"



    EDIT : 03/01/2015 13:35 GMT : I went to linux-raid for some help. I launched a badblocks stress test on /dev/sdg while monitoring SMART values and dmesg. Here is the syslog I grabbed from /var/log when the disk was kicked out: http://pastebin.com/vjmVe7K7. Someone there suggested I might want to look into adding a write-intent bitmap to my array. I then looked at a bug tracker entry about this feature (http://bugtracker.openmediavault.org/view.php?id=669) and at a post about the speed penalty (http://blog.liw.fi/posts/write-intent-bitmaps/).
    The performance penalty depends on the bitmap chunk size, he said. He suggested I could use something big, like 256 MiB; he remembered seeing test results where bigger chunk sizes had a very minimal effect on performance. He wrote that it is easy to test for myself, because bitmaps can be added and removed on the fly, and that OMV should probably revisit its decision not to enable them by default.


    EDIT 2-4 : Added the iotop package to monitor my badblocks writes, because it's veryyyyy slooooooooooowwwwwwwwwwww (and it is: 4849 be/4 root 0.00 B/s 7.19 M/s 0.00 % 99.99 % badblocks -w -s /dev/sdg). Added a -b xxx and a -s xxxx and badblocks is now writing at 129 M/s.
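

    (For anyone else hitting this, something like the following is what I mean; the values below are placeholders since I don't remember exactly what I used. -b sets the block size and -c the number of blocks tested per pass, which is what speeds up the sequential writes.)


    Code
    # Placeholder values: a larger block size plus more blocks per pass makes
    # the destructive write test much faster. -s just shows progress.
    badblocks -w -s -b 4096 -c 65536 /dev/sdg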


    EDIT 3 : ryecoaaron, should I add the disk via the CLI (because the GUI won't let me), with mdadm --add /dev/md0 /dev/sdg?


    EDIT 5 : I did add sdg to md0 via the CLI, and now it's recovering. Should I run mdadm --grow --bitmap=internal --bitmap-chunk=256M /dev/md0 afterwards?
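

    In full, the sequence I used (and plan to finish) looks like this, assuming the rebuild should complete before touching the bitmap:


    Code
    mdadm --add /dev/md0 /dev/sdg    # re-add the wiped disk to the array
    watch cat /proc/mdstat           # follow the rebuild progress
    # once the rebuild finishes, add the write-intent bitmap:
    mdadm --grow --bitmap=internal --bitmap-chunk=256M /dev/md0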

    Thanks.


    Code
    root@Zetta:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    root@Zetta:~#


    I cannot grow: the option is not available.



    Should I really wipe /dev/sdg? Will the Grow option be available then? Sorry, but I always fear losing some data... :/
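

    (If I understand it right, the point of the wipe is to clear the stale RAID metadata on the removed disk so it stops showing up as md127. A sketch of the CLI equivalent, assuming that's all the quick wipe does; double-check the device name, this destroys the superblock:)


    Code
    # Clear the old RAID superblock on the removed disk only
    # (NOT on a member of the active md0 array!):
    mdadm --zero-superblock /dev/sdg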

    I do not see md127 in the RAID management tab.


    Should I start with the Grow button anyway?


    EDIT : I can't grow; Recover is the only option I can use.


    What happened before? Do you have any idea why I got a degraded state?

    Code
    root@Zetta:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] 
    md127 : inactive sdg[5](S)
          1953383512 blocks super 1.2
    
    md0 : active raid6 sdf[0] sdb[4] sde[3] sda[2] sdc[1]
          5860548608 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
    
    unused devices: <none>


    Code
    root@Zetta:~# blkid
    /dev/sda: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="70983681-b5c2-9f70-1801-948b1b7c97d1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdc: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="2873612e-8481-0c22-4d4f-9183d2bf6a6d" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdb: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="66843575-1a9c-30a6-8172-29f3ded468dc" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdd1: UUID="b364f189-c9af-4774-92ba-a49307966cf7" TYPE="ext4" 
    /dev/sdd5: UUID="c78b6a64-1caa-4290-bc1a-eebe67731ca5" TYPE="swap" 
    /dev/sdf: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="b2460066-00b1-5070-cfe3-7ac67aae96c1" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sde: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="ee7eab8f-dc90-e3be-146d-a4e09d104418" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/sdg: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="5692ca13-808b-56c0-8bf5-066f5574a5c4" LABEL="zetta:0" TYPE="linux_raid_member" 
    /dev/md0: LABEL="ZettaFiles" UUID="76c7546c-5ae6-4884-ac9f-3ecda0f473bc" TYPE="ext4"


    I had a mail this morning :


    Code
    This is an automatically generated mail message from mdadm running on Zetta
    
    A DegradedArray event had been detected on md device /dev/md/zetta:0.
    
    Faithfully yours, etc.
    
    P.S. The /proc/mdstat file currently contains the following:
    
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid6 sdf[0] sdb[4] sde[3] sda[2] sdc[1]
          5860548608 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
    
    unused devices: <none>


    Here are the details of the RAID:



    It seems that sdg is missing (I added it one month ago via a PCI RAID card).


    Here is the SMART data for sdg (SMART is showing some errors):


    http://pastebin.com/cdnhBb7G
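

    (For reference, the same SMART data can be read from the CLI like this, assuming smartmontools is installed, if anyone wants to compare:)


    Code
    # Full SMART report for the suspect disk:
    smartctl -a /dev/sdg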


    Thanks

    Thanks again for your time.


    If I'm at home and look at the transfer rate, it seems to be right at the limit of my ISP connection. Does the traffic go smartphone -> WAN -> internet cloud -> WAN -> OMV, or smartphone -> WAN -> OMV?

    Thanks subzero79, it worked with a private window :)


    Is there any more complete guide for Sync?


    I did add a folder in OMV, but that doesn't seem to be the right way if I just need to back up some folders from my phone.


    I deleted the shared folder and added some folders on my phone, then pasted the link into the GUI. Is that the right way?


    If I'm at home, could the files stay on my own network instead of going out over the internet?


    (sorry for all the questions)

    Hi,


    I'm totally lost with BTSync. I tried to add some folders: no permission. I can see those folders but can't delete them...


    Should I add a user? I don't have any.


    Is there any guide or tutorial?


    thanks


    I read this, but again... I'm lost... what should I do?