Posts by datadigger

    Hey....long time no see

    Hey sub....yeah, private priorities have changed....I have two grandchildren, my big Suzuki motorbike wants to be ridden and my wife says I spend too much time on IT. Guess who set the priorities.... :)

    There is no recycle bin when deleting shared folders through the web UI, only when deleting through samba and only IF it is enabled. So as I see it the data is gone.
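
    For reference, the recycle bin is a samba-side feature; roughly this has to be active on the share for it to work (the values below are the usual ones, not necessarily exactly what OMV writes):

    Code
    # only deletions made over smb/cifs end up in the recycle folder
    vfs objects = recycle
    # .recycle is the usual repository name, keeptree keeps the directory structure
    recycle:repository = .recycle
    recycle:keeptree = yes
    recycle:versions = yes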


    You can run the classic du -h --max-depth=1 in /srv/dev-..... to show the size distribution at the first level.
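
    For example (the label "data" is made up here - use the one of your data volume):
    du -h --max-depth=1 /srv/dev-disk-by-label-data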


    Totally weird...du says that the remaining share has the size of the former two shares combined and fills up the disk, even though the original data in the remaining share is about 11 TB less. But I cannot find the data of the deleted share, so I decided to kill everything from the command line, removed the remaining share and rebuilt it. Now the file system is empty and ready to receive rsync data again.
    A freshly created RAID on brand-new 10 TB disks, so it shouldn't be a hardware problem. Sometimes IT sucks. :)

    Hi folks,

    a strange thing happened. I created two shared folders for rsync'ing data from two production NAS boxes. But the amount of data for the second one was too much, so I decided to delete it. I removed all references and then tried to delete the shared folder, folder and data. An error message appeared after a while (Unable to connect....). After that the shared folder was gone but the data wasn't. The disk is still nearly full.

    Now I opened a PuTTY session and searched all over the disk with mc, but for the life of me I cannot find the data of the removed share. Nothing below /srv/dev-disk-by-label-.... - there's only the other share's directory but no remains of the deleted one.


    Any idea where I can find the orphaned data?


    Oh, and I recreated the share with the exact same name and permissions, but that one is empty.
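
    A mismatch between what df reports and what du adds up to usually points to deleted files that some process still holds open; that can be checked roughly like this (the volume label is just an example):

    Code
    df -h /srv/dev-disk-by-label-data                  # what the file system reports as used
    du -xh --max-depth=1 /srv/dev-disk-by-label-data   # what the files actually add up to
    lsof +L1                                           # deleted but still open files that keep holding space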

    If these two disks are part of the raid, don't forget to set them as failed and remove them before you wipe.
    mdadm --manage /dev/md0 --fail /dev/sdi
    mdadm --manage /dev/md0 --remove /dev/sdi
    and so on...otherwise the whole raid can break.


    Afterwards try to re-add them again.
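
    Once the wipe is done and the disks check out, the re-add is just the reverse (same device names as above):
    mdadm --manage /dev/md0 --add /dev/sdi
    mdadm --manage /dev/md0 --add /dev/sdj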

    Hey, good news, mdadm seems to have enough information to start the rebuild. But I see a faulty spare - was there one before?
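
    To see which device mdadm marks as the faulty spare and how the rebuild is doing (md name as in your setup):

    Code
    mdadm --detail /dev/md127    # lists every member with its state (active, spare, faulty)
    cat /proc/mdstat             # shows the rebuild progress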


    I would let it rebuild without further action, then make a backup of all your data and start from scratch with a fresh installation of the whole box. And check all the hardware for other problems - a thunderstorm like that can do a lot of not-so-funny things. That's why I have a UPS for all my boxes.


    Even while the raid is rebuilding you should be able to see your shared folders from a Windows box. Do you?
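
    If there's no Windows box at hand, a quick look from any Linux machine on the network does the same (host name and user are placeholders):
    smbclient -L //omv-box -U youruser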

    @Wabun :


    Code
    root@OMV:~# mdadm --manage /dev/md127 --add /dev/sdi
    mdadm: Cannot open /dev/sdi: Device or resource busy
    root@OMV:~# mdadm --manage /dev/md127 --add /dev/sdj
    mdadm: Cannot open /dev/sdj: Device or resource busy
    root@OMV:~#


    sorry - same as before ;-/
    I will reinstall OMV again and check if there is any difference ...


    That may lead to the same situation. Now we have to check why these two disks cannot be added to the raid.
    Post the result of blkid. After all these actions to get the raid back they possibly belong to another raid definition by now (like disk 8 to md126...). blkid will tell us if that is the case.
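
    Something along these lines - the interesting part is which array UUID and name the two members report (device names as above):

    Code
    blkid /dev/sdi /dev/sdj            # UUID and TYPE of the two suspects
    blkid | grep linux_raid_member     # all raid members at once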

    As long as these two disks are not part of the raid you can restart the box as often as you want; that won't bring it back.
    These two disks have "lost the race" while the raid was being assembled; udev can prevent the assembly from completing. I would start over and assemble the raid from scratch.
    First check if these two disks respond:


    smartctl -a /dev/sdi and smartctl -a /dev/sdj

    to make sure that they are well-connected.

    Then start over:
    mdadm --stop /dev/md127 (This raid definition should now be removed from mdadm.conf)
    udevadm control --stop-exec-queue
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force


    (If these two disks are still missing try to add them manually as stated above.
    mdadm --manage /dev/md127 --add /dev/sdi
    mdadm --manage /dev/md127 --add /dev/sdj)


    If the raid is complete, start the udev queue again:
    udevadm control --start-exec-queue


    Now check if the raid was built correctly:
    cat /proc/mdstat
    mdadm --detail --scan


    If mdadm starts to rebuild, run the initramfs command and look for errors. If the raid is named correctly in mdadm.conf it shouldn't spit out any errors.


    I just fought the same battle last weekend when I moved a raid from an old machine to a new installation; udevadm did the trick.

    No need to run the initramfs command before the raid is complete. When mdadm sees all the disks and the raid is complete, it automatically starts the rebuild. The initramfs command just adds the array to the boot image.
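
    For completeness: on a Debian-based OMV box I assume the "initramfs command" wabun means is the stock one, i.e.:
    update-initramfs -u (rebuilds the boot image so the array definition in mdadm.conf is included at boot)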


    @ahab: Stop that raid 126 with mdadm --stop /dev/md126; that kills md126 and frees disk 8. Then add it manually to md127 with mdadm --manage /dev/md127 --add /dev/sdi (the correct /dev/sd<letter> is important!).
    Then do the same with disk No. 7: mdadm --manage /dev/md127 --add /dev/sdj - if I read your posts right this should be sdj.
    If mdadm can read the disk correctly and it is OK, it will start rebuilding - have a look at the web-ui.
    Then you can run the initramfs command as wabun suggested. If it finds an error it will tell you.

    As a first line I always prefer the test programs made by the manufacturer of the disks; they are tailored to their products.
    After reading all the postings I believe that only one disk may have problems.
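
    If the vendor tool is not at hand, a generic long self-test with smartctl is a decent second opinion (the device name is just an example):

    Code
    smartctl -t long /dev/sdi    # starts the drive's built-in extended self-test
    smartctl -a /dev/sdi         # once it has finished, check the self-test log and attributes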


    ./edit: This thread should be moved to the /Storage/Raid subforum.

    A NAS without a NIC is useless, that's right. But I believe the TO has a NAS running on this laptop and due to a defective NIC he is not able to access it anymore. Is that right, littleliner?


    If so, there's not much you can do. You can't open the web-gui from the NAS itself; you need another network adapter to connect it to your network. Amazon offers USB-to-LAN adapters, but I just don't know whether they work with OMV.
    If there's no other way to connect it to a network via a USB or PCMCIA adapter, you can only pull out the disk and copy your data to another installation.
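
    If a USB adapter is recognized by the kernel you can bring it up by hand from the console, roughly like this (eth1 is an assumption - use whatever name ip link shows; omv-firstaid can also reconfigure the network interface):

    Code
    ip link                  # does the new adapter show up, e.g. as eth1 or enx...?
    ip link set eth1 up      # eth1 is an assumption, take the name from ip link
    dhclient eth1            # get an address via DHCP, then try the web-gui again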

    Yes, you can use different brands as long as they have the same size. I wouldn't mix drives with very different characteristics, e.g. standard 7200 rpm SATA drives with drives at higher (or lower) speeds.
    In a corporate environment I always use the same brand and type of drive with the same firmware, for better performance and stability.

    That's right, but with only two drives your options are very limited. Raid 0, JBOD and Raid 1 are possible; for Raid 10 you need a minimum of four drives. Raid 10 is something I use in corporate environments for the sake of resilience, but not at home, because the usable space is only half of the overall capacity of the drives.
    I would get my hands on another 2 TB drive and build a Raid 5, which is expandable up to the end of your money. If that's not possible then do it like KM suggested, but in that case you will have to move your data away once Raid comes into play.
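
    To put numbers on it: two 2 TB drives in Raid 1 leave 2 TB usable (half of the raw capacity), while three 2 TB drives in Raid 5 give (3 - 1) x 2 TB = 4 TB usable, and every additional 2 TB drive adds its full capacity to the array.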

    The mentioned ISO is the right one to start with (if your system is x64-capable). The current version is 1.9, that's right - but the developers do not update the base ISO frequently, and you can easily update to the current version via the web-gui by clicking on the update manager once the system is up and running.

    At the directory /usr/share/openmediavault/mkconf/proftpd
    ....
    Am I doing something wrong?


    Yes. That path is wrong - you are one level too deep. /usr/share/openmediavault/mkconf is the right path, and there you will find the proftpd config file used by omv-mkconf proftpd.
    The anonymous block starts at line 169, and according to subzero's suggestion you should change the default value DenyAll at line 181 to AllowAll. After saving the file, run omv-mkconf proftpd from the CLI and try again.
    If that doesn't work I can post how I managed to allow uploads for anonymous users.
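
    Just as a rough sketch of what that part of the config looks like (not the exact OMV template, and the anonymous path is only an example) - the Limit directive is the bit that matters:

    Code
    <Anonymous /srv/ftp>
        User   ftp
        Group  nogroup
        # DenyAll is the standard value here; AllowAll lets anonymous users upload
        <Limit WRITE>
            AllowAll
        </Limit>
    </Anonymous>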

    Raids created by mdadm will be recognized. You should see the raid in the raid management tab of the web-gui, and the file system too, but no shared folders.
    You have to create them in the web-gui using the same names they had before, set the permissions and link them in the smb/cifs tab. The content should be there just like on the former machine. I've done that several times before and it worked every time.