Posts by Jopa

    Ok, then


    1. go to /media. The view should look something like the picture attached. Look at the folders listed - the twin, in this case "cdrom/cdrom0", can be ignored - one of them must be empty. Note its name. Here it is "18e2ee88..."
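    If the attached picture is not available, the empty candidate can also be spotted from the shell; just a convenience sketch on top of this step:

    Code
    # list only the empty directories directly below /media
    find /media -mindepth 1 -maxdepth 1 -type d -empty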


    2. go to /etc/openmediavault and edit config.xml (F4). Search for "<fstab>" and comment out the corresponding <mntent> section (identified by its <fsname>!) by bracketing it with "<!--" and "-->" like this:
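    A rough sketch of the result (the exact child elements vary between OMV versions; the UUID below is the one from the fstab line in step 3):

    Code
    <!--
    <mntent>
        <uuid>...</uuid>
        <fsname>18e2ee88-7163-408c-ba50-4d5f6456999a</fsname>
        <dir>/media/18e2ee88-7163-408c-ba50-4d5f6456999a</dir>
        <type>ext4</type>
        ...
    </mntent>
    -->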


    3. go to /etc and edit fstab. There will be one line with the same name. Comment it out by putting a "#" in front of the line:

    Code
    # UUID=18e2ee88-7163-408c-ba50-4d5f6456999a /media/18e2ee88-7163-408c-ba50-4d5f6456999a ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2


    4. restart OMV. Your ghost filesystem should be gone now. At least mine was, after I did these steps.


    As I worked this out on OMV3, I'm not quite sure whether it is valid for OMV2 as well. But as I still have an OMV2 at the company to compare with, I can look for differences if needed.


    Edit: the OMV2 at the company does not show any differences.


    Hope it helps.
    Johannes

    Sorry for the delay; besides the wrong notification setting here, I also had a few other things I had to attend to more urgently.


    Well, the SSD I rode to death also made itself noticeable at first only during initial writes. The tmpfs already present in OMV2 was apparently able to postpone quite a bit until later. Nevertheless, an OMV 2 without the plugins from the omv-extras repo is not really SSD-ready yet. To what extent this has changed in version 3, I have not yet dared to try out. Somehow a system running reliably on a VelociRaptor is simply more important to me than yet another system collapse on an SSD.

    Hi Massimo,


    I had the same problem because I had not unmounted a filesystem in time, i.e. while the disk(s) were still online and working.
    After trying out the hints that ryecoaaron gave, I was able to get rid of the "ghost" filesystem.
    I guess you are more familiar with Windows, aren't you?
    If yes, then in the OMV GUI enable SSH (under Services) and there, if not enabled by default, "Permit root login" and "Password authentication".
    Now install the freeware "WinSCP" on your Windows client and try to log in to your OMV.
    If you are successful so far, please report back, and I'll tell you the further steps.


    Best regards

    Sorry, and no: 146GB 10k rpm SAS drives are not comparable at all with drives of >2TB formatted with 520B sectors. To put it another way: I actually needed an extra system running CentOS and had to use sg_format to get my disks, delivered with 520B sectors, reformatted to 512B per sector before they were usable with any Debian. Yes, I actually tried out more than just OMV...
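    For reference, the reformatting step looks roughly like this (sg_format comes with the sg3_utils package; /dev/sg2 is only an example device name, and the low-level format takes hours per disk):

    Code
    # reformat a 520B-per-sector SAS disk to 512B sectors
    sg_format --format --size=512 /dev/sg2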
    Btw: the Linux world as a whole still does not seem to be settled on supporting bigger SAS drives. No problem with drives of 147 or 300GB each, but if one goes beyond 2TB, the common reaction is more like "Uh, do you really want this?" rather than "Come and get it", which I'd prefer.

    Sorry, but you don't really seem to be experienced with SAS drives, do you?


    Re 2: Of course I needed to format the drive that I wanted to add to the RAID. Or have you ever managed to add a drive formatted with 520B sectors to a RAID based on 512B sectors? The second format I ran just to be sure the first one had not failed silently.


    Re 4: It is kind of funny to me to be advised to throw all the data away. Of course this may be the "easiest" way, provided I had a clean system and did not have to care about existing data. Only I did need to care about the existing data.


    BTW, as I posted, the recovery with OMV 3's default mdadm did include the 4th drive. So why should I still go to backports or use SystemRescueCd? I mean, I just want to use OMV, not spend my time trying out this or that or yet something else.


    Best regards,
    Johannes

    Hi,


    it could be that your system SSD is the problem, specifically if you are running OMV 2.x. How it looks with version 3, I have not yet dared to try out again...


    With OMV 2.x, at any rate, the hardware monitoring system set up by the installer would "saw up" even the best SSD within a short time unless you intervened substantially yourself, because it continuously writes tiny chunks of data to the system disk every few seconds. In effect this drives an SSD to extreme wear-levelling levels; at some point writes to the silicon disk take what feels like hours, and even initial accesses to the data RAID take longer, because they somehow get logged on the system disk. By the way, this also affects the initial read, but since that already happens at system startup, when the RAID is detected, the delay is usually not as noticeable there as on the first write.
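    If you want to see those writes for yourself, the sysstat package shows the per-device write rate; a minimal sketch, with sda standing in for the system disk:

    Code
    # extended I/O statistics for the system disk, refreshed every 5 seconds
    iostat -x sda 5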


    That is why I also went back to the good old mechanical spinner for the OMV system disk.


    Regards

    The recovery finished while I was at work - and OMV is showing 8.1 TB for the RAID now, i.e. the 4th drive was added.
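    If the filesystem itself does not show the new space yet, it may need to be grown separately; a minimal sketch, assuming ext4 sits directly on /dev/md0:

    Code
    # compare array size and filesystem size, then grow ext4 if needed
    mdadm --detail /dev/md0 | grep 'Array Size'
    df -h
    resize2fs /dev/md0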


    Now I just wonder why OMV was not able to do that job by its own means. Why did I need to drop down to the Debian command line and run mdadm by hand?


    Btw, re your suggestions:


    1. I spent some time making the original RAID (IR) controller run as an HBA (IT) with the latest firmware.
    2. I formatted the 4th drive twice before I started this thread.
    3. I always thought that adding just another drive of the same brand and series to a running software RAID5 should not depend on the kernel. And as the results show, it actually did not.
    4. Why? See 3.
    5. Again: why? See 3. and my starting post.


    Best regards,
    Johannes

    Ok. Yes, and sorry, I did not tell...
    MoBo is an Intel S1200BTL, CPU an i3-3240, RAM is 4GB ECC.
    The HBA is an LSI SAS2008 or 2108 with IT BIOS, i.e. a pure HBA w/o hardware RAID support.
    Actually this worked well with the initial 3-drive RAID5; I only ran into problems when trying to add a 4th drive.

    Hi all,


    guess this will become a report rather than a question ;-)
    First my history and intention:
    After my proprietary 2-disk RAID1 NAS reached its limits, I looked around for a more flexible solution and found OMV. As it is based on Debian Linux, and I'm familiar with Debian Linux, it looked good for a trial.
    But now that I'm running it, I have some trouble with it now and then.
    Most of it obviously has to do with my decision to use SAS drives. E.g. SMART shows "Status" greyed out with only Extended Information available, and when I tried to add another drive to my running RAID5, the "Grow" option in the OMV menu threw an error and added the new drive as a spare. By googling, I found out that

    Code
    mdadm --grow --raid-devices=4 /dev/md0

    could tell the system about the new disk in the RAID, and indeed the OMV web GUI showed the RAID management options "Grow" and "Remove" greyed out after this. As "Detail" no longer showed the 4th drive as a spare, I started "Recover", which is still running (10% at the moment). Hopefully I'll find the space of the 4th drive added when the recovery finishes.
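    For anyone wanting to watch such a rebuild outside the GUI, the kernel reports the progress directly; a quick sketch:

    Code
    # progress of the running recovery/reshape
    cat /proc/mdstat
    mdadm --detail /dev/md0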
    To answer possible questions in advance:
    1. No, my RAID does not run into the 32bit/16TB limit. I just want to add a 4th 3TB drive to a RAID5.
    2. OMV is running on a SATA drive; the RAID5 is on SAS drives.
    3. I had no trouble setting up the 3-SAS-drive RAID5. I just want to grow it.
    4. My LSI SAS HBA is definitely not the problem. It can handle up to 8 drives of at least 4TB each, and the Linux driver does the same.


    I'll come back after the recovery has finished. In any case, I'm interested in any experience with OMV and SAS drives.


    Best regards,
    Johannes

    Hi all,


    I'm still a newcomer to the forum, but I have been working with OMV since V1.x and look after a V2.1 at the company, so I no longer need any "tips for beginners" ;)


    So far, however, I have only dealt with SATA disks above 2TB, so my new, very cheaply acquired private 4TB SAS disks pose a real challenge. The SAS controllers I have access to can either handle at most 2TB, or the controller BIOS swallows the SMART feature (LSI 9260), or the OMV 3.x/Debian 7 kernel driver supports only SATA in pure HBA mode (Promise TX8660).
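    A quick way to check what a given controller/driver combination actually presents to the kernel; a sketch, assuming lsscsi and sg3_utils are installed and /dev/sg2 is one of the SAS disks:

    Code
    # list all SCSI/SAS devices together with their sizes
    lsscsi -s
    # query the capacity and logical block size one drive reports
    sg_readcap /dev/sg2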


    Hence my question: which controller also supports SAS disks of the size mentioned in direct HBA mode, so that under OMV 3.x I have more or less full control over the individual drives?


    Greetings from Hannover

    Hi all,
    at the company we are going to move our SMB file server to OMV, and we want to use a SAS HDD for the system. Basically it works fine; only the SMART monitoring is not the way we would like it to be. In particular, OMV does not seem to be able to read the SMART info from the SAS drive on its own - although SMART is enabled, the button at the far right stays grey, and only the rightmost tab of the SMART info tells a bit about the state of the drive.
    Is this a fundamental limitation of SAS drives, or can we get the drive to "normal" monitoring by adding a plugin or another piece of software?
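    From the shell, smartmontools can usually read the health pages of a SAS drive even when the GUI button stays grey; a minimal sketch, with /dev/sda standing in for the SAS drive:

    Code
    # read all available SMART/health info via the SCSI protocol
    smartctl -a -d scsi /dev/sda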
    Best regards,
    Johannes