Posts by patman2097

    Hi Forum


    I was tweaking a few subnet settings in the web control GUI that I guess were not correct, and now I seem to have lost access to the Ethernet card that I use to connect to the web control panel GUI.


    As you can see in the attachments, there are a few Dockers installed and also a Nextcloud control panel (192.168.0.10, which still works when I type it into a browser).






    But, alas, not the Ethernet network card and address that I previously used to log on to the web GUI (192.168.0.X).


    I've tried omv-firstaid to reconfigure it, but I can't find the correct network interface to repair.



    I did some searching on the web, which suggested the network device may have gone missing from /etc/systemd/network. I searched inside there but could not find the network card listed that I used for the web control panel or to connect to my PC on the network.






    I'm guessing something has got corrupted or lost... Is there any way to fix this, or do I need to do a fresh install?



    I'm using OMV 5 on an x86 box.
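    As a starting point, a hedged sketch of commands to see which interfaces the kernel still detects and which config files OMV 5 (which generates netplan/systemd-networkd configs) holds for them; the interface names it prints are what omv-firstaid will ask for:

```shell
# List every interface the kernel sees, with its state and addresses.
ip -br link show
ip -br addr show

# OMV 5 writes its network config as netplan YAML; systemd-networkd
# may also hold .network/.link files. Check both locations.
ls -l /etc/netplan/
ls -l /etc/systemd/network/

# omv-firstaid's "Configure network interface" option can then rebuild
# the config for the interface name found above.
sudo omv-firstaid
```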


    Thanks

    Thank you, I think I've narrowed it down to a SAS driver issue, as this error comes up in the logs:


    Error:




    mpt3sas_cm0: _base_spin_on_doorbell_int: failed due to timeout count(10000), int_status(c0000000)!


    mpt3sas_cm0: doorbell handshake int failed


    mpt3sas_cm0: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:10539/_scsih_probe()!


    /dev/sda1: clean, 190551/14483456 files, 2661363/57904384 blocks





    Does anybody know how I can update the drivers on OMV, or is it a problem with my SAS card firmware? It has been working previously (I believe it was on IT firmware too).

    I noticed this on my OMV booting up also.




    Not sure if it has any relevance. It looks like my SAS card is not being detected at boot?
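    A hedged sketch of how to check whether the HBA is even enumerated on the PCIe bus, versus detected but failing at driver probe (which is what the mpt3sas doorbell errors above suggest):

```shell
# Is the SAS controller visible on the PCI bus at all?
lspci | grep -i -e sas -e lsi

# What does the mpt3sas driver report during probe?
dmesg | grep -i mpt3sas

# Is the driver module actually loaded?
lsmod | grep mpt3sas
```

If lspci shows nothing, the card (or its slot/seating) is the problem rather than the driver.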




    Another question: if I just migrate my current setup, which is using the SAS card, over to the regular SATA connectors on my existing motherboard, will I be able to mount my RAID again?


    And will the order I connect the drives (i.e. random SATA connectors) affect the integrity of the RAID? Will I be able to access my RAID array easily, or is that a long rebuilding process?
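    For what it's worth, a hedged sketch assuming the array is Linux md RAID (later output in this thread shows a /dev/md0): md identifies member disks by the metadata written on each drive, not by which port they are plugged into, so cabling order should not matter. After moving the drives to the SATA ports:

```shell
# Re-scan all disks for md superblocks and assemble any arrays found.
sudo mdadm --assemble --scan

# Confirm the array (e.g. /dev/md0) is up and all members are present.
cat /proc/mdstat

# Filesystem UUIDs on top of the array should be unchanged.
sudo blkid
```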

    I booted up today and noticed all of my RAID drives have disappeared from the OMV > Storage > Disks page,








    and subsequently the RAID is missing and I can't access any of the data on the shared folders.

    I can see them in the OMV GUI shared mount points; they are still there, but OMV can't physically select the RAID drives. See pic.


    Strange thing is that one of the shared folder points (appdata = drive X:\) has mapped to my OMV OS drive, so that strangely works; however, all the other mount points are inaccessible. See the Windows Explorer picture. I was playing with Dockers just previously, do you think that would have anything to do with this hardware mounting error?


    I'm not sure what has happened. I tried to mount a new drive and scanned the hardware; nothing new came up. Does that mean my drives are faulty or my SAS card is gone?



    Will my data on my RAID be safe if I rebuild it and mount it into a new system?





    Please help.




    Thanks




    I'm running btrfs RAID 6, OMV 5.39-1 (Usul), Linux 5.3.18-3-pve.


    I'm experiencing the same issue with no space left. I tried using filters with this command:



    sudo btrfs fi balance start -dusage=5 /sharedfolders/documents


    but it didn't seem to do much, though it did release some chunks of lost data.



    So I had to rebalance the full drive using the command:



    btrfs balance start /btrfsdrive



    It took 2+ days. It seemed to have worked, but after a day's use and a heavy copy, the drive and cache have filled up again...



    I'm on version 5.3.2-1 (Usul).




    I tried looking at my performance graphs; strangely, they are all blank, both CPU and drive usage.




    Any other suggestions? I don't want to keep repeating this balancing command, with its exhausting wait, every 2 days.
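    One middle ground between a -dusage=5 balance that relocates nothing and a multi-day full balance is stepping the usage threshold up until chunks actually move. A hedged sketch (the mount path is a placeholder, substitute your real btrfs mount point):

```shell
# Step -dusage upward: each pass relocates data chunks that are at most
# N% full, which reclaims their allocated space without touching chunks
# that are nearly full (the expensive part of a full balance).
MNT=/srv/your-btrfs-mount   # placeholder -- use your actual mount point
for u in 5 10 20 40 60; do
    echo "=== balancing with -dusage=$u ==="
    sudo btrfs balance start -dusage="$u" "$MNT"
done

# Afterwards, check whether Metadata total now has headroom over used.
sudo btrfs filesystem df "$MNT"
```

Stop as soon as a pass reports relocating more than 0 chunks and free space reappears; there's no need to run the later, slower passes.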

    As you are using btrfs have a look here:


    https://btrfs.wiki.kernel.org/…22No_space_left_on_device



    OK, I managed to find my mount points.



    This is the output I get from the command:





    sudo btrfs fi balance start -dusage=5 /sharedfolders/extra


    Done, had to relocate 5 out of 11045 chunks
    root@openmediavault:~#
    root@openmediavault:~# sudo btrfs fi balance start -dusage=5 /sharedfolders/documents
    Done, had to relocate 0 out of 11040 chunks




    The command seems to have run, but I still have the 0 bytes error in Windows Explorer.

    As you are using btrfs have a look here:


    https://btrfs.wiki.kernel.org/…22No_space_left_on_device



    Thanks, I went to that page and found this:


    I get "No space left on device" errors, but df says I've got lots of space



    First, check how much space has been allocated on your filesystem:
    $ sudo btrfs fi show
    Label: 'media' uuid: 3993e50e-a926-48a4-867f-36b53d924c35
    Total devices 1 FS bytes used 61.61GB
    devid 1 size 133.04GB used 133.04GB path /dev/sdf

    Note that in this case, all of the devices (the only device) in the filesystem are fully utilised. This is your first clue.
    Next, check how much of your metadata allocation has been used up:


    $ sudo btrfs fi df /mount/point
    Data: total=127.01GB, used=56.97GB
    System, DUP: total=8.00MB, used=20.00KB
    System: total=4.00MB, used=0.00
    Metadata, DUP: total=3.00GB, used=2.32GB
    Metadata: total=8.00MB, used=0.00

    Note that the Metadata used value is fairly close (75% or more) to the Metadata total value, but there's lots of Data space left. What has happened is that the filesystem has allocated all of the available space to either data or metadata, and then one of those has filled up (usually, it's the metadata space that does this). For now, a workaround is to run a partial balance:


    $ sudo btrfs fi balance start -dusage=5 /mount/point

    Note that there should be no space between the -d and the usage. This command will attempt to relocate data in empty or near-empty data chunks (at most 5% used, in this example), allowing the space to be reclaimed and reassigned to metadata.
    If the balance command ends with "Done, had to relocate 0 out of XX chunks", then you need to increase the "dusage" percentage parameter till at least one chunk is relocated. More information is available elsewhere in this wiki, if you want to know what a balance does, or what options are available for the balance command.




    However, I can't seem to be able to run the command; see my error messages:



    root@openmediavault:~# sudo btrfs fi show
    Label: none uuid: f2af6f52-8f93-4522-8f3e-45bee0188ea0
    Total devices 1 FS bytes used 10.78TiB
    devid 1 size 38.20TiB used 10.80TiB path /dev/md0


    root@openmediavault:~# sudo btrfs fi df /mount/point
    ERROR: cannot access '/mount/point': No such file or directory


    root@openmediavault:~# sudo btrfs fi balance start -dusage=5 /mount/point
    ERROR: cannot access '/mount/point': No such file or directory



    Can anybody else help me?





    Not sure what my mount point is?
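    The /mount/point in the wiki text is only a placeholder, which is why the command errors out when typed literally. A hedged sketch of how to find where the btrfs filesystem (here sitting on /dev/md0) is actually mounted, then substitute that real path:

```shell
# List every mounted btrfs filesystem and its mount point.
findmnt -t btrfs

# Or ask df directly which mount point backs the device.
df -h /dev/md0

# Then re-run the balance against the real path, e.g. (path is only an
# example of OMV's usual naming, yours may differ):
# sudo btrfs fi balance start -dusage=5 /srv/dev-disk-by-label-data
```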


    My btrfs OMV NAS build suddenly reported 0 bytes free in Windows Explorer, and I now cannot copy any new files to my drive. However, my OMV config settings clearly show I have 10.77 TB used and 27.41 TB available. All my data seems to be there still, but when I delete files, no free space is freed up.




    I've tried rebooting Windows and the NAS a few times with no success. I've managed to make another shared folder on the NAS, but this still reports 0 bytes available.


    I googled and found this, not sure if it is relevant:



    https://forum.openmediavault.o…p/Thread/2441-Free-Space/


    The tune2fs command seems like it might help; how do I use it?


    I assume I just need to rebuild a table on my NAS to get it working smoothly again.
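    An aside worth noting: tune2fs only operates on ext2/3/4 filesystems, so it won't apply to a btrfs volume. On btrfs, "0 bytes free while df says otherwise" usually means all raw space has been allocated to chunks; a hedged sketch of how to check that (the mount path is a placeholder):

```shell
# Raw device allocation: compare "size" against "used" per device.
sudo btrfs filesystem show

# Per-type breakdown: if Metadata "used" is near "total" while Data has
# room (or vice versa), a partial balance can reclaim chunks.
sudo btrfs filesystem usage /srv/your-btrfs-mount   # placeholder path
```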



    Any help would be greatly appreciated.




    thanks

    Totally agree that storage is his bottleneck. Nevertheless, it depends on which particular drive he is using to make an evaluation of his transfer rates, especially since he talks about maximum transfer rate. That's why I found it a bit surprising that you simply say he can't expect more than 150 MB/s per disk anyway. You don't need to feel attacked at all; I only asked if you could explain the 150.

    I'm using 14 TB IronWolf standard 7200 rpm drives in a RAIDZ2 config with 4 drives in total at present, although I plan to increase up to 6 or possibly 8 of these drives. Just testing at the moment.

    Sorry, but could you explain that? I would think it should be at least 200...250 MB/s per disk.

    I tried using a single SSD on my OMV, copying over to an NVMe on my PC; a slight increase, to 300-310 MB/s.


    A slight increase, obviously, but still far off what I'm expecting. Do you think using Windows and SMB is also affecting the speeds I'm achieving?

    Yes. Each one of those disks can probably do about 150 MB/s.

    That is what those connections will do if your storage is fast enough. Your storage is nowhere near fast enough to saturate 10GbE. 40GbE will require hardware that you will probably not be able to afford.

    Network adapters can connect at different speeds. On Linux, ethtool will show you the connection speed. On Windows, you have to look at the status of the network adapter. I'm guessing your adapters aren't connecting at 40GbE. The fact you are hitting 270 means you are probably connected at 10GbE.


    On the OMV side I ran an ip addr.



    On the Windows side the connection says 40GbE full duplex (but it's connected via the Ethernet protocol rather than IB, although I was told that potentially maxes out at 10GbE; not 100% sure about this).




    On the OMV side:



    Supported ports: [ FIBRE ]
    Supported link modes: 1000baseKX/Full
    10000baseKX4/Full
    10000baseKR/Full
    40000baseCR4/Full
    40000baseSR4/Full
    56000baseCR4/Full
    56000baseSR4/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes: 1000baseKX/Full
    10000baseKX4/Full
    10000baseKR/Full
    40000baseCR4/Full
    40000baseSR4/Full
    Advertised pause frame use: Symmetric
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Link partner advertised link modes: 40000baseCR4/Full
    Link partner advertised pause frame use: No
    Link partner advertised auto-negotiation: Yes
    Link partner advertised FEC modes: Not reported
    Speed: 40000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000014 (20)
    link ifdown
    Link detected: yes



    However, the MTU on the OMV side reads:


    <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000






    How do I increase the MTU on the command line?









    So I figured out the ifconfig command line to increase the MTU on both sides. Tried a few different figures (1500, 3000, 4000); not much gain, to be honest.
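    For reference, a hedged sketch of the modern ip equivalent of the ifconfig MTU change (the interface name enp3s0 is a placeholder; note this change does not survive a reboot, and jumbo frames only help if both ends and any switch in between use the same MTU):

```shell
# Set a jumbo-frame MTU on one interface (non-persistent).
sudo ip link set dev enp3s0 mtu 9000

# Verify the change took effect.
ip link show enp3s0 | grep mtu
```

If I recall correctly, OMV's web GUI also exposes an MTU field when editing a network interface, which is the way to make it persistent.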


    How much speed did you think you were going to get out of spinning disks? What speed is each side connected at?


    Why the large font?

    Sorry about the large font, don't know what happened. What do you mean by "speed at each side"? I don't quite understand what you mean.


    So is about 200-300 MB/s the max I should be expecting to get from my setup? That seems a little on the low side. I thought I would at least achieve towards 1000 MB/s, but I'm at 1/5 of that.


    Any ways to improve upon that?

    is that about the right speed i should be expecting?


    On my 1GbE connection I get 100 MB/s. In an ideal world I thought 10GbE would give me 1000 MB/s, but more likely 600 MB/s, so I thought 40GbE should be faster still.




    Is about 200-300 MB/s the max I should be expecting?
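    A rough back-of-envelope sketch suggests yes, for this pool size. Assumptions (not measured): ~150 MB/s sequential per 7200 rpm disk, and that a 4-drive RAIDZ2 streams from roughly its 2 data disks:

```shell
# Back-of-envelope arithmetic; the per-disk figure is an assumption.
per_disk=150    # assumed sequential MB/s for one 7200 rpm drive
data_disks=2    # 4-drive RAIDZ2 = 2 data + 2 parity

echo "array ~ $((per_disk * data_disks)) MB/s"   # ~300 MB/s
echo "1GbE  ~ $((1000 / 8)) MB/s line rate"      # 125 MB/s
echo "10GbE ~ $((10000 / 8)) MB/s line rate"     # 1250 MB/s
echo "40GbE ~ $((40000 / 8)) MB/s line rate"     # 5000 MB/s
```

On those assumptions, the observed 270 MB/s is already close to what 2 data disks can stream, so the network link is not the bottleneck; adding disks (6- or 8-wide RAIDZ2) would raise the ceiling.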

    my current setup


    Windows 10 box with a Mellanox ConnectX-3 MCX354A-FCBT 40GbE NIC card, Ethernet connection enabled,


    connected directly with a DAC QSFP+ cable to an OMV 5.5 setup with ZFS RAIDZ2 using 4 3.5-inch drives and a Mellanox ConnectX-3 MCX354A-FCBT 40GbE NIC card.


    When I transfer files across this network through SMB, the max speed I'm getting is only 270 MB/s.



    What am I doing wrong? I've tried different filesystems too (btrfs), but still similar speeds.


    Where is the bottleneck? Some help would be most appreciated.



    I was thinking I can adjust the various settings in the Windows Mellanox driver, such as MTU etc., but how can I do this on the OMV side?
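    On the Linux/OMV side the driver knobs live behind ethtool and ip rather than a vendor panel. A hedged sketch of what to inspect (the interface name enp3s0 is a placeholder for your Mellanox port):

```shell
# Negotiated link speed and duplex.
sudo ethtool enp3s0

# Ring buffer sizes: current vs. hardware maximum.
sudo ethtool -g enp3s0

# Offload features (tso, gro, gso, ...) currently enabled.
sudo ethtool -k enp3s0

# Current MTU on the interface.
ip -br link show enp3s0
```

Ring sizes can be raised with `ethtool -G` and offloads toggled with `ethtool -K`, but whether either helps depends on the NIC and workload, so treat these as things to experiment with rather than a guaranteed fix.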


    Thanks