1000 Mb/s is roughly 125 MB/s in theory, and about 112 MB/s after protocol overhead. So, you are getting full speed.
Oh, I see. It's the "B" vs "b" that I missed, correct?
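For anyone else tripped up by the units, the conversion can be checked with shell arithmetic (the ~10% overhead figure is a rough rule of thumb for SMB over TCP/IP, not an exact constant):

```shell
# Network speeds are quoted in megabits/s (Mb/s); file transfers in megabytes/s (MB/s).
# 1 byte = 8 bits, so divide by 8 to convert.
echo $(( 1000 / 8 ))             # theoretical max MB/s on a 1000 Mb/s link
# TCP/IP + SMB protocol overhead eats roughly 10%, so real transfers land near:
echo $(( 1000 / 8 * 90 / 100 ))  # MB/s, matching the observed rate
```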
Help with identifying why my large file transfer speed is locked at ~100MB/s when both my workstation and dedicated OMV server report 1000Mb/s links on the same segment and the same switch.
The workstation and OMV server connect to ports on the same switch and segment.
The workstation and OMV link speeds are attached as pics.
The transfer speed is attached as a pic.
Any help will be greatly appreciated.
I used ethtool and ip to do the initial troubleshooting and data collection.
Regards,
Rob
The plugin is called "multiple device" if I recall correctly.
Some systems require running an additional script to apply a few fixes to the upgrade. Copy/paste this into a terminal:
wget -O - https://raw.githubusercontent.com/OpenMediaVault-Plugin-Developers/installScript/master/fix6to7upgrade | bash
Once that is run, you should see all the plugins.
This worked great. Problem solved. I want to thank you so much for providing support. I love the open source community and people like you who take the time with idiots like me.
Regards,
Rob
Ran the command that you provided. It worked. Thank you.
Rob
I just upgraded from OMV 6 to 7. It all looks good and appears to be working. During the upgrade process there was a message saying the openmediavault software RAID manager (MD) would have to be installed again. I guess that's because I have RAID 5 configured.
The GUI for RAID management is missing. So is the plugin on the list of available plugins.
Will someone help me fix this?
Regards,
Rob
As the error says, the backports repository is no longer available. Disable backports in the omv-extras plugin, then try again.
Once the upgrade is done, you should be able to re-enable backports if you need it.
The other option is to start from scratch on OMV 7. If you have to do this, make sure you make copies of all your docker compose files and take note of the user and share settings so you can re-create it all.
Bern, I disabled it like you recommended. It looks like it upgraded properly, including my Plex setup. Here's the rub: during the setup it gave me an FYI message about having to add the openmediavault software RAID manager (MD). That management feature is now missing under the Storage menu.
Do you know how I get that back? If memory serves, it's a plugin. But I don't see it on the plugin list. What am I missing?
BTW, I never mess with my OMV OS drive directly. I clone it first and use the clone to mess with until I get things right. So far, so good with your help. Thanks.
Rob
I'm new to Linux but I have been running OMV 6 for a while. It's an amazing tool that's been rock solid as a NAS. I have Plex Server running too.
I tried upgrading due to EOL support using the commands sudo omv-upgrade OR omv-release-upgrade, and it failed. I get the same result on screen either way: see attached pic.
Will someone help me figure out what I'm doing wrong?
Regards,
Rob
SMB file transfer rate appears slow on my 1 gig network (local). A 1.5 GB video file transfers at 112 MB/s. Is this normal? If so, do I have to upgrade to a 2.5 gig network to get faster transfer speeds?
Setup:
Workstation connected to same 1 Gig switch as OMV NAS with files
The attached pic shows that the network card is connected at 1 gig
OMV 6 NAS Server connected to same 1 gig switch as Workstation
The attached pic shows that the network card is connected at 1 gig
The transfer rate: 112 MB/s. It's the same transfer rate for all large files on the server.
I isolated the two machines on another switch and tested again with the same results: TP-Link TL-SG108 (8 ports).
Notifications are not working after setting them up according to the OMV guide and other examples provided by users. I was using Google but heard Google stopped supporting that.
Will someone please help me out? What free mail service can I use for OMV to send its notifications to, so that service can relay/forward the emails/notifications to my email?
Regards,
Legacy boot versus UEFI boot. maybe?
Can't answer question about "repairing" OMV installation as I've not studied that thread. But I believe omv-regen may do what you want.
After extensive testing, I found the problem but not the root cause. For some reason, turning off unused onboard motherboard devices to save power caused my problem. To fix it, I loaded the motherboard BIOS defaults. Once rebooted, all of my problems disappeared. My system is up and running just like the results I had on my test bed. You see, I never touched the test bed's BIOS when I set everything up, and it all worked. So, on a final desperate hunch, I loaded the defaults on my production rig and it worked. I swear, this is crazy! How can touching the BIOS for basic stuff like shutting off audio and/or limiting the CPU TDP to specifications cause my problem?!
The good news, I'm up and running again with the functionality I needed. ~Rob
How much was the SATA3 card that is giving you trouble?
Have you looked for LSI SAS cards in that price range? My last purchase was an LSI 9207 that I paid $35 for. Many available on eBay, some even include cables.
If the RAID was created by software (the case for any RAID created in OMV) there will be no problem. Any different system should be able to recognize that RAID.
The problem arises if you created the RAID in hardware, for example in the server BIOS. In that case the RAID is tied to that hardware, and if the hardware dies you need equivalent hardware to recover the RAID.
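For OMV's software RAID (Linux mdadm), this portability is visible on the disks themselves: the array metadata lives in a superblock on each member, so replacement hardware can scan and reassemble it. A sketch only, assuming the members show up as /dev/sdb through /dev/sdd (placeholder names for your disks):

```shell
# Inspect the RAID superblock that mdadm wrote onto each member disk.
# The metadata travels with the disks, not with the controller or motherboard.
mdadm --examine /dev/sdb /dev/sdc /dev/sdd

# On replacement hardware, scan all disks and reassemble any arrays found.
mdadm --assemble --scan

# Confirm the array came back.
cat /proc/mdstat
```

These commands need root and real member disks, so treat them as a reference, not something to paste blindly.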
Good point. I used the OMV 6 native RAID solution. I just created another thread similar to this one. I used the same HW on a test bed to mess with different scenarios and test what I can and can't do to my satisfaction, since getting answers has been spotty at best. I guess that is how it is with free solutions.
The 3 WD hard drives were from another RAID 5 set on completely different HW. Amazingly, OMV recognized it, just like you indicated for the difference between HW- and software-based RAID solutions. Take a look at my other thread on this. I'd love to get your take and expert advice. Thanks ~Rob
RAID is not a backup, remember that. I would avoid RAID at all costs.
If you replace the gear with the same OMV boot drive, all will be well (other than RAID cards).
If a drive dies, OMV will say on the RAID page it's dead. Then you will have to risk all the data: remove the old drive, add the new drive, resync the whole thing, and hope all is well at the end.
When I say risk, I mean if anything happens to the other drives while you are doing the above, it's gone, all gone.
I would worry more about the drives than the hardware.
See my link in the sig about RAID.
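The replace-and-resync procedure described above maps onto mdadm roughly like this (a sketch only; /dev/md0 and /dev/sdX are placeholders for your array and the failed/replacement disk):

```shell
# Mark the dead disk as failed and pull it out of the array.
mdadm /dev/md0 --fail /dev/sdX
mdadm /dev/md0 --remove /dev/sdX

# Physically swap the drive, then add the replacement.
mdadm /dev/md0 --add /dev/sdX

# Watch the rebuild; the array stays degraded (and at risk) until it finishes.
cat /proc/mdstat
```

OMV's RAID page drives the same steps from the GUI; the risk window is the rebuild, when a second drive failure loses the array.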
Sorry for the late response. Thank you. I will take a look and your advice is solid. ~Rob
Legacy boot versus UEFI boot. maybe?
Can't answer question about "repairing" OMV installation as I've not studied that thread. But I believe omv-regen may do what you want.
Great question. I can't remember what I used in the production system install. I can tell you that in the test bed install, it asked me if I wanted to force a UEFI install, and I opted for "Yes".
How much was the SATA3 card that is giving you trouble?
Have you looked for LSI SAS cards in that price range? My last purchase was an LSI 9207 that I paid $35 for. Many available on eBay, some even include cables.
I have not. But I don't believe it's a card issue, as indicated by my recent post where I set up a test bed with the exact same hardware, with a few exceptions. The card works fine with OMV 6 there, telling me it's likely an OS config and/or driver issue. If the card is detected with a fresh install, like in my case, everything works great. The fresh install proved it's not the hardware; it's likely an OS config/driver issue, or me (novice). The card was ~$65 with hardware RAID capabilities:
RocketRAID 600 series HBAs are an ideal 6Gb/s SATA storage solution for any PC or Mac platform, and are available with 1 to 4 Mini-SAS/SATA/eSATA ports in internal and external configurations. The compact PCB design, available low-profile form factor and industry standard port connectors make any storage upgrade, integration or expansion project a snap.
RocketRAID 600 HBAs deliver HighPoint's industry-proven RAID technology, and a comprehensive RAID Management Suite. Customers can quickly and easily configure a wide range of storage configurations, including RAID 0, 1, 5, 10 and JBOD arrays.
Thanks for the lead on good HW. I will look into that if I can't solve this.
But which Marvell chipset? The 88SE9230/5 were common in the past, but I've no idea about that JESOT card. I did come across an old thread which suggested you might need to turn virtualisation off in the motherboard BIOS when using some of these Marvell-based SATA cards.
KR, I don't know, but I tried adding it to a known-good computer system that has Windows on it. Win10 could not recognize it. I believe the new card is defective out of the box.
Here is something interesting I did to test my theory. I always keep redundant new hardware identical to my server with OMV and Plex server on it. That said, I set up that hardware on my table. Here is what I set up:
GigaByte H610I DDR4 ITX MB
8Gig Memory Stick
Highpoint 640L RAID card for the PCIE slot
(4) WD 3TB Red drives (old ones lying around)
(1) Samsung SSD 250 GIG SATA drive
Added 2 drives to the SATA ports built into the MB (SSD and WD)
Added 2 drives to SATA ports on the Highpoint 640L card
Have 1 WD drive ready to add to grow RAID 5
Connected the SATA3 cables to the SSD/WD on the MB and 2 WD drives on the Highpoint card. Waiting to simulate adding the other drive after I install OMV 6, to grow the RAID.
The install went well, unlike on my production OMV server where I'm having problems with the Highpoint 640L card being recognized by the OS.
Created the RAID 5 and file system. Added one large movie to a shared folder. No errors reported.
Connected the last WD drive and booted the system. The system recognized it. I turned on SMART and rebooted the system.
Quick-wiped the drive, then added it to the RAID 5 using the grow function. Currently, the "State" is clean, reshaping, meaning it's recalculating the RAID to accommodate the added drive. That will take time.
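For reference, the grow operation OMV performs here looks roughly like this with mdadm under the hood (a sketch only; /dev/md0 and /dev/sde are placeholder names, and OMV's GUI handles all of this for you):

```shell
# Add the new disk to the array, then reshape across 4 devices.
mdadm /dev/md0 --add /dev/sde
mdadm --grow /dev/md0 --raid-devices=4

# The reshape runs in the background; /proc/mdstat shows its progress.
cat /proc/mdstat

# Once the reshape finishes, the filesystem still has to be expanded, e.g.
# resize2fs /dev/md0 for ext4 (OMV offers this from the filesystem page).
```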
So far, no issues at all. All the same hardware used. Here is what is different between my production server and the test bed:
Using a SATA3 SSD vs an NVMe drive for the OS (I'm wondering if there is a BIOS config or resource-sharing issue)
Test bed vs case with hot-swap bays and a backplane
Here's my question. How can I reinstall OMV 6 over my existing install (after I clone the drive, of course) while preserving all of my configurations like user accounts, ACLs, RAID 5 configs, etc.? My hope is that if there is a resource conflict or driver issue for the new Highpoint SATA card, it gets worked out during the reinstall using the install script. There is an OMV forum post that says this can be done; I just need clarification on how to do it. Is it simply popping the install USB drive in and booting to the install program on it, or do I have to grab the script from the OMV library somewhere? The following is the link and a cut-and-paste:
If you want to play it safe, make a working clone of the OS drive PRIOR to trying anything further.
To try to fix whatever issues are happening, you can run the install script to see if it will fix it:
----------------------------------------------------------
With my lab, I plan on testing both the script and a new install. With the new install, I want to make sure my RAID and the data on it are preserved and can be remounted. Then I can take screenshots of my configurations for Plex, users, services, extras, etc., you know what I mean, yes? Sometimes a fresh install is better. I know with Windows 10, I can easily do a fresh install and it will recognize its own RAID and dynamic disks and remount them with one click. That's my hope here. I don't dare experiment with my drives with data unless I'm completely sure.
Please provide your thoughts and technical input. I greatly appreciate your time.
Rob
wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash
RE: highpoint rocket 640L RAID controller
You need to ensure this (and any other add-on SATA card) is configured to work in pure HBA mode when using Linux software MD RAID in OMV.
This document indicates the procedure to follow: https://filedn.com/lG3WBCwKGHT…ck_Installation_Guide.pdf
You might find the configuration easier to do/test on a Windows machine before re-installing the card in your OMV box. This may not solve your problem, but it should be done regardless of the other checks.
What about the other card I listed, which isn't a RAID card? The BIOS does not see it, but I will check whether Windows will. Why isn't that one working? The Marvell chipset is supposed to be the most compatible with Linux, from what I've heard.
I've tried to add SATA3 cards to expand beyond the 4 ports that come on the motherboard. I want to add more drives to grow my RAID 5. The Gigabyte H610I MB BIOS sees one card but not the other: the Highpoint Rocket 640L RAID controller (BIOS sees this card) and a JESOT 4-port card using a Marvell chipset (BIOS can't see this).
When I attach only 1 disk to the Highpoint, OMV boots up correctly without error and the new drive shows up correctly. But when I attach more drives, I get the same errors during bootup as shown in the attached picture, and none of the drives show up in OMV.
When I use the JESOT, the onboard LEDs light up, indicating the drives are attached, but I still get the same bootup errors and no drives show in OMV.
OMV will eventually boot and function properly. My existing disks and RAID 5 show up and work. But no new disks can be added through the SATA3 adapter.
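When a SATA card half-works like this, the kernel log usually says why. A generic diagnostic sketch using standard Linux tools (nothing OMV-specific assumed):

```shell
# Is the card visible on the PCIe bus at all, and which driver (if any) claimed it?
lspci -k | grep -iA3 'sata\|raid'

# Kernel messages about SATA link resets, timeouts, or failed port probes.
dmesg | grep -iE 'ata[0-9]+|ahci|sata' | tail -n 40

# Which disks the kernel actually registered, and over which transport.
lsblk -o NAME,SIZE,MODEL,TRAN
```

A card that appears in lspci with no "Kernel driver in use" line points at a driver problem; repeated ata link-reset messages point at cabling, power, or the card itself.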
I'm desperate for help.
Regards,
Hypothetical Scenario:
If my OMV 6 server motherboard and/or SATA PCIe adapters die and are replaced, will OMV automatically recognize the RAID 5 drives? Or do I lose everything? Why do I ask? Windows 10 will automatically recognize striped disks and/or mirrored drive sets and ask if you want to recover or add them to the new system. Windows makes it easy to discover a RAID setup and its associated disks. I'm worried about hardware dying and not being able to quickly recover my RAID 5 and its 3 associated disks.
Please advise.
Rob
Setup:
Problem:
Help Request:
OMV Diagnostic Report:
= Network interfaces
================================================================================
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether d8:5e:d3:da:26:98 brd ff:ff:ff:ff:ff:ff
altname enp0s31f6
inet 192.168.55.60/24 brd 192.168.55.255 scope global eno1
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:0c:c2:6d:e7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
--------------------------------------------------------------------------------
Interface information eno1:
===========================
Settings for eno1:
Supported ports: [ TP ]
Supported link modes:
10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes:
10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Auto-negotiation: on
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
MDI-X: on (auto)
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000007 (7)
                       drv probe link
Link detected: yes