Posts by vlad1966

    I've installed OMV on plenty of different systems in the past & had very few issues, but this time it's a PITA.

    I built a new NAS yesterday with the following Specs:

    ASROCK B760M-ITX/D4 WiFi mobo

    Core i3-13100 CPU

    16GB Corsair Vengeance DDR4-3200

    Samsung 980 500GB M.2 SSD

    2 Seagate Exos 14TB HDDs

    Trendnet 2.5GbE PCIe NIC

    OMV 6.0.24

    I've tried installing OMV 6 or 7 times & the end result is the same:

    Everything installs fine during the installation process, no issues at all. I remove the install USB, reboot, it starts the reboot, passes through the GRUB boot loader, & always freezes on the following message:

    /dev/nvme0n1p2: recovering journal

    /dev/nvme0n1p2: clean, 41625/30433280 files, 2721259/121715200 blocks

    It seems to be having some kind of issue with my Samsung M.2 drive (NVMe)?

    Thing is, if I reboot, the first set of numbers will be slightly higher, for example:

    /dev/nvme0n1p2: recovering journal

    /dev/nvme0n1p2: clean, 41631/30433280 files, 2723636/121715200 blocks

    If I reboot a second time, the first set of numbers will be higher yet again:

    /dev/nvme0n1p2: recovering journal

    /dev/nvme0n1p2: clean, 41636/30433280 files, 2725865/121715200 blocks

    It's like it's trying to run some kind of check on the M.2 drive but getting stuck.

    I've tried enabling/disabling all kinds of BIOS settings I thought might help, but nothing has worked.

    Secure Boot is currently disabled in the BIOS, but enabling it didn't help either.

    Please help me fix this. Thanks
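    One diagnostic worth trying for this symptom (a sketch, assuming a standard Debian/OMV GRUB setup): the fsck lines are often just the last thing printed before a hang that's actually somewhere else, and sometimes the system is actually up with only the console stuck. The IP address below is a placeholder.

    ```shell
    # Remove "quiet" so the console prints every boot message instead of
    # stopping at the fsck summary, then regenerate the GRUB config.
    # Assumes the stock Debian default line in /etc/default/grub.
    sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT=""/' /etc/default/grub
    sudo update-grub

    # From another machine, check whether the box actually booted and
    # only the local console is frozen (substitute your server's IP):
    ping -c 3 192.168.1.100
    ssh root@192.168.1.100
    ```

    If SSH works, the install is fine and the "freeze" is a display/console issue rather than an NVMe problem.
    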

    Hello Everyone,

    Currently installed version is 6.0.24-1 (Shaitan) on my OMV server.

    I go to the web interface to System > Update Management > Updates & it finds the following updates:

    openmediavault 6.2.0-2

    libnss-systemd 247.3-7+deb11u1

    linux-image-6.0.0-0.deb11.6-amd64 6.0.12-1~bpo11+1

    linux-image-amd64 6.0.12-1~bpo11+1

    I click the Install Updates icon & I get the following:

    Reading package lists...
    Building dependency tree...
    Reading state information...
    Calculating upgrade...
    The following packages have been kept back: linux-image-amd64 openmediavault
    0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.


    Is there a way to get these updates installed???
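    For reference, "kept back" usually means the upgrade would pull in new packages (here, a new kernel image), which a plain `upgrade` refuses to do. A CLI sketch, assuming SSH access to the server; `omv-upgrade` is OMV's own upgrade wrapper, if your version ships it:

    ```shell
    # Allow the upgrade to install the new dependencies it needs:
    sudo apt-get update
    sudo apt-get --with-new-pkgs upgrade

    # Or do a full upgrade (may also remove packages):
    sudo apt full-upgrade

    # OMV's own wrapper, which handles held-back OMV packages:
    sudo omv-upgrade
    ```
    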


    So, like a dummy, I sold my Pentium G4560 / ROG Strix B250I Gaming motherboard (combo A) recently on eBay because I wanted more up-to-date hardware to run my OMV server. The latest version of OMV had been running great off that combo. I replaced it with a Celeron G5900 / ROG Strix B460-I Gaming motherboard (combo B), along with four 4TB Toshiba NAS HDDs. I intended to just use the same boot drive I had been running on combo A with the new combo B.

    When I booted the new combo B, I discovered that OMV could no longer find my NIC, the Intel I219-V - the same NIC that was on my previous motherboard. Yeah, this makes sense :/

    So I figured I'd try a fresh install on my new combo B today, with the latest downloaded version of OMV. No go! I even tried the wireless AX200 on the mobo; also not recognized. I tried all the various Intel drivers during the install, and none of them worked, including the e1000e. WTH!

    The only thing I can think of is that maybe the B460 chipset is too new to be recognized by OMV? Doesn't make sense, though, since I tried booting Linux Mint 20.04 & the I219-V was recognized no problem.

    Any ideas anyone?
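    A likely avenue (hedged): the stock kernel in older OMV releases predates the B460's I219-V variant, while the newer e1000e driver in a backports kernel usually picks it up — which would also explain why a newer Mint kernel sees it. A sketch assuming a Debian Buster base (OMV 5), after getting online temporarily via a USB NIC or phone tethering:

    ```shell
    # Add the Buster backports repo and pull its newer kernel, whose
    # e1000e driver supports the I219-V variant on B460 boards.
    echo 'deb http://deb.debian.org/debian buster-backports main' | \
        sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt-get update
    sudo apt-get install -t buster-backports linux-image-amd64
    sudo reboot
    ```

    OMV-extras also exposes a backports kernel switch in its Kernel tab, which does roughly the same thing from the GUI.
    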

    Is this a separate PCIe card (not built into the motherboard)? If it's installed in a PCIe x16 slot, try moving it to a PCIe x1 slot (assuming your motherboard isn't a Mini-ITX board).

    I bought a Fenvi PCIe WiFi card not too long ago that has the AX200 chipset - neither Win10 nor the latest version of Linux would recognize it in any PCIe x16 slot - driver issues. So I returned it. If I'd used it in an x1 slot, it might've worked.

    Give it a shot if you have an x1 slot on your mobo.

    Not sure what I'm going to do about the high 35-40C temps the drives are under, even when the platters are spun down, and the drives aren't being used. Don't really want to have a fan constantly blowing on them. Sort of defeats the purpose of the HC2. If I put them back into a PC (node 304) running OMV, at least then they can be powered on/off.

    It's easy to power them off thru OMV web GUI, but REALLY wish there was an easy way to power the HC2's on, without constantly having to unplug/plug them back into the AC adapter. The high drive temps are a deal-breaker.

    Anyone have any ideas for easily powering on the HC2's with them being plugged into the AC & turned off?

    There being no power button & no WOL support was a double-bonehead move by ODROID, IMO.

    So I bought a 10TB WD MyBook and shucked the drive to use as a backup in another HC2. I managed to figure out how to flash the FW in both HC2's to set spindown to 5 minutes, but it doesn't seem to be helping drive temps.

    It seems like the platters stop spinning after 5 minutes, but the drives still get hot. After spindown the WD gets to about 40C & the Seagate to about 45C. I had to unplug them last night until I figure this out because I didn't want them getting any hotter. I think ODROID made a mistake not giving the HC2 a power button that could turn these things on/off.

    I actually had to point a small Vornado fan at both drives when the Seagate was transferring files to the WD so they wouldn't get ridiculously hot. Not a problem if you're only transferring a few GB of data at a time, but it is with 7TB worth of data.

    So, how can I get temps to normal (mid 30C?) when the drives aren't in use without having a floor fan pointing at them full blast?

    Isn't there a spindown time automatically set in the bridge firmware already? Or is the default 0 so that it doesn't spin down at all?

    How do I upload the updater to the HC2?


    So I bought an Odroid HC2 recently, running a 10TB Barracuda Pro hard drive in it. Installed Armbian (not a big fan); I'd rather just install OMV "bare metal", but since there seems to be no way to do that now, whatever.

    Problem is I can't seem to get spindown to work reliably. It works one time if I set it under the Disks section, set to spin down after 5 minutes if there is no activity, but then it doesn't work a second time. If I change one of the parameters, it works again the 1st time, but not afterwards. Very annoying.

    I've tried playing with Suspend from the Power menu & that works no problem, but then I can't figure out how to wake the system, since apparently the HC2 doesn't support WOL (great choice, ODROID).

    Any ideas?
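    For anyone hitting the same thing, a sketch (device name `/dev/sda` is an assumption): the HC2's USB-SATA bridge (JMS578) often ignores the drive's own `hdparm -S` standby timer, which matches the "works once, then stops" behavior. `hd-idle` sidesteps the bridge by watching I/O from the host and issuing the spindown itself.

    ```shell
    # Check whether the drive is actually spun down right now:
    sudo hdparm -C /dev/sda        # prints "active/idle" or "standby"

    # If hdparm's -S timer isn't honored, use hd-idle instead
    # (packaged on newer Debian/Armbian releases; otherwise build it):
    sudo apt-get install hd-idle

    # Disable the default timer (-i 0), then spin sda down after
    # 300 seconds of inactivity:
    sudo hd-idle -i 0 -a sda -i 300
    ```
    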


    Here's my situation:

    I was using a WD Red 12TB HDD with my Odroid HC2 as a Data drive. Not sure if it matters, but the HC2 was running OMV 4. The 12TB Data drive should just be formatted EXT4, nothing exotic, no partitioning, etc. This drive has all my Movies, TV shows, software, etc.

    I recently built a new OMV server running OMV 5 with a 6 x 4TB software RAID 0 array. I removed the 12TB drive from the HC2 & installed it in the new server, thinking it would be faster to move the files to the RAID array over the internal SATA bus than over the network (if I'd kept the 12TB drive in the HC2).

    How do I "import" the 12TB drive into my new server without formatting it, so that I can transfer all my data from the 12TB drive to the RAID array?

    I'm trying to avoid having to put the 12TB drive back into the HC2 to transfer the files over my network (server has 10GbE, but HC2 is only GbE).
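    For the record, mounting an existing filesystem in OMV doesn't touch the data — in the GUI it's just Storage > File Systems > Mount on the existing EXT4 volume. The CLI equivalent looks roughly like this (device name, mount points, and the RAID label are all assumptions):

    ```shell
    # Identify the 12TB drive's partition:
    lsblk -o NAME,SIZE,FSTYPE,LABEL
    sudo blkid /dev/sdb1

    # Mount it read-only somewhere temporary; no formatting involved:
    sudo mkdir -p /mnt/wd12tb
    sudo mount -o ro /dev/sdb1 /mnt/wd12tb

    # Copy everything onto the array (OMV mounts filesystems under
    # /srv/dev-disk-by-label-<LABEL>; "raid" is a placeholder):
    rsync -a --info=progress2 /mnt/wd12tb/ /srv/dev-disk-by-label-raid/
    ```
    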


    Bad idea. One drive fails, all data is lost.
    You might want to have a look at mergerfs. There is the unionfilesystem-plugin from OMV-extras for that.

    That's why I'd be "rsyncing" the RAID array to the 12TB Red drive, eventually to another 12TB Red for full capacity backup :)

    How does mergerfs compare in speed to RAID 0? I might reconsider and go with unionfs if it can match RAID 0's speed.

    So early next week, I'll be putting together my new OMV server. Can't freakin' wait :)

    The hardware will be as follows:

    - Pentium Gold G5400 CPU
    - ASUS Z390M-Pro TUF Gaming WiFi mobo
    - 16GB DDR4 RAM
    - 128GB NVMe SSD for boot
    - Fujitsu 9211-8i D2607-A21 LSI SAS2008 Controller in IT Mode
    - Mellanox ConnectX-3 10GbE PCIe NIC
    - (6) Seagate Iron Wolf 4TB HDD
    - (1) WD 12TB Red HDD (will add another in the future - only have about 11TB of data on it)
    - Fractal Design Node 804 Case
    - Corsair SF450 Platinum PSU (Great cables on this puppy)


    The plan:

    - Install all 7 HDDs into the case
    - Create a software RAID 0 with the 6 IronWolfs
    - Move 11TB of data from the WD Red drive onto the 24TB RAID 0 array
    - Leave the 12TB WD Red as a standalone backup drive inside the system

    Right now, the WD drive is running on an ODROID-HC2

    Looking to do this not only for capacity, but also for speed over a 10GbE network

    I figure having the 12TB Red internally should make backing up the data from the RAID array pretty quick. Eventually I'll get another 12TB Red to be able to backup the full capacity of the RAID array.

    I'm pretty good with hardware, but software/networking is sometimes my Achilles Heel.

    Need Help: :?: Trying to find a plugin that would automatically backup individual folders from the RAID array to the standalone WD Red 12TB drive (drives eventually).

    Thanks! - Apologies for the long-winded setup. Any/all suggestions welcome (hardware/software/network-wise).
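    Worth noting: OMV has a built-in Rsync service (Services > Rsync > Jobs) that does exactly this kind of per-folder scheduled backup without any extra plugin. A minimal cron-based equivalent as a sketch — all paths below are placeholders for the actual RAID and backup-drive mount points:

    ```shell
    # /etc/cron.d/backup-folders — nightly, mirror individual folders
    # from the RAID array to the standalone 12TB Red drive.
    # --delete makes the backup an exact mirror (removed files are
    # removed from the backup too).
    30 2 * * * root rsync -a --delete /srv/raid/Movies/  /srv/wd12tb/Movies/
    35 2 * * * root rsync -a --delete /srv/raid/TVShows/ /srv/wd12tb/TVShows/
    ```
    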

    Thanks Adoby,

    You've given me some things to think about/investigate. bcache? fscache? Looks like I'll have to do some investigating. :)

    I'd love to use large 16TB drives, but $$$ is a concern. Besides, I currently have only about 11TB worth of Data/Files, so don't need massive storage for now. I do want 10GbE everywhere on the network, since I'm an impatient SOB who HATES waiting on GbE file transfers :sleeping:

    Not really too concerned with power consumption. Each server will be 8GB RAM, a Pentium G5400, a 10GbE NIC + whatever drives.

    Never tried installing OMV on top of Debian. That could prove interesting. Any guides around for that? I've used Linux Mint & Manjaro a bit, but I'm a Debian noob.


    So, like the title mentions, I plan on building a couple of new, identical OMV servers (one Main & one Backup). Thinking 18TB total capacity in each; I'll use 10Gb Ethernet with Mellanox ConnectX-3 cards connecting directly to my main PC (for fast file transfers from the PC to either server & also server-to-server). Trying to get a nice blend of capacity & performance without spending a ton of $.

    I was thinking each server would have (3) 6TB Seagate IronWolf drives (each server will be housed in a Cooler Master Elite 110 case, so not a lot of room for HDDs - 3 max; I want a small server).

    I'm thinking if lucky, I can maybe get 450-500MB/s data transfer rates across this 10GbE network.

    But what if I decided to splurge a little (a lot, actually) and replaced one of the 6TB IronWolfs with 6TB of SSD storage? Like (3) Samsung 2TB QVO SSDs in each server (or a faster SSD?).

    *** Is there any way in OMV to do this for RAID 0-like performance without losing 4TB of capacity on each HDD, since RAID 0 total capacity is based on the smallest drive in the array? So I'd be looking at somehow combining (if possible) 2 RAID 0 sets: a RAID 0 set of (3) 2TB SSDs + a RAID 0 set of (2) 6TB IronWolfs. I'd be looking to do this on both servers.
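    One way this can be done (a sketch, not a recommendation — every device name below is an assumption): build the two RAID 0 sets separately with mdadm, then pool the two resulting filesystems with mergerfs so they present as one volume. Files aren't striped across the two pools, but no capacity is lost.

    ```shell
    # RAID 0 across the three 2TB SSDs (~6TB):
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 \
        /dev/sda /dev/sdb /dev/sdc
    # RAID 0 across the two 6TB IronWolfs (~12TB):
    sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 \
        /dev/sdd /dev/sde

    # Format and mount each array:
    sudo mkfs.ext4 /dev/md0 && sudo mkfs.ext4 /dev/md1
    sudo mkdir -p /srv/ssd /srv/hdd /srv/pool
    sudo mount /dev/md0 /srv/ssd
    sudo mount /dev/md1 /srv/hdd

    # Pool both into one ~18TB mount; category.create=mfs places new
    # files on whichever branch has the most free space:
    sudo mergerfs -o defaults,category.create=mfs /srv/ssd:/srv/hdd /srv/pool
    ```

    In OMV the last step is what the unionfilesystems plugin from OMV-extras does through the GUI.
    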


    Adoby, one more question on the HC2:

    I notice it doesn't seem to have an on/off button. Do you keep your drives connected & running all the time? Or do you put them to sleep & then wake them with WOL? How do you handle that?


    Adoby, are you using the Odroid HC2 with the 16TB Exos X HDDs? I thought the HC2 was limited to 12TB drives.

    I'm thinking about getting a couple of HC2's & using one for shared storage & one for backup like you, but I'm nervous about the SATA being USB-based. Any issues with them? Especially when both are used at the same time (backing up from one to the other)?


    PS: Wish there was a version where the SATA was PCIe-based w/10GbE