I used an M.2 disk and upgraded the BIOS. The OMV installer is seeing my disk. I used the latest release and flashed it onto a pendrive.
Is the drive you installed it to set in the BIOS to be the first boot drive?
Depending on the device used, it may not pass the serial number through at all, it may pass its own "serial number" instead, or it may possibly mix them up.
Device in terms of hard drive, or device in terms of SAS/SATA/RAID controller? (Keeping in mind, I really haven't figured out how those controllers interact yet; my one server has 2 different controllers, which really upset my limited understanding.)
Quote: "Maybe with Plex but not Kodi streaming directly from NFS."
I'm really not sure about Plex, didn't know that about Kodi. My brain is still working with what information I dug into 15 years ago lol. Haven't really studied up on it since.
As for NFS, sounds like I need to learn more. I've messed with NFS, but with it being more troublesome than SMB I just stuck with SMB. I had read somewhere that it wasn't as fast as SMB, and with SMB multichannel I've found more reasons to stick with it.
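For anyone curious, the server side of SMB multichannel is basically a single Samba setting; in OMV I believe you would put it in the SMB/CIFS extra options field rather than editing smb.conf by hand, and you should check that your Samba version actually supports it (a sketch, not a recipe):

# smb.conf / OMV "Extra options"
server multi channel support = yes

The client side (Windows, at least) negotiates the extra channels on its own, provided both ends have multiple NICs or an RSS-capable NIC.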
Quote: "DVDs and Blu-ray are compressed already. DVDs are MPEG-2 and Blu-ray is usually MPEG-2 or H.264. MKVs are usually H.264 or H.265. I get that you have more or less aggressive compression from those formats. I re-encode my rips to H.265 MKV. I doubt most people would be able to tell a difference in a test where they didn't know which is which."
Agreed, but I don't see the benefit of compressing it more. Unless I learn of a lossless encoder, I'll stick with the ISOs as much as I can - looking forward to the day that BD menus can finally function in Kodi too lol. Obviously I avoid Plex for the fact that it doesn't handle ISOs at all.
My understanding from TrashGuides is that h265 is an inferior format to h264. As for telling the difference, no, most people don't even look at picture quality, period. I find myself distracted from the movie when a dark or bright scene, or a scene with a lot of sky in the background, comes up and you see pixelation.
I do not do ANY Transcoding.... I play DIRECT PLAY from Server to Kodi, Office to TV via CAT8 LAN.
Transcoding, I noticed, when playing a 4K movie to the TV, actually downgrades the quality. All my movies are 40-90GB 4K movies; I do not transcode unless I am viewing remotely.
Maybe it's my knowledge being dated, my understanding was that as soon as you stream over a network, transcoding takes place. I never really looked into Direct Play on Plex, as the only time I ever use it is when I'm on a TV without Kodi.
Transcoding absolutely does exactly that: it compresses the data. I even avoid .mkv conversion, though - I rip straight to .iso whenever possible. Not to say I don't have .mkv's, but when I compare an .iso and a 'high quality' .mkv of the same disc, there's still a difference.
Do any of them have a SATA/SAS expander, USB, or HBA in front of them?
They all would, they're in 12 bay servers.
Well that's crazy then. I've got several disks like that.
dd can't change the serial number. It is very difficult to change the serial number of a hard drive. No one can accidentally do it.
I haven't seen the serial number wrong on mine. I have four different manufacturers in my server and all 10 are correct in the Disk tab, smart, and drive label.
I'll have to check to see which one's correct. Judging from the SMART details and the hours shown, I'd guess the SMART information is the more likely of the two to have been messed with, but also the more likely to make that obvious?
I can only guess at why they'd be showing 2 different serials.
Look at the drives in the pool and then get the serial number of the disk from the Storage -> Devices tab. Next time the server is off, slide the disk out and find the serial number on the label.
Just noticed this part of your response. I've done that in the past with OMV and noticed there's some inconsistency in where you'll find the actual serial number that's on the label, depending on whether you're looking at the Devices tab or at the S.M.A.R.T. information - they're often different from each other. Maybe it's due to using previously owned drives, where they've been dd'ed or something?
For example, one of my current drives, /dev/sdc:
From Devices under Disks: /dev/sdc, ST8000NM0055-1RM112, serial ZA11TJ7L, ATA, 7.28 TiB
Same drive under SMART: /dev/sdc, ST8000NM0055-1RM112, ATA, serial ZA128D0Q, 7.28 TiB, 32 °C
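If it helps to compare outside of the GUI, something along these lines (run as root; /dev/sdc is just my example device) should show whether udev and SMART agree on the serial:

lsblk -o NAME,MODEL,SERIAL /dev/sdc                              # what the kernel/udev reports
udevadm info --query=property --name=/dev/sdc | grep -i serial
smartctl -i /dev/sdc | grep -i serial                            # what the drive itself reports over SMART

My guess is the Disks/Devices tab is showing the udev value while the SMART page asks the drive directly, but that's an assumption on my part.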
I have a Plex docker running server side, then on my TV I just install Kodi on my Xbox Series X and stream direct 4K flawlessly.
See, that's the thing. As soon as it's transcoded, I'm not so sure it's flawless. Just compare Netflix or Amazon "4k" to a good local copy of the same video and you'll see what difference it makes.
How would it be better at video over a network than a dedicated GPU direct from drives without transcoding?
I should probably qualify that - I don't like using stripped down compressed video files. I prefer raw ISOs direct from discs. This is an effort to move away from using Kodi on an Xbox Series S over wired 1Gbe.
Just want to thank all the people who make OMV great. After spending dozens of hours messing around with building a 60TB Debian based server and trying to get things to work, hour after hour of searching up commands and firmware and fixing, and tweaking and frustration - I was finally "done". Now I could finally upgrade the hardware on my backup machine, which luckily runs OMV and my first use of ZFS, which hadn't let me down in over a year so far.
After a couple of hours of setting up the server hardware and installing updates, I threw in the OMV install USB. Going onto M.2 drives, the install took maybe 20 minutes. Booted up, logged in, no errors, no hardware missing, everything was recognized and seemed to be working. That's the Linux I remember. Ran some updates, stopped by the forums to remind myself how to install ZFS for the transfer of the drives, and decided to commit to just pulling the hard drives out of the old machine and throwing them into the new one.
By now, it was 2am. I just wanted to be finished.
With the new machine running, I started plopping drives into it. All the while, realizing - Oh no - I didn't keep very good track of which hard drive came from what slot, and with 2 volumes, I'm not sure how I'll match them back up again. MDADM? No idea. I'll have to find a guide or something.
I figured, well, I'm probably in for a mess to clean up no matter what, so just throw them in and deal with the consequences. I'm sure I'll get it, just don't lose the data. Maybe I can find some way to match them up via UUID or label, hopefully it won't be too bad to deal with.
Coming from typically using RAID, I had no idea.
Finished inserting all 12 drives and crossed my fingers that I wasn't in for another sleepless night.
Logged into OMV, went to ZFS, and decided to give 'Tools --> ZFS import --> Import All --> Import' a try. I cringed and looked away as I clicked.
DOH. An error. No surprise. Clicked on it to hold it in place while I read it.
I swear OMV was just playing with me, as the message read in a nutshell, almost sarcastically - "Hey, these pools came from another machine ya know. You COULD use -f to force the import if you want, but you know, it's up to you".
I knew it. Here comes the final error telling me I'll need to learn the entire structure of ZFS to make this work.
Back to the menu, check off the 'Import all' and 'Force' buttons and try again.
I click 'Import'.
No error message. That's good.
I then watched as my 2 ZFS pools populated the screen, labelled as I named them, with the correct volume size and usage.
This can't be.
Is it really this easy? Did it seriously work on the first try? Did it seriously not care where in the array the drives were located, and it just matched them up anyway?
Set up my shares and checked everything. It was all there.
The process took all of about 30 seconds.
Damn. Thank you OMV. After all the headaches of the past week of setting up my main server, this was the one redemption for Linux (Debian especially). And thank you to the people that made OMV what it is. If only I could have a desktop environment to run Kodi directly off my server to my theatre, I'd be using OMV for every computer from now on. But I'll take what I can get, and thank you for that.
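For anyone who hits the same thing from the command line, my understanding is the GUI was basically doing the equivalent of this (treat it as a sketch and check the man page before trusting my flags):

zpool import        # list pools found on the attached disks
zpool import -a -f  # import all of them, forcing past the "pool was last used on another system" warning
zpool status        # sanity check afterwards

ZFS identifies pool members by the metadata written on each disk, which is why the physical slot order didn't matter.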
If you are doing the transfers over a Samba mount you will still probably hit a bottleneck due to Samba. The last I checked, Samba is effectively single threaded per connection, and will probably peak out in the 2 to 2.5 Gbps area even if "tuned". You will likely get better performance via NFS or rsync over SSH.
As I mentioned, I've tried every combination I know of for protocols. With a single, or multiple 1Gbps connection, it maxes out, then cuts in half with a second transfer, no matter what combination I'm using.
If the direct connection is using a static IP that is not on your LAN subnet, you could use something like rsync to sync/copy between the systems via SSH, or volume mounts that are made over that IP address. That should keep the massive copy on the DAC connection, since the IP is specified in the mount and/or rsync command.
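Roughly, assuming the DAC interfaces were given addresses like 10.10.10.1 and 10.10.10.2 (the addresses and paths here are only placeholders):

rsync -avh --progress /srv/dev-disk-by-label-pool1/media/ root@10.10.10.2:/srv/dev-disk-by-label-backup/media/

Because the destination is the DAC-side address, the copy stays off the 1Gbe LAN ports.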
I've done that, but now the trick I'm trying to pull is keeping 2 simultaneous transfers from affecting each other's transfer speeds when doing so, by segregating volumes and bottleneck pathways.
BernH Yeah, it wouldn't be any different from a typical ethernet P2P setup. But my question lies in what to do to direct all traffic to and from a specific volume once that's done.
Quote: "I grew up using computers before there was a mouse or the internet. We had to type everything and if you could figure things out on your own, you didn't use it."
Me too. But for 30 years it's been mouse-in-hand. There's no shame in taking advantage of the progress :D. Should dial in and look me up on mIRC one of these days lol.
"You have a narrow view of whats out there then. 3/4 of the Qnap is open source. They just put a web interface and a few packages on top of it. Is it easy to use for Windows users? Yes. Is it easy to use for advanced? No because it isn't flexible. So, you might fester away on it but it is good for many. I hardly login to most of my OMV boxes. They have run for years and years with just patching."
Possibly - but I tried at least a dozen options before I settled on OMV. And another dozen (wound up at Debian for the new server and what I want to do, and I'm questioning my judgement) or so for the new server. Qnap IS open source, and the web interface is what I appreciate about it. I don't need all the frills, I just need storage space, easy access to it and a few apps. There was a time when I needed domain and email servers and such, but that was 20 years ago.
"Well, I have written almost all of the omv-extras plugins and maintain all of them."
I figured you weren't just some run-of-the-mill OMV forum lurker. Well done, one of the things I appreciate about OMV is that I'm not chasing my tail and having to learn code (or have a deep, romantic understanding of ACL and permissions like with TrueNAS and others) just to get new functionality like I am with so many other distros. I'm a nerd at heart, but I already waste enough time in front of a screen. If I'm not likely to get paid to learn it, it's a hobby with limitations.
I got my first 4bay unit cheap. I like the size and it was intel based. I bought it just to put OMV on it. I would never buy their rackmount hardware.
I don't even run RAID on my servers. I can't think of anything QTS would offer that would make me want to run it. I use NetApp at work and wouldn't want one of those either. I like open source.
Have you heard of omv-extras? : )
A cheap 2 bay is what got me hooked on them. As someone who's lazy and impatient, I like to click rather than type. Getting around using QTS didn't involve searching the web every 3 minutes looking up commands to run.
Looking at what's available on the open source market, I could see why someone would have low expectations - but it's honestly miles and miles ahead of anything else I've seen or used. Just simple, easy to use and works, you can get on with your day instead of festering away trying to make things work. Only issue I ever had that wasn't my own fault was a ransomware that hit Qnap and Synology.
As for omv-extras, I use many of them, but I'm not inclined to play around with them much.
Not trying to confuse things, but I do agree that OMV is designed to have a single connection, or bonded for failover. With that said, if you don't want the expense of 10Gbe, how about 2.5Gbe? You could pick up two 2.5Gbe NICs and a small 2.5Gbe multi-gig switch. Drop one card in each system, connect both to the switch, and run a normal 1Gbe LAN connection to the switch.
You could probably do the lot of it for $100 or so, and get speeds up to 2.5 times what you get on 1Gbe.
I can get 10Gbe SFP+ with DAC cables for around $40, but again, why do that if I don't need to? More than anything, I think the idea of what I'd like to do just makes sense no matter what the 2 boxes are connected with. Imagine the increased speeds across the entire network when specific volumes can be routed to individual IP addresses and ports; the volumes should become far more accessible at higher speeds no matter what type of connection is used.
I move drives to different servers often. I don't reformat them other than the OS drive. Then there is no need for an initial sync.
I've owned many Qnap but never run QTS.
Getting away from Qnap, I'm starting to learn that's an option. I keep a RAID NVME pair for the OS now, which I just stole from the Qnap I'm moving away from.
I'm surprised you've never run QTS - it's pretty much the only reason to buy a Qnap, as their hardware's nothing to write home about, especially getting into their rackmount hardware. I don't know if you're a part of the OMV development side of things, but I would highly recommend taking a look at it. Having an OS like that to install on non-Qnap hardware would probably be a goldmine and a dream for guys like me. The whole reason I chose OMV for my backup server was because it was the closest thing to the simplicity of QTS I could find.
Is it wrong that seeing "1.47 PB free" makes me a little excited?
Does rebuilding a server involve new drives? Call it a home lab, I guess; I'm always exploring options, and have access to cheap servers, so I'm constantly upgrading and dealing with somewhat different configurations. Especially when coming from a Qnap, duplicating data to a new machine means a fresh install on some other OS and filesystem. So in a nutshell, yes, it usually does.
Sometimes it's the backup server I'm rebuilding, sometimes it's my main server. I don't like to be without a backup for long, so I really would like to make as short work of it as possible. I definitely can't afford a third set of drives. The backup server configuration is kept as basic as possible; I don't want to get into trying to duplicate configurations, especially when I'm always looking for an OS that can keep it as robust, simple, versatile and functional as Qnap's QTS does. Not gonna lie, if I could install QTS on my servers, I would in a heartbeat.
My main server distributes Kodi and Plex streams as well as PC and phone backups, so when it's being rebuilt, having it up and running again without having to wait is the goal as well.
Once everything is complete, I just keep everything in sync with Rsync.
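(By "in sync" I just mean a periodic job along these lines, with made-up paths:

rsync -aH --delete --progress /srv/media/ root@backup:/srv/media/

with --delete so removals carry over and -H so hard links survive the trip.)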
I just don't feel it's necessary at my scale to need to spend money on upgrades when I have several 1Gbe ports that I should be able to channel separately.
In enterprise applications, if you were trying to do 2 transfers at once, is it faster to retrieve data from 2 separately connected boxes at once instead of 2 streams of data coming from 1 box? Or is it the same?
My use case is 50+ TB of personal data, music and movies on two 12-bay servers. When I rebuild a server, I'd like it to not take 2 weeks to move everything back over. If I could dd 12 drives at once across the ethernet connection via individual connections, I would like to think I'd be getting the maximum throughput for each instance. Not sure if that's even a thing, but I hope it paints the picture that's in my head.
I totally understand if you don't want to toy with educating me ryecoaaron, I'm in awe with how much valuable information you provide here as it is, and I'm sure you have plenty of other things to deal with!
With your hose example, there is no problem feeding them. With a single system, the feeding is a challenge. Moving to 10Gb is so much easier. You just throw everything at it and it handles it. No special handling trying to direct every little bit. And you get much higher single-job speeds. OMV is really meant for a single adapter (or bonded for redundancy, not throughput). You are going to fight every part of OMV trying to use your four 1Gb setup and still probably have some things that want to use a single channel. Just my two cents.
But that's exactly why I'm throwing out the theory that creating a direct link between volumes on the two machines should be able to handle the throughput far better, no matter what protocol is being used. I assume that if I'm transferring data directly between 2 servers while another 2 servers are also transferring data between each other, they shouldn't affect the first two. With separate volumes, and CPU and RAM not being the bottleneck, why wouldn't a capable machine be able to mimic the same concept using two separate channels (ports)? If I'm only pulling one stream of data from each volume, that should stop the volume itself from being the bottleneck when there are 2 streams and the second comes from a second volume, no?
From what you're saying about "some things want to use a single channel", doing an SMB file transfer within a file manager at the same time as Rsync transferring data via NFS should alleviate that - I've tried, and it doesn't (unless I did something wrong?).
This may sound confusing, but I tried this:
- Created an "internal" remote mount (Remote mount directed back to a folder/volume via one IP address/port on the same server)
- Created another remote mount for the second volume to another IP address/port
- Shared each of the remote mounts independently
- Mounted each of the shares on the second server via their separate IPs.
I'm sure there's a dozen reasons why it's a bad idea, but I took a shot at it to see what happens. (I probably should've taken notes; I can't even remember what the results were. I just stopped doing it because it seemed overcomplicated and wrong.) Which brought me here to see if there's some recommended way of getting dedicated channels between volumes on 2 servers using 1Gbe.
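What I was actually picturing is closer to this (the IPs, exports and paths are made up for illustration):

# server A: eth1 = 192.168.10.1 exports /export/vol1, eth2 = 192.168.20.1 exports /export/vol2
# server B: mount each volume through its own IP so each transfer has its own 1Gbe path
mount -t nfs 192.168.10.1:/export/vol1 /mnt/vol1
mount -t nfs 192.168.20.1:/export/vol2 /mnt/vol2

# then run the two copies in parallel, one per link
rsync -a /mnt/vol1/ /srv/backup-vol1/ &
rsync -a /mnt/vol2/ /srv/backup-vol2/ &

No idea if that's the "right" way, but that's the mental model.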
I'm throwing the idea out there as a novice user, and just hoping someone far more educated and experienced than me has either the answer or an explanation as to why it can't work. While moving to 10Gb might solve it for now, I'd still like to examine the principle as we'll surely see the day where 10Gb runs into the same problem.