Posts by kwon

    You used the setting "existing path". This means that if you copy your data in one run (a quite large folder, for example), it will all end up on one drive rather than being distributed across the existing drives.

    I'm guessing no one had an idea, and granted, the questions were a bit specific.


    In any case, if any of you are interested, I'm happy to report back once I've found solutions and post a detailed step-by-step manual here, since I'm going to write it anyway.

    It might be quite a while, but like I said, if it's not too far off topic, I could paste it here.

    Hi there,


    I'm using rdfind [1] to convert duplicate files into hardlinks. Since I also use rsnapshot as a backup solution (usually one drive for data, with backup1 and backup2 set up as targets for two separate rsnapshot jobs), quite a bit of space can be saved this way.

    Especially when I've built a new OMV box and users start using it, renaming and moving a lot of files.
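    To illustrate what rdfind does here: it replaces each duplicate with a hardlink to a single inode, so the data exists on disk only once. A minimal by-hand sketch of the same idea (throwaway temp files, not my real paths):

    ```shell
    # What "rdfind -makehardlinks true <dir>" effectively does, shown manually.
    # (With rdfind itself, "-dryrun true" previews the changes first.)
    tmp=$(mktemp -d)
    echo "same content" > "$tmp/a"
    cp "$tmp/a" "$tmp/b"       # duplicate file: two inodes, space used twice
    ln -f "$tmp/a" "$tmp/b"    # replace the copy with a hardlink (rdfind automates this)
    stat -c %h "$tmp/a"        # link count: both names now share one inode; prints 2
    rm -rf "$tmp"
    ```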


    However, OMV on a Pentium (socket 1155) seems to slow down quite horribly and sends resource limit warnings, while at the same time the CPU load doesn't seem to be a problem at all (rdfind itself uses at most 30%, mostly around 10-15%).


    I thought it might actually be the hard drives causing the slow response, since rdfind of course needs to scan the whole filesystem. However, the problem also occurs when I run rdfind on a backup drive and leave the data drive alone, which then shouldn't be slowed down at all.


    Questions:

    - any other experiences with solutions to the duplicates problem apart from rdfind? (on ext4, not ZFS dedup etc.)

    - any idea what the reason for the slowing down of the system might be? Or which way to investigate? I'm kind of out of ideas.

    - more general: if I wanted to limit the CPU load that a specific scheduled job may use, what is the proper way to achieve that?
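    To make that last question concrete: the kind of wrapper I have in mind is nice/ionice, and since rdfind is mostly disk-bound the I/O priority is probably the part that matters. Whether this is the proper way is exactly what I'm asking; a sketch (the echo stands in for the real job):

    ```shell
    # Run a scheduled job at lowest CPU and I/O priority.
    # ionice class 3 ("idle") only gets disk time when nothing else wants it.
    nice -n 19 ionice -c 3 sh -c 'echo job done'   # replace sh -c ... with the real job
    ```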


    Hardware:

    - CPU Pentium G640

    - RAM 4GB

    - OS 320 GB HDD, 2,5"

    - 3x 4TB HDD

    - services: daily rsync, rsnapshot, rdfind; smb, openVPN

    - clients for openVPN simultaneously: 10 max, usually more like 3-5


    [1] https://rdfind.pauldreik.se/


    Thanks so far

    kwon

    Both threads are crossing over each other with no light at the end of either tunnel.

    Like I said, I need my box of rubber balls. Sometimes pain is a very powerful teacher... ;)

    But maybe you guys joining in will have a similar effect. Let's see if he's doing now what is necessary. Otherwise we could just give him a new nick... I vote for "deafballs".

    kwon you do realise you could be preaching to the hard of hearing :D:D:D

    LOL yeah. Sort of factored that in.

    But I rarely have time to hang around in forums and since I posted a question here anyway I thought I could help out with answering a few myself. I have to admit though that my interns used to learn faster once I took out the box of rubber balls. That is significantly harder to do online.

    I think I can do this by booting into 'SystemRescueCD' in the OMV GUI by selecting it in the kernel tab?! There should be Memtest included. But then the system boots to a prompt/CLI? How to start the memtest the

    I usually just use PXE boot or a CD lying around, but if you have neither, this might work. After memtest starts you just see a blue screen checking your RAM with random tests.


    should be possible without losing the LVM.

    Yes. Do you know how to do that using the mainboard jumper? If not, check the manual for "CLR CMOS" / "clear CMOS".


    Not? I have a couple of computers/devices with hard drives which I always shut down: computers, Dreambox, TV, Xbox, etc.

    You do not turn all these devices off to preserve your hard drives? What about your electricity bill?

    1. other devices are a different story; this is a NAS / server that, as it seems, you are using every day, so different aspects come into play.

    2. you can tell OMV to spin down drives if indeed you think this is necessary, but suspending the system does not make much sense to me

    3. Energy consumption should always be watched, BUT: your CPU uses at most 10 W under load; with RAM, mainboard etc. let's say 20 W, and with the HDDs let's make it 50 W. Depending on where you live and what power prices are like, this translates to at most about 150 € per year. Max! And that doesn't take into account that drives, CPU etc. use much less at idle, so realistically we're probably talking less than 100 € total for 24/7 operation. BUT you are still going to use the system every day even if you turn it off at night, so we'd have to figure the difference, and that is probably at best a third of that. So you may save 30 €. In a whole year. Worth it? How about you just eat soup instead of meat once a week? That is much more cost effective. Or whatever else is easy for you.
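    For what it's worth, you can check my numbers quickly. Assuming 50 W around the clock and 0.30 €/kWh (adjust for your local price, which is my assumption here):

    ```shell
    # 50 W continuous draw for a year, at an assumed 0.30 EUR/kWh
    awk 'BEGIN { kwh = 50.0/1000 * 24 * 365; printf "%.0f kWh/yr -> %.0f EUR/yr\n", kwh, kwh * 0.30 }'
    # prints: 438 kWh/yr -> 131 EUR/yr
    ```

    So even the worst case stays under the 150 € I mentioned; idle draw pushes it well below that.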


    If, however, you turn it off every night and spin the drives up and down all the time for no reason, they are more likely to fail and more errors may occur (hence this thread?), and that's not even counting how much time it takes you to solve these issues then.

    Now, I don't know how much you earn per hour, but I do not see the point of the hassle for 30 € a year.


    If you are however very much concerned about the climate catastrophe - which you should be - invest in a solar panel. Or ten.

    Quote

    Why is the status LED for the SSD red and for the data drives green?

    That's a more complicated question than it may seem. SMART basically tries to estimate whether a drive is going to fail by logging a bunch of statistical data, as you see in your reports. However, this is uncertain, as statistics are in general. So without giving you a two-hour lecture on SMART values and the differences between manufacturers etc., the short version is: if the SMART check says the drive may fail, it really may do so soon. If the SMART check finds nothing, it only means exactly that: it didn't find anything concerning. But as with many tests in life, not finding a problem does not necessarily mean there is none.

    False positives are rare, but a negative result of SMART just gives an indication.

    So in this case it means the system thinks your SSD has a problem. That sometimes occurs with SSDs but since you have no data on the SSD, just the OS, I wouldn't worry too much for now and first check the rest of the hardware. If we find nothing else we may come back to this.

    You didn't do an extended test on the hard drives though, did you? You should.

    For the future: in the SMART settings in OMV you can create a job to have the drives checked every month or so.
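    For reference, the same thing can be done outside the GUI via smartd. A sketch of an /etc/smartd.conf line (the device name is a placeholder) that monitors all attributes and runs a long self-test every Sunday at 3 am:

    ```
    # /etc/smartd.conf (sketch; /dev/sda is an example device)
    /dev/sda -a -s L/../../7/03    # -a: monitor everything, -s: long test Sundays at 03:00
    ```

    The OMV GUI job is the easier route though; this is just what it boils down to underneath.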


    Quote

    How do you realize large datapools?

    snapraid + mergerfs or ZFS


    We're going a bit into a basic tutorial here, if you don't want to get lectured do tell. But since you ask:

    - never use LVM for large data pools. If only one drive fails all your data is gone

    - especially do never do it if you have no backup (which of course can never occur because we all know if you don't do backups cute little rabbits die horribly every night so we all do backups!!!)

    - and never ever screw around with the LVM without backing up the data beforehand

    So, in other words, you did a couple of things that you should never do, but now you are where you are. Whether it's possible to get out of this without data loss, we'll see. But if you take anything away from this: don't mess with any filesystem without a backup.

    Quote

    Better means, the last time I restarted the NAS I only had to press the power two times (instead 6 or 7 times) :)

    Dude... If one of my interns had done that, I'd be hunting him through the shop with a paintball gun...

    Alright, sorry if I'm lecturing again but...

    In case you have an HDD that may be failing soon (which may be your problem, we don't know yet), every forced restart makes it MUCH more likely that the drive fails. In other words, NEVER force a drive to restart over and over if there is even just a chance it may have errors. And... again, especially if there is no backup...

    Now in your case: one drive failing means all the data is gone. That's what LVM does.

    So if you just restart the machine over and over, what you are really doing is hating your data and trying hard to destroy it.

    Quote

    Usually the NAS goes to spend at night, not completely off.

    Not a good idea.

    Quote

    not yet, cause I'm not sure to preserve, but it should?!

    Like I said above:

    1. SMART check (the extended one; your response came too fast for that to have finished, unless you ran it before)

    2. RAM check

    3. BIOS reset

    4. would be to unplug everything and test whether the mainboard is OK (which was already suggested in the other thread).

    If I may say so: I just scrolled through the other thread, and it seems you've had this problem for quite a while now, but you don't seem to want to go through the necessary steps to solve it.

    Your choice; it's your data, and right now you seem to be balancing 20 m above the ground with no safety net, and the rope has started to swing...


    You're getting ATA errors and the system shuts down randomly... so if you do not want to lose all the data on those drives, you really need to go through the diagnosis and stop turning your system on over and over without knowing what the problem is.


    But again, your choice. Sometimes it's also a good experience to start with empty new drives... ;) Nothing makes humans learn faster than a bit of pain every once in a while.

    I can't interpret it but I posted it yesterday here RE: OMV starting Problem "emergency mode" on every start

    This is only the SMART log for your SSD, but the problem may also be caused (much more likely actually) by an HDD. I recommend you start an extended test for each of those.

    The SSD seems alright-ish. The count for unexpected power loss is high, but that's obviously related to your problem. The CRC errors are zero, which is the first thing you should watch. Your temperatures seem a bit high for this time of the year (unless you're based in Australia), but nothing to be concerned about, especially for an SSD.

    Quote

    The problems occur by adding the data drive 3+4.

    I usually don't use LVMs for large datapools, but what exactly did you do?


    Quote

    I think it is much better with the new 120W power supply (at least the suspend mode)

    LOL You're funny.

    "Better" meaning it still *does* turn off from time to time and you still don't know why?


    Alright, we know the SMART values for the SSD; now we need those for the remaining HDDs, plus the RAM check. Did you try the BIOS reset? I'd do them in exactly this order, mainly because if one of your drives is causing the issue, I'd want to know that first of all.

    Quote

    What is proper in your opinion? An internal power supply?

    In my experience cheap picos are likely to create problems. Not necessarily the case here but one possible source of error. That's why I mentioned it. A system turning off randomly and disks possibly not spinning up... seems one good place to start.


    Quote

    The smart status I can check via OMV GUI?

    Or via cli. OMV GUI probably easier. Do you have experience with interpreting those values? If not feel free to post them here.


    Quote

    RAM isn’t checked up to now (because I thought I worked proper until the new data drives arrived)

    Again, your descriptions suggest a hardware issue; in any case it is advisable to rule these out as possible causes of your problems.

    Please check the RAM with memtest. A Debian install CD, or any other that has memtest included, will do. Let it run at least until you get the report that there are no problems (overnight is a good idea). If you get red warnings right away, you know the RAM has an issue and needs to be replaced.

    Do not leave the HDDs connected during that; as long as you don't have a backup and don't know whether the hard drives are actually OK... not a good idea.

    Plus: you mentioned that you restarted your system dozens of times when it didn't boot up - also a VERY bad idea. If it does not start, find the problem and solve it. Especially if you do not have a backup...

    Quote

    So now I have about 24TB of data without a backup.

    But you do have two 14 TB drives... so it might be a good idea to use those to do a backup first?

    Your data... I would *never* mess around with LVMs or any partitioning at all without a backup.


    Quote

    By the way: I think these problems are related to other(s) posted here OMV starting Problem "emergency mode" on every start

    Unlikely. Your system just turns off without reason. You may have more than one problem, though. Fixing hardware issues, or at least making sure there aren't any, is the first step. Well... doing a proper backup would be the first step, in my opinion.

    Hi there.

    The problems you describe suggest a hardware error, so we'd have to figure that out first.


    Using another power supply is a start, but you wouldn't have a proper one lying around, would you? Those picos are not always very reliable, depending on what you bought.

    Your mainboard should only use about 10-15 W, provided your footer lists the hardware you're actually using. That also suggests you're using two 12 TB HDDs, right? So in total your power supply should be strong enough *if* it is a proper one.


    other suggestions:

    - did you check the SMART status of your drives? Are they reporting any issues?

    - did you try a BIOS reset? Sometimes when a mainboard has had power supply issues the BIOS is fucked, so a reset is a good way to make sure you don't have a problem there.

    - did you check your RAM with memtest or similar?


    Again, this is something hardware related most likely so we'd have to start to check all that first.

    It really depends on what performance you may require.

    If you are familiar with ext4 and just want a parity function while performance is not that important, I'd recommend SnapRAID. It allows up to six parity drives, setup in OMV is rather easy, and you can still access all the data on the data drives directly. You can add differently sized drives too.


    However, if you need more performance you may want to take a look at ZFS, which, especially with SSDs as cache, provides great performance, though at higher cost and with less flexibility about drives.


    A classical RAID is not a good idea anymore with 8 TB drives. The time it takes to rebuild is so long that the likelihood of another drive failing is too high. How many drives are you planning to use?
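    To put a rough number on that claim: consumer drives are typically quoted at one unrecoverable read error per 10^14 bits (an assumption; check your drive's datasheet). Reading one full 8 TB drive during a rebuild then gives:

    ```shell
    # Expected read errors when reading 8 TB at an assumed URE rate of 1 per 1e14 bits
    awk 'BEGIN { bits = 8e12 * 8; expected = bits / 1e14; printf "expected errors: %.2f, P(at least one): %.0f%%\n", expected, (1 - exp(-expected)) * 100 }'
    # prints: expected errors: 0.64, P(at least one): 47%
    ```

    Nearly a coin flip per drive read, and a rebuild reads every surviving drive in full. Enterprise drives with a 1e15 rate look roughly ten times better.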


    Your questions about XFS or JFS should be answered by someone with more experience with those than me.

    Hi there,

    I'm planning to build a Proxmox - OMV - HA system. First to just learn how to do it properly and gain experience, later to use it as a fileserver.


    3 identical nodes, each has

    - i7 CPU

    - 32GB RAM

    - 7x 1GBit/s NICs

    - 2x SATA SSD for OS

    - 3x4 TB HDDs, but will scale up as necessary (in production use 12 TB drives will be used)

    - additional SSDs for VMs as needed


    I do feel comfortable with OMV and ZFS, also Ext4 with LUKS encryption is a routine.

    However, I have no practical experience with Ceph and only rudimentary experience with Proxmox so far.


    What I'd like to achieve:

    - fully encrypted Proxmox (on either one SSD or with two SSDs as Raid1 or similar for HA)

    - unlocking via Dropbear for remote SSH access and unlocking the System

    - OMV as VM managing storage (some / most / all of it?)

    - storage has to be encrypted.

    - scalability to be able to add more drives upwards of 80 TB total per node

    - couple of VMs if RAM is enough

    --> all that on 3 Nodes
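    For the Dropbear item, my current plan for reference: on Debian-based hosts this is the dropbear-initramfs package, and as I understand it the relevant bits look roughly like this (paths as on buster; the port and key are made up):

    ```
    # /etc/dropbear-initramfs/config
    DROPBEAR_OPTIONS="-p 2222 -s"       # custom port, -s disables password logins

    # /etc/dropbear-initramfs/authorized_keys
    ssh-ed25519 AAAA... admin@laptop    # placeholder public key

    # afterwards, rebuild the initramfs:
    update-initramfs -u
    ```

    Then SSH in during early boot and run cryptroot-unlock. Corrections welcome if I got any of that wrong.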


    What I lack is

    - proper understanding of ZFS snapshots and how to use them for backups (I primarily use rsync / rsnapshot)

    --> wondering what a proper Backup would look like

    - understanding the storage management in Proxmox and how to properly use that with OMV

    - understanding ceph storage and how to properly use that with OMV

    - understanding of encryption in ZFS and Ceph and how it compares to LUKS
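    On the ZFS snapshot item above, my current (untested) understanding of the usual backup pattern is snapshot plus incremental send/receive; a sketch with made-up pool, dataset, and host names:

    ```
    # one-time full replication to the backup host
    zfs snapshot tank/data@base
    zfs send tank/data@base | ssh backuphost zfs receive -u backup/data

    # afterwards: periodic incrementals between two snapshots
    zfs snapshot tank/data@2020-01-07
    zfs send -i tank/data@base tank/data@2020-01-07 | ssh backuphost zfs receive -u backup/data
    ```

    If that is roughly right, it would replace the rsnapshot role; what I don't yet understand is how it interacts with encryption and Ceph.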


    ------------------------------------------------------------------------------------------------------

    Questions (some of which may better be suited for proxmox and ceph forums):

    a) so far any comments on the setup and experiences?


    b) with 32 GB of RAM and the standard Ceph recommendation of 1 GB RAM per 1 TB of storage: will I just get a slower system, or are more serious problems to be expected? (The mainboard does not support more than 32 GB.)


    c) which way is best to manage storage with OMV?

    I assume it is wise to let Proxmox manage the storage hardware instead of OMV?

    In Ceph, as I understand it, block-level storage is the way to go, yet if I wanted to use ZFS I'd have to use file-level storage? I didn't really get what that distinction means in practice.

    Since encryption is a must, ZFS was for a long time not really an option, even though I really like the system.

    Basically, I would really appreciate any help in getting a better understanding of how to manage storage in that kind of system. The forums and wikis didn't clarify the open questions, since I want to combine it with encryption.

    If I let Proxmox handle the drives, does OMV just let me format them as ext4? If so, where do I add the encryption layer? I don't get how to do that properly from a security standpoint.


    c2) Assuming I built the same without the cluster/Ceph idea, just a single machine: would ZFS be the best way then? In that case I'd probably go for mirror vdevs, mainly because of the expected number of drives and the RAM requirements. This might be the way for the backup server.

    Would a solution with SnapRAID + mergerFS work with Proxmox as well in such a scenario (which would easily let me keep existing ext4+LUKS drives)?

    Obviously ZFS is much faster if properly set up; that is not a big issue here, provided the speed suffices to do backups.


    d) proper backup for 80 TB+X

    rsync / rsnapshot are still an option, since not that many changes happen on the system. But maybe ZFS snapshots (which I can only use if I choose ZFS over Ceph), Borg, or some other solution would just be a better way. Any suggestions are much appreciated.

    Midterm I'll get a tape library, but for now I need another way to back up; it will probably be another of those machines, not included in the cluster, as a single OMV (the current production server).


    e) GlusterFS or other solutions to achieve this seem to be less favoured with Proxmox; if you have experience in the area, I'm all ears.



    Thank you in advance.

    kwon

    Hi there,


    I encrypted a couple of disks on a newly installed Debian 10 buster system with LUKS.

    I then plugged these drives into my existing OMV fileserver with OMV 4.1.35-1 Arrakis.

    Only then did I realize that buster uses a new version of LUKS (LUKS2), which is why OMV Arrakis does not recognize the encrypted LUKS partitions as such in the LUKS plugin.
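    For reference, this is how I spotted the mismatch (device name is a placeholder): the header dump shows the LUKS version, and the cryptsetup 1.x shipped with Debian 9 / Arrakis only understands version 1.

    ```
    cryptsetup luksDump /dev/sdX1 | grep -i '^Version'
    # LUKS1 header: "Version:  1"  -> readable by Arrakis
    # LUKS2 header: "Version:  2"  -> needs cryptsetup >= 2.x (buster / OMV 5)
    ```

    I've read that cryptsetup 2.x can also convert a LUKS2 header back to LUKS1 with `cryptsetup convert`, provided no LUKS2-only features (like the default argon2 PBKDF) are in use, but I haven't tried that and it would have to be done from the buster machine anyway.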


    Which way would be the best to solve this?

    a) update Arrakis to OMV 5? (do OMV 5 and the LUKS plugin support LUKS2?)

    b) reinstall fresh OMV5

    c) update just the LUKS package via the CLI? (if so, what would be the right way to do that?)


    installed Plugins are

    - backup

    - borgbackup

    - rsnapshot

    - urbackupserver

    - luksencryption

    - diskstats

    - autoshutdown

    - letsencrypt

    - omvextras

    - resetperms

    - wol

    - dockergui


    Thanks

    kwon

    Hi,
    I'm new to the forum but have been using OMV for a couple of years now on 10+ machines.
    So now I think it's time to give something back and get involved; the project is great.


    So how can I help? Where is manpower needed?


    I'm pretty good at documenting / writing howtos; I do that anyway. I'm not that good at programming, so I'm unsure whether I can be of much help there.
    I do have some experience though with fileservers, different hardware etc.


    So let me know where I can contribute.