Posts by chente

I am posting this because I have not found a clear guide on how to do it, and after a lot of going around and trying things it finally worked out. If a guide for this already exists, I apologize for not finding it; in that case, a moderator can delete this thread.



    FIRST:


    Have a CPU with Intel Quick Sync technology... obvious...


    see:


    https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video

    https://ark.intel.com/content/…873&0_QuickSyncVideo=True
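If you are not sure which exact CPU model you have, you can read it from the console and then look it up on the Intel ARK page above (just a generic check, nothing OMV-specific):

Code
grep -m1 "model name" /proc/cpuinfo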



    SECOND:


    Install the drivers


    It has worked for me with VA-API, see https://github.com/intel/intel-vaapi-driver


    There are several drivers depending on the CPU, look for the correct one.


    Install via console:

    Code
    apt install intel-media-va-driver
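For what it's worth, on older CPU generations the legacy VA-API driver may be the one that works instead of the media driver; this is only a pointer, check which package matches your CPU generation:

Code
apt install i965-va-driver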

Optionally, install vainfo to check the result:

    Code
    apt install vainfo

    and then run

    Code
    vainfo

The system will print out the graphics capabilities (the supported profiles and entry points).
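You can also check that the kernel exposes the render device that will be mapped into Docker later (the device names below should match what you see here):

Code
ls -l /dev/dri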



    THIRD:


And finally, include the graphics configuration in the Docker containers. I have done it using stacks. I include two examples, Jellyfin and Handbrake; both have worked for me. They are illustrative examples, so everyone should adapt them to their particular case; there is a lot of documentation on this.


The key is these two lines, which tell the container where the graphics device is:

    Code
    devices:
    - /dev/dri/renderD128:/dev/dri/renderD128
    - /dev/dri/card0:/dev/dri/card0

    For Handbrake:
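Something along these lines (the image name, port and paths are illustrative assumptions, adapt them to your own setup):

Code
version: "3"
services:
  handbrake:
    image: jlesage/handbrake            # illustrative image, use whichever you prefer
    container_name: handbrake
    ports:
      - 5800:5800                       # web GUI
    volumes:
      - /srv/your-data-disk/handbrake:/storage   # hypothetical path, point it at your shares
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/dri/card0:/dev/dri/card0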


    For Jellyfin:
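Again, only as an illustrative sketch (the paths are placeholders and the official jellyfin/jellyfin image is just one option):

Code
version: "3"
services:
  jellyfin:
    image: jellyfin/jellyfin            # illustrative image
    container_name: jellyfin
    network_mode: host                  # simplest for discovery, optional
    volumes:
      - /srv/your-data-disk/jellyfin/config:/config   # hypothetical paths
      - /srv/your-data-disk/media:/media
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/dri/card0:/dev/dri/card0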


Note: In the case of Jellyfin, I previously added the user I use for that container to the video and render groups. I don't know if this is necessary, but I'm not going to touch it now that it works :)
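If you want to do the same, the command would be something like this, run as root (the user name is just a placeholder for whichever user runs the container):

Code
usermod -aG video,render youruser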


    RESULT:


Handbrake encodes video on the NAS in 7.5 times less time than without Quick Sync.

Jellyfin streams 4K encoded video at 10% CPU.

As the NAS administrator, you won't have time to keep the Handbrake queue filled. :D

Arctic Alpine 12 Passive (about €10.00)

Leaving aside the controversy about the system in general, here is another aspect to take into account.

Regarding a fanless heatsink, I only see it justified if the system is going to sit in a room where the noise can be annoying. The great advantage of a NAS is that you can install it wherever you want and access it over the network. If you can place it somewhere the noise does not bother anyone, an active cooling system is always preferable. And even if you can't, you can install a heatsink with a 12 cm fan and very low noise, such as

https://www.amazon.es/Prolimat…or-plateado/dp/B003OPW3W6 if it's not out of budget. If that is too expensive, there are other, cheaper options. The one I use, https://www.arctic.de/en/Alpine-12-CO/ACALP00031A, I can barely hear, and it is designed to run 24 hours a day, 7 days a week.

    Unfortunately there is no published data with statistical relevance to make an informed decision :(

    At this point you are right. There are no scientific studies on this. I draw on the experience of professionals who maintain high-performance servers and their advice. It's a matter of faith, but it works for me. Hard drives last me much more than 5 years without errors.

I see the arguments resemble conversations between car lovers debating "Which is better in city rush hour, a Porsche 911 or a Toyota Aygo?"

You seem angry; sorry if I have offended you in some way, that was not my intention. I think this simplifies things a bit and skips a lot of my arguments. I'm not an OMV expert, but I know something about hardware. I am open to any friendly discussion and willing to change my mind if necessary.


My whole RPi NAS (details below) uses only as much energy at idle (= 95% of the day) as the fans in the example above.

That means the disks are stopped. They start, stop, start, stop... It is "bread for today and hunger for tomorrow" (a Spanish saying; I suppose the idea comes across in English ^^). They will die sooner stopping and starting than running 24/7. You consume more energy, but you more than make up for it in the longevity of the disks, as I discussed above. Also, if your consumption is that low, I assume you do not have fans over the hard drives, which means the temperature will be high... high...

Thanks for your solution. But I found another solution and I use just one x1 slot. I found a PCIe card that has 2 SATA ports and is compatible with OMV, and I also found an adapter for the M.2 WiFi slot that gives me another 2 SATA ports. They are also compatible with my motherboard.

    Can you tell what model it is and if it works for you? Thank you.

    that statement doesn't align with my personal experience due to the ATX power supply used in a PC.

    To get a baseline, would you be in a position to measure consumption of your setup and publish it here?

I don't think this is about starting a competition to see which one consumes less. It is clear that an ARM platform will consume less than a conventional PC. What I mean by that phrase is that the difference in consumption is not significant considering the other advantages, at least in my case.


And if you want, let's go to the numbers, so as not to leave you without an answer. I just measured my system: I disconnected it from the UPS and powered it up through a meter. The total came to 52 W. Let's analyze that. First, my CPU is about ten years old; I use it because I already have it and it's free. Clearly, if you were building a machine now you would buy something newer with lower consumption. Still, look at the numbers. My system has five hard drives and an SSD; the hard drives are always spinning, I don't put them to sleep (one of my quirks). There are also four fans. This is what they all consume:


DISKS


1 x TOSHIBA N300 12TB _________________ 6.5 W __x1__ 6.5 W

2 x SEAGATE ST5000LM000 _______________ 2.1 W __x2__ 4.2 W

2 x WDC WD40EFRX ______________________ 4.5 W __x2__ 9.0 W

1 x OCZ VERTEX 2 (SSD) ________________ 2.0 W __x1__ 2.0 W


FANS


3 x BE QUIET SILENT WINGS 120MM _______ 1.44 W __x3__ 4.3 W

1 x ARCTIC ALPINE 12 CO _______________ 1.08 W __x1__ 1.1 W


TOTAL _________________________________ 27.1 W


Coincidentally, just half of the total consumption is taken by the hard drives and fans. That part cannot be removed with any system; it is a fixed consumption. You could remove the fans if you wanted, but it is not recommended. My disks always stay between 30 °C and 35 °C; everyone can decide for themselves whether it is worth shortening the life of the hardware. The last disk I bought cost me €300. The truth is that I prefer to pay a few euros more per year in electricity and have the hardware last a few more years.


And now you will tell me that your system consumes 2 W at rest. Very well. That means you stop the disks; do the math for when they fail. If you kept them running, the consumption would be the 27 W plus the 2 W of the ARM system, to compare on equal terms. Total: 29 W. Any modern CPU would lower the total consumption of my Intel 3225 by at least 15 W, which leaves an approximate consumption of 37 W compared to your 29 W. For a difference of 8 W more than you, I have equipment that is upgradeable, runs cooler and is easier to maintain. When the RPi 5 comes out, you will have to replace yours entirely. I can upgrade my RAM by adding more, I can even change the CPU; you can't. I can add expansion cards, connect 10 or more hard drives, and so on.


Regarding the power supply, I will just say that the actual consumption is what you are drawing at that moment plus the percentage of loss. Size the supply for the number of hard drives you want to have. Generally a 300 W unit is more than enough, and it will draw only according to the load, with little loss. Add to that the fact that a standard power supply will also make your hardware last longer; in general it has better protections than an external power brick.


I insist, I respect all opinions very much, and of course ARM systems have their market. But I still think it is much better to use a conventional system for OMV at home; it has many more possibilities in the long term.


PS: I have two RPi 3s and I am happy with them; I use them as Kodi clients. I would not consider using an RPi 4 as a server, for all the reasons above. ;)

After reading the whole thread I still think it is better to build a NAS with conventional hardware, as long as it is for standard domestic use and there are no space constraints.


1. Consumption. Most of the time the CPU is idle, and in that state there is not much difference in consumption compared with an ARM platform. Disks consume the same amount on both platforms.


2. Durability. Hardware lives much better in a large case with proper fans and well-planned airflow. A standard power supply will always be more stable than a small one, which sometimes has no protections at all. The hardware will appreciate all of this in the long run.


3. Scalability. A system with standard hardware is scalable. An ARM platform is not.


    I think they are not comparable systems, they are just different solutions for different needs. Each one with its advantages and disadvantages.


I would recommend replacing the Node 304 with the Node 804, which has space for 10 hard drives, and replacing the Mini-ITX board with a Micro-ATX board with 6 SATA ports, 4 RAM slots, etc., for example the ASUS Prime B460M-A, if you have space for it in your home. Perhaps you do not need it now, but you may in the future. It is a long-term investment.


I would only recommend a small-platform solution if the space available is limited or if it is a backup machine.

    It is my humble opinion...

I installed it by adding a "stack" in Portainer with this code:


    Thank you. You are opening up study paths for me. I will still have to spend more time.

    Every tool in OMV has too many options. You have to know how to use them well to avoid problems. It is complex.

It would be nice to have a basic mode with only the essential options and an advanced mode with all of them. Surely there are options the average user does not need; if they could be hidden, things would be easier. Once you know how to use the basics, you take one more step and enable the advanced options, in the style of Kodi.

    The wiki was, originally, for Linux experts and developers

Now I understand it all. There are still traces of the text's origins. You have done a great job.

    Tell me, did the translation make sense?

Yes. It is perfectly understandable. Illustrations in English are not a problem; I have my GUI in Spanish next to it in case of doubts.


    Quote

    Off line means you can't access the ...

Thanks for the explanation, I will make good use of it. It was just an example of the kind of problems I run into. The famous disk is already in the Synology, running smoothly; I swapped one for the other.


    This is life

so it is :thumbup:^^


    You and everyone else, to include me. I've been

I get the impression that the problem in this thread appeared after doing several rsync runs for a backup. Something changed the permissions of many folders and files. I tried to solve it by modifying ACL permissions and got nowhere. Is it possible that this ends up affecting access to a disk? I had files copied in various places; I made copies with rsync from a Synology, from Windows, from OMV between two disks... it was quite a mix.
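A hedged guess, not a diagnosis: rsync in archive mode carries the source's permissions (and, when run as root, the owners) over whatever the destination had, so repeated copies from different systems can easily end up mixing them. Flags like these keep the destination defaults instead (the paths are placeholders):

Code
rsync -rtv --no-perms --no-owner --no-group /source/folder/ /destination/folder/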

I had to reinstall the system because of another problem. I can tell you that the disks with data can be remounted without problems in the OMV GUI. You will not lose the data.

Once you have mounted them, you have to create the same shared folders you had before and you will be able to access your data again. As for SnapRAID, I don't know whether the parity disk can still be used, but that is a lesser evil; you can reconfigure SnapRAID and resynchronize the parity disk.

Good luck.

    That guide is here https://openmediavault.readthe…rd-drive-health-and-smart

and Google translates it for me on screen without my doing anything. This manual covers many topics, but without going into depth and often in technical language. It assumes that to manage or get started with OMV you only need to be a Windows user with some knowledge of networks; I have read that somewhere.

I have been looking at the explanation of SMART, for example. When it says that a long test is an "offline" test, it is as if it were speaking to me in Chinese. Will I be able to use the data while the test runs? Won't I? Will the server be down for 18 hours? Do I have to do something to the disk beforehand so that it is "offline"? Do I have to unmount it? To find the answer I have to start googling what that means... and in the end it takes three hours to understand everything, or keep asking here and bothering people with trivia. It is just an example; similar things have happened to me with other OMV topics. Sometimes I do things without really knowing what I'm doing, like the "Flash memory" settings, where I have to edit fstab and write things whose meaning I don't know. Or the issue of file permissions, which drives me crazy. Finding the detailed explanation is the hard part. And the procedures.
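For reference, and only as an example I pieced together later (the device name is a placeholder; OMV can also schedule these tests from its GUI): the long self-test runs in the background and the disk stays mounted and usable, just a bit slower.

Code
smartctl -t long /dev/sda      # start the long self-test
smartctl -a /dev/sda           # check progress and the result later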


Regarding Synology, I know it has its advantages and disadvantages, and I like the drawbacks less and less; that's why I'm here looking for alternatives. :) I hope to get to use OMV as a winchoff over time. You will not get rid of me and my questions. ;):D:D

    Finally I decided to start over with a clean installation ... I have already lost count of how many I have done ... I hope this is the final one.


    Thank you very much for the extensive explanation on SMART. Some things I already knew and others are new, you always learn.


I know it's not fair to compare OMV to Synology, but I can't help it. I've been with Synology for many years and I miss having one place where all the information is organized and kept up to date. Here it is sometimes very hard to find the information; it is all very scattered and often in tremendously technical language, very difficult to understand for a user not used to Linux. On the other hand, that is the only thing missing. Everything else I need is here, and you just have to learn how to use it.


    Anyway my congratulations to all OMV developers. They are doing a great job. I'll keep trying, I guess it's a matter of patience and I have a lot.

I don't know how to do a long test. And every time I do something in Linux / OMV I spend three hours reading and looking for information.

I have two other disks like that in a Synology server in RAID 1. I will replace the disk with one of those two, and I'll put that one in the Synology, which is easier to test.

    Then I will retrieve the data with SnapRaid and we will see the result.

    This system is giving me a lot of problems and work ... I am learning a lot but at a very high price.