Posts by crashtest

    As a test:
    Use the username root, and the password for root as it is set on the remote machine, to set up the remote mount.
    (Essentially, use the same administrative username and password that is used for administration of the remote server or PC. If it's a Linux box, it's usually root.)

    I don't get why it is doing fine now but since it still denies my ssh login and web connection,

    "Fine now" doesn't look fine to me. Latency is all over the place.
    The bottom line is that the connection is not stable. There's no way to understand what effects this would have on the installation, as in what packages are not installed, partially installed or corrupted.

    If I redo the installation with ethernet will I still be able to use the wifi after the installation?

    While it's not a good idea to run a server's primary connection over Wifi:

    Yes. However, the installation itself requires a stable connection. Again, the reason why "wired" is specified is because of the build issues associated with wifi (bandwidth contention, interference, slow speeds and, in your case, high latency, etc.). Even fluorescent lights can have a significant impact on wifi.

    Do the setup (all of it) over a wired connection then worry about setting up wifi.
    You may have to manually define the wifi interface, in the GUI. Here's a sub-section from another install document that covers it. -> Wifi.

    Nearly 50% packet loss AND high latency? There appears to be a serious wifi network and / or adapter problem. If the external software portion of the netinst build (from the mirror) was done over this connection, it's no wonder why it took all night. I'm somewhat surprised it completed at all.
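    To put numbers on it, a quick ping test to your router reports both latency and packet loss in its summary line. The gateway address below is a placeholder; substitute your own:

```shell
# Ping the gateway a few times; 192.168.1.1 is a placeholder address.
# The summary line reports packet loss and min/avg/max latency.
# -W 1 caps the wait per reply so this doesn't hang on a dead link.
ping -c 4 -W 1 192.168.1.1 || true
```

    Anything beyond the occasional lost packet, or latency swinging by an order of magnitude between replies, points at the adapter, the access point, or interference.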

    As noted in prerequisites, "This installation process requires a wired Ethernet connection and Internet access."

    Powering off the NAS and turning the router off, waiting, then turning it on and then turning the NAS on did not work.

    The original problem (post #9) was "Now I just can't get the ip address to run in the browser. It just says unable to connect."

    The quote above doesn't square with the following. Per the quote below, you'd have to have a working IP address to log on to the NAS.

    Now I have an issue where I can login on the NAS console but not through ssh for some reason.

    When installing the OS, under software selection, there's a graphic that shows the installation of an SSH server. That's required.

    If the SSH server is installed:
    While SSH should already be active, check in the GUI under Services, SSH. It must be Enabled AND Permit root login should be checked.
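    If you have console access and want to confirm the same things from the command line, a rough check (assuming a standard Debian/OMV layout) looks like this:

```shell
# Is the SSH service active? ("active" means the daemon is running.)
systemctl is-active ssh 2>/dev/null || true

# Does the daemon config permit root login? An uncommented
# "PermitRootLogin yes" line is what the GUI checkbox sets.
grep -i '^PermitRootLogin' /etc/ssh/sshd_config 2>/dev/null || true
```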

    If you're running on a 1Gb network, in your scenario, RAID has next to nothing to offer you. The bottleneck is the network, so you won't benefit from RAID5's parallel I/O. Other than availability, which, arguably, you don't need, RAID1 is a waste of a drive.

    Rsync'ing your network shares to either your USB backup drive or another internal drive provides backup. Backup, as in two independent copies of your data on two different drives (not RAID1), is far better than RAID. If Rsync is set up in accordance with -> this, in the event of the failure of your primary data drive, you'd be able to fail over to your backup.
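    As a minimal sketch of what such a mirroring job does (the temporary directories below stand in for your data and backup mounts, which on OMV usually live under /srv):

```shell
# Demo with temporary directories; in practice the source and destination
# would be your data and backup mounts (e.g. /srv/dev-disk-by-label-... paths).
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "example file" > "$SRC/file.txt"

# -a preserves permissions, ownership and timestamps; --delete makes the
# destination an exact mirror (files removed from the source are removed
# from the destination on the next run).
rsync -a --delete "$SRC"/ "$DST"/
ls "$DST"
```

    Note the trailing slashes: they copy the contents of the source directory rather than the directory itself.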

    PS - There is only one hdd in the computer, it has the OMV OS, as well as the files, but the HDD is partitioned into 4 parts. The last partition is the data files.

    In my opinion, for maintenance, this is not the best setup. You have a single point of failure in the server - the "all in one" drive. Recreating it, or attempting to restore it from a copy, would be complicated at best. (And all drives fail, eventually.)

    I believe you'd be better off to boot OMV from a Thumbdrive and set up the spinning drive as the data drive. A thumbdrive is easy to clone, for -> OS backup, and a single partition data drive is easy to replicate with Rsync.

    To sync data shares between your servers and to create a backup server, take a look at the remote mount -> doc. At the end of the doc, it will refer you to another (OMV5) document that discusses the ins and outs of creating a fully functional backup server that will be ready to go at a moment's notice.

    The issue revolves around bandwidth. For a RAID array to assemble, each drive needs (roughly) equal bandwidth. To achieve that, again in rough terms, SATA and SAS interfaces are capable of providing parallel data stream processing to each of the drives in an array.

    USB, by its very nature, is "serial". One drive is accessed at a time. While USB RAID can work, it's not reliable. As chente has said, there are examples on the forum.

    If you're using an SBC (a Raspberry PI or other), there are alternatives to mdadm RAID that are very similar and arguably better.

    Man to have land like that.... My yard is about 14x14 meters.

    By the meter, the property is about 264 x 167 meters. It's around 12 acres, with a clean stream running through. Roughly 6 acres is open fields. The rest is wooded with a nice selection of pines and hardwoods. When I say we're out there, we are really "out there". We're just inside "the electrical grid". While we're at the hairy edge of DSL service (10Mb down is about the best we can get), unbelievably, they brought a fiber optic cable to us. (Frankly, that's shocking, but it's a government sponsored program. We wouldn't have asked for it, but we'll take it.)

    I lost the colony, in the previous post, to really cold weather early last fall. They didn't have a chance to spin up honey production before winter. In any case, I did what I could to give them a chance.

    Moving on, I now have three colonies. Two are Carniolans and one, in photos below, is an Italian colony.

    Following is how they "package bees". A package has a bit over 3 pounds of bees, which is somewhere between 10,500 and 12,000 bees. They send them through the postal mail. When I went to the post office to pick them up, I noticed that the postal lady was nervous. She handed me the package in one of their (Government Only) plastic sorting boxes, "for safety". :) I think it was so she wouldn't have to touch the package.

    First, there's a piece of luan (really thin plywood) tacked over the feeder can hole. That's the first thing to remove.


    Next is getting the feeder can loose so it can be pulled out. Bees can gum things up with waxy buildup. (The feeder can is filled with sugar water. With 2 tiny perforations in the bottom of the can, the bees feed on sugar water and feed/tend to the Queen en route.)

    Note the black strap. That's attached to the Queen cage.
    The trick is to slide the can out while covering the opening with the luan plywood lid or bees will take to wing.


    In the following:
    The box is upright, the queen cage has been removed and the top opening is re-covered.
    Here's the queen cage. It has cork plugs in both ends. One end has a candy plug under the cork. The cork end, with the candy, is the one that's removed. With the cork removed, workers and the queen will eat through the candy to release her. If they don't (the candy is too hard) she's manually released by removing the cork at the other end.


    I attached the queen cage to one of the hive frames with rubber bands. With that frame in the hive and some of the other frames removed to make room, this is how it's done: With a sharp rap to loosen the cluster and a lot of shaking and rolling the box around, bees are literally dumped into the hive. They immediately began to cover the closest frames.

    The removed frames are carefully reinstalled to avoid crushing bees and the hive is closed. There were some bees that wouldn't leave the box, so the box was placed on a bucket at hive entrance. With some time, the Queen's pheromones lured them in. (About an hour or two.)

    This colony is truly promising. With no resources other than a limited supply of sugar water, and with less than 3 full days in the mail, they created two pieces of comb about this size in the package box. Amazing. So far, of the three hives, the Italians appear to be the most active. It will be interesting to see how they do.

    Want to avoid usb sticks for booting, dont mind them for installing but just feel a sata ssd, is better,

    I'll take two Thumbdrives (a working drive and a backup) over one high end SSD any day of the week. With the flashmemory plugin, thumbdrives last a good while. I've had a good quality thumbdrive last approximately 5 years, in an OMV server, through two versions of OMV. (I will qualify that and say that I don't have a lot of server add-ons, running from the boot drive, which reduces solid state media wear.)

    Note that backup is only useful if it's easy to create AND easy to restore. When a server is down, that's not the best time to deal with a complicated restoration process. This is especially true when an upgrade takes out your server. Shut down, pop in the backup, reboot, and you're up again.

    You might be surprised at how many boot from a thumbdrive.

    Your call.

    Is there a step by step guide to

    Backup the OMV drive (mines sata), like a usb to clone to another drive

    Its seems the addons should all work from 6-7?

    Just want it so if anything doesnt work, its a case of revert back quickly

    If you're willing to boot from a thumbdrive, OS backup can be dirt simple. A thumbdrive is easy to clone and the clone is easy to test, before you'll need it. Restoration is a matter of minutes.

    -> OS Backup
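    For reference only (the doc above is the supported path), the usual CLI tool for cloning a thumbdrive is dd. The device names below are placeholders; verify both with lsblk first, because dd overwrites whatever it's pointed at:

```shell
# Placeholders -- edit these after checking lsblk. dd does not ask twice.
CLONE_SRC=/dev/sdX   # the boot thumbdrive
CLONE_DST=/dev/sdY   # the blank target thumbdrive

if [ -b "$CLONE_SRC" ] && [ -b "$CLONE_DST" ]; then
    # bs=4M for throughput; conv=fsync flushes writes before dd exits.
    dd if="$CLONE_SRC" of="$CLONE_DST" bs=4M status=progress conv=fsync
else
    echo "Edit CLONE_SRC/CLONE_DST to real block devices first."
fi
```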

    my last system was an asus intel board with an i5 and 32 gigs of ram with adata ssd 128. system drive.

    my rant is this what hardware should i use to have it last more than 2 to 3 years? or is it expected to have something fail?

    If you built the server yourself, from components, you've got to be careful about potential ESD damage. "Touching" the mobo's chip legs and solder points is a No-No. While some of the latest hardware has some ESD protection built into it, it's still possible to "zap" sensitive chips and you may not notice it. It doesn't take much.

    During assembly, it's best to use a grounded static cuff. Without a cuff, at the minimum, first ground yourself on the case chassis and handle the motherboard and its various plug-in components (memory, etc.) at their edges when installing them.

    Why are you working on the command line? OMV assumes that you're creating users, shared folders and SMB shares in the GUI. (I.E., there shouldn't be "sudo" anything, if you're working in the GUI.) If you don't work within the GUI, the server does not make the correct associations and log the changes into its database.

    For a test, follow this process -> Create a Network Share for creating a new shared folder and a new (SMB) network share, in the GUI. At the end of the process, the network share will be accessible to all users on the local network. This will be your starting point. After that, you could tighten up permissions on the shared folder, if you like, in accordance with the NAS permissions document.

    Finally (assuming you're using a Windows Client), create a user in OMV's GUI that matches the username and password of your Windows client logon and you'll have transparent access, in accordance with the permissions set in the shared folder and SMB share layered on top of it.

    If you're using a Linux client, that's another story altogether.

    just so interesting to me when it goes up to 40-50% from 4-5% and get more or less stuck there regardless of what I do

    Just about any copy operation, downloader operation, etc., will allocate unused ram to disk cache. It will stay allocated to disk cache until something else calls for it; then that ram is released to the requesting app.

    Note that Linux (and OMV) is very ram efficient. Both will run with as little as 512MB. I personally have run Debian/OMV using 1GB ram, on an old model R-PI. Currently, I have a backup server with a ZFS pool (ZFS is known to grab memory) with as little as 4GB and an Atom CPU. I've had no problems.
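    You can watch this distinction with free: the buff/cache column is reclaimable disk cache, while the available column is what applications can actually claim:

```shell
# "buff/cache" is reclaimable disk cache; "available" is what apps can
# actually get, because most of the cache is handed back on demand.
free -h
```

    A high "used plus cache" figure with a healthy "available" figure is normal, not a memory problem.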

    We (Helios64 owners) are currently experiencing significant problems when we try to use the script from...…Script/raw/master/install

    ... to install the current OMV 7 on a Armbian Bookworm system.

    You have to realize that you're using an unsupported "automated" Armbian build, for devs only. Those builds are untested and unsupported, by Armbian and OMV. BTW: I'm in the same boat with an older SBC, the Rock64.

    This is the CLI logon for the Rock64 and this is what you're seeing as well:

    While I understand trying to get some use out of an older, more specialized SBC, the bottom line is, it's not supported.

    So, the script works fine, but not to do a fresh install. Still same problem with static IP

    It should be noted, in the prescribed R-PI installation, that the first preinstall script must "execute" successfully. That means if a message like "could not resolve host" comes up, it has not executed successfully.

    After the preinstall script executes successfully, a file named is created at the following location:


    If the file is not there, the script failed, and that's likely because the script's host couldn't be accessed.