Posts by ieronymous

    There is no reason to reset passwords or cache. If that seemed to work, it is probably because it just gave it more time to get everything working. Lots of services wait to start until networking is started. You can't expect this to happen instantly.

    Thank you for trying that out. Nowhere in my post did I refer to how long I waited before trying again. Now that you mention it, I waited (and kept trying) for more than 15 minutes after I plugged the cable back in. So it's not a matter of the 30-50 seconds I didn't give it to load the appropriate services.

    Hi


    I thought I'd share this issue since it's the third time it has happened to me. I noticed that if I start OMV5 (installed on bare metal in my case) without the Ethernet cable plugged in, then no matter whether I plug it in afterwards / restart / shut down, it doesn't give access to the web GUI.
    This time it frustrated me a little more by not giving me access even after entering omv-firstaid and changing the password. I had to reset the cache as well and change the password, and that's how it finally let me get in.


    Is anyone else willing to replicate the situation (just start it without network and try to access the web GUI after you plug the Ethernet cable in) and share results?


    Thank you

    Small update... I am able to access Heimdall, for example, and through that my other containers like Netdata, Airsonic, etc., but not the main web UI page, due to a 403 Forbidden.
    Also, from the CLI, omv-firstaid seems not to exist anymore.

    Hello


    I've had a few months to play with OMV and new-old features. In the meantime I got an email from Let's Encrypt that one of my certificates, for my domain at ......duckdns.org, expires, so I should somehow renew it.
    Back when I set it up I used this guide https://www.youtube.com/watch?v=pRt7UlQSB2g&t=21s . At some point he issued the command docker logs -f letsencrypt and I thought that this line created the certificate, since the output had the message <<Congratulations! Your certificate and chain have been saved at...blah blah>>. Two months later, having received the Let's Encrypt email about the certificate's expiration, I tried to find out how to issue the renewal command and how that command would know which certificate needs renewing.



    I came across a site suggesting the openssl command, and by running


    $ echo | openssl s_client -servername NAME -connect HOST:PORT 2>/dev/null | openssl x509 -noout -dates


    I got an answer as to when the certificate expires. The answer was not a straightforward one, since I got this output:


    notBefore=Jul 24 22:15:07 2019 GMT
    notAfter=Oct 22 22:15:07 2019 GMT (it doesn't say whether it expires on that date or has already expired)
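    For what it's worth, openssl can answer the expiry question directly instead of printing raw dates: the -checkend flag exits with status 0 if the certificate is still valid N seconds from now, and 1 if it will have expired by then. A sketch, with NAME and HOST:PORT as placeholders just like in the command above:

```shell
# Ask openssl directly whether the certificate has already expired:
# -checkend 0 succeeds (exit 0) while the certificate is still valid.
echo | openssl s_client -servername NAME -connect HOST:PORT 2>/dev/null \
  | openssl x509 -noout -checkend 0 \
  && echo "certificate is still valid" \
  || echo "certificate has expired"
```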


    Anyway, the real problem came afterwards, since I couldn't generate a CSR from the existing key using OpenSSL with the command
    $ openssl req -new -key example.key -out example.csr -subj "/C=GB/ST=London/L=London/O=Global Security/OU=IT Department/CN=example.com"
    since I didn't know what to replace with what. In addition, nowhere inside the letsencrypt folder was there a file with a .key extension, only .pem files. So I came across another site mentioning certbot, and it all went wrong after the command sudo apt-get install certbot -t stretch-backports. It started downloading stuff, and somewhere I noticed a message about removing packages. It ended with an error (unfortunately I didn't copy the whole procedure), and after that I couldn't start the OMV web UI (getting a 403 Forbidden from nginx), even though it has the same IP address as before and I can SSH with PuTTY to that same address. I don't know what to do. Sorry for my lengthy post.
    PS 1. How the hell did that command alter the whole system? 2. I re-ran the command and I think this time it finished with no errors. Still no web access.
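    For reference, with the standalone certbot package (as opposed to the linuxserver letsencrypt container from the video, which renews on its own), renewal is normally a single command; certbot tracks which certificates it manages and only renews those close to expiry. This is a sketch assuming certbot installed correctly, which wasn't the case here:

```shell
# Simulate renewal first, without touching the real certificates
sudo certbot renew --dry-run

# If the dry run succeeds, renew for real; only certificates
# close to expiry are actually replaced
sudo certbot renew
```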


    Or the following scenario: your fs fails, you recover but lose some inodes. Just before you recover, your rsync starts working and deletes your backup. You should really be using more than just a plain copy.

    Well, I have an external backup with an older state of the data (1-2 weeks), and rsync both internally and externally is manually initiated, so... I am doing the best I can. Thank you though.

    You should add --delete to the command line.

    I suppose it doesn't matter whether it's at the end (rsync -avuc --progress --stats --delete) or somewhere in the middle (rsync -avuc --delete --progress --stats).



    That is why you may not want to use --delete.

    I already know the danger of using this switch, but I am very careful when handling my music database (been there, done that for 8 years now, manually in Windows with a quad-pane file explorer), and I need the two sides to be identical.



    The snapshots are not perfect; btrfs provides better atomic snapshots. But I take my snapshots at night when there is no activity on the network, so they are pretty OK.

    I am currently trying out a Terramaster NAS device with its TOS OS, where the HDDs are formatted in btrfs by default. I used the command, but it only gave the option of saving snapshots inside the array itself, and during import it throws an error, after which I can't see my shares for several restarts of the NAS.


    Anyway, I'll try --delete.


    Always there to help!! Thank you

    Hello again


    A tiny bit of knowledge each time. For a few weeks rsync seemed to work the way I wanted, but the truth is it didn't, since yesterday I came across a situation that didn't work as expected.


    So the issue is... I have a source folder on the NAS and a destination one on an external USB device.
    Using rsync -avuc --progress --stats /mnt/md0/muz/ /mnt/usb/usbshare1/muz copies all contents from source to destination and creates all dirs and subdirs as I want. But today I made three changes to the source:
    - deleted a file
    - renamed another one
    - added another one in a different location
    Of the above actions, only the third worked as expected, which is logical.


    The file I renamed in the source resulted in the destination getting a new folder with the renamed name while keeping the original one, which I didn't want (since I want source and destination to be identical).


    Lastly, the file I deleted in the source was kept in the destination.


    Bottom line: what should I add to the rsync line to make it behave the way I want?


    Thank you

    I doubt you're really low on RAM since an ordinary NAS can run with just 512 MB RAM without any swapping

    Correct. I use 8 GB total; I just wanted to know the what-if scenario... thanks for explaining.



    we're not running Chrome or Firefox with 40 open tabs on our NAS boxes

    Hahaha... also true.

    enabling zswap is a matter of running a sufficient kernel (Proxmox kernel from OMV Extras) and tweaking stuff accordingly: ubuntu-mate.community/t/enable…ncrease-performance/11302

    Thank you for the link

    The plugin never modifies fstab. It is just an optional step. Making fstab changes will NOT give you the benefits the plugin gives you. The plugin's only point is to configure folder2ram. folder2ram is what reduces the writes. folder2ram never modifies fstab either. I'm done trying to explain this. If you want to learn more, look at the code of the plugin and folder2ram.

    Well, that's a very different and way better explanation than the first one a few posts above. Now, besides you, I'm done asking about this, because it makes more logical sense now than it did previously :P

    Only if you're really low on RAM you need to take care about your swap settings and then with an SSD I would immediately explore zswap (this uses the SSD as backend storage for data that doesn't change and limits the 'normal swap activity' to compressed RAM).

    Your recommendation intrigued me to search for it, but I didn't find adequate info on how to use it (the implementation is already there, as I've read).

    I figure SSD's and Flash Drives are very similar technology, so it really can't hurt anything.

    Even though that's true, I was put off by technodad's video mentioning <<beware, this is the third time I'm using this, because it got my system locked>>, and if you followed my conversation with ryecoaaron, I just noticed that even though he installs the flash plugin, he ends up modifying a few lines in the /etc/fstab file, which already pre-exists and which someone could just edit without installing the plugin. In conclusion, I can't understand the extra benefit of having this plugin installed if you end up modifying a file that already pre-exists and isn't created by the flash plugin.


    PS: How could this plugin lock up the system? Which of its options / arguments makes it dangerous for a potential lock?



    Unlike a lot of SSD's a lot of those are available in sizes that are perfect for OMV (8, 16gig) where finding those sizes in more main stream brands of SSD's is near impossible. If you intend to use the SSD for only the OS (as it's designed), you end up buying at least a 64gb drive and it results in a ton of wasted space.

    True about sizes. Also, mine is an Intel 128 GB, which I have partitioned in order to use 16-32 GB for OS space and the other 80 GB for a possible VM installation (because of the potential snappiness it would have installed on an SSD). I'll probably have a third partition for appdata, but I'm not really sure whether to keep that on a separate partition or not.



    But unfortunately the math is not entirely precise because you can't trust those cheap crap King* SSDs (be it KingDian or KingSpec). They often fake SMART readouts (so you can neither read out an internal SSD temperature sensor nor query the 'wear out indicator' good SSDs provide) and they lack/fake TRIM capabilities. Those King* thingies will die way earlier than quality SSDs. But again: when using the flashmemory plugin this doesn't really matter as long as no other constant writing activities happen on the SSD (Docker containers, Plex database and so on).

    Many good points in the above paragraph. One note, though. You say <<this doesn't really matter as long as no other constant writing activities happen on the SSD>>, but swap will constantly write small amounts of data. If you disable it in fstab by commenting out the line referring to swap (at least that is what I understood to be a way of disabling it), then how do you account for the RAM that would have been used for swap, and what if a big copy-paste fills the RAM and fails?

    @ryecoaaron Thank you for your answers. I am not paranoid; it's just my nature not to want to use something without knowing why. It's also what keeps me learning things.


    Anyway, you mentioned above in your answer to @tinh_x7 that he has to check whether his drive supports discard.


    After reading up on how to check that, I only came across how to check for TRIM support with the command lsblk -D and checking the result: <<TRIM/discard is available if the DISC-MAX column is not 0B>>.



    Mine:



    So it seems to support it.



    Now if I run the command


    Code
    fstrim --verbose --all
    
    
    I get the output
    
    
    root@openmediavault:~# fstrim --verbose --all
    /sharedfolders/Appdata: 90.3 GiB (96997376000 bytes) trimmed
    /: 13.1 GiB (14060220416 bytes) trimmed

    Does this count as running the command manually? So could I somehow create a cron task to run it regularly?
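    A sketch of two ways to schedule it, assuming a Debian-based OMV install; the weekly timing is an arbitrary choice:

```shell
# Option 1: util-linux ships a systemd timer that runs fstrim -a weekly
systemctl enable --now fstrim.timer

# Option 2: an equivalent root cron entry (add via `crontab -e` as root);
# this line runs every Sunday at 03:00
# 0 3 * * 0  /sbin/fstrim --all --verbose
```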

    @ryecoaaron In your answer to my question about whether there is a solution to make the system read-only, you clearly answered no, but inside my fstab there is already the line ext4 errors=remount-ro. Doesn't this mean it's already read-only?


    If you want to reduce writes to the ssd, then you would need to install the plugin.


    If you check below, you can see that without installing the flash plugin I could easily go into fstab and change the appropriate lines (noatime, nodirtime, etc.). What is the point of installing that plugin if in both cases (installed or not) you have to manually enter fstab and make the changes?
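    For reference, a sketch of what such a hand-edited fstab line looks like (the UUID is a made-up placeholder; note that the actual mount option is spelled nodiratime, not nodirtime, and that errors=remount-ro does not make the system read-only, it only remounts the filesystem read-only after it hits an error):

```
# /etc/fstab root entry with reduced-write options (placeholder UUID)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults,noatime,nodiratime,errors=remount-ro 0 1
```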


    Most systems don't need swap and swap writes a lot to your drive if you use it. Once again, this is optional especially with an ssd.

    So should I just comment out the line UUID=998f9df7-3fff-4ff4-9833-0ffa4d26a91a none swap sw, and it won't use swap anymore? Does this mean that RAM is going to be used instead and I should always keep a specific amount free (something like ZFS)?
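    A sketch of the full procedure, assuming that UUID line is the only swap entry in fstab. Commenting it out only takes effect at the next boot, so swapoff handles the running system (and yes, once swap is gone, everything must fit in RAM):

```shell
# Disable all active swap immediately
sudo swapoff -a

# Comment out the swap entry so it stays disabled after a reboot
# (same effect as editing /etc/fstab by hand; UUID taken from the post)
sudo sed -i 's|^UUID=998f9df7-3fff-4ff4-9833-0ffa4d26a91a|#&|' /etc/fstab

# Verify: this should print nothing
swapon --show
```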



    I've read this info https://manpages.debian.org/st…il-linux/fstrim.8.en.html but found no clear example (at least for me) of how to use the command on my SSD drive. Could you help with this (I mean the command)?
    I don't want to use fstrim -a in case it interacts somehow negatively with my spinning drives (in case any errors occur).


    Finally, should I add the options above (noatime, nodirtime, etc.) as in the technodad videos, or not?


    Isn't there any clear answer, like: you have an SSD, do this and that, and be done with it?

    Ok, got it, but there is no way I'm deleting 1.2 TB and copying everything again. It's just not worth the effort and the extra wear on the HDD. Thank you though.

    You created the file Dir1sums in your /root/ which contains the checksums of your source folder. You compared them to your destination folders and did not get any errors. Now do the same thing with diff -r, if this does not prompt anything, your copy is correct.

    I didn't put it there; the command had that structure and I just followed it. If you want, type out both of the commands exactly as they should be, so I can save them somewhere for future use.



    You were not able to find dir1sums in your filesystem just because you named it Dir1sums instead.

    I typed it as Dir1sums as well, just forgot to mention it... so the problem was not there either.

    Ok, so it is all fine and working as expected. The other command df -hT had an additional space.

    and last one


    Judging from this, your test scenario is not working. If you copied the files, you can check whether they are correct with md5deep. To check whether the file structure is the same, you need to do something in addition.
    This here would work:
    diff -r sourceDir destDir
    It will display all differences in file structure (names only, not looking at the data inside the files).
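    A sketch of the two checks described above, side by side (sourceDir and destDir are placeholders, and md5deep must be installed):

```shell
# Structure check: lists files/dirs present on only one side
diff -r sourceDir destDir

# Content check: hash the source tree, then list any destination
# file whose hash is NOT in that list (-x = negative matching)
md5deep -r -l sourceDir > /tmp/sums.txt
cd destDir && md5deep -r -l -x /tmp/sums.txt .
```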

    Thank you very much for your time, but in my scenario nothing does all the comparisons in one pass. Each command checks one thing and leaves something else out of the comparison.
    Not for me


    I don't worry about it. On a physical machine (which I don't use very often anymore), I leave it at whatever OMV defaults to. I use the flashmemory plugin on usb sticks and SSDs. So, I doubt the setting will make much of a difference.

    It's been a long time since anyone posted here, but why make a new thread when this one already exists? I use an Intel 128 GB SSD as the OS drive, if I recall correctly. Have things changed, and is TRIM now supported automatically without extra steps?


    As for the flash memory plugin, is there a solution with the possibility of making the system read-only (as technodad life mentions)? I have already set up and configured the system and would not like to start from scratch, but since I use an SSD it would be great to set it up as properly as possible. Do I need to install the plugin, or does /etc/fstab already exist so that I only have to add ext4 noatime, nodirtime, errors=remount-ro (does this mean read-only?) 0 1? And why is he commenting out the UUID line corresponding to swap?


    How can I check whether all those options are already in effect?


    Is there a better way to enable the TRIM command?



    Activation of weekly fstrim is very easy, see the link I posted.

    Any guide on how to accomplish this?