Which file system and how to format it

  • I am about to upgrade from 1.x to 2.x and also upgrade my hard disks from 1TB to 3TB (see my other question here: Update from 1.18 to 2.x before or after hard disk upgrade?).


    At the moment the internal disks (2 disks in RAID1 and 1 disk as backup, mirroring the RAID "manually" 1-2 times per week via rsync) are formatted as ext4.


    So from a 1TB disk, OMV shows me a total volume of 916.89 GB.


    I also have 2 portable external disks for rotating offsite backups (via the USBbackup plugin). Those are formatted as NTFS so they are easily accessible from Windows computers in case something happens with the OMV installation and we need to get at the backups.


    What I noticed is that there is far more space available on the external drives running NTFS than on the internal drives with ext4. This might have various reasons, but I was wondering whether the way a disk is formatted has an impact on how much space is available on it?


    Our data is very diverse. We save a lot of small office documents (Excel, PPT, DOC, PDF and images), but also some bigger audio files and some pretty big video files from our projects. The majority of files (in number) are small ones, of course. So I was wondering whether I can optimize the filesystem of the disks for my use case?


    Or do I just format them (the internal ones) as ext4 and that's it?

  • And here another observation:



    According to OMV the disks have 916.89 GB of space, with 856.65 GB used and 13.66 GB available. So 46.58 GB got "lost" somehow on the way. Either the calculations OMV makes are wrong, or, and this is what I suspect, there is some "loss" due to the file structure and the file system. If this is the case, I wanted to know whether there is any way to keep this "loss" as small as possible, because at the moment I am fighting for every GB to keep operations running before the upgrade to 3TB (it will cause some downtime, so I am not sure when I will be able to make the upgrade).

    • Official post

    By default, ext4 reserves 5% of the drive for root. If you want to change this, just tune2fs -m 0 /dev/sdd1
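    For reference, 5% of 916.89 GB is roughly 45.8 GB, which lines up with the ~46.58 GB difference noted above. A minimal sketch for checking the current reservation and then removing it (using the /dev/sdd1 example above; substitute your actual data partition):

    Code
    # show how many blocks are currently reserved for root
    tune2fs -l /dev/sdd1 | grep -i "reserved block count"
    # set the reserved percentage to 0 so the remaining space is available to regular users
    tune2fs -m 0 /dev/sdd1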

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks Aaron,


    As I have an HDD just for the operating system and these disks are just for the data, I guess I can easily live without those 5% for root and rather use the total space for the files, right? This might explain why there is so much more space on the external HDDs with NTFS than on the internal ext4 disks.


    Can I also apply this command to disks that already have data, or would you rather recommend doing this only for disks that are set up fresh?

    • Official post

    Yep, you can eliminate the 5% on data disks with no issues. And NTFS doesn't have this reserved space.


    You can apply it to existing disks. Can't remember if you have to reboot or not, but I would.
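    A rough sketch of doing this on a filesystem that is already mounted and then verifying the result (assuming the RAID1 array shows up as /dev/md0; the device name is a placeholder):

    Code
    df -h                   # note the "Avail" column for the data filesystem before the change
    tune2fs -m 0 /dev/md0   # drop the root reservation on the live filesystem
    df -h                   # "Avail" should grow by roughly 5% of the filesystem size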


    • Official post

    RAID1 has the same ext4 filesystem that an individual drive has. So, no problems.


  • I just started the work to change the RAID1 from 1TB to 3TB disks. But in order to be able to unmount, it seems I need to remove all services that use shared folders (did that) and all shared folders. Here I am hesitating, because it says that when I remove a shared folder all files will be deleted.


    Is there no way to "release" the shared folders but leave the files on the disk, so that when I remove them, the disks still have all files on them? I do have a backup disk, but I thought I could keep the 1TB RAID disks around for a while, just to be sure. I feel a little uncomfortable deleting the files from these disks.


    Any chance to keep the files on those 2 disks for now, but still be able to remove the disks?

  • OK, I went through this again and now I noticed that at one point I have to click "no", so it just deletes the shared folder configuration and not the shared folder data. So that is fine.


    But now I have one shared folder that still shows "yes" for in use. I can't figure out what is using it. I'll try a reboot and see if this goes away.


    Is there any way to see what is using this shared folder? It actually seems more complicated to get rid of a disk or change a disk than to set the whole thing up in the first place. ;)

  • So after searching the whole WebGUI for anything that might still be using this one share that couldn't be deleted because it said it was in use, I just deleted it from the config file. I hope there won't be any problems in the future; at the moment things look calm.
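    For anyone in the same spot: before hand-editing the config it is worth checking which services still reference the shared folder. A quick, low-tech sketch (the share name is a placeholder; keep a backup of the file):

    Code
    # back up OMV's config before touching it
    cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak
    # list every line that mentions the stuck share (replace MYSHARE with the shared folder's name or UUID)
    grep -n -i "MYSHARE" /etc/openmediavault/config.xml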


    Started the initialization process of the RAID on Saturday afternoon, but it took quite a while so I went home and left the server working. So the RAID is set up; now I will create the file system (and use the command aaron gave me).
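    Since the filesystem is being created from scratch anyway, the reservation can also be set to 0 at creation time instead of running tune2fs afterwards. A minimal sketch, assuming the new RAID1 array is /dev/md0 (device and label are placeholders):

    Code
    # create an ext4 filesystem with 0% reserved blocks and a descriptive label
    mkfs.ext4 -m 0 -L data /dev/md0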


    Let's see how this goes.

  • Just wanted to thank everyone who responded and helped to make this move from 1TB to 3TB as smooth as possible. What I found strange in this forum, though, is that help comes mainly from the moderators (thank you to those guys) and the community itself doesn't seem to help much. Is this impression right, or is it just my stupid questions that scare the others off? ;)


    I set up the shared folders and the SMB shares yesterday. I reversed the rsync process that I had in place to synchronize manually from the RAID to the internal backup disk, now syncing everything from the internal backup disk to the RAID disks. It took longer than expected (loads of small office files and some audio and video files), but last night at 8 pm everything was synchronized to the RAID.
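    The reversed sync was basically a plain rsync from the backup disk's mount point to the RAID's mount point, roughly like this (flags and paths are assumptions; OMV mounts the filesystems under /media/<uuid>):

    Code
    # -a preserves permissions, ownership and timestamps; -v prints each file as it is copied
    rsync -av /media/<backup-uuid>/ /media/<raid-uuid>/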


    First it seemed that I had some problems with permissions (couldn't access some random folders on the drive). But once the sync was completely finished these problems vanished.


    I just have one problem which has persisted for quite a while already but which I never tried to solve, because it is actually no big deal. I once forgot to disallow user folders and one of the users created a user folder. However, we were not able to delete this folder. Yesterday I tried to delete it, as I was working on the server anyway, and it wouldn't let me delete it over the command line. I was able to delete the content (one image file), but when I try to delete the folder it either gives no error message (yet the folder is still there when I check with ls) or it says that the folder doesn't exist, but when I use ls it still shows up. Any clues?

  • What I found strange in this forum, though, is that help comes mainly from the moderators (thank you to those guys) and the community itself doesn't seem to help much.

    Hah, yeah... We have a few users here who help out a great deal. But you are correct, most of the help comes from the moderators. Of course, one reason is we are fast :D


    and it wouldn't let me delete it over the command line

    Humm? Even as root? Is it an EXT4 filesystem? Have you tried the -r parameter?

  • Here is what I tried just now.


    Code
    root@pdb-matrix:/media/bbaa055f-7fd0-49bf-b63a-6d75bc3665f7/pdb_research# ls
    info  misc  pdb-ir  studies  sync.ffs_db  users


    So clearly the folder "pdb_research" contains a folder "pdb-ir", which is the one I would like to delete.


    Code
    root@pdb-matrix:/media/bbaa055f-7fd0-49bf-b63a-6d75bc3665f7/pdb_research# rm -r /pdb-ir
    rm: cannot remove `/pdb-ir': No such file or directory


    This is what I tried first. It says no such file or directory, which is strange, because ls shows it.


    Code
    root@pdb-matrix:/media/bbaa055f-7fd0-49bf-b63a-6d75bc3665f7/pdb_research# rm -r /pdb-ir/
    rm: cannot remove `/pdb-ir/': No such file or directory


    So I tried it with a trailing slash, but that didn't work either.
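    Looking at the commands again, the leading slash means rm is looking for /pdb-ir at the root of the filesystem, not for the pdb-ir folder inside pdb_research, which would explain the "No such file or directory" message. From inside pdb_research a relative path should do it, something like:

    Code
    cd /media/bbaa055f-7fd0-49bf-b63a-6d75bc3665f7/pdb_research
    # no leading slash: remove the pdb-ir directory in the current directory
    rm -r pdb-ir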
