Yup. Especially not if those new 8TB "Archive/Cloud HDDs" from Seagate hit the market in the next year... who tries them first?
Greetings
David
It is a rare issue. It only happens when you create an ext4 filesystem smaller than 16 TB and later want to expand it beyond 16 TB. The only solution seems to be to set the 64bit flag all the time instead of letting the system choose. The system can never choose correctly on its own, because it doesn't know whether you will someday expand beyond 16 TB. We just need to figure out whether there are any side effects.
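If the 64bit feature is enabled at creation time, the 16 TB ceiling never applies. A minimal sketch of what "use the 64bit flag all the time" means in practice (the device name /dev/md0 is just an example; adjust it to your setup):

```shell
# Create the ext4 filesystem with the 64bit feature enabled explicitly,
# instead of letting mkfs decide based on the current size
# (device name is an example):
mkfs.ext4 -O 64bit /dev/md0

# The effect can be demonstrated on a small file-backed image, no root needed:
truncate -s 100M demo.img
mkfs.ext4 -q -O 64bit demo.img
dumpe2fs -h demo.img 2>/dev/null | grep 'Filesystem features'
# the feature list now contains "64bit" even though the fs is tiny
```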
I think this fixed my issue. I deleted my old file system and added a new larger one. I'll add some more drives as soon as I can and see what happens.
Fixed in openmediavault 1.8, see http://sourceforge.net/p/openmediavault/code/1632
Though it does not fix for you guys...
Greetings
David
Excellent, thank you. Happy new year!
Is there a command I can run to find out whether I'll hit this same limitation when I try to grow my array? If I need to back up my array as it is now and recreate it (since you say it's fixed), I'd rather do it now than later, when it will contain even more data.
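For the record, one way to check an existing filesystem is to look for "64bit" in its feature list. A sketch, assuming the array device is /dev/md0 (yours may differ):

```shell
# Replace /dev/md0 with your device. If "64bit" appears under
# "Filesystem features", the filesystem can grow past 16 TB;
# if it is missing, resizing beyond 16 TB will fail.
dumpe2fs -h /dev/md0 2>/dev/null | grep '^Filesystem features'

# Or as a simple yes/no test:
if tune2fs -l /dev/md0 | grep -q '\b64bit\b'; then
    echo "64bit filesystem - can grow past 16 TB"
else
    echo "32bit filesystem - resize beyond 16 TB will fail"
fi
```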
If your array was created before 1.8, it certainly will be a 32bit filesystem.
Greetings
David
Sorry to post in an old thread, but it seems the issue is still occurring. About 3 days ago I created an ext4 filesystem from 3 drives in LVM with a total of 10 TB. Today I tried adding 2 more drives of 3 TB each, and I get the same error, "resize2fs: new size too large to be expressed in 32 bits", when I try to resize the filesystem.
Is your installation 32bit?
As described in other threads, even if we add the flag, the current Wheezy e2fsprogs is not capable of handling the resize.
Can you post the output of dumpe2fs -h <device> here?
The issue is only half fixed: to do the resize you need to boot into another OS of some kind and do it there, because the resize2fs in OMV/Debian is an older version.
I used SystemRescueCd.
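For anyone following along, the rescue-CD route is roughly the following; the device name /dev/md127 is taken from an error message later in this thread, yours may differ. Boot SystemRescueCd, which ships a newer e2fsprogs, and run the check and resize from there:

```shell
# From a SystemRescueCd shell. The filesystem must be unmounted.
umount /dev/md127            # in case it was auto-mounted
e2fsck -f /dev/md127         # resize2fs insists on a freshly checked filesystem
resize2fs /dev/md127         # grow to fill the underlying device
```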
The installation is 64bit:
dumpe2fs:
Filesystem volume name: Main
Last mounted on: /media/23ddb01f-bd5d-4aa4-93f3-5053fda907b1
Filesystem UUID: 23ddb01f-bd5d-4aa4-93f3-5053fda907b1
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 366284800
Block count: 2930260992
Reserved block count: 0
Free blocks: 1560698105
Free inodes: 364959257
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 650
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 4096
Inode blocks per group: 256
Flex block group size: 16
Filesystem created: Sun Feb 21 05:58:07 2016
Last mount time: Thu Feb 25 22:34:12 2016
Last write time: Thu Feb 25 22:34:12 2016
Mount count: 1
Maximum mount count: -1
Last checked: Thu Feb 25 21:00:01 2016
Check interval: 0 (<none>)
Lifetime writes: 511 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 8bbf6a88-a606-4333-a33e-ed400c570fd9
Journal backup: inode blocks
Journal features: journal_incompat_revoke
Journal size: 128M
Journal length: 32768
Journal sequence: 0x00007d26
Journal start: 7383
I'll try using gparted and see if that works
Gparted is the way to go atm.
It didn't work with gparted for me; I was getting errors from the e2fsck step there, but I was sure my FS was fine (an fsck I ran about 10 minutes earlier showed no errors), so I aborted and tried SystemRescueCd instead, and it worked flawlessly. Thanks for the help, guys.
I ran into this issue just now.
Failed to grow the filesystem '/dev/md127': resize2fs 1.42.5 (29-Jul-2012)
resize2fs: New size too large to be expressed in 32 bits
Is there any step-by-step guide for the "another OS" solution?
I'm afraid that, to my knowledge, there is no way - risky or not - to alter this option. Not even with another OS.
Greetings
David