LVM shows less than 3TB

  • I have two 3TB Hitachi HUA723030ALA640 HDDs. I've set up LVM on them, and when I look at Physical Disks I see 2.73 TB for each disk, which seems right, see screenshots below:




    But when I look at the LVM PV, I can see that only 2TB are available:




    The two disks together also only add up to 4TB, which is exactly 2x2TB:




    My question is: can I safely pvresize the disks from the CLI without losing my data?
    I can reboot the server, but I'm afraid I have nowhere to temporarily move the files from these disks right now...
    Thank you in advance!

  • Thank you for the quick reply!
    I use 64-bit OMV.

    Code
    uname -a
    Linux openmediavault 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux


    Code
    cat /etc/issue
    OpenMediaVault 0.4.0.2 (Fedaykin)


    Here is the pvdisplay output I get:

    Code
    pvdisplay --noheadings --separator '|' -C -o pv_uuid,pv_size,pv_free,pv_used,vg_uuid,vg_name,pv_pe_alloc_count --unit b /dev/sdc
      Failed to read physical volume "/dev/sdc"


    I guess I should use /dev/sdc1:

    Code
    pvdisplay --noheadings --separator '|' -C -o pv_uuid,pv_size,pv_free,pv_used,vg_uuid,vg_name,pv_pe_alloc_count --unit b /dev/sdc1
      BP2kPN-ASJZ-oCTd-XG51-e1DK-6uG6-1SYX5v|2199019061248B|772171366400B|1426847694848B|6X10x1-eVnm-QqmC-6Wci-T59x-ZEyo-hgfC8r|HITACHI|340187
    • Official post

    Hmmm, pvdisplay reports 2199019061248 B => 2 TiB. But what makes me wonder is the device name /dev/sdc1. I think this should be /dev/sdc instead, because /dev/xxx1 is a partition, so the physical volume does not use the whole disk capacity.


    Please execute


    Code
    # cat /proc/partitions
    # blkid


    and post the output.


    If you are able to recreate the LVM, please wipe the physical disks before using them as physical volumes (so that existing partitions are removed). You'll find the wipe button in the physical disks panel.
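
    If you prefer the shell over the web interface, a rough CLI equivalent of that wipe (my assumption, not the OMV button itself) is wipefs, which removes all filesystem and partition table signatures from a disk:

    Code
    # WARNING: this makes the existing data inaccessible - only do it on disks you are rebuilding
    wipefs -a /dev/sdc
    wipefs -a /dev/sdd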

  • So you want to use the disks for LVM and for nothing else, right?


    The issue is that you have created a DOS/MBR partition table (a guess on my part). This partition table does not support partitions larger than 2 TiB: MBR addresses sectors with 32-bit numbers, so with 512-byte sectors the limit is 2^32 x 512 B = 2 TiB. That means your partition 1 is 2TB, and the PV you create out of part1 is then also only 2TB.


    Workaround:
    Remove the partition and create the PV directly on the device, without any partition table.
    The command pvcreate /dev/sdc should do the job.


    Other workaround:
    Use a GPT partition table. You can create such a table with gparted/parted.
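
    A minimal sketch of that second workaround with parted, only for a disk you are rebuilding from scratch (mklabel replaces the existing partition table, so the data on the disk becomes inaccessible):

    Code
    parted -s /dev/sdc mklabel gpt                 # new GPT label, replaces the old msdos table
    parted -s /dev/sdc mkpart primary 1MiB 100%    # one partition spanning the whole 3TB
    pvcreate /dev/sdc1                             # use that partition as the PV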


    You will face the same issue with md devices and so on, so the best option is not to use any partition table at all. That also avoids alignment issues.

    Everything is possible, sometimes it requires Google to find out how.

  • The problem is, like I said before, that there is no easy way to wipe the disks as they already contain a lot of data and I don't have space to move it...
    Here is what you asked me to try:


    blkid

  • Not sure if this works, as the 32-bit LBA limit may be a hard limit for all partitions.


    Try to create an sdc2/sdd2/sdb2 partition in the remaining space and create PVs on it. Then add the PVs to your VG. You can then resize your LV and the filesystem on top of it.
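
    A rough sketch for one disk, assuming the VG name HITACHI from the pvdisplay output above and a placeholder LV name myvol (and assuming the msdos table can actually address the space beyond 2 TiB, which is exactly what the 32-bit LBA limit may prevent):

    Code
    parted /dev/sdc mkpart primary 2TiB 100%    # second partition in the space after the 2TiB one
    pvcreate /dev/sdc2                          # turn it into a PV
    vgextend HITACHI /dev/sdc2                  # add it to the existing VG
    lvextend -l +100%FREE /dev/HITACHI/myvol    # grow the LV into the new space
    resize2fs /dev/HITACHI/myvol                # grow the ext filesystem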


    Hope that helps. It is not the nicest solution, but at least you can use the full space available.


    If this does not work, you need to save your data somewhere else and rebuild the LVM. There is no other way of achieving this.

    Everything is possible, sometimes it requires Google to find out how.

  • Zitat von "SerErris"

    So you want to use the disks for LVM and for nothing else, right?


    The issue is that you have created a DOS/MBR partition table (a guess on my part). This partition table does not support partitions larger than 2 TiB. That means your partition 1 is 2TB, and the PV you create out of part1 is then also only 2TB.


    Thank you for your answer! As far as I can remember I didn't create any partition table on these disks; I simply attached them without any formatting after buying them and set them up as LVM from the OMV web interface. That's all I did.
    Do I understand correctly that there is no way to see from the fdisk output which partition table is on the disk?


    But parted nevertheless says there IS an msdos partition table...
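
    I checked it with parted's print command, roughly like this (output trimmed to the relevant line):

    Code
    parted /dev/sdc print
      ...
      Partition Table: msdos
      ...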

  • I am not sure how you got this partition table. My OMV did not set up any partition on it but used the raw device.


    So yes, you have an MSDOS/MBR partition table.


    Quote

    Note: Like all operations involving partition manipulation, the below procedure carries some risk, and you are strongly advised to backup any critical data beforehand.


    You can try gdisk to change the partition table from MBR to GPT. After that you can grow the GPT partition with parted, and then pvresize the PVs so that LVM (hopefully) picks up the bigger size.


    I am not sure if this works and it is potentially very dangerous for your data. If it fails, your data may become unreadable.


    The procedure should be:


    • Unmount the LVM volumes. You can find the mountpoints under /media
    • run gdisk /dev/sdx on every drive you want to convert to GPT (sdc, sdd) (this is the dangerous part)
    • run parted to grow the partition to the maximum size
    • run pvresize /dev/sdx1 for every partition you converted above (sdc1, sdd1)
    • now you can remount the filesystem on top of LVM with mount -a
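
    A sketch of those steps on the shell, with a placeholder mountpoint and assuming a reasonably recent parted (older versions lack resizepart and need rm plus mkpart at the same start sector instead):

    Code
    umount /media/<your-mountpoint>        # step 1, for every mountpoint of the LV
    gdisk /dev/sdc                         # step 2: gdisk converts MBR to GPT in memory, write it with 'w'
    gdisk /dev/sdd
    parted /dev/sdc resizepart 1 100%      # step 3: grow partition 1 to the end of the disk
    parted /dev/sdd resizepart 1 100%
    pvresize /dev/sdc1                     # step 4: let LVM pick up the new partition size
    pvresize /dev/sdd1
    mount -a                               # step 5: remount everything from /etc/fstab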


    You can find additional information about this process here: http://askubuntu.com/questions…and-make-ubuntu-boot-from
    As that describes the full-blown process for a dual-boot Windows 7 installation, you do not need to worry about the EFI system partition and the size required for it; we have simple data disks with nothing to boot. If you want to make sure you have enough space, you can move the start of the partition to 1MB with parted before doing these activities.


    If all that worked, you have managed to increase the usable space inside your VG. You can now grow the LV and the filesystem in it.


    There is another method, which you could use:


    Remove about 300 GB of data, or copy it somewhere else, so that the total amount of data fits into a 2TiB LV.


    Then remove /dev/sdd1 from your VG (pvmove to empty it, then vgreduce and pvremove).


    Run pvcreate on /dev/sdd and reintegrate it into your VG with vgextend - it will now have 3TB capacity.


    After that remove /dev/sdc1 from the VG and run pvcreate on /dev/sdc (also 3TB now). Then reintegrate that PV into your VG.


    All that takes a looooong time as the data has to be physically moved to free the PV you want to remove.
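
    A sketch of that dance for the first disk, again with the VG name HITACHI from your pvdisplay output (before the pvmove, the filesystem and the LV would have to be shrunk so that everything fits on one PV):

    Code
    pvmove /dev/sdd1              # push all extents off sdd1 onto the remaining PV
    vgreduce HITACHI /dev/sdd1    # take the emptied PV out of the VG
    pvremove /dev/sdd1            # drop the LVM label from the old partition
    # remove the old partition table, e.g. wipefs -a /dev/sdd, then:
    pvcreate /dev/sdd             # recreate the PV on the whole disk (full 3TB)
    vgextend HITACHI /dev/sdd     # put it back into the VG
    # afterwards repeat the same steps for sdc/sdc1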

    Everything is possible, sometimes it requires Google to find out how.

  • Zitat von "SerErris"

    I am not sure how you got this partition table. My OMV did not set up any partition on it but used the raw device.
    So yes, you have an MSDOS/MBR partition table.
    ...........
    All that takes a looooong time as the data has to be physically moved to free the PV you want to remove.


    Thank you very much for all your suggestions.
    I realize it would do less harm if I found, say, a 2TB USB HDD, moved the data there and went this whole way again from the beginning.
    So could you please advise me how to do this correctly? Should I create partitions at all? I guess there's no need for that if I use LVM?

  • My question would be:


    What are your goals with this solution? Do you want any level of RAID protection? At the moment you have no RAID below the LVM. This is not a problem, but it simply means that if one disk fails, all data is gone.


    So if you want at least a RAID 1 (2 disks), then you should first configure RAID across both disks (you do not need any partitions here). Then put the resulting RAID device into a PV and use that.


    The benefit of RAID in your environment is that you get protection against disk failures (not against deletion). The downside is that you end up with 3TB total space (2x3TB disks in RAID 1 = 3TB usable) and therefore trade the second disk for protection.
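
    A minimal sketch of that RAID 1 variant on the shell (OMV can also create the array from the GUI; /dev/md0 is just the usual name for the first array):

    Code
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd    # mirror the two raw disks
    pvcreate /dev/md0                                                       # use the mirror as the single PV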


    Okay - however you proceed, the generic steps are:


    Copy everything to a USB drive, yes - good idea.
    Then, to not lose all your shares and so on, you should do the necessary steps at shell level. That way you keep all your settings above the filesystem (shared folders, CIFS shares and so on).
    Stop all services (SMB, UPnP, NFS, etc.).
    Then unmount the storage partition in question.
    Remove the LV, the VG and the PVs:

    Code
    lvremove /dev/myvg/myvol
    vgremove myvg
    pvremove /dev/sdc1 /dev/sdd1


    Now, depending on whether you want RAID, create a RAID array with the GUI (or not).
    Then create the new PV(s); without RAID it would be this:

    Code
    pvcreate /dev/sdc
    pvcreate /dev/sdd


    With RAID it would be something like this:

    Code
    pvcreate /dev/md0


    Now create the volume group and the logical volume again (using the same names as before would be best):

    Code
    vgcreate myvg /dev/sdc /dev/sdd    # or: vgcreate myvg /dev/md0 for the RAID variant
    vgdisplay myvg | grep "Total PE"
      Total PE              10230
    lvcreate -l 10230 myvg -n mylv


    Then create the new filesystem on the new LV:

    Code
    mkfs.ext4 /dev/myvg/mylv


    Now you need to find the UUID of the LV you created:

    Code
    blkid /dev/myvg/mylv


    Edit the entry in /etc/fstab where the volume is mounted so that it reflects the new UUID.
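
    For illustration, such an fstab entry could look roughly like this (the UUID and the mountpoint /media/mydata are placeholders, not your real values):

    Code
    UUID=<uuid-from-blkid>  /media/mydata  ext4  defaults  0  2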


    Now you should be all set up again and able to restore your data.


    For further tuning you should read my post here about ext4, RAID and LVM performance tuning. Most of it is only relevant for RAID 5 anyway.

    Everything is possible, sometimes it requires Google to find out how.
