USB Boot Drive Conflict

    • OMV 4.x


    • USB Boot Drive Conflict

      I have a Dell T30 server that I am using to run OpenMediaVault. To preserve the SATA connections, I decided to install the OS on a USB drive. I am also using 2 external HDDs for data storage.

      The BIOS on the server does not let me select which USB device to boot from. When all 3 are connected, it picks the boot device in a seemingly random order. I spoke to Dell support, and according to them, as long as the other drives don't have any boot files, the BIOS will not select them. I checked this theory by booting with just the boot USB drive and an EXT4-formatted test USB drive that had no files on it. After multiple boots, it always picked the boot USB drive.

      So I am struggling to figure out what it is on the data drive that the BIOS mistakes for a boot file and gets confused by. Any tips on how I can troubleshoot this would be highly appreciated.
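      (One way to narrow this down: a legacy BIOS generally treats a drive as bootable if the first sector ends in the 55 AA boot signature, so you can inspect those two bytes directly. A sketch only, with /dev/sdX standing in for whichever drive lsblk shows as the data drive:)

      Source Code

      # read bytes 510-511 of the drive's first sector; "55 aa" is the boot-sector signature
      dd if=/dev/sdX bs=1 skip=510 count=2 2>/dev/null | od -A n -t x1

      Note that a GPT-partitioned data drive carries a protective MBR with this signature as well, so a plain EXT4 data disk can still look "bootable" to the firmware.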
    • utamav wrote:

      Any tips on how I can troubleshoot this would be highly appreciated.
      There must be some sort of boot record on the drive and your server is 'seeing' that and attempting to boot from it.

      What I would attempt is:

      Boot from the drive with omv installed
      Plug in the second usb, omv should see the drive under storage>>>disks
      Select the drive and click wipe
      Once the drive has been wiped, prepare it for use by creating the file system using omv
      Then retest the boot process.

      If that doesn't work, the other option is to use dd to erase the drive via cli, then mount and create the file system, but the above should work.
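      (For reference, the CLI route would look roughly like this; a sketch only, not OMV's own wipe routine, with /dev/sdX as a placeholder for the data drive, so double-check the device name with lsblk first because this destroys everything on it:)

      Source Code

      lsblk -o NAME,SIZE,MODEL   # identify the data drive first, e.g. /dev/sdX (placeholder)
      wipefs -a /dev/sdX         # remove filesystem and partition-table signatures
      # or zap MBR and GPT structures explicitly (GPT also keeps a backup header at the end of the disk):
      sgdisk --zap-all /dev/sdX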

      Good luck
    • geaves wrote:

      What I would attempt is: boot from the drive with omv installed, plug in the second usb, wipe it under storage>>>disks, create the file system, then retest the boot process.
      Thanks, I'll try that over the weekend. Wiping the drive is the last thing I want to do.


      ajaja wrote:

      utamav wrote:

      I have a Dell T30 server
      Which T30 and what Bios?

      I have a T30 running on BIOS 1.x, booted from M.2. There is a slot on the MB.
      But then the BIOS will be stuck at 1.0.2. Mine came with 1.0.14, and though I have the 1.0.2 BIOS, I don't want to be stuck with an insecure system.
    • danielwbn wrote:

      utamav wrote:

      Wiping the drive is the last think I want to do.
      You can remove the mbr without wiping the whole drive, but you have to be very careful with the commands!
      see e.g. option #2 in cyberciti.biz/faq/linux-cleari…r-boot-record-dd-command/
      Far be it from me to pour water over someone else's suggestion, but this can be done from the gui, the wipe option has a short and long option...the short erasing any mbr and any metadata on the drive....why use dd from the cli when you can use the KIS principle and avoid any pitfalls.
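      (For reference, the dd approach danielwbn is pointing at generally amounts to clearing the boot-code area of the first sector; a minimal sketch, assuming an MBR-partitioned drive that shows up as /dev/sdX:)

      Source Code

      dd if=/dev/sdX of=mbr-backup.bin bs=512 count=1   # back up the first sector before touching it
      dd if=/dev/zero of=/dev/sdX bs=446 count=1        # zero only the 446-byte boot-code area, keeping the partition table
      # zeroing the full 512 bytes would also remove the 55 AA signature,
      # but on an MBR disk that wipes the partition table as well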
    • geaves wrote:

      ...this can be done from the gui, the wipe option has a short and long option...the short erasing any mbr and any metadata on the drive...
      Oh, I didn't know it could be done in the GUI, though from reading the manual it still seems to me that with either option of the wipe feature you will lose (easy) access to all of the disk's data: "The quick option basically erases the partition table and signatures", which is what the OP wanted to avoid.
    • danielwbn wrote:

      "The quick option basically erases the partition table and signatures", which is what the OP wanted to avoid
      To overcome the problem (and I agree with Dell on this), there is obviously something on those other USB drives which is preventing the USB boot device from loading, and unless they are wiped to remove metadata, signatures etc. it's not going to work.

      I tested a similar scenario on my own omv install. I had a spare drive which had previously been used in a zfs raid and bought an external usb case, but when it was connected to the server, the server would not boot from my usb flash drive which omv is installed on. I spent the best part of a day trying to resolve the problem; my solution was what I described above... and as you pointed out, when using dd 'be very careful with the commands'.

      Personally, unless someone is comfortable using the cli, the best option is to use the gui: back up each drive, wipe them, create the file systems all in the gui, and copy the data back... easiest option, because if dd goes wrong the data's gone.
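      (If it helps, the backup-and-restore part of that can be a simple rsync in each direction; the mount points below are placeholders, adjust them to wherever OMV mounts the drives under /srv:)

      Source Code

      rsync -aHAX --info=progress2 /srv/data-drive/ /srv/backup/data-drive/
      # ...wipe the drive and re-create the file system (GUI or CLI)...
      rsync -aHAX --info=progress2 /srv/backup/data-drive/ /srv/data-drive/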
      Yay for weekends! I finally got time to do some testing. All tests were performed with only one other drive connected apart from the OS drive.

      Control drive (4GB-EXT4-Flash drive)
      -1- OS boots
      -2- OS boots
      -3- OS boots

      1st data drive (4TB-EXT4-External drive)
      -1- OS boots
      -2- OS boots
      -3- OS boots

      2nd data drive (4TB-EXT4-External drive)
      -1- OS does not boot
      -2- OS does not boot
      -3- OS does not boot

      Swapped the drive ports to see if the order makes a difference; still the OS did not boot. Which makes me think it's something on the 2nd data drive that is messing up the system.

      I see a difference in the fdisk output, even though both are formatted as EXT4.

      Boot conflicting drive:

      Source Code

      Disk /dev/sdc: 3.7 TiB, 4000786153472 bytes, 7814035456 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier:
      Device     Start        End     Sectors  Size Type
      /dev/sdc1   2048 7814033407  7814031360  3.7T Microsoft basic data

      Non boot conflicting drive:

      Source Code

      Disk /dev/sdd: 3.7 TiB, 4000752599040 bytes, 976746240 sectors
      Units: sectors of 1 * 4096 = 4096 bytes
      Sector size (logical/physical): 4096 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier:
      Device     Boot Start       End   Sectors  Size Id Type
      /dev/sdd1        256  976746239 976745984  3.7T 83 Linux
    • So the problem drive is /dev/sdd1

      If that's the case then @danielwbn's suggestion of using dd would work, BUT you would have to be comfortable doing it, and you would have to boot omv and then connect the drive to run dd from the cli.

      Personally, I would back up each drive, boot to omv, connect each drive in turn, prepare them using wipe from storage>>>disks, then create the file system, then copy the data back. If you did that you could then test the boot process with each drive, then with both.
    • geaves wrote:

      'I think' this has a protective mbr....hence the problem.


      You are right.

      Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present
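      (That scan looks like gdisk output; assuming so, it can be reproduced, and the other drives checked, with something like:)

      Source Code

      gdisk -l /dev/sdX   # placeholder device; prints the MBR/BSD/APM/GPT scan plus the partition layout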

      But OMV doesn't let me wipe the disk:

      Source Code

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; blockdev --rereadpt '/dev/sdd' 2>&1' with exit code '1': blockdev: ioctl error on BLKRRPART: Device or resource busy
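      ('Device or resource busy' usually means a file system on the disk is still mounted; unmounting it first, from the GUI or the CLI, may let the partition-table re-read go through without a reboot. A sketch, with /dev/sdX as a placeholder:)

      Source Code

      lsblk /dev/sdX                 # show which partitions are mounted and where
      umount /dev/sdX1               # unmount any mounted partition(s)
      blockdev --rereadpt /dev/sdX   # retry the re-read that OMV was attempting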
    • I had to reboot it after unmounting.

      After wiping it and formatting it with the GUI, I have the same problem.

      Source Code

      Disk /dev/sde: 3.7 TiB, 4000786153472 bytes, 7814035456 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier:
      Device     Start        End     Sectors  Size Type
      /dev/sde1   2048 7814035422  7814033375  3.7T Linux filesystem
      Somehow I end up with gpt again.
      It turns out I am stuck. My older 4TB drive (the one that does not cause the boot conflict) came with 4K logical sectors (Advanced Format), which lets MBR address more than 2TB. The newer 4TB drive (the one that causes the boot conflict) uses 512-byte logical sectors, so MBR can't go beyond 2TB on it. My only other option is to see if I can make the OMV boot drive GPT as well. From what I read, that requires installing a minimal Debian server with UEFI enabled in the BIOS and then installing OMV.
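      (The 2TB ceiling follows from MBR storing sector addresses as 32-bit values, so the limit scales with the logical sector size, which matches the two fdisk outputs above:)

      Source Code

      echo $(( 2**32 * 512 ))    # 2199023255552 bytes  = 2 TiB with 512-byte logical sectors
      echo $(( 2**32 * 4096 ))   # 17592186044416 bytes = 16 TiB with 4096-byte (Advanced Format) sectors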

      All in all, I am not sure if this is going to work with my current setup. I might have to just shuck the drive out of the external enclosure and put it in the chassis. Even then, there is a chance that the shucked drive might need some tweaking to work with a normal power connector. ;(
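      (Before going the reinstall route, it may be worth confirming how the current system actually booted; on a typical Linux install the presence of the EFI firmware directory is a quick indicator:)

      Source Code

      [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"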
    • Hm, so the wipe worked but didn't resolve the problem...I have had a look at the manual for the Dell T20 and according to that you can select a 'one time boot device' from the list of connected drives.


      If that's correct, then with all drives connected, selecting your actual boot device 'should' overcome the problem, as it must store that information within the bios... worth a shot.