Posts by chclark

    Hi


    So I've moved my system over to this; the RAID is fine now that I've flashed the raid card to IT mode.


    However, I'm getting an issue where I find the server's Dell boot-up screen saying it's halted because the watchdog timer reset the system, and it wants me to press a key to continue. The first time I brushed it off, but it's happening every couple of days, and I don't know if this is the Dell hardware or the OS. Before I moved over to this hardware I was running OMV as a test on a mechanical drive and it never did this during the time I had it running; it sat for about two weeks, but it didn't really have any activity, it was just turned on and running while I messed about with Dockers I hadn't used before.


    The only change I've made to the hardware was switching the BIOS to UEFI, which is why I did a fresh install onto the SSD.


    Any suggestions? It's driving me mad at the moment. When it does this and I then tell it to continue, the server boots but the Dockers start faster than the filesystem mounts, so the config doesn't load for the Dockers and they all start blank until I reboot again, at which point it picks everything up.

    On OMV 5.x, the install docker button is now creating a docker systemd override file to wait to start the service until the local-fs target is ready. This won't help if you are using remotemount. Using sleep is just asking for inconsistent results and delays the start of your machine.

    How do I make sure your way is implemented? I only recently set up this drive as I've moved to the Dell T430 server, but I've found I'm having issues: the watchdog is triggering a restart, so when the system goes down and Dell detects this it halts the server start-up. When I make it continue, the Dockers start faster than the storage filesystem, I'm assuming, because if I load Sabz for example it has the new-user wizard showing. I have to either stop all the Dockers and tell the stacks to update and start, or reboot the server again, for them to start up with the normal config.
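    (For reference, a minimal sketch of the kind of systemd override described above; it just delays Docker until local filesystems are mounted. The drop-in filename here is an assumption, so check what OMV actually created with systemctl cat docker.)

        # /etc/systemd/system/docker.service.d/override.conf  (filename assumed)
        [Unit]
        After=local-fs.target
        Wants=local-fs.target

    After creating or editing a drop-in like this, run sudo systemctl daemon-reload and sudo systemctl restart docker for it to take effect.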

    I would agree. The only three options I can then think of are dd, DBAN, or another distro: you can set dd to write zeros to the drive, boot DBAN and run autonuke, or use a live Linux distro, have a look at the drive with GParted (though it should be blank) and run dd from there.
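    (A minimal sketch of the dd option, with /dev/sdX as a placeholder for the disk; double-check the device name with lsblk first, because this destroys everything on it.)

        # write zeros over the whole disk; /dev/sdX is a placeholder
        sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
        sync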

    I've just had no free time, but I'm going to try GParted and see if a few wipes in there do anything.

    The drives display but don't give any options. As far as I'm aware it just passes the drives through to the OS as if they were directly attached now, so the RAID side of the card has completely gone; I would have to flash it back to standard, if that's even possible.

    The reason I suggested accessing the menu is to see whether the drives were displayed and whether they could be wiped in there. I've only ever dealt with Adaptecs, but I did find this, which you probably already have

    Yeah, sure, I've checked and didn't notice anything, but I'll have another look later on.

    I saw your post in My Nas Build :)

    That shouldn't be the issue. However, these cards do have their own menu, accessible with something like Ctrl+S; perhaps going into that may shed some light.

    Yeah, I didn't think it would be an issue. I've been into the card's controller interface via the Dell system menu; not sure whether, now it's flashed, there's another way to get into the card menu. In the end it's not the end of the world, as the drives wouldn't be for production use; I just wanted to make use of them while I test the server.

    TBH I'm at a loss. Personally, I would now either try another machine (though I do have a USB docking station for this sort of stuff) or run autonuke on DBAN, which can take hours.


    So there's no residual RAID signature; the above comes back blank, as does wipefs, which would suggest the drives are clean, yet you cannot create a RAID nor a file system because it returns the drive as busy.
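    (As a sketch of how one might see what is actually holding a "busy" disk, assuming the same /dev/sdc and /dev/sdd names used elsewhere in the thread:)

        cat /proc/mdstat              # any md array still assembled from old signatures?
        lsblk /dev/sdc /dev/sdd       # partitions or holders sitting on the disks
        sudo dmsetup ls               # device-mapper (dmraid) mappings claiming them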


    What's the RAID card, and the make and model of the server?

    Yeah, I haven't got a dock for 3.5" disks. If I have to, I can jury-rig my gaming PC so I can plug in one drive at a time directly, or if you can suggest something, I can leave them in the server and use another OS. The OMV install on here is just to test the server; my production server is on other, older hardware at the moment.


    The RAID card is what came with the Dell T430 server I got, which was being disposed of. It's a PERC H330, but I've since flashed it away from the Dell firmware, which is what was on the card when the RAID on these two disks was made, running a Windows Server OS. I flashed it using this guide (FreeDOS method): https://forums.servethehome.co…ps-hba-it-firmware.25498/

    I wanted to use the card as a SATA card instead of a proprietary HW RAID card, but I flashed the card before destroying the RAID, which I guess is why I'm in this situation. I also doubt it has anything to do with it, but the drives were originally installed with their OS in BIOS mode; when I did this install of OMV I switched the BIOS to UEFI to keep with the times.

    Then there is something on those drives, probably from their use in hardware RAID. I did a search on DDF metadata; one site suggests installing dmraid (apt install mdadm dmraid) and then running dmraid -r.

    Tried this, and the final command just comes back with 'no raid disks'.

    Then there is something on those drives, probably from their use in hardware RAID. I did a search on DDF metadata; one site suggests installing dmraid (apt install mdadm dmraid) and then running dmraid -r.

    Will give it a try and see. I knew I should have destroyed the RAID before I flashed the raid card to IT mode.
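    (For anyone following along, a sketch of the dmraid commands suggested above, plus the erase option dmraid provides; the -E flag is destructive and isn't something confirmed as run in this thread.)

        sudo apt install mdadm dmraid
        sudo dmraid -r          # list any fake-RAID / DDF metadata found on the disks
        sudo dmraid -r -E       # erase that metadata if anything is reported (destructive)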

    Tried to make a file system on sdc and sdd but got the following error; not sure what's going on with these drives.


    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mkfs -V -t ext4 -b 4096 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 -O 64bit -L '3' '/dev/sdd1' 2>&1' with exit code '1': mke2fs 1.45.5 (07-Jan-2020)
    /dev/sdd1 is apparently in use by the system; will not make a filesystem here! in /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc:672
    Stack trace:
    #0 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): Engined\Rpc\OMVRpcServiceFileSystemMgmt->Engined\Rpc\{closure}('/tmp/bgstatusQ3...', '/tmp/bgoutputoH...')
    #1 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(688): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure), NULL, Object(Closure))
    #2 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->create(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('create', Array, Array)
    #5 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'create', Array, Array, 1)
    #6 {main}

    I'll try to make a file system on each disk now. And the RAID is deleted from the GUI; I did this after wiping them the first time around, so there is nothing in the RAID section now.


    And no, not OMV 4; this is a fresh install of OMV 5.

    That's the right thing to do

    ?( Did you wipe these drives before you removed the RAID listed in RAID Management?

    That gives a clue (device or resource busy). You need to run wipefs -n /dev/sdc, and the same on sdd; that will give you information on the drives' signatures.
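    (For reference, a sketch of that wipefs check; -n is a dry run that only reports signatures, while -a, not suggested above, would actually erase whatever it finds.)

        sudo wipefs -n /dev/sdc     # report filesystem/RAID signatures, change nothing
        sudo wipefs -n /dev/sdd
        # sudo wipefs -a /dev/sdc   # would erase all detected signatures (destructive)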

    Yes, I wiped them in the Disks section of OMV before deleting the RAID listing in the OMV GUI.


    I've run the command you supplied but there was no output, and I still can't create the RAID. They shouldn't be doing anything to be busy; this is a fresh install with no file systems or disks mounted.

    Hi


    I'm playing around with a new server I've got. Two of the disks that came with the server were in a hardware mirror using the Dell RAID card in the server, with a Windows install. I've since flashed the raid card into IT mode and added a 2.5" disk for the OS to play with. I was also gifted 2 more caddies which came with drives, though they are SAS, not SATA like the originals. I can see all 4 in OMV; I selected wipe on them all, saw that the RAID section listed the mirror, and deleted that RAID as well. I've gone to try to make a RAID with the 2 that were originally a mirror, but as a stripe this time, and I get the following errors. Can anyone advise?



    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-mkraid /dev/md0 -l stripe -n 2 -N Storage /dev/sdc /dev/sdd 2>&1' with exit code '1': mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: super1.x cannot open /dev/sdc: Device or resource busy
    mdadm: chunk size defaults to 512K
    mdadm: size set to 35183040835584K
    mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
    mdadm: Defaulting to version ddf metadata
    mdadm: failed to open /dev/sdc after earlier success - aborting in /usr/share/php/openmediavault/system/process.inc:182
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(300): OMV\System\Process->execute()
    #1 [internal function]: Engined\Rpc\RaidMgmt->create(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('create', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('RaidMgmt', 'create', Array, Array, 1)
    #5 {main}
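    (The "Defaulting to version ddf metadata" and "Device or resource busy" lines suggest leftover hardware-RAID (DDF) signatures are still being claimed on the member disks. A hedged sketch of the usual mdadm-side clean-up for that situation; this is not something confirmed in the thread, and it is destructive to any remaining RAID metadata:)

        sudo mdadm --examine /dev/sdc /dev/sdd           # show any DDF/md superblocks still present
        sudo mdadm --stop /dev/md/ddf0                   # stop the container if it was auto-assembled
        sudo mdadm --zero-superblock /dev/sdc /dev/sdd   # wipe the old RAID superblocks (destructive)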

    Most of the time, you can just move the drive and then run omv-firstaid to fix the networking.
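    (For example, after dropping the OS drive into the new machine; the interface names usually change between boxes, hence the networking fix:)

        sudo omv-firstaid
        # then pick "Configure network interface" from the menu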

    Will have a play about with it before I do anything; like I say, I've got to get the drive caddies first. Plus, the way I have the system set up with Portainer stacks, I can set the system up again quickly.

    Went for it, to have a play with at least. I've run through the guide above and flashed the card, so I think it should work as expected.


    Need to find some of the Dell drive caddies though; I didn't realise it only came with 2 of the listed slots filled, the rest are blanking plates.


    How would the move over to the new hardware go? Would the OS drive need a new install, or would it work plugged straight into the Dell from the HP?

    Well, the N40L is using a 15W TDP laptop CPU. The E5-2609 is an 80W TDP server CPU. Just because the TDP is a lot higher doesn't mean it will idle a lot higher, but it will be higher. Hard to say what it would be, but I would guess around 70 watts.


    Depends on what you do with your server. The power consumption is higher, but it is a MUCH more capable server. You would have to weigh the pros and cons against power cost. Personally, I would go for the T430.

    All I do currently is use it as a storage NAS for my films and TV, plus odd other bits using automation tools in Docker and a UniFi controller.


    I was only thinking about it because I'm getting worried the N40L is getting on and has no expansion capability any more. I keep getting errors requiring a reboot (a couple of them, actually) to come back online. I thought it was the drive, so I changed the OS drive for a fairly new SSD and it was better, but it fell over again the other day; not sure if it's the eSATA to SATA cable failing.

    https://forums.servethehome.co…ps-hba-it-firmware.25498/


    Definitely faster and more expandable, but it will most likely use more power. Good server otherwise.

    I don't know much about power consumption. Any idea how much more it would use? I don't want to start racking up massive bills, since I run my OMV server 24/7.


    If you had the choice, what would you do? I just feel my N40L is getting on a bit now; I think it was 2010 or 2012 when they came out.

    Hi


    I currently have the option to get a 2014 Dell T430 server which is otherwise off to a recycling company. Its spec is 16GB of 2133 DDR4 RAM, an Intel Xeon E5-2609 and a PERC H330 Integrated RAID Controller; it ran out of warranty in 2018.


    Does anyone know if the PERC H330 can be flashed so it acts as a SATA card instead, as I use software RAID?


    If I go for it, this would be a replacement for an HP N40L. Is it worth it?

    I had the issue again during a fresh install going from OMV 4 to 5. I did the usual: removed the RAID disks leaving only the OS disk attached and reinstalled, and as soon as the RAID went back in it errored. But I remembered seeing in one of the posts further back that if the RAID is left plugged in it updates the needful at the end of the install, so I tried that, being careful what I selected during the install, and it all worked fine: it booted right up after install with the RAID all present and working.

    I thought I'd chime in as I posted in the earlier days of the thread.


    I decided to update my system to OMV 5 but did a fresh install. I tried Clonezilla first so I had a backup, but something was wrong with the SSD: it couldn't do it, and it wouldn't install a fresh version either, so I had to swap it out. However, I had backed up my /config folders from where they were stored, so I moved them to their new homes in an appdata share on the RAID, and took the opportunity to learn compose files. I just watched a single video by TechnoDad, who did a Portainer UniFi controller compose set-up (I needed to do that container anyway); knowing this, I then managed to do Portainer stacks for all my other containers as well, and it didn't take long at all.