Posts by marvelous

    I'm new to Docker and want to get lighttpd working using https://hub.docker.com/r/sebp/lighttpd. My compose file is this:


    But that fails to build:


    Quote

    500 - Internal Server Error

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; docker build --progress plain --tag 'uploader' '/srv/dev-disk-by-uuid-12388cef-cfe4-4169-97a0-fdfd4f3ae460/docker/uploader/' 2>&1':
    #0 building with "default" instance using docker driver
    #1 [internal] load .dockerignore
    #1 transferring context:
    #1 transferring context: 2B 0.0s done
    #1 DONE 0.0s
    #2 [internal] load build definition from Dockerfile
    #2 transferring dockerfile: 408B 0.0s done
    #2 DONE 0.1s
    Dockerfile:7
    --------------------
     5 | #
     6 |
     7 | >>> lighttpd:
     8 |     image: sebp/lighttpd
     9 |     volumes:
    --------------------
    ERROR: failed to solve: dockerfile parse error on line 7: unknown instruction: lighttpd:
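    The error itself points at the cause: the compose YAML was saved where OMV expected a Dockerfile, so docker build tried to parse lighttpd: as a Dockerfile instruction. A compose file isn't built that way; it's brought up with docker compose up -d. A minimal sketch of what the file might look like (the host path, and the container document root, which I believe is /var/www/localhost/htdocs for sebp/lighttpd, are assumptions; adjust to your disk):


    Code
    # docker-compose.yml - a minimal sketch, not the exact file from the post
    services:
      lighttpd:
        image: sebp/lighttpd
        volumes:
          # host path is an assumption; the container side is where I
          # understand the sebp/lighttpd image serves files from
          - /srv/dev-disk-by-uuid-xxxxxxxx/www:/var/www/localhost/htdocs
        ports:
          - "8999:80"


    Then run docker compose up -d from that directory rather than a build.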


    If I start it on the command line it runs, but I get a 404 when going to port 8999.


    Quote

    docker run --rm -t -v /var/www/html:/srv/dev-disk-by-uuid-xxxxxxxx/www -p 8999:80 sebp/lighttpd


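    The -v flag takes host-path:container-path, so the paths above look reversed: the host's /var/www/html gets mounted at a path inside the container that lighttpd never serves, hence the 404. A guess at the corrected command (assuming the site files live under /srv/dev-disk-by-uuid-xxxxxxxx/www on the host, and that the image serves /var/www/localhost/htdocs):


    Code
    # -v is host:container - mount the data disk over the image's docroot
    docker run --rm -t \
      -v /srv/dev-disk-by-uuid-xxxxxxxx/www:/var/www/localhost/htdocs \
      -p 8999:80 sebp/lighttpd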

    Done a fresh install on my N54L using a brand-new 32GB stick in the internal USB port.


    Code
    ii openmediavault 6.0.24-1 all openmediavault - The open network attached storage solution


    However, after the installation completed and the machine rebooted, the filesystem mounts as read-only. Is it a dodgy USB drive or something else?



    Code
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # systemd generates mount units based on this file, see systemd.mount(5).
    # Please run 'systemctl daemon-reload' after making changes here.
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdb1 during installation
    UUID=a6e85406-6507-4691-87db-ca5c12bfe60c / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdb5 during installation
    UUID=478e3acd-d4b4-49c4-b758-7e38dffea016 none swap sw 0 0


    Code
    root@openmediavault:~# mount -o remount, rw /
    mount: /: cannot remount rw read-write, is write-protected.
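    Two things worth noting (my reading, not a definitive diagnosis): the command has a stray space, so rw was parsed as a separate argument rather than a mount option, and with errors=remount-ro in the fstab above, ext4 drops to read-only on its own when it detects corruption, which fits the fsck output further down. The syntax-corrected command and a quick check:


    Code
    # no space after the comma, otherwise 'rw' is treated as a separate argument
    mount -o remount,rw /

    # see whether the kernel forced the read-only remount after an ext4 error
    dmesg | grep -iE 'ext4|remount'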


    Code
    root@openmediavault:~# fdisk -l
    Disk /dev/sda: 28.91 GiB, 31037849600 bytes, 60620800 sectors
    Disk model: Flash Disk
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x5b1c8ff0

    Device     Boot    Start      End  Sectors  Size Id Type
    /dev/sda1  *        2048 58619903 58617856   28G 83 Linux
    /dev/sda2       58621950 60618751  1996802  975M  5 Extended
    /dev/sda5       58621952 60618751  1996800  975M 82 Linux swap / Solaris


    Code
    root@openmediavault:~# fsck.ext4 -f /dev/sda1
    e2fsck 1.46.6 (1-Feb-2023)
    /dev/sda1: recovering journal
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Entry 'passwd' in /etc (1048577) has deleted/unused inode 1049077.  Clear<y>? yes
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Free blocks count wrong for group #129 (31519, counted=31520).
    Fix<y>? yes
    Free blocks count wrong (6656927, counted=6654874).
    Fix<y>? yes
    Inode bitmap differences: -1049077
    Fix<y>? yes
    Free inodes count wrong for group #128 (6635, counted=6636).
    Fix<y>? yes
    Free inodes count wrong (1794634, counted=1794626).
    Fix<y>? yes
    Block bitmap differences: Group 129 block bitmap does not match checksum.
    FIXED.
    /dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/sda1: ***** REBOOT SYSTEM *****
    /dev/sda1: 40382/1835008 files (0.1% non-contiguous), 672358/7327232 blocks
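    Worth flagging: the first fix cleared the /etc/passwd entry, so logins may break until it is restored; Debian keeps a backup copy at /etc/passwd-. And on the dodgy-stick question, a non-destructive surface read is a reasonable first test. A sketch, assuming the stick is still /dev/sda and the filesystem is unmounted (e.g. from a live USB):


    Code
    # restore passwd from the backup copy Debian maintains next to it
    cp -a /etc/passwd- /etc/passwd

    # read-only scan of the whole stick for unreadable blocks
    badblocks -sv /dev/sda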

    That's a nondescript way of putting it ^^

    Or ZFS, btrfs. TBH I don't have a problem with users having a RAID setup; the frustration for me is when they don't have a backup. They believe it's not necessary because, if a drive fails, the data on the remaining drive(s) is still good.

    Yeah, people need to understand RAID is not a backup; I have an onsite and an offsite one.


    As an aside, I got Plex running but need to sort the mounts/config.xml; you said you might have an idea/punt on how to fix this.

    Better learn Docker PDQ. The Plex plugin has been gone almost 3 years. You should keep your system more up to date.


    Yeah, I know, and a new build is on the way. Is there an archive of the Plex plugin somewhere that I can download? I know Docker but don't really have a lot of time.
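    If it saves a search later, a minimal compose sketch for Plex using the linuxserver image (the paths, PUID/PGID and timezone are assumptions to adapt, not taken from this thread):


    Code
    services:
      plex:
        image: lscr.io/linuxserver/plex
        network_mode: host            # simplest for discovery/DLNA
        environment:
          - PUID=1000                 # assumed UID of the media owner
          - PGID=100                  # assumed GID
          - TZ=Europe/London          # assumed timezone
          - VERSION=docker
        volumes:
          - /srv/dev-disk-by-uuid-xxxxxxxx/plex-config:/config
          - /srv/dev-disk-by-uuid-xxxxxxxx/media:/media
        restart: unless-stopped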

    I had the Plex plugin installed on my 4.1.36-1 OMV machine and it ran perfectly, but I had to uninstall it and now I can't reinstall. I tried the one from here https://omv-extras.org/debian/…ediaserver_1.0.15_all.deb but that failed with



    I had previously tried this also:



    I need Plex working for a family get-together tomorrow, so any help re a link to a working plugin .deb, or how to fix the current start issue, would be much appreciated.

    By that statement alone you obviously have knowledge, but in relation to OMV, sometimes that can be a bad thing if one edits config files created by OMV.


    Yeah, been there, got the t-shirt it seems. It obviously went awry a while back and I "fixed" it; can't wait to get a fresh install up and running. Many thanks for the help with the RAID. If I ever do it again I might skip RAID1 and just rsync it ^^
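    For the record, the rsync-instead-of-RAID1 idea is roughly this (paths are hypothetical; -aH preserves permissions and hard links, --delete makes the copy a true mirror, so aim it carefully):


    Code
    # nightly mirror of the data disk to a second disk, e.g. from cron
    rsync -aH --delete /srv/dev-disk-by-uuid-data/ /srv/dev-disk-by-uuid-backup/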

    I have an N54L as my OMV server, complete with the hacked BIOS :) and 6 drives.

    ? Explain. I am about to go out but will be back in about an hour.


    According to your image the array is mounted, so the data is still on there. Are the shares still accessible, or did you remove them?

    They are great little workhorses, and I've never felt the need to upgrade to the newer-gen MicroServers (I have enough servers as it is!)


    It was mounted but the shares weren't working, so I just had to do:


    Code
    umount /srv/dev-md0/
    mount /dev/md0 /srv/dev-disk-by-id-md-name-box-0


    And everything is working again, bar SSH login for a user whose home is on /srv/dev-disk-by-id-md-name-box-0.
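    A guess at the SSH problem: if the account still points at the old /srv/dev-md0 path, repointing its home directory may be all that's needed (the username is hypothetical; check the current value first):


    Code
    # see where the account's home currently points
    getent passwd fred

    # repoint it at the remounted array (username assumed for illustration)
    usermod -d /srv/dev-disk-by-id-md-name-box-0/fred fred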

    Ah, that is what I was afraid of. I'm planning a new install on my N54L at some point, but would like to get this one working, ready to transfer over. I have a backup of the home directories, but it will take ages to transfer 4TB, so if you can help get this into some sort of usable state I'm willing to give it a go.



    sharedfolder example



    Suffice to say, the mounts don't work in their current form. Not sure what broke, or when, as I only noticed it after a reboot and it didn't come back up properly.

    Behold


    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb[2] sdc[1]
    7813895488 blocks super 1.2 [2/2] [UU]
    bitmap: 0/59 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>
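    For the record, [2/2] [UU] means both mirror members are present and in sync. If you want more than /proc/mdstat shows, mdadm has a detail view:


    Code
    mdadm --detail /dev/md0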

    No way.... Essex. Sudbury is about 14 miles; the wife likes to go to Lavenham. Ipswich is just over 30 miles.


    Well, at least it's nearly done. When you do a copy/paste, use a code box; this symbol on the menu, </>, formats the information better.


    No way! Not far at all. Yep, Lavenham is really nice.


    Yes, I will do. I started doing that but then thought you preferred the other way :D


    time for a last cuppa..........


    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb[2] sdc[1]
    7813895488 blocks super 1.2 [2/1] [_U]
    [===================>.]  recovery = 99.1% (7750873856/7813895488) finish=11.3min speed=92793K/sec
    bitmap: 0/59 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>
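    Side note: rather than pasting /proc/mdstat each time, watch will re-run it on a timer:


    Code
    watch -n 5 cat /proc/mdstat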

    Same, where?


    Sunny/not so sunny Suffolk, you?


    the end is nigh.....


    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb[2] sdc[1]
    7813895488 blocks super 1.2 [2/1] [_U]
    [===================>.] recovery = 95.5% (7466155648/7813895488) finish=57.8min speed=100091K/sec
    bitmap: 0/59 pages [0KB], 65536KB chunk

    unused devices: <none>

    soon.........................


    Code
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb[2] sdc[1]
    7813895488 blocks super 1.2 [2/1] [_U]
    [================>....] recovery = 83.5% (6528073408/7813895488) finish=179.6min speed=119314K/sec
    bitmap: 1/59 pages [4KB], 65536KB chunk

    unused devices: <none>


    Have another coffee or 10 ;)