Incus on OMV (an idea revisited)

  • ===============================================================================================================================

This thread is about using INCUS on OMV. It's not a detailed [How-To] but gives an overview.


    1. Added 17/03/2025 - Using incus to manage containers & VMs, how OMV storage and "bridged" networks can be used & the lxconsole UI.

    2. Added 18/03/2025 - Using a "macvlan" network in incus. An example of the official "Incus UI"

    3. Added 18/03/2025 - Using the incus command line & managing incus from your Linux desktop.

    4. Added 18/03/2025 - Using vlans in incus. A basic example.

5. Added 20/03/2025 - Using OMV "shared folders" on EXT4 and BTRFS filesystems for INCUS storage pools.

    6. Added 20/03/2025 - Using LVM in OMV for an INCUS storage pool.

    7. Added 21/03/2025 - Using additional storage for VMs and giving containers access to OMV files.

    8. Added 22/03/2025 - Using ISOs to create VMs in incus.

9. Added 22/03/2025 - Using incus to create instance snapshots, duplicates and images from instances.

    10. Added 26/03/2025 - Using incus profile to configure instances.

    11. Added 31/03/2025 - Using incus exec, etc. in first steps to automation.

12. Added 01/04/2025 - Using incus with cloud-init for automated configuration with examples.

    13. Added 03/04/2025 - Using incus with cloud-init: a last example and summary.

    14. Added 07/04/2025 - Using an incus container as a "docker" host.

    15. Added 09/04/2025 - Using docker images directly in incus - part 1.

16. Added 09/04/2025 - Using docker images directly in incus - part 2.

    17. Added 09/04/2025 - Using docker images directly in incus - part 3.

    18. Added 11/04/2025 - Using incus-compose: Installation.

    19. Added 11/04/2025 - Using incus-compose: Managing OCI containers with examples.

    20. Added 11/04/2025 - Using incus-compose: Managing OCI containers with further examples and incus-compose commands.

    21. Added 16/04/2025 - Using incus to backup instance or volume, part 1 of 3 : taking snapshots.

    22. Added 16/04/2025 - Using incus to backup instance or volume, part 2 of 3 : exports and imports.

    23. Added 17/04/2025 - Using incus to backup instance or volume, part 3 of 3 : copy or migrate between servers.

    24. Added 22/04/2025 - Using incus: A Summary


    ===============================================================================================================================


A few weeks ago trythat asked about using INCUS on OMV (Incus and LXConsole). Apart from the fact that ryecoaaron has all the bases covered in their "KVM plugin", I was sceptical it could work on OMV at all. Well, I was wrong.


If you've never heard of incus, or are uncertain what the project is about, it's described as: "A next-generation system container, application container, and virtual machine manager." There's more here: https://linuxcontainers.org/incus/


If anyone wants to experiment, there are guides on the net you can use/adapt to get started by installing both incus and the latest version of the lxconsole web UI (this is not the same as the incus-ui-canonical package) on OMV. I used the zabbly stable repo in my try-out.


    The first caveat is about how incus uses storage, to quote the docs: "The two best options for use with Incus are ZFS and Btrfs. They have similar functionalities, but ZFS is more reliable.", but you can read more here: https://linuxcontainers.org/incus/docs/main/storage/ . You can use other storage types like "dir" or "LVM", but these are less optimized for incus.


The second caveat is the use of networks. Similar to how libvirt/KVM/qemu works, incus creates a default NAT'ed bridge "incusbr0" by which both containers and VMs communicate with the outside world but not directly with the host. But just as you can with the KVM plugin, creating a bridge (or a macvlan) on an OMV NIC is straightforward, and this can be added to an incus profile.
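As a minimal sketch of that last step, assuming a bridge named "br0" has already been created on an OMV NIC (the profile and device names here are illustrative):

```shell
# Create a profile that attaches an instance's eth0 to the host bridge br0
incus profile create bridged
incus profile device add bridged eth0 nic nictype=bridged parent=br0

# Apply it alongside the default profile to an instance:
# incus profile assign <instance> default,bridged
```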


Using the lxconsole web UI removes the chore of using incus at the command line. For graphical VMs, the vga console makes use of spice.


I'm not suggesting this is a replacement for the well established "KVM plugin", and the curious are probably best advised to set it up in a VM at first. But trying it out will give you an idea of the project's scope.


    In case you're wondering, I've run this on my now ancient desktop with limited resources which is set up for nested virtualisation.



    OMV screenshots:



    omv-nvme1.jpeg


    omv-nvme10.jpeg



    omv-nvme2.jpeg



Incus creates both filesystems for containers and zvols for VMs on the pool (on BTRFS, incus would create subvols for both)


    omv-nvme11.jpeg




    lxconsole screenshots:


    omv-nvme3.jpeg



    omv-nvme4.jpeg


    omv-nvme5.jpeg


    omv-nvme6.jpeg


    omv-nvme7.jpeg


    omv-nvme8.jpeg


    More below

  • Adding a "macvlan" to Incus:


    1. Using an OMV host NIC


    omv-nvme14.jpeg


    Code
    root@ovm-nvme:~# ip addr show enp4s0
    3: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 68:05:ca:16:f3:fe brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.104/24 metric 100 brd 192.168.1.255 scope global dynamic enp4s0
           valid_lft 4365sec preferred_lft 4365sec
    root@ovm-nvme:~#

    2. Creating a "macvlan" profile on incus:


    Code
    root@ovm-nvme:~# incus profile create macvlan
    root@ovm-nvme:~# incus profile device add macvlan eth0 nic nictype=macvlan parent=enp4s0

3. To apply the "macvlan" profile to an incus instance, either use incus profile assign <instance name> default,macvlan or "attach profile" via the lxconsole web UI. In this example the eth0 nic of the instance BORON-VM is changed from the default incus network to the 192.168.1.0/24 subnet:


    omv-nvme13.jpeg


    omv-nvme15.jpeg


4. The "macvlan" as seen on the OMV host:


    Code
    root@ovm-nvme:~# ip a | grep enp4s0
    3: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        inet 192.168.1.104/24 metric 100 brd 192.168.1.255 scope global dynamic enp4s0
    14: mac49d768f7@enp4s0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
    root@ovm-nvme:~# 
  • Thanks for sharing.


    Looks interesting and I do like to tinker when not smashed with my real work!


    My question is what is/are the use case(s) over the VMs and occasional lx containers I use?


    I’m keen to try this but everything I really need servicewise I have solved using docker. I just play with VMs for fun and testing stuff.

    OMV 7 (latest) on N100 Minipc (16GB) and RPI5 (8GB). OS on SD card. System ext4 on SSD. Data BTRFS on HDDs

  • Incus configuration and instance management at the CLI ranges from the simple to more lengthy commands.


    Listing all instances:
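In place of the screenshot, the listing itself is a one-liner (output format flags shown are standard incus options):

```shell
# List all instances (containers and VMs) on the server
incus list

# The same, in machine-readable form
incus list --format csv
```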



The command that launches a new instance using a debian vm image, setting the rootfs size and profiles:


    root@ovm-nvme:~# incus launch images:debian/12 deb12-vm --vm --device root,size=10GiB --profile=default --profile=macvlan



    The BORON-VM instance shows no IP as it was created from an uploaded iso in which the incus-agent.service is yet to be installed.


If incus was set to be available over the network during its initialization, then it can be controlled remotely. After installing incus on a Linux desktop to act as a remote client, a "trusted" connection can be established between server and client.
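A sketch of how that trust is established, using the names that appear in this thread's trust list (deb12, ovm-nvme) and the server IP from the earlier output; adapt to your own setup:

```shell
# On the OMV server: generate a one-time trust token for the new client
incus config trust add deb12
# (this prints a token string)

# On the Linux desktop client: register the server, then paste the token when prompted
incus remote add ovm-nvme 192.168.1.104
```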


    OMV server:


    Code
    chris@ovm-nvme:/$ incus config trust list
    +---------------+--------+-------------+--------------+----------------------+
    |     NAME      |  TYPE  | DESCRIPTION | FINGERPRINT  |     EXPIRY DATE      |
    +---------------+--------+-------------+--------------+----------------------+
    | deb12         | client |             | db0b7d9c3974 | 2035/03/13 17:15 GMT |
    +---------------+--------+-------------+--------------+----------------------+
    | lxconsole.crt | client |             | b1ab2cca15d7 | 2035/03/15 15:52 GMT |
    +---------------+--------+-------------+--------------+----------------------+
    chris@ovm-nvme:/$



Linux Desktop:



    Switch context on the linux desktop for incus commands to apply to a given server:
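For example, assuming the remote was added as "ovm-nvme" as above:

```shell
# Make ovm-nvme the default remote, so plain incus commands target it
incus remote switch ovm-nvme
incus list

# Or address a remote explicitly without switching context
incus list ovm-nvme:
```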


  • Using vlans in incus. A basic example.


    1. Initial state:



    2. OMV NIC enp8s0 carries vlan20 provided by router.


    3. Create a profile and add the vlan to it.


    Code
    root@ovm-nvme:~# incus profile create vlan20
    Profile vlan20 created
    root@ovm-nvme:~# incus profile device add vlan20 eth0 nic name=eth0 nictype=macvlan vlan=20 parent=enp8s0
    Device eth0 added to vlan20
root@ovm-nvme:~#


    4. Assign vlan20 profile to container and start it:
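A sketch of this step, using the container name that appears later in this post:

```shell
# Assign the vlan20 profile on top of default, then start the container
incus profile assign deb12-cont-1 default,vlan20
incus start deb12-cont-1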



    5. Network inside deb12-cont-1 container



    6. Container as seen by router:


    incus_vlan.jpeg

OMV "shared folders" on EXT4 and BTRFS filesystems can be used for INCUS storage pools.


    1. OMV "shared folders":


    omv-storage1.jpeg


    omv-storage2.jpeg


    2. Creating Incus storage pools:


A new incus storage pool is created with a given storage type and the absolute path of a "shared folder" as the source.


    In the case of BTRFS storage, incus will use subvolumes for both containers and VMs.
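As a hedged sketch of the pool-creation commands (the shared-folder paths and pool names here are illustrative - substitute the absolute path of your own OMV shared folder):

```shell
# Pool on an EXT4-backed shared folder, using the "dir" driver
incus storage create ext4-pool dir source=/srv/dev-disk-by-uuid-XXXX/incus

# Pool on a BTRFS-backed shared folder, using the "btrfs" driver
incus storage create btrfs-pool btrfs source=/srv/dev-disk-by-uuid-YYYY/incus
```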


    3. Creating instances on a given incus storage pool:
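A sketch of creating an instance on a specific pool (instance and pool names illustrative):

```shell
# --storage / -s selects the target pool for the instance's root disk
incus launch images:debian/12 deb12-cont-2 --storage btrfs-pool
```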



    Note: The btrfs subvols created by incus appear on OMV as below and show that a virtual machine's rootfs is stored as an img file:



    4. A stopped VM instance can be moved from one incus storage pool to another:
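A hedged sketch of the move (names illustrative; the instance must be stopped first):

```shell
incus stop deb12-vm
incus move deb12-vm --storage btrfs-pool
```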


  • To use LVM in OMV for an incus storage pool:


    1. First install the "LVM2 plugin", then create a physical volume using available block dev(s) followed by creating a volume group.


    lvm1.jpeg


    2. Create the incus storage pool based on the OMV volume group, in this example vg_inc_pool.
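For example, using the volume group name from this post:

```shell
# Create an incus pool backed by the existing OMV volume group
incus storage create lvm-pool lvm source=vg_inc_pool
```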


    3. Container and VMs can then be created on the LVM storage pool.



4. Incus creates all the necessary "logical volumes".


    lvm2.jpeg

  • Adding storage to VMs and giving containers access to OMV files.


    1. If your VM's disk is full, you can resize it.


    Before resize:



    After resize:
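A sketch of the resize step (instance name illustrative). Since the root device is inherited from the "default" profile, it has to be overridden rather than set:

```shell
# Grow the VM's root disk; sizes can only be increased, not shrunk
incus config device override deb12-vm-3 root size=20GiB
```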


    2. To add a virtual disk to a VM, first create the disk on a storage pool, and then attach it to the VM.


    For example, create a 10GB virtual disk block device in the incus lvm-pool:



    Attach the custom block device "deb12-vm-3-disk1" to vm "deb12-vm-3" :
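A sketch of the create-and-attach commands, using the names given above:

```shell
# Create the 10GiB custom block volume in the lvm-pool
incus storage volume create lvm-pool deb12-vm-3-disk1 --type=block size=10GiB

# Attach it to the VM as a disk device
incus storage volume attach lvm-pool deb12-vm-3-disk1 deb12-vm-3
```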



    3. Incus containers can access OMV files via a "sharedfolder" disk device added to the container.


The syntax is "incus config device add <mycontainer> <mysharedfolder> disk source=<host path> path=<container mount path> shift=true"


Idmapping between host and a container needs to be taken into account for permissions, but this is automagic when the option "shift=true" is used.


This example shows that the container "deb-b-cont" uses two "sharedfolder" disk devices, but only one was created with "shift=true".


The permissions in the container and on the OMV host are:




    To correct the perms for the container "oplaylist" folder, remove and then re-add it to the container config using the "shift=true" option:
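A sketch of the remove-and-re-add step (the shared folder path is illustrative - use your own OMV shared folder's absolute path):

```shell
incus stop deb-b-cont
incus config device remove deb-b-cont oplaylist
incus config device add deb-b-cont oplaylist disk \
    source=/srv/dev-disk-by-uuid-XXXX/oplaylist path=/mnt/oplaylist shift=true
```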


    Check the perms in the container:

    Code
    root@ovm-nvme:~# incus start deb-b-cont
    root@ovm-nvme:~# incus shell deb-b-cont
    root@deb-b-cont:~# ls -ld /mnt/oplaylist
    drwxrwsr-x+ 2 root users 2 Mar 16 15:54 /mnt/oplaylist
    root@deb-b-cont:~# exit
    logout
    root@ovm-nvme:~#
  • You're not limited to just using the images at https://images.linuxcontainers.org/ to create a VM instance as ISOs may also be used. But not everything installable from an ISO is compatible with incus as it expects UEFI and Virtio device support by default. Things that work with the KVM plugin should also work in incus. Creating Windows10/11 VMs may need extra steps ( see here: https://discuss.linuxcontainer…on-incus-on-linux/18884/2 ).


Using an incus ui streamlines uploading a "custom iso", which is created as a volume in an incus pool and can then be used to create a VM instance. Graphical installers and the installed VMs themselves are accessed via a "vga" console.


    The steps to achieve this via the incus command line are outlined here: https://linuxcontainers.org/in…vm-that-boots-from-an-iso


It's important to note that OMV has no graphical environment of its own, so to use a graphical installer and/or VM via the command line, a remote connection must be used when not using an incus ui. For example:


    1. Accessing a graphical VM for a Linux Desktop via remote connection:


    custom_iso1.jpeg


    2. Using the OMV installer:


    custom_iso2.jpeg


    3. Accessing a fresh install of OMV:


    custom_iso3.jpeg

Nicely done, I didn't get much further than installing LXConsole in a docker container and spinning up a VM to try it, got side-tracked, so easy at my age now :)


    I'm going to make some time for this and give it a go, to see which I prefer either KVM or Incus, even though I tend to use docker.


If anyone is interested in some deep dives into Incus via YouTube, please have a look at the scottibyte YouTube channel. He does some great videos, from beginners upwards.

  • Creating instance snapshots, duplicates and images from instances.


1. Instance snapshots:


You can create a snapshot of an incus instance, make changes and then revert to the previous state by restoring from the snapshot. It's an ideal way to test various configurations within a container or VM. The support and limitations of snapshots depend on the type of storage pool used, as described here - https://linuxcontainers.org/in…#storage-drivers-features - BTRFS and ZFS offer the best support.


    You can create duplicates of an instance, or instance snapshot, at the click of a button in a UI, or via the command line as described here:

incus copy - copy instances within or in between servers (see the documentation at linuxcontainers.org)


    Code
    root@ovm-nvme:~# incus snapshot create omv-vm omv-vm-base
    root@ovm-nvme:~# incus snapshot restore omv-vm omv-vm-base 


    2. Instance duplicates:


    incus copy [<remote>:]<source>[/<snapshot>] [[<remote>:]<destination>] [flags]


    For example, duplicate an instance by copying an instance snapshot to another incus pool:

    Code
    root@ovm-nvme:~# incus copy omv-vm/omv-vm-base omv-vm-copy -s dir-pool 

    3. Create an image from an instance:


    The ability to create your own "image" from an instance based on a modified standard image or custom iso install is a particularly powerful feature of incus. You use images to create instances which can become the basis for new images from which other instances can be created.


For example, say you want to test out various OMV configurations. You don't want to install from scratch each time, and you always want to start with OMV configured with DHCP for the single nic, a single user belonging to the _ssh, sudo and users groups, a dashboard with a given set of widgets, and omv-extras installed but no plugins, fully updated.


To avoid installing from scratch for each test, you could install and configure a working base once, using snapshots in case you need to revert any errors, then duplicate the final version as the base for each OMV you wish to test. But there is another way.


    Once you have configured an instance as you want it, the "incus publish" command can convert this into a new image. You can then base any new OMV instances on this new image.
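A sketch of this using the names from the example below (debian-vm, omv-base-image, omv-base-vm1):

```shell
# Publish the stopped, configured instance as a reusable image
incus stop debian-vm
incus publish debian-vm --alias omv-base-image

# Launch new instances from the new base image
incus launch omv-base-image omv-base-vm1 --vm
```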


    In the case of OMV, it's best to start by creating a VM instance from the standard incus debian12 vm image. OMV can then be installed in the debian 12 VM instance using the script at https://github.com/OpenMediaVa…-Developers/installScript .


    E.g: Here the instance debian-vm is a configured OMV install used to create the image named "omv-base-image". The "omv-base-vm1" & "omv-base-vm2" instances were created from this base image.

    Starting with the standard debian12 VM image ensures that instances created from the new base image get their own ip and a hostname that matches the instance name.


    custom_image1.jpeg


    custom_image3.jpeg


    custom_image2.jpeg


    custom_image4.jpeg

Nicely done, I didn't get much further than installing LXConsole in a docker container and spinning up a VM to try it, got side-tracked, so easy at my age now :)


    I'm going to make some time for this and give it a go, to see which I prefer either KVM or Incus, even though I tend to use docker.


If anyone is interested in some deep dives into Incus via YouTube, please have a look at the scottibyte YouTube channel. He does some great videos, from beginners upwards.

    I have seen some of those vids and the linked blog entries - very useful. Yet to really explore the power of "profiles" and other features.


    I adapted this to get lxconsole running: https://wiki.opensourceisaweso…incus-and-setup-lxconsole

  • I have seen some of those vids and the linked blog entries - very useful. Yet to really explore the power of "profiles" and other features.


    I adapted this to get lxconsole running: https://wiki.opensourceisaweso…incus-and-setup-lxconsole

    I do believe Brian from the Awesome Open Source channel is friendly with Scott, and gets a lot of info from Scott :)

But I agree, you can never have too much information, especially good information.


    Have you any preferences yet, between KVM and Incus?

  • trythat I've used libvirt/kvm/qemu with virsh & virt-manager for a long time, together with proxmox to a lesser extent. But it's early days in using incus and I've not made a side by side comparison with KVM in OMV as yet. I hope to make more incus posts over the next few days.

Every incus instance comes with a device on the "incusbr0" network and a rootfs; this configuration is set by the "default" profile and applied to all instances. For VMs, the resources are set to 1 cpu, 1GB of memory and a 10GB root disk by default. These configuration values can be set and overridden, along with adding additional devices, on individual instances or by using profiles that can be applied to any instance. For example:


    1. A Linux "Desktop Profile"


    You can generate a Linux Desktop VM with a single command or at the click of a button in Incus UI by using an Ubuntu or OpenSUSE "desktop image".


    Code
    incus create images:ubuntu/noble/desktop Ubuntu24-04 --vm

But for a full desktop you might want 2 cpus, 4GB of memory and a 20GB disk. Use these commands to set those values on a specific instance:


    Code
    root@ovm-nvme:~# incus config set Ubuntu24-04 limits.cpu=2
    root@ovm-nvme:~# incus config set Ubuntu24-04 limits.memory=4GB
    root@ovm-nvme:~# incus config device set Ubuntu24-04 root size=20GB
    Error: Device from profile(s) cannot be modified for individual instance. Override device or modify profile instead
root@ovm-nvme:~# incus config device override Ubuntu24-04 root size=20GB
    Device root overridden for Ubuntu24-04
    root@ovm-nvme:~#

    Notice how the root size is actually part of the "default" profile and has to be overridden not set. Checking the config in the VM shows:


    ubuntu_vm1.jpeg


    But the desktop VM is still in the "incusbr0" network. To use the OMV host bridge br0, adjust the desktop VM config using:


    Code
    incus config device override Ubuntu24-04 eth0 nictype=bridged parent=br0 network=""

As the network was part of the original "default" profile, we use "override" and not "set". Restarting the desktop VM shows the change:


    ubuntu_vm2.jpeg


    Rather than having to repeat these commands on every instance, a new "desktop" profile with these configuration settings can be assigned to any desktop image instance.


    Code
    incus profile copy default desktop-profile
    incus profile set desktop-profile limits.cpu=2
    incus profile set desktop-profile limits.memory=4GB
    incus profile device set desktop-profile root size=20GB
    incus profile device set desktop-profile eth0 nictype=bridged parent=br0 network=""


    Creating an instance of the Ubuntu desktop with these settings is simplified to:


    Code
    incus create images:ubuntu/noble/desktop Ubuntu24-04v2 --vm -p desktop-profile


    ubuntu_vm3.jpeg


Examining the "desktop-profile" shows:




2. A "Windows" Profile


The same idea can be applied to creating a Windows 10/11 VM, which needs a minimum of 2 cpus, 4GB of memory, a 60GB disk, and a tpm device.


    Code
    incus profile copy default windows-profile
    incus profile set windows-profile limits.cpu=2
    incus profile set windows-profile limits.memory=4GiB
    incus profile device add windows-profile vtpm tpm path=/dev/tpm0
    incus profile device set windows-profile root size=60GB
    incus profile set windows-profile --property description="Windows profile 2CPU, 4GB RAM, 60GB space"
    incus init win11vm --empty --vm -p windows-profile
    incus config device add win11vm install disk source=<path to windows incus.iso> boot.priority=10
    incus start win11vm --console=vga


    The Windows ISO has to be repacked using distrobuilder ( see: https://blog.simos.info/how-to…achine-on-incus-on-linux/ )


    This can be done on any Linux host or on OMV itself. The required packages to install are:


    Code
apt-get install incus-extra
    apt-get install -y --no-install-recommends genisoimage libwin-hivex-perl rsync wimtools


    A typical distrobuilder command is:


    Code
sudo distrobuilder repack-windows --windows-arch=amd64 --windows-version=w11 \
    Win11_EnglishInternational_x64.iso Win11_EnglishInternational_x64.incus.iso \
    --drivers=virtio-win-0.1.266.iso


    The first stage of install:


    win11_vm1.jpeg

  • More about profiles, to quote the incus docs @ https://linuxcontainers.org/incus/docs/main/profiles/ :


    Code
    Profiles store a set of configuration options. They can contain instance options, devices and device options.
    
    You can apply any number of profiles to an instance. They are applied in the order they are specified, so the last profile to specify a specific key takes precedence. However, instance-specific configuration always overrides the configuration coming from the profiles.

    Consider these further examples:


    1. An amended "desktop" profile.


Moving the network config out of the "desktop" profile into a separate profile gives the flexibility to use either a host "bridge" or a "macvlan" network.

The desktop instance is then created/launched with two profiles: "desktop-no-net" plus "bridge-net-profile".
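A sketch of the launch command (instance name illustrative, profile names as in this post):

```shell
incus launch images:ubuntu/noble/desktop Ubuntu24-04v3 --vm \
    -p desktop-no-net -p bridge-net-profile
```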



    2. A Lyrion Music Player container.


To run the "Lyrion Music Player (Logitech Media Server - LMS)" in a container, access to OMV host folders can be configured in a "sharedfolders" profile. The container could then be launched with these profiles: "default", "bridge-net-profile" and "sharedfolders".



3. An OMV test VM.


A running OMV instance needs data disks. To mimic installing OMV on a "2 bay NAS", stack the "server-no-net" and "bridge" profiles with a newly created "2bay-disk-set" profile.


First copy the "desktop-no-net" profile to a new "server-no-net" profile and then edit the new profile's yaml file.


Create the "2bay-disk-set" profile.
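A sketch of what such a profile might look like, assuming two custom block volumes act as the "bays" (pool, volume names and sizes are illustrative):

```shell
# Two custom block volumes to serve as the VM's data disks
incus storage volume create default bay1 --type=block size=8GiB
incus storage volume create default bay2 --type=block size=8GiB

# A profile that attaches both volumes as disk devices
incus profile create 2bay-disk-set
incus profile device add 2bay-disk-set disk1 disk pool=default source=bay1
incus profile device add 2bay-disk-set disk2 disk pool=default source=bay2
```

Note that disk devices in a profile point at the same volumes for every instance using the profile, which is fine for a single test VM.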



    Create the omv-2-bay VM:



    Part 2 follows

  • Part2 of above:


    But there is a snag if you try to remove an existing drive from the "omv-2-bay" VM.


    Code
    chris@deb12:~$ incus ls omv-2-bay
    +-----------+---------+------------------------+------+-----------------+-----------+
    |   NAME    |  STATE  |          IPV4          | IPV6 |      TYPE       | SNAPSHOTS |
    +-----------+---------+------------------------+------+-----------------+-----------+
    | omv-2-bay | RUNNING | 192.168.0.207 (enp5s0) |      | VIRTUAL-MACHINE | 0         |
    +-----------+---------+------------------------+------+-----------------+-----------+
    chris@deb12:~$ incus config device remove omv-2-bay disk1
    Error: Device from profile(s) cannot be removed from individual instance. Override device or modify profile instead
    chris@deb12:~$

The "2bay-disk-set" profile is of limited use and can be removed from the instance. If the VM is to act more like a real machine, you need to add the "data disks" individually to the "omv-2-bay" VM; you can then remove or add them as required. For example:
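A sketch of adding and removing per-instance data disks (pool and volume names illustrative):

```shell
# Add a data disk directly on the instance instead of via a profile
incus storage volume create default omv-bay1 --type=block size=8GiB
incus config device add omv-2-bay disk1 disk pool=default source=omv-bay1

# ...and later, remove it again - no profile gets in the way this time
incus config device remove omv-2-bay disk1
```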



    To summarise:


    1. Incus profiles act as configuration templates.

    2. Incus profiles can be generic like the "bridge-net-profile" which can apply to any instance, or be specific like the "windows-profile" which is meant for Windows VMs.

    3. Incus profiles stack as in the examples above.


Remember that using an incus UI reduces much of this to a "point and click" activity. But the range of incus CLI commands lends itself to automation via scripts, the simplest being just a collection of incus commands.


But what you really want, in both the example of the "Lyrion Music Player" container and the "omv-2-bay" VM, is a step that completes the configuration within the instance itself.

  • Using incus with "automation" - the first steps

    =================================================


    1. Create instances by simple scripts.


Incus commands can be lengthy; simple scripts save on error-prone repetitive typing and are easily modified. For example,


    CLI commands:

    Code
incus launch images:debian/12 lms-player -p default -p bridge-net-profile -p sharedfolders
    
    incus launch images:ubuntu/noble nobleVM --vm -c limits.cpu=2 -c limits.memory=4GB -d root,size=20GiB -p default -p bridge-net-profile


    Simple Scripts:


    Code
    #!/usr/bin/bash
    incus launch images:debian/12  lms-player \
    -p default \
    -p bridge-net-profile \
    -p sharedfolders


    Code
    #!/usr/bin/bash
    incus launch images:ubuntu/noble nobleVM --vm \
    -c limits.cpu=2 \
    -c limits.memory=4GB \
    -d root,size=20GiB \
    -p default \
-p bridge-net-profile


    2. Use incus commands to configure an instance post-creation.


Typically instances need configuration post-creation: anything from adding packages, to executing commands, to adding files from the host.

    Use "incus exec .." to run commands inside an instance. Use "incus file .." to edit, delete, pull or push files to an instance.


In the example of the Lyrion Music Player (LMS) container, its configuration requires some standard packages to be installed, plus downloading and installing one other. The "incus exec .." commands are:


    Code
    incus exec lms-player -- sh -c "apt update -y && apt upgrade -y"
    incus exec lms-player -- sh -c "apt install wget perl libio-socket-ssl-perl libcrypt-openssl-rsa-perl -y"
    incus exec lms-player -- sh -c "wget https://downloads.lms-community.org/LyrionMusicServer_v9.0.2/lyrionmusicserver_9.0.2_amd64.deb  -O /root/lms.deb"
incus exec lms-player -- sh -c "apt install -y /root/lms.deb"


    This simple script to create a Lyrion Music Player (LMS) container from scratch illustrates how the creation of a single profile combines with "incus exec ..." commands to configure the container instance.
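A sketch of such a script, built from the commands earlier in this post (profile names as used in this thread; the sleep and the echo/read break point are illustrative):

```shell
#!/usr/bin/bash
# Create the LMS container with its stacked profiles
incus launch images:debian/12 lms-player -p default -p bridge-net-profile -p sharedfolders

# Give the container time to get its network up
sleep 10
echo "lms-player created"
read -p "Press enter to configure lms-player"   # debug break point

# Configure the container with incus exec
incus exec lms-player -- sh -c "apt update -y && apt upgrade -y"
incus exec lms-player -- sh -c "apt install -y wget perl libio-socket-ssl-perl libcrypt-openssl-rsa-perl"
incus exec lms-player -- sh -c "wget https://downloads.lms-community.org/LyrionMusicServer_v9.0.2/lyrionmusicserver_9.0.2_amd64.deb -O /root/lms.deb"
incus exec lms-player -- sh -c "apt install -y /root/lms.deb"
```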



Note: The "echo" and "read" statements are simple debug break points that can be removed.


Boron is the current version of the BunsenLabs lightweight Linux distro. It can be installed via a script after first doing a debian net install. The "bunsen net install script" is designed to be run interactively by a non-root user with sudo privileges. This is an example script of how this might be done in incus:
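A sketch of one way this might look - the container name, user name and password are assumptions for illustration, and the bunsen installer itself still has to be run interactively:

```shell
#!/usr/bin/bash
# Create a debian container to host the Boron install
incus launch images:debian/12 boron-cont -p default -p bridge-net-profile
sleep 10

# Prepare a non-root sudo user inside the container
incus exec boron-cont -- sh -c "apt update -y && apt install -y sudo wget"
incus exec boron-cont -- sh -c "useradd -m -s /bin/bash -G sudo bunsen"
incus exec boron-cont -- sh -c "echo 'bunsen:changeme' | chpasswd"

# Attach a shell as that user to fetch and run the bunsen net install
# script interactively
incus exec boron-cont --user 1000 --group 1000 --env HOME=/home/bunsen -- bash
```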



    Follows on ..
