I highly recommend the uNAS 810a: http://www.u-nas.com/xcart/product.php?productid=17640. I was cautious about it initially but after running an 8-disk setup for quite a while now, it's a dream.
Posts by ikogan
-
Sorry to resurrect an old thread but you mentioned you saw this on Flash media. I'm having a similar situation and I _am_ booting off USB flash. Do you have any more details on this or how to deal with it?
-
Even if some of the features aren't supported, that doesn't mean that a pool cannot be imported. It really depends on which features you're using on your pool and which are supported (or how well) on ZoL. Use `zpool get all ${your pool}` to see which features you're using and then you can check the `zpool-features` man page to see which are supported. ZoL's man page in source form is available here https://github.com/zfsonlinux/…man/man5/zpool-features.5. On my system, it lists:
async_destroy, empty_bpobj, filesystem_limits, lz4_compress, spacemap_histogram, extensible_dataset, bookmarks, enabled_txg, hole_birth, embedded_data, large_blocks
As always, backups are a great idea.
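A quick way to see just the feature flags is to filter the output of `zpool get all`. Here's a sketch; the pool name "tank" and the sample output are made up for illustration, so the real command is wrapped in a stand-in function. On a live system you'd pipe `zpool get all tank` straight into the awk filter:

```shell
# Stand-in for `zpool get all tank` so this sketch runs anywhere; the
# pool name and this sample output are assumptions, not real data.
zpool_get_all() {
  cat <<'EOF'
NAME  PROPERTY                 VALUE     SOURCE
tank  size                     10.9T     -
tank  feature@async_destroy    enabled   local
tank  feature@lz4_compress     active    local
tank  feature@large_blocks     disabled  local
EOF
}

# Keep only the feature@ properties and their state ("active" means the
# feature is in use on disk; "enabled" means available but not yet used).
zpool_get_all | awk '$2 ~ /^feature@/ { print $2, $3 }'
```

Features showing "active" are the ones that actually matter for import compatibility; "enabled" but never-used features generally won't block an import.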
-
The compression ratio is a read-only property that tells you how compressed your data is. In my case, it just means ZFS hasn't been able to compress much of my data.
-
To be fair, my compression ratio is currently sitting at "1.0" so the compression hasn't benefitted me that much at the moment :-/
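For reference, `compressratio` is reported as a multiplier, so 1.00x means no savings at all. Here's a small helper to turn that into a percentage; it's just my own illustration, not part of zfs:

```shell
# Convert a zfs compressratio value (e.g. "1.33x") into space savings.
# This helper is an illustrative sketch, not a zfs command.
ratio_to_savings() {
  # Strip the trailing "x", then compute 100 * (1 - 1/ratio).
  awk -v r="${1%x}" 'BEGIN { printf "%.0f%%\n", 100 * (1 - 1 / r) }'
}

ratio_to_savings 1.00x   # 0%: nothing saved, like my pool right now
ratio_to_savings 1.33x   # roughly a quarter of the space saved
```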
-
I'm running Plex on that server and have yet to notice any issues; the CPU is barely working when it's streaming. I believe folks are starting to enable it by default.
-
I've been running with lz4 for years. Unless your server is a potato, you should be fine performance-wise. In terms of compression ratio, I doubt you'll see much if the only thing on your filesystem is compressed video.
-
I don't believe so. I have yet to read any material that discusses data safety issues with files being in the top level dataset.
-
Quote
Could you please provide some reference as to why the file system created by "zpool create" shouldn't be used for data storage? What makes it any different from other file systems created by ZFS?
Well, there are some issues with data being on the root filesystem of a pool, mainly, I believe, to do with `zfs send`. From what I've read (but haven't experienced myself), `zfs send` needs to be able to create all of the filesystems on the remote side, and it can't do that for the root filesystem, so none of those files will be sent. That said, I cannot find any real documentation that discusses this situation.
From: http://www.solarisinternals.co…/ZFS_Best_Practices_Guide
Quote
Consider using the top-level dataset as a container for other file systems. The top-level dataset cannot be destroyed or renamed. You can export and import the pool with a new name to change the name of the top-level dataset, but this operation would also change the name of the pool. If you want to snapshot, clone, or promote file system data then create separate file systems in your pool. File systems provide points of administration that allow you to manage different sets of data within the same pool.
I feel like it would be a bit counterintuitive to have the UI auto-create a child dataset and force you to deal with it. If you want to follow this best practice, create some datasets after creating the pool.
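Following that best practice by hand might look something like this. The pool layout and dataset names are just placeholders, and these commands need real disks and a live system, so treat it as a sketch rather than something verifiable here:

```shell
# Create the pool; the top-level dataset "tank" stays empty.
zpool create tank mirror /dev/sda /dev/sdb

# Put actual data in child datasets so each can be snapshotted,
# cloned, and sent independently.
zfs create tank/media
zfs create tank/backups
```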
-
I've submitted a pull request that fixes the cleanup issues you reported as well as adding support for multiple KDCs. The PR is here: https://github.com/OpenMediaVa…ediavault-kerberos/pull/2.
@ryecoaaron, could you do whatever it is that gets this pushed to the repos?
-
Hrm, that's a good point...I'm not sure why I thought I could see it that way. Ok, how about pasting the output of `docker inspect ubuntu`?
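`docker inspect` dumps a fairly large JSON blob; if you only care about a couple of fields, the standard `--format` flag can pull them out directly (the container name "ubuntu" is the one from this thread). This needs a live Docker daemon, so it's a sketch rather than something I can verify here:

```shell
# Print just the container state and the command it was started with,
# instead of eyeballing the full JSON.
docker inspect --format 'status={{.State.Status}} cmd={{.Path}} {{.Args}}' ubuntu
```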
-
`docker ps -a` doesn't tell you the command line options passed to "docker" to start the container.
-
That would imply that the container isn't actually running?
-
Could you check the precise arguments used to start Docker? Just run `ps aux | grep docker` and see if you can find the container you're trying to run.
-
You can create a Dockerfile similar to the following:
```dockerfile
FROM ubuntu:latest
MAINTAINER Your Name <you@domain.com>

ENV DEBIAN_FRONTEND noninteractive

# Install the toolchain and dependencies
RUN apt-get update && apt-get install -y build-essential libbz2-dev \
    libncursesw5-dev scons zlib1g-dev libglib2.0-dev libssl-dev libstdc++6 \
    libminiupnpc-dev libnatpmp-dev libtbb-dev libgeoip-dev libboost1.55-dev \
    libboost-regex1.55-dev libboost-thread1.55-dev libboost-signals1.55-dev \
    libboost-system1.55-dev libleveldb-dev git

# Clone and build airdcnano
RUN mkdir /source && cd /source && \
    git clone https://github.com/airdcnano/airdcnano.git && \
    cd airdcnano && scons -j2 && scons install

ENTRYPOINT ["/usr/local/bin/airdcnano"]
```
That's just an example and probably isn't complete. You can build the container with `docker build -t yourname/airdcnano .` in the directory containing the Dockerfile. It may be worthwhile to put the last RUN command into a script instead; I'd read the Dockerfile documentation.
However, it looks like airdcnano is a GUI app, or are you trying to use its text mode?
-
Can you try to start your container manually rather than through the GUI? I'm not sure if those images have any sort of entrypoint; I don't think they're meant to be run directly, but rather to have other images built from them. When you run the image, are you giving it any sort of entrypoint in the GUI? You should be able to specify one in the "Extra Arguments" section.
You should then be able to attach. Also, you may want to look into creating a Dockerfile instead of manually updating the image as it's a lot easier to update/maintain the image that way.
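For reference, the command-line equivalent would be something like the following (`--entrypoint` and `-it` are standard `docker run` flags; the image name is from this thread). It needs a running Docker daemon, so it's a sketch rather than something I can verify here:

```shell
# Start the stock image with an interactive shell as the entrypoint,
# so there's something to attach to.
docker run -it --entrypoint /bin/bash ubuntu
```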
-
I'm assuming you've tried to just hit enter after attaching? Attaching won't necessarily generate any output right away, you might have to kick the shell.
-
I'll update my scripts to restart without using service, thanks. Has anyone reported this to Volker?
-
Ah, OK. The plugin doesn't currently support multiple KDCs...I'll see what I can do about that.
-
I remember doing something specific to fix it and I can't remember what...