Try adding /web to the end, for example:
192.168.1.101:32400/web
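(If the page still doesn't load after that, a quick check from another machine that Plex is answering on port 32400 at all, using the example address above:
Code
curl -I http://192.168.1.101:32400/web
Any HTTP response, even a redirect, means the server is reachable and it's a browser/URL issue rather than a network one.)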
Posts by Doc
-
I think he made a workaround for it, but it might not be working on OMV 2.0. It was all I needed to make the plugin work; I didn't even need to manually untar after installing bzip2.
-
More than one device connected to the server puts you in the position of paying some money. I need a free solution.
You can stream to multiple devices using the free version of Plex. The paid feature lets you manage the server with multiple users.
-
Has the way plugins are developed/written changed drastically for OMV 3.0?
-
I see. So does each hard disk connect directly to a SATA port on the motherboard? If that's the case, any motherboard will be able to take an HBA, right?
Sorry for my noobness..
-
I'm worried that the AMD processor and 16 GB of RAM on that won't be able to handle 50 TB of storage along with 5 concurrent Plex streams...
and the motherboard uses DDR2 RAM; wouldn't it be better to go for DDR4?
-
The Lian Li looks great, I'll try getting that if possible. I'm not too familiar with racks, so I don't know if I need to get a separate motherboard/processor for this? And if so, which would you recommend?
-
Hi, I'm currently running out of space to add more hard disks to my server, and I'm looking to get something with enough room to attach up to 20 (or a bit more) hard disks and handle more load in the near future.
I use it mainly for downloads and Plex. I'm looking at around 50+ TB of storage (ext4), most of which is videos, and I'm not worried much about data loss. What would be the best option here, a server rack I'm guessing? If so, what hardware would be best? If I could get some recommendations, that would be great.
Thanks in advance!
-
It didn't work with GParted for me; I was getting errors from the e2fsck there, but I was sure my FS was fine, as an fsck I ran about 10 minutes earlier showed no errors, so I aborted that and tried with SystemRescueCd, and it worked flawlessly. Thanks for the help, guys.
-
The installation is 64-bit:
dumpe2fs:
Code
Filesystem volume name:   Main
Last mounted on:          /media/23ddb01f-bd5d-4aa4-93f3-5053fda907b1
Filesystem UUID:          23ddb01f-bd5d-4aa4-93f3-5053fda907b1
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              366284800
Block count:              2930260992
Reserved block count:     0
Free blocks:              1560698105
Free inodes:              364959257
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      650
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         4096
Inode blocks per group:   256
Flex block group size:    16
Filesystem created:       Sun Feb 21 05:58:07 2016
Last mount time:          Thu Feb 25 22:34:12 2016
Last write time:          Thu Feb 25 22:34:12 2016
Mount count:              1
Maximum mount count:      -1
Last checked:             Thu Feb 25 21:00:01 2016
Check interval:           0 (<none>)
Lifetime writes:          511 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      8bbf6a88-a606-4333-a33e-ed400c570fd9
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x00007d26
Journal start:            7383
I'll try using GParted and see if that works
-
Sorry to post in an old thread, but it seems the issue is still occurring. About 3 days back, I created an ext4 file system from 3 drives in LVM with a total space of 10 TB. Today I tried adding 2 more drives of 3 TB each, and the file system gives the same error, "resize2fs: new size too large to be expressed in 32 bits", when I try to resize the fs.
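(For anyone hitting this later: the error shows up either when the filesystem lacks the 64bit feature, or when the resize2fs binary itself is too old to grow a filesystem past 2^32 blocks, i.e. 16 TiB at a 4 KiB block size; the dumpe2fs output further up shows 64bit already enabled here, which points at an old binary. A rough sketch of the checks and the offline conversion, with /dev/vg0/lv0 standing in for your actual LVM volume:
Code
# header-only dump; look for "64bit" in the features line
dumpe2fs -h /dev/vg0/lv0 | grep -w 64bit
# if the feature is missing, e2fsprogs 1.43+ can convert offline
umount /dev/vg0/lv0
e2fsck -f /dev/vg0/lv0
resize2fs -b /dev/vg0/lv0
# then grow the fs into the extended LV as usual
resize2fs /dev/vg0/lv0
Either way, booting something with a current e2fsprogs, e.g. a recent SystemRescueCd, is the usual workaround.)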
-
It could possibly be a corrupt file. Does it happen with all files? Try it with just one big file.
-
The move of the sickbeard files from /home/sickbeard to /var/opt/sickbeard is not OMV-specific (as far as I know), so the version of OMV should not matter (much). It was changed in a release of the sickbeard plugin: I was on sickbeard 1.0.3 when the files were still in /home/sickbeard/, so I believe they were moved to /var/opt/sickbeard/ from 1.0.4 onward.
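(A quick way to see which of the two layouts a given install actually has, using just the two paths from the post above:
Code
ls -d /home/sickbeard /var/opt/sickbeard 2>/dev/null
Whichever directory exists gets printed; the missing one is silently skipped.)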
-
Mosh, I think the best thing for you would be to back up and do a fresh install of OMV; it'll be the 'simpler' fix. I recently did that on my own OMV, which was having issues with omv-firstaid and a bunch of other things (I experimented too much :p). That said, don't go on my word alone; I'm just an average user of just over a year. Maybe someone else can guide you better.
-
I'm thinking you faced the same issue I did: all the sickbeard database/config files might still be in the old directory (/home/sickbeard). Try running ls -A /home/sickbeard
Edit: and also du -sh /home/sickbeard if the ls command worked
-
From memory, OMV 0.5 used Apache, but from OMV 1 onward it used nginx.
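(If in doubt, an easy way to check what a given install actually has is just to list the installed packages; nothing OMV-specific here:
Code
dpkg -l nginx apache2 2>/dev/null | grep ^ii
Lines starting with ii are the installed ones.)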
You will need to at least set the chown level on the files, as you probably copied them under the root user, so they will be owned by root and not accessible by the couchpotato user. I think you meant the sickbeard user? Anyway, I have that all working now.
On that note, OMV + SickRage on a fresh install has no sickbeard user... should I add it manually? If so, what password should I assign to it? Or should I not be adding it from the OMV web GUI?
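(For what it's worth, daemon accounts like this are normally passwordless system users with login disabled, not regular accounts made through the web GUI. A sketch of checking for one and, if it really is missing, creating it on Debian; the group and home directory the plugin expects are assumptions here:
Code
# check whether the account already exists
getent passwd sickbeard
# create it as a passwordless system user with no login shell
adduser --system --ingroup users --home /var/opt/sickbeard --no-create-home sickbeard
But if the plugin normally creates the user on install, reinstalling the plugin may be the cleaner route.)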
-
What version did you update from? And did you update via OMV's web GUI?
-
If I'm not mistaken, the OMV web GUI uses apache2, right? I was able to run both the OMV web GUI and the SickRage web interface (on 1.0.10, both on a fresh install and after upgrading from 1.0.3 to 1.0.10). However, when I attempted to restore a backup from one system to another by copying the files but NOT setting chmod and chown as discussed above, SickRage would fail to start.
-
I got it to work 100% by:
1. extracting the backup to a temp dir and cd'ing into it,
2. doing a "cp -r * /var/opt/sickbeard/"
3. "cd /var/opt/sickbeard/ && chmod -R 777 * && chown -R sickbeard:users *"
System back as it should be (for now...)
Brilliant! Got it to work this way, thanks! (In case anyone's wondering, the files to be copied from the OLD SickRage live in /home/sickbeard/.sickbeard/ and go to /var/opt/sickbeard/.)
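(That works, though 777 is looser than strictly needed; for anyone who wants tighter permissions, something like this should also do, assuming the daemon really runs as sickbeard:users:
Code
cd /var/opt/sickbeard/
chown -R sickbeard:users .
# owner gets rwx, group gets read plus execute on directories, others nothing
chmod -R u+rwX,g+rX,o-rwx .
)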