The Class E Network

    • Official Post

    Have you compared the smb.conf files from the new server and one that's working?

    Yeah. That was one of my side-by-side comparisons. First the GUI and then smb.conf.


    While I'm convinced there is a reason, or at least a workaround, this is something truly weird. I even went to the shared folder with WinSCP and applied the 777 mask, where everyone gets write access (you do that by right-clicking on the folder and choosing properties). Still nothing, so it's not a permissions issue. From what I can see with smbstatus, side by side with a server that works, it's as if the Samba service is running but the client is not connecting to the Samba share via the designated network sockets.


    Then there's this entry in a client log file that crops up once in a while:


    chdir (/srv/dev-disk-by-label-WD1TB/ServerFolders/Backups) failed, reason: No such file or directory
    smbXsrv_tcon_disconnect(0x9cd58d3d, 'Backups'): set_current_service() failed: NT_STATUS_INTERNAL_ERROR
    smbXsrv_tcon_disconnect_all: count[1] errors[1] first[NT_STATUS_INTERNAL_ERROR]
    smbXsrv_session_logoff(0xf9695c89): smb2srv_tcon_disconnect_all() failed: NT_STATUS_INTERNAL_ERROR
    smbXsrv_session_logoff_all: count[1] errors[1] first[NT_STATUS_INTERNAL_ERROR]
    Server exit (termination signal)
    exit_server_common: smbXsrv_session_logoff_all() failed (NT_STATUS_INTERNAL_ERROR) - triggering cleanup


    It's absolute nonsense; that directory exists at that exact location, and the Samba service doesn't actually exit. smbstatus shows it's running fine, just not connected.


    Anyway, it's weird. I'll either figure it out or find a workaround.

    • Official Post

    Again, bounce what I said re: education off your wife. See what she says. (I can take it, if she doesn't agree.)

    Well, I copied and printed your thesis and presented it to her whilst she was putting on the war paint and filler this morning.....I am sure comments and marking will be forthcoming, as she was intrigued after my explanation.

    • Official Post

    Back again.


    I take it the wife doesn't agree. (Potentially, with comments about my being a curmudgeon or something.)


    I admit to being conservative. I think education is a privilege, not a right. Well, let me modify that: I think it's a right that should be revocable in certain circumstances so, effectively, it could be defined as a privilege.

    • Official Post

    Definitely true, but it should be a right in my opinion.

    I'd still maintain that education should be a privilege, versus a right. Why? In the US, kids are actually beating up teachers, and in some instances teachers are not even allowed to defend themselves. ("Child abuse.") If education is a "right", there's more latitude for the little thug'lets to stay in school.


    That's what I'm getting at when I talk about a "conditional" right. There are instances where something that is called a "right" can be revoked, as it is in many other venues. (Driving, flying, etc.) Even the US's 2nd Amendment "right" to bear arms is conditional so, in a practical sense, it's effectively a privilege enjoyed only by law-abiding citizens.
    ___________________________________


    Linux, under the GNU General Public License, assigns specific "rights" to users. In this case, I think the term "right" is correct in that the terms of the GPL are not revocable.



    (I had to come up with a tie in.) :rolleyes:


    • Official Post

    I take it the wife doesn't agree. (Potentially, with comments about my being a curmudgeon or something.)

    No, I think quite the opposite. Actually, at present she has so much going on, with the confirmation that she is to be kept on (originally the position was temporary), and they want her to teach what is KS1 in the UK, which is 4-7 year olds.


    But I agree with what you are saying; the focus is on the positive, not the negative....so a child can be an absolute **** but if he/she does something positive, that is pushed to the point that they receive some sort of reward. In most cases, as my wife says, it does work, but you end up constantly looking for and highlighting the positives, the idea being that you turn that child around.
    Take parents' evening: mum and dad have about 10 mins with their child's teacher after school......they have to focus on the positive...they cannot say "your little Johnny is a ****, never does as he is told, never does any of the work"; they have to focus on the positive and draw in the negative.
    She has had to suggest to some parents that they take the child to an optician, because when they are trying to read, the book is right under their nose. She had one child who seemed to ignore instruction....she worked out he was partially deaf; the parents took him to the doctor's and the child needed grommets fitting.
    My wife has had to handle numerous unruly 4 year olds.....in her last school the new head removed her from that class and put her in a class she had never taught...this was to get rid of her...it's what they do in the UK.....anyway, the teacher that was put in EYFS had problems, so much so that she, the head and the support staff went on 'restraint training courses'.....my wife never needed that; she has the ability to work with the child.


    ______________________________________________________


    "Maxey" I had to google that......it's probably about 1.5 hours away from where we live.....small village approx. 700 residents.


    _______________________________________________________


    Finally sorted rsync. I can mount the USB from the GUI, set everything up and run it.....but you can't unmount from the GUI; my guess is it's because the USB is referenced by the rsync jobs....however, you can unmount it from the CLI, then the USB appears as 'missing' in the file system.....just plug it back in and mount it and it works......so another problem solved.
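    For anyone who hits the same thing, the CLI unmount is a one-liner; the label below is just an example, substitute your own mount point:

    sudo umount /srv/dev-disk-by-label-USBBACKUP    # unmount by mount point (example path)
    lsof +D /srv/dev-disk-by-label-USBBACKUP        # if umount reports 'busy', see what still has files open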


    Installed Win32DiskImager and that works, although I just might test dd on my Linux Mint box to see if that works too.
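    If I do try dd, my understanding is the usual form is something like this (device name is an example; check with lsblk first, since dd will cheerfully overwrite the wrong disk):

    lsblk                                           # identify the USB stick, e.g. /dev/sdX
    sudo dd if=backup.img of=/dev/sdX bs=4M status=progress
    sync                                            # flush buffers before unplugging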

    • Official Post

    No, I think quite the opposite. Actually, at present she has so much going on, with the confirmation that she is to be kept on (originally the position was temporary), and they want her to teach what is KS1 in the UK, which is 4-7 year olds.

    Our school systems play that game here, as well. With "temporary positions", managerial boneheads get to watch teachers do their thing and decide whether or not to keep them. I suppose that's important because, as in any profession, some are just not cut out for the job they're hired for. However, for those who are good at a job, the initial vetting process can be somewhat unnerving.

    Finally sorted rsync. I can mount the USB from the GUI, set everything up and run it.....but you can't unmount from the GUI; my guess is it's because the USB is referenced by the rsync jobs....however, you can unmount it from the CLI, then the USB appears as 'missing' in the file system.....just plug it back in and mount it and it works......so another problem solved.

    If you want to run unmount from the CLI:
    If there's something that you need to run from the command line, rather than trying to memorize commands, parameters, etc., I'm using <Scheduled Jobs>. In the attached you'll see that I'm actually using it for automation purposes, but it's easy to run preconfigured command lines by opening the page in the GUI, highlighting one and clicking <Run>. It's a bit faster and a lot more precise than SSH'ing in and hand-typing it on the CLI.



    Along those lines, I don't know what file system you're using, but BTRFS assigns checksums to files. The "BTRFS scrub" command checks files against the originally calculated checksums and reports errors. (Which is another definitive indicator of HD health.)
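    For reference, starting and checking a scrub is just this (the mount point is an example):

    sudo btrfs scrub start /srv/dev-disk-by-label-DATA     # kick off a scrub on the mounted volume
    sudo btrfs scrub status /srv/dev-disk-by-label-DATA    # check progress and error counts later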


    This would be going a step farther but, since BTRFS supports pooling and RAID1 natively, and intelligently, I'm giving thought to going that route. With data checksums and two copies of the data, BTRFS will select the uncorrupted copy to read and correct the second copy where the checksum doesn't match.
    One of the things I like about BTRFS is that, if one doesn't want to bother with advanced features, it can be used as a hands-off, drop-in replacement for ext4. If something more complex is deemed useful, snapshots or even BTRFS RAID1, it can be used for that as well.
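    If I do go that route, my understanding is the mirror is created at format time, something like this (example devices):

    sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc    # mirror both data and metadata across two disks
    sudo mount /dev/sdb /srv/pool                          # mounting either member brings up the whole pool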


    "Maxey" I had to google that......it's probably about 1.5 hours away from where we live.....small village approx. 700 residents.

    It's a shame you haven't been there. I'm sure you would find the original (Maxey) residents to be intelligent and engaging... :rolleyes:

    • Official Post

    The two RAID setups I have: a small mirror using BTRFS, and a RAID 5 which is ext4....now I could change that RAID 5 to BTRFS, and it's on my 'maybe' list. My external USB is NTFS....this is so I can connect it to my Windows laptop.


    I've seen the Scheduled Jobs option, but this is more than I need as I'm quite happy to run some form of backup manually, so the choice was rsync, rsnapshot or UrBackup. The problem I faced was using the NTFS USB: I could mount the drive, I can set up shared folders, I can then set up rsync jobs using those shared folders, and it works, all this from the GUI. But you cannot unmount the USB from the GUI, it's greyed out, my guess being the drive is referenced through both shared folders and rsync, hence unmounting using the CLI. What I don't want to do is leave the USB permanently connected and mounted to the OMV server.


    I have a number of spare 3.5" HDDs of varying sizes, and I'm contemplating getting some external USB enclosures and setting up each USB drive to handle a specific backup job, rather than getting another box and running something like FlexRAID to make use of the drives.


    I must admit, as I said before, OMV does what I need it to do....the apt tool plugin is brilliant; I can use it to install Emby media server and it even installs any updates, so I don't have to use the CLI.


    I like the idea of the scrub cron job....I had this set up when I used nas4free and ZFS.....OMV is also reporting that I have some bad sectors on one of my RAID 5 drives; I never knew that, otherwise I would have run dd to low-level format and write zeros to the drive. I have SpinRite, but I'm not sure if that would be of any use.....but as the RAID is reporting clean I shall probably leave it.


    Edit: penny just dropped!! Your username then reflects Maxey from the village name...or is that way off base.. :)

    • Official Post

    The two RAID setups I have: a small mirror using BTRFS, and a RAID 5 which is ext4....now I could change that RAID 5 to BTRFS, and it's on my 'maybe' list. My external USB is NTFS....this is so I can connect it to my Windows laptop.

    I know Linux recognizes vfat but, at some point, they must have built in NTFS recognition for directly connected drives as well. Some time ago, it was necessary to add a package to enable that.


    I've seen the Scheduled Jobs option, but this is more than I need as I'm quite happy to run some form of backup manually, so the choice was rsync, rsnapshot or UrBackup.


    I must admit, as I said before, OMV does what I need it to do....the apt tool plugin is brilliant; I can use it to install Emby media server and it even installs any updates, so I don't have to use the CLI.


    I like the idea of the scrub cron job....

    Other than the BTRFS scrubs, which should be done on a regular basis:
    I was thinking of using cron jobs as a command line launching tool, so I could run a command line (with parameters, switches, etc.) without having to SSH in, log in, hand-type it out every time, and correct fat-finger errors. A sketch of what I have in mind is below.
    (Programmers are probably rolling their eyes at this.)
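    Something like this in a root crontab, or the equivalent <Scheduled Jobs> entry (mount point is an example):

    # m h dom mon dow   command   -- weekly BTRFS scrub, Sundays at 3am
    0 3 * * 0   /bin/btrfs scrub start -Bq /srv/dev-disk-by-label-DATA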


    I've been using UrBackup for clients, in a limited capacity, with the R-PI. Now, with a server with a GB Ethernet interface, I'm going to start backing up clients and try a bare-metal restore. (But I have to wait on the 4TB drive...)


    Edit: penny just dropped!! Your username then reflects Maxey from the village name...or is that way off base.. :)

    Pick up that penny! You're on base.
    Part of my user name and my last name are the same. While it stretches way back, into the dim mists of time as it seems, my ancestors were in Maxey, England. (Setting aside those who changed their names, etc.) Wikipedia notes that if one has the last name Maxey and has ancestry traceable back to England, it's highly probable that they are the progeny of Maxey, England.
    In my family, two brothers came over from England and got off the ship in Richmond, VA, back in the day. One of the two stayed in the eastern part of the US and represents my side of the family. The other went to the southwestern region and represents the Maxeys in Texas and other areas close by.


    _______________________________________________________________


    Oh, the problem with Samba seemed to be:
    When I first set up the new server, so I could experiment with it a bit, I used UFS to join two smaller drives in a pool. (At least until I could order a 4TB drive.) I set up OMV shares on the UFS pool and built Samba shares on them. Something else occurred that made me look hard at the UFS pool. (Really, a UFS pool is the equivalent of a "symlink" for multiple drives that translates between different file systems: ext3, ext4, etc.) So I broke the UFS pool with the Samba and base shares still in place. (A no-no. I should have backed out of Samba and the base shares first.)


    After the fact, the Samba shares did appear to delete normally. However, when the same Samba shares were created on the still-existing base share (which was now on a single drive), it seemed as if there was a permissions problem. Samba configs were correct, all the attributes were right, but changing permissions in Samba and on the base share seemed to have no effect. I ran "reset perms" on the base share, a shotgun approach. Still no change. (This is where I began to think something was truly odd. Reset perms should have taken control.) I rebuilt OMV from scratch. Still no change. After messing around with it some more, I decided to reformat the data drive. That fixed it. At a guess, something happened when the drive was under UFS that set permissions on the shares in such a way that they were permanent. (Or at least I didn't have the right tool to override them.)
    I should have wiped the drive right off the bat but, in this case, trying to use a shortcut, plus my own curiosity, ended up in a lot of wasted time.
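    (For what it's worth, my understanding is that "reset perms" amounts to roughly the following; the path is an example. The fact that even this level of brute force didn't take is what convinced me the problem sat below the permissions layer.)

    sudo chown -R root:users /srv/dev-disk-by-label-DATA/ServerFolders    # hand ownership back to the users group
    sudo chmod -R 775 /srv/dev-disk-by-label-DATA/ServerFolders           # read/write for owner and group, read for others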
    ______________________________________________________________


    Given the age group your wife teaches, whenever I see this commercial, I think of your wife. (Especially after my rant about discipline in the classroom.) Play it for her. I'm sure she'll get a kick out of it. (Be sure to go full screen.)


    [Embedded video: www.youtube.com]

    • Official Post

    After messing around with it some more, I decided to reformat the data drive. That fixed it. At a guess, something happened when the drive was under UFS that set permissions on the shares in such a way that they were permanent.


    Does UFS use some sort of metadata which is written to the drive? ZFS does; if it does, it would explain why the problem reoccurred when you did a rebuild.....I'm surprised that formatting removed it; the last time this happened to me, I had to low-level format the drive.



    I know Linux recognizes vfat but, at some point, they must have built in NTFS recognition for directly connected drives as well. Some time ago, it was necessary to add a package to enable that.

    There is a 'plugin' that allows mounting of NTFS partitions, I think it's called ntfs-3g; if that is installed, then mounting NTFS is no problem.

    • Official Post

    Does UFS use some sort of metadata which is written to the drive? ZFS does; if it does, it would explain why the problem reoccurred when you did a rebuild.....I'm surprised that formatting removed it; the last time this happened to me, I had to low-level format the drive.

    Regarding UFS metadata, I really don't know. I used UFS on a lark, to get past using low-capacity drives. Then I noticed a few other UFS effects that, while not real problems, I didn't care for. In any case, the metadata idea makes sense. It would have to do something like that to create a translation layer between dissimilar drive formats.
    Actually, as OMV recommends, I "wiped" and then "formatted". When I reformatted, I went to BTRFS which, as I've noticed, doesn't actually format the drive. I believe it wipes the existing partition table, notes the size of the drive(s), and formats on the fly. I might have been lucky; maybe reformatting with BTRFS helped to avoid a low-level format. Perhaps BTRFS can clean up after ZFS.


    So what was your experience with ZFS? I hear a lot of hype, but the available information seems thin, and it seems complex right out of the gate. BTRFS, on the other hand, is simple to implement but sophisticated enough that advanced features can be turned on later. (At least that's my take on it.)





    There is a 'plugin' that allows mounting of NTFS partitions, I think it's called ntfs-3g; if that is installed, then mounting NTFS is no problem.

    I'm a little unclear about this. If you're mounting a USB drive from Windows, it is probably formatted NTFS. So... the connection interface, whether it's USB or SATA, shouldn't matter. (While "writing" NTFS is another matter altogether.) If you didn't install anything, apparently Linux can now read NTFS natively, OR OMV has it provisioned out of the box..??

    • Official Post

    I got my 4TB drive and used Clonezilla to clone the data on a 1TB drive over to the 4TB. I then used GParted to expand the resultant drive partition from 1TB to 4TB. I completed things by doing a BTRFS scrub. It went over like a lead balloon. (Just to be safe; I'm glad I did.) Something like 203 correctable errors showed up, but 28,000+ errors were not correctable, so I stopped it. That's not to say that there were actual file errors, but BTRFS thought there were. (If there were that many actual file errors, Clonezilla would be worthless. I tend to doubt that.) Apparently BTRFS metadata and file checksums don't like being copied by an external program. In any case, I just deleted all the file data from the existing folders and reran a scrub, which was 'clean' (on 60MB of folders), and started the rsync jobs again.
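    In hindsight, GParted may not have been needed for the grow step; as I understand it, BTRFS can resize itself while mounted once the partition is enlarged (device and mount point below are examples):

    sudo parted /dev/sdb resizepart 1 100%                          # grow partition 1 to fill the disk
    sudo btrfs filesystem resize max /srv/dev-disk-by-label-DATA    # grow the filesystem into the new space

    Not that it would have saved the checksums, of course.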


    Bottom line: it's fairly obvious that the old methods won't work with the new file systems. Also, beyond setting an initial partition size, it seems as if BTRFS formats on the fly, as it writes. Given the way it works, that makes sense, but it appears to be completely unconcerned with unused space. Unless there's some method of periodically checking unused space for media errors, that may not be a good thing.

    • Official Post

    I'm a little unclear about this. If you're mounting a USB drive from Windows

    I don't think I'm explaining myself very well. The drive is formatted NTFS as it was initially acquired for use with Windows; however, you can now 'mount' that drive in a Linux distro provided ntfs-3g is installed. As each distro is different, some have this installed during the initial installation process and some do not. I'm assuming that OMV installs it, therefore making it easy to mount an NTFS-formatted drive.
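    If a distro doesn't ship it, getting NTFS mounting working is just this (device and mount point are examples):

    sudo apt-get install ntfs-3g                 # FUSE driver for read/write NTFS
    sudo mount -t ntfs-3g /dev/sdc1 /mnt/usb     # mount the NTFS partition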

    • Official Post

    So what was your experience with ZFS?

    I never had a problem with it. I was using nas4free, and this gave you the option to use either software RAID or ZFS; as everything was configurable from the GUI it just worked, though it proved to be a steep learning curve. The one noticeable thing was that it 'appeared' to give me more space, but you could also apply compression, which never affected its use as it stored my movie collection.
    But I did learn about metadata after a scrub revealed a degraded array....removing the drive from the array, I connected it to a SATA-to-USB adapter and ran a number of tests....nothing, the drive reported as OK, so I removed the partitions....in essence I had a clear drive....put it back in the server, added it to the array, whereby it barfed; there was no way that drive was going back....I spent a number of hours on it......So, with the help of Google, I discovered it was the metadata that was written to the first 512k of the drive. So, back to the drawing board, and I found I could use dd to completely erase the drive (sketched below)....added it back to the array and all was good.
    With ZFS being part of nas4free, it was straightforward to work with and use.
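    For anyone who hits the same thing, the dd erase is along these lines (example device; destructive, obviously):

    sudo dd if=/dev/zero of=/dev/sdX bs=512 count=1024        # zero just the first 512k where the metadata lives
    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress    # or erase the whole drive, as I did

    I've also since read that ZFS keeps label copies at the end of the disk as well, which would explain why the full erase (or 'wipefs -a /dev/sdX') is the surer bet.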

    • Official Post

    I got my 4TB drive and used Clonezilla to clone the data on a 1TB drive over to the 4TB

    You should try using dd. OK, it's all CLI, but it's short and it works; though, as above, you would still have to use GParted to extend the 1TB partition on the 4TB drive.
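    Something along these lines, from memory (source and destination are examples; get them the right way round):

    sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress    # whole-disk clone, padding read errors with zeros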

    • Official Post

    Given the age group your wife teaches, whenever I see this commercial, I think of your wife. (Especially after my rant about discipline in the classroom.) Play it for her. I'm sure she'll get a kick out of it. (Be sure to go full screen.)

    That put a smile on her face last night....and, as she said, "if only"...child-free hotels.

    • Official Post

    I never had a problem with it. I was using nas4free, and this gave you the option to use either software RAID or ZFS; as everything was configurable from the GUI it just worked, though it proved to be a steep learning curve. The one noticeable thing was that it 'appeared' to give me more space, but you could also apply compression, which never affected its use as it stored my movie collection. But I did learn about metadata after a scrub revealed a degraded array....removing the drive from the array, I connected it to a SATA-to-USB adapter and ran a number of tests....nothing, the drive reported as OK, so I removed the partitions....in essence I had a clear drive....put it back in the server, added it to the array, whereby it barfed; there was no way that drive was going back....I spent a number of hours on it......So, with the help of Google, I discovered it was the metadata that was written to the first 512k of the drive. So, back to the drawing board, and I found I could use dd to completely erase the drive....added it back to the array and all was good.
    With ZFS being part of nas4free, it was straightforward to work with and use.

    That's not, exactly, a glowing review of ZFS. The day-to-day chore of reading and writing files is a given; FAT16 could do that. But, from what I gather here, ZFS introduced a problem that otherwise didn't exist. In trying to be proactive, it seems that ZFS overstepped, by far, what a filesystem should be doing.
    Given yesterday's experience with BTRFS, I'm now a bit leery of both of them.
    (Admittedly, I had a substantial role in creating yesterday's episode. What I did is not typical use.)


    Other than automating a few things to check the disk(s) and keep the filesystem healthy, I'd rather ignore it. Don't get me wrong, I like the idea of maintenance and preventing problems before they crop up, with scrubs, warnings and the like, but having to get "hands on" with a filesystem seems, to me, like a step backward.


    So what was the outcome of the reinstalled drive in the Z array? If it didn't degrade the array again, ZFS would be demonstrating a clear inconsistency.


    Along other lines, the error report given to me by BTRFS yesterday was totally inadequate. If errors are correctable, great. But telling me there were 28,000 uncorrectable errors (other than letting me know "something" is seriously wrong) is not helpful. A list of affected files, if I wanted it, would be useful and would give an admin a place from which to plan something akin to a recovery.
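    I've since read that, for data errors at least, the kernel log is supposed to name the affected file, so something like this might have given me the list (assuming the errors were against data rather than metadata):

    dmesg | grep -i 'btrfs.*checksum error'    # scrub hits on file data usually log a path here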
    ____________________________________________________


    As I study this (admittedly from 10,000 feet), with info from developers and practical application scenarios from the server jocks (horror stories), we are far from having a filesystem that can be completely trusted with drive sizes larger than 1 or 2TB. In a home or small business environment, without enterprise hardware, arrays that aggregate drives into storage pools far larger actually make the situation worse. The only condition that makes it "seem" as if it's working is the relative rarity of problems. (Like car accidents: when it happens to someone else and all their data is lost, the rest of us stand back and think, "the poor slob".) However, as drive sizes increase, statistics suggest that the number of (undocumented) incidents must be increasing as well. What the numbers actually are would make an interesting dissertation for a Masters.


    What seems to be needed is a file system that assigns a checksum (a hash, etc.) that remains associated with an unchanged file permanently, even if it's copied off of the local machine. That way, with a backup server, the destination server could determine that a changed file is actually a corrupted version of the original file, refuse to overwrite the good copy, and inform the admin. If an "inter-server trust" were established, the corrupted copy could be corrected with the good backup copy from the backup server. Something like that would be a step forward. (While I didn't see it back in the day, something along these lines may now exist at the enterprise level.)
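    In the meantime, a poor man's version can be lashed up with ordinary tools: keep a checksum manifest alongside the data and verify it before letting a sync overwrite anything. A sketch, with example paths:

    find /srv/data -type f -exec sha256sum {} + > /root/data.sha256    # build a manifest for the share
    sha256sum -c --quiet /root/data.sha256                             # later: list only files whose content changed

    It doesn't follow the file around the way a filesystem-level hash would, but it would catch silent corruption between runs.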
    ____________________________________________________


    I did a re-scrub this morning with a good chunk of my data in place: 0 errors. I'll be finishing up duplicating my online servers' data soon. For now, I guess I'll be using BTRFS; maybe I'll upgrade it to BTRFS RAID1 for error correction, and with my backup servers I'll hope for the best. (Because, without jumping through hoops, there's little else that can be done with what's available to home users and small businesses.)


    As soon as I get just a bit more comfortable with OMV, with one or two successful client disaster recoveries, I'll be looking forward to dumping Windows Server altogether. (After all, when it comes to file corruption, NTFS has nothing to offer.)

    • Official Post

    So what was the outcome of the reinstalled drive in the Z array? If it didn't degrade the array again, ZFS would be demonstrating a clear inconsistency.

    It's still working!! In fact, it's in the current RAID 5 setup....when I added it back to the then-ZFS RAID 5, it just resilvered (after running dd). I decided the issue was with the metadata written to the drive rather than the physical drive itself.
    What you would expect, when it shows as degraded, is the drive failing to sync due to a failed servo...that has a distinctive noise....one of the best things I ever purchased was SpinRite; it has rescued so many drives, not just for me but for friends. Where a drive has potential bad sectors, it can move the data, making it recoverable.



    What seems to be needed is a file system that assigns a checksum (a hash, etc.) that remains associated with an unchanged file permanently, even if it's copied off of the local machine. That way, with a backup server, the destination server could determine that a changed file is actually a corrupted version of the original file, refuse to overwrite the good copy, and inform the admin. If an "inter-server trust" were established, the corrupted copy could be corrected with the good backup copy from the backup server. Something like that would be a step forward. (While I didn't see it back in the day, something along these lines may now exist at the enterprise level.)

    Sounds promising; have you started coding it yet? :)
