Hello
The pendrive on which my OMV system was installed got damaged.
What should I do now to get access to my data as quickly as possible?
For example, can I install a new system and mount the data disks?
damaged pendrive
-
-
Pretty much, yeah.
-
"Pretty much" ??
I installed the new system on an SSD (RAID1), with the data disks disconnected.
The installation went smoothly, but the system does not start:
the message says the array was not found, even though the disks are connected and visible in the BIOS.
-
I installed the new system on an SSD (RAID1)
RAID1 for the OS?
Does the system start, when the data disks are not connected?
If that is the case, log in as root and run update-grub. Then connect the data drives and boot. What exactly does the error message say?
I guess the problem is that the OS disk got assigned a different name.
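A minimal sketch of what that could look like (assuming a Debian-based OMV install with GRUB; the device names are examples, adjust them to your system):

```shell
# Boot with the data disks disconnected and log in as root.

# Regenerate the GRUB configuration so it reflects the
# current device layout:
update-grub

# Check whether /etc/fstab refers to disks by volatile
# names (/dev/sda...) instead of stable UUIDs; plain names
# can shift when disks are added or removed:
grep -v '^#' /etc/fstab

# Compare against the actual UUIDs of the block devices:
blkid
```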
-
"Pretty much" ??
I installed the new system on an SSD (RAID1), with the data disks disconnected.
The installation went smoothly, but the system does not start:
the message says the array was not found, even though the disks are connected and visible in the BIOS.
I meant that the way you described it is the way to go; sorry if I accidentally shocked you. @macom probably gave you the right hint.
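For the "array was not found" message, a few hedged diagnostic steps (assuming Linux software RAID via mdadm, which OMV uses; /dev/md0 is an example name):

```shell
# See which arrays the kernel currently knows about:
cat /proc/mdstat

# Scan all disks for md superblocks and try to assemble
# any arrays found:
mdadm --assemble --scan

# Inspect a specific array (adjust the device name):
mdadm --detail /dev/md0

# If the array assembles fine but is missing at boot,
# record it in mdadm.conf and rebuild the initramfs:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```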
@macom: do you think RAID1 as a system drive is bad (ignoring the two-disk RAID10 option)? I run it like that too; SSDs are so cheap I don't mind having an additional one running.
-
I just wonder if it is worth the expense.
-
RAID1 as a system drive is bad (ignoring the two-disk RAID10 option)? I run it like that too; SSDs are so cheap I don't mind having an additional one running.
Same SSD model and same firmware version?
-
RAID1 for the operating system?
Why not?
Does the system start, when the data disks are not connected?
No
What exactly does the error message say?
Tomorrow I will send the message.
I guess it is the problem, that the OS disk got assigned a different name.
Yes. I left the default settings during installation
Thank you for your help.
-
Actually, most times yes. I know it's not a real layer of security, but it has already saved me a lot of time on different systems, especially since I can just let them keep running in a degraded state until I find time to fix it. I try to use a checksumming filesystem for it when possible, which at least helps against silent corruption in RAID1.
I know a lot of the issues, but in my opinion it mostly comes down to RAID1 not being as secure as many people think; still, it may be helpful at some point. Am I missing something severe, like a real downside besides the additional hardware?
-
Am I missing something severe, like a real downside besides the additional hardware?
With a RAID1 or mirror made of two identical SSDs with the same firmware revision? IMO yes, since SSDs die for entirely different reasons than HDDs. Imagine a firmware bug that strikes after n hours of operation or after n GB written (both famous Crucial SSD firmware bugs from some time ago). If your RAID1/mirror consists of two SSDs that will behave absolutely identically, then the attempt is close to useless.
Using two different SSDs in such a setup provides a lot more theoretical availability... but as we can see here every other day, a lot of those RAID1 setups simply ruin availability when something goes wrong, like a power loss or whatever...
-
Right, I am aware of this, but it's still not worse than no RAID, is it? For my private use it's less about real availability and more about saving time when a drive fails. I am totally aware it's not a real layer of security, and I do have backups too.
-
it's still not worse than no RAID, is it?
Not entirely sure, since added complexity always has downsides too. What are the reasons an SSD fails?
- physical damage (fire, water, electric shocks, whatever). Most probably both drives are affected at the same time
- firmware bugs. With identical drives/firmwares both drives are affected at the same time
- drive worn out from usage. When using good SSDs (which provide a wear-out indicator/percentage via SMART) this is no issue, since you replace the drive before it dies
I'm about to set up a new Proxmox/fileserver combo for a customer soon, and we're definitely not using a zmirror/RAID1 for the OS drive (but we will send snapshots of the boot drive to another SSD in another location, to be physically swapped in if something happens to the OS drive).
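A rough sketch of that snapshot approach (the pool/dataset names, snapshot names, and the backuphost are examples for illustration, not details from this thread):

```shell
# Take a recursive snapshot of the boot dataset:
zfs snapshot -r rpool/ROOT@weekly1

# Initial full replication to another machine:
zfs send -R rpool/ROOT@weekly1 | ssh backuphost zfs recv -F backup/ROOT

# Later, send only the delta between two snapshots:
zfs send -R -i rpool/ROOT@weekly1 rpool/ROOT@weekly2 | \
  ssh backuphost zfs recv backup/ROOT
```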
-
I have seen individual SSDs fail before, but I agree it's not as common as with spinning disks.
-
I have seen individual SSDs fail before
My point was that there is no reason to buy crappy SSDs (cheap Chinese stuff, used SSDs on eBay or AliExpress). And if you buy a quality SSD, then you monitor the wear-out indicator and don't wait until the SSD fails, but replace it before that. All quality SSD vendors expose the remaining life expectancy via SMART, so it's just a matter of monitoring the specific SMART attributes and you're done. If an SSD is not equipped with this feature, I call it a crappy SSD.
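For reference, a hedged sketch of that monitoring with smartmontools (device names are examples; the exact attribute names vary by vendor):

```shell
# Full SMART report for a SATA SSD:
smartctl -a /dev/sda

# Just the vendor-specific attribute table; wear is often
# exposed as e.g. 177 Wear_Leveling_Count or
# 231 SSD_Life_Left, depending on the vendor:
smartctl -A /dev/sda

# NVMe drives report wear directly as "Percentage Used":
smartctl -a /dev/nvme0 | grep -i 'percentage used'
```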
-
I do agree on that one; SMART is of great value on SSDs. Still, I have seen SSDs fail randomly, including some of data-center grade. I think we should leave the discussion for now. I agree in general that the benefit of RAID1 is low.
-
What exactly does the error message say?
-
It basically tells you what to do: your filesystem needs to be repaired, so you need to run fsck. Do you have a working GNU/Linux machine you can plug the pendrive into?
It may, however, leave you with a partial loss of data on sda1.
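A minimal sketch of that repair (assuming the pendrive shows up as /dev/sda1 on the helper machine, as in the message above; verify the device name with lsblk first):

```shell
# Identify the pendrive and its partitions:
lsblk -f

# Make sure the partition is not mounted (desktops often
# auto-mount USB drives; the error output is suppressed
# in case it was not mounted at all):
umount /dev/sda1 2>/dev/null

# Repair the filesystem; -y answers yes to all repair
# prompts:
fsck -y /dev/sda1
```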
Have you been able to solve your issue? If yes, it would be kind of you to let others know how you did it. It might help others.