It worked!
The array is back clean and visible in the UI.
I have mounted the filesystem and it all seems good now.
Thank you a lot for solving my issue.
Inzeback
Hi tiste, thanks for helping me.
The output of the commands is:
mdadm --examine /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7ac6251d:aa39e657:efbbf6cd:b7bca99b
Name : omv:raid (local to host omv)
Creation Time : Sun Nov 6 15:00:41 2011
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 1e802b8d:e276ff02:1bb15eae:c30f1dc5
Update Time : Mon Feb 2 15:19:07 2015
Checksum : 9eeb73ea - correct
Events : 367
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing)
root@omv:~# mdadm --examine /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7ac6251d:aa39e657:efbbf6cd:b7bca99b
Name : omv:raid (local to host omv)
Creation Time : Sun Nov 6 15:00:41 2011
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 80e363b7:335a103e:9c81e130:ef695269
Update Time : Mon Feb 2 15:51:11 2015
Checksum : 2aececcc - correct
Events : 367
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing)
Stopping the array went OK,
root@omv:~# mdadm -A --force /dev/md127 /dev/sd[fe]
mdadm: forcing event count in /dev/sdf(3) from 339 upto 367
mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdf
mdadm: Marking array /dev/md127 as 'clean'
mdadm: /dev/md127 assembled from 2 drives - not enough to start the array.
but the forced assemble failed.
Any chance to do more?
Hi,
Could you help me get my RAID5 array back up and running? My issue is similar to the other thread, but I cannot get it to work and I cannot lose my data (kids' photos)!
Scenario:
running the latest version
RAID5, 4 disks up and running clean (+ 3 other non-RAID disks)
I changed a SATA cable to remove an ATA33 error and booted the machine with an unplugged cable => result: a failed array with 2/4 disks up
I halted the machine, replugged the cable, rebooted and damn, the RAID array was gone
blkid
/dev/sda: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="20854919-ab30-ecd7-37f8-16182b2d3d7e" LABEL="omv:raid" TYPE="linux_raid_member"
/dev/sdc1: UUID="e371bdc4-f48e-4dfd-bb7f-7a714b3d3876" TYPE="ext4"
/dev/sdc5: UUID="cf803d90-9dc5-4cc0-b9d3-24696f3ac4b3" TYPE="swap"
/dev/sdb1: LABEL="sdd" UUID="c5c7758b-fdc7-4518-bdee-862606b01f3b" TYPE="ext4"
/dev/sdd: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="e4be9717-891b-1e5f-ec0a-b0225c27d310" LABEL="omv:raid" TYPE="linux_raid_member"
/dev/sdf: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="1e802b8d-e276-ff02-1bb1-5eaec30f1dc5" LABEL="omv:raid" TYPE="linux_raid_member"
/dev/sde: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="80e363b7-335a-103e-9c81-e130ef695269" LABEL="omv:raid" TYPE="linux_raid_member"
/dev/sdg1: LABEL="sdg" UUID="215b9648-518d-46a5-bd85-1808aee48560" TYPE="ext4"
root@omv:~# lsmod | grep raid
raid456 48453 0
async_raid6_recov 12574 1 raid456
async_memcpy 12387 2 async_raid6_recov,raid456
async_pq 12605 2 async_raid6_recov,raid456
async_xor 12422 3 async_pq,async_raid6_recov,raid456
async_tx 12604 5 async_xor,async_pq,async_memcpy,async_raid6_recov,raid456
raid6_pq 82624 2 async_pq,async_raid6_recov
md_mod 87742 1 raid456
/etc/mdadm/mdadm.conf
root@omv:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
root@omv:~#
root@omv:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : inactive sda[0] sdd[2]
3907027120 blocks super 1.2
unused devices: <none>
/etc/default/mdadm
root@omv:~# cat /etc/default/mdadm
# INITRDSTART:
# list of arrays (or 'all') to start automatically when the initial ramdisk
# loads. This list *must* include the array holding your root filesystem. Use
# 'none' to prevent any array from being started from the initial ramdisk.
#INITRDSTART='none'
# AUTOSTART:
# should mdadm start arrays listed in /etc/mdadm/mdadm.conf automatically
# during boot?
AUTOSTART=true
# AUTOCHECK:
# should mdadm run periodic redundancy checks over your arrays? See
# /etc/cron.d/mdadm.
AUTOCHECK=true
# START_DAEMON:
# should mdadm start the MD monitoring daemon during boot?
START_DAEMON=true
# DAEMON_OPTIONS:
# additional options to pass to the daemon.
DAEMON_OPTIONS="--syslog"
# VERBOSE:
# if this variable is set to true, mdadm will be a little more verbose e.g.
# when creating the initramfs.
VERBOSE=false
root@omv:~#
/etc/fstab
root@omv:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
# / was on /dev/sda1 during installation
UUID=e371bdc4-f48e-4dfd-bb7f-7a714b3d3876 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=cf803d90-9dc5-4cc0-b9d3-24696f3ac4b3 none swap sw 0 0
/dev/sdb1 /media/usb0 auto rw,user,noauto 0 0
tmpfs /tmp tmpfs defaults 0 0
# >>> [openmediavault]
UUID=cc1efb21-1911-4278-ab3c-6f1176770916 /media/cc1efb21-1911-4278-ab3c-6f1176770916 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
UUID=c5c7758b-fdc7-4518-bdee-862606b01f3b /media/c5c7758b-fdc7-4518-bdee-862606b01f3b ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
UUID=215b9648-518d-46a5-bd85-1808aee48560 /media/215b9648-518d-46a5-bd85-1808aee48560 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# <<< [openmediavault]
root@omv:~# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sun Nov 6 15:00:41 2011
Raid Level : raid5
Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Feb 2 15:56:39 2015
State : active, FAILED, Not Started
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : omv:raid (local to host omv)
UUID : 7ac6251d:aa39e657:efbbf6cd:b7bca99b
Events : 398
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 0 0 1 removed
2 8 48 2 active sync /dev/sdd
3 0 0 3 removed
root@omv:~#
boot log extract
Mon Feb 2 16:13:59 2015: Setting parameters of disc: /dev/sdc.
Mon Feb 2 16:13:59 2015: /dev/sda.
Mon Feb 2 16:13:59 2015: /dev/sde.
Mon Feb 2 16:13:59 2015: /dev/sdd.
Mon Feb 2 16:13:59 2015: /dev/sdf.
Mon Feb 2 16:13:59 2015: /dev/sda.
Mon Feb 2 16:13:59 2015: /dev/sdb.
Mon Feb 2 16:13:59 2015: /dev/sdf.
Mon Feb 2 16:13:59 2015: /dev/sdd.
Mon Feb 2 16:13:59 2015: /dev/sdg.
Mon Feb 2 16:13:59 2015: /dev/sde.
Mon Feb 2 16:13:59 2015: /dev/sdc.
Mon Feb 2 16:13:59 2015: Setting preliminary keymap...done.
Mon Feb 2 16:13:59 2015: Activating swap...done.
Mon Feb 2 16:13:59 2015: Checking root file system...fsck from util-linux 2.20.1
Mon Feb 2 16:13:59 2015: /dev/sdc1: clean, 43461/6021120 files, 744207/24057344 blocks (check in 5 mounts)
Mon Feb 2 16:13:59 2015: done.
Mon Feb 2 16:13:59 2015: Loading kernel module loop.
Mon Feb 2 16:13:59 2015: Cleaning up temporary files... /tmp /lib/init/rw.
Mon Feb 2 16:13:59 2015: Assembling MD array mdraid_0...failed (not enough devices).
Mon Feb 2 16:13:59 2015: Assembling MD arrays...done (no arrays found in config file or automatically).
Mon Feb 2 16:14:00 2015: Setting up LVM Volume Groups... No volume groups found
Mon Feb 2 16:14:00 2015: No volume groups found
Mon Feb 2 16:14:00 2015: done.
Mon Feb 2 16:14:00 2015: Activating lvm and md swap...done.
Mon Feb 2 16:14:00 2015: Checking file systems...fsck from util-linux 2.20.1
Mon Feb 2 16:14:00 2015: sdg: clean, 12/244195328 files, 15387403/976754385 blocks
Mon Feb 2 16:14:00 2015: sdd: clean, 9937/244195328 files, 449433043/976754385 blocks
Mon Feb 2 16:14:01 2015: done.
It seems part of the RAID array is still there, but it is neither mounting nor being detected.
Thank you for your help
Inzeback
I made the switch today to the copy.com service instead of Dropbox.
Here is the sequence I followed to install the daemon on my OMV machine:
log in via SSH as root on my headless machine
cd /tmp
wget http://copy.com/install/linux/Copy.tgz
tar zxf Copy.tgz
rm Copy.tgz
cp -r copy /opt/.copy.com
# if your system is 32-bit (i386), use /opt/.copy.com/x86 instead
cd /opt/.copy.com/x86_64
mkdir /media/***uuidofyourdisk***/***folderyouwanttosync***
./CopyConsole -username=***yourusernameemail*** -root=/media/***uuidofyourdisk***/***folderyouwanttosync***
Then we create the init script /etc/init.d/copyconsole to launch the daemon at each system boot,
and copy/paste this content:
#!/bin/sh
# CopyConsole
### BEGIN INIT INFO
# Provides: copyconsole
# Required-Start: $remote_fs $syslog $all
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
start() {
echo "Starting CopyConsole..."
if [ -x /opt/.copy.com/x86_64/CopyConsole ]; then
HOME="/media/***uuidofyourdisk***/***folderyouwanttosync***/" start-stop-daemon -b -o -c root -S -u root -x /opt/.copy.com/x86_64/CopyConsole -- -daemon
fi
}
stop() {
echo "Stopping CopyConsole..."
if [ -x /opt/.copy.com/x86_64/CopyConsole ]; then
start-stop-daemon -o -c root -K -u root -x /opt/.copy.com/x86_64/CopyConsole
fi
}
status() {
dbpid=`pgrep -u root CopyConsole`
if [ -z "$dbpid" ] ; then
echo "CopyConsole for USER root: not running."
else
echo "CopyConsole for USER root: running (pid $dbpid)"
fi
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
status
;;
*)
echo "Usage: /etc/init.d/copyconsole {start|stop|restart|status}"
exit 1
esac
exit 0
Then make the script executable and add it to the boot sequence.
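On a Debian-based OMV install, the usual commands for this step would be something like (assuming the script was saved as /etc/init.d/copyconsole):

```shell
# make the init script executable
chmod 755 /etc/init.d/copyconsole
# register the script in the default runlevels
update-rc.d copyconsole defaults
```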
I found that all the permissions created by the daemon are assigned only to the root user, and not to the users of the shares from the web environment.
The only way I found to change that is to adjust the permissions on the directory manually. If you have other ideas, I am willing to take them.
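For example, after a sync I reset the permissions with something like this (the group and mode are only examples, adjust them to your own share setup):

```shell
# re-open the synced folder to the share users
chown -R root:users /media/***uuidofyourdisk***/***folderyouwanttosync***
chmod -R 775 /media/***uuidofyourdisk***/***folderyouwanttosync***
```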
Please report any errors or mistakes to me.
Inzeback