Invalid RPC response - no filesystem selectable in the GUI



      Hello community,

      I need your help. After swapping an HDD in my RAID 5 (3 HDDs), I sent the RAID 5 into recovery with mdadm. After it finished, the array came back up under a different name. All data is still there; so far, so good.
      I then got error messages that the mount point for the "old" RAID no longer exists, so I deleted all shares that pointed to the old RAID or changed them to point to the new one.
      Since then, the GUI no longer lists any filesystem and I cannot select a device either; I always get "invalid RPC response".
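      (Not from the original post: a sketch of how to verify what the rebuilt array is called now. These are standard mdadm/udev commands; the device name /dev/md127 and the old name OMVNAS:R5 are taken from the outputs quoted below, and the exact paths may differ on your system.)

      ```shell
      # Show the name and UUID recorded in the md superblock; udev derives
      # the /dev/disk/by-id/md-name-<host>:<name> symlink from this.
      mdadm --detail /dev/md127 | grep -E 'Name|UUID|State'

      # List the symlinks udev actually created for the array.
      ls -l /dev/disk/by-id/ | grep -i md

      # Compare with what mdadm.conf expects; if the names differ, the old
      # by-id path used in /etc/fstab will never appear.
      grep ARRAY /etc/mdadm/mdadm.conf
      ```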

      Source Code

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid5 sda[0] sdb[1] sdc[3]
            7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
            bitmap: 0/30 pages [0KB], 65536KB chunk
      unused devices: <none>




      Source Code

      major minor    #blocks  name
        8      0  3907018584  sda
        8     32  3907018584  sdc
        8     16  3907018584  sdb
        8     48  2930266584  sdd
        8     49  2930265543  sdd1
        8     64   976762584  sde
        8     65   968651776  sde1
        8     66           1  sde2
        8     69     8107008  sde5
        9    127  7813774336  md127
        8     80   976761856  sdf
        8     81   976759808  sdf1

      Source Code

      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=5f859e2a-0cbc-4cfc-9a96-12d39c467e1f / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      #UUID=048b39dd-4c57-451f-89c6-829229a75c01 none swap sw 0 0
      # >>> [openmediavault]
      /dev/disk/by-label/3tb /srv/dev-disk-by-label-3tb ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-id/md-name-OMVNAS:R5 /srv/dev-disk-by-id-md-name-OMVNAS-R5 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-label/Raid /srv/dev-disk-by-label-Raid ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /srv/dev-disk-by-id-md-name-OMVNAS-R5/moovie /export/moovie none bind,nofail 0 0
      /srv/dev-disk-by-id-md-name-OMVNAS-R5/music /export/music none bind,nofail 0 0
      # <<< [openmediavault]
      tmpfs /tmp tmpfs defaults 0 0
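      (Not from the original post: note the by-id line in the fstab above. `/dev/disk/by-id/md-name-OMVNAS:R5` refers to the old array name; if the rebuilt array no longer carries that name, systemd waits for a device that never appears. A stop-gap sketch for taking the stale entry out of the way, assuming the data is reachable under the new name. In openmediavault the section between the `>>> [openmediavault]` markers is generated from the config database, so a manual edit only lasts until the next deploy; the proper fix is removing the old filesystem entry from the OMV configuration.)

      ```shell
      # Stop-gap only: comment out the stale mount so boot no longer times out.
      sed -i 's|^/dev/disk/by-id/md-name-OMVNAS:R5 |#&|' /etc/fstab

      # Make systemd pick up the changed fstab and retry the remaining mounts.
      systemctl daemon-reload
      mount -a
      ```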

      Source Code

      Feb 06 21:10:30 OMVNAS systemd[1]: systemd-fsck@dev-disk-by\x2did-md\x2dname\x2dOMVNAS:R5.service: Job systemd-fsck@dev-disk-by\x2did-md\x2dname\x2dOMVNAS:R5.service/start failed with resul
      Feb 06 21:10:30 OMVNAS systemd[1]: dev-disk-by\x2did-md\x2dname\x2dOMVNAS:R5.device: Job dev-disk-by\x2did-md\x2dname\x2dOMVNAS:R5.device/start failed with result 'timeout'.
      Feb 06 21:10:58 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:11:29 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:11:59 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:12:29 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:12:59 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:13:30 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:14:00 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:14:01 OMVNAS CRON[19964]: pam_unix(cron:session): session opened for user root by (uid=0)
      Feb 06 21:14:01 OMVNAS CRON[19965]: (root) CMD (sleep 38 ;
      Feb 06 21:14:30 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:14:39 OMVNAS CRON[19964]: pam_unix(cron:session): session closed for user root
      Feb 06 21:15:01 OMVNAS monit[1234]: 'mountpoint_srv_dev-disk-by-id-md-name-OMVNAS-R5' status failed (1) -- /srv/dev-disk-by-id-md-name-OMVNAS-R5 is not a mountpoint
      Feb 06 21:15:01 OMVNAS CRON[21498]: pam_unix(cron:session): session opened for user root by (uid=0)
      Feb 06 21:15:01 OMVNAS CRON[21497]: pam_unix(cron:session): session opened for user root by (uid=0)
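      (Not from the original post: in short, the journal above shows systemd timing out while waiting for the device unit `dev-disk-by\x2did-md\x2dname\x2dOMVNAS:R5.device`, i.e. for `/dev/disk/by-id/md-name-OMVNAS:R5`, after which monit keeps reporting that the mount point is empty. The `\x2d` sequences are just systemd's escaping of hyphens. A quick check whether that symlink still exists, with the paths taken from the log; this is a diagnostic sketch, not a definitive fix:)

      ```shell
      # If the rebuild changed the array name, the old by-id link is gone
      # and only a link for the new name will show up here.
      ls -l /dev/disk/by-id/ | grep -i md-name

      # The device unit from the log can also be inspected directly
      # (single quotes keep the \x2d escapes intact).
      systemctl status 'dev-disk-by\x2did-md\x2dname\x2dOMVNAS:R5.device'
      ```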
      The Nextcloud instance, which lives on a single HDD, is reachable as usual. However, the 3 TB HDD is not selectable either.
      I hope I have managed to describe the problem reasonably well.

      Thanks for your replies

      Robin