I initially posted in another thread, but I was hitting a very similar issue: 3 HDDs were showing errors in dmesg, like the ones below. Only 2 of the 3 are actually in use, and the shares served from those 2 problematic HDDs were the ones affected.
2025-01-28T21:54:55+0200 nas kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.SAT1.SPT0._GTF.DSSP], AE_NOT_FOUND (20240322/psargs-332)
2025-01-28T21:54:55+0200 nas kernel:
2025-01-28T21:54:55+0200 nas kernel: No Local Variables are initialized for Method [_GTF]
2025-01-28T21:54:55+0200 nas kernel:
2025-01-28T21:54:55+0200 nas kernel: No Arguments are initialized for method [_GTF]
2025-01-28T21:54:55+0200 nas kernel:
2025-01-28T21:54:55+0200 nas kernel: ACPI Error: Aborting method \_SB.PCI0.SAT1.SPT0._GTF due to previous error (AE_NOT_FOUND) (20240322/psparse-529)
2025-01-28T21:54:55+0200 nas kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.SAT1.SPT0._GTF.DSSP], AE_NOT_FOUND (20240322/psargs-332)
2025-01-28T21:54:55+0200 nas kernel:
2025-01-28T21:54:55+0200 nas kernel: No Local Variables are initialized for Method [_GTF]
2025-01-28T21:54:55+0200 nas kernel:
2025-01-28T21:54:55+0200 nas kernel: No Arguments are initialized for method [_GTF]
2025-01-28T21:54:55+0200 nas kernel:
2025-01-28T21:54:55+0200 nas kernel: ACPI Error: Aborting method \_SB.PCI0.SAT1.SPT0._GTF due to previous error (AE_NOT_FOUND) (20240322/psparse-529)
2025-01-28T21:54:55+0200 nas kernel: ata5.00: configured for UDMA/133
2025-01-28T21:54:55+0200 nas kernel: sd 4:0:0:0: [sdd] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=26s
2025-01-28T21:54:55+0200 nas kernel: sd 4:0:0:0: [sdd] tag#15 Sense Key : Illegal Request [current]
2025-01-28T21:54:55+0200 nas kernel: sd 4:0:0:0: [sdd] tag#15 Add. Sense: Unaligned write command
2025-01-28T21:54:55+0200 nas kernel: sd 4:0:0:0: [sdd] tag#15 CDB: Read(16) 88 00 00 00 00 00 00 00 08 18 00 00 00 08 00 00
2025-01-28T21:54:55+0200 nas kernel: I/O error, dev sdd, sector 2072 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
2025-01-28T21:54:55+0200 nas kernel: ata5: EH complete
2025-01-28T21:54:55+0200 nas kernel: EXT4-fs (sdd1): mounted filesystem 6458ae7e-d39d-42dc-af30-0b1d6adc8645 r/w with ordered data mode. Quota mode: journalled.
2025-01-28T21:54:56+0200 nas kernel: ata5.00: exception Emask 0x50 SAct 0x2600000 SErr 0x4090800 action 0xe frozen
2025-01-28T21:54:56+0200 nas kernel: ata5.00: irq_stat 0x00400040, connection status changed
2025-01-28T21:54:56+0200 nas kernel: ata5: SError: { HostInt PHYRdyChg 10B8B DevExch }
2025-01-28T21:54:56+0200 nas kernel: ata5.00: failed command: READ FPDMA QUEUED
2025-01-28T21:54:56+0200 nas kernel: ata5.00: cmd 60/08:a8:00:08:00/00:00:03:00:00/40 tag 21 ncq dma 4096 in
res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x50 (ATA bus error)
2025-01-28T21:54:56+0200 nas kernel: ata5.00: status: { DRDY }
2025-01-28T21:54:56+0200 nas kernel: ata5.00: failed command: READ FPDMA QUEUED
2025-01-28T21:54:56+0200 nas kernel: ata5.00: cmd 60/08:b0:00:08:40/00:00:03:00:00/40 tag 22 ncq dma 4096 in
res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x50 (ATA bus error)
2025-01-28T21:54:56+0200 nas kernel: ata5.00: status: { DRDY }
2025-01-28T21:54:56+0200 nas kernel: ata5.00: failed command: READ FPDMA QUEUED
2025-01-28T21:54:56+0200 nas kernel: ata5.00: cmd 60/08:c8:18:08:c0/00:00:03:00:00/40 tag 25 ncq dma 4096 in
res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x50 (ATA bus error)
2025-01-28T21:54:56+0200 nas kernel: ata5.00: status: { DRDY }
2025-01-28T21:54:56+0200 nas kernel: ata5: hard resetting link
2025-01-28T21:55:00+0200 nas kernel: ata4: link is slow to respond, please be patient (ready=0)
2025-01-28T21:55:02+0200 nas kernel: ata5: link is slow to respond, please be patient (ready=0)
2025-01-28T21:55:02+0200 nas kernel: ata5: SATA link down (SStatus 0 SControl 310)
2025-01-28T21:55:02+0200 nas kernel: ata5: hard resetting link
2025-01-28T21:55:02+0200 nas kernel: Initializing XFRM netlink socket
2025-01-28T21:55:03+0200 nas kernel: ata6: link is slow to respond, please be patient (ready=0)
2025-01-28T21:55:03+0200 nas kernel: ata6: SATA link down (SStatus 0 SControl 310)
2025-01-28T21:55:03+0200 nas kernel: ata6: limiting SATA link speed to 1.5 Gbps
2025-01-28T21:55:07+0200 nas kernel: ata5: link is slow to respond, please be patient (ready=0)
2025-01-28T21:55:09+0200 nas kernel: ata6: link is slow to respond, please be patient (ready=0)
2025-01-28T21:55:10+0200 nas kernel: ata4: link is slow to respond, please be patient (ready=0)
2025-01-28T21:55:12+0200 nas kernel: ata5: hard resetting link
2025-01-28T21:55:16+0200 nas kernel: ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
In my case the cause was a SATA power extension cable that the 3 HDDs were connected to. I use a large case and the cables from the power supply are not long enough to reach all the HDDs. Fortunately, my solution came from another forum where someone had posted the fix, so this may help other people too. The odd thing is that my NAS may have had this issue for a long time, but a kernel update is what actually revealed it; the same happened to the poster in the other forum (the issue only became visible after a kernel update).
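
For anyone seeing similar symptoms, a quick way to check before swapping hardware (just a sketch: ata5 and /dev/sdd are the names from my logs, so substitute your own, and smartctl needs the smartmontools package):

# kernel messages for the SATA ports and block I/O errors
dmesg | grep -E 'ata[0-9]+|I/O error'
# same thing from the current boot on a systemd-based system
journalctl -k -b | grep -E 'ata[0-9]+|I/O error'
# rule out a genuinely failing drive; CRC/interface errors tend to point
# at cabling or power rather than the disk itself
smartctl -a /dev/sdd | grep -Ei 'reallocated|pending|crc|result'

If SMART looks clean but the links keep dropping and resetting like in the log above, power or SATA cabling is a good suspect.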