Just upgraded to plugin 7.0.7, and the SnapRAID -> Arrays, Drives, and Rules pages all return a "500 - Invalid RPC Response" error on load.
I did notice an error while it was upgrading; from what I can remember it read something along the lines of '"paritynum" is not an object'.
I did hit an incorrect-parity-number issue when upgrading from OMV 6 to 7, but I fixed that manually a while ago; not sure if that could be related.
Syslog error:
2024-04-12T15:53:43.648978-06:00 nas omv-engined[2629466]: PHP Fatal error: Uncaught TypeError: OMV\Config\ConfigObject::setAssoc(): Argument #1 ($data) must be of type array, string given, called in /usr/share/php/openmediavault/config/database.inc on line 98 and defined in /usr/share/php/openmediavault/config/configobject.inc:248
2024-04-12T15:53:43.649362-06:00 nas omv-engined[2629466]: Stack trace:
2024-04-12T15:53:43.649466-06:00 nas omv-engined[2629466]: #0 /usr/share/php/openmediavault/config/database.inc(98): OMV\Config\ConfigObject->setAssoc()
2024-04-12T15:53:43.649540-06:00 nas omv-engined[2629466]: #1 /usr/share/openmediavault/engined/rpc/snapraid.inc(218): OMV\Config\Database->get()
2024-04-12T15:53:43.649606-06:00 nas omv-engined[2629466]: #2 [internal function]: OMVRpcServiceSnapRaid->getDriveList()
2024-04-12T15:53:43.649670-06:00 nas omv-engined[2629466]: #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
2024-04-12T15:53:43.649741-06:00 nas omv-engined[2629466]: #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()
2024-04-12T15:53:43.649802-06:00 nas omv-engined[2629466]: #5 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()
2024-04-12T15:53:43.649862-06:00 nas omv-engined[2629466]: #6 {main}
2024-04-12T15:53:43.649928-06:00 nas omv-engined[2629466]: thrown in /usr/share/php/openmediavault/config/configobject.inc on line 248
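For what it's worth, the TypeError lines up with the mangled config below: setAssoc() expects an array, but a text-only <drive>paritynum</drive> node deserializes to a bare string. A minimal Python sketch (an illustration of the shape of the problem, not the actual openmediavault deserializer):

```python
import xml.etree.ElementTree as ET

# A <drive> node with child elements maps naturally to a dict/assoc
# array, while a text-only <drive>paritynum</drive> node maps to the
# bare string "paritynum" -- which is exactly what setAssoc() rejects.
xml = """
<drives>
  <drive>paritynum</drive>
  <drive><uuid>f2a3aab3</uuid><name>parity1</name></drive>
</drives>
"""

def node_to_value(node):
    # Elements with children -> dict; leaf elements -> their text.
    if len(node):
        return {child.tag: node_to_value(child) for child in node}
    return node.text or ""

drives = [node_to_value(d) for d in ET.fromstring(xml).findall("drive")]
print([type(d).__name__ for d in drives])  # ['str', 'dict']
```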
EDIT: config.xml after the upgrade:
<snapraid>
<blocksize>256</blocksize>
<hashsize>16</hashsize>
<autosave>0</autosave>
<nohidden>0</nohidden>
<debug>0</debug>
<sendmail>1</sendmail>
<runscrub>1</runscrub>
<scrubfreq>7</scrubfreq>
<updthreshold>0</updthreshold>
<delthreshold>0</delthreshold>
<percentscrub>12</percentscrub>
<scrubpercent>100</scrubpercent>
<drives>
<drive>paritynum</drive>
<drive>paritynum</drive>
<drive>
<uuid>f2a3aab3-10dc-4ce0-8e57-14c3d7e2edf6</uuid>
<arrayref>114a88d2-53ed-11ed-8eee-b3f2573b9c38</arrayref>
<mntentref>42f0f779-026c-41d4-b366-ec8af06554fa</mntentref>
<name>parity1</name>
<label/>
<path>/srv/dev-disk-by-uuid-b4087104-2bef-4eb3-89db-3289206a9f0d</path>
<content>0</content>
<data>0</data>
<parity>1</parity>
<paritynum>1</paritynum>
<paritysplit>0</paritysplit>
</drive>
</drives>
<rules/>
<arrays>
<array>
<uuid>114a88d2-53ed-11ed-8eee-b3f2573b9c38</uuid>
<name>array1</name>
</array>
</arrays>
</snapraid>
I suspect the issue is the <drive>paritynum</drive> lines, which sit where my data drives should be. This was all working fine pre-upgrade.
EDIT 2: What config.xml looked like prior to the upgrade (taken from an fsa backup performed last night):
<snapraid>
<blocksize>256</blocksize>
<hashsize>16</hashsize>
<autosave>0</autosave>
<nohidden>0</nohidden>
<debug>0</debug>
<sendmail>1</sendmail>
<runscrub>1</runscrub>
<scrubfreq>7</scrubfreq>
<updthreshold>0</updthreshold>
<delthreshold>0</delthreshold>
<percentscrub>12</percentscrub>
<scrubpercent>100</scrubpercent>
<drives>
<drive>
<uuid>dbe482a4-9d7c-493c-81a1-8628cbe7c2fc</uuid>
<mntentref>23cd7b45-4005-4be5-8471-8f5781b149c0</mntentref>
<name>media1</name>
<label></label>
<path>/srv/dev-disk-by-uuid-7971638c-fbc6-4ac2-8ed6-85159355b2e2</path>
<content>1</content>
<data>1</data>
<parity>0</parity>
<arrayref>114a88d2-53ed-11ed-8eee-b3f2573b9c38</arrayref>
<paritynum>1</paritynum>
<paritysplit>0</paritysplit>
</drive>
<drive>
<uuid>568bfbd1-23de-4e69-88b9-c88145663922</uuid>
<mntentref>9a3fc399-d220-4a94-9aad-447cbd37cf29</mntentref>
<name>media2</name>
<label></label>
<path>/srv/dev-disk-by-uuid-8a296679-fd37-42d8-9cad-f49db16288a6</path>
<content>1</content>
<data>1</data>
<parity>0</parity>
<arrayref>114a88d2-53ed-11ed-8eee-b3f2573b9c38</arrayref>
<paritynum>1</paritynum>
<paritysplit>0</paritysplit>
</drive>
<drive>
<uuid>f2a3aab3-10dc-4ce0-8e57-14c3d7e2edf6</uuid>
<arrayref>114a88d2-53ed-11ed-8eee-b3f2573b9c38</arrayref>
<mntentref>42f0f779-026c-41d4-b366-ec8af06554fa</mntentref>
<name>parity1</name>
<label></label>
<path>/srv/dev-disk-by-uuid-b4087104-2bef-4eb3-89db-3289206a9f0d</path>
<content>0</content>
<data>0</data>
<parity>1</parity>
<paritynum>1</paritynum>
<paritysplit>0</paritysplit>
</drive>
</drives>
<rules></rules>
<arrays>
<array>
<uuid>114a88d2-53ed-11ed-8eee-b3f2573b9c38</uuid>
<name>array1</name>
</array>
</arrays>
</snapraid>
I'm assuming that restoring those drive configs from the backup would fix my issue, but I want to make the maintainer aware this is happening: for others who upgrade and hit this without a backup, I suspect those drive configs may be lost.
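In case it helps anyone hitting the same thing, here is a sketch of stripping the malformed entries, assuming the only bad nodes are the text-only <drive>paritynum</drive> ones. On a real box this would target /etc/openmediavault/config.xml, so back that file up first; this is just the idea, not a drop-in fix:

```python
import xml.etree.ElementTree as ET

# Toy stand-in for the broken section of config.xml shown above.
snippet = """
<snapraid>
  <drives>
    <drive>paritynum</drive>
    <drive>paritynum</drive>
    <drive><uuid>f2a3aab3</uuid><name>parity1</name></drive>
  </drives>
</snapraid>
"""

root = ET.fromstring(snippet)
drives_el = root.find("drives")
for drive in list(drives_el):
    if len(drive) == 0:  # no child elements => text-only, malformed
        drives_el.remove(drive)

print(len(drives_el.findall("drive")))  # 1 valid drive left
```

The remaining well-formed <drive> entries are untouched; the deleted data-drive entries would still need to be re-created from a backup or via the plugin UI.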