This is only a half-arsed general solution - while it catches and propagates
non-zero exits for all sensors, not all sensors will actually propagate failures
from the middle of their pipelines, and the controller only handles the mpd case here.
The right thing to do seems to be to save the current timestamp, mark every key
in the db with it on update, and then only use values that are fresh enough.
That needs a re-organization of how values are formatted by the sensors and
stored by the controller.
For error-handling we need to enable the pipefail option in all sensor scripts -
but then, how to parse and handle errors _well_? It starts to seem like shell/awk
are no longer an advantage here...
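For reference, pipefail changes what a pipeline's exit status means - without it only the last command counts, which is exactly how mid-pipeline sensor failures get swallowed (bash-specific sketch):

```shell
#!/bin/bash
# Without pipefail a pipeline's status is just the last command's, so a
# dying sensor at the head of "sensor | fmt | write" goes unnoticed.
false | cat
echo "without pipefail: $?"  # 0 - only cat's status counts
# With pipefail the status is the rightmost non-zero one in the pipeline.
set -o pipefail
false | cat
echo "with pipefail: $?"     # 1 - the failure from "false" propagates
```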
do
echo "OK ${msg_head} $line" > "$pipe"
done
+ # exit status of the first (sensor) command in the pipeline, not the pipe tail
+ cmd_exit_code=${PIPESTATUS[0]}
+ if [ "$cmd_exit_code" -ne 0 ]
+ then
+ echo "ERROR ${msg_head} NON_ZERO_EXIT_CODE $cmd_exit_code" > "$pipe"
+ fi
}
fork_watcher() {
/^OK/ { debug("OK line", $0) }
+# mpd pipeline died - drop all of its (now stale) keys from the db
+/^ERROR in:MPD.*NON_ZERO_EXIT_CODE/ {
+ for (mpd_key in db) {
+ if (mpd_key ~ /^mpd_/) {
+ delete db[mpd_key]
+ }
+ }
+ next
+}
+
/^ERROR/ {
debug("ERROR line", $0)
shift()