Now these are most likely due to failing sensors and, unless I either replace the sensor or the whole system, there isn't much I can do.
But - because the readings are so abnormally above the previous ones, is there something the software in Cumulus can do to prevent this faulty/erroneous data from being recorded?
For instance - this is the log for one of these spikes (note the fifth field jumping from 5.0 to 154.0 for a single minute):
Code:
05/11/10,18:50,20.6,34,5.0,7.9,14.8,111,0.0,0.0,1020.0,427.0,20.3,43,9.4,21.0,20.6
05/11/10,18:51,20.6,34,154.0,7.2,14.8,109,0.0,0.0,1020.0,427.0,20.3,43,0.0,21.0,20.6
05/11/10,18:52,20.6,34,5.0,0.0,14.8,0,0.0,0.0,1020.0,427.0,20.3,43,0.0,21.0,20.6
So - would it be possible (in whatever version of Cumulus) to have a 'filter' or cut-off applied to the data?
As an example - when FreeWx encounters data such as this, it actually records 'Err' (I think that is the wording he uses) and, because that is non-numeric, it doesn't get processed by other software utilising the data.
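Just to illustrate the kind of thing I mean - this is only a rough sketch in Python, nothing from Cumulus or FreeWx, and the 30-unit threshold is a number I made up - a simple spike filter could compare each reading against the last good one and record 'Err' when the jump is implausible:

Code:
# Rough sketch only, not actual Cumulus code. MAX_JUMP is a made-up
# threshold: the largest change believable between one-minute readings.
MAX_JUMP = 30.0

def filter_reading(new_value, last_good):
    """Return the reading as text, or 'Err' if it jumps implausibly."""
    if last_good is not None and abs(new_value - last_good) > MAX_JUMP:
        return "Err"  # non-numeric, so other software just skips it
    return "%.1f" % new_value

# The spike from the log above: 5.0 -> 154.0 -> back to 5.0
last = None
for reading in [5.0, 154.0, 5.0]:
    text = filter_reading(reading, last)
    if text != "Err":
        last = reading
    print(text)  # prints 5.0, Err, 5.0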
I also know, for example, that in about 4 weeks' time I am going to have a period of about 6 weeks where the outside temperature sensor will, at times, go from a reading of 35 C plus to suddenly showing -6 C - ie it appears to go open circuit on extremely hot days for a short period before resetting itself. Again - no spares available, so nothing I can do about it.
Being able to put limits on the data would help alleviate these problems (for example, there is no way in the world a gauge here (or anywhere else, for that matter) will ever record a temperature above 60 Celsius, and, where I live at least, it will never go below -3 or -4 C).
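And again just as a sketch of the idea (the limits below are the ones that make sense for my site - Cumulus would obviously need to make them configurable), a plain range check would catch both the open-circuit -6 C readings and absurd spike values:

Code:
# Sketch only - these limits suit my site, they are not suggested defaults.
TEMP_MIN = -4.0   # it never gets below about -3 or -4 C here
TEMP_MAX = 60.0   # no gauge anywhere will ever record above 60 C

def in_range(temp_c):
    """True if a temperature reading is physically plausible for this site."""
    return TEMP_MIN <= temp_c <= TEMP_MAX

print(in_range(35.2))   # True - a normal hot day
print(in_range(-6.0))   # False - the open-circuit reading, reject it
print(in_range(154.0))  # False - an absurd spike value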