Legacy Cumulus 1 release 1.9.4 (build 1099) - 28 November 2014
(a patch is available for 1.9.4 build 1099 that extends the date range of drop-down menus to 2030)
Download the Software (Cumulus MX / Cumulus 1 and other related items) from the Wiki
Topics about the Beta trials up to Build 3043, the last build by Cumulus's founder Steve Loft. By that time it was well out of Beta, but Steve wanted to keep the label until he had made a decision on his and Cumulus's future.
steve wrote: We can't expect great performance from the Pi, but the time taken to produce the graph data is long compared to everything else. I'll have a look some time to see if I can optimise it a bit.
Steve, I know henkg's generation of his extra file was pretty darned quick, but I find that this is actually the slowest part of the process. Generation of the standard graph files is really quite quick compared to creation of, in my case, the Saratoga files, which took 85 seconds. As I mentioned above, my latest timings (on 2007) were:
21 seconds to make the HTML files
15 s to make the graph data files
85 s to create the extra files (CUtags.txt and cumuluswebtags.txt) that are processed for tags
4 s to upload everything
I don't think I'm going to be able to do anything to make the web tag processing more efficient than it already is. Files that have hundreds of web tags in them are always going to take a lot of processing power, relatively speaking. If someone wants to write a new webtag parser, I'd be happy to use it. I based mine on this code - http://www.codeproject.com/Articles/279 ... ken-Parser. Once the parser has extracted a web tag, the function which returns the value is found using a hashtable.
Bear in mind that these are all 'debug' builds. I can't say whether 'release' builds will be any more efficient, however.
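The design Steve describes - a parser that extracts each webtag, then a hashtable lookup that maps the tag name to the function returning its value - can be sketched roughly as follows. This is a minimal Python illustration, not the MX code (which is C#); the tag names and values are invented, and real Cumulus webtags also take optional parameters such as format strings, which this sketch ignores.

```python
import re

# Hypothetical tag table: each webtag name maps to a function that
# returns its current value (the hashtable lookup Steve describes).
TAG_FUNCTIONS = {
    "temp": lambda: "21.3",
    "hum": lambda: "78",
}

# Cumulus webtags have the form <#tagname>
TAG_RE = re.compile(r"<#(\w+)>")

def process_webtags(template):
    """Replace each <#tag> in the template with the value from its lookup function."""
    def substitute(match):
        func = TAG_FUNCTIONS.get(match.group(1))
        # Leave unknown tags untouched rather than failing.
        return func() if func else match.group(0)
    return TAG_RE.sub(substitute, template)
```

A single compiled-regex pass like this is generally much faster than scanning the template character by character, which may be part of why the rewrite discussed below in the thread was so much quicker.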
I did some tests to compare timings of processing just those two files. This was just after starting Cumulus up, so wspddata and wdirdata are very small. If you have those tags in your files and don't have a use for them, you should take them out, as recommended.
On my (high end) Windows PC:
Cumulus 1: 0.025 s
Cumulus MX: 0.429 s
So it appears that the parser in Cumulus 1 is about 17 times faster than the one in MX.
On the Pi, it took 49 seconds.
I tried with a 'release' build and the time was about the same as the debug build, on both the Pi and the PC. I disabled the code which actually returns the web tag replacement, and the code which writes the file, and it didn't alter the timing significantly. So most of the time is spent in the parser.
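The elimination approach described above (disable a stage, re-time the whole run) can also be done by timing each stage directly. A minimal sketch in Python; the commented-out stage names are hypothetical, standing in for the real parse/lookup/write steps in MX:

```python
import time

def timed(label, func, *args):
    """Run func(*args), print how long it took, and return its result."""
    start = time.perf_counter()
    result = func(*args)
    print(f"{label}: {time.perf_counter() - start:.3f} s")
    return result

# Hypothetical pipeline - substitute the real stages:
# tokens = timed("parse", parse_webtags, template_text)
# values = timed("lookup", resolve_tags, tokens)
# timed("write", write_output, values)
```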
I've rewritten the parser. The time to process the two PHP webtag files on my PC is now down from 0.4 seconds to 0.04 seconds, and on the Pi down from 49 seconds to 2.3 seconds.
It seems to work OK, but there may be bugs to iron out later.
steve wrote: I've rewritten the parser. The time to process the two PHP webtag files on my PC is now down from 0.4 seconds to 0.04 seconds, and on the Pi down from 49 seconds to 2.3 seconds.
It seems to work OK, but there may be bugs to iron out later.
Whoa! Huge improvement!! Great job! This should help on my Pi.
I'm not sure if I'm going to keep this on my Pi or not. I'm worried about SD card corruption from so many writes. I have a Class 10 32 GB card, with LOADS of free space on it. I'm hoping that might help prolong its life. The last thing I want is corrupt data. I have an Intel NUC on order that I was going to use as a media PC; I might throw this on it. I'll have to see. It's either that or throw a USB 64 GB 2.5" SSD on the Pi or something to get a little better feel for reliability.
steve wrote:
I'm not sure if I'm going to keep this on my Pi or not. I'm worried about SD card corruption from so many writes. I have a Class 10 32 GB card, with LOADS of free space on it. I'm hoping that might help prolong its life. The last thing I want is corrupt data. I have an Intel NUC on order that I was going to use as a media PC; I might throw this on it. I'll have to see. It's either that or throw a USB 64 GB 2.5" SSD on the Pi or something to get a little better feel for reliability.
I'm planning to access and write the data to an NFS shared drive from a web server (well, a NAS with a web server 'app' installed on it) - that would side step writing to the SD card of the Pi.
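Setting up the NFS share mentioned above can look something like the following. The server address and export path here are hypothetical placeholders; substitute your NAS's actual details:

```shell
# Hypothetical NAS address and export path - adjust for your setup.
sudo mkdir -p /mnt/cumulus-data
sudo mount -t nfs 192.168.1.10:/export/cumulus /mnt/cumulus-data

# To make the mount permanent, add a line like this to /etc/fstab:
#   192.168.1.10:/export/cumulus  /mnt/cumulus-data  nfs  defaults,noatime  0  0
```

Pointing the Cumulus data directory at the mount keeps the frequent log writes off the Pi's SD card.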
In theory a modern, quality SD card should outlast your use for it, especially if you have lots of free space. But I found mine corrupted too regularly. I do not think it was from wear, though: they would reformat fine, although sometimes the partition table needed clearing; it seems to be something about the write process when power is lost. Since moving to a HDD I haven't had a single corruption. The SD card is just used for the boot partition.
I've rewritten the graph data creation code and increased its speed by what looks to be about a factor of 10.
Just after start up, with a logging interval of 10 minutes, my Pi is now generating the standard HTML files, the two PHP web tags files, and the graph data files, in less than 4 seconds.
Nice improvement. I might be able to run something else on the same RPi after all.
Next request: having CumulusMX run as a daemon, or at least be runnable as a background process. No hurry, Steve, in your own time.
My weather CumulusMX (3036)
Raspberry Pi: Wheezy
FineOffset WH1081
2015-01-17 15:20:00.677 Creating standard HTML files
2015-01-17 15:20:01.822 Done creating standard HTML files, creating graph data files
2015-01-17 15:20:02.591 Done creating graph data files, creating extra files
2015-01-17 15:20:02.608 Done creating extra files
2015-01-17 15:20:02.865 Uploading extra files
2015-01-17 15:20:02.882 Done uploading extra files, uploading standard files
2015-01-17 15:20:04.365 Done uploading standard files, uploading graph data files
2015-01-17 15:20:05.769 Done uploading graph data files
3009: Quite an improvement!
I notice in the latest release (3010) that you now require a Control-C to end the process. This means that CumulusMX will allow itself to be set as a background task and (probably, but not tested) run as a daemon - great stuff!
EDIT: A bit more testing shows that when 'sudo mono CumulusMX.exe' is run as a background task, it responds to a 'kill -SIGINT' by shutting down correctly, the same way that a Control-C does. Using SIGHUP instead of SIGINT just stops the process, i.e. no clean shut-down is shown in the diags file. Generally, a SIGHUP should force config files to be reread.
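Putting the above together, a rough sketch of running MX in the background and shutting it down cleanly with SIGINT (the log and pid file names here are arbitrary choices, not anything MX itself uses):

```shell
# Start CumulusMX detached from the terminal and capture its output.
nohup sudo mono CumulusMX.exe > cumulusmx.log 2>&1 &
echo $! > cumulusmx.pid

# Later, shut it down cleanly: SIGINT triggers the same
# clean shutdown as pressing Control-C in the foreground.
kill -SIGINT "$(cat cumulusmx.pid)"
```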
I just tried using Control-C on the running process and got an error message:
System.ApplicationException: Mutex is not owned
at System.Threading.Mutex.ReleaseMutex () [0x00000] in <filename unknown>:0
at (wrapper remoting-invoke-with-check) System.Threading.Mutex:ReleaseMutex ()
at System.Net.FtpClient.FtpClient.Dispose () [0x00000] in <filename unknown>:0
at System.Net.FtpClient.FtpClient.Finalize () [0x00000] in <filename unknown>:0
This may just have been bad timing (ftp) but diags file attached.
This only happened once out of 6 restarts.
Yes, I got the same stack trace once in my testing. I'm not sure there's anything I can do as it's coming from a thread in the ftp component. But I'll have a look.
What should I do with the SIGHUP? I can easily add code to trap it. Ignore it? What I can't do is re-read the Cumulus.ini file and act on the changes, the code just isn't written that way.