freddie wrote: ↑Fri 03 Jun 2022 7:19 pmTo enhance the already-available database updates performed at the realtime interval, the logging interval and at rollover, I would like to have a database update performed at MX start-up.
I would use this to store such (relatively) static data as release number, build number, latitude, longitude, units used, station name, etc.
I suspect that in both Microsoft Windows and Linux environments, the service start-up could easily be modified to initiate a batch process that triggers the necessary update of just the release number and build number. Even these need not be updated every time MX starts; strictly, they only need updating when a restart shows the build has changed.
The remainder of the static information virtually never changes (people rarely change location or units), so it would be better treated as a one-off installation task, certainly not repeated on every restart.
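The two ideas above (refresh release/build only when the build changes; treat the rest as a one-off installation task) can be sketched as a small start-up routine. This is only an illustration: the `station_meta` table, its columns, and the function name are all invented here and are not part of any real MX schema.

```python
import sqlite3

def update_static_meta(conn, release, build, latitude, longitude, units, station_name):
    # Hypothetical single-row table holding the (relatively) static station data.
    conn.execute("""CREATE TABLE IF NOT EXISTS station_meta (
        id INTEGER PRIMARY KEY CHECK (id = 1),
        release TEXT, build TEXT,
        latitude REAL, longitude REAL,
        units TEXT, station_name TEXT)""")
    row = conn.execute("SELECT build FROM station_meta WHERE id = 1").fetchone()
    if row is None:
        # One-off installation task: store everything once.
        conn.execute("INSERT INTO station_meta VALUES (1, ?, ?, ?, ?, ?, ?)",
                     (release, build, latitude, longitude, units, station_name))
    elif row[0] != build:
        # Restart after an upgrade: refresh only release and build.
        conn.execute("UPDATE station_meta SET release = ?, build = ? WHERE id = 1",
                     (release, build))
    conn.commit()
```

On an unchanged build the routine touches nothing, which matches the point that most restarts need no database work at all.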
MX can currently run SQL or external processes once a day (at rollover) or at more frequent intervals. For Custom SQL the interval can already be any number of seconds or various numbers of minutes; standard SQL and external processes are fixed to just the realtime interval or the logging interval. Your idea could be extended by implementing the ability to run SQL or external processes at the end/start of a month, the end/start of a year, the start of an MX session, or once per MX installation. One could argue for even more: MX could have a major redesign to become totally data-driven, as I describe below.
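To make the proposed extra trigger points concrete, here is a minimal sketch of how calendar-boundary triggers could be detected by comparing the current date with the date of the last run. None of these trigger names exist in MX today; they only illustrate the suggestion.

```python
from datetime import date

def due_triggers(today, last_run):
    """Return which of the proposed calendar triggers fire between two run dates."""
    fired = []
    # A month boundary was crossed if either the month or the year changed.
    if today.month != last_run.month or today.year != last_run.year:
        fired.append("start-of-month")
    # A year boundary was crossed only if the year changed.
    if today.year != last_run.year:
        fired.append("start-of-year")
    return fired
```

A "start of session" trigger would simply run once in MX's start-up path, and "once per installation" would record a flag (in the database or a file) so it never runs again.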
freddie wrote: ↑Fri 03 Jun 2022 7:19 pm
the JSON files used by the new-style default website to be database-driven.
The majority of the content in the JSON files used by the new-style default website is unchanged between successive uploads. In the case of data for graph plots, a little is added and a little is dropped off, so the in-between content merely changes position. For table data, a comparatively small amount actually changes despite the demand for frequent updates! So I think the use of .json files could end, replaced with database table reads, either programmed at various intervals or run only when a web page needs them.
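The "read from the database when needed" idea amounts to generating the same JSON shape on demand instead of uploading a file. A sketch, with an invented `graphdata` table standing in for whatever schema would actually hold the plot data (the real MX JSON file names and layouts differ):

```python
import json
import sqlite3

def graph_json(conn, points=24):
    # Fetch only the most recent points; the rolling window that previously
    # forced a whole-file re-upload becomes a simple LIMIT on the query.
    rows = conn.execute(
        "SELECT ts, temperature FROM graphdata ORDER BY ts DESC LIMIT ?",
        (points,)).fetchall()
    # Emit the same shape a pre-generated file would have, but built on request.
    return json.dumps({"temp": [[ts, t] for ts, t in reversed(rows)]})
```

A server-side script (PHP, Python, whatever the site uses) could return this from an endpoint, so the web page always sees current data with no upload step at all.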
If any database options are enabled, MX could assume people do indeed want a database-driven approach, with all web site processes interrogating database tables. MX would become "intelligent" and upload (to the appropriate database table row/column) only those items that have actually changed since the last upload, so the whole system becomes data-driven. Of course, changing what gets uploaded also implies changing how the default web pages work, but that offers flexibility: web pages could plot/display information picked on demand rather than predetermined.
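The "upload only what changed" behaviour is essentially a diff against the last upload. A minimal sketch, assuming an in-process cache (all names here are hypothetical, not MX internals):

```python
# Cache of the values most recently sent to the database, keyed by field name.
_last_sent = {}

def changed_items(current):
    """Return only the fields whose values differ from the last upload,
    and remember them so the next call diffs against this upload."""
    delta = {k: v for k, v in current.items() if _last_sent.get(k) != v}
    _last_sent.update(delta)
    return delta
```

Each returned field would then map to a single UPDATE of its table row/column, so a reading that has not moved costs nothing.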
I have made significant progress towards this objective, having used my own schema for database tables for well over a decade; the biggest step forward was MX implementing custom SQL updates, together with the related web page script changes.