Step 9 of 9
Continuous Verification and Adjustment
Why this is not a set-and-forget method, and how we keep it honest
Why This Step Exists
A forecast model built in 2019 and left untouched would have missed Solar Cycle 25's peak, which came in stronger than anyone expected. It would have underweighted the ENSO signal in 2022. Its analogue library would be getting progressively stale as new data accumulates but never gets added. After five years it's essentially running on a frozen snapshot of the world, and the world moved on.
The Jones method is a living model. It needs new data, checked outcomes, and adjustment when the evidence shows it's drifting. This is the step most people skip when they pick up the method. It's also the step that separates an honest model from one that serves its own conclusions. Anyone can build a model. The hard part is maintaining it honestly when it tells you things you'd rather not hear.
The Monthly Cycle
The ongoing work follows a regular cycle, the same five steps every month:
- Before the forecast month: produce the forecast and lock it. No changes once it's published.
- During the month: record actual weather daily from the Lyndhurst Hill station. Temperature, rainfall, wind observations where available.
- After the month ends: score the forecast using the Step 8 process, update the analogue library with the new month's outcome, and publish the comparison publicly on the accuracy page.
- Check: did any calibration adjustments applied in recent months help or hurt? If a new ENSO correction was applied three months ago, are the subsequent forecasts better? If not, the correction comes out.
- Note any unusual events. A storm larger than the forecast suggested. An unexpected dry spell in a forecast wet month. Investigate whether the miss has a structural cause or whether it was just weather being weather.
The cycle is not glamorous. It's mostly spreadsheet work and writing comparisons. But it's what keeps the method honest.
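To make the lock-and-score steps concrete, here is a minimal sketch in Python. Hashing the published file is one way to prove a forecast was locked before its month; the scoring rule shown (a rainfall "hit" within 25% of forecast) is an illustrative stand-in for the actual Step 8 rubric, and the field names and file path are hypothetical.

```python
# Sketch only: the hash proves the forecast existed unchanged before the
# month it covers. The 25% rainfall tolerance is a stand-in for the real
# Step 8 scoring rules, and the field names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def lock_forecast(forecast: dict, path: str) -> str:
    """Write the forecast to disk with a timestamp and a SHA-256 hash,
    so the published version can be shown to predate the month."""
    blob = json.dumps(forecast, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    with open(path, "w") as f:
        json.dump({"locked_at": datetime.now(timezone.utc).isoformat(),
                   "sha256": digest,
                   "forecast": forecast}, f, indent=2)
    return digest

def score_month(forecast: dict, observed: dict) -> dict:
    """Compare a locked forecast to observed monthly totals.
    Stand-in rule: a rainfall 'hit' lands within 25% of forecast."""
    rain_error = observed["rain_mm"] - forecast["rain_mm"]
    hit = abs(rain_error) <= 0.25 * forecast["rain_mm"]
    return {"rain_error_mm": round(rain_error, 1), "rain_hit": hit}
```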
Updating the Data Sources
The model depends on four data sources, each on a different update schedule:
| Data source | Update frequency | How we use it |
|---|---|---|
| Sunspot data (SILSO, Brussels) | Daily, monthly smoothed | Pull a fresh monthly download before each forecast cycle. Confirm cycle phase and activity level. |
| Planetary positions (JPL Horizons) | Computed in advance as needed | Pull ecliptic longitudes for the target month approximately 5 weeks ahead. No ongoing updates required. |
| ENSO state (BoM or NOAA) | Monthly | Check before each forecast. Drives the ENSO overlay calibration. A moderate or strong active event changes the base forecast. |
| Historical weather (BoM verified and quality checked) | Annual, in January | Add the previous year's data to the historical library once BoM finalises and publishes verified quality data. This typically happens in January for the previous calendar year. |
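A small pre-forecast check can enforce these cadences automatically. A minimal sketch, assuming the last download date of each source is tracked locally; the source names and age limits below are our own bookkeeping labels, not any real SILSO, NOAA, or BoM interface.

```python
# Sketch of a pre-forecast freshness check. Planetary positions are
# computed ahead of time, so they're deliberately not listed here.
# The age limits are illustrative, loosely mirroring the table above.
from datetime import date

MAX_AGE_DAYS = {
    "sunspots_silso": 35,   # fresh monthly download before each cycle
    "enso_state": 35,       # checked before each forecast
    "historical_bom": 400,  # annual January update of verified data
}

def stale_sources(last_updated: dict[str, date], today: date) -> list[str]:
    """Return the sources whose local copy is older than its cadence
    allows, so the forecast cycle can't run on outdated inputs."""
    return [name for name, limit in MAX_AGE_DAYS.items()
            if (today - last_updated[name]).days > limit]
```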
The annual January update is the most important one. Each new year of verified data gets added to the analogue library, which gradually improves the pattern matching. After 21 years of operation we have a reasonable library. After 40 years it'll be better still, if someone keeps running it.
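The update itself is mechanical. A minimal sketch, assuming the library lives in a flat CSV with one row per historical month; the column names are illustrative, since the real library's schema isn't spelled out in this guide.

```python
# Sketch of the January library append, assuming a flat CSV whose header
# row already exists. Column names are illustrative assumptions.
import csv

def append_year(library_path: str, year_rows: list[dict]) -> None:
    """Append twelve verified monthly records (one calendar year of
    BoM quality-checked data) to the analogue library."""
    assert len(year_rows) == 12, "expect one record per month"
    with open(library_path, "a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["year", "month", "rain_mm", "mean_max_c",
                           "enso_phase", "solar_phase"])
        writer.writerows(year_rows)
```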
When the Model Needs More Than Minor Calibration
Occasionally a pattern that has been reliable for 15 to 20 years simply stops working. Not one bad month. Six consecutive months trending in the wrong direction. That's a signal that something structural has changed.
Three situations where this eventually happens:
- The sunspot cycle changes character. Cycle 25 came in stronger than expected. Our historical analogues were drawn from weaker cycles, and the method was underestimating activity. We adjusted.
- Major long term circulation patterns shift. The Interdecadal Pacific Oscillation flipped from negative to positive around 2014. That changed rainfall patterns across eastern Australia. Years of analogue data from the negative phase were suddenly less representative of the new conditions.
- Station data has a quality issue. A blocked rain gauge, a sensor fault, a screen relocation. If the station data becomes unreliable, the comparison becomes meaningless. We've had two instances of this over 21 years. Both were caught by checking against Samford and community reports before we'd run too many bad comparison months.
When a pattern breaks down: document it. Note the month it started underperforming. Form a hypothesis. Test the hypothesis over the next 3 to 6 months before acting on it. Don't adjust the model based on two bad months. That's noise, not signal.
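The six-month rule is easy to make mechanical, which removes the temptation to see structure in two bad months. A minimal sketch, assuming signed monthly errors (observed minus forecast, a convention we're choosing here for illustration) are kept in a running list:

```python
# Sketch of the "six consecutive months in the wrong direction" check.
# The run length and the error sign convention are assumptions; the
# point is that a structural review is flagged only on a sustained
# one-sided run, never on two bad months.
def needs_structural_review(monthly_errors: list[float],
                            run_length: int = 6) -> bool:
    """True if the trailing `run_length` months of signed error all
    fall on the same side of zero, i.e. a sustained bias rather than
    ordinary month-to-month noise."""
    if len(monthly_errors) < run_length:
        return False
    tail = monthly_errors[-run_length:]
    return all(e > 0 for e in tail) or all(e < 0 for e in tail)
```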
And be honest about it publicly. We have published months where the notes said "the model missed this and we don't yet know why." That's not a failure in any meaningful sense. That's the process working correctly.
The Long Cycles You Cannot Fully Test
Two cycles in the Jones method operate on timescales that no living practitioner can fully verify from personal observation.
The 60-year combined cycle. We have 21 years of our own data. That's one third of one complete cycle. We can see the shape of it in Queensland's historical records going back to the late 1800s, but we haven't lived through a full cycle ourselves. Our confidence in the 60-year pattern comes from the historical record, not from personal verification.
The 178-year Jose cycle. No one alive has seen even one complete pass of it. The planetary configuration that defines it last completed around 1850. The Queensland observational record thins out considerably before 1890. For this cycle, you're relying entirely on Jones's original work and the extended historical records. We flag this explicitly in any forecast where the Jose cycle is a contributing factor.
Community Verification
One check we rely on that doesn't come from any instrument: feedback from the Dayboro community. Farmers, gardeners, people who watch the valley and have done so for decades.
If our forecast said a dry month and water was running off the paddocks on Greys Road all month, that's data. It confirms what the Lyndhurst Hill station recorded. It also occasionally catches station anomalies before we notice them in the numbers. A blocked rain gauge reads zero when it's actually been raining. Community reports catch that faster than any automated check.
We don't incorporate anecdotal reports into the model itself. The model runs on measured data. But community feedback helps verify the station data and occasionally reveals something the instrument record misses. Worth keeping that channel open.
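The blocked-gauge case is one of the few checks worth automating as a backstop to the community reports. A minimal sketch comparing daily totals against Samford; the three-day window and the 10 mm threshold are illustrative assumptions, not calibrated values.

```python
# Sketch of a neighbour-station cross-check for a possibly blocked
# gauge. Window length and rainfall threshold are illustrative.
def gauge_possibly_blocked(lyndhurst_mm: list[float],
                           samford_mm: list[float],
                           window: int = 3) -> bool:
    """Flag the gauge if Lyndhurst Hill reads zero for `window`
    consecutive days while nearby Samford records meaningful rain."""
    for i in range(len(lyndhurst_mm) - window + 1):
        ours = lyndhurst_mm[i:i + window]
        theirs = samford_mm[i:i + window]
        if all(v == 0.0 for v in ours) and sum(theirs) > 10.0:
            return True
    return False
```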
A lot of that conversation happens through Local Buddy, which is the community directory and events guide for Dayboro and the surrounding valley. If you want to stay connected to what's going on in the area beyond the weather, that's the place.
Where the Method Goes From Here
We have been building this for 21 years. The library gets better. The calibrations get more refined. The pattern matching improves as more years are added and as we understand the ENSO and solar cycle interactions more clearly.
Eventually, if the monthly forecasts hold up consistently enough to justify it, we want to extend the horizon. The cycle data theoretically supports a three-month outlook. But we won't publish extended forecasts until the monthly track record is solid enough that the extension adds something real rather than just adding uncertainty on top of uncertainty.
For now: one month at a time. Every forecast published and locked before the period it covers. Every comparison made public when the month ends. Every miss documented without excuses.
That's the whole method.
What to Read Next
That's the complete nine step guide. If you want to see the method in practice, the best place is the Inigo Jones inspired long range forecasts page. Those are the actual forecasts we produce each month, locked and published before the period they cover.
If you want the broader context for how we approach weather at Dayboro, including the station data and daily forecasting, the Dayboro Weather Guide is the place to start.