It’s been about four years since I wrote the primer about using simulations. I haven’t called it the Stocky since early in 2018 and I use a completely different set of metrics as inputs, so it’s time to have a more up-to-date reference point.
The purpose of simulations is to use the magic of random numbers to estimate the probabilities of various outcomes of the season. If you want to know how likely it is for the Dragons to win the wooden spoon, chuck the Monte Carlo method at it. If you want to know how likely it is for the Broncos to win at least seven games, Monte Carlo. If you want to know how likely it is for the Storm to win the premiership, etc, etc.
If you have a means of estimating the probability of the outcomes of games (i.e. a winning probability for each team derived from the difference in the teams’ ratings), then you can use a random number generator to “simulate” the outcome of that game, which is not dissimilar to flipping a coin with an impossibly large number of sides. If you can simulate the outcome of one game, then you can simulate the outcomes of a series of games or an entire season.
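For the sake of illustration, a single simulated game boils down to something like the sketch below. The Elo-style logistic conversion and the 400-point scale are the textbook defaults, not necessarily the exact numbers sitting behind my ratings:

```python
import random

def win_probability(rating_a, rating_b, scale=400):
    # Standard Elo logistic conversion; the 400-point scale is the usual
    # default and an assumption here, not necessarily what my ratings use.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

def simulate_game(rating_a, rating_b, rng=random):
    # One "coin flip": team A wins if the random draw lands under its probability.
    return rng.random() < win_probability(rating_a, rating_b)

# e.g. a 50-point ratings edge is roughly a 57% chance of winning
print(win_probability(1550, 1500))   # ~0.571
print(simulate_game(1550, 1500))     # True or False on any given run
```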
Repeat these simulations of a season 5, 10 or 50,000 times and patterns start to emerge which can be analysed. The frequency with which a given team wins the premiership, finishes at a given place on the ladder or wins a number of games gives us an estimate of how likely those events are to occur in reality.
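Looping that over a draw and keeping a tally is all the Monte Carlo part really is. A minimal sketch, where the fixtures and the thing being counted (topping the ladder on wins alone, ignoring points differential and the rest) are placeholders rather than the real machinery:

```python
from collections import Counter
import random

def simulate_season(ratings, fixtures, rng=random):
    # Simulate every fixture once and return each team's win tally.
    wins = Counter({team: 0 for team in ratings})
    for home, away in fixtures:
        p_home = 1.0 / (1.0 + 10 ** ((ratings[away] - ratings[home]) / 400))
        wins[home if rng.random() < p_home else away] += 1
    return wins

def run_simulations(ratings, fixtures, n_sims=50_000):
    # Repeat the season many times and count how often each team finishes
    # with the most wins, as a stand-in for finishing top of the ladder.
    topped = Counter()
    for _ in range(n_sims):
        wins = simulate_season(ratings, fixtures)
        topped[max(wins, key=wins.get)] += 1
    return {team: count / n_sims for team, count in topped.items()}
```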
The outputs of simulations are only as good as the input ratings. Based on the last five pre-seasons of the NRL, a blend of 70% SCWP (2nd order wins are converted to an equivalent Elo rating and used as the basis for estimating game outcomes) and 30% Taylor (projected TPR for each player across the likely 1-17 is used to estimate the team's average production, and game outcomes are calculated from the relative output of the teams) simulations has a mean absolute error of 3.1 wins per 24 games per team. In other words, the actual win percentage of each team is, on average, within plus/minus 0.130 of the simulated pre-season win percentage. We can squeeze that down marginally by using SCWP alone but I'm willing to sacrifice a small amount of accuracy (approx. 0.5%) to allow for some acknowledgement of roster changes.
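Roughly speaking, the blend amounts to a weighted average of the two systems' win probabilities for a given game. Treat the sketch below as illustrative rather than the exact pipeline:

```python
def blended_win_probability(p_scwp, p_taylor, w_scwp=0.7):
    # Weighted blend of the two systems' win probabilities for a single game.
    # Applying the 70/30 split at the game level is an assumption for the
    # purpose of illustration, not a statement of where it happens in practice.
    return w_scwp * p_scwp + (1 - w_scwp) * p_taylor

# e.g. SCWP gives the home side 0.62, Taylor gives 0.48 -> blended 0.578
print(blended_win_probability(0.62, 0.48))
```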
It’s not perfect, and it never will be, but it’s a good enough first-order approximation.

Form and class Elo ratings are useless as inputs at pre-season. The form rating from the end of the previous season has minimal correlation with the next year’s performance, and class ratings aren’t optimised for tipping or predictions, instead acting as a hand brake on getting too excited one way or the other about results. During the season, as form ratings come to more accurately reflect each team’s performance, they can be used to project the outcome of the season. These in-season simulations will appear irregularly from mid-season onwards.
I previously also used Poseidon ratings as a basis for sims, until I quietly shot that system in the back of the head, gangland style, because of its poor head-to-head tipping performance.