
Ranking every rugby league team in the world

If you’re not interested in how the rankings work and just want to see the outputs, click here.

It began as a simple exercise to try to rate Super League players, much in the way that I rate NRL and Queensland Cup players. It turns out that the Super League website makes that an impossible task because it is a garbage fire for stats. Moving on from the wasted effort, I thought I might still do team ratings for the RFL system, mostly out of my increased interest in the Toronto Wolfpack’s promotion into Super League.

Then I thought about the Kaiviti Silktails of Fiji entering the New South Wales system and wondered if I should take a look at the leagues there, despite my doubts about whether anyone in NSW cared about lower grade football when they could follow the Dragons, the Tigers or the Knights in so-called first grade.

From there I spiralled into a mishmash of US college football tradition, websites in Serbian and copying and pasting. When I came to, I had a neatly formatted spreadsheet covering a decade of world club rugby league.


Ranking the world

Invariably, creating any sort of evaluation system requires judgements by the evaluator about who to include or exclude and what the evaluation system considers to be “good”. I’ll explain my position and you can decide whether or not you like it.

Scoring the teams uses an average of four similar rating systems that look at performance over different time intervals.

We’ve long had form and class Elo ratings for the NRL and Queensland Cup. Form is about the short term performance of clubs, and can represent anywhere from four to eight weeks of results depending on the draw and league, while class is about long term performance, and can represent the average of years of performance. Form is a better predictor of match results, class is a better predictor of fan disappointment.

I created similar systems for another ten leagues in NSW, PNG, France (see also my Elite 1 season preview), the UK and the USA. They work along the same lines as the NRL and Queensland Cup editions. The average rating within an Elo system is approximately 1500 and the disparity in ratings can be used to estimate match outcome probabilities.

Both sets of Elo ratings are adjusted by a classification system I borrowed from baseball. To acknowledge the fact that a 1700 team in the BRL is not likely to be as good as a 1300 team in Super League, we adjust the team ratings so we can attempt to compare apples to apples –

  • Majors: NRL (ratings adjusted by +500) & Super League (+380)
  • Triple-A (AAA): QCup, NSW Cup and RFL Championship (all +85)
  • Double-A (AA): Ron Massey, RFL League 1, FFR Elite 1 (all -300)
  • High-A (A+): Brisbane RL, FFR Elite 2 (all -700)
  • Low-A (A-): USARL (-1000)

In Elo terms, a difference of 120 points between teams, like between an average NRL and an average Super League team, makes the NRL team 2:1 favourites. A 415 point gap gives the less favoured team an 8.4% chance of winning (equivalent to the replacement level), 800 points 1%, 1200 points 0.1% and 1600 points 0.01%. Consider the improbability of the Jacksonville Axemen beating the Melbourne Storm and you get an idea of where I’m coming from.
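To make the mechanics concrete, here’s a minimal sketch of how the league adjustments and the standard Elo win probability curve (the usual logistic with a 400-point scale) fit together. The adjustment values come from the table above; the formula reproduces the figures just quoted.

```python
# League adjustments from the classification table above.
LEAGUE_ADJUSTMENTS = {
    "NRL": 500, "Super League": 380,                                # Majors
    "QCup": 85, "NSW Cup": 85, "RFL Championship": 85,              # AAA
    "Ron Massey": -300, "RFL League 1": -300, "FFR Elite 1": -300,  # AA
    "Brisbane RL": -700, "FFR Elite 2": -700,                       # A+
    "USARL": -1000,                                                 # A-
}

def adjusted_rating(raw_elo: float, league: str) -> float:
    """Shift a within-league Elo rating onto the global scale."""
    return raw_elo + LEAGUE_ADJUSTMENTS[league]

def win_probability(rating_a: float, rating_b: float) -> float:
    """Chance that team A beats team B under the Elo logistic curve."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# An average NRL side vs an average Super League side: a 120 point gap.
nrl = adjusted_rating(1500, "NRL")            # 2000
esl = adjusted_rating(1500, "Super League")   # 1880
print(round(win_probability(nrl, esl), 3))        # 0.666 -> roughly 2:1 on
print(round(win_probability(esl, esl + 415), 3))  # 0.084 -> the 8.4% underdog
```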

Between short term form and long term class, we’re missing a medium term component that represents roughly a single year of performance. I had originally planned to create Poseidon ratings for each league but settled on a simpler approach, using points scored per game and points conceded per game over a regular season in lieu.

I then made my simplification much more complicated by running a linear regression of winning percentage across all leagues against points scored per game, and a second regression against points conceded per game. This gives a formula that converts the for-and-against components into winning percentage, which is in turn converted to an equivalent Elo rating and then adjusted as above. It also allows me to compare points scored per game – as a measure of competitiveness, quality or both? – across different leagues.
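As a rough sketch of that pipeline – the data and fitted coefficients below are illustrative stand-ins, not the actual values – assuming ordinary least squares and the usual log-odds conversion from winning percentage to Elo:

```python
import numpy as np

# Illustrative training rows pooled across leagues:
# points for per game, points against per game, winning percentage.
pf = np.array([28.0, 18.5, 22.0, 31.5, 15.0])
pa = np.array([16.0, 24.0, 22.5, 14.0, 30.0])
wpct = np.array([0.75, 0.30, 0.48, 0.85, 0.15])

# Least-squares fit: wpct ~ a*pf + b*pa + c
X = np.column_stack([pf, pa, np.ones_like(pf)])
a, b, c = np.linalg.lstsq(X, wpct, rcond=None)[0]

def wpct_to_elo(w: float, league_adjustment: float = 0.0) -> float:
    """Rating of a team winning a fraction w of games against 1500-rated
    opposition, shifted by the league adjustment from the table above."""
    w = min(max(w, 0.01), 0.99)   # keep the log odds finite
    return 1500 + 400 * np.log10(w / (1 - w)) + league_adjustment

predicted = a * 25.0 + b * 18.0 + c   # a hypothetical 25-18 attack/defence
print(wpct_to_elo(predicted, league_adjustment=85))   # e.g. a QCup side
```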

[Figure: points scored per game by league – competitiveness or quality]

This specifically is just trivia, but from an overall analytics perspective there’s a risk: if only the top league is analysed and analysts assume the same principles apply to all leagues, incorrect conclusions will be drawn about the sport.

The ranking is decided by which team has the highest average score across the four rating components, which are given equal weighting. I call it the Global Rugby League Football Club Rankings, or GRLFC for short.
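The averaging itself is trivial; as a minimal sketch (the function and argument names are mine, not the spreadsheet’s):

```python
def grlfc_score(form_elo: float, class_elo: float,
                attack_elo: float, defence_elo: float) -> float:
    """Equal-weight average of the four league-adjusted components."""
    return (form_elo + class_elo + attack_elo + defence_elo) / 4

# Teams are ranked by this score, highest first.
print(grlfc_score(1980, 2010, 1950, 1990))   # a hypothetical NRL side
```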

While it’s possible for teams to game a single system, it would be nigh on impossible to game all components, so I feel relatively comfortable that the highest ranked team is the “best”.

That said, form ratings and the for-and-against components only work on regular season results. Class ratings are the only component that takes into account playoff (and Challenge Cup, where applicable) performance. You may think finals footy deserves more weighting but I would put it to you that “the grand final winner is always the best team” and “any rugby league team can win on their day” are two mutually exclusive thoughts, and I prefer to believe the latter. If you want to further mull it over, consider that Newtown finished seventh on the ladder in the twelve team NSW Cup in 2019 and then went on to win the Cup and then the State Championship.

Each club (as represented by their combination of name, colours and logo) is only represented once in each year’s rankings, by the version of that club in the highest league. For example, Wentworthville have been in both the NSW Cup and the lower tier Ron Massey Cup. To date, Wenty have been represented in the rankings by their state cup team. However, as the Magpies will be replaced in the NSW Cup by Parra reserve grade in 2020, they will henceforth be represented in the rankings by their Ron Massey team, even though this doesn’t change much in reality. This is mostly because it makes the rankings a little more interesting, not being clogged up by a half dozen clones of the NSWRL clubs.

I would like to have included the Auckland Rugby League’s Fox Memorial comp as a double-A league but it seems to be impossible to find scores. I also would have liked to add more low-A comps, like those in Serbia or the Netherlands or maybe even Nigeria or Kenya, but scores for these comps are even more difficult to find, the records are incomplete, or the comps don’t really play enough games. As a result, we may never know whether the Otahuhu Leopards are better than the Villeneuve Léopards.

I drove myself mad enough trying to get the results that I did. I don’t feel the need to delve further into district comps in Australia but, who knows, I may well change my mind on that. It would be nice to go further back on some comps, particularly in France and PNG, but we have what we have. A big thanks to rugbyleagueproject.org, leagueunlimited.com and treizemondial.fr for hosting what they do have, because we can’t possibly rely on federations to have curated their own records and history.

A full season of results is required for a club to be ranked. This is only a problem for French clubs, as both Elite 1 and 2 run through the northern winter while the rankings are nominally calculated on December 31. A French club’s first part-season is given a provisional place in the rankings, which converts to a full ranking the following year based on the previous twelve months’ worth of results.

The rankings can be seen for 2009 through 2019 here. Your current top seeds in each competition are –

  • NRL (Major): Melbourne Storm (1)
  • Super League (Major): St Helens (5)
  • Championship (AAA): Toronto Wolfpack (29)
  • Queensland Cup (AAA): Sunshine Coast Falcons (30)
  • NSW Cup (AAA): Newtown Jets (40)
  • Ron Massey (AA): St Marys (63)
  • League 1 (AA): Oldham Roughyeds (64)
  • PNG NRLC (AA): Lae Tigers (66)
  • Elite 1 (AA): Albi Tigers (69)
  • Elite 2 (A+): Villegailhenc-Aragon (101)
  • BRL (A+): West Brisbane Panthers (105)
  • USARL (A-): Jacksonville Axemen (109)

Women’s Rankings

In an ideal world, we’d have a women’s ranking to complement the men’s. But the NRLW has only completed 14 games, which is not a sufficient sample, although we may see that double in 2020. The QRLW will only commence this year, and it remains to be seen what the NSWRL is going to do with their women’s premiership: whether it becomes the equivalent of a Ron Massey Cup beneath a new NSWRLW/women’s NSW Cup or whether, as is usually the case, the Sydney comp is promoted to be the state comp.

In the more enlightened Europe, the women’s Super League has completed its first season, comprising 14 rounds, and the Elite Feminine has just commenced its second season, the previous being 12 rounds. The bones are there for a women’s club ranking, but it will take time for Australia to catch up a little and make the rankings more balanced. With any luck, I should be able to deliver the first rankings at the end of this year.

The World Club Challenge

International club football is a rare thing, indeed. The ridiculously lopsided 1997 World Club Challenge (Australian clubs scored 2506 points to the Europeans’ 957) largely put paid to the idea that there could be a competition on an equal footing between the two major leagues of football. Other than a short lived World Club Series, which was overly reliant on the charity of big Australian clubs, all that remains of the concept is the World Club Challenge match-up between the winners of the Super League and the NRL.

First held in 1976, staged irregularly until 2000 and annually since, the match suffers from the disparity in the quality of the leagues – obviously driven by money – and a lack of interest – largely driven by a lack of promotion and a lack of commitment from most Australian clubs. The advantage has ebbed and flowed, generally in favour of the Australian sides, but in the late 2000s the English fought back before being pummelled back into submission more recently.

[Figure: World Club Challenge for and against]

Incidentally, I arrived at the 120 point discount between the NRL and Super League based on Super League clubs’ for and against in the WCC over the last twenty years: applying Pythagorean expectation and then converting the result (approximately a 33% win percentage for SL) into Elo rating points.
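A sketch of that calibration, with hypothetical aggregate scores standing in for the real WCC totals – the article only fixes the outputs (roughly a 33% win percentage and a 120-point gap) – and assuming the common Pythagorean exponent of 2:

```python
import math

def pythagorean_wpct(points_for: float, points_against: float,
                     exponent: float = 2.0) -> float:
    """Expected winning percentage from aggregate points for and against."""
    return points_for ** exponent / (points_for ** exponent
                                     + points_against ** exponent)

def wpct_to_elo_gap(w: float) -> float:
    """Elo gap implied by a winning percentage w for the weaker side."""
    return -400 * math.log10(w / (1 - w))

w = pythagorean_wpct(700, 1000)    # hypothetical 20-year WCC aggregates
print(round(w, 3))                 # ~0.329
print(round(wpct_to_elo_gap(w)))   # ~124, close to the 120-point discount
```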

Still, I believe that the WCC should be one of the centrepieces of the season, not unlike an abbreviated World Series or Super Bowl. A match day programme could be filled out by play-offs from the champions of the men’s, women’s and secondary men’s comps – perhaps with the winners of the NRL State Championship and the winner of a play-off of the premiers of the RFL Championship and Elite 1 – in the Pacific and the Atlantic. Such an event could be saleable to broadcasters, sponsors and hosts.

Of course, if successful, the WCC would then undermine the respective competitions’ grand final days, so there’s an obvious conflict of interest. The conflict is difficult to resolve when the stakeholders are more interested in maintaining their own position than making money or securing a commercial future. While cash may be a corrupting influence, the game will not survive as a professional sport without it.

Given the absence of international club fixtures, you could fairly wonder what the applications of this ranking system might be, other than to have a rough guess at whether the Gold Coast Titans are better or worse than the Sunshine Coast Falcons (the answer is: slightly better). My feeling is that the final score is a rough proxy for a single globalised Elo rating system. It may not be very good, but I looked back at the last ten WCCs to check.

[Figure: GRLFC rating gap vs WCC results]

It successfully picked the higher ranked team as the winner in eight of the ten matches, but it was not particularly predictive of the margin between the teams (the trendline above shows basically zero correlation), nor well calibrated on the scale of favouritism (favourites won 80% of the time compared to a 65.9% predicted probability). Still, it’s only a sample of ten games in which the Super League sides have been beaten pretty comprehensively.

In the meantime, this gives the English something to work towards.

A deep dive for each team’s 2019 NRL season

With the first Maori versus Indigenous All-stars game and another edition of the World Club Challenge in the history books, our attention turns to the NRL season ahead.

As with last year, I’m going to do a SWOP – Strength, Weakness, Opportunity and Prospect – analysis for each team. My general philosophy for judging a team’s prospects is that where a team finished on the ladder the previous year is a more or less accurate reflection of their level, give or take a win or two. If no changes are made, we should see a similar performance if the season were repeated. There are exceptions – the Raiders’ pathological inability to close out a game should be relatively easy to fix, and the Knights managed maybe two convincing wins in 2018 but still finished eleventh – but broadly, if a team finishes with seven wins and hopes to improve to thirteen and make the finals, then we should look at what significant changes have been made in order to make that leap up the table.


Stats of Six: Is 2018 the worst top eight in NRL finals history?

Using the same format I used during the rep weekend, this is the finals preview-ish post.

I didn’t get to do all the analysis I wanted to because I’ve run out of time. By the time this gets published, I should be somewhere in or around California starting my honeymoon, which I think should probably take priority. I won’t be filing from America (in fact I probably won’t see any rugby league for six weeks) but I will be back in October or November to do some post-season stuff.

This post relies pretty heavily on Elo ratings, so you might want to brush up.



Gauging the 2018 State of Origin teams – Game I

For the first time in a long time, it looks like we may have a well balanced Origin season. Indeed, the balance may even be a little too Blue for my liking, but when three of the last generation’s four best players retire from representative football, and they all happened to play for the same state, the scales will shift perceptibly.

By now, you would know who’s playing for both Queensland and New South Wales in the first of rugby league’s three biggest games. You might even have formed an opinion as to which side is looking the goods. Consensus seems to have settled on this being the Blues’ year. But why settle for the thoughts of experts who have spent the last forty-eight hours tweeting out the leaked Blues lineup, when I’ve crunched the numbers for you?



A deep dive for each NRL team’s 2018 season

The only thing more reliable than March bringing rugby league back is the slew of season previews that each and every media outlet feels the need to produce. I’m no different in this regard and here is what is likely to be the longest post I’ve ever compiled.

This year’s season preview takes a look at each team and is a mix of my usual statistics, a bit of SWOT analysis and some good old fashioned taking a wild punt and hoping it’ll make you look wise come October.

(A SWOT analysis is where you look at Strengths, Weaknesses, Opportunities and Threats. There’s only one threat in the NRL, and that’s the other fifteen teams, so it’s more of a SWO analysis.)


Analysis – Stocky vs Reality: Did your team outperform? (Pt II)

The Stocky is the main forecasting tool driving the analysis on this site. It’s a simulator of the season ahead, using the Monte Carlo method and based on Elo ratings, that gives insight into the future performance of each club. My main interest has been the number of wins, as it determines ladder positions which in turn have a big impact on the finals. The Stocky might not be able to tell you which games a team will win, but it is good at telling you how many wins are ahead.

But how does a computer simulation (in reality, a very large spreadsheet) compare to reality? To test it, I’ve put together a graph of each team’s performance against what the Stocky projected for them. Each graph shows:

  • The Stocky’s projection for total wins (blue)
  • Converting that projection to a “pace” for that point in the season (red)
  • Comparing that to the actual number of wins (yellow)

It will never be exactly right, particularly as you can only ever win whole numbers of games and the Stocky loves a decimal point, but as we’ll see, the Stocky is not too bad at tracking form and projecting that forward.
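For the curious, here’s a bare-bones sketch of a Stocky-style Monte Carlo season simulation, with made-up teams, ratings and fixtures standing in for the real inputs:

```python
import random

ratings = {"Storm": 1650, "Roosters": 1620, "Knights": 1420, "Titans": 1380}
fixtures = [("Storm", "Knights"), ("Roosters", "Titans"),
            ("Storm", "Roosters"), ("Knights", "Titans")]

def win_probability(home: str, away: str) -> float:
    """Elo logistic win probability for the home team."""
    return 1 / (1 + 10 ** ((ratings[away] - ratings[home]) / 400))

def simulate_season(n_sims: int = 10_000) -> dict:
    """Sample every fixture n_sims times and average wins per team."""
    total_wins = {team: 0 for team in ratings}
    for _ in range(n_sims):
        for home, away in fixtures:
            winner = home if random.random() < win_probability(home, away) else away
            total_wins[winner] += 1
    # Averaging over simulations is why the projection lands on decimal
    # points rather than whole numbers of wins.
    return {team: wins / n_sims for team, wins in total_wins.items()}

print(simulate_season())
```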

This week is Part II, from North Queensland to Wests Tigers. Part I, from Brisbane to Newcastle, was last week. Also see this week’s projections update for some errors in the Stocky.


Analysis – Stocky vs Reality: Did your team outperform? (Pt I)

The Stocky is the main forecasting tool driving the analysis on this site. It’s a simulator of the season ahead, using the Monte Carlo method and based on Elo ratings, that gives insight into the future performance of each club. My main interest has been the number of wins, as it determines ladder positions which in turn have a big impact on the finals. The Stocky might not be able to tell you which games a team will win, but it is good at telling you how many wins are ahead.

But how does a computer simulation (in reality, a very large spreadsheet) compare to reality? To test it, I’ve put together a graph of each team’s performance against what the Stocky projected for them. Each graph shows:

  • The Stocky’s projection for total wins (blue)
  • Converting that projection to a “pace” for that point in the season (red)
  • Comparing that to the actual number of wins (yellow)

It will never be exactly right, particularly as you can only ever win whole numbers of games and the Stocky loves a decimal point, but as we’ll see, the Stocky is not too bad at tracking form and projecting that forward.

This week is Part I, from Brisbane to Newcastle. Part II, from North Queensland to Wests Tigers, will be next week.


Analysis – The more competitive the season, the more bums on seats

Most rugby league commentators wouldn’t know what a linear regression is or how to do one. I’m no different but I do like to compare two variables and see if they’re correlated. A scatter plot with a linear trendline and an R-squared – remember, R-squared goes from 0, no correlation, to 1, perfect correlation; I usually need at least 0.2 to raise an eyebrow – is all I need to keep me entertained for hours on end.
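That whole workflow fits in a few lines; here’s a minimal sketch with stand-in data (the real inputs are each season’s ratings gap and whatever variable is under the microscope):

```python
import numpy as np

gap = np.array([310, 280, 295, 340, 260, 300])   # ratings gap per season
draws = np.array([2, 1, 3, 2, 0, 2])             # draws per season

# Fit a linear trendline and report the R-squared of the relationship.
slope, intercept = np.polyfit(gap, draws, 1)
r_squared = np.corrcoef(gap, draws)[0, 1] ** 2
print(f"draws = {slope:.4f} * gap + {intercept:.2f}, R² = {r_squared:.2f}")
```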

Last week, we looked at the concept of competitiveness and how to measure it. This week, I want to see if competitiveness (or the lack of it) has an impact on other aspects of the game. Using my preferred ratings gap as a proxy for how competitive a season is, this post looks at a few variables to see if they’re correlated.

If you want a specific variable looked at, give me a yell.

Draws

[Figure: draws per season vs ratings gap]

Surprisingly, there’s no link between the number of draws and how competitive the season is – basically no correlation, with an R-squared of 0.03. I think draws are more about the specific teams in question, and golden point may play a role, but overall season competitiveness doesn’t seem to matter.


Analysis – Is the NRL getting more competitive?

The short answer is yes and no. Yes, the NRL is more competitive now than when it started but no, it doesn’t seem to be a thing that improves consistently year-on-year.

Let me explain. On this blog, we use Elo ratings to measure teams’ performances and assess each team’s probability of winning a game in advance. Surely we can use our ratings system to assess the competitiveness of each NRL season.

Philosophically, what is a high level of competitiveness? It has to be a situation where the teams are fairly close in performance, resulting in hard to predict outcomes. Here are two ways of measuring that closeness of performance, with pros and cons.

  1. You could look at the spread of teams’ ratings. Pro – makes an assessment based on all teams. Con – if all teams are pretty average but one team is excellent and another awful, then there isn’t a big spread of talent. This would imply a highly competitive season even though there is really only one potential premiership contender.
  2. You could look at the difference between top and bottom ratings. Pro – simple. Con – if one team is truly terrible and three or four are pretty good, then the season is pretty competitive but the difference between top and bottom might be exaggerated due to the crapiness of the bottom team. This would imply a not particularly competitive season despite there being multiple potential champions.

Which is better, measuring the spread or measuring the difference from top to bottom? Neither way of doing this is immediately obvious as a better method. Let’s look in more detail.
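To make the comparison concrete, here’s a minimal sketch of both measures over a hypothetical set of end-of-season ratings:

```python
import statistics

# Illustrative end-of-season Elo ratings: most teams average, one
# excellent, one awful.
season_ratings = [1700, 1510, 1505, 1500, 1495, 1490, 1300]

spread = statistics.pstdev(season_ratings)                 # option 1: spread
top_to_bottom = max(season_ratings) - min(season_ratings)  # option 2: range

print(round(spread, 1), top_to_bottom)   # a modest spread, a big range
```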


Analysis – Your team and their finals appearances

There was a remark on Twitter a while back that Mitchell Moses had left Wests because he wanted to play finals footy but had chosen the only club that had gone without a finals appearance for longer than the Tigers. That didn’t seem right but I looked into it and it was true.

That got me thinking. How often do teams turn up to the finals? Some, like the Storm and Cowboys, seem to be regular fixtures but how do the rest fare?

Warning: this is going to be one of those “Well, yeah, I knew that” type posts. This is not about showing off some fancy analysis, just a bit of curiosity.

Double warning: Pie charts ahead.

