Tuesday, January 31, 2017

On A Hockey Website

Yesterday our town was honoring signor Trombetti Giovancarlo, 
who after thirty years of work, alone, with no help, 
recorded the opera "Aida" by Giuseppe Verdi.

Gianni Rodari, "A Musical Story".

So as not to let the month of January slip away without another post, I got sentimental and decided to tell a small story about how my website came to life.

There was a void. A lot of the time, people on hockey boards would wonder whether specific statistics on players and teams were available, and they weren't, although the raw data seemed to be there. Then there was the fantasy hockey world, with its pizzazz, asking for a predictive tool - and again, the raw data seemed to be there.

Now, I am a sysadmin by trade, with occasional forays into software development, and since I've been doing Perl for all of my career, I've had a few exposures to Web development and to databases. I also have a college degree in Engineering, which gave me some idea about statistics.

So I took a look at the publicly available NHL reports, but was unsure how to use them. I tried a standard relational database approach, but it wasn't working.

The turning point came when I attended a lecture on MongoDB. It turned out to be a perfect fit for the loosely structured NHL stats documents: just spill them into a Mongo database, then extract the data, summarize it into tables, and store those tables in an SQL database for quick serving on the website. And along came more luck - a lecture on the Mojolicious Perl Web framework, which equipped me with an easy way to run the website.

Thus, I was able to actually implement what I had in mind. First came the spider part, to crawl and collect the data available on NHL.com. Fortunately, I was able to scrape everything before the website's design changed drastically and box scores prior to 2002 stopped being available. I got everything from the 1987/88 season on.

Then I started writing the parsers... and had to take a step back. There were quite a lot of inconsistent and missing reports. Therefore I had to a) add thorough testing of every report I scraped to ensure it held together, and b) look for complementary sources for whatever data was missing. So before I was done with the parsers, I had a large testing framework, and I had also visited all corners of the hockey-related web to resolve missing or conflicting data, even the online archives of newspapers such as USA Today. Some of the downloaded reports had to be edited manually. Then NHL.com landed another blow, dropping draft pick information from its player info pages. Luckily, the German version of the website still had it, so I began to scrape the German NHL site too.

I was able to produce the unusual statistics tables relatively quickly and easily. However, I decided that the website would not open without the prediction models I had in mind. Being a retired chess arbiter and a big chess enthusiast, I decided to try applying the chess Elo rating model to the performances of hockey teams and players. Whether it really works or not, I don't know yet; I guess by the end of the season I can make a judgement on that.

In October 2016 I opened my website using a free design I found somewhere online. Unfortunately, I quickly realized it was not a good fit for the content the site was serving, so I sighed, took a deep breath, opened w3schools.com in my browser, and created my own design. And a CMS too. At least I am happy with the way the site looks now, and even happier that when someone asks - on Twitter, Reddit or the hockey forums - whether it's possible to measure a specific metric, I am able to answer, 'Already done! Welcome to my website!'

In the end I'm a software developer, a web designer, a DBA, a sysadmin, a statistician and an SEO amateur. Oh, and a journalist too, since I'm writing a blog.

Monday, January 23, 2017

On Intangibles (a small addendum)

One "intangible" being tossed around is "motivation" of the players. Which brings memories of an episode I was witness to.

In 2003/04, in the Israeli Top Tier Chess League (which is indeed no slouch), our club managed to assemble an outstanding team, featuring, among others, a former Champion of Russia and a former Champion of Europe. I was part of the management team, and orchestrated bringing in the first of the two, who also happened to be a childhood friend of mine from Leningrad, in the Soviet Union.

And so, in round III we were to face our main rival for the title, and the club's general manager (himself a rather pedestrian chess player) gathered the team and delivered an emphatic motivational speech about how we had to beat the team we were facing, and so on, and so on.

We lost 1½-4½ without winning a single game, and with that went any chance we had at the championship.

Sunday, January 22, 2017

On Intangibles. Carpe Jugulum.


General managers, coaches and players often talk about "intangible values". Sometimes it's about "locker room contributions". Sometimes it's about "passion". In my opinion, these two are actually negligible and in certain cases even harmful. I remember such references, especially the latter one, being made about Israeli soccer players, and it usually meant that the player didn't have a lot of talent to go along with it, but brought a lot of passion to the game. While passionate play can indeed ignite a team and carry it along, more often it indicated dumb, physical, low-talent execution that actually harmed the team.

However, there is one intangible I take my hat off to. It's the one I always admired, and never had enough of in my own chess career: the ability to go for the throat of the opposition at even a momentary display of weakness, or as Terry Pratchett put it in one of his books, 'Carpe Jugulum' [1].

So what is it, in my understanding? It is the moment when your opponent puts itself into an inferior position in a volatile situation (a close score, for example) - by taking a penalty, by icing the puck at the end of a long shift, by allowing an odd-man rush - and you are able to capitalize on it, yanking whatever remains of the carpet of security from under the opposition's feet. And then you continue to hammer blow after blow on the opposition until it collapses completely. Some also call it the 'killer instinct'. This blog (and this article too) sins with an abundance of examples from chess, so let me plant one from tennis... Before their match at the 2005 US Open in New York, Taylor Dent complained about Lleyton Hewitt: 'He displays poor sportsmanship: taking joy in double faults by his opponent as well as in unforced errors.' 'I don't care what Dent thinks about it,' parried Hewitt, 'I always go for the win, and on the way to it many things are allowed.'

Machiavelli advised rulers and politicians, 'Don't be kind'. Winston Churchill also knew something about achieving goals when he recommended: 'If you want to get to your goal, don't be delicate or kind. Be rough. Hit the target immediately. Come back and hit again. Then hit it again with the strongest swing you can...'

All the chess champions had it, with the extremes being Alexander Alekhine, Robert J. Fischer and Garry Kasparov. Many wonderful players who never got the title complained that they couldn't commit themselves to going for the opponent's throat time after time.

These qualities were elevated to perfection by the two best teams of the first half of the 2010s, the Los Angeles Kings and the Chicago Blackhawks, who split five Cups out of six between themselves from 2010 to 2015. Even when they seemed to be struggling and wobbling, both teams managed to instill a kind of uncertainty into their opponents - and a certainty into the spectators that they would make a fist of themselves and hammer the opponent at the first, even minimal, display of weakness. That capability was championed by their leaders: Anze Kopitar, Drew Doughty and Jeff Carter for the Kings, and Jonathan Toews, Patrick Kane and Duncan Keith for the Hawks. Whenever a playoff series involving the Blackhawks was tied 3-3, Chicago was always the favorite to win game 7 because of their Carpe Jugulum reputation. The Kings gained even more notoriety, first by burying their sword to the hilt into each and every opponent in 2012 en route from the #8 seed to their first Stanley Cup, and then with the reverse sweep of the Sharks that started their 2014 Cup run - a run which included two more comebacks, from 2-3 and from 1-3 down. And even in 2016, with the Kings down 1-3 to the Sharks in the first round of the playoffs, fans around the league were somehow not ready to commit to the Sharks as the favorites to win the series, because the Kings had been a hair away from the Sharks' throat in game 4, climbing from 0-3 to 2-3 in the 3rd period, and then in game 5 they did erase a 0-3 deficit to make it 3-3.

Well, that tie didn't hold; the Sharks broke the stranglehold and got a boost that carried them all the way to their first ever Stanley Cup Finals, and that outcome damaged the Kings' reputation as the Carpe Jugulum team to a degree. The Blackhawks' reputation suffered too, as they lost their game 7 to a team that - along with the Sharks and, for instance, the Washington Capitals - had a reputation for being somewhat meek: the St. Louis Blues.

It will be entertaining to see whether the Carpe Jugulum landscape in the league changes this year, and whether the teams that managed to overcome their "benign" reputations will be able to go all the way to the Cup Finals - through their opponents' throats.

Chess Grandmaster Gennady Sosonko wrote: 'A real professional, having thought about the situation on the board, acts most decisively. He knows that during the game there should be no place for doubt or compassion, because a thought that is not converted into action isn't worth much, and an action that does not come from a thought isn't worth anything at all.'

And it's important to remember that Carpe Jugulum is a necessary key to success only in a competitive environment. Albert Einstein used to say that chess "is foreign to me due to its suppression of the intellect and its spirit of rivalry."

[1] Carpe Jugulum (Lat.) - seize the throat.

Tuesday, January 17, 2017

On Players Evaluation - Part VII and Final (Bundling it all up)



Now that we have obtained a way to estimate players' performances over a season, we can move on to estimating their performances in a specific game.

For the season of interest, we compute the averages against for each team, just as we computed the season averages, i.e. we calculate how many goals, shots, hits, blocks and saves are made on average against each team. Thus we obtain the team averages against, Tavg. The averages are then further divided by the number of skaters and goalies (for the respective stats) that the team has faced.

After that we can calculate the "result" Rt of each season average stat in a chess sense, i.e. the actual performance on the scale from 0 to 1:
For Goalie Wins/Losses:

Rt_wins = 0.5 + Tavg_wins / (Tavg_wins + Tavg_losses)

For Plus-Minus:

Rt_+/- = 0.5 + (Tavg_+/- - Savg_+/-) / 10    (10 skaters on ice on average)

For the rest:

Rt_stat = 0.5 + (Tavg_stat - Savg_stat) / K

where K is the special adjustment coefficient explained in Part VI (which, as a reminder, reflects the rarity of each event).
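To make the arithmetic concrete, here is a minimal Perl sketch of the three cases above; the team averages against, the season averages and the K coefficient are made-up sample numbers, not the site's actual values:

    #!/usr/bin/perl
    # Sketch: team "result" Rt per stat on the chess-like 0..1 scale.
    use strict;
    use warnings;

    # Hypothetical team averages against and season averages.
    my %t_avg = ( wins => 22, losses => 18, plus_minus => -0.02, shots => 2.1 );
    my %s_avg = ( plus_minus => 0.00, shots => 2.3 );
    my $k_shots = 6;    # rarity adjustment coefficient for shots (see Part VI)

    # Goalie wins/losses, as given above.
    my $rt_wins = 0.5 + $t_avg{wins} / ( $t_avg{wins} + $t_avg{losses} );

    # Plus-minus: 10 skaters on the ice on average.
    my $rt_pm = 0.5 + ( $t_avg{plus_minus} - $s_avg{plus_minus} ) / 10;

    # Everything else.
    my $rt_shots = 0.5 + ( $t_avg{shots} - $s_avg{shots} ) / $k_shots;

    printf "Rt(wins)=%.3f  Rt(+/-)=%.3f  Rt(shots)=%.3f\n", $rt_wins, $rt_pm, $rt_shots;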

And from the result Rt we can produce the teams' Elo against in each stat, just as we computed the players' Elos.

Then, the expected result Rp of a player against a specific team in a given stat is given by:

Rp = 1 / (1 + 10^((Et - Ep)/4000))

where Et is the team's Elo against and Ep is the player's Elo in that stat.

From the expected result Rp we can compute the expected performance Pexp, just as in the previous article:

Pexp = (Rp - 0.5) * A * Savg + Savg

(See the exceptions to that formula there.)
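As a minimal Perl sketch of this step - keeping the 4000 divisor exactly as written above, and with made-up Elo and average values:

    #!/usr/bin/perl
    # Sketch: expected performance of a player against a specific team in one stat.
    use strict;
    use warnings;

    sub expected_performance {
        my ( $team_elo_against, $player_elo, $adjustment, $season_avg ) = @_;
        # Expected chess-like result of the player vs. the team's "Elo against".
        my $rp = 1 / ( 1 + 10**( ( $team_elo_against - $player_elo ) / 4000 ) );
        # Unwind the result back into the stat's own units.
        return ( $rp - 0.5 ) * $adjustment * $season_avg + $season_avg;
    }

    # Hypothetical inputs: goals, adjustment factor 9, league average 0.11 goals/game.
    printf "Expected goals per game: %.3f\n", expected_performance( 2050, 2150, 9, 0.11 );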

Please note that we do not compute the "derived" stats separately, i.e. the number of points (or SHP, or PPP), GAA (given GA and TOI), or GA (given SA and SV).

Thus, if we want to project the expected result of a game between two teams - that is, the expected number of goals on each side - we compute the sum of the expected goals of each lineup (12 forwards and 6 defensemen):

S_home = SUM_F1..12(MAX(Pexp_G)) + SUM_D1..6(MAX(Pexp_G))   for the home team
S_away = SUM_F1..12(MAX(Pexp_G)) + SUM_D1..6(MAX(Pexp_G))   for the away team

while filtering out the players marked as unavailable or on injured reserve. Please note that we assume the top goal-scoring cadre is expected to play; if we knew the lineups precisely, we would substitute the exact lineup for the expected one.
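Here is a rough Perl sketch of how such a game projection could be assembled; the roster and the per-player expected-goals numbers are invented for illustration:

    #!/usr/bin/perl
    # Sketch: projected goals = sum of expected goals over the dressed lineup
    # (12 forwards + 6 defensemen), skipping unavailable or injured players.
    use strict;
    use warnings;
    use List::Util qw(sum0);

    sub projected_goals {
        my ($roster) = @_;    # arrayref of { pos => 'F'|'D', exp_g => ..., out => 0|1 }
        my @in  = grep { !$_->{out} } @$roster;
        my @fwd = sort { $b->{exp_g} <=> $a->{exp_g} } grep { $_->{pos} eq 'F' } @in;
        my @def = sort { $b->{exp_g} <=> $a->{exp_g} } grep { $_->{pos} eq 'D' } @in;
        # Assume the top goal-scoring cadre dresses, as in the post.
        my @lineup = ( @fwd > 12 ? @fwd[ 0 .. 11 ] : @fwd,
                       @def > 6  ? @def[ 0 .. 5 ]  : @def );
        return sum0( map { $_->{exp_g} } @lineup );
    }

    my @home = map { { pos => 'F', exp_g => 0.08 + rand(0.20), out => 0 } } 1 .. 14;
    push @home, map { { pos => 'D', exp_g => 0.02 + rand(0.08), out => 0 } } 1 .. 7;
    printf "Projected home goals: %.2f\n", projected_goals( \@home );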

You can see the projections on our Daily Summary page. So far we have correctly predicted the outcome of 408 out of 661 games, i.e. about 61.7%. Yes, we still have a long way to go.

Now to a different side of the question. Given that a player's overall expectation is a vector [E_1, E_2, ..., E_n] over all the stats, what is the overall value of that player? And the answer is, first and foremost: it depends who's asking.

If it's a statistician, or a fantasy player, then the value V is simply:

V = SUM_1..n(W_n * E_n)

where W_n are the weights of the stats in the model you are using to compare players. Fantasy points games (such as daily fantasy) even hand you the weights of the stats - this is how we compute our daily fantasy projections.
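As a tiny Perl illustration (the per-stat expectations and weights below are hypothetical, not any particular site's scoring):

    #!/usr/bin/perl
    # Sketch: overall fantasy value as a weighted sum of per-stat expectations.
    use strict;
    use warnings;

    my %expected = ( goals => 0.35, assists => 0.45, shots => 2.6, blocks => 0.9 );
    my %weight   = ( goals => 3,    assists => 2,    shots => 0.5, blocks => 0.5 );

    my $value = 0;
    $value += $weight{$_} * $expected{$_} for keys %expected;

    printf "Fantasy value per game: %.2f points\n", $value;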

Now, if it's a coach or a GM asking, then the answer is more complicated. Well, not really, mathematically speaking, because it's still something of the form

V = SUM_1..n(f_n(E_n))

where fn is an "importance function" which is a simple weight coefficient for a fantasy player. But what are these "importance functions"?

Well, they encode the styles of the coaches, their visions of how the team should play, highlighting the stats that matter more to them. These functions can be approximated sufficiently well by surveying the coaches and finding which components are the bigger priority for them, for example by paired-comparison analysis. Unfortunately, there are two obstacles we may run into: the "intangibles" and the "perception gap".


But that's a completely different story.

Wednesday, January 11, 2017

On Players Evaluation - Part VI (Skater's [and Goaltender non-SVP] Elo)



The most important conclusion of the last chapter, which dealt with goalies' Elos, is that the rating is driven by the actual performance of a goaltender versus the expected performance of the team he is facing. That is the approach we are going to inherit for evaluating skaters.

To start, we compute the league's average stats for each season. We do that for most of the stats that are measured, from goals and assists to faceoffs taken, up to time on ice for the goaltenders. This is a trivial calculation. Thus we obtain the season stat averages Savg.

Now we can begin to work with the skaters. We assign each of them a rating of 2000 in every stat. The first and most difficult step is to coerce the actual performance of a skater in each stat into a chess-like result, on a scale from 0 to 1. This is a real problem, since the distribution of such performances across players is heavily skewed and looks something like a chi-squared distribution: a big pile at or near zero and a long right tail.


Therefore we need to rebalance it somehow while preserving the following rules:
  • The rating changes should be more or less distributive, i.e. scoring one goal in each of three consecutive games should produce approximately the same effect as scoring a hat trick in one game and going scoreless in the other two.
  • The transformed results should still follow the same general shape as the original distribution.
  • The average rating of the league in each stat should remain 2000 at the end of the season.

So, first, we do not apply rating changes after every single game. We take a committing period - for example, five games - and average a player's performance in every rated stat over that period. Second, we apply the following transformation to the performance:

P'_player = (P_player - Savg) / Savg

where Savg is the season average for that stat. It would be more precise to compute against the averages against of the teams actually played (see the first paragraph), but we decided to go the simpler route at this stage.

Then we scale the performance by the Adjustment Factor A:

P'_player_adj = P'_player / A

The adjustment factor keeps the result between -0.5 and 0.5. More or less: there still are outliers, but they very rarely go beyond 0.5. The A factor depends on the rarity of scoring in the stat and varies from 6 (shots on goal) to 90 (shorthanded goals). The adjustment for goals, for example, is 9; the adjustment for faceoffs won is 20. The latter might look a bit surprising, but remember that many players, e.g. most defensemen, never take faceoffs. Naturally, only skater stats are computed for skaters and only goalie stats for goaltenders.

The final result R_player is then:
R_player = P'_player_adj + 0.5

So for the rare events we have a lot of results in the 0.48-0.5 area and a few going to 1. For the frequent events (shots, blocks, hits), the distribution is more even.
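A small Perl sketch of that transformation over one committing period (the five-game goal line and the league average are sample numbers):

    #!/usr/bin/perl
    # Sketch: raw per-game output over a committing period -> chess-like 0..1 "result".
    use strict;
    use warnings;
    use List::Util qw(sum);

    my @goals_last_5 = ( 0, 1, 0, 0, 0 );   # committing period of five games
    my $s_avg        = 0.11;                # hypothetical league average, goals per game
    my $a_factor     = 9;                   # adjustment factor A for goals

    my $p_player = sum(@goals_last_5) / @goals_last_5;    # average per-game performance
    my $p_scaled = ( $p_player - $s_avg ) / $s_avg;       # P' relative to the league
    my $result   = $p_scaled / $a_factor + 0.5;           # R_player

    printf "P=%.3f  P'=%.3f  R=%.3f\n", $p_player, $p_scaled, $result;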

Now that we have the player's "result" R, we can compute the Elo change through the familiar formula:

ΔElo = K * (R - 1/(1 + 10^((2000 - Elo_player)/400)))

where K is the volatility coefficient, which we define as:

K = 16 * √A * √(4 / (C + 1))

where A is the aforementioned Adjustment Factor and C is the career year: 1 for rookies, 2 for sophomores, and 3 for all other players.

'What is 2000?', an attentive reader may ask. 2000 is the average rating of the league in each stat. We use it because the player's "result" was achieved "against" the league average. If we used team averages instead, we would put in the average "Elo against" of the teams faced.
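A minimal Perl sketch of the rating change itself, with the career-year rule folded into the volatility coefficient (all inputs are illustrative):

    #!/usr/bin/perl
    # Sketch: Elo change for one stat after a committing period.
    use strict;
    use warnings;

    sub delta_elo {
        my ( $result, $elo, $a_factor, $career_year ) = @_;
        my $c = $career_year < 3 ? $career_year : 3;              # 1 rookie, 2 sophomore, 3 otherwise
        my $k = 16 * sqrt($a_factor) * sqrt( 4 / ( $c + 1 ) );    # volatility coefficient
        my $expected = 1 / ( 1 + 10**( ( 2000 - $elo ) / 400 ) ); # expected result vs. the 2000 average
        return $k * ( $result - $expected );
    }

    # A veteran (C=3) with an assists Elo of 2040, result 0.591, adjustment factor 9.
    printf "Elo change: %+.1f\n", delta_elo( 0.591, 2040, 9, 3 );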

After we have the ΔElo, the new Elo' of a player in a specific stat becomes:

Elo' = Elo + ΔElo

And from that we can derive the expected average performance of a player in each stat, per game:

Rexp = 1/(1 + 10^((2000 - Elo')/400))
Pexp = (Rexp - 0.5) * A * Savg + Savg

which is an "unwinding" of the calculations that brought us from the actual performance to the new rating.

The calculation differs for the three following stats:

  1. SVP - processed as described in Part V.
  2. Win/Loss - processed as a chess game against a 2000 opponent, where the result is:
Rw = Pw/(Pw+Pl), Rl = Pl/(Pw+Pl)
over the committing period.
The only subtlety here is that sometimes a hockey game may result in a goalie win without a goalie loss.
  3. Plus-Minus -
R_+/- = 0.5 + (P_+/- - Savg_+/-) / 10    (10 skaters on ice on average)

Then, via the regular route we get the Elo' and the expected "result" Rexp, and the expected performance is:
Pexp_+/- = (Rexp_+/- - 0.5) * 10 + Savg_+/-
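A quick Perl sketch of the plus-minus round trip (sample numbers again; the post-update Elo is hypothetical):

    #!/usr/bin/perl
    # Sketch: plus-minus result and its unwinding (10 skaters on the ice on average).
    use strict;
    use warnings;

    my ( $pm, $s_avg_pm, $elo_after ) = ( 1.2, 0.05, 2025 );   # per-game +/- over the period

    my $r_pm    = 0.5 + ( $pm - $s_avg_pm ) / 10;                   # result fed into the Elo update
    my $r_exp   = 1 / ( 1 + 10**( ( 2000 - $elo_after ) / 400 ) );  # expected result from the new Elo
    my $pexp_pm = ( $r_exp - 0.5 ) * 10 + $s_avg_pm;                # expected +/- per game

    printf "R=%.3f  expected +/- per game = %.2f\n", $r_pm, $pexp_pm;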

Please note that we do not compute the "derived" stats separately, i.e. the number of points (or SHP, or PPP), GAA (given GA and TOI), or GA (given SA and SV).

An example of the computed expected performances, listing the expectations for the top 30 centers in assists (Adjustment Factor 9), can be seen below:

# Player Pos Team GP A A/GP Avg GP/season Avg A/season Exp. A/GP Proj. A (full season)
1 CONNOR MCDAVID C EDM 43 34 0.791 44.00 33.00 0.706 61.54
2 JOE THORNTON C SJS 41 24 0.585 74.11 52.00 0.665 51.27
3 NICKLAS BACKSTROM C WSH 40 24 0.600 69.20 50.10 0.663 51.85
4 EVGENI MALKIN C PIT 39 27 0.692 62.09 44.73 0.659 55.33
5 SIDNEY CROSBY C PIT 33 18 0.545 61.67 51.50 0.655 46.15
6 RYAN GETZLAF C ANA 36 25 0.694 68.58 45.42 0.648 50.26
7 EVGENY KUZNETSOV C WSH 40 22 0.550 54.75 27.75 0.605 47.43
8 ANZE KOPITAR C LAK 36 16 0.444 72.73 41.55 0.594 40.33
9 ALEXANDER WENNBERG C CBJ 40 28 0.700 59.00 25.67 0.583 52.50
10 CLAUDE GIROUX C PHI 43 25 0.581 61.70 37.60 0.579 47.56
11 TYLER SEGUIN C DAL 42 26 0.619 66.86 31.14 0.566 48.65
12 RYAN O'REILLY C BUF 30 16 0.533 66.00 26.38 0.553 39.23
13 DAVID KREJCI C BOS 44 18 0.409 60.64 32.36 0.528 38.05
14 RYAN JOHANSEN C NSH 41 22 0.537 65.33 27.00 0.523 43.43
15 JOE PAVELSKI C SJS 41 23 0.561 69.64 29.09 0.517 44.21
16 HENRIK SEDIN C VAN 43 17 0.395 75.56 47.81 0.517 37.17
17 DEREK STEPAN C NYR 42 22 0.524 68.00 30.86 0.508 42.31
18 VICTOR RASK C CAR 41 19 0.463 67.00 22.67 0.497 39.37
19 MARK SCHEIFELE C WPG 40 20 0.500 44.50 17.83 0.493 39.23
20 JASON SPEZZA C DAL 35 18 0.514 62.71 37.79 0.490 37.60
21 JOHN TAVARES C NYI 38 16 0.421 68.50 35.00 0.488 37.46
22 MITCHELL MARNER C TOR 39 21 0.538 39.00 21.00 0.484 41.82
23 STEVEN STAMKOS C TBL 17 11 0.647 65.11 29.00 0.474 29.97
24 ALEKSANDER BARKOV C FLA 36 18 0.500 56.75 21.00 0.463 36.51
25 MIKAEL GRANLUND C MIN 39 21 0.538 55.80 24.40 0.460 40.80
26 PAUL STASTNY C STL 40 13 0.325 65.09 34.55 0.457 31.74
27 JEFF CARTER C LAK 41 15 0.366 69.67 24.33 0.448 33.35
28 MIKE RIBEIRO C NSH 41 18 0.439 62.88 33.06 0.447 36.32
29 MIKKO KOIVU C MIN 39 16 0.410 66.83 34.25 0.445 35.14
30 ERIC STAAL C MIN 39 22 0.564 74.46 36.77 0.442 40.99

You can see more of these expectation evaluations on our website: http://morehockeystats.com/fantasy/evaluation .

Now we ask ourselves: how can we use these per-stat evaluations to produce an overall evaluation of a player?


To be concluded...

Saturday, January 7, 2017

On Players Evaluation - Part V (Goaltender's Elo)


Part I
Part II
Part III
Part IV

The goalkeeper is half of the whole team

Soviet proverb from Lev Yashin's times.

After a foray into the calmer lands of teams' evaluation using the Elo rating, it's time to turn our attention to the really juicy stuff - the evaluation of a single player. And we'll start with the most important one - the goaltender. DISCLAIMER: this evaluation concept is still a work in progress and one of several possible implementations of the idea.

By coincidence, it's also the simplest evaluation to make. While many stats describe the performance of a skater (goals, assists, shots, hits, blocks, faceoff wins, etc. - and even one that is usually tracked for goaltenders), only one stat truly describes the goalie's performance: the save percentage. Usually a whole four stats are used to compare goalies - wins (W), save percentage (SVP), goals against average (GAA) and shutouts (SHO) - but I will first show you why three of them are mostly unnecessary. Also, the name save percentage is a bit of a misnomer, since SVP values are usually not multiplied by 100 to look like real percentages, but are more frequently shown between 0 and 1, and would therefore be more properly named 'save ratio' or 'save share'.

Wins are truly the result of a team effort. I always cringe when I read that a goaltender "outdueled" his opponent, when the two barely got to see each other. GAA is much more an indication of how well the defense operates in front of the goalie. Shutouts are, first and foremost, a very rare thing, and secondly, a 15-save shutout should not count the same as a 40-save shutout, although in any of the four stats listed above they create identical entries.

Therefore we feel on firm ground evaluating a goalie's performance through SVP only (with a slight input from shutouts, as described below) - and the Elo function, of course. To start, each goaltender is assigned an Elo rating of 2000 for his first career appearance. We discard performances in which a goalie faced fewer than four shots, because these are usually late relief appearances in garbage time, not really evidence of goaltending in a real hockey game. We only account for them to display the actual SVP accrued in the season so far, and we are considering dropping these appearances completely.

After the game we get the raw SVP from the real-time stats. We adjust it in two ways:
  1. If, in the very rare case, the performance is below 0.7, we set it to 0.7.
  2. If there was a shutout (not a shutout as defined by the NHL, but a performance where the goaltender was on the ice for at least 3420 seconds and did not let in a single goal during that time), we add a shutout bonus to the performance:

Bonus = (Saves - 10) / 200

If there were fewer than fifteen saves in the shutout, the bonus is assigned the minimum value of 0.025. We consider this bonus necessary because the opposing team usually puts in an extra effort to avoid being shut out, even in garbage time.
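As a Perl sketch of that adjustment (using the 3420-second threshold from above; the game line is invented):

    #!/usr/bin/perl
    # Sketch: adjusting a goalie's raw game SVP before the Elo step.
    use strict;
    use warnings;

    sub adjusted_svp {
        my ( $saves, $shots, $toi_seconds, $goals_against ) = @_;
        return undef if $shots < 4;                    # garbage-time relief: not rated
        my $svp = $saves / $shots;
        $svp = 0.7 if $svp < 0.7;                      # floor for the very rare disasters
        if ( $toi_seconds >= 3420 && $goals_against == 0 ) {   # "shutout" in this post's sense
            my $bonus = ( $saves - 10 ) / 200;
            $bonus = 0.025 if $bonus < 0.025;          # minimum bonus below fifteen saves
            $svp += $bonus;                            # the rated SVP may exceed 1 here
        }
        return $svp;
    }

    printf "Adjusted SVP: %.3f\n", adjusted_svp( 28, 28, 3600, 0 );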

Then, given the actual performance, we can calculate the "Elo performance rating":

Rperf = 2000 + (SVP - SVP_vs_opp) * 5000

where SVP_vs_opp is the SVP against the opponent the goalie is facing - effectively the complement of that team's shooting percentage, with shots resulting in empty-net goals excluded - a sort of "expected SVP against that opponent". That means that for every thousandth of SVP above the expectation, the performance is five points above 2000 (the absolute average).

Wait, there seems to be an inconsistency: don't we need the opponents' ratings to calculate Elo changes? Actually, no. Given a player's Elo performance, we can calculate the rating change as a "draw" against a virtual opponent with that performance rating, i.e.


ΔR = K * (0.5 - 1/(1 + 10^((Rperf - Rg)/400)))

where K is the volatility factor mentioned in the earlier posts. Right now we are using a volatility factor of 32, but that may change - including introducing a dependency of this factor on the goaltender's experience.

And the new rating is, naturally,

Rg' = Rg + ΔR

Now we can calculate the goalie's expected SVP for the remainder of the season:

SVP_rem = SVP_avg + (Rg' - 2000) / 5000

where SVP_avg is the league-average SVP. It would be more correct to substitute the weighted average of the remaining teams to be faced (in accordance with the matches remaining), and we will be switching to that index soon.
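Putting the per-game goalie update together in a short Perl sketch (volatility 32 as above; the adjusted SVP, the opponent's expected SVP and the league average are sample values):

    #!/usr/bin/perl
    # Sketch: one goalie rating update and the resulting expected remaining SVP.
    use strict;
    use warnings;

    my $K = 32;    # volatility factor

    sub update_goalie_elo {
        my ( $rating, $adj_svp, $svp_vs_opp ) = @_;
        my $r_perf = 2000 + ( $adj_svp - $svp_vs_opp ) * 5000;    # Elo performance rating
        # Rating change as a "draw" against a virtual opponent rated $r_perf.
        my $delta = $K * ( 0.5 - 1 / ( 1 + 10**( ( $r_perf - $rating ) / 400 ) ) );
        return $rating + $delta;
    }

    my $new_rating = update_goalie_elo( 2060, 0.947, 0.915 );
    my $svp_rem    = 0.915 + ( $new_rating - 2000 ) / 5000;   # league-average SVP of 0.915 assumed
    printf "New rating: %.1f  expected remaining SVP: %.3f\n", $new_rating, $svp_rem;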

We can also calculate the SVP expected from the goalie at the start of the season:

SVP_exp = SVP_avg0 + (Rg0 - 2000) / 5000

where SVP_avg0 is the league's average SVP during the previous season and Rg0 is the goalie's rating at the conclusion of the previous season (including playoffs), or the initial rating of 2000.

We post a weekly update of our Elo ratings for goaltenders, together with their actual and expected SVPs, on our Twitter feed. You can also access the daily stats on our website.

It looks like we're ready to try to take on the skaters' performances. But I'm not sure it's going to fit into one posting.

To be continued...

Wednesday, January 4, 2017

A small digression - about bye weeks.

One of the greatest chess methodologists, if not the greatest one, the sixth World Champion, Mikhail Botvinnik, wrote in one of his books (about the 1948 World Chess Championship Tournament):

A tournament must go on a uniform schedule, so that the participants would get used to a certain pace of competition. ...

The Dutch organizers neglected that. They didn't take into account that an abundance of free days (because of the holidays, and because the number of participants was odd) could break that rhythm and throw a participant out of equilibrium.

When I found out that one of the participants was going to "rest" for six days before the last game day of the second round, I suggested to my colleagues Keres and Smyslov that we submit a protest together. Alas, they didn't support me! Angrily, I told them: "You'll see, one of us is going to rest for six days in a row in The Hague, and on the seventh day he'll lose without putting up any resistance..."

And the first part of my prophecy came true: after the six-day rest, Keres, pale as a sheet, sat down at the chessboard across from me, worrying, probably, that the second part would come true as well...

Keres lost a rather short and lopsided game.

Monday, January 2, 2017

On Players Evaluation - Part IV (Teams Elo Projections)


Part I
Part II
Part III

We left our reader at the point where we demonstrated how to produce Elo ratings for hockey teams over a season (and over the postseason too, if anyone wondered) and how to apply them to the upcoming games of the rated teams.

However, in its home domain, chess, Elo is rarely used to produce single-match outcome projections. It's much more commonly used to create a long-term projection, such as for a whole tournament, which in chess usually lasts between five and thirteen rounds.

Therefore the question arises: shouldn't we try to use our newborn Elo ratings for long-term projections? And the answer is an unambiguous 'Yes!' We can and should create projections for a team over longer spans, such as seven days ahead, thirty days ahead, or even through the end of the season!

How do we do it? Since we have computed the Elo ratings for all teams, and we know every team's schedule ahead, we can run the Elo expectation on all matchups during the requested span and sum them up. And since we assume that each team performs exactly according to expectation, the Elo ratings do not change during the evaluation span.

E_team = Σ(E_match1, E_match2, ..., E_matchn)

All good? No. There is one more finesse to add. The produced expectations are all calculated on a 2-0 scale per game, assuming only 2 points are in play in each matchup. However, due to the loser's point that is not so: on average, 2 + N_OT/SO / N_total points are handed out in every match over the season (where N_OT/SO is the number of games decided in overtime or a shootout). So we need to compute that average, divide it by two (because there are two teams in each match) and multiply each team's expectation by the resulting factor. By doing so we obtain a reliable Elo expectation, such as the one in the table below, as of Jan 2nd, 2017. Spans of 7 days, 30 days and through the end of the season are displayed (games, expected points and total).

Elo ratings and projections for the 2016-17 season
# Team Div Elo Pts Gin7 Pin7 Tin7 Gin30 Pin30 Tin30 GinS PinS TinS
1 Columbus Blue Jackets MET 2265.22 56 4 6 62 14 23 79 47 79 135
2 Pittsburgh Penguins MET 2186.57 55 1 2 57 11 16 71 44 65 120
3 Minnesota Wild CEN 2180.88 50 3 4 54 14 21 71 46 68 118
4 San Jose Sharks PAC 2137.87 47 3 4 51 14 20 67 45 62 109
5 Washington Capitals MET 2135.54 49 4 4 53 15 18 67 46 59 108
6 Montreal Canadiens ATL 2117.99 50 4 5 55 14 18 68 45 58 108
7 New York Rangers MET 2135.43 53 3 4 57 11 14 67 43 54 107
8 Chicago Blackhawks CEN 2103.27 51 3 4 55 12 15 66 42 52 103
9 Anaheim Ducks PAC 2105.41 46 3 4 50 13 18 64 43 55 101
10 Edmonton Oilers PAC 2092.89 45 4 4 49 14 16 61 44 53 98
11 Ottawa Senators ATL 2088.34 44 2 2 46 11 11 55 45 52 96
12 Toronto Maple Leafs ATL 2097.27 41 3 4 45 12 14 55 46 54 95
13 St. Louis Blues CEN 2066.58 43 2 2 45 12 12 55 44 51 94
14 Boston Bruins ATL 2079.41 44 4 5 49 15 17 61 43 49 93
15 Carolina Hurricanes MET 2093.06 39 4 5 44 13 13 52 46 53 92
16 Los Angeles Kings PAC 2066.68 40 4 4 44 14 16 56 45 52 92
17 Philadelphia Flyers MET 2079.35 45 3 3 48 12 13 58 43 46 91
18 Calgary Flames PAC 2076.79 42 4 5 47 14 16 58 43 49 91
19 Tampa Bay Lightning ATL 2068.90 42 4 4 46 13 14 56 44 48 90
20 New York Islanders MET 2070.87 36 2 3 39 12 14 50 46 51 87
21 Florida Panthers ATL 2059.66 40 4 5 45 13 14 54 44 46 86
22 Nashville Predators CEN 2055.15 38 4 4 42 14 14 52 46 48 86
23 Dallas Stars CEN 2052.77 39 3 3 42 13 13 52 44 46 85
24 Vancouver Canucks PAC 2049.05 37 4 5 42 12 15 52 44 46 83
25 Detroit Red Wings ATL 2033.62 37 3 3 40 13 12 49 45 43 80
26 Winnipeg Jets CEN 2017.50 37 4 4 41 14 14 51 43 40 77
27 Buffalo Sabres ATL 2009.45 34 3 3 37 13 12 46 46 41 75
28 New Jersey Devils MET 1994.66 35 5 4 39 14 12 47 45 37 72
29 Arizona Coyotes PAC 1921.41 27 3 2 29 12 8 35 45 30 57
30 Colorado Avalanche CEN 1910.42 25 3 2 27 12 7 32 46 29 54

That factor is currently about 1.124 (i.e. about a quarter of all games are decided past regulation).
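Here is a compact Perl sketch of the span projection; the ratings come from the table above, the sample schedule is invented, and 1.124 is the points factor just mentioned:

    #!/usr/bin/perl
    # Sketch: expected standings points for one team over a span of scheduled games.
    use strict;
    use warnings;

    my %elo = ( CBJ => 2265.22, PIT => 2186.57, MIN => 2180.88, WSH => 2135.54 );
    my $points_factor = 1.124;   # average points handed out per team per game

    sub expected_points {
        my ( $team, @opponents ) = @_;
        my $points = 0;
        for my $opp (@opponents) {
            # Elo expectation of the team in this matchup, on the plain 2-0 scale...
            my $e = 1 / ( 1 + 10**( ( $elo{$opp} - $elo{$team} ) / 400 ) );
            $points += 2 * $e;
        }
        # ...then stretched by the loser-point factor.
        return $points * $points_factor;
    }

    # A hypothetical upcoming stretch for Minnesota.
    printf "Expected points: %.1f\n", expected_points( 'MIN', qw(CBJ WSH PIT WSH) );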

So you know what's good for the people?
But the people consists of men...

The team projection leaves us wanting more. After all, don't we want to be able to evaluate individual players and somehow factor that into the projection, to reflect injuries and other reasons that force top players out of the lineups? Stay tuned.

To be continued...