Thus, in our case we discovered a significant inconsistency between our model and the projections it produced on the one hand, and the quality of these projections on the other. We decided to dig deeper, and as a result we came up with the adjustment described below.
We saw that the rating changes of the shooter and of the goalie were not expressive enough; they needed much more finesse. Therefore we decided to calculate the updated rating of the shooter as:
$R_s = R_{s0} + \Delta R_{s/g}$ (1)
where $\Delta R_{s/g}$ is the Elo adjustment relating the average rating of the shots the player executed in the game (used as the player's rating), the number of goals scored (the result), and the baseline goalie rating (the starting rating of 2500):
$\Delta R_{s/g} = F \cdot N_{shot} \cdot \left( S - \frac{1}{1 + 10^{\left(2500 - \frac{1}{N_{shot}} \sum_{s} R_{shot}\right) / 400}} \right)$ (1a)
and after a few iterations we found that the optimal value of $F$ is 6.
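To make equation (1a) concrete, here is a minimal Python sketch of the per-game shooter adjustment. The function name is ours, and the interpretation of $S$ as the fraction of the skater's shots that became goals is our assumption, not something stated explicitly above.

```python
BASE_GOALIE_RATING = 2500.0  # baseline goalie rating from the text
F = 6.0                      # the factor found optimal after a few iterations


def shooter_delta(shot_ratings, goals):
    """Return Delta R_{s/g} for one skater in one game (equation 1a).

    shot_ratings : list of per-shot ratings R_shot for the skater's shots
    goals        : number of those shots that were goals
    """
    n_shots = len(shot_ratings)
    if n_shots == 0:
        return 0.0

    avg_shot_rating = sum(shot_ratings) / n_shots
    # Standard Elo expected score of the average shot rating
    # against the baseline goalie rating of 2500.
    expected = 1.0 / (1.0 + 10.0 ** ((BASE_GOALIE_RATING - avg_shot_rating) / 400.0))
    actual = goals / n_shots  # assumed meaning of S: the per-shot result
    return F * n_shots * (actual - expected)
```

For example, under this reading a skater whose five shots average a rating of 2500 and who scores once gets $\Delta R_{s/g} = 6 \cdot 5 \cdot (0.2 - 0.5) = -9$.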
These values of $\Delta R_{s/g}$ are also accumulated for the goalie at the other end of the shots, but with the opposite sign. After all skaters of the game have been processed, we apply the accumulated values to the goalie ratings:
$R_g = R_{g0} + \Delta R_{g/s}$ (2)
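A sketch of this bookkeeping, reusing the `shooter_delta` function from the previous sketch, might look as follows; the data layout and function name are illustrative assumptions.

```python
from collections import defaultdict


def update_game_ratings(skater_shots, shooter_ratings, goalie_ratings):
    """Apply equations (1) and (2) for a single game.

    skater_shots    : list of (skater_id, goalie_id, shot_ratings, goals)
    shooter_ratings : dict skater_id -> current rating R_{s0}
    goalie_ratings  : dict goalie_id -> current rating R_{g0}
    """
    goalie_accumulator = defaultdict(float)

    # Equation (1): update each shooter, accumulating the opposite-sign
    # delta for the goalie who faced the shots.
    for skater_id, goalie_id, shot_ratings, goals in skater_shots:
        delta = shooter_delta(shot_ratings, goals)
        shooter_ratings[skater_id] += delta
        goalie_accumulator[goalie_id] -= delta

    # Equation (2): only after all skaters of the game have been processed
    # are the accumulated values applied to the goalie ratings.
    for goalie_id, accumulated in goalie_accumulator.items():
        goalie_ratings[goalie_id] += accumulated
```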
These adjustments brought the accuracy of projecting game outcomes to 59.5% in the regular season and 58.9% in the playoffs. The log loss on the shot/goal predictions was essentially unchanged at 0.211, and the Elo inflation was slightly negative, at about -0.0007 per event.