Watchouts of MyHockeyRankings

While MyHockeyRankings (MHR) has many benefits, I’ve seen and heard enough parents use the site in ways it was never intended. Instead of being used for good, it can be used for evil, to the detriment of player development and the game.

Let’s get the most obvious one out of the way first…

1. Focusing on your team ranking

Coaching games to maintain or improve your team’s rank/rating goes against the original intent of the site. Putting a focus on goal differential over playing games to your team’s fullest capability is essentially poor sportsmanship. One example is not pulling your goalie late in a close game to minimize the risk of lowering your rating even more. Another is keeping only your best players on the ice late in the 3rd period even when the outcome is clear. Going into a game knowing the Expected Goal Differential (EGD) and playing to match or exceed that difference should not be on the mind of any coach before or during a game.

2. Using MHR rating or ranking as the measurement of team success

As in business, a team cannot look at a single metric to determine how well it is performing. Usually you need 2 or 3 attributes to get the full picture of how an organization is doing. Things like player development, win/loss record during league or tournament play, and learning to compete are much more important than any single rating metric.

3. Playing for a highly ranked vs middle-of-the-pack team

Coaching certainly plays a role in the development and success of a team. However, the size of the pool of players in an area and the multi-year commitment to player development of a club or region is really the biggest factor in how good a team is. This is why regions like Toronto, Boston and Minnesota have so many strong teams. They have both robust club programs to develop players from Mites to Midgets and a deep group of players in their programs to choose from. Thus, as a parent, it really shouldn’t matter if your child’s team is highly ranked; what matters is that they continue to develop on a path that helps them be the best hockey player they can be.

There are also several weaknesses in an algorithm that uses only goal differential for team ratings. Here are a few of them:

4. Lack of uniformity in game format and duration

Not all games are created equal. While USA Hockey tries to standardize games across divisions, the reality is that a large portion of the games included in the rankings do not follow those guidelines. These can include tournament, exhibition and pre-season games. The attributes that vary across games include game length and how regulation ties are handled (overtime vs. shootout vs. no extra time). Last year, we were at a tournament with 90-second penalties where the total game was only 75% the length of a regulation game. This season our pre-season games were two twenty-minute running-time games. There is no way to normalize scores based on the running time of a game.

5. Games scheduled between teams with an Expected Goal Differential (EGD) greater than 7

Per the original MHR manifesto, only scheduling games between teams that will be competitive makes perfect sense. However, some regions end up with divisions where there is a large discrepancy between the top and bottom teams. Since MHR caps the goal differential at 7 per game, I have seen several cases where the lower-rated team’s rating went up even though it lost by 10 goals, because the teams’ rating difference was 8 or 9 goals heading into the game. I would recommend changing the algorithm to exclude games between teams whose ratings differ by 7 goals or more.
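MHR’s actual formula is not public, so as a purely illustrative sketch, here is a simplified Elo-style update (the cap and smoothing factor are my assumptions) that shows how capping the counted goal differential at 7 can raise the rating of a team that loses by 10:

```python
# Hypothetical, simplified goal-differential rating update.
# This is NOT MHR's algorithm; it only illustrates the capping anomaly.

CAP = 7   # maximum goal differential counted per game (per the site)
K = 0.5   # illustrative smoothing factor (assumed)

def update(rating_a, rating_b, actual_goal_diff):
    """Return new ratings after a game, from team A's perspective."""
    expected = rating_a - rating_b                  # EGD
    capped = max(-CAP, min(CAP, actual_goal_diff))  # cap at +/-7
    delta = K * (capped - expected)                 # surprise vs. EGD
    return rating_a + delta, rating_b - delta

# Team A is rated 9 goals better and then wins by 10:
a, b = update(95.0, 86.0, 10)
# The capped differential (7) is *below* the EGD (9), so the winner's
# rating falls to 94.0 and the 10-goal loser's rises to 87.0.
print(a, b)
```

Under any update of this shape, a blowout between badly mismatched teams always moves the favorite’s rating down, which is why excluding such games seems reasonable.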

It is my experience that the MHR rating should be taken with a grain of salt; statistically, there is probably a standard deviation of somewhere between 0.50 and 0.75 rating points. Once again, though, if you are using the site for its intended purpose, the exact rating for your team shouldn’t matter, and this natural variance makes any precise rating even less meaningful. Here are some additional factors that contribute to the standard deviation:

6. Tired teams

Most tier teams regularly play 4 to 6 games in a weekend. While fatigue is something all teams need to deal with, when the key metric for MHR is goal differential, it is very likely that the final score between two identical teams will not be the same at the end of a 6-game weekend as it would have been on the first day. I have been surprised on many occasions when I expected to see a blow-out between two teams, but it was clear the higher-rated team couldn’t maintain the same level of play for 3 full periods in their final game.

7. Backup goalie dynamic

Ratings are a weighted average across both goalies. But on many teams there can be a big gap between the top goalie and the second goalie; on others, there may be only one goalie. One season, one of my kid’s teams had roughly a 1.5-goal rating difference between its two goalies. In this situation, wins vs. losses is a much better indicator of the team’s success than goal differential.

8. Asymmetric Actual Goal Differential

In my experience, the actual goal differential appears asymmetric to the EGD when the EGD is about 4 or more. Usually this happens when teams from different divisions play each other (i.e. when the higher-ranked team has played most of its games against higher-ranked teams and the lower-ranked team has traditionally played lower-ranked teams). The ratings don’t reflect an apples-to-apples set of opponents, and when the two teams play, the higher-rated team can significantly exceed the EGD.

I am sure there are several other factors I have missed that contribute to the rating not being as precise as possible.

So how should you look at the ratings?

As mentioned above, take it with a grain of salt and don’t focus on the specific number; look instead at the peer group your team is placed in to see how it is doing relative to others. In addition, don’t be concerned with any particular rating or ranking. Focus instead on player and team development, because at the end of the day that is what youth hockey is all about.

NHL Draft Picks Who Played in the NHL by Round

Everyone pays attention to first-round draft picks and likes to point out the busts that are sure to be found in the top 30 (or 31) picks every year (e.g. Griffin Reinhart or Jarred Tinordi). They also like to point out the later-round break-through players (e.g. Jamie Benn or Patric Hornqvist). However, most people don’t realize that after the first round, the drop-off in the likelihood of playing at least one game in the NHL is pretty significant. Once you get to the 5th round there really isn’t a big difference in the probability that a prospect will make the NHL, and thus there is only a minor difference in the value of those picks (to be discussed in more detail in a later post). Players drafted in the first two rounds make up half of all drafted players who play at least a single game.

While playing a single game in the NHL is a good first cut at defining ‘success’ in the NHL (and fulfilling childhood dreams), it probably isn’t the best metric for determining how good a team is at drafting. Playing just a single game sets the bar a little low. In the next post we will discuss a better second view of drafting success.

2011-12 Top 3 Underpaid Forwards

Here are the results of our True Value analysis of the top 3 underpaid forwards during the 2011-12 campaign. All three were making $1.25M or less and were worth at least $3M more than their cap hit. Pretty impressive numbers for all three of them.

 

Player             Team   Goals   Assists   Points   Cap Hit ($M)   True Value ($M)   Amt Underpaid ($M)
1. Jordan Eberle   EDM    34      42        76       1.16           5.13              3.97
2. Jamie Benn      DAL    26      37        63       0.82           4.34              3.52
3. PA Parenteau    NYI    18      49        67       1.25           4.69              3.44
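For readers who want to sanity-check the last column, the underpaid amount is simply True Value minus Cap Hit (all figures in $M, taken from the table above):

```python
# Verify Amt Underpaid = True Value - Cap Hit for the table above.
# Figures in $M: (cap_hit, true_value)
players = {
    "Jordan Eberle": (1.16, 5.13),
    "Jamie Benn":    (0.82, 4.34),
    "PA Parenteau":  (1.25, 4.69),
}

for name, (cap_hit, true_value) in players.items():
    underpaid = true_value - cap_hit
    print(f"{name}: underpaid by ${underpaid:.2f}M")
```

Running this reproduces the 3.97, 3.52 and 3.44 figures in the table.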

Eberle still has one more year left on his entry-level contract and Benn is an RFA this summer. The big winner from such a productive year is PA Parenteau, who is a UFA on July 1st and should expect a big pay raise.