Rankings and Kendall’s W

How can you compare how similar two rankings are? For instance, US News and Consumer Reports may both rate hospitals. If they give identical rankings, the two are obviously the same. But what if the rankings differ for 2 hospitals? For 4 hospitals? How can one quantify the similarity of rankings?

One method for doing so is Kendall’s W (or Kendall’s coefficient of concordance). Kendall’s W is a non-parametric measure of how similar two or more rankings are. Whereas the Spearman rank correlation measures the similarity of any two sets of rankings, Kendall’s W can measure agreement among more than two raters.

A common heuristic for judging Kendall’s W is the following:

  • ≤0: poor agreement
  • >0 to ≤0.2: slight agreement
  • >0.2 to ≤0.4: fair agreement
  • >0.4 to ≤0.6: moderate agreement
  • >0.6 to ≤0.8: substantial agreement
  • >0.8 to ≤1.0: almost perfect agreement
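The scale above is only a rule of thumb; the statistic itself is easy to compute from rank sums, since W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of each item's total rank from the mean total rank. Below is a minimal sketch in Python with NumPy, assuming each row holds one rater's ranks (1..n, with no tie correction); the function name and the example rankings are my own illustration, not from the post:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance (no tie correction).

    ranks: m x n array-like, where each of the m rows is one rater's
    ranking of the same n items, using ranks 1..n.
    Returns W in [0, 1]; 1 means all raters agree exactly.
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    R = ranks.sum(axis=0)                 # total rank for each item
    S = ((R - R.mean()) ** 2).sum()       # squared deviations from mean total
    return 12.0 * S / (m ** 2 * (n ** 3 - n))

# Hypothetical example: three polls ranking the same five teams.
polls = [
    [1, 2, 3, 4, 5],  # e.g. an AP-style ranking
    [2, 1, 4, 3, 5],  # a second poll swapping some neighbors
    [1, 3, 2, 5, 4],  # a third poll
]
w = kendalls_w(polls)   # about 0.756: "substantial" on the scale above
```

Two sanity checks follow from the formula: if every rater submits the identical ranking, S is maximal and W = 1; if two raters submit exactly reversed rankings, every item's total rank is the same, so S = 0 and W = 0.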

To demonstrate how to use Kendall’s W, I look at last year’s college basketball rankings. I chose the top 5 teams according to the AP poll and see how other polls (RPI and Ken Pomeroy’s rating) would rank these same teams. You can see the calculations of Kendall’s W for this example HERE.
