In a previous post, I spoke about how the college rankings put out by the magazine Washington Monthly differed considerably from those put out by US News & World Report.
There is a fundamental problem involved in ranking things in some order: all the quality measures used have to be reduced to a single number so that the things being ranked can be compared along a single scale.
This raises three questions that have to be decided. What criteria should be used? How can the selected criteria be translated into quantifiable measures? And how should the different measures be weighted in order to arrive at the final number?
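To make the weighting question concrete, here is a minimal sketch of how a composite score might be computed once criteria have been chosen and quantified. The criteria names, numbers, and weights below are entirely hypothetical illustrations, not the actual formulas used by either publication.

```python
# Minimal sketch of a composite ranking score.
# All criteria, values, and weights are hypothetical; they are not the
# actual formulas used by US News & World Report or the Washington Monthly.

def composite_score(measures, weights):
    """Combine several 0-100 quality measures into one weighted score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(measures[name] * weight for name, weight in weights.items())

# Step 2: each chosen criterion quantified on a 0-100 scale (made-up numbers).
school = {
    "peer_assessment": 82,
    "graduation_rate": 90,
    "financial_resources": 75,
}

# Step 3: the weights -- the final ranking depends heavily on this choice.
weights = {
    "peer_assessment": 0.25,
    "graduation_rate": 0.40,
    "financial_resources": 0.35,
}

print(composite_score(school, weights))  # 82.75
```

Two rankers could agree on every number above and still produce different orderings simply by choosing different weights.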
These questions rarely have unique answers, and there is seldom consensus on how to answer them. The two college rankings mentioned above are an example of disagreement over just the first question alone.
The Washington Monthly said that they felt that “Universities should be engines of social mobility, they should produce the academic minds and scientific research that advance knowledge and drive economic growth; and they should inculcate and encourage an ethic of service,” and they devised measures accordingly.
US News & World Report instead looks mainly at the resources that universities have and their prestige among their peers. For example, I think that 25% of their final score is based on the “peer assessment score,” which reflects how academics at peer institutions rate each university. Such a measure is going to guarantee a high ranking for those universities that are already well known and well regarded. The ratings also look at the test scores of entering students, graduation and retention rates, the size of the endowment and other financial resources, the amount that alumni give to the schools, and so on. All these things are also tied to the perception of prestige: high-scoring students are likely to apply to high-prestige institutions, and are more likely to graduate, get well-paying jobs, earn more money, and eventually give more back to their schools. There is very little that an institution can do in the short term to change any of these things, which is why the USN&WR ratings tend to be quite stable from year to year.
The problem with both sets of ratings is that they do not really measure how well students are taught or how much they learn and grow intellectually, socially, and emotionally. In other words, neither survey tells us how much and what kind of growth students experience during their school years. To me, that is a really important thing to know about a school.
There is one survey that I think does give useful information about some of these things: the NSSE, the National Survey of Student Engagement. This is a research-based study that looks at how much students experience good educational practices during their college years. It does this by surveying students in their first and final years of school. Many schools (including Case) administer these surveys, and the results provide each school with important information on its strengths and weaknesses in various areas. The results are provided confidentially to schools for internal diagnostic purposes and are not compiled into a single overall score for ranking purposes.
Should the NSSE also produce a single quality score so that schools can be compared? In a future posting, I will argue that such rankings may actually do more harm than good, even if the measures used to arrive at them are valid.