Kevin Wallsten on the City Journal College Rankings
"Just because you know [a small group of conservative faculty] doesn’t mean that you have a truly diverse faculty," Wallsten argues.
This month, we bring you an exclusive interview with Kevin Wallsten, an Adjunct Fellow at the Manhattan Institute and a political-science professor who organized City Journal’s College Rankings.
In the 2025 rankings, Furman earned high marks for student experience. But the university struggles on outcomes: Furman ranks 89th out of 100 on the Price to Earnings Premium, meaning students take longer than peers at comparable schools to pay back the cost of their education. Wallsten also flagged concerns about curriculum requirements and DEI mandates. Most strikingly, nearly 98% of faculty campaign donations in the 2023–24 election cycle went to liberal or Democratic causes. Wallsten argues this reflects a faculty culture with little room for minority viewpoints.
Read the full interview below:
🎓 Tell me about yourself and your role with the Manhattan Institute.
I’m a professor of political science. I have worked and published academically on questions connected to American politics. By training, I’m a survey researcher, so I teach a lot of courses on methods and measurement, but also on broader social-science questions. I’ve been working on higher education for four or five years, including campus speech-climate and viewpoint-diversity issues. I’ve also done projects investigating DEI in both higher education and in the military.
Right around 2020 or 2021—peak woke, if you will—I was already active in exploring some of these questions. Being a social scientist who’s primarily quantitative and empirical, my goal was always to, to the extent possible, measure what was going on; to figure out the size of these commitments to DEI and what measurable impacts they were having on student experience and free speech. Through that work I started partnering with The Manhattan Institute, and we started formulating an ambitious plan to think about college rankings in a different way.
🏆 Why did you set out to create a new college ranking system?
I think everybody is familiar with the U.S. News & World Report as the preeminent college-rankings approach. There have been some others recently that I think are useful additions to this space. The Wall Street Journal, for instance, started a college ranking a few years ago. Forbes Magazine, Washington Monthly, and of course, the Foundation for Individual Rights in Education (FIRE) all have rankings as well. Each ranking is imperfect in a variety of ways, yet they’re incredibly influential with both prospective students and the administrators who run universities. There’s an old joke that the U.S. News & World Report is actually governing our universities because administrators pay so much attention to their preferences.
And yet, for all the attention these rankings get, some are seriously deficient. U.S. News & World Report is really a reputational ranking. They take into account things like admissions rate, SAT scores, etc. None of that tells you what life will be like when you show up on campus, or how you’re going to end up afterwards.
Forbes, Washington Monthly and The Wall Street Journal have all realized this, and they’ve started doing more outcomes-based rankings. They’re really trying to get at the difference in impact on a student’s long-term career prospects depending on the school they attend. But even this is deficient because we have this black box of what the educational experience itself is like that we’re not able to peer into. All of this happened at the same time that a lot of craziness unfolded on university campuses, and people began thinking, “how is it that Princeton or Harvard’s ranking has not budged in 25 years in the U.S. News & World Report?”
That’s why we had this idea that prospective students would be better served with a ranking system that tried to evaluate what’s going on in the classroom with other students and with the faculty. We hoped that by shining a light on these things, especially in a quantitative, measurable, transparent way, universities could begin to critically examine the kinds of decisions they’re making as institutions and consider reforming themselves.
🛠️ Would you explain the rankings’ methodology?
The first thing we wanted to do is acknowledge that outcomes matter. This is not a purely ideological or political project where we’re pretending that whether you get a job at the end is unimportant. We also don’t want to say that the usual metrics are telling you nothing about universities, because they obviously are. And while we think FIRE’s method of focusing only on free speech is valuable, we didn’t want to only focus on one dimension. We wanted to take the work others had already done, and put it alongside other things that have received less attention.
To capture outcomes of education, we looked at graduation rate, retention rate, and median graduate earnings. We wanted to look at how long it takes students to pay back the cost of their education and whether they will have a solid alumni network. Our methodology for this was very close to what The Wall Street Journal does. We created a model that says, “Here’s what a student is predicted to earn based on the SAT scores of the incoming class, and here’s how much they actually do earn.” The difference is considered the impact of education. We do that with graduation rates as well.
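To make that value-added idea concrete, here is a minimal sketch of how such a model can be fit. The data, column names, and simple one-variable regression are assumptions for illustration; the rankings’ actual model may use different controls and functional form.

```python
# Minimal sketch of the value-added idea described above (illustrative only).
# The schools and figures below are hypothetical.
import pandas as pd
import statsmodels.api as sm

schools = pd.DataFrame({
    "school": ["A", "B", "C", "D"],
    "median_sat": [1250, 1390, 1180, 1460],
    "median_earnings_10yr": [62000, 78000, 54000, 91000],
})

# Predict graduate earnings from the SAT scores of the incoming class.
X = sm.add_constant(schools["median_sat"])
model = sm.OLS(schools["median_earnings_10yr"], X).fit()

# The residual (actual minus predicted) is read as the school's "value added".
schools["predicted_earnings"] = model.predict(X)
schools["earnings_value_added"] = (
    schools["median_earnings_10yr"] - schools["predicted_earnings"]
)
print(schools[["school", "earnings_value_added"]])
```

The same residual logic can be applied to graduation rates, as the interview notes.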
Then we started thinking carefully about what else we needed in order to be holistic and capture the whole college experience. We came up with three other broad buckets that we think have received insufficient attention in previous rankings. Those are educational experience, which includes curricular quality; leadership quality, which asks what direction the president and administration are pointing the school in; and, of course, the student experience, looking at how much of a school’s community involvement is benign or beneficial versus malign. We think it’s important, for instance, that there be a lively sports culture, lively Greek life, and a dense organizational environment on campus.
To measure this, we developed a lot of in-house measures. We conducted a census of student organizations by scraping every university’s website and figuring out how many organizations there are, and then we classified them. How many are religious or spiritual? How many are academic? How many are political? Within the political category, how many are left-wing organizations and how many are right-wing?
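As a rough illustration of what that classification step can look like, here is a sketch of a keyword-based classifier applied to a scraped list of organization names. The categories echo the ones mentioned above, but the keywords and organization names are invented for the example.

```python
# Illustrative keyword classifier for a list of scraped student-organization
# names. Categories and keywords are placeholders, not the rankings' codebook.
CATEGORIES = {
    "religious": ["ministry", "hillel", "catholic", "christian", "muslim"],
    "political_left": ["democrats", "progressive", "socialists"],
    "political_right": ["republicans", "conservative", "federalist"],
    "academic": ["society", "honor", "research", "engineering"],
}

def classify(org_name: str) -> str:
    name = org_name.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in name for kw in keywords):
            return category
    return "other"

orgs = ["College Democrats", "Campus Ministry", "Engineering Society", "Chess Club"]
counts: dict[str, int] = {}
for org in orgs:
    label = classify(org)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```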
In many other cases, we took work that had already been done in a piecemeal fashion by other organizations. There’s an organization called ACTA which conducts the What Will They Learn? rankings. It’s a very sophisticated and detailed study of general education requirements. We also used FIRE’s reports on campus speech policies and a lot of their survey data. We took work from organizations like Speech First that have measured the presence of bias reporting systems and DEI mandates on campus.
💬 What kind of feedback did you get?
I’m a parent, so I have a lot of people in my network who are very interested in this work, and I hear positive things from them. The really surprising thing is the extent to which the project has found an audience with university presidents, deans, and faculty members. That’s been the most encouraging part. So I would say it’s served its dual purpose. We wanted the rankings to reach an audience among both of those camps, and as far as I can tell, they have.
🏛️ Why do you think administrators and faculty are so keen on the rankings?
Universities realize that they face a converging set of challenges from an enrollment perspective. First is the demographic cliff: the inevitable decline in college attendance that’s driven by the fact there will be fewer people who are of college age coming through the system because of the declining birth rate. The second part, of course, is declining trust in higher education, which is driving potential students—particularly young men and young conservatives—away from universities. I have a piece I published on this in City Journal. The numbers are really shocking.
On the trust front—I think public discourse around universities can fall victim to two problems. The first is, if you don’t follow these things carefully, you think every school is Harvard, in the sense that Harvard receives a vastly disproportionate amount of attention in the news. We wanted our rankings to alert people to the fact that not all colleges and universities are Harvard. There are some schools out there that are still providing a nice return on investment that aren’t plagued by all of the craziness that you see reported in The New York Times.
The second problem is this idea that universities are monolithic and that everybody in the university is aligned with the worst professor that you see at a protest or an encampment. In fact, there are reformers in every university. There are people who do not agree with the direction of the institution. You can find them in the bureaucracy. You can find them in the faculty. You can find them everywhere. Our rankings were designed to empower those reformers within the university by giving them the context and hard numbers they need to fight back on any given question. And I think that has been successful.
⚖️ Some have criticized your rankings for failing to articulate a unified pedagogical theory, and consider your grading of institutions according to particular conceptions of political and cultural virtue problematic. How do you respond to that?
Any rankings necessarily require making value judgments. That’s just true whether you’re ranking the greatest bands of all time, the greatest NBA players, or which universities are the best. That critique is fair enough in the sense that it is true—we have things we prioritize, value, and think are important, but so does every other ranking system, either implicitly or explicitly. What we tried to do is be very transparent about that and provide the weights we employ in our overall assessment of universities. You can disagree with that; ask “why is this thing 2% and that thing 5%?” And our response is, yeah, we do have priorities as an institution.
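As a simple illustration of how transparent weights translate into an overall score, here is a sketch of a weighted composite. The category names come from the interview, but the specific weights and sub-scores are placeholders, not the rankings’ actual values.

```python
# Sketch of a weighted composite score built from sub-category scores.
# Weights and sub-scores below are invented for illustration.
weights = {
    "outcomes": 0.40,
    "educational_experience": 0.25,
    "leadership_quality": 0.15,
    "student_experience": 0.20,
}

def composite_score(subscores: dict[str, float]) -> float:
    """Weighted average of 0-100 sub-category scores."""
    return sum(weights[k] * subscores[k] for k in weights)

example_school = {
    "outcomes": 42.0,
    "educational_experience": 55.0,
    "leadership_quality": 60.0,
    "student_experience": 88.0,
}
print(composite_score(example_school))
```

Publishing the weights is what lets a reader re-sort by the sub-category they care about, as described below.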
In setting up the website design, it was very important to me that people would be able to search by sub-category for precisely this reason. If someone thinks we’re giving too much weight to this or that, they can use the filtering tool and sort through our rankings in that way. I joke that if someone doesn’t like the way the rankings reward schools with less DEI infrastructure, all they have to do is click on “Commitment to Meritocracy,” and rather than search from the top of the list down, scroll to the bottom, find the most DEI-committed university in the country, and just send their kids there.
So the first response is, yes, the rankings have an ideological slant, but we are transparent about that. If you don’t like our priorities, then you can take our rankings with the appropriate dose of salt, but we’re not hiding anything. My second response is, compared to what? You can quibble with our methodology, but as we talked about at the start, U.S. News & World Report, Forbes, The Wall Street Journal, and Washington Monthly provide zero insight into many of the things that we’re trying to capture. I think skeptics should ask, “Is the average parent or prospective student better or worse off with the information that we provide?” I would argue that they’re far better off.
🔍 Your rankings cover 100 universities. How did you choose that group?
There’s a practical argument here as well as a substantive one. This is our first iteration, and we weren’t quite sure how it was all going to turn out. And, as I said, it’s a serious undertaking in terms of collecting data that nobody’s collected before. If you’re going to do a census of every student organization on every campus, that becomes really labor intensive really quickly. Plus, 100 is a nice round number.
We also wanted to identify the schools that we think people care most about. Regional schools are important—a lot of people attend them and there are very important things happening on those campuses—but to start, we wanted to identify those high-profile campuses that come right to the top of people’s minds. So we tried to take all the schools that appear in the top 100 of the traditional rankings. We have the Ivies, we have large state flagship universities from most of the states, and some representation for liberal-arts colleges as well, which we thought were important. It’s not a comprehensive list, but we wanted to make sure that the high-profile universities that are important within a region or nationally would be represented in our data.
⭐ Why was Furman ranked 50th?
What Furman does well—though I don’t know if Furman the institution deserves credit for it necessarily—is the quality of what we call the “student experience.” Furman scores very, very highly in student ideological pluralism. Furman also does very well on student organizational balance. These indicators matter for students who want to go to a school where there’s ideological balance in the classroom and where they’re likely to feel comfortable within their peer group on campus.
We would encourage Furman to highlight this aspect of their campus life. It’s a place that is ideologically very moderate and you can see that reflected in how students feel. To continue with what Furman does well, we have a measure of what we call “Jewish campus climate.” This was an issue that was very much consuming people’s attention when we were putting the rankings together. There were prominent encampments and a lot of instances of anti-Semitism unfolding on campuses. Furman has proved largely immune from those trends in higher education over the last two years, and we think it deserves credit for that.
We also think that, all things being equal, universities that have strong ROTC programs just provide a different educational environment. Furman does very well on this as well. So for student experience, we have Furman right near the top. If I were a prospective student, and I cared a lot about student experience, this is something I would look at and take very seriously.
Furman is performing poorly on the “outcomes” end of things. As I mentioned earlier, we have a model that’s very similar to what The Wall Street Journal does. But we also take into account the caliber of the student coming in, and assess whether the school overperforms or underperforms what we would anticipate given that quality of student. Furman doesn’t do very well here. There’s also a measure that’s put together by the organization Third Way called the Price to Earnings Premium. This is, I think, a really helpful way for prospective students to think about their education, and it puts a number to how long it will take to pay back the cost of an education. We think that’s a good measure, and unfortunately, Furman performs very poorly. They’re 89th out of our 100 schools.
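For readers who want to see the arithmetic behind a payback-period measure of this kind, here is a back-of-the-envelope sketch in the spirit of Third Way’s Price to Earnings Premium. The dollar figures are invented, and the real metric is computed from institution-level federal data, so treat this only as an illustration of the idea.

```python
# Back-of-the-envelope payback-period calculation (illustrative only).
def years_to_recoup(net_cost_of_degree: float,
                    median_grad_earnings: float,
                    median_hs_earnings: float) -> float:
    """Years of post-graduation earnings premium needed to repay the degree's net cost."""
    annual_premium = median_grad_earnings - median_hs_earnings
    if annual_premium <= 0:
        return float("inf")  # the degree never pays for itself
    return net_cost_of_degree / annual_premium

# Hypothetical example: $120,000 total net price, $58,000 graduate earnings,
# $34,000 typical high-school-only earnings -> 5.0 years to recoup.
print(years_to_recoup(120_000, 58_000, 34_000))
```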
📈 If Furman wanted to advance, what specific improvements could it make?
It can be a little bit challenging from a university’s perspective to figure out what to do. I think the good news for Furman is there are some easy steps for improvement. Furman could sign on to institutional neutrality, for example, which is the idea that institutions themselves should not give opinions on or mobilize on behalf of certain causes. An institution should be home to critics, but not a critic itself. This is something universities have run afoul of a lot in recent years.
Another area for improvement is curricular requirements. This is not so much about what is being taught in the class as what kinds of classes the university is requiring everyone to take as part of their general education. One of the things that jumped out about Furman is that they do not require a U.S. government or history course in order to graduate. Furman could improve its rank by really bulking up some of those general-education requirements, particularly in the American government/history area. A related improvement would be eliminating the DEI requirement. We would argue that Furman’s students would be better served by eliminating the DEI requirement and replacing it with a more traditional course (such as U.S. history or civics).
The last thing I’ll mention is the faculty. Our particular measure tries to assess how heterodox the faculty are. Broadly, that’s going to mean looking for ideological diversity. There are a couple of good faculty-based associations that allow faculty members to join and participate. The two that are most prominent are the Academic Freedom Alliance and Heterodox Academy. Our measure looked at how many faculty per capita are members of these organizations, because we think it reveals something about how faculty think about their jobs and the kinds of things they’re bringing to the classroom. This was an area where Furman didn’t perform exceptionally well, and we would encourage faculty members to join these organizations for a different perspective on what’s going on in higher education.
📊 When FFSA has pointed out that Furman’s faculty is not very ideologically diverse, we’ve received some pushback from faculty and staff. Your rankings claim that “nearly 98 percent of faculty campaign donations in the 2023–24 election cycle went to liberal or Democratic causes.” Why are campaign donations a viable metric for discerning ideological pluralism?
This is the area of debate in higher education that’s soaking up the most attention at the moment. The truth is, there is no good data set of what faculty believe across a large number of campuses. It just doesn’t exist. There have been attempts to measure faculty political attitudes through surveys, but these are plagued by all sorts of problems. There have been some attempts to look at party registration, but that’s only available in roughly half of the states.
Our attempt is to use campaign donations as a proxy for where the faculty sit politically. We would argue that’s the most comparable measure across universities. It’s also usually pretty clear, in that if somebody gives to a Democratic candidate instead of a Republican candidate, that signals a political preference. We like that there’s no ambiguity there. We also see variation across campuses of the kind you would expect. Hillsdale looks very different from Columbia.
Also, FIRE did a survey of college students in which they asked where the average faculty member is on the left to right continuum. They’re asking students for their perception of where the faculty is ideologically, and this turns out to be highly correlated with faculty campaign contributions. So we’re capturing the same results using student perception, which tends to be fairly accurate.
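A small sketch of how those two measures can be compared across campuses: the numbers below are fabricated for illustration, but they show the kind of check described here, correlating each school’s share of Democratic-leaning faculty donations with students’ average placement of their faculty on a left-right scale.

```python
# Illustrative comparison of the donation-share proxy with survey perceptions.
# All figures are made up; only the method of comparison is the point.
import numpy as np

# Share of faculty campaign dollars going to Democratic/liberal causes, by campus.
donation_share_dem = np.array([0.98, 0.95, 0.60, 0.35])
# Students' average placement of their faculty on a 1 (far left) to 7 (far right) scale.
perceived_faculty_ideology = np.array([2.1, 2.4, 3.8, 4.9])

# A strong negative correlation means the two measures agree: the more
# donations skew Democratic, the further left students place their faculty.
r = np.corrcoef(donation_share_dem, perceived_faculty_ideology)[0, 1]
print(round(r, 2))
```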
There are a couple things I would say about the viewpoint diversity of the faculty, though. No university is monolithic. There’s always going to be diversity of opinion in some way on a campus.
Sometimes there is a faculty member who is known as the conservative, or perhaps there’s a small group of conservatives. And sometimes the rest of the faculty get to know these people and that clouds the fact that the conservative(s) stand largely alone. Just because you know them doesn’t mean that you have a truly diverse faculty. Oftentimes the precise reason why you know these people is because they are so far away from the rest of the faculty on political questions, so they stand out.
Universities need to not ostracize conservative faculty members, but instead promote their work and highlight them. Liberal faculty can cultivate diversity by acknowledging the work of their conservative colleagues. I think too often, universities shy away from promoting their conservative faculty members because they’re not well aligned with the dominant trend of faculty opinion.



