Monday, Oct. 26, 1998
What Makes A Good College
By Barrett Seaman
It is the season of high anxiety and road trips. Across the U.S., the families of more than a million high school seniors--at least those who did not do so this July and August--are spending a good portion of this autumn visiting college campuses. As the days dwindle down to a precious few before applications are due, the students are struggling to find--and get into--a college or university that will bestow upon them a pedigree ensuring success in life. Poised to court them are college-admissions staffs bristling with view books, videos and other lures of modern marketing, eager to deliver to their faculties and coaches talented youngsters who will reflect well on the institution.
In the midst of the frenzied ritual is a battalion of college guides offering to lead families through the thicket of come-ons and promises. Among these are venerable tomes such as The Fiske Guide to Colleges and Peterson's Four-Year Colleges, as well as recent entries that include products from TIME and Newsweek, each paired with a test-preparation service--TIME with The Princeton Review and Newsweek with Kaplan. The most watched of these guides is the annual ranking of America's "best" colleges and universities by U.S. News & World Report. The U.S. News formula has evolved over 12 years into a complex calculation that purports to measure quality by quantifying 16 characteristics, including SAT/ACT scores, high school class ranks of freshmen, rates of retention, alumni giving and average faculty salaries.
The list has enormous market appeal. The issue that features the rankings has become what the guide's founding editor, Mel Elfin, has described as the U.S. News equivalent of SPORTS ILLUSTRATED's annual swimsuit issue: it is one of the best-selling issues of the year.
The rankings have also become enormously controversial. Among the schools critical of the system are members of U.S. News's own top-25 club: Berkeley, Tufts, Rice, M.I.T. and Wesleyan. Critics rally around Stanford president Gerhard Casper's censure of the "specious formulas and spurious precision" of the lists. What is wrong, many say, is that the conclusions are based too much on input--the current reputation of each school and the attributes of incoming students. More helpful, they say, would be a measure of output--what consumers (that's what applicants really are) are likely to get in exchange for as much as $120,000 at today's prices.
The movement toward output-based evaluations has been gathering momentum. New York State has announced that it is developing a system to measure, among other things, the value of a degree from each of its 263 degree-granting institutions, both public and private, two-year and four-year. The Carnegie Foundation, which developed the first college classification back in 1970, is overhauling its taxonomy to reflect the changes in many institutions. The explosion in the number of commuter colleges and, most recently, "virtual" universities that teach over the Internet adds a new dimension that these earlier classifications never anticipated. Currently, only 1 out of every 4 college students in the U.S. is an 18-to-21-year-old attending a traditional four-year college on a full-time basis.
One of the more advanced--and intriguing--of the emerging classifications is being developed by Robert Zemsky at the University of Pennsylvania. Zemsky, who directs Penn's Institute for Research on Higher Education and is a principal researcher with the National Center for Postsecondary Improvement, says that by 2000 he will produce a kind of Consumer Reports on American colleges and universities that will rate schools by a new measure: how their graduates fare after they leave school. Fifteen schools have signed on to his pilot program, which has funding from the U.S. Department of Education.
What Zemsky wants to know is this: What does a degree from Harvard, Hamilton or Humboldt State get you in life? Is it worth the investment? To test his methodology, Zemsky identified seven market segments and recruited 15 institutions--members of the 160-campus Knight Collaborative--spread across all seven. He had them survey their class-of-1992 graduates six years out of college. Over the course of nine months, the participating schools were able to get a 48% response rate.
Predictably, what Zemsky calls "name-brand" schools produced more doctors and lawyers, while the "core" schools (a segment that includes most state universities) turned out more scientists and engineers, and "convenience" schools (which tend to sell education by the piece) turned out more teachers and nurses. Only name-brand schools sent a majority of graduates on to some form of further education. Name brands top Zemsky's "academic confidence" tests as well.
On the other hand, the name brands did not necessarily lead the pack in graduates' income levels. "Parents have three questions," says Zemsky. "Economically, what is going to happen to my kid? Will this institution equip my child in ways that generate confidence? Will I be able to find some diamonds in the rough?" Zemsky's scheme "got us all thinking about how we evaluate ourselves," says Freeman Hrabowski III, president of the University of Maryland, Baltimore County. "And the emphasis is correct; it's on outcomes. Parents can now say, 'My son or daughter is interested in becoming a scientist. What's your record in producing scientists?'"
The greatest doubts center on the validity of Zemsky's measures of such amorphous qualities as "confidence," which struck some of the participants as mushy. "He's got to be a bit sharper in defining the qualitative measures," says David Paris, associate dean of faculty at Hamilton College. Zemsky plans to run a second test this coming year and hopes to scale the project up to 600 institutions by 2001. If the schedule holds, future collegians may have a better view of what they're buying in the next century.
--With reporting by Jillian Kasky, Dana Lenetz and Kate Zambreno/New York