By Emily S. Tai
Veteran academics will tell you that assessment has been around since the 1970s; Peter Ewell reviews the history in his paper, "An Emerging Scholarship: A Brief History of Assessment." Campaigns to impose an additional level of measurement on student learning, over and above the old-fashioned graded assignment or examination, nevertheless began gaining momentum during the Reagan years, and took off with particular force during the Bush Administration, when then-Secretary of Education Margaret Spellings began forcefully advocating the use of assessment as a criterion in higher education accreditation.
The Arguments Pro…
Advocates of outcomes assessment maintain that it promotes transparency and accountability. Institutions can demonstrate that they are really educating students. Parents, employers, and legislators can see how tax-levy funds are being spent. Students, particularly first-generation college students, for whom the university classroom may seem a foreign country, can gain a better understanding of how to succeed in college, and of how a university education can "add value" to their life experience. And professors can use assessment methods and instruments to gain insights into student learning that course assignments alone would never reveal. Writing in last week's Chronicle of Higher Education on "The Perils of Trashing the Value of College," Spellings, now president of the University of North Carolina system, asserted that the Collegiate Learning Assessment (CLA), one of several "assessment instruments" introduced during her years in Washington, provides data to prove that "college is the place that hones skills and knowledge, builds professional networks, and clarifies life goals."
But some college professors aren't so sure that student progress can be so easily quantified, or that quantifying it via assessment instruments is a benign project. Last Sunday's New York Times featured a thoughtful piece by Molly Worthen, an assistant professor of history at the University of North Carolina at Chapel Hill. In contrast to President Spellings's ringing endorsement of assessment, Worthen suggested that the campaign to assess the "outcomes" of education can roughly be linked to a concurrent campaign to defund public higher education, and that most assessment instruments shortchange students because they fail to fully capture the nuanced value of higher education. "If we describe college courses as mainly delivery mechanisms for skills to please a future employer," writes Worthen, "if we imply that history, literature and linguistics are more or less interchangeable 'content' that convey the same mental tools, we oversimplify the intellectual complexity that makes a university education worthwhile in the first place."
The Tyranny of Metrics
Worthen's arguments echo those that another historian, Jerry Z. Muller, who teaches at the Catholic University of America in Washington, D.C., offered to challenge assessment in a January issue of The Chronicle of Higher Education, in an essay entitled "The Tyranny of Metrics." Muller's critique of the current craze to measure everything, including student learning, is an excerpt from his book of the same title (subtitled How the Obsession with Quantifying Human Performance Threatens Our Schools, Medical Care, Businesses, and Government), published this year by Princeton University Press.
Fundamentally, what Muller questions is the notion that the ostensibly "objective" data of metrics is more reliable than the qualitative information faculty gather, and share, in the course of classroom teaching. As he explained in a recent interview in Inside Higher Ed, "gaming" metrics can also make information that seems factual less trustworthy. Such concerns would seem to be confirmed by the work of David Eubanks, assistant vice president for assessment and institutional effectiveness at Furman University, who has examined the "methodological flaws that are inherent to assessment" in an article entitled "A Guide for the Perplexed," which appeared in the Fall 2017 issue of the journal Intersection.
Are We Asking the Right Questions?
Eubanks's title humorously invites comparison of the current debate over assessment to medieval scholastic debates that pitted Aristotle against monotheism. But even if measuring student learning is a necessary project, are current modes of assessment really the best way to help students become the independent learners they will need to be in an increasingly demanding work environment? Metrics of student performance may be available to institutional accreditors as aggregate data, but they are seldom organized in ways that help students as a professor's comments, in an office hour or on a paper or assignment, might: by starting a student thinking about how to manage the task of learning, how to identify their own strengths and weaknesses, and how to leverage those abilities in formulating career goals. As educators assess assessment, perhaps the real question we should be asking is whether assessment makes the same transformative difference that we already know can be made by a compelling subject and a dedicated professor.
Jerry Z. Muller will be speaking at the Roosevelt House Public Policy Institute at Hunter College, Four Freedoms Room, from 1:30 to 3:00 PM on Wednesday, March 7, 2018 (sponsored by the Hunter College Faculty Delegate Assembly, 212-772-4123 or firstname.lastname@example.org). The UFS Blog thanks Professor John R. Wallach, UFS Senator and Professor of Political Science at Hunter College, for sharing information about this event, and Professor Philip Pecorino, Professor of Philosophy at Queensborough Community College and UFS Executive Committee member, for several of the links referred to in this post.
Emily S. Tai is an associate professor of history at Queensborough Community College and edits the UFS Blog.
The UFS Blog is a forum for CUNY Faculty, and welcomes the expression of all points of view.
Like the UFS Blog on Facebook.
Photo: Deitering A. (2008, September), critical glanceability.