by Patricia Hutchings
American Association for Higher Education
About a month ago, I started working my way through everything I could get my hands on that might help me understand the Conference of Interpreter Trainers: a directory of programs, a statement of standards…. Now, in the last day, I’ve been part of discussions of a third set of materials, a new endorsement package, the product, as I understand it, of several years of work by many of you in this room, now with funding from FIPSE, to take the bold step of looking closely at interpreter-preparation programs and undertaking a process of self-study and evaluation. I congratulate you on your entrance into that work: it’s an important step. And I say that not because I presume to have any special insights into the world of interpreter training, or anything but the slimmest sense of your collective history and hopes for the field, but because of what I’ve seen over the last five years in my work on assessment in higher education.
Assessment, as some of you know because it’s going on in your institutions, is a national movement in which campuses across the country are trying to answer questions, for themselves and for various public audiences, about the effects of their teaching and curricula on student learning: questions, like those in your endorsement package, that are ultimately about educational purposes and standards and quality. So what I’d like to try to do here is to connect the assessment movement, and the lessons we’ve learned from it, to your own CIT work. As you’ll see, I think many of those connections have to do with the title of this speech: the climate (and I might add the courage) to question… and the good things that follow from that.
Assessment’s Questions
Let me try to give you a sense of assessment by talking a bit about my own experience with it, which began twelve years ago at a place called Alverno College in Milwaukee, Wisconsin. I had just finished graduate school, and the job market in the humanities was at an all-time low, so I was delighted to land a job at this lovely liberal arts college for women and to discover during my first year there that it was one of the most innovative places in the country. The curriculum was organized not around content areas and the accumulation of course credits, but around a set of eight abilities: things like communications, problem solving, and analysis. There was an active, ongoing conversation among faculty about how to teach for those abilities and also, you guessed it, how to know whether students were learning them; the latter was called “assessment.”
Assessment at Alverno was something all faculty did on an ongoing basis; as a way of monitoring individual student progress, it was at the very heart of teaching and learning. It was our way of answering what I take to be assessment’s key question: Are our students learning what we think we’re teaching?
So Alverno was a powerful route into the campus practice of assessment. Then in 1987 I went to the American Association for Higher Education to direct a national project on assessment. Assessment had by then become a national phenomenon, with states mandating that colleges provide evidence that students were learning what faculty claimed to be teaching. The concern early on, the impetus behind the project I directed, was that states might impose statewide multiple-choice tests and use them to control individual student progress (the Florida scenario). But in fact assessment has taken a much more positive turn, and on campuses across the country people who care about students are using assessment to ask hard questions about the quality and effectiveness of their teaching and curricula. Questions like:
- What is the college’s contribution to student learning? How and what do we know of that contribution?
- Do our graduates know and can they do what our degrees imply? How do we ensure that?
- What do the courses and instruction we provide add up to for students? Are they learning what we’re teaching?
- What knowledge and abilities do we intend that students acquire? Do they have opportunities to do so? Are they successful? At what level? Is that level good enough?
- How can the quantity and quality of student learning be improved?
That’s a long list; what’s important are the themes behind those questions: attention to student learning, to quality understood in terms of student learning, and to improvement. (They’re themes, as I’ll suggest in a bit, with high relevance to your CIT endorsement package.)
A New Conception of Quality
Behind assessment’s questions is a new conception of educational quality. The traditional conception, the one that has dominated accreditation and that prevails in various public rankings, is that quality is synonymous with resources and reputation: the number of books in the library, how many faculty have terminal degrees, the scores of entering students, the number of overhead projectors available. Those are in fact important indicators of how well an institution can do its job. But assessment is based on a conception of quality which says that what matters most is what happens to students: How much do they learn? In what ways do they develop? Assessment puts impact on student learning into the quality equation. And that’s revolutionary.
…Revolutionary, and yet catching on in a big way. The American Council on Education’s annual Campus Trends survey reports that 82% of campuses have “assessment activities underway.” Those activities span a range of areas and methods. Assessment programs may focus on student writing ability, critical thinking, outcomes of general education and the major, graduation rates, employer satisfaction, and student and alumni satisfaction and perceptions across those areas…. Methods used to “measure” the above include not only the inevitable standardized tests but also interviews, focus groups, portfolios of student work over time, faculty-made exams, and culminating senior projects in capstone courses; the reigning principle is “multiple measures over time.”
What explains all this activity? What drives it? One answer (the one you usually hear) is that states are requiring assessment. And in fact all but nine states now have some initiative in place or in the works: everything from the mandated statewide test I mentioned in Florida to a much more open-ended requirement in Virginia that all public campuses put in place a process for determining whether students are learning what the college says they do.
A second driving force is accreditation. As of November 1988, all the regional and specialized accreditors require, by federal regulation, information about “student academic achievement,” which is to say, assessment.
But the third and most important reason for assessment’s ascendancy, and the one most relevant to CIT’s work, goes back to that new conception of quality. Assessment has caught hold because of a broadly shared sense that its questions are important and right; that they’re good, tough questions that we ought to be able to answer. There’s a growing recognition, too, that until we ask harder questions about student learning, we won’t be able to improve our programs in meaningful ways.
Promises and Pitfalls
The assessment movement is now some five years old, so it’s possible to look at actual examples of what works and what doesn’t, with an eye to principles for further work.
The American Assembly of Collegiate Schools of Business
I mentioned a bit ago that as of 1988 accreditors have required assessment of the programs they accredit. But AACSB, the agency that accredits business programs in this country, turned to questions of assessment about ten years ago. Their interest was the one I just described: they wanted to shift the conception of quality from one based solely on “resources and reputation” to one that takes into account actual student learning.
As you might imagine, the attempt to execute this shift took AACSB into the middle of fascinating but tricky issues. For starters, what do AACSB members expect students in their programs to learn? (It turned out those expectations were far from clear in most programs.) Relatedly, what knowledge and abilities do qualified graduates in business need in today’s world? (Again, there was no easy consensus here.) Eventually, however, an AACSB task force came to two sets of answers.
First, students need knowledge of the subject areas that constitute the business profession: accounting, finance, human resources, organizational theory, management, and others. Second, they need to acquire a set of skills and personal characteristics: oral communication, decision making, delegation, self-objectivity, the disposition to lead, and so on.
What’s most interesting is that the AACSB assessment project was predicated on the radical notion that programs had not only to teach these abilities and fields of knowledge; students in their programs had to be able to demonstrate them.
The proposed assessment model was thus two-part. A traditional multiple-choice exam was designed to assess subject-matter knowledge; on the abilities front, AACSB developed a very interesting and innovative set of simulations: real-life tasks that students had to perform in order to demonstrate their skills.
It should be said that the above developments, the new methods themselves and the questions behind them, are very good news indeed, for which AACSB is to be commended. But something needs to be said as well about the fact that while AACSB now has wonderful new assessment tools available, they’re not in wide use. Why?
For one thing, it’s very expensive: $198 per student for the assessment-center portion and $30 for the multiple-choice test. Not every program was ready to invest that kind of money in assessment without knowing what the benefits would be, a question that had not been fully dealt with.
The second reason is cultural: the project flew in the face of the values and processes that had been at the center of AACSB’s work. And this, it should be said, is exactly the problem that assessment runs into on many campuses.
Assessment’s questions are simply not questions that everyone is ready to entertain.
The moral of the AACSB story is a crucial one: the right methods and sophisticated technologies are only part of the battle. People need to be involved; lots of discussion needs to take place. Participants need to own the process and control it.
The Harvard Assessment Seminars
My second example is about the possibility of easing your way into assessment in a way that matches institutional culture and can therefore make a difference.
And I’m thinking here of Harvard. Some of you may know the Harvard assessment story from the New York Times (the front page, no less), where a report on the Harvard Assessment Seminars appeared last spring. Or, better yet, you may know it from the May American Association for Higher Education Bulletin, where editor Ted Marchese interviews the convener of those seminars, Richard Light.
Light’s work goes back to 1986, when Harvard president Derek Bok urged in his book Higher Learning that every college “study the learning process and assess the effects of its programs.” To that end, on his own campus, he asked Light, a faculty member in the Kennedy School and the Graduate School of Education, to convene a seminar on undergraduate learning, to which more than 100 faculty and administrators were soon drawn.
The seminars are best described as “faculty inquiries into student learning.” For example, participants took on questions about the impact of the size of the groups in which students study, which led to several experiments, conducted by faculty with their own students, showing that students who study in groups of four to six do better academically than students who study alone.
They looked, too, at the urging of student participants in the seminars, at questions of gender, and discovered that women students at Harvard have an experience rather different from men’s.
In the Bulletin interview, Ted Marchese says to Dick Light, “Well, now, that’s all very nice, these findings about gender and about study in groups, but it certainly isn’t new.” Light has a great comeback (exactly the one Ted was after, I suspect): “Newness,” Light replies, “is hardly the goal here; we’re after locally useful information and small but steady increments of improvement.” He knows, he says, “that similar findings, some from earlier decades, exist in the library, but there is a power, an immediacy, that comes out of your own discoveries.” The upshot is that long-time, long-tenured faculty are now talking to one another about how students learn, and doing things as a result that they didn’t do two years ago.
The point? It’s not that Harvard has made some quantum leap forward in quality; it’s that assessment has helped create an occasion to take up, collectively, questions about learning and the conditions under which it can occur best.
At Harvard and scores of other campuses, assessment has changed the way we work by getting us talking to one another, across all kinds of lines and boundaries, coming to clearer, more collective visions of our aims and purposes, asking questions together about whether we’re achieving those purposes and how we might do better.
Moving Ahead with the CIT Endorsement Package
Assessment, self-study, “inquiries into student learning”: the phenomenon I’ve been describing, by whatever name, clearly has implications for the proposed CIT endorsement process. Let me point to five that seem particularly key.
- The process (again, by whatever name) is necessarily going to feel like a mixed blessing.
In the case of higher education assessment, faculty on some campuses have felt understandably threatened by the thought that “someone else” would be looking at the effects of their work. But the more common fear, and the one more relevant here, I think, since you have no external mandate requiring an endorsement process, is the fear of looking closely at oneself, of asking and coming to terms with these hard questions.
But if assessment can be threatening, it can also be (as one faculty member told me) “career-defining.” At the University of Tennessee at Knoxville, for instance, the art history department, like many others, was up in arms at the thought that it had to give a departmental examination to check whether its majors were achieving the outcomes the department intended. Many of the faculty fought the notion with considerable vigor. But as one of them later told me, “It was in the context of assessment that we sat down together for the first time ever as a department and talked about what we actually do in our classes, what we expect, how we judge our students’ work, whether we’re doing a good enough job.” The point here is that the self-study process can be scary but also incredibly productive.
- A second lesson is that in entering into a venture like assessment (a venture like that envisioned in your endorsement package), you need to be clear about purposes.
One of the ways many campuses went wrong in the early days of assessment was to think of the task before them as “to do assessment,” to go through the right steps. There’s a certain inevitability to that kind of thinking: it’s got to be done; it’s the new rule of doing business. But if you think about the task that way, you also short-circuit a most important question: what is it you’re really trying to accomplish in the process? For the real value of self-study is as a means to larger ends.
Having neglected to think carefully about those larger ends, many campuses have had to loop back and pay much greater attention to the why question. What are they really after? Numbers to show an external agency? Information that faculty can use in their classes? (Those two purposes require quite different methods.) Or do they want, as many have agreed, mainly to raise questions, to put the emphasis less on the data themselves than on the processes of inquiry and of talking together in new ways?
The point here is that the “why” question is one you need to get clear about early on… which is to say, you want to think about how the endorsement or self-study process relates to your real questions, to the things you most care about.
- A third, very practical lesson from the assessment movement is to go slow and try lots of different things.
One of my favorite assessment lines comes from Don Farmer, the dean of King’s College in Pennsylvania. He was eager to engage his faculty in questions about student learning and to institute an ongoing assessment process to help address some of those questions, but what he very wisely did was resist the idea of a single grand scheme. Instead, he told departments that what he was after in the name of assessment was “100 small experiments.” People were encouraged to experiment, to try this here and that there, and then, down the road a bit, to “assess their assessments”: to toss what wasn’t helpful, to hang on to the more promising ventures, to make adjustments, refine, fine-tune.
Farmer’s approach is a smart one, and a good one to have in view as you begin to implement your endorsement package. Remember that you’re engaged in an experiment, a pilot, a first draft, if you will. Try different things, don’t get locked in, back off when necessary…. Know that what you do now will almost certainly not be what you’re doing in five years, and if it is, something probably went wrong.
- Value the process itself.
It’s easy when you get into something like assessment to become enthralled with data, numbers, scores, results… as if they’ll somehow “fix” things. But what the campuses I’ve worked with report is rather different: numbers are helpful (though never perfect), but the real power is in conversation and process. What you’re after is to keep your collective attention on student learning, to pay attention to effects, to take on and monitor issues of quality in an ongoing way.
- It’s always tempting to hold back, to make sure you know exactly where you’re going before you begin the journey. But from what I’ve seen of your endorsement package, a final important piece of advice from the world of assessment is simply: get started.
You’ve got a great idea; wonderful people are at work on implementing it; and there’s much to be gained. The endorsement package can provide you with a way to enact your professional values and responsibilities, to uphold standards, and to seek constant improvement. It will prompt stimulating, if sometimes scary, conversations, questions, inquiries.
In the process it will, I predict, move the field of interpretation closer to your vision of what it can be. I wish you well with this work.