It’s redundant work for everyone, except for the “Outcomes Assessment” administrators who are paid to make $hitwork up for faculty and students who would prefer to be left alone to get on with the business of studying physical anthropology, or engineering, or zoology, or Romantic literature, or something else that has actual interest and value to people other than “Outcomes Assessment” administrators.

Historiann, the blogger she cites, Clio Bluestocking, and some of the commenters have the following complaints about Outcomes Assessment: it's a bunch of non-experts coming in from outside and dictating how content experts should teach; it's a process that attempts to standardize content and teaching across departments, colleges, and locations; it's a time- and labor-intensive process, but departments are usually not granted any extra money or resources to comply with it; it homogenizes learning (through testing) and trains students to be uncreative and timid; and, most important, it assumes that the humanities, and the outcome of a humanities class or education, can be measured quantitatively. (Did I leave anything out?)
Reading this made me think of my MLA summer newsletter, which made me vaguely worried and uncomfortable when I read it a little while ago, though I was unable to put my finger on why at the time. (I thought maybe it was because it reminded me that the job list was coming out soon. Eeek!) If you haven't gotten around to reading it yet, here's an excerpt from Rosemary G. Feal's column:
Leave the Kleenex. Take the Data.

Humanities Advocacy Day, 2009. I am sitting in the audience with the MLA's president, Catherine Porter, and vice president, Sidonie Smith, at a panel called Making the Case for the Humanities. A university president anchors his talk with this little quip from his days as a dean: when faculty members from the sciences came to see him, he took out the checkbook; when faculty members from the humanities visited, he took out the Kleenex. Leaving aside the gendered attitudes and other biases encoded here, I wonder what made him view the humanities faculty as a bunch of whiners without a cause. Is it in part because we are without something the scientists have when they visit the dean: the data?

The scientific community enjoys the benefits of a federally funded data collection project, Science and Education Indicators, prepared by the National Science Foundation's Division of Science Resource Statistics, with guidance from the National Science Board (www.nsf.gov/statistics/seind08/start.htm). Our turn is coming, though. Thanks to the efforts of many ... the Humanities Indicators, a project of the American Academy of Arts and Sciences, has recently been launched in prototype form (www.humanitiesindicators.org).

While I am glad that the MLA is actually doing something (Historiann's earlier post about a professor who ignored controversial emails attributed to him makes a good case for the dangers of ignoring distasteful developments), I worry that our willingness to jump on the quantitative-data bandwagon will produce some short-term benefits but be hugely detrimental in the long term.
I know it's ironic, coming from someone who is currently compiling a table of incidences and who just posted a picture to her blog to document the quantity of her recent reading (not the quality --- whoo no, there is no quality in that pile of boredom), but I would argue that the essence of the study of the humanities is qualitative. In fact, in the past, the "human" part of the humanities has meant that part of life which cannot be quantified.
I worry that, in relying on quantitative data to justify such a deeply qualitative field of study to our administrators, legislators, donors, and the general public, we end up commodifying humanistic study --- that we train those aforementioned people not to value the very subjects and methods we are advocating for, and that they will become even less likely to understand the humanities work we do and why it is important. Will relying more and more on tables and charts and graphs of humanities "outcomes" and "excellence" end up devaluing, or transforming, the qualitative types of work we do in the humanities classroom?
After all, what's so great about quantitative data anyway? Well, let's see: it is a fast and efficient way of conveying information --- it will take you a lot less time to read my charts and timelines of who-was-where-when than it would for you to read all those documents and for us to have a deep conversation about it. (Let's save that for the novel, which is actually worth considering closely.)
We also currently have a strong cultural inclination to value quantitative data as very important and somehow more real or more rigorous (here I could link to Lyotard or Evelyn Fox Keller or Donna Haraway or whoever made this point first, but I'm lazy and don't want to hunt it up). I'd say this trend dates back at least to the invention of the stock market ticker and the mystical belief that somehow throwing around numbers necessarily produces more money. (I could link to any number of recent articles about the stock bubble, the housing bubble, or Jon Stewart's attacks on Jim Cramer, if you'd like.)
So is outcomes assessment just another part of the factory university speedup? The whole point of humanistic study is to read and think in depth and then to talk about it, and to train our students to read and think in depth as well --- and then to communicate what they have discovered through writing and speech.* That's it. It's a model that doesn't lend itself well to Taylorization, rationalization, and efficiency. It's hard to turn it into a commodity with the attendant cycles of innovation and obsolescence of various bells and whistles --- at least, hard to do that and have it remain recognizably the same. There's no product, no profits or dividends; ideally these activities --- reading, thinking, communicating, in a humanistic manner --- should be carried out in all aspects of life, and they won't be easily correlated to getting a job or making X amount of money. In fact, humanistic study is supposed to be so all-encompassing, affecting so much more of your life than your earnings, that measuring such a limited outcome as wealth, rather than making a holistic assessment, should just seem silly.
Of course, the idea that there are whole swaths of life and society that are not about money and measurements and profits makes some people veeery upset. As does the idea that maybe profits and making money don't have to be an important part of your life, or how it is measured. As for me, I think that when a university mission changes from educating students to proving to people that it is educating students, things have gotten really out of whack.
*I know, I know, craziness: the Marxist scholar endorsing humanism? Well, remember that Marx himself advocated that everyone work four hours a day so that they would all have time to study and create and entertain. Hell, I'd put in my four hours at the communal garden or garbage dump if I knew I had the freedom to think and read and talk about things that interested me in complete security.
Thanks, Sisyphus, for your thoughtful contributions to the conversation about OA. Love this:
As for me, I think that when a university mission changes from educating students to proving to people that it is educating students, things have gotten really out of whack.
Right on! I ask again: where did OA come from? Where was the evidence that we weren't doing our jobs, that our students weren't learning, and that we weren't engaged in ongoing thought, discussion, and innovation with regard to our teaching?
Yeah, Sis! You boil down a lot of the discourse around assessment to some very basic points -- this and Clio's post have helped me think more clearly & critically about what has been a general, emotional *repulsion* from the whole assessment disaster.
I love this: "The whole point of humanistic study is to read and think in depth and then to talk about it, and to train our students to read and think in depth as well ---- and then to communicate what they have discovered through writing and speech." Right on!
I also tend to believe that quantitative data, as it is often used, actually erases real data, although I instinctively apply quantitative measures to qualitative things (e.g., I think I once expressed my feelings on job candidates by grading them as 95, 93, and 82). Nevertheless, a while back I was brainstorming something possibly relevant, and this seems an appropriate moment to post it.
Huh. My job candidate story suggests to me that an important use of quantitative data is comparison. I never would have said, after meeting a person, that she was a 95. The numbers were useful to quickly express my *relative* opinions.
There is a great article in the latest Harper's by Mark Slouka which expresses a lot of the same concerns you do here about the humanities. (I'd link, but you've got to be a subscriber --- I only saw the article because my chair passed it along to me.)
I wrote on this a couple of years ago, and then last year went to the MLA panel on assessment, and then after that, co-chaired the committee that ran the assessment discussion for the new major curriculum.
I think, for very real reasons, many of which you and Historiann express, that OA is an odious trend, begotten of hypercapitalist impulses and political revenge against the humanities as bastions of left-leaning thought. We can't precisely or even accurately measure what students are learning, because what they learn in the humanities is precisely how to encounter complexities for which there are no easy, or even right, answers.
BUT. We cannot take our eye off the ball on this one, because as much as we may find the whole enterprise distasteful, and even deleterious to our mission (not only the time sink, but the very real danger of letting teaching follow assessable, i.e. overly simplistic, outcomes), we ignore it at our peril.
What I have found promising are ways to measure things other than "is our students learning?" One panelist at MLA mentioned a standardized survey of student engagement, and sometime in 2007, Jason B. Jones at The Salt Box talked about measuring things like retention and graduation rates.
So I am learning not to dig my heels in about assessment, lest we inadvertently miss opportunities to locate blind spots and improve our teaching. No easy answers on this one, but thank goodness we are armed with the tools of humanistic thought to navigate the questions.