The important “point of view” in evaluation is the context that provides the reference for making judgements about the quality of an evaluand.
There are a limited number of points of view that one might take in doing evaluation: some obvious ones are aesthetic, economic, political, and religious. A point of view (POV) is universal; that is, each of these points of view exists for everyone and in every culture. When one takes a POV, one necessarily takes certain criteria and indicators to be primary. For example, an economic point of view assumes things like markets, monetary value, and the like. Even though a POV is universal, within any POV there are potentially many orientations. Again using economics as an example, there are Marxist, free market, neo-liberal, fascist, and other orientations.
Every evaluation is done from a particular POV and a particular orientation. Evaluators and all evaluand stakeholders will find evaluation more defensible and useful if there is clarity and agreement about what the POV and orientation for a particular evaluation are. Disagreements about assertions of value or dis-value depend on this context, that is, on the point of view from which the judgement is made.
Images are all around us; we are all image-makers and image readers. Images are a rich source of data for understanding the social world and for representing our knowledge of that social world. Image-based research has a long history in cultural anthropology and sociology as well as the natural sciences, but is nonetheless still relatively uncommon.
This chapter, Seeing is Believing, describes image-based research and evaluation and focuses especially on issues of the credibility of images and image-based inquiry strategies.
There are a few examples in this chapter from my research on the impact of high stakes testing. Data collection focusing on kids’ experiences of testing involved drawing and writing. You can see more of these data on my website, as well as view a presentation I did on this topic for the Claremont Graduate School 2006 summer institute on the credibility of evidence.
In these times of ever greater technological sophistication, there is a presumption that complexity and erudition will lead to true knowledge about the way things work and will identify unequivocally what causes what.
Michael Scriven has written a very nice piece on the logic of causation–a more complex and sophisticated notion for sure, but not because of the use of complex methods like randomized clinical trials. Indeed, Scriven describes the rather ordinary notion of observation as key to discerning causation. He reminds us that even preschoolers, in some contexts, know perfectly well what can cause what.
While Scriven does not speculate about why there is such romanticism about experimental design, this seems worthy of analysis. One side of globalism is the invocation of elite authorities to determine what is right and good. The economic (and therefore political and cultural) imperative is used to justify the few making decisions for the many. (If you can stand it, read Thomas Friedman’s The World is Flat to get a sense of this thinking.) Suggesting that believable causal claims ensue only from RCTs implies that only a special class of people, those with the knowledge, ways, and means to do this sort of research, have knowledge worthy of being shared. While Scriven calls for cooperation among the camps of causation warriors, logic alone is unlikely to win the day.
At the heart of the quantitative-qualitative debates in evaluation (which have been muted but not resolved in the field) lie fundamentally different notions about the world–different ontological perspectives. This debate often devolves into caricatures of ontological positions; a particularly common one paints every neo-positivist a “realist,” by which is meant a naive realist. Of course, most contemporary realists are not ‘naive’ and subscribe to a more nuanced sense in which there is a real, knowable world out there. Click here for a little more of that nuance.
Just announced is an edited monograph on systems approaches to evaluation, which can be purchased from Edgepress or downloaded from the Kellogg Foundation website.
As part of Emory University’s Series on Excellence, I gave a talk on excellence in evaluation. You can listen to this presentation on the Emory website.
Outputs, outcomes, impacts–for whom, by whom, says who?
The song was written by and is performed by Terry Smutylo, former Director of Evaluation at IDRC (International Development Research Centre, Ottawa).
The Output Outcome Downstream Impact Blues.
Science is an essentially anarchic enterprise: theoretical anarchism is more humanitarian and more likely to encourage progress than its law-and-order alternatives.
Paul Feyerabend, Against Method
Substitute evaluation for science in the Feyerabend quote and then consider the contemporary practice of evaluation. As practice and as theory, evaluation embodies an anarchist epistemology, and some evaluators explicitly employ one. It is this anarchist epistemology that makes evaluation a powerful means to progress. If one looks at the explosion of theoretical ideas in evaluation; the methods used (evaluation borrows liberally not just from psychology and sociology but from all the social sciences, the arts, philosophy, and indeed wherever something pragmatically useful can be found); the forms of human interaction; and the ways of representing knowledge, one would conclude that, as a discipline, evaluation already embodies an anarchist epistemology.
To read a short paper on this idea, go to An Anarchist Epistemology of Evaluation.