Perhaps a step closer to becoming a discipline, the American Evaluation Association's project to define evaluation might signal that we are getting down to the fundamental ideas in our field. A committee has developed a definition that its chair, Michael Q. Patton, describes as “a living document, ever being updated and revised, but never becoming dogma or an official, endorsed position statement.” Bravo to all for this initiative!
The open-access, participatory strategy is an interesting and forward-thinking one, and I will be curious to see if and how that statement changes over time. My prediction is that it won’t change much. The statement as it stands pretty much captures what anyone would say in an introductory evaluation course, but we shall see.
I think, however, there are a couple of key details missing from this definition… details that might bring clarity about the foundations of evaluation. As the definition now stands, it focuses primarily on evaluation practice and less so on the discipline of evaluation. The initial definition is what we all say when we explain what evaluation is:
Evaluation is a systematic process to determine merit, worth, value or significance.
The string of descriptors naming what evaluation determines is important, and the terms are not interchangeable. The definition provides no guidance about what the differences are or why we include this string in our definition. What is the difference between merit and worth, and how are those different from value or significance? Roughly, merit concerns the intrinsic quality of an evaluand, while worth concerns its value in a particular context, so a well-designed, effective program may have high merit yet little worth to a community that does not need it. This is not a trivial matter, and lack of understanding about these distinctions sometimes gives evaluation a bad name. For example, when an evaluation focused on determining the worth of an evaluand finds it wanting, there is often a hue and cry because that same evaluand is simultaneously meritorious.
The second detail that is missing is the logic of how we get to those judgements of merit, worth, value and significance. The definition says that evaluation is a “systematic process” but provides no hint of what makes evaluation systematic. Perhaps this is one of those contentious areas Patton describes when he introduces the statement: “There was lots of feedback, much of it contradictory.” But from the statement we cannot know whether the committee discussed including details about what makes evaluation systematic and couldn’t come to agreement, or whether this was never discussed in the first place. Perhaps being systematic has two meanings that get entangled… we use models/approaches in evaluating that provide guidance about how to do evaluation (UFE, RCT, participatory, and so on) AND there is a logic to thinking evaluatively that is embedded in all models/approaches to evaluation. There is no need to include the former in a definition of evaluation, but there is a need to include the latter.
Michael Scriven has provided the grounding for articulating the logic of evaluation; Deborah Fournier has done considerable work on articulating what that logic looks like in practice (that is, how it is manifest in various evaluation approaches/models); and both Michael Scriven and Ernie House have tackled the specific issue of synthesis in evaluation. This logic is at the heart of what makes evaluation systematic, and I’d like to see it reflected in the definition. (For a quick introduction to these ideas, check out the entries in the Encyclopedia of Evaluation by these authors.)
As an organic, evolving definition of evaluation, perhaps these are components that will yet be developed and included.