Apr 23 2013
Today, I was on CBC Radio 1, The 180 with Jim Brown, debating the use of grades in school with Michael Zwaagstra, who is affiliated with the neo-liberal Frontier Centre for Public Policy; he represented the “we can’t live without percentage grades” position and I represented the “schools would be better places without grades” position.
Click here to hear the show (the interview/debate happens in the second half hour).
Apr 19 2013
A few school districts in western Canada have moved away from percentage grades toward categorical grades and genuine involvement of students and parents in conferences about learning. In BC, the Maple Ridge-Pitt Meadows school district has replaced letter grades with what it calls a student-inclusive conferencing model. The Battle River school division in Alberta has replaced percentage grades with categorical grades of beginning, developing, achieving, or excelling; this change was implemented some time ago for elementary and junior high schools, and is now being extended to the high school. In both cases, participating in the new grading system is optional for teachers, and in both cases the change has been controversial… yea-sayers and nay-sayers abound. In AB there have been parent and student protests.
Apr 18 2013
Barry MacDonald is known for the development of democratic evaluation, and for being provocative in both his evaluation work and his personal life.
Click here to hear Barry talk about CARE and the evaluation work he was involved in.
Apr 16 2013
Outcome Harvesting is an evaluation approach developed by Ricardo Wilson-Grau. Taking a forensic approach, the evaluator, or ‘harvester,’ retrospectively gleans information from reports, personal interviews, and other sources to document how a given program, project, organization, or initiative has contributed to outcomes. Unlike the many evaluation approaches that begin with stated outcomes or objectives, this approach looks for evidence of outcomes, and explanations for those outcomes, in what has already happened… a process the creators call ‘sleuthing.’
This approach blends together, and perhaps eliminates the distinction between, intended and unintended outcomes. Evaluators are enjoined to look beyond what programs say they will do to what they actually do, but in an objectives-driven approach this requires evaluators to convince clients that doing so is important or necessary, and to justify spending evaluation resources on a broader conception of outcomes than is typically defined.
Wilson-Grau has written a clear explanation of the process, which can be downloaded here. The PDF summarizes the six steps of outcome harvesting:
1. Design the Outcome Harvest: Harvest users and harvesters identify usable questions to guide the harvest. Both users and harvesters agree on what information is to be collected and included in the outcome descriptions, as well as on the changes in the social actors and how the change agent influenced them.
2. Gather data and draft outcome descriptions: Harvesters glean information about changes that have occurred in social actors and how the change agent contributed to these changes. Information about outcomes may be found in documents or collected through interviews, surveys, and other sources. The harvesters write preliminary outcome descriptions with questions for review and clarification by the change agent.
3. Engage change agents in formulating outcome descriptions: Harvesters engage directly with change agents to review the draft outcome descriptions, identify and formulate additional outcomes, and classify all outcomes. Change agents often consult with well-informed individuals (inside or outside their organization) who can provide information about outcomes.
4. Substantiate: Harvesters obtain the views of independent individuals knowledgeable about the outcome(s) and how they were achieved; this validates and enhances the credibility of the findings.
5. Analyze and interpret: Harvesters organize the outcome descriptions in a database in order to make sense of them, analyze and interpret the data, and provide evidence-based answers to the usable harvesting questions.
6. Support use of findings: Drawing on the evidence-based, actionable answers to the usable questions, harvesters propose points for discussion to harvest users, including how the users might make use of the findings. The harvesters also wrap up their contribution by accompanying or facilitating the discussion amongst harvest users.
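To make the organizing work in step 5 concrete, here is a minimal sketch of how harvested outcome descriptions might be stored and queried. The record fields, the example data, and the question being answered are my own illustrative assumptions, not part of Wilson-Grau’s method.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutcomeDescription:
    """One harvested outcome; the fields are illustrative, not Wilson-Grau's schema."""
    social_actor: str              # who changed
    change: str                    # what changed in the social actor
    contribution: str              # how the change agent influenced the change
    sources: List[str] = field(default_factory=list)  # reports, interviews, surveys
    substantiated: bool = False    # confirmed by an independent, knowledgeable source

# A small, entirely hypothetical harvest (drafted in step 2, refined in step 3).
harvest = [
    OutcomeDescription(
        social_actor="municipal health office",
        change="adopted participatory budgeting for clinic funding",
        contribution="NGO staff facilitated two planning workshops",
        sources=["2012 annual report"],
        substantiated=True,
    ),
    OutcomeDescription(
        social_actor="teachers' association",
        change="publicly endorsed the curriculum reform",
        contribution="project staff briefed the association's executive",
        sources=["interview with program director"],
    ),
]

# Step 5: organize and interpret, answering a usable harvesting question such as
# "Which outcomes are substantiated, and which still need independent review?"
for outcome in harvest:
    status = "substantiated" if outcome.substantiated else "needs substantiation"
    print(f"{outcome.social_actor}: {outcome.change} [{status}]")
```

In practice the ‘database’ may be nothing more elaborate than a spreadsheet; the point is that each outcome is recorded with its social actor, change, and contribution so the whole harvest can be queried against the usable questions.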
Other evaluation approaches (like the Most Significant Change technique or the Success Case Method) also look retrospectively at what happened and seek to analyze who changed, and why and how the change occurred, but this is a good addition to the evaluation literature. An example of outcome harvesting is described on the BetterEvaluation Blog, and a short video introduces the example: https://www.youtube.com/watch?v=lNhIzzpGakE
Apr 02 2013
When my term as Editor-in-Chief of New Directions for Evaluation ended, I was asked to write a short piece for the AEA newsletter, as I had done each year while I was EIC. I submitted a short reflection on knowledge and publishing rather than a summary of what was, and what would be, in NDE. Gwen Newman of AEA told me the piece would be published in the AEA newsletter, but three months have passed and it hasn’t appeared. I have no insight into why.
Below is the short reflective commentary I wrote.
As of December 2012 my term as Editor-in-Chief of New Directions for Evaluation ended, and Paul Brandon’s term began. AEA has made a fine choice in appointing Paul, and I wish him good luck in his new role.
Closing the book on six years working on NDE leads me to reflect on being an editor and on the role of scholarly journals. I have enjoyed being the editor of NDE; I hope I have made a positive contribution to AEA, and I have tried to respect the diversity of viewpoints and varying degrees of cultural competence in the journal publishing game. I have enjoyed working with the newer generation of evaluators and with those whose voices might not otherwise have been heard, though I regret that this did not make up more of my time as NDE editor. Still, I leave with mixed feelings, even if, on balance, the good outweighs the bad.
Journal editors are gatekeepers, mediators, maybe even definers of the field, who are expected to oversee and ensure the fairness of an adjudication process that results in the stamp of approval on, and dissemination of, the knowledge deemed most worthy and relevant to the field. But in fulfilling this role, journal editors participate in a larger ‘game’ of knowledge production. Of course, others participate in the game as well, including authors, the reward systems in higher education, professional associations, publishing companies, and indeed journal readers. Pierre Bourdieu’s notion of “illusio” captures the ‘game’ of publishing in scholarly journals, a game where everyone must play, and even be taken in by the game, in order for the game to continue.
And so I have played a key role in this game, a game that is mostly seen as necessary, benign, civil, and collegial. I am, however, a bit disquieted by my complicity in a game where knowledge about evaluation theory and practice is commodified, packaged, and embargoed; a game that sometimes defines too narrowly what ought to be published, in what form, and by whom, and that limits access to knowledge. The illusio of the game leads us to believe that without stalwart gatekeepers and limited (often corporately owned) venues for sharing knowledge, there would be excessive scholarly writing of dubious quality. There is little evidence to support this fear, and a growing number of highly regarded open access journals, blogs, and websites do not forsake quality and suggest the possibility of a new game.
In a vision of the future where knowledge is a public good, freely shared, I imagine journal editors might play a different role in the game: a role that focuses less on gatekeeping and more on opening the gate to welcome the sharing of evaluation knowledge for free, with unfettered access, and without the need for authors to give away copyright to their works. While it may be the case that knowledge in some disciplines has a small, select audience, evaluation knowledge crosses all domains of human experience with an attendant desire to foster improvement. The audience for our work is vast, and I wish for thoughtful, inclusive sharing of evaluation knowledge.
Mar 29 2013
See the conference website for more information.
Mar 26 2013
For more details, click here.
Feb 28 2013
For many professionals, doing evaluation is part of the job. Lawyers make evaluative judgements about the quality of evidence; teachers judge the quality of students’ learning; builders judge the quality of materials. All work entails judgements of quality, and the quality of work depends on doing good evaluations.
But what happens when the evaluation done as part of professional work is contested? You might just find yourself being sued. Such is the case for Dale Askey, a librarian at McMaster University. Askey’s job requires him to make judgements about the quality of published works and publishers, and in turn to make wise procurement decisions for his employer, decisions that have become ever more difficult with shrinking resources. The case can be easily summarized:
Librarian questions quality of a publishing house.
Librarian publicly criticizes said press on his personal blog.
Two years later, librarian and current employer get sued for libel and damages in excess of $4 million.
Read more at Inside Higher Ed: http://www.insidehighered.com/news/2013/02/08/academic-press-sues-librarian-raising-issues-academic-freedom
There is no reason to believe that Askey rendered his judgement about the quality of scholarship offered by Mellen Press in a capricious or incompetent manner. Making judgements for procurement decisions is surely one of the tasks that Askey’s employer expects him to do, especially in a time of diminishing resources.
There has been considerable support for Askey, some of it a bit misguided in defending his right to express his opinion on his blog, but most of it in defense of Askey’s responsibility to do his job.
So what is the relevance for evaluation? Evaluation is integral to, and applied in, virtually all intellectual and practical domains… it is, as Michael Scriven claims, a trans-discipline. As such, there is a need to pay more attention to preparing people to do publicly defensible evaluations in the context of their work. Perhaps more than program evaluation, this sort of evaluative thinking might be the raison d’être for the discipline of evaluation.
Feb 25 2013
One of the hallmarks of a quality evaluation is that it ought itself to be subject to evaluation. Many evaluation schemes in education, such as test-driven accountability schemes, are not evaluated. The Action Canada Task Force on Standardized Testing has released a report analyzing the place of standardized testing as an accountability measure in Canadian K-12 education systems, using Ontario as a case study. “A review of standardized testing in this province and others is not only timely – it’s urgently needed,” says Sébastien Després, a 2012-2013 Action Canada Fellow and co-author of the report.
The Task Force offers recommendations in four areas that could be at the heart of an evaluation of accountability schemes in K-12 education across Canada.
We recommend that the Ontario government establish a suitable panel with a balanced and diverse set of experts to conduct a follow-up review of its standardized testing program. In particular:
A. Structure of the tests relative to objectives
i. The panel should review whether the scope of the current testing system continues to facilitate achievement of education system objectives.
ii. The panel should review whether the scale and frequency of testing remains consistent with the Ministry of Education’s objectives for EQAO testing.
B. Impact of testing within the classroom
i. The panel should review the impact on learning that results from classroom time devoted to test preparation and administration.
ii. The panel should review the impact of testing methods and instruments on broader skills and knowledge acquisition.
iii. The panel should review the appropriateness and impact of the pressure exerted by standardized testing on teachers and students.
C. Validity of test results
i. The panel should review whether or not standardized testing provides an assurance that students are performing according to the standards set for them.
ii. The panel should review the impact of measuring progress by taking a limited number of samples throughout a student’s career.
D. Public reporting and use of test results
i. The panel should review the impact of the potential misinterpretation and misuse of testing results data, and methods for ensuring they are used as intended.
ii. The panel should review supplemental or alternative methods of achieving public accountability of the educational system.