Category Archives: Books and journals

evaluation and independence… it matters

In the run-up to the Academy Awards I was catching up on nominated movies. This past weekend I saw several, including The Big Short. A Salon review summarizes the all too familiar story of the movie:

In the late 1990s, banks and private mortgage lenders began pushing subprime mortgages, many with “adjustable” rates that jumped sharply after a few years. These risky loans comprised 8.6 percent of all mortgages in 2001, soaring to 20.1 percent by 2006. That year alone, 10 lenders accounted for 56 percent of all subprime loans, totaling $362 billion. As the film explains, these loans were a ticking time bomb, waiting to explode.

While there is really nothing new revealed in the movie, there is a great scene in which Mark Baum (Steve Carell) confronts the Standard and Poor's staffer who admits to giving high ratings to mortgage-backed securities because the banks pay for the ratings. If S&P doesn't deliver the high ratings, the banks will take their business elsewhere, perhaps to Moody's. The profit incentive to be uncritical, to not evaluate, is overwhelming. It took until 2015 for S&P (whose parent company is McGraw Hill Financial) to make reparations in a $1.4 billion settlement with the US Justice Department, without admitting any wrongdoing.

This is a particularly poignant message for evaluation and evaluators. Like so much else about the financial crisis, shortsightedness and greed resulted in false evaluations, ones with very serious consequences. S&P lied: it claimed to be making independent evaluations of the value of mortgage-backed securities, and the lie meant making a larger-than-usual profit and facilitating banks' bogus instruments. Moody's did the same thing. While the ratings agencies have made some minor changes in their evaluation procedures, the key features, a lack of independence and the interconnection of their profit margins with those of their customers, have not changed. The consensus seems to be that nothing would preclude the evaluators from playing precisely the same role in the future.

In addition, while the ratings companies' profits took a serious hit during the crisis, the revenues of the big three agencies (Moody's, S&P, and Fitch) have since surpassed pre-crisis levels, and Moody's and S&P now look more attractive as businesses than most other financial firms do. Something worth pondering another day.

Individual evaluators may say, "Well, I wouldn't do that," and that may be true to some extent, but the same underlying relationships are repeated in all contracted evaluation work. If you are hiring me to do evaluation for you, and I want you to consider hiring me again in the future, then I am in the same relationship to you as the ratings agencies are to financial institutions. This is a structural deficiency, and a serious one. In a soon-to-be-published book chapter (in Evaluation for an Equitable Society), I analyze how capitalism has overwhelmed pretty much everything. We are unable to see a role for evaluation theory and practice outside the fee-for-service framework dictated by the current neoliberal frames of social engagement.

In that chapter I offer suggestions about what evaluation can do, alongside being more responsible within a fee-for-service framework. First, evaluation needs to evaluate its own systems and instruments. Meta-analyses of evaluations (like those done by S&P, by pharmaceutical companies, by grant-funding agencies, in education, and so on) are necessary. Using our skills to ensure that what is being done in the name of evaluation is indeed evaluative, and not merely profiteering, is critically important. Second, professional evaluation associations need to promote structures for truly independent evaluations: evaluations solicited and paid for by third parties (government agencies, funding agencies, and so on) that have no profit to make, although, of course, an interest in competently done, valid evaluation studies.



new book ~ Feminist Evaluation & Research: Theory & Practice

Available in April, this new edited book (Guilford Press) explores the 'whats,' 'whys,' and 'hows' of integrating feminist theory and methods into applied research and evaluation practice.


I. Feminist Theory, Research and Evaluation

1. Feminist Theory: Its Domain and Applications, Sharon Brisolara
2. Research and Evaluation: Intersections and Divergence, Sandra Mathison
3. Researcher/Evaluator Roles and Social Justice, Elizabeth Whitmore
4. A Transformative Feminist Stance: Inclusion of Multiple Dimensions of Diversity with Gender, Donna M. Mertens
5. Feminist Evaluation for Nonfeminists, Donna Podems

II. Feminist Evaluation in Practice

6. An Explication of Evaluator Values: Framing Matters, Kathryn Sielbeck-Mathes and Rebecca Selove
7. Fostering Democracy in Angola: A Feminist-Ecological Model for Evaluation, Tristi Nichols
8. Feminist Evaluation in South Asia: Building Bridges of Theory and Practice, Katherine Hay
9. Feminist Evaluation in Latin American Contexts, Silvia Salinas Mulder and Fabiola Amariles

III. Feminist Research in Practice

10. Feminist Research and School-Based Health Care: A Three-Country Comparison, Denise Seigart
11. Feminist Research Approaches to Empowerment in Syria, Alessandra Galié
12. Feminist Research Approaches to Studying Sub-Saharan Traditional Midwives, Elaine Dietsch
Final Reflection. Feminist Social Inquiry: Relevance, Relationships, and Responsibility, Jennifer C. Greene

Reflections of a Journal Editor

When my term as Editor-in-Chief of New Directions for Evaluation ended, I was asked to write a short piece for the AEA newsletter, as I had each year whilst I was EIC. I submitted a short reflection on knowledge and publishing rather than a summary of what was in, and what would be in, NDE. Gwen Newman of AEA told me that the short piece would be published in the AEA newsletter, but three months have passed and it hasn't appeared. I have no insight into why.

Below is the short reflective commentary I wrote.

As of December 2012 my term as Editor-in-Chief of New Directions for Evaluation ended, and Paul Brandon’s term began. AEA has made a fine choice in appointing Paul, and I wish him good luck in his new role.

Closing the book on six years working on NDE leads me to reflect on being an editor and the role of scholarly journals. I have enjoyed being the editor of NDE, I hope I have made a positive contribution to AEA, and I have tried to respect the diversity of viewpoints and varying degrees of cultural competence in the journal publishing game. I have enjoyed working with the newer generation of evaluators and those whose voices might not otherwise have been heard, but regret that this did not make up more of my time as NDE editor. I also have mixed feelings, even if, on balance, the good outweighs the bad.

Journal editors are gatekeepers, mediators, maybe even definers of the field, who are expected to oversee and ensure the fairness of an adjudication process that results in the stamp of approval and dissemination of the knowledge most worthy and relevant to the field. But in fulfilling this role, journal editors participate in a larger 'game' of knowledge production. Of course, others participate in the game as well, including authors, the reward systems in higher education, professional associations, publishing companies, and indeed journal readers. Pierre Bourdieu's notion of "illusio" captures the 'game' of publishing in scholarly journals, a game in which everyone must play, and even be taken in by the game, in order for the game to continue.

And so I have played a key role in this game, a game that is mostly seen as necessary, benign, civil and collegial. I am, however, a bit disquieted by my complicity in a game where knowledge about evaluation theory and practice is commodified, packaged and embargoed, a game that sometimes defines too narrowly what ought to be published, in what form and by whom, and that limits access to knowledge. The illusio of the game leads us to believe that without stalwart gatekeepers and limited (often corporately owned) venues for sharing knowledge there will be excessive scholarly writing, and that it will be of dubious quality. There is little evidence to support this fear, and a growing number of highly regarded open access journals, blogs, and websites do not forsake quality and suggest the possibility of a new game.

In a vision of the future where knowledge is a public good, freely shared, I imagine journal editors might play a different role in the game, one that focuses less on gatekeeping and more on opening the gate: welcoming the sharing of evaluation knowledge for free, with unfettered access, and without the need for authors to give away copyright to their works. While it may be the case that knowledge in some disciplines has a small, select audience, evaluation knowledge crosses all domains of human experience with an attendant desire to foster improvement. The audience for our work is vast, and I wish for thoughtful, inclusive sharing of evaluation knowledge.

Youth participatory evaluation ~ resources

A good stop for resources on YPE is Act for Youth. YPE is described as:

an approach that engages young people in evaluating the programs, organizations, and systems designed to serve them. Through YPE, young people conduct research on issues and experiences that affect their lives, developing knowledge about their community that can be shared and put to use. There are different models of YPE: some are completely driven by youth, while others are conducted in partnership with adults.

A list of resources points the reader to other literature on YPE.

some useful references on writing & publishing

Allison, A., & Frongia, T. (1992). The grad student's guide to getting published. New York: Prentice Hall.

American Psychological Association. (2009). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.

Becker, H. S., & Richards, P. (2007). Writing for social scientists: How to start and finish your thesis, book, or article (2nd ed.). Chicago: University of Chicago Press.

Bridgewater, C. A., Bornstein, P. H., & Walkenbach, J. (1981). Ethical issues in the assignment of publication credit. American Psychologist, 36, 524-525.

Clifford, J. & Marcus, G. E. (1986). Writing culture. Berkeley, CA: University of California Press.

Frost, P. J., & Taylor, M. S. (Eds.). (1996). Rhythms of academic life: Personal accounts of careers in academia. Thousand Oaks, CA: Sage.

Fuchs, L. S., & Fuchs, D. (1993). Writing research reports for publication: Recommendations for new authors. Remedial and Special Education, 14(3), 39-46.

Geertz, C. (1988). Works and lives: The anthropologist as author. Stanford, CA: Stanford University Press.

Klingner, J. K., Scanlon, D. & Pressley, M. (2005). How to publish in scholarly journals. Educational Researcher, 34(8), 14-21.

Matkin, R. E., & Riggar, T. F. (1991). Persist and publish: Helpful hints for academic writing and publishing. Niwot, CO: University Press of Colorado.

Strunk, W., Jr., & White, E. B. (2005). The elements of style (3rd ed.). Boston: Allyn & Bacon. [NOTE: Treat yourself and get the edition illustrated by Maira Kalman.]

Truss, L. (2004). Eats, shoots & leaves: The zero tolerance approach to punctuation. New York: Gotham.

University of Chicago Press. (2003). The Chicago manual of style (15th ed.). Chicago: University of Chicago Press.

Wolcott, H. F. (2008). Writing up qualitative research (3rd ed.). Thousand Oaks, CA: Sage.

Encyclopedia of Evaluation online

Sage Publications has launched Sage Reference Online. After exhausting the print copy sales to libraries, Sage has bundled most of its Encyclopedias and Handbooks and sold online access to these same libraries. The good news is that many more people at an institution will have access to the information. The Encyclopedia of Evaluation is included in this package, so if you are at a University or College that subscribes to Sage Reference Online you will have free access to this resource, and many more.


Critical Education ~ inaugural issue
The Editorial Team of Critical Education is pleased to launch the inaugural issue of the journal. Click on the current issue link at the top of the home page to read “The Idiocy of Policy: The Anti-Democratic Curriculum of High-stakes Testing” by Wayne Au. Au is assistant professor of education at Cal State University, Fullerton and author of Unequal By Design: High-Stakes Testing and the Standardization of Inequality (Routledge, 2009).

To receive notification of new content in Critical Education, sign up as a journal user (reader, reviewer, or author).

Look for the initial installments of the special section edited by Abraham DeLeon titled “The Lure of the Animal: Addressing Nonhuman Animals in Educational Theory and Research” in the coming weeks.

Social Network Analysis ~ how to

Social network analysis (SNA) is a tool that may be useful in an evaluation if there are questions about the effectiveness of networks or the ways in which networks contribute to distributing or sustaining knowledge.

Durland & Fredericks are co-editors of an issue of New Directions for Evaluation that addresses the application of SNA to evaluation. The issue is described:

The application of SNA is relatively new for mainstream evaluation, and like most other innovations, it has yet to be fully explored in this field. The volume aims to fill the gaps within SNA methodology exploration by first reviewing the foundations and development of network analysis within the social sciences and the field of evaluation. The focus then turns to the methodology. Who holds power in a network, and what measures indicate whether that power is direct or indirect? Which subgroups have formed, and where are they positioned in an organization? How divided is an organization? Who forms the core of a collaboration, and where are the experts in an organization? These are the types of common questions explored in the four case studies of the use of network analysis within an evaluative framework. These cases are diverse in their evaluation situations and in the application of measures, providing a basis to model common applications of network analysis within the field. The final chapters include a personal account of current use by a government agency and suggestions for the future use of SNA for evaluation practice.
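The questions raised in the volume map onto standard network measures. As a minimal sketch (not drawn from the volume itself), here is how they might be answered with the Python networkx library, using a small made-up collaboration network; the node names and edges are hypothetical, for illustration only:

```python
# Sketch: common SNA questions answered with networkx
# (hypothetical collaboration data; names are invented for illustration)
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Edges represent who collaborates with whom in a program
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Caro"), ("Ana", "Dev"),
    ("Ben", "Caro"), ("Dev", "Eli"), ("Eli", "Fay"),
])

# Who holds power? Degree centrality captures direct influence;
# betweenness centrality captures indirect (brokerage) power.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Which subgroups have formed? Modularity-based community detection.
communities = greedy_modularity_communities(G)

# Who forms the core? The k-core keeps only well-connected members.
core = nx.k_core(G, k=2)

print(max(degree, key=degree.get))       # most directly connected actor: Ana
print([sorted(c) for c in communities])  # detected subgroups
print(sorted(core.nodes))                # core of the collaboration
```

In this toy network, Ana has the most direct ties, while Dev scores high on betweenness because paths between the two clusters run through him; an evaluator would read those as different kinds of power in the network.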

Additionally, the following online text is freely available as a reference.

Hanneman, R. A., & Riddle, M. (2005). Introduction to social network methods. Riverside, CA: University of California, Riverside.