Category Archives: Evaluation methods

Social Network Analysis ~ how to

Social network analysis (SNA) is a tool that may be useful in an evaluation if there are questions about the effectiveness of networks or the ways in which networks contribute to distributing or sustaining knowledge.

Durland & Fredericks are co-editors of an issue of New Directions for Evaluation that addresses the application of SNA to evaluation. The issue is described as follows:

The application of SNA is relatively new for mainstream evaluation, and like most other innovations, it has yet to be fully explored in this field. The volume aims to fill the gaps within SNA methodology exploration by first reviewing the foundations and development of network analysis within the social sciences and the field of evaluation. The focus then turns to the methodology. Who holds power in a network, and what measures indicate whether that power is direct or indirect? Which subgroups have formed, and where are they positioned in an organization? How divided is an organization? Who forms the core of a collaboration, and where are the experts in an organization? These are the types of common questions explored in the four case studies of the use of network analysis within an evaluative framework. These cases are diverse in their evaluation situations and in the application of measures, providing a basis to model common applications of network analysis within the field. The final chapters include a personal account of current use by a government agency and suggestions for the future use of SNA for evaluation practice.

Additionally, the following online text is freely available as a reference.

Hanneman, Robert A. and Mark Riddle. 2005. Introduction to social network methods.
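The questions raised in the volume's description (who holds direct or indirect power, how divided a network is) correspond to standard network measures such as degree and betweenness centrality. As a minimal sketch, assuming Python with the networkx library (my tool choice, not one named in either text above) and a made-up collaboration network:

```python
# A minimal sketch of common SNA measures using networkx (an assumed tool;
# neither source above prescribes particular software). The network itself
# is hypothetical.
import networkx as nx

# A small collaboration network among six staff members.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cal"), ("Ana", "Dee"), ("Ana", "Eli"),
    ("Ben", "Cal"), ("Dee", "Eli"), ("Eli", "Fay"),
])

# Degree centrality: direct influence (how many ties each actor has).
degree = nx.degree_centrality(G)

# Betweenness centrality: indirect influence (how often an actor sits
# on the shortest paths between others, i.e., brokerage).
betweenness = nx.betweenness_centrality(G)

# Density (share of possible ties that exist) is one rough indicator
# of how cohesive or divided the whole network is.
density = nx.density(G)

print("Most connected:", max(degree, key=degree.get))
print("Key broker:", max(betweenness, key=betweenness.get))
print("Density:", round(density, 2))
```

In this toy network both measures point to the same actor, but in larger networks the most-connected person and the key broker often differ, which is exactly the direct-versus-indirect power distinction the volume raises.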

Evaluation of advocacy

The Harvard Family Research Project offers a number of publications and dialogues on evaluation that are potentially valuable across many evaluation contexts. A new publication on evaluating advocacy efforts has just been released. This suggests a shift in the focus of evaluation ~ in addition to programs, personnel, and products ~ to also include strategy, planning, and action.

Organizational assessment

Reflect & Learn, a joint venture between the International Development Research Centre (IDRC) and Universalia Management Group, is a website with resources for organizational evaluation, with a focus on assessment, learning, and change. In their words, the purpose of the website is:

We also believe that improving organizations requires constant work. We are convinced that there is no “silver bullet” for improving the performance of organizations. Thus organizations need to learn about themselves and develop the problem-solving skills to transform learning and insight into action. To do this, an important step is for organizations to be able to generate systematic and credible evidence about themselves: who they are, how they function, and how well they are performing. The organizational assessment frameworks and approaches presented in this website are our attempt to help organizations better know themselves.

When Smart People Evaluate

Michèle Lamont, in How Professors Think: Inside the Curious World of Academic Judgment, opens the Pandora’s box of peer review, the primary form of evaluation in higher education. Lamont’s curiosity, like Pandora’s, reveals secretive deliberations that all too often amount to judgments of quality based on the similarity of the work being judged to that of the judges. Lamont examines differences across disciplines, highlights the tension between the idea of independently established criteria and standards and the inevitability of situational deliberation on what is good or bad, and ultimately calls for a more open, transparent approach to evaluation in higher education. In this latter move, she searches for the hope that Pandora found at the bottom of the box.

Lamont describes the details of her book in a short essay in the Huffington Post.

Assessing What Kids Think About Themselves

There is a plethora of strategies for collecting data from children and youth that provide evidence for evaluation of services and programs. (See a previous post on my new book, which focuses on data collection strategies that follow from a perspective that sees youth as culturally embedded, meaning-making social actors.) Often the focus is psychological and individual, that is, focusing on psychological states and attributes and judging changes in those based on some sort of intervention. A good example of this is a report just released by Child Trends, which provides an instrument for measuring adolescents’ self-concept. This is standard psychometric fare and could be useful; however, consider the viable alternatives. What if you asked youth to draw a self-portrait, or write a biographical sketch, or create a photo-essay that reflects how they think about themselves? You get the idea. Self-concept is, as the report suggests, an important consideration in youth-oriented programming and thus in evaluation. So important that we should be cautious about using simplistic indicators just because they are there.