There is a plethora of strategies for collecting data from children and youth that provide evidence for the evaluation of services and programs. (See a previous post on my new book, which focuses on data collection strategies that follow from a perspective that sees youth as culturally embedded, meaning-making social actors.) Often the focus is psychological and individual; that is, it centers on psychological states and attributes and judges changes in those based on some sort of intervention. A good example of this is a report just released by Child Trends, which provides an instrument for measuring adolescents’ self-concept. This is standard psychometric fare and could be useful; however, think about the viable alternatives. What if you asked youth to draw a self-portrait, write a biographical sketch, or create a photo-essay that reflects how they think about themselves? You get the idea. Self-concept is, as the report suggests, an important consideration in youth-oriented programming and thus in evaluation. So important that we should be cautious about using simplistic indicators just because they are there.
First, I’d like to say that I agree that measuring self-concept is important. I am studying to be a school counselor, and with NCLB and the movement for counselors to be held accountable, anything that measures the outcomes of our programs is becoming more and more important. Do you think the instrument in the report would be a valid tool to use in a school environment (to measure an outcome of a program addressing self-concept)? Also, as far as alternatives to a questionnaire-type tool, how would professionals in a setting like a school go about evaluating something like self-concept using some of the options you listed, while still having the results seem credible to the outsiders evaluating our performance? While people in the field see the value of qualitative data, like biographical sketches, in my experience there is a drive for more quantitative data to evaluate our programs. Any suggestions to bridge that gap?