Research and Evaluation

Comprehensive in nature, the SDP addresses the levers that influence student performance, development, and well-being, including school organization, school climate, curriculum and instruction, and parent and community involvement, with relationships and principles of child and adolescent development as the foundation.

Such a comprehensive reform model requires an evaluation design that can address the interplay of a wide variety of variables, many of which are not under the control of the program designers, while still bringing order to the process and meaning to the results. The SDP program evaluation process is designed to capture the complexity inherent in whole-district reform and to trace the lines of causation from program activities to their effects, while remaining congruent in theory and practice with the program it evaluates.

As such, the SDP program evaluation has three main foci: contextual analysis, formative evaluation, and theory building. It combines three major areas in the field:

  • Expansion of scope and use of evaluation (Preskill & Caracelli, 1997);
  • Integration of program and implementation theories (Weiss, 1997); and
  • The participatory approach.

The evaluation uses multiple data-gathering methods, both quantitative and qualitative, in an effort to triangulate findings and better interpret the results.

Regular feedback of findings through formative evaluation contributes to the quality of implementation. Emmons and Comer (2009) provide a detailed description of the Yale Child Study Center School Development Program evaluation process.

References

  • Comer, J. P. & Emmons, C.L. (2006). "The research program of the Yale Child Study Center School Development Program." The Journal of Negro Education 75(3): 353-372.
  • Emmons, C. L. & Comer, J. P.(2009) Capturing Complexity: Evaluation of the Yale Child Study Center School Development Program, in R. Deslandes (ed.) International Perspectives on Contexts, Communities and Evaluated Innovative Practices: Family-school-community partnerships. London and New York: Routledge, pp. 204-219.
  • Preskill, H. & Caracelli, V. (1997). “Current and developing conceptions of use: Evaluation use TIG survey results.” Evaluation Practice Fall97, Vol. 18 Issue 3, p209.
  • Weiss, C. H. (1997). "Theory-based evaluation: Past, present, and future." New Directions for Evaluation 76: 41-55.

The SDP evaluation process has been funded by grants from the U.S. Department of Education: CFDA #: 84.332B, PR/Award #: S332B050015; PR/Award #: R215U980019.