CPRE : Consortium for Policy Research in Education

Meeting Paper

Learning to learn from benchmark assessment data: How teachers analyze results

Although the rhetoric around formative assessment asserts the utility of everything from teacher-made assignments and quizzes to district-mandated benchmark testing for diagnostic and other instructional purposes, few studies have been conducted of how formative assessments are actually used. While there is acknowledgment that such assessments may be effective in improving student achievement and that students benefit from meaningful feedback, we know little about how educators use the data or about the conditions that support their ability to use the data to improve instruction.

In an understandable desire to limit the instructional time taken for testing, districts have opted for interim assessments that are quick to administer and score. In particular, they are choosing exclusively multiple-choice formats and restricting the number of items on any one assessment. From an efficiency standpoint, this makes sense. The question is how teachers actually use these interim assessments.

The analysis presented here is part of a broader research agenda developed by the Consortium for Policy Research in Education (CPRE) to better understand how teachers, schools, and policy makers can use information about student learning to inform decision making and practice. CPRE houses the Center on Continuous Instructional Improvement (CCII), a center that provides leadership in research and development to improve the quality and expand the use of policies, systems, and tools that support three closely related improvements in public education: adaptive instruction, formative assessment, and the cycle of instructional improvement. The findings presented in this paper are drawn from an NSF-funded exploratory study of elementary school teachers’ use of interim assessments in mathematics. We use the term “interim assessments” to refer to assessments that (a) evaluate student knowledge and skills, typically within a limited time frame, and (b) yield results that can be easily aggregated and analyzed across classrooms, schools, or even districts (Perie, Marion, & Gong, 2007). As mentioned above, this type of assessment is becoming increasingly popular as a way of informing teachers, schools, and districts about student performance. This paper addresses three questions: How do the Philadelphia teachers in our sample analyze benchmark assessment results? How do they plan instruction based on these results? And what are their reported instructional responses to such results?

Publication Date

January 2008