AERA Online Paper Repository Viewer

Continuing Paper Access to AERA Annual Meetings
and other Conventions

  
Owner: Emily R. Lai
Owner Email: emily.lai@pearson.com
Paper Title: Comparing Two Approaches to Recovering Levels of a Learning Progression in Math
Session Title: Validating Theories of Learning for Classroom Assessment Design: Sources of Evidence
Paper Type: Session Paper
Presentation Date: 4/7/2019
Presentation Location: Toronto, Canada
Unit: SIG-Cognition and Assessment
Author(s): Emily R. Lai, Pearson; Kristen E. DiCerbo, Pearson
Abstract: Presenters will describe a project that designed and tested a learning progression (LP) focused on area measurement. We designed three types of assessment activities: a game, a set of digital performance tasks, and a collection of paper-based activities. We used two different methods to recover the LP levels. The presentation will compare and contrast the two methods and the conclusions each afforded.

Theoretical Framework
To learn area measurement, students must perceive the square as the unit of area, which implies that area can be quantified by iterating and enumerating equal-sized units (Battista, 2004; Baturo & Nason, 1996; Outhred & Mitchelmore, 2000; Sarama & Clements, 2009; Zacharos, 2006). Visualization and enumeration begin with a single unit and progress to composites. Eventually, students no longer need to visualize the spatial structuring of shapes into rows, columns, and units. They understand that the linear dimensions of a shape represent the number of units per row and column and can simply multiply (Battista, 2004; Sarama & Clements, 2009). In this paper, we focus on four levels in this process:
• Area Unit Iteration
• Using Area Units
• Using Area Composites
• Area Formula

Methods
We used two different methods to recover the levels of the LP. In Study 1 (DiCerbo, Xu, Levy, Lai, & Holland, 2017), we used a Bayesian approach that aggregated different sources of evidence (game, performance tasks, and paper-based activities) into a single mastery estimate. In Study 2 (Lai, Kobrin, DiCerbo, & Holland, 2017), we applied latent class analysis to students’ responses to the online performance tasks to sort students into categories defined by patterns of response probabilities.

Data Sources
Study 1 used data from 131 third-grade students in six classrooms in Indiana. Study 2 used data collected from 260 students: US third-grade students and Australian students in grades 3-5. American participants came from eight classrooms in five public school districts in Kentucky, New Jersey, Ohio, and Washington. Australian students came from nine classrooms in four primary schools in the state of Victoria. Study 2 also analyzed think-aloud data collected from 16 third- and fourth-grade students.

Results
Study 1 found that most students likely mastered Using Area Composites, but more than half of them likely did not master the Area Formula. Results also suggested that unique assessment contexts can affect student performance. Study 2 recovered only the novice and expert levels of the progression, with evidence of a “messy middle” suggesting that some levels may be out of sequence. Study 2 also found that students appeared to work in multiple levels of the progression at once, indicating that the LP may be less a strict hierarchy and more a loose network of concepts.

Significance
LPs continue to be a promising tool for designing learning and assessments, but previous studies have failed to cleanly recover the levels of an LP (e.g., Choi & Duncan, 2014; Steedle & Shavelson, 2009). If LPs are to deliver on their promise, it is critical that researchers study different statistical approaches.
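The Bayesian evidence-aggregation idea used in Study 1 can be illustrated as an item-by-item posterior update for a single binary mastery variable. This is a minimal sketch only: the `slip` and `guess` parameters, the single-skill setup, and the function name are illustrative assumptions, not the authors' actual model, which pooled evidence across the game, digital performance tasks, and paper-based activities.

```python
def update_mastery(prior, responses, slip=0.1, guess=0.2):
    """Update P(mastery) from a sequence of scored 0/1 responses.

    Hypothetical parameters (not from the study):
      P(correct | mastery)     = 1 - slip
      P(correct | non-mastery) = guess
    """
    p = prior
    for r in responses:
        if r == 1:
            like_m, like_n = 1 - slip, guess      # likelihoods of a correct answer
        else:
            like_m, like_n = slip, 1 - guess      # likelihoods of an incorrect answer
        # Bayes' rule: posterior odds of mastery after this item
        p = (p * like_m) / (p * like_m + (1 - p) * like_n)
    return p

# Starting from a neutral prior, three correct answers push the
# mastery estimate up; repeated errors push it down.
posterior = update_mastery(0.5, [1, 1, 1])
```

In this toy setup each scored response (from any assessment context) simply multiplies into the same posterior, which is the sense in which multiple evidence sources are aggregated into a single mastery estimate.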
DOI: https://doi.org/10.3102/1438453