Findings from two studies conducted by researchers at AAAS’s Project 2061 will be presented at the following sessions taking place during the annual meetings of the National Association for Research in Science Teaching (NARST) and the American Educational Research Association (AERA). Registered attendees are invited to participate.
Tuesday, April 25, 2017, San Antonio, TX (NARST Annual Conference)
Session Title: Validating an Assessment for Tracking Students’ Growth in Understanding of Energy from Elementary School to High School
Presenter: Joseph M. Hardcastle (Co-authors: Cari F. Herrmann Abell and George E. De Boer)
Abstract: Energy is a critically important topic in the K-12 science curriculum, with applications across the earth, physical, and life sciences and in engineering and technology. Meeting the challenges associated with teaching energy requires new tools and assessment instruments. In this work we describe the development of a three-tiered set of assessment instruments designed to measure students' understanding of energy at basic, intermediate, and advanced levels. Drawing on a bank of 372 multiple-choice items targeting 14 key energy ideas, we created three assessment instruments, one for each of the three levels. These instruments were pilot tested with elementary, middle, and high school students, and the results were analyzed using Rasch modeling. Our findings show that, through the use of linking items, the three instruments form a common scale, allowing items and students to be compared across forms. Each instrument was found to contain items with difficulties suitable for students across a range of understanding of energy. Together, these instruments form a reliable tool for measuring the growth of students' understanding of the energy concept as they progress from elementary school to high school.
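For readers unfamiliar with the method, Rasch analyses of multiple-choice items are typically based on the dichotomous Rasch model; the formulation below is the standard one, offered here as background rather than as the paper's own notation:

    P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

Here \theta_n is the ability of student n and b_i is the difficulty of item i, both expressed in logits on a single scale. Because linking items appear on more than one test form, their difficulty estimates anchor the forms to that common scale, which is what allows items and students to be compared across forms.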
Sunday, April 30, 2017, San Antonio, TX (AERA Annual Meeting)
Session Title: Comparing Student Performance on Paper-and-Pencil and Computer-Based Tests
Presenter: Joseph M. Hardcastle (Co-authors: Cari F. Herrmann Abell and George E. De Boer)
Abstract: Can student performance on computer-based tests (CBTs) and paper-and-pencil tests (PPTs) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although the body of research addressing it is growing, additional studies are needed. We report on the performance of students who took either a PPT or one of two different CBTs containing multiple-choice items assessing science ideas. Propensity score matching was used to create demographically equivalent groups for each testing modality, and Rasch modeling was used to describe student performance. Performance was found to vary across testing modalities by grade band, students' primary language, and the specific CBT system used. These results are discussed in terms of the current literature and the differences between the specific PPT and CBT systems.
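As background on the matching step, the sketch below illustrates one common way to implement propensity score matching in Python, using scikit-learn for the propensity model followed by 1:1 nearest-neighbor matching on the estimated scores. It is a minimal illustration, not the study's actual analysis pipeline; the column names ("treated", "grade_band", "primary_language_english") are hypothetical.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_groups(df, covariates):
    """Match each CBT taker (treated == 1) to the most similar PPT taker
    (treated == 0) on estimated propensity score. Covariates are assumed
    to be numerically encoded already."""
    # 1. Estimate each student's propensity score: P(took the CBT | demographics).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 2. 1:1 nearest-neighbor matching (with replacement) on the score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_controls = control.iloc[idx.ravel()]

    # 3. Matched sample: all CBT takers plus their matched PPT counterparts.
    return pd.concat([treated, matched_controls])

# Hypothetical usage:
# matched = match_groups(students, ["grade_band", "primary_language_english"])

One design note on the sketch: matching with replacement keeps every treated student in the sample at the cost of reusing some controls; matching without replacement is the stricter alternative when the control pool is large enough.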