Thousands of experts in science education and education research gathered this spring in Atlanta and New York City for the annual conferences of the National Association for Research in Science Teaching and the American Educational Research Association. Among those presenting their findings at the meetings were researchers from Project 2061, AAAS’s science literacy initiative.
Drawing on findings from research supported by a grant from the U.S. Department of Education, the Project 2061 team provided new insights about what K-12 students know about energy (and when they know it) and about the comparability of results from paper-based versus computer-based tests. The following abstracts highlight the major points from each paper.
Using Rasch to Develop and Validate an Assessment of Students’ Progress on the Energy Concept (Paper presented at the 2018 AERA Annual Conference, New York, NY, April 13-17, 2018) Authors: Cari F. Herrmann-Abell, Joseph Hardcastle, and George E. DeBoer
The Project 2061 research team developed and validated a set of three assessment instruments for measuring students’ progress in understanding the energy concept from fourth through twelfth grade. The team used Rasch analysis throughout the development process to guide the construction of a bank of test items and the selection of items for each instrument, to validate the instruments, and to design support materials that aid in the interpretation of student performance. A cross-sectional analysis of the match between the instruments and the students revealed that the difficulty levels of the intermediate and advanced instruments were greater than the average performance levels of students in grades four through twelve. Wright maps and option probability curves were created to help users interpret student performance.
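For readers who want to see what the underlying model looks like, the Rasch model for dichotomous items places each student's ability and each item's difficulty on a common logit scale, with the probability of a correct response given by 1 / (1 + e^-(ability - difficulty)). The Python sketch below fits this model to simulated responses; the toy data, function names, and simple gradient-based fitting routine are illustrative assumptions, not the software the authors used.

```python
import numpy as np

def rasch_probability(theta, delta):
    """P(correct) for ability theta on an item of difficulty delta (logit scale)."""
    return 1.0 / (1.0 + np.exp(-(theta - delta)))

def fit_rasch(responses, n_iters=500, lr=0.5):
    """Jointly estimate person abilities and item difficulties from a
    persons-by-items matrix of 0/1 responses via simple gradient ascent
    (a hypothetical routine for illustration only)."""
    n_persons, n_items = responses.shape
    theta = np.zeros(n_persons)   # person abilities (logits)
    delta = np.zeros(n_items)     # item difficulties (logits)
    for _ in range(n_iters):
        p = rasch_probability(theta[:, None], delta[None, :])
        resid = responses - p                  # observed minus expected
        theta += lr * resid.mean(axis=1)       # ascend the log-likelihood
        delta -= lr * resid.mean(axis=0)
        theta = np.clip(theta, -6.0, 6.0)      # keep perfect scores finite
        delta -= delta.mean()                  # anchor scale: mean difficulty 0
    return theta, delta

# Toy example: 100 simulated students, 10 items of increasing difficulty.
rng = np.random.default_rng(0)
true_theta = rng.normal(0.0, 1.0, 100)
true_delta = np.linspace(-2.0, 2.0, 10)
data = (rng.random((100, 10))
        < rasch_probability(true_theta[:, None], true_delta[None, :])).astype(float)

theta_hat, delta_hat = fit_rasch(data)
print(np.round(delta_hat, 2))  # recovers the increasing difficulty ordering
```

Because abilities and difficulties land on the same logit scale, they can be plotted together; that shared scale is what a Wright map displays, making it easy to see when items sit above the ability distribution of the students, as the cross-sectional analysis found for the intermediate and advanced instruments.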
Comparability of Computer-Based and Paper-Based Science Assessments (Paper presented at the 2018 NARST Annual International Conference, Atlanta, GA, March 10-13, 2018) Authors: Cari F. Herrmann-Abell, Joseph Hardcastle, and George E. DeBoer
For this study, the Project 2061 researchers compared students’ performance on a paper-based test (PBT) and on three versions of a computer-based test (CBT). The three CBT versions used different test navigation and answer selection features, allowing the researchers to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across the U.S. who were randomly assigned to take the test in one of the four modes. The CBT modes differed in whether students could skip questions and move freely through the test, and in whether they selected an answer by clicking directly on the answer choice or by clicking a radio button at the bottom of the screen. Rasch analysis was used to estimate item difficulties and student performance levels, and student performance level was then used as the outcome in hierarchical linear models to estimate the mode effects. The Project 2061 team found that student performance was unaffected by whether the test was paper-based or computer-based. A comparison of student performance on the three CBTs indicated that restricting test navigation did not affect performance, but allowing students to select an answer choice by clicking directly on it improved performance. These findings show that CBTs can be considered equivalent to PBTs, and they can also inform best practices for the design of other CBTs.
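To make the modeling step concrete, the sketch below fits a random-intercept hierarchical linear model of the kind the abstract describes, using Python's statsmodels. The simulated data, the column names, and the choice of school as the grouping unit are assumptions for illustration only; the paper's actual model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
modes = ["PBT", "CBT_restricted", "CBT_radio", "CBT_click"]

# Simulate students nested in schools, randomly assigned to one of four modes,
# with a small positive effect built in for the click-on-answer CBT mode.
rows = []
for school in range(40):
    school_effect = rng.normal(0.0, 0.4)   # random intercept for the school
    for _ in range(60):
        mode = rng.choice(modes)
        score = rng.normal(0.2 if mode == "CBT_click" else 0.0, 1.0)
        rows.append({"school": school, "mode": mode,
                     "ability": score + school_effect})
df = pd.DataFrame(rows)

# Random-intercept HLM: Rasch ability estimate as outcome, mode as fixed
# effect, students grouped within schools; PBT is the reference category.
model = smf.mixedlm("ability ~ C(mode, Treatment('PBT'))", df, groups=df["school"])
result = model.fit()
print(result.summary())  # mode coefficients estimate each CBT's effect vs. PBT
```

Because students are randomly assigned to modes, the fixed-effect coefficients for mode estimate each CBT's effect on performance relative to the PBT baseline, while the random intercept absorbs performance differences between grouping units.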