The Rapid Online Assessment of Reading (ROAR) is a web-based lexical decision task that measures single-word reading ability in children and adults without a proctor. Here we study whether item response theory (IRT) and computerized adaptive testing (CAT) can be used to create a more efficient online measure of word recognition. To construct an item bank, we first analyzed data from four groups of students (N = 1960) who differed in age, socioeconomic status, and language-based learning disabilities. The majority of item parameters were highly consistent across groups (r = .78-.94), and six items that functioned differently across groups were removed. Next, we implemented a JavaScript CAT algorithm and conducted a validation experiment with 485 students in grades 1-8 who were randomly assigned to complete trials of all items in the item bank in either (a) a random order or (b) a CAT order. We found that, to achieve a reliability of 0.9, CAT improved test efficiency by 40%: 75 CAT items produced the same standard error of measurement as 125 items administered in a random order. Subsequent validation in 32 public school classrooms showed that an approximately 3-min ROAR-CAT achieves high correlations (r = .89 for first grade, r = .73 for second grade) with alternative 5- to 15-min, individually proctored oral reading assessments. Our findings suggest that ROAR-CAT is a promising tool for efficiently and accurately measuring single-word reading ability. Furthermore, our development process serves as a model for creating adaptive online assessments that bridge research and practice.
Keywords: Computerized adaptive testing; Dyslexia; Lexical decision task; Psychometrics; Screening tools; Word recognition.
© 2025. The Author(s).
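The efficiency figures above follow from the standard IRT relation between reliability and the standard error of measurement. The sketch below is illustrative only, assuming the conventional convention of a standard-normal ability (θ) scale, under which reliability ≈ 1 − SEM² and SEM(θ) = 1/√I(θ), where I is the test information; the function names are ours, not from the ROAR codebase.

```python
import math

def sem_from_reliability(rel):
    # Assuming a standard-normal theta scale: reliability ≈ 1 - SEM^2,
    # so the SEM needed to reach a target reliability is sqrt(1 - rel).
    return math.sqrt(1.0 - rel)

def required_information(rel):
    # SEM(theta) = 1 / sqrt(I(theta)), so the test information needed
    # for a target reliability is I = 1 / (1 - rel).
    return 1.0 / (1.0 - rel)

target = 0.90
sem = sem_from_reliability(target)   # ≈ 0.316: the SEM a test must reach
info = required_information(target)  # ≈ 10: total information required

# The 40% efficiency gain reported in the abstract: 75 adaptively
# selected items matched the SEM of 125 randomly ordered items.
items_saved = 1 - 75 / 125           # 0.40
```

CAT reaches the required information with fewer items because each item is chosen to maximize information at the examinee's current ability estimate, rather than averaging over the whole bank.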