Objective: A series of experiments examined human operators' strategies for interacting with highly reliable (93%) automated decision aids in a binary signal detection task.
Background: Operators often interact with automated decision aids in a suboptimal way, achieving performance levels lower than predicted by a statistically ideal model of information integration. To better understand operators' inefficient use of decision aids, we compared participants' automation-aided performance levels with the predictions of seven statistical models of collaborative decision making.
Method: Participants performed a binary signal detection task in which they classified random dot images as either blue or orange dominant. They made their judgments either unaided or with assistance from a 93% reliable automated decision aid that provided either graded (Experiments 1 and 3) or binary (Experiment 2) cues. Automation-aided performance was compared with the predictions of the seven models, which included a statistically ideal model and Robinson and Sorkin's contingent criterion model.
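The statistically ideal benchmark in this literature is conventionally the independent-observer integration rule from signal detection theory; as a minimal sketch, assuming equal-variance Gaussian evidence and independent human and aid judgments (not necessarily the exact formulation used in these experiments):
\[
d'_{\text{ideal}} = \sqrt{d'^{2}_{\text{human}} + d'^{2}_{\text{aid}}}, \qquad d' = 2\,\Phi^{-1}(PC) \text{ for an unbiased observer,}
\]
so a 93% reliable aid would correspond to roughly \(d'_{\text{aid}} \approx 2\,\Phi^{-1}(.93) \approx 2.95\).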
Results and conclusion: Automation-aided sensitivity hewed closest to the predictions of the two least efficient collaborative models, falling well short of statistically ideal levels. Performance was similar whether the aid provided graded or binary cues. Model comparisons identified potential strategies by which participants integrated their own judgments with the aid's.
Application: Results lend insight into participants' automation-aided decision strategies and provide benchmarks for predicting automation-aided performance levels.
Keywords: contingent criterion model; decision-making strategies; human–automation interaction; signal detection theory.