Study objective: We apply a previously described tool to forecast emergency department (ED) crowding at multiple institutions and assess its generalizability for predicting the near-future waiting count, occupancy level, and boarding count.
Methods: The ForecastED tool was validated with historical data from 5 institutions external to the development site. A sliding-window design separated the data for parameter estimation and forecast validation. Observations were sampled at consecutive 10-minute intervals during 12 months (n=52,560) at 4 sites and 10 months (n=44,064) at the fifth. Three outcome measures (the waiting count, occupancy level, and boarding count) were forecast 2, 4, 6, and 8 hours beyond each observation, and forecasts were compared with observed data at the corresponding times. Reliability and calibration were measured following previously described methods. After linear calibration, forecasting accuracy was measured with the median absolute error.
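The evaluation design described above can be sketched in code. The following is a minimal illustration of a sliding-window evaluation with linear calibration and median absolute error; the function names, the naive persistence forecaster, and the synthetic series are assumptions for demonstration only and do not represent the ForecastED model itself.

```python
import numpy as np

def sliding_window_evaluation(series, horizon_steps, window_steps, forecast_fn):
    """Slide over a series sampled at 10-minute intervals, estimating parameters
    on a trailing window and forecasting horizon_steps intervals ahead."""
    forecasts, observed = [], []
    for t in range(window_steps - 1, len(series) - horizon_steps):
        history = series[t - window_steps + 1 : t + 1]   # data used only for estimation
        forecasts.append(forecast_fn(history, horizon_steps))
        observed.append(series[t + horizon_steps])       # value observed at the forecast time
    return np.asarray(forecasts), np.asarray(observed)

def linear_calibration(forecasts, observed):
    """Fit observed = a + b * forecast by least squares and return calibrated forecasts."""
    slope, intercept = np.polyfit(forecasts, observed, 1)
    return intercept + slope * forecasts

def median_absolute_error(forecasts, observed):
    """Median absolute difference between (calibrated) forecasts and observations."""
    return float(np.median(np.abs(forecasts - observed)))

if __name__ == "__main__":
    # Hypothetical example: occupancy level forecast 4 hours (24 ten-minute steps) ahead
    # with a naive persistence forecaster, on a synthetic series.
    rng = np.random.default_rng(0)
    occupancy = np.clip(0.7 + 0.1 * rng.standard_normal(5000), 0.0, 1.5)
    persistence = lambda history, h: history[-1]
    f, o = sliding_window_evaluation(occupancy, horizon_steps=24,
                                     window_steps=6 * 24, forecast_fn=persistence)
    print(median_absolute_error(linear_calibration(f, o), o))
```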
Results: The tool was successfully applied at 5 different sites. Its forecasts were more reliable, better calibrated, and more accurate at 2 hours than at 8 hours. The reliability and calibration of the tool were similar between the original development site and the external sites, with the exception of the boarding count, which was less reliable at 4 of the 5 sites. Accuracy varied somewhat among institutions; when forecasting 4 hours into the future, the median absolute error ranged between 0.6 and 3.1 patients for the waiting count, between 9.0% and 14.5% of beds for the occupancy level, and between 0.9 and 2.8 patients for the boarding count.
Conclusion: The ForecastED tool generated potentially useful forecasts of input and throughput measures of ED crowding at 5 external sites, without modification of its underlying assumptions. Although this was not a real-time validation, ongoing research will focus on integrating the tool with ED information systems.