Objectives: This study applied risk-adjustment methods to evaluate in-hospital mortality among percutaneous coronary intervention patients at member institutions of the American College of Cardiology-National Cardiovascular Data Registry over a 4-year period, in order to assess variability in risk-adjusted performance measures.
Background: Cardiac catheterization laboratories, hospital networks, and third-party payers are interested in assessing the outcomes of percutaneous coronary interventions. Evaluation of outcomes without considering case selection may lead to erroneous conclusions about program quality.
Methods: The National Cardiovascular Data Registry database was queried for all percutaneous coronary intervention cases performed between January 1, 2001, and September 30, 2004. Random effects logistic regression was used to develop models of in-hospital mortality and to compute an expected mortality rate for each program. The observed mortality rate in each program was divided by the program's expected rate to obtain the observed/expected (O/E) mortality ratio. Change in the O/E ratio over time was assessed with a generalized estimating equation approach to repeated measures. An index of variability was calculated as the mean absolute difference between O/E ratios for each pair of years.
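As a minimal illustration of the two quantities defined above, the sketch below computes per-year O/E ratios and the mean absolute pairwise difference for a single hypothetical program. The data values and the `program` dictionary are invented for illustration; the study's actual expected rates come from the fitted random effects models, not from this code.

```python
# Illustrative sketch only: per-year O/E ratios and the variability index
# (mean absolute difference between O/E ratios of each pair of years).
# All numbers below are hypothetical.
from itertools import combinations

# Hypothetical single-program data: year -> (observed deaths, expected deaths)
program = {
    2001: (12, 10.0),
    2002: (9, 11.5),
    2003: (15, 12.0),
    2004: (8, 9.5),
}

# O/E ratio for each year: observed deaths divided by model-expected deaths
oe = {year: obs / exp for year, (obs, exp) in program.items()}

# Variability index: mean absolute difference over all pairs of years
pairs = list(combinations(sorted(oe), 2))
variability = sum(abs(oe[a] - oe[b]) for a, b in pairs) / len(pairs)

print({year: round(r, 2) for year, r in oe.items()})
print(round(variability, 3))
```

An O/E ratio above 1 indicates more deaths than the risk model predicts for that program's case mix; a larger variability index indicates less year-to-year consistency.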
Results: There were 664,909 interventional procedures performed in 403 National Cardiovascular Data Registry programs from 2001 to 2004. There was no significant systematic change in O/E ratios over the 4-year period, but there was significantly greater variation in O/E ratios associated with lower percutaneous coronary intervention volume programs.
Conclusions: Our risk-adjustment models had very good discrimination and were relatively consistent over the study period. There was substantial within-program variation in O/E ratios. Such variation could serve as an indication for more detailed examination of individual programs.