A statistical wit once remarked that researchers often pose the wrong question and then proceed to answer that question incorrectly. The question that researchers intend to ask is whether a treatment effect is clinically significant. The question that is typically asked, however, is whether the treatment effect is statistically significant, a question that may be only marginally related to the issue of clinical impact. Similarly, the answer, in the form of a p value, is typically assumed to reflect clinical significance but in fact reflects statistical significance.

In an attempt to address this problem, the medical literature has over the past decade been moving away from tests of significance and toward the use of confidence intervals. Concretely, study reports are moving away from statements such as "the difference was significant with a p value under 0.01" and toward statements such as "the one-year survival rate was increased by 20 percentage points, with a 95% confidence interval of 15 to 24 percentage points" (a sketch of how such an interval can be computed appears below). By focusing on what the effect is, rather than on what it is not, confidence intervals offer an appropriate framework for reporting the results of clinical trials.

This paper offers a non-technical introduction to confidence intervals, shows how the confidence-interval framework offers advantages over hypothesis testing, and highlights some of the controversy that has developed around the application of this method. Additionally, we argue that studies that will be reported in terms of confidence intervals should likewise be planned with reference to confidence intervals: the sample size should be set to ensure not only that the study has adequate power but also that the estimate of the effect size will have appropriate precision.
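For readers who want to see the arithmetic behind an interval like the one quoted above, the following is a minimal sketch in Python using the simple Wald (normal-approximation) interval for a difference between two independent proportions. The counts are hypothetical, chosen only to yield a 20-percentage-point difference; they do not reproduce the interval from any actual study, and in practice a more refined interval (or standard statistical software) would typically be used.

```python
import math

def two_proportion_ci(x1, n1, x2, n2, z=1.96):
    """Wald (normal-approximation) confidence interval for the
    difference between two independent proportions.
    z = 1.96 gives an approximate 95% interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts (not from any study cited here): 160 of 200
# patients survive one year on treatment vs. 120 of 200 on control.
diff, lo, hi = two_proportion_ci(160, 200, 120, 200)
print(f"difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# difference = 0.20, 95% CI = (0.11, 0.29)
```

With these hypothetical counts the interval runs from about 11 to 29 percentage points; quadrupling the sample sizes would roughly halve its width. This is precisely the sense in which sample size governs the precision of the estimate, and not merely the power of the significance test.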