Systematic reviews of clinical trials aim to include all relevant studies conducted on a particular topic and to provide an unbiased summary of their results, producing the best evidence about the benefits and harms of medical treatments. Relevant studies, however, may not report results for all measured outcomes, or may selectively report only some of the analyses undertaken, leading to unnecessary waste in the production and reporting of research and potentially biasing the conclusions of systematic reviews. In this article, Kirkham and colleagues present a methodological approach, illustrated with an example, for identifying missing outcome data and for assessing and adjusting for outcome reporting bias in systematic reviews.