Objective: Missing data due to study dropout are common in weight loss trials, and several statistical methods exist to account for them. The aim of this study was to identify the analytical methods used in the literature and to compare their effects using simulated data sets.

Methods: Literature from a 1-y period was reviewed to identify the analytical methods used in reporting weight loss trials. The methods were then compared in simulated data sets, based on previous research, with large or small between-group weight loss and with data that were, or were not, missing at random.

Results: Twenty-seven studies, some with multiple analyses, were retrieved. Complete case analysis (n = 17), last observation carried forward (n = 6), baseline carried forward (n = 4), maximum likelihood (n = 6), and multiple imputation (n = 2) were the most common methods of accounting for missing data. In the simulated data, all methods demonstrated a significant effect when the between-group weight loss was large (P < 0.001, interaction term), regardless of whether the data were missing completely at random. When the weight loss interaction was small, the analysis method gave considerably different results, with mixed models (P = 0.180) and multiple imputation (P = 0.125) closest to the full data model (P = 0.033).

Conclusion: The simulation analysis showed that when data were not missing at random, treatment effects were small, and the amount of missing data was substantial, the choice of analysis method affected the significance of the outcome. Careful attention must be paid when analyzing or appraising studies with missing data and small effects to ensure that appropriate conclusions are drawn.
Available at: http://works.bepress.com/mbatterham/135/
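The direction of the bias introduced by the simpler methods can be illustrated with a small simulation. The sketch below is not the authors' simulation design; it is a minimal hypothetical example (assumed sample size, weight distribution, and dropout mechanism) showing why complete case analysis and baseline carried forward, two of the methods identified in the review, can pull the estimated weight loss in opposite directions when dropout is not at random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial arm: assumed baseline weights and true 12-month losses (kg)
n = 200
baseline = rng.normal(90.0, 10.0, n)
true_loss = rng.normal(3.0, 2.0, n)
final = baseline - true_loss

# Dropout not at random: participants losing less weight are more likely to drop out
drop_prob = 1.0 / (1.0 + np.exp(true_loss - 1.0))
observed = rng.random(n) > drop_prob

# Full data estimate: mean loss if no one had dropped out
full_loss = true_loss.mean()

# Complete case analysis: discard dropouts entirely
cc_loss = (baseline[observed] - final[observed]).mean()

# Baseline carried forward: dropouts are assumed to have lost nothing
bocf_final = np.where(observed, final, baseline)
bocf_loss = (baseline - bocf_final).mean()

print(f"full data: {full_loss:.2f} kg, complete case: {cc_loss:.2f} kg, "
      f"BOCF: {bocf_loss:.2f} kg")
```

Because completers here lost more weight than dropouts, complete case analysis overstates the mean loss and baseline carried forward understates it, straddling the full data value. With a large true effect either estimate may still reach significance, but with a small effect this spread is enough to change the conclusion, which is the pattern the simulation comparison reports.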