Impact of predictor measurement heterogeneity across settings on performance of prediction models: a measurement error perspective

27 Jun 2018  ·  Kim Luijken, Rolf H. H. Groenwold, Ben van Calster, Ewout W. Steyerberg, Maarten van Smeden ·

Clinical prediction models have an important role in contemporary medicine. A vital aspect to consider is whether a prediction model is transportable to individuals who were not part of the set in which the prediction model was derived. Transportability of prediction models can be hampered when predictors are measured differently at derivation and (external) validation of the model. This may occur, for instance, when predictors are measured using different protocols or when tests are produced by different manufacturers. Although such heterogeneity in predictor measurement across derivation and validation samples is very common, its impact on the performance of prediction models at external validation is not well studied. Using analytical and simulation approaches, we examined the external performance of prediction models under different scenarios of heterogeneous predictor measurement. These scenarios were defined and clarified using an established taxonomy of measurement error models. The results of our simulations indicate that predictor measurement heterogeneity induces miscalibration of prediction models and affects discrimination and accuracy at external validation, to the extent that predictions in new observations may no longer be clinically useful. The measurement error perspective was found to be helpful in identifying and predicting effects of heterogeneous predictor measurements across settings of derivation, validation, and application. Our work indicates that consideration of consistency of measurement procedures across settings is of paramount importance in prediction research.
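The mechanism the abstract describes can be illustrated with a minimal simulation sketch. The setup below is hypothetical and not taken from the paper: we assume a known logistic prediction model (intercept 0, coefficient 1) derived on an error-free predictor X, and a validation setting where the same predictor is instead measured with classical measurement error, W = X + e. Refitting the linear predictor on the validation outcomes gives the calibration slope, and a rank-based concordance statistic gives the AUC; under classical error both degrade, mirroring the miscalibration and loss of discrimination reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical "derived" model: logit P(Y=1) = 0.0 + 1.0 * X
b0, b1 = 0.0, 1.0
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(b0 + b1 * x))))

# Classical measurement error at validation: W = X + e, e ~ N(0, 1)
w = x + rng.normal(scale=1.0, size=n)

def auc(score, y):
    """Mann-Whitney (rank-based) AUC; scores are continuous, so ties are ignored."""
    ranks = score.argsort().argsort() + 1
    n1 = y.sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(y) - n1))

def calibration_slope(lp, y, iters=25):
    """Slope b from refitting logit P(Y=1) = a + b * lp via Newton-Raphson."""
    X = np.column_stack([np.ones_like(lp), lp])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ beta)))
        weights = p * (1 - p)
        beta += np.linalg.solve((X.T * weights) @ X, X.T @ (y - p))
    return beta[1]

lp_clean = b0 + b1 * x  # predictor measured as at derivation
lp_noisy = b0 + b1 * w  # predictor measured with error at validation

print("AUC, derivation-style measurement:", round(auc(lp_clean, y), 3))
print("AUC, error-prone measurement:     ", round(auc(lp_noisy, y), 3))
print("Calibration slope (error-prone):  ", round(calibration_slope(lp_noisy, y), 3))
```

With this error variance, the calibration slope falls well below the ideal value of 1 (attenuation) and the AUC drops relative to the error-free setting. This is one specific measurement error structure; the paper's taxonomy covers additional scenarios (e.g., systematic and differential error) whose effects differ.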

