During the clinical development of therapeutics, fluid biomarkers are often included to inform on target engagement, patient selection, treatment monitoring, and potentially disease progression. In addition, fluid biomarkers often provide insight into the biological pathways underlying pathophysiology, which can precede observable disease progression and therapeutic response. Measuring biomarkers with research-use-only (RUO) immunoassays is widespread, yet the lack of assay kit standardization and unexpected lot-to-lot variability present challenges for clinical trial applications. Limited reagent stability often necessitates testing study samples in several batches, which, despite rigorous controls to minimize variability, are still processed and measured under different conditions, at different times, and by different operators. However, data collected across these varied testing conditions must be combined into a single analysis. Despite advances in technologies and bioanalytical methods, and despite thorough method validation, batch effects remain pervasive. These batch effects are complex, and effective mitigation strategies, which may be highly context-dependent, are required.
During an ongoing 3.5-year clinical trial measuring serum biomarkers, we observed clear batch effects for one of the biomarkers. In this case study, we analyze patient sample data to evaluate the batch effects and potential technical contributors to the observed data variability. We also describe a novel computational method that was evaluated to mitigate the observed batch effects and enable uniform data analysis across multiple ELISA kit/reagent lots for this longitudinal study.
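The abstract does not detail how the batch effects were evaluated. As a minimal sketch of one common first step, assuming QC or bridging samples are tracked per kit lot, lot-to-lot shifts can be screened with a nonparametric comparison of QC readouts across lots (the column names "qc_value" and "kit_lot" below are hypothetical):

```python
# Illustrative sketch only (not the study's analysis pipeline): test whether
# QC-sample readouts differ across ELISA kit lots.
import pandas as pd
from scipy import stats

def flag_batch_effect(qc: pd.DataFrame, alpha: float = 0.05) -> bool:
    """Kruskal-Wallis test of QC concentrations grouped by kit lot.

    A small p-value suggests lot-to-lot shifts beyond within-lot variability;
    it does not identify the technical cause of the shift.
    """
    groups = [g["qc_value"].to_numpy() for _, g in qc.groupby("kit_lot")]
    if len(groups) < 2:
        return False  # nothing to compare against
    h_stat, p_value = stats.kruskal(*groups)
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3g}")
    return p_value < alpha
```

A flagged shift indicates a potential batch effect but does not, by itself, identify the technical contributor (e.g., calibrator drift, operator, or instrument), which still requires review of run-level metadata.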
Learning Objectives:
Evaluate longitudinal biomarker data to identify potential batch effects
Understand the impacts of batch effects on biomarker data from long-term studies
Describe potential mitigation strategies to correct for batch effects, including implementation of a statistical correction factor (a minimal illustrative sketch follows this list)
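As an illustration only of what a statistical correction factor might look like (the study's actual computational method is not described here), one widely used approach re-scales results from a new reagent lot onto a reference-lot scale using bridging samples measured on both lots; all names and values below are hypothetical:

```python
# Hedged sketch of a multiplicative lot-to-lot correction factor derived from
# bridging samples run on both a reference lot and a new lot.
import numpy as np

def lot_correction_factor(ref_values: np.ndarray, new_values: np.ndarray) -> float:
    """Median ratio of paired bridging-sample results (reference lot / new lot)."""
    ratios = np.asarray(ref_values, dtype=float) / np.asarray(new_values, dtype=float)
    return float(np.median(ratios))

def harmonize(new_lot_results: np.ndarray, factor: float) -> np.ndarray:
    """Re-scale new-lot study results onto the reference-lot scale."""
    return np.asarray(new_lot_results, dtype=float) * factor

# Example with three bridging samples measured on both lots (values are made up):
ref = np.array([12.1, 48.0, 95.3])   # reference-lot results (e.g., pg/mL)
new = np.array([10.4, 41.9, 82.6])   # same samples re-measured on the new lot
factor = lot_correction_factor(ref, new)
corrected = harmonize(np.array([15.0, 60.2]), factor)
```

Whether a simple multiplicative factor is adequate depends on the assay; concentration-dependent or additive biases may call for regression-based or other context-specific corrections.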