EEG studies play a crucial role in enhancing our understanding of brain development across the lifespan. The growing clinical and policy implications of EEG research underscore the importance of using reliable EEG measures and improving the reproducibility of EEG studies. However, key data characteristics such as reliability, effect sizes, and data quality metrics are often underreported in pediatric EEG studies, a gap that may stem from the lack of accessible computational tools for quantifying these metrics. To help close this gap, we developed a toolbox with user-friendly software for both computing and interpreting internal consistency reliability, effect size, and standardized measurement error. In addition, our tool provides subsampled estimates of reliability and effect size across increasing numbers of trials. These estimates offer insight into the number of trials needed to detect significant effects and obtain reliable measures, informing minimum-trial thresholds for including participants in individual-difference analyses and optimal trial counts for future study designs. Importantly, our toolbox is integrated into commonly used preprocessing pipelines to increase the estimation and reporting of data quality metrics in developmental neuroscience.
Keywords: Data quality metrics; Effect sizes; Electroencephalogram (EEG); Event-related potentials (ERPs); Reliability; Standardized measurement error (SME).
Copyright © 2024 The Authors. Published by Elsevier Ltd. All rights reserved.