Abstract
Exposure modeling is critical in environmental epidemiology and human health research but faces challenges such as skewed data, unequal error distribution, context-insensitive validation, and computational demands. Modeling decisions reflect the intended use of the models and the values that modelers prioritize. We aimed to provide a conceptual framework and machine learning (ML) modeling protocols that address these issues. With 500 m-gridded hourly PM₂.₅ and O₃ levels in Illinois before, during, and after the 2023 Canadian wildfire season as a motivating example, we conducted modeling experiments to evaluate modeling methods, guided by three domains we propose based on theories of science: 1) Data Diversity, leveraging open and citizen science data to enhance inclusivity, parsimony, and representativeness; 2) Equitable Accuracy, ensuring fairly distributed uncertainties across subpopulations; and 3) Sustainable Modeling, balancing accuracy with reduced computational demands to promote accessibility for under-resourced researchers. We found that ML with publicly available data can achieve high accuracy. Depending on the methods used, performance may vary substantially, even with identical input data. Large but skewed data may reduce performance. Misuse of cross-validation protocols can underestimate prediction error: although we observed R² values of ∼98%, the modeled estimates varied considerably, indicating the need for careful model validation. By using new modeling protocols, including representativeness-considered training and validation data and a new loss function, we achieved high agreement between estimates and ground-based measurements (e.g., R² ≈ 90% for PM₂.₅ and ≈80% for O₃), equally distributed errors across sociodemographic strata and urban-rural divides, and a reduction in computation time from several weeks or months to a few days.