Abstract
The use of machine learning in plant phenotyping has grown rapidly. These algorithms have enabled image data to be used to measure plant traits at scale and to predict the effects of genetic and environmental conditions on plant phenotype. However, the lack of interpretability of machine learning models has limited their usefulness for gaining insight into the underlying biological processes that drive plant phenotypes. Explainable AI (XAI) has emerged to help answer the 'why' behind machine learning model predictions, allowing researchers to identify the features that most influence prediction, classification or segmentation results. Understanding the mechanisms behind model predictions is also central to sanity-checking models, increasing model reliability and identifying dataset biases that may limit a model's applicability across conditions. This review introduces the concept of XAI and presents current algorithms, emphasizing their suitability for different data types and machine learning models. The use of XAI to extract trait information is highlighted, showcasing how recent studies have employed model explanations to identify the features that shape plant phenotypes. Overall, this review presents a framework for using XAI to gain insight into the intricate biological processes driving plant phenotypes, underscoring the importance of transparency and interpretability in machine learning.