Abstract
Behavioral monitoring of laboratory animals is essential for evaluating drug safety, yet existing assessments are typically limited to in-room observations by technicians. Here, we introduce a versatile AI model pipeline, composed of interconnected artificial neural networks that leverage end-to-end learning based solely on video-derived appearance features of canines. This non-invasive approach enables detailed mapping of activity, behavior and clinical signs at the individual-animal level under diverse conditions. To validate its real-world application, we conducted extensive field testing on hours of footage. Trained on a large, annotated dataset, our model accurately multi-tracks up to three group-housed canines using color-coded reflective harnesses, achieving high re-identification accuracies (≥92.5%) and IDF1 scores up to 99.9%. AI-derived locomotor activity showed a strong correlation with accelerometer-based measurements (r = 0.965). Our AI model detects 11 behavior and clinical observation classes, with a mean class accuracy of 48% and individual class accuracies up to 93%. As such, a detailed, time-specific quantitative output is available for activity, mobility, pose, eating, drinking and specific clinical signs (ataxia, anxiety, circling, convulsions, head shaking, involuntary muscle movements, limping, limb stiffness, vomiting). Our innovative approach brings holistic behavioral and health monitoring in canines closer to routine practice and contributes towards the 3Rs principles.