Abstract
In an increasingly volatile world, the security of strategic zones such as state borders, military garrisons, and critical infrastructure has regained paramount importance. Traditional visual surveillance systems are, however, limited by line-of-sight constraints, terrain obstructions, and high power consumption, rendering them ineffective for tactical security elements such as mobile police checkpoints, remote border posts, and counter-insurgency and counter-terrorism operations. While existing work in audio classification has explored tasks such as environmental sound detection and general activity recognition, the challenge of estimating the number of individuals from footstep audio cues remains relatively underexplored. Our work addresses this critical gap by introducing a novel audio-based approach that not only estimates the number of individuals but also classifies the surrounding environmental conditions from footstep sounds, utilizing the EWFootstep 1.0 dataset, which contains footstep acoustic signatures of one person and of multiple persons across varied environmental conditions (forest, road, indoor). We propose a hierarchical multi-task learning (H-MTL) model that leverages both fine-grained and coarse-grained acoustic features, with environment-type classification as the main task and estimation of the number of persons as an auxiliary task. The proposed model demonstrates remarkable performance, achieving an accuracy of [Formula: see text] on the main task and [Formula: see text] on the auxiliary task, consistently surpassing the human baseline by [Formula: see text] and outperforming standard multi-class classification, conventional multi-task learning, and existing state-of-the-art H-MTL models on the EWFootstep 1.0 dataset. Beyond direct security applications, this system holds significant potential for broader use cases, such as audio forensics in crime-scene analysis.