Abstract
Landing zone detection is crucial for autonomous aerial vehicles to locate suitable landing areas. Current landing zone localization predominantly relies on RGB cameras, sensors that are already integrated into the majority of autonomous vehicles. However, RGB cameras lack depth perception, which can lead to the suggestion of non-viable landing zones, since they assess an area using color information alone. They do not consider whether the surface is irregular or accessible to a user (i.e., easily reachable by a person on foot). An alternative approach is to utilize 3D information extracted from depth images, but this introduces the challenge of correctly interpreting depth ambiguity. Motivated by the latter, we propose a methodology for 3D landing zone segmentation using a DNN-Superpixel approach. This methodology consists of three steps. First, depth information is clustered using superpixels to segment, locate, and delimit zones within the scene. Second, features of adjacent objects are extracted through a bounding box around the analyzed area. Finally, a Deep Neural Network (DNN) classifies each 3D area as landable or non-landable, considering its accessibility. The experimental results are promising. For example, landing zone detection achieved an average recall of 0.953, meaning that this approach identified 95.3% of the ground-truth landing-zone pixels. In addition, it achieved an average precision of 0.949, meaning that 94.9% of the pixels segmented as landing zones are correct.
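To make the first step concrete, the sketch below clusters a depth image into superpixels and computes simple per-zone depth statistics that a downstream classifier could consume. This is a minimal illustration, not the authors' implementation: the choice of SLIC from scikit-image, the input file `depth_image.npy`, and all parameter values are assumptions.

```python
# Minimal sketch (not the paper's code): superpixel clustering of a depth
# map, assuming SLIC from scikit-image. All file names and parameters
# are hypothetical.
import numpy as np
from skimage.segmentation import slic

# H x W single-channel depth map (hypothetical input file).
depth = np.load("depth_image.npy").astype(np.float64)

# channel_axis=None tells SLIC the image is single-channel; compactness
# trades off depth similarity against spatial proximity.
labels = slic(depth, n_segments=200, compactness=0.1, channel_axis=None)

# Per-superpixel depth statistics: a flat, accessible surface should
# show low depth variance within its superpixel.
features = []
for sp in np.unique(labels):
    d = depth[labels == sp]
    features.append((sp, d.mean(), d.var()))
```

In this sketch, each `(label, mean depth, depth variance)` tuple stands in for the richer features the methodology extracts before the DNN labels a zone as landable or non-landable.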