Abstract
The astounding performance of transformers in natural language processing (NLP) has motivated researchers to explore their application to computer vision tasks. The detection transformer (DETR) introduces transformers to object detection by reframing detection as a set prediction problem, thereby eliminating the need for proposal generation and post-processing steps. Despite its competitive performance, DETR initially suffered from slow convergence and poor detection of small objects. Numerous modifications have since been proposed to address these issues, enabling DETR to achieve state-of-the-art performance. To the best of our knowledge, this paper is the first to provide a comprehensive review of 25 recent advancements to DETR. We examine both the foundational modules of DETR and its recent enhancements, including modifications to the backbone structure, query design strategies, and refinements to attention mechanisms. Moreover, we conduct a comparative analysis of various detection transformers, evaluating their performance and network architectures. We hope this study will encourage further research into addressing the remaining challenges and exploring the application of transformers in the object detection domain.