Abstract
Recognition and analysis of visual design elements play a critical role in applications ranging from creative design to cultural artifact preservation. However, existing methods often struggle to accurately identify and interpret complex, multimodal design elements in real-world scenarios. To address this, we propose an integrated model that combines a Swin Transformer for precise image segmentation, multi-scale feature fusion for robust type recognition, and a multimodal large language model (LLM) for fine-grained image understanding. Experiments on the ETHZ Shape Classes, ImageNet, and COCO datasets demonstrate that the proposed model outperforms state-of-the-art methods, achieving 88.6% segmentation accuracy and a 92.3% F1 score on multimodal tasks. These findings highlight the model's potential as an effective tool for advanced design element recognition and analysis. The source code is available at https://github.com/LIU-WENBO/Multi-Feature-Design-Elements-Recognition.