Abstract
The Habesha kemis, a traditional Ethiopian garment worn by women of the Habesha community, has evolved in design over time. Originally, it was an ankle-length dress with a fitted bodice and long sleeves. In the Amhara region, the communities of Gojjam, Gondar, Shewa, Agew, and Wollo each maintain distinct cultural customs. Although these Habesha garments may appear similar outwardly, their embroidered motifs exhibit unique patterns, shapes, and colors that reflect the rich cultural legacy of Gojjam, Gondar, Shewa, Agew, and Wollo. This study aimed to identify the most suitable model for recognizing and classifying the quality of Habesha kemis embroidery designs. Digital image processing methods and CNN models based on the VGG16, VGG19, and ResNet50v2 architectures were used. After the datasets were gathered, image preprocessing and segmentation were applied to improve model performance. For segmentation, we used Canny edge detection, local binary patterns, and dilation with contour detection to segment and automatically crop each Habesha kemis. After segmentation, the individual Habesha kemis images and foreign matter were placed in folders according to their corresponding categories, yielding 320 images per class before augmentation. The performance of VGG16, VGG19, and ResNet50v2 was evaluated for the Agew, Gojjam, Gondar, Shewa, and Wollo classes. Images were resized to 224 × 224 for the CNN model with the VGG16 architecture and a softmax classifier; input sizes of 64 × 64 and 128 × 128 were also tested. Augmentation techniques were applied to increase the dataset from 1,600 to 3,270 images. Finally, the VGG16 model achieved an accuracy of 95.72% on test data and 99.62% on training data, outperforming the VGG19 and ResNet50v2 models.
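The segmentation step described in the abstract (Canny edge detection, dilation, and contour detection with automatic cropping) could be sketched as follows with OpenCV. This is a minimal illustration, not the paper's exact pipeline; the function name, thresholds, kernel size, and the minimum-area filter are assumptions.

```python
import cv2


def crop_regions(img, min_area=1000):
    """Sketch of the described segmentation: Canny edges, dilation to
    close gaps, external contour detection, then bounding-box crops.

    `min_area` is an assumed noise filter, not from the paper.
    """
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                     # edge map
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    dilated = cv2.dilate(edges, kernel, iterations=2)    # join broken edges
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:                            # skip tiny regions
            crops.append(img[y:y + h, x:x + w])
    return crops
```

Each returned crop would then be saved into the folder of its corresponding class (Agew, Gojjam, Gondar, Shewa, or Wollo), as the abstract describes.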
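A VGG16-based classifier with a softmax output over the five classes, taking 224 × 224 inputs as stated in the abstract, could look like the sketch below in Keras. The frozen-base transfer-learning setup, the dense-layer sizes, and the optimizer are illustrative assumptions, not details taken from the paper.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models


def build_classifier(input_shape=(224, 224, 3), num_classes=5,
                     weights="imagenet"):
    """Sketch: VGG16 convolutional base with a softmax head for the
    five classes (Agew, Gojjam, Gondar, Shewa, Wollo)."""
    base = VGG16(weights=weights, include_top=False,
                 input_shape=input_shape)
    base.trainable = False                     # freeze pretrained features
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),  # assumed head size
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same head could be swapped onto VGG19 or ResNet50V2 bases for the comparison the abstract reports.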