Automated image inpainting for historical artifact restoration using hybridisation of transfer learning with deep generative models


Abstract

Historical artefacts such as pottery, sculptures, paintings, and manuscripts often suffer damage, erosion, or loss of detail due to weathering, ageing, environmental factors, or improper handling. Traditional restoration is labour-intensive, slow, and prone to human error, whereas digital restoration enables reversible, non-invasive, and precise reconstruction of cultural heritage objects. A significant focus in computer vision (CV) has therefore shifted to the inpainting of historical artefacts, which repairs and restores damaged or missing sections to preserve the artwork's original integrity. Conventional image inpainting methods, whether diffusion-based or patch-based, have limitations: although modern digital methods have improved the effectiveness of inpainting, they often struggle to preserve the original work's aesthetic and unique qualities, making it difficult to restore its authentic look and feel completely and accurately. With the constant advances in deep learning (DL), image inpainting techniques that leverage it have achieved remarkable results. Unlike conventional techniques, Generative Adversarial Network (GAN)-driven approaches offer greater efficiency and generality. With this motivation, this study develops a new hybrid deep-learning-enabled image inpainting model for smart historical artefact restoration, named the HDLIP-SHAR technique. The HDLIP-SHAR technique trains a DL model to identify and reconstruct missing or damaged portions of artefact images. Initially, adaptive median filtering (AMF) and contrast enhancement are applied to improve image quality. A hybrid SqueezeNet CNN model is then utilised to extract deep semantic features from historical artefact images and identify cracks, missing parts, and faded textures. The U-Net model is applied for image segmentation and localisation of damaged regions. Finally, a transformer-based GAN model restores and inpaints the missing areas of the image. In comparison analysis on the MuralDH dataset, the HDLIP-SHAR model demonstrated superior performance, with an average PSNR of 64.59 dB, SSIM of 0.945, and LPIPS of 0.0401, outperforming other methods.
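The abstract specifies the preprocessing stage (AMF followed by contrast enhancement) but not its implementation. The sketch below is a minimal, illustrative Python version for a single greyscale channel, assuming NumPy and scikit-image are available; the function names `adaptive_median_filter` and `preprocess` are hypothetical, and CLAHE is only an assumed choice for the unspecified contrast-enhancement step, not necessarily the paper's method.

```python
import numpy as np
from skimage import exposure, img_as_float

def adaptive_median_filter(img, max_window=7):
    """Adaptive median filter (illustrative, not the paper's code): the window
    grows per pixel until the local median is not an impulse (stage A), and the
    pixel is replaced only if it is itself an impulse (stage B)."""
    img = img_as_float(img)
    pad = max_window // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for win in range(3, max_window + 1, 2):
                half = win // 2
                patch = padded[y + pad - half:y + pad + half + 1,
                               x + pad - half:x + pad + half + 1]
                z_min, z_med, z_max = patch.min(), np.median(patch), patch.max()
                if z_min < z_med < z_max:                # stage A: median is reliable
                    if not (z_min < img[y, x] < z_max):  # stage B: pixel is an impulse
                        out[y, x] = z_med
                    break
                # median itself looks like an impulse: enlarge the window and retry
            else:
                out[y, x] = z_med  # window limit reached: fall back to the median
    return out

def preprocess(img):
    """Denoise with AMF, then enhance contrast with CLAHE (assumed choice)."""
    return exposure.equalize_adapthist(adaptive_median_filter(img))
```

Compared with a fixed-window median filter, the adaptive stage A/B scheme suppresses impulse noise while leaving non-impulse pixels untouched, which helps preserve fine texture such as crack edges that the later segmentation and inpainting stages rely on.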
