Abstract
Panoramic images, with their wide field of view and rich information content, have become essential visual materials in digital art creation and virtual reality. However, existing panoramic image restoration and quality enhancement methods lack high-level semantic understanding and global feature control, which often leads to structural disorder in complex scenes; they also struggle to balance semantic comprehension, real-time performance, and restoration quality. To address these issues, this paper proposes a panoramic image restoration and visual quality enhancement model for digital art creation. The model combines a Multi-Scale Residual Network, a Coordinate Space Attention mechanism, and super-resolution reconstruction to construct a visual quality enhancement algorithm that accurately captures both local details and global structural features. Building on this algorithm, an optimized Generative Adversarial Network and a Vision Transformer are integrated to model the spatial correlation and semantic logic between damaged and undamaged regions, achieving high-quality completion. Experimental results on the DIV2K and SUN360 datasets show that the model achieves Structural Similarity Index values of 0.975 and 0.971 and Peak Signal-to-Noise Ratios of 53.82 dB and 53.75 dB, respectively, with a maximum memory usage of 394 MB and a response time of 3.12 s at a data volume of 2,000. The model outperforms the comparison models on all metrics, enhances both detail clarity and global consistency, and maintains efficient processing performance. It thus provides high-quality visual materials for digital art creation and demonstrates significant advantages across the performance indicators evaluated.