Abstract
Textures are central to the perceived realism of 3D environments, yet their generation and upscaling often rely on methods that produce low-quality or repetitive results. These limitations are particularly evident in high-fidelity applications, such as films, animations, video games, and virtual reality, where diverse material properties must be represented consistently at high resolution. Here, we introduce a texture-enhancement framework that improves the resolution and quality of multiple material channels, including color, displacement, metalness, normal, and roughness maps. The method integrates a batch-based processing pipeline, a material decompression-recompression strategy, and a "scalable tiling" scheme (i.e., dividing the texture into patches that can be processed independently and reassembled seamlessly), enabling efficient handling of large textures while preserving spatial coherence. The model is trained on a curated dataset of relevant materials spanning surfaces such as stone, metal, wood, tile, and fabric, allowing it to reproduce fine-grained, material-specific characteristics. Quantitative and qualitative evaluations demonstrate that the proposed approach outperforms existing techniques in clarity and fidelity, offering a practical solution for enhancing texture quality in both real-time and offline rendering contexts. The dataset and code are available on GitHub.