Abstract
In the context of educational informatization and cultural heritage preservation, painting instruction at ethnic universities faces challenges such as the difficulty of transmitting techniques, cross-linguistic barriers, and insufficient personalization. This study proposes a painting art rendering system based on deep learning and machine translation, establishing an integrated framework of "technique transmission - style rendering - cultural interpretation - personalized guidance." The system employs an improved generative adversarial network to automatically render eight ethnic painting styles, and introduces a visual-context Transformer to map painting terminology semantically across ethnic languages. The system was validated on a multimodal dataset of 12,000 artworks and 5,000 terminology entries. The style rendering module achieved an F1 score of 92.3%, an 8.7% improvement over traditional models, while the terminology mapping module reached a semantic matching rate of 89.6%, an increase of 6.2%. Ablation experiments indicated that the collaborative operation of the two modules improved overall performance by 11.5%. In teaching experiments, students using the system improved by 18.4%, 25.4%, and 17.6% in technique mastery, cultural understanding, and creative innovation, respectively, significantly outperforming those taught with the traditional approach. The study contributes a collaborative teaching framework, innovative modules for intelligent rendering and cross-linguistic interpretation, and empirical validation of their educational value. It thereby provides a practical approach to the digital preservation of ethnic painting techniques and to facilitating cross-cultural communication.