Abstract
Arbitrary style transfer is attracting increasing attention due to its wide application potential. Existing approaches either directly fuse deep style features with deep content features or adaptively normalize content features to achieve global statistical matching. Although these approaches show some success, they frequently produce artifacts and messy textures. This primarily stems from insufficient exploration of the semantic distribution of style image features and ineffective capture of long-range dependencies. This paper presents a Dual-Domain Style Transfer Network that incorporates Adaptive Normalization with Style Semantics Awareness and Global Style Texture Enhancement. The former extracts richer style semantic information to reduce artifacts through a self-attention mechanism and adaptive normalization, while the latter enhances global stylistic information in the frequency domain to suppress cluttered textures. On the MSCOCO and Wikiart datasets, compared with other state-of-the-art methods, our method achieved the best scores on the Learned Perceptual Image Patch Similarity (0.616), Structural Similarity Index (0.467), and content loss (2.31) metrics, and the second-best style loss (3.08).
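The "adaptively normalize content features to achieve global statistical matching" baseline the abstract refers to is commonly realized as adaptive instance normalization (AdaIN), which replaces the per-channel mean and standard deviation of content features with those of style features. The sketch below is a minimal NumPy illustration of that statistical-matching idea, not the paper's actual network; the function name and shapes are assumptions.

```python
import numpy as np

def adaptive_instance_norm(content, style, eps=1e-5):
    """Illustrative AdaIN-style statistical matching (not the paper's model).

    content, style: feature maps of shape (C, H, W). The content features
    are normalized per channel, then rescaled and shifted so their channel
    statistics match those of the style features.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Whiten content statistics, then impose style statistics.
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean
```

Because this matching is purely global and channel-wise, it ignores where style semantics occur spatially, which is the limitation the proposed style-semantics-aware normalization is designed to address.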