Abstract
Image generation with generative adversarial networks is widely used in front-end design, but existing models suffer from blurry outputs and limited style expressiveness. This study proposes an improved StyleGAN (Style Generative Adversarial Network) model for style transfer and high-quality generation of front-end interface elements. A module added after the generator computes the mutual information between the latent variables and the outputs of intermediate network layers; this term is integrated into the discriminator loss function for joint optimization, strengthening control over fine details. On the Rico dataset, the improved model achieves an overall FID (Fréchet Inception Distance) of 12.5 and an MS-SSIM (Multi-Scale Structural Similarity) of 0.92. For the Navigation Menu category, the generated images reach an IS (Inception Score) of 7.41, about 14.2% higher than the baseline StyleGAN. The method effectively mitigates detail distortion in generated front-end pages, and the mutual information constraint establishes a precise mapping between style features and design elements, providing a highly customizable technical framework for intelligent front-end design.
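The mutual-information-augmented discriminator loss described above can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the paper's implementation: it uses an InfoGAN-style variational lower bound I(c; x) ≥ E[log Q(c | x)] + H(c), where the auxiliary head's prediction `mu_pred`, the weight `lam`, and the entropy value are all hypothetical placeholders.

```python
import numpy as np

def gaussian_log_likelihood(c, mu, sigma=1.0):
    """Log-density of latent code c under N(mu, sigma^2), summed over code dims."""
    return -0.5 * np.sum(((c - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma**2), axis=1)

def mi_lower_bound(c, mu_pred, entropy_c):
    """Variational lower bound on I(c; G(z, c)):
    E[log Q(c | x)] + H(c), with Q a Gaussian centered at mu_pred."""
    return np.mean(gaussian_log_likelihood(c, mu_pred)) + entropy_c

def discriminator_loss(d_real, d_fake, c, mu_pred, entropy_c, lam=0.1):
    """Standard adversarial loss minus a weighted MI bound, so minimizing
    the joint loss also pushes the MI estimate upward."""
    adv = -np.mean(np.log(d_real + 1e-8)) - np.mean(np.log(1 - d_fake + 1e-8))
    return adv - lam * mi_lower_bound(c, mu_pred, entropy_c)

rng = np.random.default_rng(0)
c = rng.normal(size=(8, 4))                   # sampled latent style codes
mu_pred = c + 0.1 * rng.normal(size=(8, 4))   # noisy auxiliary-head reconstruction
d_real = np.full(8, 0.9)                      # discriminator scores on real UIs
d_fake = np.full(8, 0.2)                      # discriminator scores on generated UIs
# 5.68 ~ entropy of a 4-dim standard normal (placeholder value)
loss = discriminator_loss(d_real, d_fake, c, mu_pred, entropy_c=5.68)
```

A better reconstruction of the latent code raises the MI bound and therefore lowers the joint loss, which is the mechanism the abstract credits for tighter detail control.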