FROM SLICES TO SPACES Design ideation on architectural models through AI-generated image sequences


Abstract

The paper presents a novel methodology for applying AI-driven style transfer to complex 3D architectural models. By converting 3D models into 2D image sequences, the process integrates sequential slicing, training, video-guided diffusion, and reconstruction to transform existing 3D models, guided by text, image, or video prompts, into new stylised forms. This enables architects to explore diverse design concepts, focusing on spatial composition, visual appearance, and tectonics through high-resolution outputs that capture both exterior and interior spatial relations. The results demonstrate the setup's potential to enhance early-stage design ideation through AI: it outperforms an existing video diffusion platform while also enabling fast exploration of alternative outcomes, capabilities that were validated in a design course. The study highlights an approach for utilising advanced 2D image-based AI models to generate intricate and meaningful 3D architectural transformations.
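The slice–stylise–reconstruct loop the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a voxel occupancy grid instead of a real architectural model, omits the diffusion step entirely, and all names (`slice_volume`, `reconstruct_volume`, the 32³ grid) are illustrative assumptions.

```python
import numpy as np

def slice_volume(volume: np.ndarray, axis: int = 2) -> list[np.ndarray]:
    """Convert a 3D occupancy grid into a sequence of 2D cross-section
    images, mirroring the sequential slicing step of the pipeline."""
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

def reconstruct_volume(slices: list[np.ndarray], axis: int = 2) -> np.ndarray:
    """Stack the (possibly stylised) 2D slices back into a 3D grid,
    mirroring the reconstruction step."""
    return np.stack(slices, axis=axis)

# Hypothetical model: a solid cube centred in a 32x32x32 grid.
vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[8:24, 8:24, 8:24] = 1

slices = slice_volume(vol)            # 32 images, each of shape (32, 32)
# ... a video-guided diffusion model would restyle each slice here ...
rebuilt = reconstruct_volume(slices)  # round-trips exactly without styling
```

In the paper's actual workflow the intermediate 2D sequence is where the generative model operates; the round-trip above only shows why a slice sequence is a lossless proxy for the 3D form.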
