En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data

Yifang Men1, Biwen Lei1, Yuan Yao1, Miaomiao Cui1, Zhouhui Lian2, Xuansong Xie1

1Institute for Intelligent Computing, Alibaba Group   2Peking University

Without any pre-existing 3D or 2D assets, our 3D generative model is trained on millions of synthetic 2D images and is capable of producing visually realistic 3D humans with diverse content.
The generated human avatars can be seamlessly rigged and animated, and are compatible with modern graphics workflows.

Abstract


We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars. Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and imprecise pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing visually realistic, geometrically accurate, and content-wise diverse 3D humans without relying on pre-existing 3D or 2D assets. To address this challenge, we introduce a meticulously crafted workflow that implements accurate physical modeling to learn the enhanced 3D generative model from synthetic 2D data. During inference, we integrate optimization modules to bridge the gap between realistic appearances and coarse 3D shapes. Specifically, En3D comprises three modules: a 3D generator that accurately models generalizable 3D humans with realistic appearance from synthesized balanced, diverse, and structured human images; a geometry sculptor that enhances shape quality using multi-view normal constraints for intricate human anatomy; and a texturing module that disentangles explicit texture maps with fidelity and editability, leveraging semantic UV partitioning and a differentiable rasterizer. Experimental results show that our approach significantly outperforms prior works in terms of image quality, geometric accuracy, and content diversity. We also showcase the applicability of our generated avatars for animation and editing, as well as the scalability of our approach for content-style free adaptation.

Method


An overview of the proposed scheme, which consists of three modules: 3D generative modeling (3DGM), geometric sculpting (GS), and explicit texturing (ET). 3DGM uses synthesized diverse, balanced, and structured human images with accurate camera parameters $\varphi$ to learn generalizable 3D humans with a triplane-based architecture. GS is integrated as an optimization module that utilizes multi-view normal constraints to refine and carve geometric details. ET uses semantic UV partitioning and a differentiable rasterizer to disentangle explicit UV texture maps. Both multi-view renderings and realistic 3D models can be acquired as final results.
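To make these three modules concrete, the PyTorch sketches below illustrate each step. They are our own minimal illustrations under stated assumptions, not the authors' released implementation; all function and parameter names (sample_triplane, render_normal_map, render_rgb, etc.) are hypothetical placeholders.

First, a triplane-based generator typically evaluates a 3D point by bilinearly sampling features from three axis-aligned feature planes and aggregating them (here by summation, one common choice in EG3D-style architectures) before decoding into density and color:

import torch
import torch.nn.functional as F

def sample_triplane(planes: torch.Tensor, pts: torch.Tensor) -> torch.Tensor:
    """Query features for 3D points from three axis-aligned feature planes.

    planes: (3, C, H, W) learned feature maps for the XY, XZ, and YZ planes.
    pts:    (N, 3) query points normalized to [-1, 1]^3.
    Returns (N, C) aggregated features, to be decoded into density/color.
    """
    xy, xz, yz = pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]
    feats = []
    for plane, coords in zip(planes, (xy, xz, yz)):
        grid = coords.view(1, -1, 1, 2)                        # (1, N, 1, 2)
        f = F.grid_sample(plane[None], grid, align_corners=False)
        feats.append(f[0, :, :, 0].t())                        # (1, C, N, 1) -> (N, C)
    return sum(feats)                                          # sum-aggregate the planes

Second, the multi-view normal constraints in GS can be understood as an optimization loop that displaces the coarse surface until its rendered normal maps agree with per-view normal targets (e.g., predicted from the generator's multi-view renderings). The per-vertex displacement parameterization below is an illustrative simplification, not necessarily the paper's exact geometry representation:

def sculpt_geometry(verts, faces, cameras, target_normals, render_normal_map,
                    steps=500, lr=1e-3, reg=1e2):
    """Refine coarse mesh vertices so that rendered normal maps match
    multi-view normal targets under an L1 loss."""
    offsets = torch.zeros_like(verts, requires_grad=True)      # per-vertex displacement
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        v = verts + offsets
        loss = sum((render_normal_map(v, faces, cam) - n_gt).abs().mean()
                   for cam, n_gt in zip(cameras, target_normals))
        loss = loss + reg * offsets.pow(2).mean()              # keep the refinement small
        loss.backward()
        opt.step()
    return (verts + offsets).detach()

Finally, explicit texturing amounts to optimizing the pixels of a UV texture map through a differentiable rasterizer so that the textured mesh reproduces the multi-view renderings:

def bake_texture(verts, faces, uvs, cameras, target_images, render_rgb,
                 tex_res=1024, steps=300, lr=1e-2):
    """Recover an explicit UV texture map by differentiable rasterization:
    optimize texture pixels so that renders match multi-view target images."""
    texture = torch.full((1, 3, tex_res, tex_res), 0.5, requires_grad=True)
    opt = torch.optim.Adam([texture], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum((render_rgb(verts, faces, uvs, texture, cam) - img).abs().mean()
                   for cam, img in zip(cameras, target_images))
        loss.backward()
        opt.step()
    return texture.detach()

In all three sketches the differentiable renderers (render_normal_map, render_rgb) would be supplied by a library such as nvdiffrast or PyTorch3D; only the optimization structure is shown here.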

Applications


Text-Guided Synthesis




Animated Avatars & AR Applications





Image-Guided Synthesis

Animate-Anyone3D: used to animate anyone together with their underlying 3D structure.




Local Editing

The guidance image can be easily edited to obtain a modified 3D model; slide to view the edited results.


Citation


@article{men2024en3d,
  title={En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data},
  author={Men, Yifang and Lei, Biwen and Yao, Yuan and Cui, Miaomiao and Lian, Zhouhui and Xie, Xuansong},
  journal={arXiv preprint arXiv:2401.01173},
  year={2024}
}