SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling

Zhongjin Luo, Dong Du, Heming Zhu, Yizhou Yu, Hongbo Fu, Xiaoguang Han#

#Corresponding email: hanxiaoguang@cuhk.edu.cn

Paper | Project | Code

If you are interested in sketch-based 3D modeling, you can also refer to SimpModeling and Sketch2RaBit.


Abstract

Modeling 3D avatars benefits various application scenarios such as AR/VR, gaming, and filming. Character faces, as a vital component of avatars, contribute significantly to their diversity and vividness. However, building 3D character face models usually requires a heavy workload with commercial tools, even for experienced artists. Existing sketch-based tools fail to support amateurs in modeling diverse facial shapes and rich geometric details. In this paper, we present SketchMetaFace, a sketching system that enables amateur users to model high-fidelity 3D faces in minutes. We carefully design both the user interface and the underlying algorithm. First, curvature-aware strokes are adopted to give users better control when carving facial details. Second, to address the key problem of mapping a 2D sketch map to a 3D model, we develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM), which fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency. In addition, to further improve usability, we present a coarse-to-fine 2D sketching interface and a data-driven stroke suggestion tool. User studies demonstrate the superiority of our system over existing modeling tools in terms of ease of use and the visual quality of results. Experimental analyses also show that IDGMM reaches a better trade-off between accuracy and efficiency.
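The abstract describes IDGMM only at a high level. As a rough intuition for how an implicit field can guide mesh updates, the self-contained Python toy below pulls mesh vertices toward the zero level set of a signed distance field. Everything in it is an illustrative assumption: the analytic sphere SDF stands in for the learned, sketch-conditioned field, and all function names are hypothetical rather than part of the released IDGMM code.

```python
# Minimal, illustrative sketch of implicit-guided mesh vertex updates.
# NOT the SketchMetaFace / IDGMM implementation; the toy SDF below stands in
# for a learned, sketch-conditioned implicit field.
import numpy as np

def toy_sdf(points: np.ndarray) -> np.ndarray:
    """Stand-in implicit field: signed distance to a unit sphere."""
    return np.linalg.norm(points, axis=-1) - 1.0

def sdf_gradient(points: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Finite-difference gradient (surface normal direction) of the field."""
    grads = np.zeros_like(points)
    for i in range(3):
        offset = np.zeros(3)
        offset[i] = eps
        grads[:, i] = (toy_sdf(points + offset) - toy_sdf(points - offset)) / (2 * eps)
    return grads / (np.linalg.norm(grads, axis=-1, keepdims=True) + 1e-8)

def implicit_guided_step(vertices: np.ndarray, step: float = 0.5) -> np.ndarray:
    """Move each vertex along the field gradient toward the zero level set."""
    d = toy_sdf(vertices)[:, None]   # signed distance per vertex
    n = sdf_gradient(vertices)       # unit direction toward the surface
    return vertices - step * d * n

# Usage: vertices of a coarse "mesh" converge toward the implicit surface.
rng = np.random.default_rng(0)
verts = rng.normal(size=(100, 3)) * 1.5
for _ in range(20):
    verts = implicit_guided_step(verts)
print("max |sdf| after refinement:", np.abs(toy_sdf(verts)).max())
```

In IDGMM itself, the guidance comes from learned networks conditioned on the sketch and is additionally fused with predicted depth; this toy only conveys the implicit-guidance step.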

Fig. 1: We present SketchMetaFace, a novel sketching system designed for amateur users to create high-fidelity 3D character faces. With curvature-aware strokes (valley strokes in green and ridge strokes in red), novice users can easily customize detailed 3D heads. Note that our system outputs geometry only, without texture; the texturing shown here was done with commercial modeling tools.
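Concretely, one plausible way to feed curvature-aware strokes to a network is to rasterize ridge and valley strokes into separate channels of a sketch map. The encoding below is an assumption for illustration (the paper's exact input format may differ), and all names in it are hypothetical:

```python
# One plausible rasterization of curvature-aware strokes into a two-channel
# sketch map (channel 0: ridge strokes, channel 1: valley strokes).
import numpy as np

def rasterize_strokes(strokes, size=256):
    """strokes: list of (polyline, kind), where polyline is an (N, 2) array of
    points in [0, 1]^2 and kind is 'ridge' or 'valley'.
    Returns a (2, size, size) float32 sketch map."""
    sketch = np.zeros((2, size, size), dtype=np.float32)
    for polyline, kind in strokes:
        channel = 0 if kind == "ridge" else 1
        # Densely sample each segment so the rasterized stroke is connected.
        for p, q in zip(polyline[:-1], polyline[1:]):
            for t in np.linspace(0.0, 1.0, num=size):
                x, y = (1 - t) * p + t * q
                r, c = int(y * (size - 1)), int(x * (size - 1))
                sketch[channel, r, c] = 1.0
    return sketch

# Usage: a ridge stroke along the nose line, a valley stroke beneath it.
nose = (np.array([[0.5, 0.3], [0.5, 0.6]]), "ridge")
crease = (np.array([[0.4, 0.65], [0.6, 0.65]]), "valley")
sketch_map = rasterize_strokes([nose, crease])
print(sketch_map.shape, sketch_map.sum())
```

Keeping the two stroke types in separate channels lets a network distinguish convex ridges from concave valleys directly, rather than inferring the sign of the curvature from a single binary drawing.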

Demo


Usability Study

In this study, we invited 16 amateur users to freely create at least one model each, without restrictions on result diversity, result quality, or time. Fig. 2 shows a gallery of models created by these amateur users, reflecting the expressiveness of our system. As the figure shows, our system enables amateurs to create character faces with diverse shapes and rich geometric details.

Fig. 2: A gallery of our results. All models were created by amateur users trained to use our system with a tutorial. Thanks to the easy-to-use two-stage modeling design and the stroke suggestion component, users completed each model in 5-9 minutes. The three results in the first row were created from the same coarse mesh but with different surface details.

Comparison Study

We also conducted a comparison study across different modeling systems to demonstrate the superiority of our system. After thoroughly reviewing existing sketch-based character modeling systems, we chose DeepSketch2Face, SimpModeling, and ZBrush for comparison. 16 amateur users were asked to create 3D models from a given reference image with each of the four systems (i.e., DeepSketch2Face, SimpModeling, SketchMetaFace, and ZBrush). Fig. 3 shows the reference images, the models created with the four systems, and the corresponding modeling times. Compared to DeepSketch2Face and SimpModeling, our system enabled users to create more appealing shapes and more vivid surface details; the geometric shapes and surface details created with our system are closer to the reference models. Compared to ZBrush, our system required less time for users to create visually reasonable 3D models.

Fig. 3: Comparison of our system against state-of-the-art systems. The results in each row were created by the same user given the reference in (a). For each system, we show the sketch, the resulting model, the drawing time, and the corresponding participant.

Citation

@article{luo2023SketchMetaFace,
  author = {Zhongjin Luo and Dong Du and Heming Zhu and Yizhou Yu and Hongbo Fu and Xiaoguang Han},
  title = {SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling},
  journal = {arXiv},
  year = {2023},
}

The website template is borrowed from DreamFusion.