Compensating Motion-Induced Errors in Smartphone-Based VR Avatar Reconstruction

Recent developments in smartphone-based avatar reconstruction have made the creation of personalized, realistic avatars significantly more accessible. However, relying on a single smartphone camera means images must be captured sequentially, which introduces new challenges: in particular, longer capture times increase susceptibility to subject motion, resulting in degraded reconstructions. We present a novel approach for smartphone-based avatar reconstruction that combines photogrammetry, silhouette constraints, and inverse rendering to produce high-fidelity, realistic avatars free of motion-induced artifacts. By using short, motion-resilient image sequences, referred to as sub-scans, we considerably reduce motion-induced artifacts. Our pipeline achieves high visual quality while offering improved robustness, and it outperforms current state-of-the-art methods in both computation time and accuracy.

  • Published in:
    Proceedings of the 2025 31st ACM Symposium on Virtual Reality Software and Technology (VRST '25)
  • Type:
    Inproceedings
  • Authors:
    Runte, Friedemann; Menzel, Timo; Schwanecke, Ulrich; Botsch, Mario
  • Year:
    2025
  • Source:
    https://doi.org/10.1145/3756884.3765995

Citation information

Runte, Friedemann; Menzel, Timo; Schwanecke, Ulrich; Botsch, Mario: Compensating Motion-Induced Errors in Smartphone-Based VR Avatar Reconstruction. In: Proceedings of the 2025 31st ACM Symposium on Virtual Reality Software and Technology (VRST '25), 2025. https://doi.org/10.1145/3756884.3765995