Copresence to Present a Novel Approach to Hair Strand Reconstruction at SIGGRAPH 2025

This year at SIGGRAPH, the Copresence team will present a novel approach to one of the most persistent challenges in digital human creation: photoreal hair reconstruction from just a mobile phone scan.

Talk Details

Title: A Mobile Scanning Solution to Reconstruct Strand-Based Hairstyles
Speaker: Titus Leistner (Copresence AG)
Contributors: Philip-William Grassal, Luca Hormann, Nadia Hamlaoui, Titus Leistner, Lynton Ardizzone
Session: Real-Time and Mobile Techniques
Date & Time: Sunday, August 10, 2025 · 9:00 AM – 9:22 AM PDT
Location: West Building, Rooms 211–214
🔗 SIGGRAPH Program Page

Technical Overview

This presentation introduces a mobile-first scanning pipeline that reconstructs true strand-based 3D hairstyles, capturing hair flow, density, and per-strand orientation, from only a few images taken with a smartphone.

Unlike previous approaches that rely on multi-camera arrays or manual grooming, the Copresence system requires just a phone, a few head turns in good lighting, and a cloud processing backend.

The pipeline includes:

  • A CNN trained on a large synthetic dataset to regress strand positions and densities
  • Semantic segmentation and handcrafted filters to extract hair boundaries and orientations (see the orientation-filter sketch after this list)
  • A differentiable rendering step that compares predicted features with those derived from input images
  • An optimization loop that refines strand positions by minimizing this rendering-based loss
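The talk materials don't spell out the filter design, but a common handcrafted choice in hair-capture work is a bank of oriented Gabor filters, taking the angle of the strongest response at each pixel as the local strand direction. Below is a minimal NumPy/OpenCV sketch of that idea; the kernel parameters and the orientation_map function are illustrative assumptions, not Copresence's implementation.

```python
import cv2
import numpy as np

def orientation_map(gray, hair_mask, n_angles=32):
    """Per-pixel hair orientation from a grayscale image (illustrative sketch).

    Convolves with a bank of oriented Gabor filters and keeps, per pixel,
    the angle of the strongest absolute response. Returns angles in [0, pi)
    and a confidence map; pixels outside the hair mask are set to NaN.
    """
    gray = gray.astype(np.float32)
    responses = np.zeros((n_angles,) + gray.shape, dtype=np.float32)
    for i in range(n_angles):
        theta = np.pi * i / n_angles
        # Kernel size and parameters are assumed values; tune them to the
        # image resolution and the apparent strand width.
        kernel = cv2.getGaborKernel((17, 17), sigma=2.0, theta=theta,
                                    lambd=4.0, gamma=0.5, psi=0.0)
        responses[i] = np.abs(cv2.filter2D(gray, cv2.CV_32F, kernel))
    angles = np.pi * responses.argmax(axis=0) / n_angles
    confidence = responses.max(axis=0)
    # Depending on the Gabor convention, the strand direction may be theta
    # or theta + pi/2; verify on a test image before relying on it.
    angles[hair_mask == 0] = np.nan
    return angles, confidence
```

Maps like these (orientation plus confidence) are exactly the kind of image-space features a differentiable renderer can be matched against in the later optimization step.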

The reconstructed strands are fully compatible with 3D editing pipelines: they can be imported into Blender or Unreal Engine, or used directly with MetaHuman to increase likeness fidelity beyond what is currently possible.
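To make that interoperability concrete, here is a minimal Blender import sketch that turns strand polylines into poly-curve objects. The input format, a list of (N, 3) point arrays, is a hypothetical stand-in; the talk does not specify the pipeline's actual export format.

```python
# Run inside Blender (the bpy module is only available there).
import bpy

def import_strands(strands, name="hair_groom"):
    """Create one 3D poly curve per strand from (N, 3) point arrays."""
    curve = bpy.data.curves.new(name, type='CURVE')
    curve.dimensions = '3D'
    for pts in strands:
        spline = curve.splines.new('POLY')   # one poly spline per strand
        spline.points.add(len(pts) - 1)      # a new spline starts with 1 point
        for point, (x, y, z) in zip(spline.points, pts):
            point.co = (x, y, z, 1.0)        # 4th component is the point weight
    obj = bpy.data.objects.new(name, curve)
    bpy.context.collection.objects.link(obj)
    return obj
```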

Method Demo Video

Method Highlights

  • Monocular Input: Only phone images needed, with standard scanning motions
  • Cloud-Based Compute: Offloads neural inference and optimization, keeping the capture app simple
  • Differentiable Rendering Loop: Optimizes strand-level geometry with an image-space loss (see the sketch after this list)
  • Interoperable Output: Hair assets compatible with modern DCC and game engines
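The abstract describes the refinement loop only at a high level. In the general analysis-by-synthesis pattern it points to, strand control points are treated as optimizable parameters and updated by gradient descent on an image-space loss. Here is a minimal PyTorch sketch under that assumption; render_features stands in for a differentiable feature renderer and is not Copresence's published API.

```python
import torch

def refine_strands(strands, target_feats, render_features, steps=500, lr=1e-2):
    """Generic rendering-based refinement loop (a sketch, not the actual system).

    strands:         (S, P, 3) tensor of strand control points (initial guess)
    target_feats:    features extracted from the input photos (e.g. orientation
                     and segmentation maps), matching the renderer's output shape
    render_features: assumed differentiable function strands -> image-space features
    """
    strands = strands.clone().requires_grad_(True)
    opt = torch.optim.Adam([strands], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = render_features(strands)                        # differentiable render
        loss = torch.nn.functional.l1_loss(pred, target_feats)
        loss.backward()                                        # gradients reach the strand points
        opt.step()
    return strands.detach()
```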

All reconstructed strands are properly rooted on the scalp, enabling plausible physics simulation and further custom grooming. The system also reconstructs the full head mesh and textures to support pipeline integration.
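One way to picture the rooting step: project each strand's root point onto the reconstructed head mesh and shift the strand accordingly. The sketch below uses the trimesh library as an assumed tool choice; the pipeline's actual scalp-attachment logic is not public.

```python
import numpy as np
import trimesh

def snap_roots_to_scalp(strands, head_mesh):
    """Rigidly offset each strand so its first point lies on the scalp surface.

    strands:   list of (N, 3) arrays; the first point is assumed to be the root
    head_mesh: trimesh.Trimesh of the reconstructed head
    """
    roots = np.array([s[0] for s in strands])
    # Closest point on the mesh surface for every root.
    surface_pts, _, _ = trimesh.proximity.closest_point(head_mesh, roots)
    return [s + (target - root)                # translate whole strand to the scalp
            for s, root, target in zip(strands, roots, surface_pts)]
```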

Broader Context

While mobile face and body scanning has become increasingly common, strand-level hair modeling remains an unsolved challenge in democratized avatar creation. The presented method fills this gap by offering a scalable and reproducible approach to hair capture without specialized hardware.

This research addresses use cases in:

  • Avatar personalization for XR & Telepresence
  • Digital doubles in virtual production
  • Character pipelines for real-time engines
  • Likeness-preserving MetaHuman conversion

About the Team

The work is presented by Titus Leistner and developed by the applied research team at Copresence, a startup focused on realistic 3D avatar creation from mobile devices.

If you’re attending SIGGRAPH and working in digital humans, XR avatars, or character tech, we invite you to join the session and connect with us afterward.