¹ETH Zurich   ²MPI for Informatics   ³Microsoft
Motion-controllable video generation is crucial for egocentric applications in virtual reality and embodied AI. However, existing methods often struggle to achieve fine-grained, 3D-consistent hand articulation. Because they adopt 2D trajectories or implicit poses, they either collapse 3D geometry into spatially ambiguous signals or over-rely on human-centric priors. Under severe egocentric occlusions, this causes motion inconsistencies and hallucinated artifacts, and it also prevents cross-embodiment generalization to robotic hands.
To address these limitations, we propose a novel framework that generates egocentric videos from a single reference frame, leveraging sparse 3D hand joints as embodiment-agnostic control signals with clear semantic and geometric structure. We introduce an efficient control module that resolves occlusion ambiguities while fully preserving 3D information. Specifically, it extracts occlusion-aware features from the reference frame by penalizing unreliable visual signals from hidden joints, and employs a 3D-based weighting mechanism to robustly handle dynamically occluded target joints during motion propagation. In parallel, the module directly injects 3D geometric embeddings into the latent space to strictly enforce structural consistency.
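To make the occlusion-penalization idea concrete, the toy sketch below illustrates one plausible reading of it: hidden joints receive soft weights that decay with how far they sit behind the nearest visible joint along the camera axis, and per-joint features are then aggregated with those weights. This is a hypothetical illustration under our own assumptions (joint layout, decay rate `alpha`, and the weighting form are all invented here), not the paper's actual module.

```python
import numpy as np

def occlusion_aware_weights(joints_3d, visibility, alpha=4.0):
    """Soft per-joint weights in (0, 1]: visible joints keep full weight;
    hidden joints are penalized more the deeper they sit behind the
    nearest visible joint along the camera z-axis (camera looks down +z).
    Assumes at least one joint is visible."""
    z = joints_3d[:, 2]
    z_front = z[visibility > 0.5].min()          # depth of nearest visible joint
    depth_gap = np.clip(z - z_front, 0.0, None)  # how far behind the front
    penalty = np.exp(-alpha * depth_gap)         # decays with occlusion depth
    return np.where(visibility > 0.5, 1.0, penalty)

def aggregate_features(feats, weights):
    """Weighted mean of per-joint features: (J, D) -> (D,)."""
    w = weights / weights.sum()
    return (feats * w[:, None]).sum(axis=0)
```

For example, with three joints at depths 1.0, 1.3, and 1.0 where only the middle one is hidden, the two visible joints keep weight 1.0 while the hidden joint's weight falls below 1.0 in proportion to its 0.3-unit depth gap.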
To facilitate robust training and evaluation, we develop an automated annotation pipeline that yields over one million high-quality egocentric video clips paired with precise hand trajectories. Additionally, we register humanoid kinematic and camera data to construct a cross-embodiment benchmark. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art baselines, generating high-fidelity egocentric videos with realistic interactions and exhibiting exceptional cross-embodiment generalization to robotic hands.
We compare our method against state-of-the-art controllable video generation baselines on egocentric hand interaction videos. Each group shows four methods side by side: Wan-Move, Wan-Fun, Ours, and Ground Truth (GT). Our method consistently produces more realistic hand articulation with fewer artifacts and better 3D consistency under egocentric occlusions.
Our framework supports fine-grained interactive control over generated egocentric videos. Each case below shows the original generated video alongside the controlled result. Two key capabilities are highlighted:
If you find this work useful in your research, please consider citing:
@article{zhang2026controllable,
  title={Controllable Egocentric Video Generation via Occlusion-Aware Sparse 3D Hand Joints},
  author={Zhang, Chenyangguang and Ye, Botao and Chen, Boqi and Delitzas, Alexandros and Wang, Fangjinhua and Pollefeys, Marc and Wang, Xi},
  journal={arXiv preprint arXiv:2603.11755},
  year={2026}
}