Egocentric Manipulation Interface
Learning Active Vision and Whole-Body Manipulation from Egocentric Human Demonstrations

Anonymous Authors, Under Peer Review

EgoMI is a scalable framework for collecting egocentric human demonstration data and deploying it to train and retarget whole-body, active-vision manipulation policies, without requiring robot hardware for teleoperation.
[Paper (coming soon)] [arXiv (coming soon)] [Code (coming soon)]

Abstract

Imitation learning from human demonstrations offers a promising approach for robot skill acquisition, but egocentric human data introduces fundamental challenges due to the embodiment gap. During manipulation, humans actively coordinate head and hand movements, continuously reposition their viewpoint, and use pre-action visual fixation search strategies to locate relevant objects. These behaviors create dynamic, task-driven head motions that static robot sensing systems cannot replicate, leading to a significant distribution shift that degrades policy performance. We present EgoMI, a framework that captures synchronized end-effector and active head trajectories during manipulation tasks, resulting in data that can be retargeted to compatible semi-humanoid robot embodiments. To handle rapid and wide-spanning head viewpoint changes, we introduce a memory-augmented policy that selectively incorporates historical observations. We evaluate our approach on a bimanual robot equipped with an actuated camera head and find that policies with explicit head-motion modeling consistently outperform baseline methods. Results suggest that coordinated hand-eye learning with EgoMI effectively bridges the human-robot embodiment gap for robust imitation learning on semi-humanoid embodiments.
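The abstract's idea of selectively incorporating historical observations can be illustrated with a minimal sketch. The class name, hyperparameters, and the cosine-similarity selection rule below are all assumptions for illustration, not the paper's actual method: we keep a rolling memory of past view embeddings, store only sufficiently novel frames, and retrieve the frames most dissimilar to the current view so the policy retains context lost during rapid head motion.

```python
import numpy as np

class MemoryAugmentedObservationBuffer:
    """Hypothetical sketch: maintain a memory of past observation
    embeddings and select the k frames least similar to the current
    view, so a policy can recover context lost to rapid head motion."""

    def __init__(self, capacity=64, k=4, novelty_threshold=0.9):
        self.capacity = capacity            # max frames kept in memory
        self.k = k                          # frames retrieved per step
        self.novelty_threshold = novelty_threshold
        self.memory = []                    # unit-norm embedding vectors

    def add(self, embedding):
        """Store a frame only if it is novel w.r.t. everything in memory."""
        e = np.asarray(embedding, dtype=np.float64)
        e = e / (np.linalg.norm(e) + 1e-8)
        if not self.memory or max(float(m @ e) for m in self.memory) < self.novelty_threshold:
            self.memory.append(e)
            if len(self.memory) > self.capacity:
                self.memory.pop(0)          # evict the oldest frame

    def select(self, current):
        """Return up to k stored frames least similar to the current view."""
        c = np.asarray(current, dtype=np.float64)
        c = c / (np.linalg.norm(c) + 1e-8)
        sims = [float(m @ c) for m in self.memory]
        order = np.argsort(sims)[: self.k]  # most dissimilar first
        return [self.memory[i] for i in order]
```

In practice the embeddings would come from the policy's visual encoder; dissimilarity-based retrieval is just one plausible selection rule.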

Real World Policy Rollouts

EgoMI Device

Section work in progress...

Spatial Aware Robust Keyframe Selection (SPARKS)

Section work in progress...

Experimental Randomization Distribution

Section work in progress...