The generation of 3D hand-object interactions (HOIs) from text is crucial for dexterous robotic grasping and VR/AR content creation, requiring both high visual fidelity and physical plausibility. However, extracting meshes from text-generated Gaussians is an ill-posed problem, and physics-based optimization on the resulting erroneous meshes poses further challenges. To address these issues, we introduce THOM, a training-free framework that generates photorealistic, physically plausible 3D HOI meshes without requiring a template object mesh. THOM employs a two-stage pipeline: it first generates the hand and object Gaussians, then performs physics-based HOI optimization. Our new mesh extraction method and vertex-to-Gaussian mapping explicitly assign Gaussian elements to mesh vertices, enabling topology-aware regularization. Furthermore, we improve the physical plausibility of interactions through VLM-guided translation refinement and contact-aware optimization. Comprehensive experiments demonstrate that THOM consistently surpasses state-of-the-art methods in text alignment, visual realism, and interaction plausibility.
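The vertex-to-Gaussian mapping described above can be illustrated with a minimal nearest-neighbor sketch. This is not the paper's exact formulation; the function names, the nearest-neighbor binding rule, and the mean-distance regularizer are illustrative assumptions of how Gaussians might be tied to mesh vertices for topology-aware regularization.

```python
import numpy as np

def map_gaussians_to_vertices(gauss_centers, mesh_verts):
    """Bind each Gaussian to its nearest mesh vertex (illustrative assumption).

    gauss_centers: (G, 3) Gaussian means; mesh_verts: (V, 3) vertex positions.
    Returns an array of shape (G,) with the bound vertex index per Gaussian.
    """
    # pairwise distances between every Gaussian center and every vertex
    d = np.linalg.norm(gauss_centers[:, None, :] - mesh_verts[None, :, :], axis=-1)
    return d.argmin(axis=1)

def topology_regularizer(gauss_centers, mesh_verts, assign):
    """Penalize Gaussians drifting away from their bound vertices,
    so that mesh-level (topology-aware) edits move Gaussians coherently."""
    return np.linalg.norm(gauss_centers - mesh_verts[assign], axis=-1).mean()
```

In an optimization loop, this regularizer would be one term alongside the photometric and physics-based losses, keeping the Gaussian representation consistent with the extracted mesh topology.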
The THOM framework adopts a two-stage pipeline for generating realistic hand-object interactions. In the first stage, the object and hand meshes are generated independently with high visual realism. In the second stage, we jointly optimize their interaction parameters using physics-based regularization losses to ensure plausible contacts and minimal penetration.
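The physics-based regularization in the second stage can be sketched as two complementary terms: a penetration penalty on hand vertices that fall inside the object, and an attraction term that pulls near-contact vertices onto the surface. This is a minimal illustrative sketch, not the paper's actual losses; the function name, the point-cloud-plus-normals object representation, and the `contact_thresh` parameter are assumptions.

```python
import numpy as np

def interaction_losses(hand_verts, obj_pts, obj_normals, contact_thresh=0.005):
    """Toy contact/penetration losses (illustrative assumption).

    hand_verts:  (H, 3) hand mesh vertices
    obj_pts:     (O, 3) points sampled on the object surface
    obj_normals: (O, 3) outward unit normals at those points
    """
    # distance from each hand vertex to each object surface sample
    diff = hand_verts[:, None, :] - obj_pts[None, :, :]        # (H, O, 3)
    dists = np.linalg.norm(diff, axis=-1)                      # (H, O)
    nearest = dists.argmin(axis=1)                             # closest sample per vertex
    d_min = dists[np.arange(len(hand_verts)), nearest]         # (H,)

    # signed distance proxy: negative when a vertex is behind the surface
    to_vert = hand_verts - obj_pts[nearest]
    signed = np.einsum('ij,ij->i', to_vert, obj_normals[nearest])

    # penetration loss: penalize vertices inside the object
    pen_loss = np.clip(-signed, 0.0, None).sum()
    # attraction loss: pull vertices already near the surface into contact
    att_loss = np.where(d_min < contact_thresh, d_min, 0.0).sum()
    return pen_loss, att_loss
```

Minimizing a weighted sum of these two terms over the hand pose and relative translation is one common way such "plausible contact, minimal penetration" objectives are realized.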
Qualitative comparison of our method with Text-to-3D generation methods.
Qualitative comparison of our method with Text-to-HOI generation methods.
@article{
Coming soon
}