Single-View Shape Completion for Robotic Grasping in Clutter

Grasping in clutter with shape completion. (Left): Household objects in the robot workspace, viewed through an Intel RealSense D435i camera. (Middle): Shape completion of the target object and grasp inference on the completed shape. (Right): Grasp execution.

Abstract: In vision-based robot manipulation, a single camera view captures only one side of the objects of interest, and occlusions in cluttered scenes further restrict visibility. As a result, the observed geometry is incomplete and grasp estimation algorithms perform suboptimally. To address this limitation, we leverage diffusion models to perform category-level 3D shape completion from partial depth observations obtained from a single view, reconstructing complete object geometries to provide richer context for grasp planning. Our method focuses on common household items with diverse geometries, generating full 3D shapes that serve as input to downstream grasp inference networks. Unlike prior work, which primarily considers isolated objects or minimal clutter, we evaluate shape completion and grasping in realistic clutter scenarios with household objects. In preliminary evaluations on a cluttered scene, our approach improves grasp success rates by 23% over a naive baseline without shape completion and by 19% over a recent state-of-the-art shape completion approach.
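To make the completion step concrete, the sketch below shows the general pattern of conditional diffusion-based point cloud completion assumed here: starting from Gaussian noise, a point set is iteratively denoised by a network conditioned on the partial single-view observation (standard DDPM reverse sampling). The `denoiser` callable, the noise schedule, and all hyperparameters are hypothetical placeholders, not the trained model used in this work.

```python
import numpy as np

def diffusion_complete(partial, denoiser, steps=50, num_points=2048):
    """DDPM-style reverse-sampling sketch for shape completion.

    `denoiser(x, t, partial)` is a hypothetical network that predicts the
    noise in the current point set x at timestep t, conditioned on the
    partial single-view point cloud.
    """
    betas = np.linspace(1e-4, 0.02, steps)      # linear noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = np.random.randn(num_points, 3)          # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = denoiser(x, t, partial)           # predicted noise at step t
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = np.random.randn(*x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise    # stochastic reverse step
    return x                                    # completed object surface points

if __name__ == "__main__":
    toy_partial = np.random.rand(500, 3)                  # stand-in partial cloud
    zero_denoiser = lambda x, t, p: np.zeros_like(x)      # trivial stand-in network
    print(diffusion_complete(toy_partial, zero_denoiser).shape)  # (2048, 3)
```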

Overview of the proposed method. RGB information is used to segment an object of interest. The object point cloud is then fed into a diffusion model to obtain a completed surface, which in turn informs grasp planning. Grasps are ranked and selected for execution (shown in green in the figure).
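A minimal end-to-end sketch of this pipeline, assuming known camera intrinsics, is given below. The segmentation, shape-completion, and grasp-inference stages are replaced by trivial placeholder functions (the actual system uses a learned segmenter, the diffusion model, and a grasp inference network), so only the data flow from single-view observation to selected grasp is illustrative.

```python
import numpy as np

def segment_target(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Placeholder segmenter: return a boolean mask of the target object."""
    return np.ones(depth.shape, dtype=bool)

def backproject(depth, mask, fx, fy, cx, cy) -> np.ndarray:
    """Lift masked depth pixels to a partial 3D point cloud (camera frame)."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def complete_shape(partial: np.ndarray, num_points: int = 2048) -> np.ndarray:
    """Placeholder for the diffusion model: here we only resample the input."""
    idx = np.random.choice(len(partial), num_points, replace=True)
    return partial[idx]

def infer_grasps(completed: np.ndarray):
    """Placeholder grasp network: return (6-DoF pose, score) candidates."""
    rng = np.random.default_rng(0)
    return [(np.eye(4), float(s)) for s in rng.uniform(size=16)]

def plan_grasp(rgb, depth, intrinsics):
    mask = segment_target(rgb, depth)               # segment object of interest
    partial = backproject(depth, mask, *intrinsics) # partial single-view cloud
    completed = complete_shape(partial)             # diffusion-based completion
    grasps = infer_grasps(completed)                # grasp inference on full shape
    return max(grasps, key=lambda g: g[1])          # rank and select best grasp

if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)
    depth = np.full((480, 640), 0.6, dtype=np.float32)  # synthetic flat depth
    best_pose, best_score = plan_grasp(rgb, depth, (615.0, 615.0, 320.0, 240.0))
    print("selected grasp score:", best_score)
```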