Title: SO(3)-Pose: SO(3)-Equivariance Learning for 6D Object Pose Estimation
Authors: Pan, Haoran; Zhou, Jun; Liu, Yuanpeng; Lu, Xuequan; Wang, Weiming; Yan, Xuefeng; Wei, Mingqiang
Editors: Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Date: 2022-10-04
Year: 2022
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14684
URI: https://diglib.eg.org:443/handle/10.1111/cgf14684
Pages: 371-381 (11 pages)

Abstract: 6D pose estimation of rigid objects from RGB-D images is crucial for object grasping and manipulation in robotics. Although the RGB channels and the depth (D) channel are often complementary, providing appearance and geometry information respectively, it is still non-trivial to fully benefit from these two cross-modal data. We make the simple yet new observation that when an object rotates, its semantic label is invariant to the pose while its keypoint offset direction varies with the pose. To this end, we present SO(3)-Pose, a new representation learning network that explores SO(3)-equivariant and SO(3)-invariant features from the depth channel for pose estimation. The SO(3)-invariant features facilitate learning more distinctive representations for segmenting objects of similar appearance from the RGB channels. The SO(3)-equivariant features communicate with the RGB features to deduce the (missing) geometry for detecting keypoints of objects with reflective surfaces from the depth channel. Unlike most existing pose estimation methods, our SO(3)-Pose not only implements information communication between the RGB and depth channels, but also naturally absorbs SO(3)-equivariance geometry knowledge from depth images, leading to better appearance and geometry representation learning. Comprehensive experiments show that our method achieves state-of-the-art performance on three benchmarks. Code is available at https://github.com/phaoran9999/SO3-Pose.

CCS Concepts: Computing methodologies → Point-based models
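The abstract's central observation is the distinction between SO(3)-invariant features (f(Rx) = f(x), e.g. a semantic label) and SO(3)-equivariant features (g(Rx) = R g(x), e.g. a keypoint offset direction). The following NumPy sketch illustrates these two properties on a toy point cloud; it is not taken from the SO3-Pose repository, and the functions random_rotation, semantic_label, and keypoint_offset are hypothetical stand-ins for illustration only.

```python
# Toy demonstration of SO(3)-invariance vs. SO(3)-equivariance on a point cloud.
# Not the paper's implementation; all names here are illustrative assumptions.
import numpy as np

def random_rotation(rng):
    """Sample a rotation R in SO(3) via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # fix column signs for a consistent basis
    if np.linalg.det(q) < 0:          # ensure det(R) = +1 (rotation, not reflection)
        q[:, 0] = -q[:, 0]
    return q

def semantic_label(points):
    """SO(3)-invariant toy feature: depends only on distances to the centroid,
    so f(points @ R.T) == f(points) for any rotation R."""
    centered = points - points.mean(axis=0)
    return np.linalg.norm(centered, axis=1).mean()

def keypoint_offset(points, keypoint):
    """SO(3)-equivariant toy feature: the centroid-to-keypoint offset rotates
    with the object, i.e. g(R x) == R g(x)."""
    return keypoint - points.mean(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))       # toy object as a point cloud
kp = pts[0]                           # pick one point as a "keypoint"
R = random_rotation(rng)

# Invariance: the semantic label is unchanged when the object rotates.
assert np.isclose(semantic_label(pts @ R.T), semantic_label(pts))

# Equivariance: the keypoint offset direction rotates with the pose.
assert np.allclose(keypoint_offset(pts @ R.T, kp @ R.T),
                   keypoint_offset(pts, kp) @ R.T)
```

Under these toy definitions, the invariant feature is a natural fit for segmentation (the label should not change as the pose changes), while the equivariant feature carries the pose itself, which is why keypoint offsets must transform with the rotation.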