Title: PointJEM: Self-supervised Point Cloud Understanding for Reducing Feature Redundancy via Joint Entropy Maximization

Authors: Cao, Xin; Xia, Huan; Wang, Haoyu; Su, Linzhi; Zhou, Ping; Li, Kang

Editors: Chen, Renjie; Ritschel, Tobias; Whiting, Emily

Date Issued: 2024-10-13
Year: 2024
ISBN: 978-3-03868-250-9
DOI: https://doi.org/10.2312/pg.20241276
Handle: https://diglib.eg.org/handle/10.2312/pg20241276
Pages: 10

Abstract: Most deep learning methods for point cloud processing are supervised and require extensive labeled data. However, labeling point cloud data is a tedious and time-consuming task. Self-supervised representation learning can address this problem by extracting robust and generalized features from unlabeled data. Yet, the features obtained through representation learning are often redundant, and current methods typically reduce redundancy by imposing linear correlation constraints. In this paper, we introduce PointJEM, a self-supervised representation learning method for point clouds. It includes an embedding scheme that divides the vector into parts, each learning a unique feature. To minimize redundancy, PointJEM maximizes the joint entropy between parts, making the features pairwise independent. We tested PointJEM on various datasets and found that it significantly reduces redundancy beyond linear correlation. Additionally, PointJEM performs well in downstream tasks such as classification and segmentation.

License: Attribution 4.0 International License

Keywords: point cloud; representation learning; self-supervised

CCS Concepts: Computing methodologies → Computer vision representations; Networks → Network design principles
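Note: The abstract's core mechanism (splitting the embedding into parts and maximizing pairwise joint entropy so the parts become independent) can be illustrated as a loss term. The sketch below is not the authors' implementation; it assumes each part is discretized into a categorical distribution via softmax and that the joint distribution of a pair of parts is estimated over a batch. The names joint_entropy_loss, lambda_je, and ssl_loss are hypothetical.

    # Minimal sketch of a pairwise joint-entropy loss over embedding parts.
    # Assumption: each part is softmax-normalized into a categorical
    # distribution, and joints are estimated batch-wise (not the paper's
    # actual estimator).
    import torch
    import torch.nn.functional as F

    def joint_entropy_loss(embedding: torch.Tensor, num_parts: int) -> torch.Tensor:
        """embedding: (batch, dim); dim must be divisible by num_parts."""
        parts = embedding.chunk(num_parts, dim=1)         # k parts of shape (batch, dim/k)
        probs = [F.softmax(p, dim=1) for p in parts]      # categorical distribution per part
        loss = embedding.new_zeros(())
        for i in range(num_parts):
            for j in range(i + 1, num_parts):
                # Batch estimate of the joint distribution p(a, b) of parts i and j.
                joint = torch.einsum('ba,bc->ac', probs[i], probs[j]) / embedding.size(0)
                entropy = -(joint * torch.log(joint + 1e-8)).sum()
                loss = loss - entropy                     # maximizing entropy = minimizing its negative
        return loss

    # Hypothetical usage inside a self-supervised objective:
    # z = encoder(points)                                  # (B, D) point cloud embedding
    # total_loss = ssl_loss + lambda_je * joint_entropy_loss(z, num_parts=4)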