Text2PointCloud: Text-Driven Stylization for Sparse PointCloud

Authors: Hwang, Inwoo; Kim, Hyeonwoo; Lim, Donggeun; Park, Inbum; Kim, Young Min
Editors: Babaei, Vahid; Skouras, Melina
Issued: 2023 (available online 2023-05-03)
ISBN: 978-3-03868-209-7
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20231007
URI: https://diglib.eg.org:443/handle/10.2312/egs20231007
Pages: 29-32 (4 pages)
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Imaging and Video → Computational Photography; Multi-View and 3D; Paint Systems

Abstract: We present Text2PointCloud, a method that processes sparse, noisy point cloud input and generates high-quality stylized output. Given point cloud data, our iterative pipeline stylizes and deforms the points under the guidance of a text description while gradually densifying the point cloud. Because our framework leverages existing image and text embedding resources, it requires neither dedicated 3D datasets with high-quality textures produced by skilled artists nor high-resolution colored 3D models. Moreover, since we represent 3D shapes as point clouds, we can visualize fine-grained geometric variation in objects with complex topology, such as flowers or fire. To the best of our knowledge, ours is the first approach that directly stylizes uncolored, sparse point cloud input without converting it into a mesh or implicit representation, a conversion that can fail to preserve the original information in the measurements, especially when the object exhibits complex topology.
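The abstract describes an optimization loop that stylizes and deforms points under text guidance using pretrained image-text embeddings. The following is a minimal sketch of that idea, not the authors' code: it assumes CLIP as the embedding model (the abstract only says "image and text embedding" resources), uses a crude soft-splatting renderer written here for illustration, and omits the paper's iterative densification stage. The names render, offsets, and the prompt are all illustrative assumptions.

# Minimal sketch (not the authors' released code) of text-driven point
# cloud stylization: per-point RGB colors and small geometric offsets are
# optimized so that a crude differentiable rendering of the cloud matches
# a text prompt in CLIP's joint image-text space.
# Assumes OpenAI's clip package: pip install git+https://github.com/openai/CLIP
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()  # fp32 so gradients flow cleanly on GPU

# CLIP's standard input normalization constants.
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

def render(points, colors, res=224, sigma=2.0):
    """Soft orthographic splat of points (N,3) with colors (N,3) to a (1,3,res,res) image."""
    xy = (points[:, :2] * 0.45 + 0.5) * (res - 1)  # map roughly [-1,1] to pixel coords
    ys, xs = torch.meshgrid(
        torch.arange(res, device=device, dtype=points.dtype),
        torch.arange(res, device=device, dtype=points.dtype),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)      # (res*res, 2)
    d2 = ((grid[:, None, :] - xy[None, :, :]) ** 2).sum(-1)  # pixel-to-point distances
    w = torch.softmax(-d2 / (2 * sigma ** 2), dim=1)         # soft nearest-point weights
    img = (w @ colors).reshape(res, res, 3)
    return img.permute(2, 0, 1).unsqueeze(0)

points = torch.randn(512, 3, device=device) * 0.3        # stand-in for a sparse, noisy scan
offsets = torch.zeros_like(points, requires_grad=True)   # learned geometric deformation
colors = torch.full_like(points, 0.5).requires_grad_()   # learned per-point color

with torch.no_grad():
    t = model.encode_text(clip.tokenize(["a red rose"]).to(device))
    t = t / t.norm(dim=-1, keepdim=True)

opt = torch.optim.Adam([offsets, colors], lr=1e-2)
for step in range(200):
    img = render(points + offsets, colors.clamp(0, 1))
    f = model.encode_image((img - MEAN) / STD)
    f = f / f.norm(dim=-1, keepdim=True)
    loss = 1 - (f * t).sum()               # CLIP cosine distance to the prompt
    loss += 0.1 * offsets.pow(2).mean()    # keep geometry close to the input scan
    opt.zero_grad(); loss.backward(); opt.step()

In a full pipeline of this kind one would render from multiple viewpoints per step and periodically upsample the cloud to realize the gradual densification the abstract describes; the single orthographic view here only keeps the sketch short.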