3D Point Cloud Generation

Overview

Generating synthetic 3D point cloud data is an open area of research aimed at facilitating the learning of non-Euclidean point representations. In three dimensions, synthetic data may take the form of meshes, voxels, or raw point clouds, which can be used to learn representations that aid computer vision tasks such as classification, segmentation, and reconstruction. Yet, the geometry and texture of these data modalities are bounded by the resolution of the modeled objects. In addition, because the design process is complex, only a limited number of objects can be modeled by hand, which may fail to satisfy the enormous data needs of deep neural networks. Thus, a critical bottleneck is the limited amount of annotated data available for deep learning applications. Automatically synthesizing 3D point clouds, in an unsupervised manner, can address this problem by providing a source of potentially unlimited, diverse training data.

Contributions

  • We created a novel conditional generative adversarial network that synthesizes dense, colored 3D point clouds for different classes of objects in an unsupervised manner (a minimal sketch of this style of conditional point cloud GAN is given below)
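
To make the idea of a class-conditional point cloud GAN concrete, the following is a minimal sketch in PyTorch. The layer sizes, the 2048-point resolution, the plain MLP backbone, and names such as `Generator` and `Discriminator` are illustrative assumptions, not the progressive, tree-structured architecture described in the 3DV 2020 paper; the sketch only shows the general pattern of conditioning both networks on a class label and generating xyz + rgb points.

```python
# Minimal sketch of a class-conditional GAN for colored point clouds (assumed
# PyTorch setup). All sizes and the MLP design are illustrative, not the
# architecture from the referenced paper.
import torch
import torch.nn as nn

NUM_POINTS = 2048   # points per generated cloud (assumed)
NOISE_DIM = 96      # latent vector size (assumed)
NUM_CLASSES = 10    # number of object classes to condition on (assumed)

class Generator(nn.Module):
    """Maps (noise, class label) -> N x 6 point cloud (xyz + rgb)."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NOISE_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, NUM_POINTS * 6),
        )

    def forward(self, z, labels):
        h = torch.cat([z, self.label_emb(labels)], dim=1)
        points = self.net(h).view(-1, NUM_POINTS, 6)
        # xyz stays unconstrained; squash rgb into [0, 1]
        xyz, rgb = points[..., :3], torch.sigmoid(points[..., 3:])
        return torch.cat([xyz, rgb], dim=-1)

class Discriminator(nn.Module):
    """Scores an (N x 6 point cloud, class label) pair as real or fake."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, 64)
        self.point_mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256 + 64, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, points, labels):
        # max-pool over points for a permutation-invariant cloud feature
        feat = self.point_mlp(points).max(dim=1).values
        return self.head(torch.cat([feat, self.label_emb(labels)], dim=1))

if __name__ == "__main__":
    g, d = Generator(), Discriminator()
    z = torch.randn(4, NOISE_DIM)
    labels = torch.randint(0, NUM_CLASSES, (4,))
    clouds = g(z, labels)       # (4, 2048, 6) colored point clouds
    scores = d(clouds, labels)  # (4, 1) realism scores
    print(clouds.shape, scores.shape)
```

In this pattern, the label embedding is concatenated with the latent noise in the generator and with the pooled cloud feature in the discriminator, which is what makes the synthesis conditional on the object class.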

Publications

  1. M.S. Arshad and W.J. Beksi, "A Progressive Conditional Generative Adversarial Network for Generating Dense and Colored 3D Point Clouds," International Conference on 3D Vision (3DV), 2020.