Point cloud generation and representation are important in many industrial applications. Generating and editing high-quality 3D shapes remains a challenging problem in deep learning. Inspired by StyleGAN, we propose a style-based generative adversarial network to generate high-quality 3D point clouds. An improved non-linear mapping network learns the distribution of points and is used to generate well-distributed point clouds. We also provide a coarse-to-fine representation for point clouds. Experimental results on the ShapeNet Part dataset (the aircraft and chair single categories as well as all 16 categories) show that our method generates more uniform point clouds than other GAN-based methods with fewer training epochs. The latent code of a point cloud is better linearly separated and is therefore easier to edit.
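
The non-linear mapping network follows the StyleGAN idea of transforming a sampled latent code z into an intermediate style code w through a multi-layer perceptron before it conditions the generator. The sketch below is a minimal illustration of that idea in PyTorch; the depth, layer width, and normalization choices are assumptions for illustration, not the exact architecture used in this paper.

```python
# Minimal sketch of a StyleGAN-style mapping network for point-cloud generation.
# Depth, width, and the latent normalization are illustrative assumptions,
# not the exact architecture proposed in the paper.
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    """Non-linear MLP that maps a latent code z to an intermediate style code w."""

    def __init__(self, z_dim: int = 128, w_dim: int = 128, num_layers: int = 8):
        super().__init__()
        layers = []
        in_dim = z_dim
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, w_dim), nn.LeakyReLU(0.2)]
            in_dim = w_dim
        self.net = nn.Sequential(*layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Normalize z (as in StyleGAN) before the MLP for training stability.
        z = z / (z.pow(2).mean(dim=1, keepdim=True) + 1e-8).sqrt()
        return self.net(z)


if __name__ == "__main__":
    mapping = MappingNetwork()
    z = torch.randn(4, 128)   # batch of sampled latent codes
    w = mapping(z)            # style codes that would condition the point generator
    print(w.shape)            # torch.Size([4, 128])
```

In this sketch, the resulting style code w would be broadcast to every point produced by the generator; editing operations then act on w rather than on z, which is where the claimed linear separability of the latent space matters.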