PointRNN: Point Recurrent Neural Network for Moving Point Cloud Processing

18 Oct 2019  ·  Hehe Fan, Yi Yang

In this paper, we introduce a Point Recurrent Neural Network (PointRNN) for moving point cloud processing. At each time step, PointRNN takes point coordinates $\boldsymbol{P} \in \mathbb{R}^{n \times 3}$ and point features $\boldsymbol{X} \in \mathbb{R}^{n \times d}$ as input ($n$ and $d$ denote the number of points and the number of feature channels, respectively). The state of PointRNN is composed of point coordinates $\boldsymbol{P}$ and point states $\boldsymbol{S} \in \mathbb{R}^{n \times d'}$ ($d'$ denotes the number of state channels). Similarly, the output of PointRNN is composed of $\boldsymbol{P}$ and new point features $\boldsymbol{Y} \in \mathbb{R}^{n \times d''}$ ($d''$ denotes the number of new feature channels). Since point clouds are orderless, point features and states from two time steps cannot be operated on directly. Therefore, a point-based spatiotemporally-local correlation is adopted to aggregate point features and states according to point coordinates. We further propose two variants of PointRNN, i.e., Point Gated Recurrent Unit (PointGRU) and Point Long Short-Term Memory (PointLSTM). We apply PointRNN, PointGRU and PointLSTM to moving point cloud prediction, which aims to predict the future trajectories of points in a set given their historical movements. Experimental results show that PointRNN, PointGRU and PointLSTM are able to produce correct predictions on both synthetic and real-world datasets, demonstrating their ability to model point cloud sequences. The code has been released at \url{https://github.com/hehefan/PointRNN}.
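To make the recurrence concrete, the sketch below shows one possible PointRNN-style cell in PyTorch. It is not the authors' released implementation; it only illustrates the point-based spatiotemporally-local correlation described in the abstract: for each current point, states are gathered from its nearest neighbors in the previous frame and aggregated with a shared MLP and max-pooling. The class name `PointRNNCell`, the neighborhood size `k`, and the MLP design are hypothetical choices for illustration.

```python
# Minimal, illustrative PointRNN-style cell (assumptions: k-NN grouping in the
# previous frame, a single shared 1x1 Conv2d MLP, max-pooling over neighbors).
import torch
import torch.nn as nn


class PointRNNCell(nn.Module):
    def __init__(self, in_channels, state_channels, k=8):
        super().__init__()
        self.k = k
        self.state_channels = state_channels
        # Shared MLP over [displacement (3) | current feature (d) | neighbor state (d')].
        self.mlp = nn.Sequential(
            nn.Conv2d(3 + in_channels + state_channels, state_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, P, X, prev_P=None, prev_S=None):
        # P: (B, n, 3) point coordinates; X: (B, n, d) point features.
        B, n, _ = P.shape
        if prev_P is None:  # first time step: use current coordinates and zero states
            prev_P = P
            prev_S = P.new_zeros(B, n, self.state_channels)
        # Pairwise distances between current and previous coordinates.
        dist = torch.cdist(P, prev_P)                              # (B, n, n)
        idx = dist.topk(self.k, dim=-1, largest=False).indices     # (B, n, k)
        batch = torch.arange(B, device=P.device).view(B, 1, 1)
        nbr_P = prev_P[batch, idx]                                 # (B, n, k, 3)
        nbr_S = prev_S[batch, idx]                                 # (B, n, k, d')
        # Spatiotemporally-local correlation: displacement + current feature + neighbor state.
        disp = nbr_P - P.unsqueeze(2)                              # (B, n, k, 3)
        feat = X.unsqueeze(2).expand(-1, -1, self.k, -1)           # (B, n, k, d)
        grouped = torch.cat([disp, feat, nbr_S], dim=-1)           # (B, n, k, 3+d+d')
        grouped = grouped.permute(0, 3, 1, 2)                      # (B, C, n, k) for Conv2d
        S = self.mlp(grouped).max(dim=-1).values.permute(0, 2, 1)  # (B, n, d')
        # Output coordinates and new point states; in this sketch the new
        # features are taken to be the states themselves.
        return P, S
```

A sequence would be processed by calling the cell once per frame and threading `(P, S)` forward as the state; PointGRU and PointLSTM replace this single update with gated updates built from the same neighborhood aggregation.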


