4D Gaussian Splatting for Real-Time Dynamic Scene Rendering

¹Huazhong University of Science and Technology  ²Huawei Inc.

*Equal Contributions. Project Lead. Corresponding Authors.

4D-GS can learn a dynamic scene within 20 minutes and render novel views at over 50 FPS on a single RTX 3080 GPU.


Representing and rendering dynamic scenes is an important but challenging task; in particular, it is hard to maintain high efficiency while accurately modeling complex motions. We introduce 4D Gaussian Splatting (4D-GS) to achieve real-time dynamic scene rendering with high training and storage efficiency. An efficient deformation field is constructed to model both Gaussian motions and shape deformations, and adjacent Gaussians are connected via a HexPlane to produce more accurate position and shape deformations. Our 4D-GS method achieves real-time rendering at high resolutions, 70 FPS at 800×800 on an RTX 3090 GPU, while maintaining quality comparable to or higher than previous state-of-the-art methods.
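To make the HexPlane idea concrete, the sketch below shows one common way such a structure is queried: six learnable 2D feature planes, one per coordinate pair of (x, y, z, t), are bilinearly sampled at each Gaussian's (position, time) and fused by element-wise product. This is a minimal illustration of the general HexPlane technique, not the paper's exact implementation; the function name and plane resolutions are assumptions.

```python
import torch
import torch.nn.functional as F

def hexplane_features(planes, xyzt):
    """Query a HexPlane-style field.

    planes: list of six tensors of shape (1, C, H, W), one per
            coordinate pair (xy, xz, yz, xt, yt, zt).
    xyzt:   (N, 4) coordinates normalized to [-1, 1].
    Returns an (N, C) fused feature per point.
    """
    pairs = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
    feat = 1.0
    for plane, (i, j) in zip(planes, pairs):
        # grid_sample expects a (1, N, 1, 2) grid of 2D lookup coords
        grid = xyzt[:, [i, j]].view(1, -1, 1, 2)
        sampled = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1)
        # fuse the six plane samples by element-wise product
        feat = feat * sampled.squeeze(0).squeeze(-1).T            # (N, C)
    return feat
```

In practice such planes are kept at multiple resolutions and the resulting features concatenated, trading memory for the ability to capture both smooth and fine-grained motion.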


Our method achieves real-time rendering of dynamic scenes at high image resolutions while maintaining high rendering quality. The figure on the right reports results mainly on synthetic datasets; the radius of each dot corresponds to training time. "Res": resolution.


The overall pipeline of our model. Given a group of 3D Gaussians S, we extract the center X of each 3D Gaussian and the timestamp t to compute a voxel feature from multi-resolution voxel planes. A tiny MLP then decodes the feature to produce the deformed Gaussians S′ at timestamp t.
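The decoding step above can be sketched as a small MLP that maps each Gaussian's voxel feature to offsets on its position, scale, and rotation. The class name, layer widths, and choice of decoded attributes below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DeformationDecoder(nn.Module):
    """Tiny MLP decoding a per-Gaussian voxel feature into deformation
    offsets, mirroring the pipeline above (hypothetical configuration)."""

    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.d_xyz = nn.Linear(hidden, 3)    # position offset
        self.d_scale = nn.Linear(hidden, 3)  # scale offset
        self.d_rot = nn.Linear(hidden, 4)    # quaternion offset

    def forward(self, feat, xyz, scale, rot):
        # Predict residual deformations and apply them to the canonical
        # Gaussian parameters to obtain S' at timestamp t.
        h = self.net(feat)
        return (xyz + self.d_xyz(h),
                scale + self.d_scale(h),
                rot + self.d_rot(h))
```

Predicting residuals rather than absolute values keeps the canonical 3D Gaussians as a stable reference frame, which is a common design choice in deformation-field models.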

Training Process

D-NeRF Datasets

HyperNeRF Datasets

Fixed-View Rendering

Free-View Rendering


We would like to express our sincere gratitude to Zhenghong Zhou for his revisions to our code and discussions on the content of our paper.


@article{wu20234dgaussians,
      title={4D Gaussian Splatting for Real-Time Dynamic Scene Rendering},
      author={Wu, Guanjun and Yi, Taoran and Fang, Jiemin and Xie, Lingxi and Zhang, Xiaopeng and Wei, Wei and Liu, Wenyu and Tian, Qi and Wang, Xinggang},
      journal={arXiv preprint arXiv:2310.08528},
      year={2023}
}