This is a very interesting paper focusing on a core problem in autonomous driving system development: such development depends critically on the ability to replay complex and diverse traffic scenarios in simulation, so accurately simulating vehicle sensors such as cameras, lidar, and radar is essential. Current sensor simulators are built on gaming engines such as Unreal or Unity and require manual creation of environments, objects, and material properties. These approaches scale poorly and fail to produce realistic approximations of camera, lidar, and radar data without significant additional work.

In this paper, the research team proposes a simple yet effective data-driven approach to synthesizing camera data for autonomous driving simulations. The approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass (or set of passes), preserving rich information about object 3D geometry and appearance, as well as the scene conditions. A SurfelGAN network then renders realistic camera images for novel positions and orientations of the self-driving vehicle and of the moving objects in the scene.
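To make the surfel representation concrete, here is a minimal Python sketch of the general idea: each lidar point becomes a small colored disk oriented along an estimated surface normal. This is only an illustration of the concept, not the authors' code; the paper builds texture-mapped surfels from voxelized lidar scans textured by camera images, and the helper names and fixed radius below are assumptions.

```python
# Minimal sketch (assumed, simplified): turn a colored point cloud into
# surfels, i.e. small oriented disks with a color attached.
import numpy as np

def estimate_normals(points, k=16):
    """Estimate per-point normals via PCA over the k nearest neighbors."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        # Brute-force neighbor search for clarity; a k-d tree would be
        # used in practice for large clouds.
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        # The normal is the eigenvector of the neighborhood covariance
        # with the smallest eigenvalue (the direction of least variance).
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]
    return normals

def build_surfels(points, colors, radius=0.1):
    """Return (center, normal, radius, color) surfels; radius is an
    assumed fixed disk size in meters."""
    normals = estimate_normals(points)
    return [dict(center=c, normal=n, radius=radius, color=col)
            for c, n, col in zip(points, normals, colors)]
```

In the full method, each surfel would carry a texture patch (captured at multiple distances) rather than a single color, which is what lets the reconstructed scene be re-rendered from novel viewpoints before the GAN refines the result.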

The Waymo Open Dataset is used to demonstrate the approach, showing that it can synthesize realistic camera data for simulated scenarios. The authors also use this dataset to provide additional evaluation and to demonstrate the usefulness of their SurfelGAN model.
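As a rough illustration of how such an evaluation could work, the sketch below compares an off-the-shelf vehicle detector's behavior on real versus synthesized frames, on the intuition that realistic synthetic images should yield similar detector responses. This is an assumed protocol for illustration only; `detector`, `real_frames`, and `synth_frames` are hypothetical placeholders, not the paper's exact evaluation setup.

```python
# Hedged sketch (assumed protocol): a small real-vs-synthetic gap in
# detector confidence suggests the synthesized images are realistic.
from statistics import mean

def mean_confidence(detector, frames):
    """Average top detection confidence over a set of frames.
    `detector(frame)` is assumed to return a list of (box, score) pairs."""
    scores = []
    for frame in frames:
        detections = detector(frame)
        scores.append(max((score for _, score in detections), default=0.0))
    return mean(scores) if scores else 0.0

def realism_gap(detector, real_frames, synth_frames):
    """Smaller gap = detector treats synthetic frames more like real ones."""
    return abs(mean_confidence(detector, real_frames)
               - mean_confidence(detector, synth_frames))
```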

Paper: https://arxiv.org/pdf/2005.03844.pdf