NeRF-in-the-Wild recreates complex outdoor scenes from stills


A group of researchers from Google Research and Google Brain is developing deep learning models that can synthesise complex outdoor scenes from unstructured collections of photos captured in the wild, regardless of camera angle, camera settings, or lighting conditions.

The new models capture per-image variation in appearance, such as lighting and photometric post-processing, in outdoor scenes without affecting the reconstructed 3D geometry.

“We build on neural radiance fields (NeRF), which uses the weights of a multilayer perceptron to implicitly model the volumetric density and color of a scene. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modelling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders.”

Quote from the researchers’ paper, NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
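To make the idea in the quote concrete, here is a minimal, hypothetical sketch of a NeRF-style multilayer perceptron written with TensorFlow 2 and Keras, the tools the team reports using. The layer count, width, appearance-embedding size, and the omission of NeRF's positional encoding of the inputs are all simplifications for illustration, not the authors' published architecture; the per-image appearance input simply mirrors the stated goal of modelling appearance variation without altering geometry.

```python
import tensorflow as tf

def make_nerf_mlp(num_layers=8, width=256, appearance_dim=32):
    """Minimal NeRF-style MLP sketch (illustrative sizes, not the paper's).

    Maps a 3D position to volumetric density (sigma), and a position,
    viewing direction, and per-image appearance embedding to RGB colour.
    """
    pos_in = tf.keras.Input(shape=(3,), name="position")
    dir_in = tf.keras.Input(shape=(3,), name="view_direction")
    app_in = tf.keras.Input(shape=(appearance_dim,), name="appearance")

    x = pos_in
    for _ in range(num_layers):
        x = tf.keras.layers.Dense(width, activation="relu")(x)

    # Density depends on position only, so the recovered 3D geometry
    # cannot be distorted by per-image appearance variation.
    sigma = tf.keras.layers.Dense(1, activation="relu", name="sigma")(x)

    # Colour is additionally conditioned on the viewing direction and an
    # image-specific appearance code (lighting, photometric processing).
    h = tf.keras.layers.Concatenate()([x, dir_in, app_in])
    h = tf.keras.layers.Dense(width // 2, activation="relu")(h)
    rgb = tf.keras.layers.Dense(3, activation="sigmoid", name="rgb")(h)

    return tf.keras.Model(inputs=[pos_in, dir_in, app_in],
                          outputs=[sigma, rgb])
```

Because density is a function of position alone, changing the appearance code can only recolour the scene, never reshape it.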

Models were trained with TensorFlow 2 and Keras on eight NVIDIA V100 GPUs, using publicly available datasets as well as images sourced from Flickr. All NeRF variants were optimised for 300,000 steps on the eight GPUs using the Adam optimiser; for the Lego dataset, optimisation ran for 125,000 steps on four GPUs. According to the team, their work in progress shows significant qualitative and quantitative improvements over previous state-of-the-art approaches.
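As a rough illustration of that training setup, the sketch below runs one Adam optimisation step for the model above. The alpha-compositing is the standard NeRF volume-rendering quadrature; the learning rate, ray sampling, and mean-squared photometric loss are assumed values for the example, not the paper's reported hyperparameters.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)  # assumed value

def train_step(model, ray_points, ray_dirs, app_codes, deltas, target_rgb):
    """One Adam step on a batch of rays, reusing make_nerf_mlp above.

    ray_points: (batch, samples, 3) points sampled along each camera ray
    ray_dirs:   (batch, 3) unit viewing directions
    app_codes:  (batch, appearance_dim) per-image appearance embeddings
    deltas:     (batch, samples) distances between consecutive samples
    target_rgb: (batch, 3) ground-truth pixel colours
    """
    batch = tf.shape(ray_points)[0]
    samples = tf.shape(ray_points)[1]
    with tf.GradientTape() as tape:
        # Query the field at every sample point along every ray.
        flat_pts = tf.reshape(ray_points, (-1, 3))
        flat_dirs = tf.reshape(
            tf.repeat(ray_dirs[:, None, :], samples, axis=1), (-1, 3))
        flat_app = tf.reshape(
            tf.repeat(app_codes[:, None, :], samples, axis=1),
            (-1, tf.shape(app_codes)[1]))
        sigma, rgb = model([flat_pts, flat_dirs, flat_app], training=True)
        sigma = tf.reshape(sigma, (batch, samples))
        rgb = tf.reshape(rgb, (batch, samples, 3))

        # Standard volume-rendering quadrature: alpha-composite samples.
        alpha = 1.0 - tf.exp(-sigma * deltas)
        trans = tf.math.cumprod(1.0 - alpha + 1e-10, axis=-1, exclusive=True)
        weights = alpha * trans
        pred_rgb = tf.reduce_sum(weights[..., None] * rgb, axis=1)

        # Photometric reconstruction loss against the training pixels.
        loss = tf.reduce_mean(tf.square(pred_rgb - target_rgb))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```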


