TOG 2021 (Proc. SIGGRAPH Asia)
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron
arXiv / Publisher / BibTeX
v1 (12/22/2021)
This GitHub repository includes our code. All subsequent folders and files are specified w.r.t. the Google Drive root of our release. If you want to try our trained models, see "Pre-Trained Models." If you want to use our rendered/processed data, see "Data." If you want to render your own data, also see "Metadata."
There are four types of pre-trained models in NeRFactor. The first is the MERL BRDF model, which we release here. The second is the set of NeRF models that generate the surfaces we start from; we are not releasing these, but you can train these NeRF models and then generate the surfaces using our code. The third is the set of NeRFactor shape pre-training models that learn to simply reproduce the NeRF surfaces; we are not releasing these either, but you can train them easily with our code. The fourth is the set of final NeRFactor models (hotdog, ficus, lego, and drums), which we release in ./pretrained_models.zip.
We release the images rendered from the synthetic scenes above and the real images (vasedeck and pinecone from NeRF) processed to be compatible with our model's data format:

| Rendered Images | Real Images |
| --- | --- |
| ./rendered-images/ | ./real-images/ |
We release the four .blend scenes (modified from what NeRF released): lego, hotdog, ficus, and drums; the training/testing light probes used in the paper; and the training/validation/testing cameras (exactly the same as what NeRF released):

| Scenes | Light Probes | Cameras |
| --- | --- | --- |
| ./blender-scenes.zip | ./light-probes.zip | ./cameras.zip |
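Since the cameras are exactly NeRF's, they follow the `transforms_{train,val,test}.json` convention: a shared horizontal field of view `camera_angle_x` plus one 4x4 camera-to-world matrix per frame. Here is a minimal sketch of recovering the focal length from such a file; the dict below is an illustrative stand-in, not data copied from the release:

```python
import math

# Illustrative stand-in for a NeRF-style transforms_*.json entry;
# the real files ship in ./cameras.zip.
transforms = {
    "camera_angle_x": 0.6911112070083618,  # horizontal FoV in radians
    "frames": [
        {
            "file_path": "./train/r_0",  # hypothetical frame name
            "transform_matrix": [        # camera-to-world, row-major 4x4
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 4.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        }
    ],
}

def focal_from_fov(width_px, camera_angle_x):
    """Focal length in pixels from the horizontal field of view."""
    return 0.5 * width_px / math.tan(0.5 * camera_angle_x)

# For NeRF's 800x800 Blender renders this comes out to about 1111 px.
focal = focal_from_fov(800, transforms["camera_angle_x"])
```

The translation column of each `transform_matrix` is the camera position in world space, which is convenient when placing light probes relative to the cameras.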
We release the video results of NeRFactor, its material editing, the NeRFactor variants used in our ablation studies, and Oxholm & Nishino [2014].
Here are our results on all four synthetic scenes and two real captures:
lego_3072 | hotdog_2163 | drums_3072 | ficus_2188 | vasedeck | pinecone
Here are our material editing results on all four synthetic scenes and two real captures:
lego_3072 | hotdog_2163 | drums_3072 | ficus_2188 | vasedeck | pinecone
Here are the NeRFactor variants' results on all four synthetic scenes.
This model variant fixes the shape to NeRF's, optimizing only the reflectance and illumination:
lego_3072 | hotdog_2163 | drums_3072 | ficus_2188
This model variant uses no smoothness regularization:
lego_3072 | hotdog_2163 | drums_3072 | ficus_2188
This model variant uses microfacet BRDFs instead of the learned BRDFs:
lego_3072 | hotdog_2163 | drums_3072 | ficus_2188
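To make "microfacet BRDF" concrete, here is a generic GGX-style microfacet model (Schlick Fresnel, Smith-Schlick geometry, a fixed F0 of 0.04, and a Lambertian diffuse term). This is a common textbook parameterization sketched for illustration; the exact microfacet variant used in the ablation may differ:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ggx_brdf(n, l, v, albedo=0.8, roughness=0.5, f0=0.04):
    """Scalar GGX microfacet BRDF plus a Lambertian diffuse term.

    A generic sketch, not the paper's exact parameterization.
    n, l, v are unit normal, light, and view directions.
    """
    nl, nv = dot(n, l), dot(n, v)
    if nl <= 0 or nv <= 0:
        return 0.0  # light or viewer below the surface
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half vector
    nh, vh = dot(n, h), dot(v, h)
    a2 = roughness ** 4  # Disney-style alpha = roughness^2, squared
    d = a2 / (math.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)  # GGX NDF
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                   # Schlick Fresnel
    k = (roughness + 1.0) ** 2 / 8.0                        # Smith-Schlick G
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))
    specular = d * f * g / (4.0 * nl * nv)
    diffuse = albedo / math.pi                              # Lambertian
    return diffuse + specular
```

Unlike the learned BRDF, this analytic model exposes only a few parameters (albedo, roughness), which is the trade-off the ablation probes.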
This model variant trains the surface normal and light visibility MLPs from scratch, together with the reflectance and lighting:
lego_3072 | hotdog_2163 | drums_3072 | ficus_2188
Here is how NeRFactor compares with an enhanced version of Oxholm & Nishino [2014]† (implemented by ourselves, as the source code is unavailable) on all four synthetic scenes:
lego_3072 | hotdog_2163 | drums_3072 | ficus_2188
†Both this enhanced version and the original model require ground-truth lighting, which we provided.
Copyright © 2021 Paper Authors. All Rights Reserved.