A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis

Yohan Poirier-Ginter, Alban Gauthier, Julien Philip, Jean-François Lalonde, George Drettakis
| Webpage | Paper | Video | Other GRAPHDECO Publications | FUNGRAPH project page | Datasets | Viewer |

Teaser image

This is the main repository of our work "A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis". To use our method on your own scenes, you will first need to use our single-view relighting network to transform single-illumination captures into generated multi-illumination captures. Instructions are available in the secondary repository. Note that this can only be expected to work well in indoor scenes.

For all of our scenes, we provide pre-generated relightings for all views. You can use these to train gaussian splatting independently, and try your own modifications without touching the other repository.

Alternatively, you can use the viewer to inspect pretrained scenes; for Windows it can be downloaded directly here. For Linux you will need to compile it from source; refer to the instructions in the Gaussian Splatting repository. Instructions to download and view pretrained scenes are available below.

Installation

First clone the repo with:

git clone --recursive https://gitlab.inria.fr/ypoirier/gaussian-splatting-relighting.git

Then create the environment. We recommend keeping this environment separate from the one you use for relighting. This can be done with:

conda create --name gsr python=3.9.7
conda activate gsr
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install ./submodules/simple-knn
pip install ./submodules/diff-gaussian-rasterization

Note that you must install torch built against a CUDA version that matches your driver: replace the https://download.pytorch.org/whl/cu118 URL with the correct one for your version (following the instructions on https://pytorch.org/).
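As an optional sanity check, you can compare the CUDA version your driver supports against the one this PyTorch build was compiled with:

nvidia-smi | head -n 3
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"

The first command prints the driver's maximum supported CUDA version; the second should print a matching (or older) CUDA version and True.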

On Windows, please use Visual Studio 2019 and not 2022.

Training

Launch training with:

python train.py -s colmap/real/kettle --viewer

You can remove the --viewer flag if the interactive viewer isn't needed. To use the viewer, download the viewer files and launch SIBR_remoteGaussian_app_rwdi.exe while a scene is training.

The output files will be saved in output/kettle/00 by default.

Training requires a multi-illumination capture with the following structure:

colmap/**/$SCENE_NAME/train
   ├── relit_images/
   ├── sparse/
   └── ...

where sparse/ is the COLMAP output and relit_images/ contains images named 0000_dir_00.png, 0000_dir_01.png, and so on.
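If you generate the relit images yourself, a quick way to confirm the layout is to count the images per light direction. A minimal sketch, reusing the kettle scene path from above; the direction indices are only examples:

SCENE=colmap/real/kettle/train
for d in 00 01 02; do
    echo "dir_$d: $(ls "$SCENE"/relit_images/*_dir_"$d".png 2>/dev/null | wc -l) images"
done

Each direction should report the same count, one image per training view.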

You can download our captures with:

bash download_real_datasets.sh
bash download_real_samples.sh

The first command downloads the COLMAP captures, while the second downloads pre-generated images produced with our ControlNet model and places them in the relit_images directory.

Launch the viewer on a finished training

You can "resume" a finished training to inspect the scene in the viewer, even after the training is complete:

MODEL=output/kettle/00
python train.py -m $MODEL --resume --viewer

You can download our pre-trained scenes with:

bash download_pretrained_scenes.sh

Some tips for best quality

Our method works best when the camera rotation is not too large (e.g., a 360° rotation around an object might not work so well). Large amounts of overlap between images also appear to be beneficial. For instance, the synthetic scenes have a slow camera motion where every camera pose looks at the same point, resulting in good convergence with few floaters.

On the paintgun scene, we trained using the first 100 images only. This can be done using the --max_images 100 flag, as shown below.
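A sketch of the corresponding command, assuming the paintgun capture sits under colmap/real/ like the other real scenes:

python train.py -s colmap/real/paintgun --max_images 100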

Rendering a video

After training, you can render with:

MODEL=output/kettle/00 
bash render_relighting_video.sh $MODEL

Synthetic scenes

In the paper, we performed evaluation using synthetic scenes, which made it easier to generate test data. You can download our synthetic training data with:

bash download_synthetic_datasets.sh
bash download_synthetic_samples.sh

You can then train and render every scene with the following command:

bash train_all_synthetic_scenes.sh

You can also render videos for a few relit light directions, as well as light-sweep videos, with:

bash render_synthetic_videos.sh

We performed evaluation at 768x512; the synthetic-scene data at 1536x1024 is also available on the website.

BibTeX

@article{10.1111:cgf.15147,
    journal = {Computer Graphics Forum},
    title = {{A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis}},
    author = {Poirier-Ginter, Yohan and Gauthier, Alban and Philip, Julien and Lalonde, Jean-François and Drettakis, George},
    year = {2024},
    publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
    ISSN = {1467-8659},
    DOI = {10.1111/cgf.15147}
}

Funding and Acknowledgments

This research was funded by the ERC Advanced grant FUNGRAPH No 788065 http://fungraph.inria.fr/, supported by NSERC grant DGPIN 2020-04799 and the Digital Research Alliance Canada. The authors are grateful to Adobe and NVIDIA for generous donations, and the OPAL infrastructure from Université Côte d’Azur. Thanks to Georgios Kopanas and Frédéric Fortier-Chouinard for helpful advice.
