Official implementation of the paper "LangSplat: 3D Language Gaussian Splatting"


LangSplat

Minghan Qin*, Wanhua Li*†, Jiawei Zhou*, Haoqian Wang†, Hanspeter Pfister
(* indicates equal contribution, † means Co-corresponding author)
| Webpage | Full Paper | Video |
| Datasets with language feature | Pre-trained Models |

Teaser image

This repository contains the official authors' implementation associated with the paper "LangSplat: 3D Language Gaussian Splatting" (arXiv 2023), which can be found here. We further provide the preprocessed 3D-OVS datasets with language features, as well as pre-trained models.

BibTeX

@article{qin2023langsplat,
  title={LangSplat: 3D Language Gaussian Splatting},
  author={Qin, Minghan and Li, Wanhua and Zhou, Jiawei and Wang, Haoqian and Pfister, Hanspeter},
  journal={arXiv preprint arXiv:2312.16084},
  year={2023}
}

Cloning the Repository

The repository contains submodules, so please check it out with

# SSH
git clone [email protected]:minghanqin/LangSplat.git --recursive

or

# HTTPS
git clone https://github.com/minghanqin/LangSplat.git --recursive

Overview

The codebase has 3 main components:

  • A PyTorch-based optimizer that produces a LangSplat model from SfM datasets with language feature inputs
  • A scene-wise language autoencoder that alleviates the substantial memory demands imposed by explicit modeling
  • A script to help you turn your own images into optimization-ready SfM datasets with language features

The components have been tested on Ubuntu Linux 18.04. Instructions for setting up and running each of them are found in the sections below.

Optimizer

The optimizer uses PyTorch and CUDA extensions in a Python environment to produce trained models.

Hardware Requirements

  • CUDA-ready GPU with Compute Capability 7.0+
  • 24 GB VRAM (to train to paper evaluation quality)

Software Requirements

  • Conda (recommended for easy setup)
  • C++ Compiler for PyTorch extensions
  • CUDA SDK 11 for PyTorch extensions (we used 11.8)
  • C++ Compiler and CUDA SDK must be compatible

Setup

Environment Setup

Our default, provided install method is based on Conda package and environment management:

conda env create --file environment.yml
conda activate langsplat

QuickStart

Download the pretrained model to output/, then simply run

python render.py -m output/$CASENAME --include_feature

Processing your own Scenes

Before getting started

First, put your images into the data directory.

<dataset_name>
|---input
|   |---<image 0>
|   |---<image 1>
|   |---...
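
A minimal sketch of creating this layout from the shell ("my_scene" is a hypothetical dataset name; substitute your own):

```shell
# "my_scene" is a hypothetical dataset name; replace it with your own.
mkdir -p my_scene/input
# Copy your captured images into the input folder, for example:
# cp /path/to/photos/*.jpg my_scene/input/
```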

Second, you need to obtain the following dataset format and a pre-trained RGB model by following the 3dgs repository.

<dataset_name>
|---images
|   |---<image 0>
|   |---<image 1>
|   |---...
|---input
|   |---<image 0>
|   |---<image 1>
|   |---...
|---output
|   |---<dataset_name>
|   |   |---point_cloud/iteration_30000/point_cloud.ply
|   |   |---cameras.json
|   |   |---cfg_args
|   |   |---chkpnt30000.pth
|   |   |---input.ply
|---sparse
    |---0
        |---cameras.bin
        |---images.bin
        |---points3D.bin
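
Before moving on, it can help to verify that the prepared scene matches this layout. The helper below is a hypothetical sanity check, not part of the repository; `my_scene` stands in for your `<dataset_name>`:

```python
# Sanity-check sketch (not part of the repo): list which of the files
# LangSplat expects are missing from a prepared scene directory.
from pathlib import Path

def missing_scene_files(root):
    """Return the expected paths (relative to root) that do not exist."""
    root = Path(root)
    expected = [
        "images",
        "input",
        "sparse/0/cameras.bin",
        "sparse/0/images.bin",
        "sparse/0/points3D.bin",
        f"output/{root.name}/point_cloud/iteration_30000/point_cloud.ply",
        f"output/{root.name}/cameras.json",
        f"output/{root.name}/chkpnt30000.pth",
    ]
    return [p for p in expected if not (root / p).exists()]

if __name__ == "__main__":
    # "my_scene" is a hypothetical dataset directory.
    print(missing_scene_files("my_scene"))
```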

Environment setup.

Please install segment-anything-langsplat and download the checkpoints of SAM from here to ckpts/.

Pipeline

Follow process.sh to train LangSplat on your own scenes.

  • Step 1: Generate the language features of the scenes. Put the image data into the "input" directory under <dataset_name>/, then run the following command.

    python preprocess.py --dataset_path $dataset_path 
    
  • Step 2: Train the autoencoder and obtain the lower-dimensional features.

    TBD
    

    Our model expects the following dataset structure in the source path location:

    <dataset_name>
    |---images
    |   |---<image 0>
    |   |---<image 1>
    |   |---...
    |---language_feature
    |   |---00_f.npy
    |   |---00_s.npy
    |   |---...
    |---language_feature_dim3
    |   |---00_f.npy
    |   |---00_s.npy
    |   |---...
    |---output
    |   |---<dataset_name>
    |   |   |---point_cloud/iteration_30000/point_cloud.ply
    |   |   |---cameras.json
    |   |   |---cfg_args
    |   |   |---chkpnt30000.pth
    |   |   |---input.ply
    |---sparse
        |---0
            |---cameras.bin
            |---images.bin
            |---points3D.bin
    
  • Step 3: Train LangSplat.

    python train.py -s $dataset_path -m output/${casename} --start_checkpoint $dataset_path/output/$casename/chkpnt30000.pth --feature_level ${level}
    
  • Step 4: Render LangSplat.

    python render.py -s $dataset_path -m output/${casename} --feature_level ${level}
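
The autoencoder code for Step 2 is still marked TBD above. As a rough, unofficial sketch of the idea described in the paper (a scene-wise MLP that compresses high-dimensional CLIP language features into 3-dimensional scene-specific latents), assuming PyTorch and made-up layer sizes:

```python
# Illustrative sketch only -- NOT the released autoencoder. It shows the idea
# from the paper: compress 512-d CLIP features to 3-d scene-specific latents,
# then reconstruct them. Hidden sizes here are hypothetical.
import torch
import torch.nn as nn

class SceneLanguageAutoencoder(nn.Module):
    def __init__(self, in_dim=512, latent_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# One reconstruction step on random stand-in features
# (real inputs would come from the *_f.npy language feature files).
model = SceneLanguageAutoencoder()
feats = torch.randn(8, 512)
recon, latent = model(feats)
loss = nn.functional.mse_loss(recon, feats)
loss.backward()
```

The 3-dimensional latents correspond to the `language_feature_dim3` files in the structure above; the official implementation may differ in architecture and training details.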
    

TODO list:

  • release the code of the optimizer
  • release the code of the autoencoder
  • release the code of the segment-anything-langsplat
  • update the arXiv link
  • release the preprocessed dataset and the pretrained model
  • release more preprocessed datasets and pretrained models (coming soon)

This project is still under development. Please feel free to raise issues or submit pull requests to contribute to our codebase.
