PyTorch code for the arXiv paper "Semi-parametric Makeup Transfer via Semantic-aware Correspondence"
- Ubuntu 18.04
- Anaconda (Python, NumPy, PIL, etc.)
- PyTorch 1.7.1
- torchvision 0.8.2
opt.dataroot=MT-Dataset
├── images
│ ├── makeup
│ └── non-makeup
├── parsing
│ ├── makeup
│ └── non-makeup
├── makeup.txt
├── non-makeup.txt
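Given this layout, image/parsing pairs can be enumerated with a small helper. This is only a sketch: `list_pairs` and its arguments are illustrative, not part of the repository's code.

```python
from pathlib import Path

def list_pairs(dataroot, split_file, domain):
    """Pair each image named in a split file (e.g. 'makeup.txt') with its
    parsing map, following the MT-Dataset layout shown above."""
    root = Path(dataroot)
    names = [n.strip() for n in (root / split_file).read_text().splitlines()
             if n.strip()]
    return [(root / "images" / domain / n, root / "parsing" / domain / n)
            for n in names]
```

The same helper works for the test splits by passing e.g. `'makeup_test.txt'` as `split_file`.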
- Use images of the MT dataset:
opt.dataroot
├── images
│ ├── makeup
│ └── non-makeup
├── parsing
│ ├── makeup
│ └── non-makeup
├── makeup_test.txt
├── non-makeup_test.txt
- Use arbitrary images:
opt.dataroot
├── images
│ ├── makeup
│ └── non-makeup
├── makeup_test.txt
├── non-makeup_test.txt
Facial masks for arbitrary images are obtained with a face-parsing model (borrowed from https://github.com/zllrunning/face-parsing.PyTorch)
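Once a parsing map has been predicted, per-region binary masks can be split out as below. This is a sketch; the label ids in the test are illustrative and depend on the label set of the face-parsing model you use.

```python
import numpy as np

def region_masks(parsing, label_ids):
    """Turn an integer parsing map (H x W) into binary masks, one per region.
    `label_ids` maps region names to the label indices used by the
    face-parsing model."""
    return {name: (parsing == idx).astype(np.uint8)
            for name, idx in label_ids.items()}
```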
python train.py --phase train
- Check the file 'options/demo_options.py' and change the corresponding configs if needed
- Create the folder '/checkpoints/makeup_transfer/'
- Download the pre-trained model from Google Drive and put it into '/checkpoints/makeup_transfer/'
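The two steps above can be done from the shell, assuming the checkpoint directory lives under the repo root; the weight filename below is a placeholder, use whatever file the Google Drive download contains.

```shell
# create the checkpoint directory (relative to the repo root)
mkdir -p checkpoints/makeup_transfer
# after downloading, place the weights inside, e.g.:
#   checkpoints/makeup_transfer/<downloaded_model>.pth
ls checkpoints/makeup_transfer
```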
python demo.py --demo_mode normal
Notice:
- Available demo modes: 'normal', 'interpolate', 'removal', 'multiple_refs', 'partly'
- For part-specific makeup transfer (opt.demo_mode='partly'), make sure there are at least 3 reference images.
- For interpolation between multiple references, make sure there are at least 4 reference images.
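The reference-count requirements in the notice can be validated up front with a small check. This is a sketch: `check_refs` is not part of the repository, and mapping "interpolation between multiple references" to the 'multiple_refs' mode is an assumption.

```python
def check_refs(demo_mode, n_refs):
    """Raise if `n_refs` reference images are too few for the given demo mode.
    Minimums follow the notice above; 'multiple_refs' -> 4 is assumed to be
    the multi-reference interpolation mode."""
    minimums = {"partly": 3, "multiple_refs": 4}
    needed = minimums.get(demo_mode, 1)
    if n_refs < needed:
        raise ValueError(
            f"demo_mode '{demo_mode}' needs at least {needed} "
            f"reference images, got {n_refs}")
```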
python demo_general.py --beyond_mt
- Interpolation from light to heavy
- Interpolation between multiple references
- Transfer different parts from different references
- Normal Images
- Wild Images