MagFace: A Universal Representation for Face Recognition and Quality Assessment
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. Oral presentation.
Project Page: https://irvingmeng.github.io/projects/magface/
Paper: arXiv
Zhihu write-up (in Chinese): https://zhuanlan.zhihu.com/p/475775106
A toy example: examples.ipynb
Poster: GoogleDrive, BaiduDrive (code: dt9e)
Beamer: GoogleDrive, BaiduDrive (code: c16b)
Presentation:
- CVPR 5-minute presentation.
- Will release a detailed version later.
NOTE: The original code was implemented in a private codebase and will not be released. This repo is an official but abridged version. See the todo list for plans.
@inproceedings{meng2021magface,
title={{MagFace}: A universal representation for face recognition and quality assessment},
author={Meng, Qiang and Zhao, Shichao and Huang, Zhida and Zhou, Feng},
booktitle={CVPR},
year=2021
}
Parallel Method | Loss | Backbone | Dataset | Split FC? | Model | Log File |
---|---|---|---|---|---|---|
DDP | MagFace | iResNet100 | MS1MV2 | Yes | GoogleDrive, BaiduDrive (code: wsw3) | Trained with the original code |
DDP | MagFace | iResNet50 | MS1MV2 | Yes | BaiduDrive (code: idkx) | BaiduDrive (code: 66j1) |
DDP | Mag-CosFace | iResNet50 | MS1MV2 | Yes | BaiduDrive (code: rg2w) | BaiduDrive (code: ejec) |
DP | MagFace | iResNet50 | MS1MV2 | No | BaiduDrive (code: tvyv) | BaiduDrive (code: hpbt) |
DP | MagFace | iResNet18 | CASIA-WebFace | No | BaiduDrive (code: fkja) | BaiduDrive (code: qv2x) |
DP | ArcFace | iResNet18 | CASIA-WebFace | No | BaiduDrive (code: wq2w) | BaiduDrive (code: 756e) |
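If you want to inspect one of the released checkpoints, here is a minimal loading sketch; the filename and checkpoint keys are assumptions, so adjust them to the actual file:

```python
import torch

# Hypothetical sketch: load a released checkpoint; the exact key layout
# depends on how the checkpoint was saved, so adjust as needed.
ckpt = torch.load("magface_epoch_00025.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt)

# Checkpoints trained under DataParallel/DistributedDataParallel prefix
# parameter names with "module."; strip it before loading into a bare model.
state = {k.replace("module.", "", 1): v for k, v in state.items()}
```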
Steps to evaluate models on lfw/cfp/agedb:
- download the data from GDrive or BaiduDrive (code: z7hs), `cd eval/eval_recognition/`, and extract the data in that folder.
- evaluate the model with `eval.sh` (e.g., `./eval.sh magface_epoch_00025.pth official 100`).
Use `eval_ijb.sh` for evaluation on IJB-B (GDrive or BaiduDrive, code: iiwa) and IJB-C (GDrive or BaiduDrive, code: q6md). Please apply for permission from NIST before using these datasets.
Steps to calculate face qualities (examples.ipynb is a toy example):
- extract features from faces with `inference/gen_feat.py`.
- calculate feature magnitudes with `np.linalg.norm()`, as in the sketch below.
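A minimal sketch of the magnitude step, assuming the features were saved as an (N, 512) NumPy array named `feats.npy` (the actual output format of `gen_feat.py` may differ):

```python
import numpy as np

# Assumed setup: inference/gen_feat.py has produced an (N, 512) feature
# matrix saved as feats.npy; the filename is illustrative.
feats = np.load("feats.npy")

# MagFace uses the L2 norm (magnitude) of each feature as its quality
# score: a larger magnitude indicates a higher-quality face.
qualities = np.linalg.norm(feats, axis=1)
print(qualities.min(), qualities.max())
```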
Steps to plot the error-versus-reject curve:
- prepare the features (as in the recognition step).
- `cd eval/eval_quality` and run `eval_quality.sh` (e.g., `./eval_quality.sh lfw`).

Note: the model used in the quality assessment section of the paper can be found here.
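For intuition, here is a minimal sketch of the curve's logic, not the repo's `eval_quality.sh` implementation. The function name, the per-pair quality (e.g., the smaller magnitude of the two faces in a pair), and the fixed threshold are all illustrative assumptions:

```python
import numpy as np

def error_versus_reject(scores, labels, qualities, ratios, thr):
    """Hypothetical sketch: reject the lowest-quality pairs first and
    report the false-non-match rate on the pairs that remain."""
    order = np.argsort(qualities)            # ascending quality
    errors = []
    for r in ratios:
        keep = order[int(r * len(order)):]   # drop the worst fraction r
        s, y = scores[keep], labels[keep]
        errors.append(np.mean(s[y == 1] < thr))  # genuine pairs below threshold
    return errors
```

A good quality measure makes this curve decrease: rejecting low-magnitude faces first should remove the hardest verification errors.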
- Install the requirements.
- Align images to 112x112 pixels with 5 facial landmarks (code).
- Prepare a training list with the format `imgname 0 id 0` on each line (`id` starts from 0), as indicated here; see the sketch after this list. In the paper, we employ MS1MV2 as the training dataset, which can be downloaded from InsightFace (MS1M-ArcFace in DataZoo). Use `rec2image.py` to extract the images.
- Modify the parameters in `run.sh`/`run_dist.sh`/`run_dist_cos.sh` and run it.
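A minimal sketch of building such a training list, assuming the extracted images sit in one sub-directory per identity (the paths and filenames are illustrative):

```python
import os

# Hypothetical helper for the "imgname 0 id 0" list format above.
# Assumes images extracted by rec2image.py are grouped into one
# sub-directory per identity, with ids assigned from 0 upward.
root = "data/ms1mv2_images"
with open("train.list", "w") as f:
    for label, person in enumerate(sorted(os.listdir(root))):
        person_dir = os.path.join(root, person)
        for img in sorted(os.listdir(person_dir)):
            f.write(f"{os.path.join(person_dir, img)} 0 {label} 0\n")
```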
Note: use PyTorch > 1.7 for this feature. The code is mainly based on torchshard from Kaiyu Yue.
How to run:
- Update the NCCL info (which can be found with the `ifconfig` command) and the port info in `train_dist.py`; see the sketch after this list.
- Set the number of GPUs here.
- [Optional. Not tested yet!] If training with multiple machines, modify the node number.
- [Optional. Help needed, as NaN can occur during training.] Enable fp16 training by setting `--fp16 1` in `run/run_dist.sh`.
- Run `run/run_dist.sh`.
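For orientation, a minimal sketch of the distributed setup this implies; the interface name, address, port, and environment variables are illustrative, and the actual variables live in `train_dist.py` and may differ:

```python
import os
import torch
import torch.distributed as dist

# Hypothetical sketch of per-process DDP initialization.
# NCCL_SOCKET_IFNAME should name the network interface reported by
# `ifconfig` (e.g., eth0); host/port must match on every rank.
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")
dist.init_process_group(
    backend="nccl",
    init_method="tcp://127.0.0.1:23456",
    world_size=int(os.environ["WORLD_SIZE"]),
    rank=int(os.environ["RANK"]),
)
# Pin each process to one GPU.
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
```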
Parallel training (Sec. 5.1 in the ArcFace paper) can greatly speed up training and reduce GPU memory consumption. Here are some results.
Parallel Method | Float Type | Backbone | GPU | Batch Size | FC Size | Split FC? | Avg. Throughput (images/sec) | Memory (MiB) |
---|---|---|---|---|---|---|---|---|
DP | FP32 | iResNet50 | V100 x 8 | 512 | 85742 | No | 1099.41 | 8681 |
DDP | FP32 | iResNet50 | V100 x 8 | 512 | 85742 | Yes | 1687.71 | 8137 |
DDP | FP16 | iResNet50 | V100 x 8 | 512 | 85742 | Yes | 3388.66 | 5629 |
DP | FP32 | iResNet100 | V100 x 8 | 512 | 85742 | No | 612.40 | 11825 |
DDP | FP32 | iResNet100 | V100 x 8 | 512 | 85742 | Yes | 1060.16 | 10777 |
DDP | FP16 | iResNet100 | V100 x 8 | 512 | 85742 | Yes | 2013.90 | 7319 |
In practice, one may want to finetune an existing model for either a performance boost or quality-aware ability. This is practicable (verified in our scenario) but requires a few modifications. Here are my recommended steps:
- Generate features from a few samples with the existing model and calculate their magnitudes (see the sketch after this list).
- Assume the magnitudes are distributed in `[x1, x2]`; then modify the parameters to satisfy `l_a < x1` and `u_a > x2`.
- In our scenario, we have a model trained with ArcFace which produces magnitudes around 1. `[l_a, u_a, l_m, u_m, l_g] = [1, 51, 0.45, 1, 5]` is a good choice.
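A minimal sketch of the first two steps, assuming features saved as `feats.npy`; the filename and the safety margins on the bounds are illustrative, not recommendations:

```python
import numpy as np

# Hypothetical check before finetuning: measure the magnitude range
# [x1, x2] of features produced by the existing model (e.g., features
# saved by inference/gen_feat.py).
feats = np.load("feats.npy")
mags = np.linalg.norm(feats, axis=1)
x1, x2 = mags.min(), mags.max()

# The MagFace bounds must satisfy l_a < x1 and u_a > x2; the slack
# chosen below is only an illustration.
l_a, u_a = float(np.floor(x1)) - 1, float(np.ceil(x2)) + 1
print(f"magnitudes in [{x1:.2f}, {x2:.2f}] -> e.g. l_a={l_a}, u_a={u_a}")
```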
- PyTorch: FaceX-Zoo from JDAI.
- PyTorch: pt-femb-face-embeddings from Jonas Grebe.
TODO list:
- add toy examples and release models
- migrate basic code from the private codebase
- add beamer (after the ICCV 2021 deadline)
- test the basic code
- add presentation
- migrate parallel training
- release mpu (Kaiyu Yue, in April), renamed to torchshard
- test parallel training
- add evaluation code for recognition
- add evaluation code for quality assessment
- add fp16
- test fp16
- extend the idea to CosFace (proved)
- implement Mag-CosFace
20210909: add evaluation code for quality assessment.
20210723: add evaluation code for recognition.
20210610: [IMPORTANT] Mag-CosFace + DDP is implemented and tested!
20210601: Mag-CosFace is theoretically proved. Please check the updated arXiv paper.
20210531: add the 5-minute presentation.
20210513: add instructions for finetuning with MagFace.
20210430: fix bugs in parallel training.
20210427: [IMPORTANT] parallel training is now available (credits to Kaiyu Yue).
20210331: test fp32 + parallel training and release a model/log.
20210325.2: add code for parallel training as well as fp16 training (not tested).
20210325: the basic training code is tested! Please find the trained model and logs in the Model Zoo table.
20210323: add requirements and beamer presentation; add debug logs.
20210315: fix Figure 2 and add a GDrive link for the checkpoint.
20210312: add the basic code (not tested yet).
20210312: add paper/poster/model and a toy example.
20210301: add ReadMe and license.