
Code for "Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation", AAAI 2021

Umang Gupta, Aaron Ferber, Bistra Dilkina, and Greg Ver Steeg. “Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation.” In: Thirty-Fifth AAAI Conference on Artificial Intelligence. 2021.

To cite the paper, please use the following BibTeX:

@article{gupta2021controllable,
      title={{Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation}},
      author={Umang Gupta and Aaron Ferber and Bistra Dilkina and Greg Ver Steeg},
      year={2021},
      eprint={2101.04108},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Reproducing Results from the Paper

Requirements

  • All the Python package requirements are listed in requirements.txt. Please create an environment with those packages before running the commands below (see the setup sketch after this list).
  • Our code is tested with Python 3.8; however, it should work with Python 3.6 or higher. (Also see the note about LAFTR and MIFR at the bottom, which require Python 3.6.)
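A minimal setup sketch, assuming a Unix-like shell with python3 (including the venv module) and pip available; the environment name .venv is arbitrary:

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt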

Main Experiments

  • To run the main experiments and generate the parity-accuracy curves with different loss parameters, run src/shell/run_adult.sh and src/shell/run_health.sh for the UCI Adult and Heritage Health datasets, respectively.

  • Once the above commands have finished, run python3 -m src.scripts.plot to plot the parity-accuracy curve (Fig. 2) and the variation with $\beta$ (Fig. 3). This will generate the figures as well as the area-over-the-curve tables for all the methods, i.e., Tables 2, 5, and 6 and Fig. 9. The full sequence is sketched below.
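Putting these steps together, assuming the commands are run with bash from the repository root:

bash src/shell/run_adult.sh      # UCI Adult sweeps
bash src/shell/run_health.sh     # Heritage Health sweeps
python3 -m src.scripts.plot      # Fig. 2, Fig. 3, and the area-over-the-curve tables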

Fine-tuning Experiments

  • To reproduce the fine-tuning experiment, run src/shell/run_adult_finetune.sh. This expects a statefile that is generated by running FCRL with parameters lambda=0.01 and beta=0.005. If you have run the main experiments, it should already be available. Otherwise, you can generate it by running FCRL with those parameters using the following command.
python3 -m src.scripts.main -c config/config_fcrl.py --exp_name test --data.name adult  \
    --model.arch_file src/arch/adult/adult_fcrl.py --model.lambda_ 0.01 --model.beta 0.005 \
    --result_folder result/adult/fcrl/l=0.01_b=0.005 --train.max_epoch 200 --device <device>
  • To generate the plots for fine-tuning, you can then run python3 -m src.scripts.plot. This should reproduce Fig. 4. The end-to-end sequence is sketched below.
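End to end, assuming bash is run from the repository root and the statefile above exists:

bash src/shell/run_adult_finetune.sh   # fine-tunes from the l=0.01_b=0.005 statefile
python3 -m src.scripts.plot            # reproduces Fig. 4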

Ablation Experiments

  • To run the ablation experiments (i.e., Figs. 5a and 5b), run src/shell/run_ablation.sh.
  • To generate the plots, run python3 -m src.scripts.plot. This should reproduce Fig. 5; the two commands are shown below.
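As before, assuming bash from the repository root:

bash src/shell/run_ablation.sh
python3 -m src.scripts.plot    # reproduces Fig. 5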

Experiments on predicting c from z

  • To reproduce the experiments related to predicting c from z, with and without normalization, run src/shell/run_invariance.sh.
  • This assumes that you have already run the main experiments. If you just want these results without running the main experiments, please execute src/shell/adult/run_adv_forgetting_adult.sh and src/shell/adult/run_maxent_adult.sh before the above command.
  • Plots can be generated by running python3 -m src.scripts.plot; this should reproduce Fig. 8. The full sequence is sketched after this list.
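For a fresh checkout, the full sequence would be as follows (assuming bash from the repository root; skip the first two commands if you have already run the main experiments):

bash src/shell/adult/run_adv_forgetting_adult.sh
bash src/shell/adult/run_maxent_adult.sh
bash src/shell/run_invariance.sh       # predicts c from z, with and without normalization
python3 -m src.scripts.plot            # reproduces Fig. 8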

NOTE:

  • You may have to comment out some parts of the src/scripts/plot.py utility if you want to generate only some of the plots. Please check that file to see which lines should be commented out.
  • For LAFTR and MIFR (lag-fairness), we use their official implementations with some modifications to match our data pre-processing and architecture. We have provided the modified code in the respective folders. We use tensorflow-cpu and Python 3.6 to generate their results. See pkg_install.sh or requirements.txt in those folders.
  • The code requires wandb; however, we have disabled it, so it will only generate TensorBoard logs. No logs will be uploaded.

More Details

Details of FCRL

The above instructions should help you reproduce the experiments reported in both the main paper and the supplementary material. If you are interested in how FCRL is implemented, please read below:

  • The model file for FCRL is located in src/models/fcrl_model.py and the trainer code is in src/trainers/fcrl.py. You can see the architectures in src/arch/<data>/<data>_fcrl.py, where <data> is adult for UCI Adult and health for Heritage Health.
  • In the code, we use separate loss coefficients lambda and beta for I(x:z|c) and I(z:x); however, we vary them as mentioned in the paper (see src/shell/<data>/run_fcrl_<data>.sh, and the sweep sketch below).
  • Both the trainer and the model are loaded via src/scripts/main.py. Other models can also be run by calling main.py. See the shell folder for example commands for the other methods.

You can run FCRL with the following command:

DATA="adult"
python3 -m python3 -m src.scripts.main -c config/config_fcrl.py --exp_name test \
    --data.name $DATA --model.arch_file  src/arch/"$DATA"/"DATA"_fcrl.py \
    --result_folder result/$DATA/fcrl --train.max_epoch 200 --device cpu \
    --model.lambda_ <lambda> --model.beta <beta>
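To trace out a parity-accuracy curve, the command above is repeated over a grid of loss coefficients. A minimal sweep sketch follows; the grid values here are illustrative only, and the exact (lambda, beta) combinations used in the paper are in src/shell/<data>/run_fcrl_<data>.sh:

DATA="adult"
# illustrative grid only; see src/shell/<data>/run_fcrl_<data>.sh for the paper's values
for LAMBDA in 0.01 0.1; do
    for BETA in 0.005 0.01 0.05; do
        python3 -m src.scripts.main -c config/config_fcrl.py --exp_name test \
            --data.name $DATA --model.arch_file src/arch/"$DATA"/"$DATA"_fcrl.py \
            --result_folder result/$DATA/fcrl/l="$LAMBDA"_b="$BETA" \
            --train.max_epoch 200 --device cpu \
            --model.lambda_ $LAMBDA --model.beta $BETA
    done
done

The result_folder naming follows the <method>/<param details> convention that the evaluation scripts (see the scripts folder below) rely on to locate embeddings.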

scripts folder

Most of the Python entry-point scripts are in the src/scripts folder.

  • main.py is for running all the representation-learning algorithms other than lag-fairness and LAFTR.
  • plot.py is for plotting the results.
  • lp.py is used by plot.py to get the optimal parity-accuracy trade-off as discussed in the paper.
  • eval_embeddings.py is used to evaluate embeddings, i.e., it trains different classifiers and computes accuracy, parity, and other metrics. It relies on the <method>/<param details> folder structure to load the correct embeddings and produce results. See the shell scripts and the main experiments above to understand how to structure the folders, or use chunks of this code to write your own evaluation.
  • eval_invariance_config.py is similar to eval_embeddings.py but predicts c from the representations, which is a common metric used by some adversarial learning papers to evaluate invariance/fairness.
