GFN-PG

Code for the ICML 2024 paper 'GFlowNet Training by Policy Gradients'

Click here to download the sEH dataset.

The code is adapted from torchgfn but is not compatible with it. Please make sure torchgfn is not installed in your Python environment when running this code, to avoid unexpected imports.
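As a minimal sketch (not part of this repository), you can verify before running that the `torchgfn` distribution is absent from your environment, since an installed copy could shadow this repo's adapted modules:

```python
from importlib import metadata

def torchgfn_installed() -> bool:
    """Return True if the torchgfn distribution is present in this environment."""
    try:
        metadata.version("torchgfn")
        return True
    except metadata.PackageNotFoundError:
        return False

if __name__ == "__main__":
    if torchgfn_installed():
        # Conflicting package found: ask the user to remove it first.
        raise SystemExit("Please run `pip uninstall torchgfn` before using this code.")
```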

Citation

If you find our code useful, please consider citing our paper in your publications. We provide a BibTeX entry below.

@InProceedings{pmlr-v235-niu24c,
  title     = {{GF}low{N}et Training by Policy Gradients},
  author    = {Niu, Puhua and Wu, Shili and Fan, Mingzhou and Qian, Xiaoning},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {38344--38380},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/niu24c/niu24c.pdf},
  url       = {https://proceedings.mlr.press/v235/niu24c.html},
  abstract  = {Generative Flow Networks (GFlowNets) have been shown effective to generate combinatorial objects with desired properties. We here propose a new GFlowNet training framework, with policy-dependent rewards, that bridges keeping flow balance of GFlowNets to optimizing the expected accumulated reward in traditional Reinforcement-Learning (RL). This enables the derivation of new policy-based GFlowNet training methods, in contrast to existing ones resembling value-based RL. It is known that the design of backward policies in GFlowNet training affects efficiency. We further develop a coupled training strategy that jointly solves GFlowNet forward policy training and backward policy design. Performance analysis is provided with a theoretical guarantee of our policy-based GFlowNet training. Experiments on both simulated and real-world datasets verify that our policy-based strategies provide advanced RL perspectives for robust gradient estimation to improve GFlowNet performance. Our code is available at: github.com/niupuhua1234/GFN-PG.}
}