Release v0.1.0
Released by cloneofsimo · 08 Jan 14:20
Better Pivotal Tuning, CLIP evaluation, SVD distillation!
Many tricks and features went into this release:
- Extended latent textual inversion
- Pivotal tuning with a staged learning-rate schedule (see the first sketch below)
- Norm-prior Bayesian learning
- Mask-conditioned score estimation (+ in-pipeline face detector)
- Sub-1 denoising
- CLIP score evaluator (text alignment score, image alignment score; see the second sketch below)
- xformers support (not in the lorpt CLI yet)
- wandb logging
- Better caption dataset
- Distilling a fully trained model into LoRA: SVD distillation CLI (see the third sketch below)
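Pivotal tuning here means first inverting the concept into new token embeddings, then training LoRA weights around that pivot, with each stage getting its own learning rate. Below is a minimal sketch of such a staged schedule; `text_encoder`, `unet_lora_params`, and `train_step` are placeholders for illustration, not this repo's actual CLI or function names.

```python
import itertools
import torch

def pivotal_tune(text_encoder, unet_lora_params, dataloader,
                 ti_lr=5e-4, lora_lr=1e-4, ti_steps=1000, lora_steps=1000):
    """Hypothetical two-stage pivotal-tuning loop with per-stage learning rates."""
    # Stage 1: textual inversion -- optimize only the token embeddings.
    emb_params = text_encoder.get_input_embeddings().parameters()
    opt = torch.optim.AdamW(emb_params, lr=ti_lr)
    for _, batch in zip(range(ti_steps), itertools.cycle(dataloader)):
        loss = train_step(batch)          # placeholder: denoising loss
        loss.backward(); opt.step(); opt.zero_grad()

    # Stage 2: freeze embeddings, train LoRA weights with a (typically smaller) learning rate.
    opt = torch.optim.AdamW(unet_lora_params, lr=lora_lr)
    for _, batch in zip(range(lora_steps), itertools.cycle(dataloader)):
        loss = train_step(batch)
        loss.backward(); opt.step(); opt.zero_grad()
```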
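The CLIP evaluator reports a text alignment score (generated image vs. prompt) and an image alignment score (generated image vs. a reference image). A minimal sketch of how those two scores can be computed with the Hugging Face `transformers` CLIP model; the helper name and signature are illustrative, not the evaluator shipped in this release.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_scores(generated: Image.Image, reference: Image.Image, prompt: str):
    """Hypothetical helper: cosine similarities in CLIP embedding space."""
    inputs = processor(text=[prompt], images=[generated, reference],
                       return_tensors="pt", padding=True).to(device)
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    text_alignment = (img_emb[0] @ txt_emb[0]).item()    # generated image vs. prompt
    image_alignment = (img_emb[0] @ img_emb[1]).item()   # generated vs. reference image
    return text_alignment, image_alignment
```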
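SVD distillation turns a fully fine-tuned checkpoint back into LoRA by low-rank-approximating the weight difference of each target layer. A minimal sketch of the per-layer idea, assuming you already have the base and fine-tuned weight matrices; the function name and factor layout are illustrative, not the CLI added in #98.

```python
import torch

def svd_distill(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 4):
    """Approximate the weight delta of one linear layer with a rank-`rank`
    LoRA pair, so that lora_up @ lora_down ≈ w_tuned - w_base."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u = u[:, :rank]                        # (out_features, rank)
    s = s[:rank]
    vh = vh[:rank, :]                      # (rank, in_features)
    lora_up = u * s.sqrt()                 # split singular values across both factors
    lora_down = s.sqrt().unsqueeze(1) * vh
    return lora_up, lora_down
```

Splitting the square roots of the singular values between the two factors keeps both matrices at a similar scale, which tends to be friendlier for later fine-tuning or merging.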
What's Changed
- fix(missing-var): h_flip by @oscarnevarezleal in #86
- BetterPTI: Support pivotal tuning with multiword latent by @cloneofsimo in #91
- Other Acceleration tricks by @cloneofsimo in #93
- fix : save lora 16 precision by @cloneofsimo in #97
- feat : svd distillation with CLI by @cloneofsimo in #98
- Add xformers to training scripts by @hafriedlander in #103
- Make safetensors properly optional, and support storing TI by @hafriedlander in #101
- Feat/inversion pp by @cloneofsimo in #104
- image transforms by @ethansmith2000 in #115
- Feat/allsave by @cloneofsimo in #119
- v 0.1.0 by @cloneofsimo in #88
New Contributors
- @timh made their first contribution in #11
- @AK391 made their first contribution in #20
- @2kpr made their first contribution in #25
- @DavidePaglieri made their first contribution in #39
- @hdeezy made their first contribution in #48
- @hafriedlander made their first contribution in #73
- @milyiyo made their first contribution in #75
- @laksjdjf made their first contribution in #76
- @oscarnevarezleal made their first contribution in #86
- @ethansmith2000 made their first contribution in #115
Full Changelog: https://github.com/cloneofsimo/lora/commits/v0.1.0-alpha