🤌 Trying to implement some of the Stable Diffusion concepts and architectures
Pocket Diffusion is a PyTorch implementation of Stable Diffusion-style models. This repository aims to provide a basic, readable implementation of diffusion models so that researchers and developers can experiment with and explore diffusion-based modeling.
Diffusion models have gained significant attention in deep learning for their ability to generate high-quality samples and to perform tasks such as image synthesis, inpainting, and denoising. Pocket Diffusion offers a simple, accessible starting point for working with diffusion models and understanding their underlying principles.
- Design the Blueprint (a toy wiring sketch of these parts follows this list)
  - AutoEncoder (VAE)
  - Text Encoder
  - UNet-Based Model
- Get the Basic Parts Set Up
- Run a Basic Model
- Find a Dataset to Run a Small Model On
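The three blueprint parts fit together in a standard latent-diffusion pipeline: the VAE compresses images into a smaller latent space, the text encoder turns a prompt into embeddings, and the UNet predicts the noise in a latent, conditioned on the timestep and those embeddings. The sketch below is only a toy illustration of that wiring; every name, shape, and module is a hypothetical placeholder for illustration, not this repository's actual code.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Toy stand-in for the VAE: maps images to a smaller latent and back."""
    def __init__(self, channels=3, latent_channels=4):
        super().__init__()
        self.encoder = nn.Conv2d(channels, latent_channels, 3, stride=2, padding=1)
        self.decoder = nn.ConvTranspose2d(latent_channels, channels, 4, stride=2, padding=1)

    def encode(self, x):
        return self.encoder(x)

    def decode(self, z):
        return self.decoder(z)

class TinyTextEncoder(nn.Module):
    """Toy stand-in for a CLIP-like text encoder: token ids -> embeddings."""
    def __init__(self, vocab_size=49408, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):
        return self.embed(token_ids)  # (batch, seq_len, dim)

class TinyUNet(nn.Module):
    """Toy stand-in for the UNet: predicts the noise in a latent,
    conditioned on the timestep and the pooled text embedding."""
    def __init__(self, latent_channels=4, dim=256):
        super().__init__()
        self.cond_proj = nn.Linear(dim + 1, latent_channels)
        self.net = nn.Conv2d(latent_channels, latent_channels, 3, padding=1)

    def forward(self, z, t, text_emb):
        cond = torch.cat([text_emb.mean(dim=1), t[:, None].float()], dim=-1)
        z = z + self.cond_proj(cond)[:, :, None, None]
        return self.net(z)

# Wiring: encode image -> add noise -> UNet predicts the noise -> decode.
vae, txt, unet = TinyVAE(), TinyTextEncoder(), TinyUNet()
x = torch.randn(1, 3, 64, 64)              # fake image batch
tokens = torch.randint(0, 49408, (1, 16))  # fake token ids
z = vae.encode(x)
noise = torch.randn_like(z)
t = torch.randint(0, 1000, (1,))
z_noisy = z + noise                        # stand-in for a real noise schedule
pred = unet(z_noisy, t, txt(tokens))
recon = vae.decode(z_noisy - pred)
```

Working in the VAE's latent space is what keeps this pipeline cheap: in the toy example above the UNet denoises a 4×32×32 latent instead of the full 3×64×64 image, and real Stable Diffusion scales the same idea up to 512×512 images.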
- Creating the repo, with the following features on the roadmap:
- Prompt-to-Image Generation 🖋️🖼️
- Image-to-Image Generation Based on Prompts 🔄🖼️🔄
- Textual Inversion 🔀📝
- Classifier-Free Guidance 🎯🆓 (sketched after this list)
- Image Inpainting 🖌️🖼️ (also sketched after this list)
- Image Outpainting 🖌️🔲🖼️
- ControlNet, eventually 🕹️📈
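Classifier-free guidance in particular reduces to a couple of lines at sampling time: run the denoiser once with the prompt embedding and once with an unconditional (empty-prompt) embedding, then extrapolate away from the unconditional prediction. A minimal sketch, assuming a denoiser with the hypothetical signature from the wiring example above:

```python
import torch

def cfg_noise_prediction(unet, z, t, cond_emb, uncond_emb, guidance_scale=7.5):
    """Classifier-free guidance (sketch): blend the conditional and
    unconditional noise predictions. `unet` is any denoiser with the
    hypothetical signature used in the wiring sketch above, not this
    repo's actual API."""
    eps_cond = unet(z, t, cond_emb)      # prediction with the prompt
    eps_uncond = unet(z, t, uncond_emb)  # prediction with an empty prompt
    # guidance_scale = 1.0 recovers the plain conditional prediction;
    # larger values push samples harder toward the prompt, at some
    # cost in sample diversity.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

In practice the two forward passes are usually batched together into one UNet call, and guidance scales around 7–8 are a common default.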
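Inpainting can then be layered on top of the same sampler with one extra blending step: after each denoising update, the region outside the mask is overwritten with the original latent noised to the current timestep, so only the hole is actually generated. Again a hedged sketch with placeholder names:

```python
import torch

def inpaint_blend(z_step, z_orig_noised, mask):
    """One inpainting blend (sketch). `mask` is 1 inside the hole to be
    generated and 0 where the original image must be kept; `z_orig_noised`
    is the original latent with noise added for the current timestep.
    All names here are placeholders, not this repo's API."""
    return mask * z_step + (1.0 - mask) * z_orig_noised
```

Outpainting is the same trick with the mask covering a border region beyond the original image instead of a hole inside it.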
The implementation in Pocket Diffusion is inspired by prior work on diffusion models and by various open-source PyTorch projects, including those linked below. We acknowledge the valuable contributions of the open-source community in advancing the field of deep learning.
- 🔗 https://github.com/kjsman/stable-diffusion-pytorch/tree/main
- 🔗 https://github.com/CompVis/latent-diffusion/tree/main
- 🔗 https://github.com/bes-dev/stable_diffusion_quantizer.pytorch
- 🔗 https://github.com/lwb2099/stable_diffusion_pytorch
- 🔗 https://github.com/mindforge-ai/neat-stable-diffusion-pytorch
- 🔗 https://github.com/mspronesti/stable-diffusion
Feel free to reach out to us with any questions or feedback. Happy exploring with Pocket Diffusion! 🚀
There are a lot of commits; please consider each one a continuation of the previous commit with tiny changes. 😄😄😄