📋 [TASK] Implement Multi-GPU Training Support #2258

Open · 5 of 11 tasks · Tracked by #2364
samet-akcay opened this issue Aug 19, 2024 · 7 comments
@samet-akcay (Contributor) commented Aug 19, 2024

Implement Multi-GPU Support in Anomalib

Depends on:

Background

Anomalib currently uses PyTorch Lightning under the hood, which provides built-in support for multi-GPU training. However, Anomalib itself does not yet expose this functionality to users. Implementing multi-GPU support would significantly enhance the library's capabilities, allowing for faster training on larger datasets and more complex models.
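For reference, this is the mechanism Anomalib would build on: plain PyTorch Lightning already distributes training when its Trainer is given multiple devices. A minimal sketch (not Anomalib code) of what the Engine would need to forward:

import lightning.pytorch as pl

# Plain Lightning: two GPUs with the DDP strategy. Anomalib's Engine
# constructs a Trainer like this internally, so exposing these arguments
# is the core of the feature.
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp", max_epochs=1)
# trainer.fit(model, datamodule=datamodule) then runs one process per GPU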

Proposed Feature

Enable multi-GPU support in Anomalib so that users can train on multiple GPUs without significant changes to their existing code.

Example Usage

Users should be able to enable multi-GPU training by simply specifying the number of devices in the Engine configuration:

from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd

# EfficientAd is trained with a batch size of 1
datamodule = MVTec(train_batch_size=1)
model = EfficientAd()

# Request two GPUs; PyTorch Lightning handles the distribution under the hood
engine = Engine(max_epochs=1, accelerator="gpu", devices=2)
engine.fit(model=model, datamodule=datamodule)

This configuration should automatically distribute the training across two GPUs.

Implementation Goals

  1. Seamless integration with existing Anomalib APIs
  2. Minimal code changes required from users to enable multi-GPU training
  3. Proper utilization of PyTorch Lightning's multi-GPU capabilities
  4. Consistent performance improvements when using multiple GPUs

Implementation Steps

  1. Review PyTorch Lightning's multi-GPU implementation and best practices
  2. Modify the Engine class to properly handle multi-GPU configurations
  3. Ensure all Anomalib models are compatible with distributed training
  4. Update data loading mechanisms to work efficiently with multiple GPUs
  5. Implement proper synchronization of metrics and logging across devices (see the sketch after this list)
  6. Add multi-GPU tests to the test suite
  7. Update documentation with multi-GPU usage instructions and best practices
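
A sketch of step 5: Lightning's self.log call already supports cross-device reduction via sync_dist=True, so the metric refactor mainly needs to ensure Anomalib's modules log this way. The class and score computation below are illustrative placeholders, not Anomalib code:

import torch
from lightning.pytorch import LightningModule

class DistributedAnomalyModule(LightningModule):
    # Hypothetical module showing only the logging pattern.
    def validation_step(self, batch, batch_idx):
        scores = self.compute_scores(batch)
        # sync_dist=True reduces the value across all GPUs before logging,
        # so the reported metric reflects data seen by every device.
        self.log("val_mean_score", scores.mean(), sync_dist=True)

    def compute_scores(self, batch):
        # Placeholder: a real model would return per-image anomaly scores.
        return torch.rand(len(batch["image"]))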

Potential Challenges

  • Ensuring all models in Anomalib are compatible with distributed training
  • Handling model-specific operations that may not be distribution-friendly (see the sketch after this list)
  • Managing different GPU memory capacities and load balancing
  • Debugging training issues specific to multi-GPU setups
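
As one example of the second challenge, a model that builds a global memory bank from training features needs to collect features from every rank. A hedged sketch using Lightning's all_gather; the class and feature shapes are illustrative, not taken from Anomalib:

import torch
from lightning.pytorch import LightningModule

class MemoryBankModel(LightningModule):
    # Illustrative only: each GPU computes its own embeddings, then
    # all_gather gives every rank the full set for building the bank.
    def on_validation_epoch_start(self):
        local_embeddings = torch.randn(128, 64, device=self.device)  # stand-in for real features
        gathered = self.all_gather(local_embeddings)  # shape: (world_size, 128, 64)
        self.memory_bank = gathered.reshape(-1, 64)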

Discussion Points

  • Should we support different distributed training strategies (DP, DDP, etc.)?
  • How do we ensure reproducibility across single and multi-GPU training? (both points are sketched below)
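
A sketch of how both points could look from the user's side. It assumes the Engine forwards a strategy argument to the underlying Trainer (an assumption until the design is finalized); seed_everything is Lightning's existing utility:

from lightning.pytorch import seed_everything
from anomalib.engine import Engine

# Reproducibility: seed Python, NumPy, and torch on every rank;
# workers=True also seeds DataLoader worker processes.
seed_everything(42, workers=True)

# Strategy selection: DDP is the usual choice, DP is legacy. Passing it
# through Engine assumes the argument is forwarded to the Trainer.
engine = Engine(accelerator="gpu", devices=2, strategy="ddp", max_epochs=1)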

Next Steps

  • Conduct a thorough review of PyTorch Lightning's multi-GPU capabilities
  • Create a detailed technical design document for the implementation
  • Implement a prototype with a single model and test performance gains
  • Discuss potential impacts on existing features and user workflows
  • Plan for gradual rollout, starting with a subset of models

Additional Considerations

  • Performance benchmarking: single GPU vs multi-GPU for various models and datasets
  • Impact on memory usage and potential optimizations
  • Handling of model checkpointing and resuming training in multi-GPU setups (see the sketch after this list)
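
For the checkpointing point, a sketch of resuming a multi-GPU run. It assumes Engine.fit accepts a ckpt_path argument like Trainer.fit does; the checkpoint path is purely illustrative:

from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd

datamodule = MVTec(train_batch_size=1)
model = EfficientAd()
engine = Engine(max_epochs=10, accelerator="gpu", devices=2)

# Lightning writes checkpoints from rank 0 only; resuming restores the
# same state on every device before training continues.
engine.fit(model=model, datamodule=datamodule, ckpt_path="path/to/last.ckpt")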

We welcome input from the community on this feature. Please share your thoughts, concerns, or suggestions regarding the implementation of multi-GPU support in Anomalib.

@haimat commented Sep 18, 2024

Hey guys, this is arguably one of the most important missing features in Anomalib.
Do you have any idea when v1.2 with multi-GPU training will be released?

@samet-akcay (Contributor, Author)

Hi @haimat, I agree with you, but enabling multi-GPU required a number of refactors here and there. You can check the PRs merged into the feature/design-simplifications branch.

What is left to enable multi-GPU is the metric refactor and the visualization refactor, which we are currently working on.

@haimat commented Sep 18, 2024

That sounds great, thanks for the update.
Do you have an estimate of when this whole change might be ready?

@haimat commented Oct 2, 2024

@samet-akcay Hello, do you have any idea when this might be released?

@samet-akcay (Contributor, Author)

@haimat, we found that this requires substantial changes within AnomalyModule. These changes unfortunately break backwards compatibility, which is why we decided to release this feature as part of v2.0. We are currently working on it on the feature/design-simplifications branch, which will be released as v2.0.0.

@samet-akcay samet-akcay modified the milestones: v1.2.0, v2.0 Oct 14, 2024
@samet-akcay samet-akcay changed the title Feature: Multi-GPU Support in Anomalib 📋 [TASK] Implement Multi-GPU Training Support Oct 14, 2024
@haimat commented Oct 14, 2024

@samet-akcay Thanks for the update.
Do you have an estimate of when you plan to release version 2.0?

@samet-akcay (Contributor, Author)
We aim to release it by the end of this quarter.
