@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 1838c605da2cb67424bb480cda5d8b2b
tags: 645f666f9bcd5a90fca523b33c5a78b7

Large diffs are not rendered by default.

@@ -0,0 +1,32 @@
ignite.contrib.engines
======================

Contribution module of engines and helper tools:

ignite.contrib.engines.tbptt

.. currentmodule:: ignite.contrib.engines.tbptt

.. autosummary::
    :nosignatures:
    :autolist:

ignite.contrib.engines.common

.. currentmodule:: ignite.contrib.engines.common

.. autosummary::
    :nosignatures:
    :autolist:

Truncated Backpropagation Through Time
---------------------------------------

.. automodule:: ignite.contrib.engines.tbptt
    :members:

Helper methods to set up trainer/evaluator
-------------------------------------------

.. automodule:: ignite.contrib.engines.common
    :members:
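
As an illustration, the common helpers are typically attached to an existing trainer in a single call. The snippet below is only a sketch: it assumes a ``trainer`` ``Engine``, a ``model``, an ``optimizer`` and an ``lr_scheduler`` are already defined, and argument names may differ between versions.

.. code-block:: python

    from ignite.contrib.engines import common

    # trainer, model, optimizer and lr_scheduler are assumed to be created elsewhere
    to_save = {"model": model, "optimizer": optimizer}

    # attaches a set of commonly used handlers (progress bar, periodic checkpointing, ...) to the trainer
    common.setup_common_training_handlers(
        trainer,
        to_save=to_save,
        save_every_iters=1000,
        output_path="/tmp/checkpoints",
        lr_scheduler=lr_scheduler,
    )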

@@ -0,0 +1,32 @@
ignite.contrib.handlers
=======================

Contribution module of handlers.


Parameter scheduler [deprecated]
--------------------------------

.. deprecated:: 0.4.4
    Use :class:`~ignite.handlers.param_scheduler.ParamScheduler` instead; it will be removed in version 0.6.0.

It was moved to :ref:`param-scheduler-label`.

LR finder [deprecated]
----------------------

.. deprecated:: 0.4.4
    Use :class:`~ignite.handlers.lr_finder.FastaiLRFinder` instead; it will be removed in version 0.6.0.

Time profilers [deprecated]
---------------------------

.. deprecated:: 0.4.6
    Use :class:`~ignite.handlers.time_profilers.BasicTimeProfiler` instead; it will be removed in version 0.6.0.
    Use :class:`~ignite.handlers.time_profilers.HandlersTimeProfiler` instead; it will be removed in version 0.6.0.

Loggers [deprecated]
--------------------

.. deprecated:: 0.5.0
    Loggers were moved to :ref:`Loggers`.

@@ -0,0 +1,15 @@
ignite.contrib.metrics
=======================

Contrib module metrics [deprecated]
-----------------------------------

.. deprecated:: 0.5.0
    All metrics were moved to :ref:`Complete list of metrics`.


Regression metrics [deprecated]
--------------------------------

.. deprecated:: 0.5.0
    All metrics were moved to :ref:`Complete list of metrics`.

@@ -0,0 +1,48 @@
:orphan:

.. toggle::

    .. testcode:: default, 1, 2, 3, 4, 5

        from collections import OrderedDict

        import torch
        from torch import nn, optim

        from ignite.engine import *
        from ignite.handlers import *
        from ignite.metrics import *
        from ignite.metrics.regression import *
        from ignite.utils import *

        # create default evaluator for doctests

        def eval_step(engine, batch):
            return batch

        default_evaluator = Engine(eval_step)

        # create default optimizer for doctests

        param_tensor = torch.zeros([1], requires_grad=True)
        default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

        # create default trainer for doctests
        # as handlers could be attached to the trainer,
        # each test must define its own trainer using `.. testsetup::`

        def get_default_trainer():

            def train_step(engine, batch):
                return batch

            return Engine(train_step)

        # create default model for doctests

        default_model = nn.Sequential(OrderedDict([
            ('base', nn.Linear(4, 2)),
            ('fc', nn.Linear(2, 1))
        ]))

        manual_seed(666)
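
A doctest elsewhere in the documentation can then rely on these defaults directly. The snippet below is only an illustrative sketch of that pattern, with made-up tensors and an arbitrarily chosen metric:

.. code-block:: python

    from ignite.metrics import Accuracy

    # attach a metric to the shared default evaluator defined above
    metric = Accuracy()
    metric.attach(default_evaluator, "accuracy")

    # each batch passed to run() is returned by eval_step as the engine output, i.e. (y_pred, y_true)
    y_pred = torch.tensor([[0.8, 0.2], [0.1, 0.9], [0.6, 0.4]])
    y_true = torch.tensor([0, 1, 1])

    state = default_evaluator.run([(y_pred, y_true)])
    print(state.metrics["accuracy"])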

@@ -0,0 +1,117 @@
ignite.distributed
==================

Helper module to use distributed settings for multiple backends:

- backends from native torch distributed configuration: "nccl", "gloo", "mpi"

- XLA on TPUs via `pytorch/xla <https://github.com/pytorch/xla>`_

- using `Horovod framework <https://horovod.readthedocs.io/en/stable/>`_ as a backend


Distributed launcher and `auto` helpers
---------------------------------------

We provide a context manager to simplify the setup of the distributed configuration for all supported backends listed
above. In addition, methods like :meth:`~ignite.distributed.auto.auto_model`, :meth:`~ignite.distributed.auto.auto_optim` and
:meth:`~ignite.distributed.auto.auto_dataloader` help to transparently adapt the provided model, optimizer and data
loaders to the existing configuration:

.. code-block:: python

    # main.py

    from torch import optim
    from torchvision.models import resnet50

    import ignite.distributed as idist

    def training(local_rank, config, **kwargs):

        print(idist.get_rank(), ": run with config:", config, "- backend=", idist.backend())

        # dataset is assumed to be defined elsewhere in main.py
        train_loader = idist.auto_dataloader(dataset, batch_size=32, num_workers=12, shuffle=True, **kwargs)
        # batch size, num_workers and sampler are automatically adapted to the existing configuration
        # ...

        model = resnet50()
        model = idist.auto_model(model)
        # model is wrapped as DDP or DP, or left as is, according to the existing configuration
        # ...

        optimizer = optim.SGD(model.parameters(), lr=0.01)
        optimizer = idist.auto_optim(optimizer)
        # optimizer is returned as is, except in the XLA configuration where its `step()` method is overridden.
        # User can safely call `optimizer.step()` (`xm.optimizer_step(optimizer)` is performed behind the scenes)

    backend = "nccl"  # torch native distributed configuration on multiple GPUs
    # backend = "xla-tpu"  # XLA TPUs distributed configuration
    # backend = None  # no distributed configuration

    dist_configs = {}  # no extra distributed options by default
    # dist_configs = {'nproc_per_node': 4}  # Use specified distributed configuration if launched as python main.py
    # dist_configs["start_method"] = "fork"  # Add start_method as "fork" if using Jupyter Notebook

    # config is assumed to be defined above (e.g. parsed from command-line arguments)
    with idist.Parallel(backend=backend, **dist_configs) as parallel:
        parallel.run(training, config, a=1, b=2)

The above code may be executed with the `torch.distributed.launch`_ tool or with plain python by specifying the
distributed configuration in the code. For more details, please see :class:`~ignite.distributed.launcher.Parallel`,
:meth:`~ignite.distributed.auto.auto_model`, :meth:`~ignite.distributed.auto.auto_optim` and
:meth:`~ignite.distributed.auto.auto_dataloader`.

A complete example of CIFAR10 training can be found
`here <https://github.com/pytorch/ignite/tree/master/examples/cifar10>`_.


.. _torch.distributed.launch: https://pytorch.org/docs/stable/distributed.html#launch-utility


ignite.distributed.auto
-----------------------

.. currentmodule:: ignite.distributed.auto

.. autosummary::
    :nosignatures:
    :toctree: generated

    DistributedProxySampler
    auto_dataloader
    auto_model
    auto_optim

.. note::
    In distributed configuration, the methods :meth:`~ignite.distributed.auto.auto_model`, :meth:`~ignite.distributed.auto.auto_optim`
    and :meth:`~ignite.distributed.auto.auto_dataloader` take effect only when the distributed process group is initialized.

ignite.distributed.launcher
---------------------------

.. currentmodule:: ignite.distributed.launcher

.. autosummary::
    :nosignatures:
    :toctree: generated

    Parallel

ignite.distributed.utils
------------------------

This module wraps common methods to fetch information about distributed configuration, initialize/finalize process
group or spawn multiple processes.
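
For example, outside of the :class:`~ignite.distributed.launcher.Parallel` context manager, these utilities can be used to set up and query the process group directly. The snippet below is only an illustrative sketch; the chosen backend and the launch method are assumptions:

.. code-block:: python

    import ignite.distributed as idist

    # assume this script was started by a distributed launcher (e.g. torch.distributed.launch / torchrun)
    idist.initialize(backend="gloo")  # or "nccl", "horovod", "xla-tpu" depending on the setup

    rank = idist.get_rank()               # global rank of the current process
    world_size = idist.get_world_size()   # total number of processes
    device = idist.device()               # torch.device matching the current configuration

    if rank == 0:
        print(f"Running on {world_size} process(es), device: {device}")

    idist.finalize()  # clean up the process group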

.. currentmodule:: ignite.distributed.utils

.. autosummary::
    :nosignatures:
    :autolist:

.. automodule:: ignite.distributed.utils
    :members:

.. attribute:: has_native_dist_support

    True if `torch.distributed` is available.

.. attribute:: has_xla_support

    True if the `torch_xla` package is found.