
Commit

Merge remote-tracking branch 'origin/develop' into develop
RandomDefaultUser committed Nov 28, 2024
2 parents d52ca0c + ad1b5fd commit c2e1bb7
Showing 66 changed files with 2,868 additions and 1,907 deletions.
6 changes: 4 additions & 2 deletions docs/source/advanced_usage/predictions.rst
@@ -81,11 +81,13 @@ Gaussian representation of atomic positions. In this algorithm, most of the
computational overhead of the total energy calculation is offloaded to the
computation of this Gaussian representation. This calculation is realized via
LAMMPS and can therefore be GPU accelerated (parallelized) in the same fashion
as the bispectrum descriptor calculation. Simply activate this option via
as the bispectrum descriptor calculation. If a GPU is activated (and LAMMPS
is available), this option will be used by default. It can also manually be
activated via

.. code-block:: python
parameters.descriptors.use_atomic_density_energy_formula = True
parameters.use_atomic_density_formula = True
The Gaussian representation algorithm is described in
the publication `Predicting electronic structures at any length scale with machine learning <doi.org/10.1038/s41524-023-01070-z>`_.
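For context, a minimal sketch of how this could look in a prediction script (hedged: ``use_gpu`` is MALA's general GPU switch; the surrounding setup follows the basic examples and is not part of this diff):

.. code-block:: python

    import mala

    parameters = mala.Parameters()
    # With a GPU active and LAMMPS available, the atomic density
    # formula is now used by default.
    parameters.use_gpu = True
    # Manual activation, as documented above:
    parameters.use_atomic_density_formula = True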
61 changes: 52 additions & 9 deletions docs/source/advanced_usage/trainingmodel.rst
@@ -194,22 +194,64 @@ keyword, you can fine-tune the number of new snapshots being created.
By default, the same number of snapshots as had been provided will be created
(if possible).

Using tensorboard
******************
Logging metrics during training
*******************************

Training progress in MALA can be visualized via tensorboard or wandb, as also shown
in the file ``advanced/ex03_tensor_board``. Simply select a logger prior to training via

.. code-block:: python
parameters.running.logger = "tensorboard"
parameters.running.logging_dir = "mala_vis"
Training routines in MALA can be visualized via tensorboard, as also shown
in the file ``advanced/ex03_tensor_board``. Simply enable tensorboard
visualization prior to training via
or

.. code-block:: python
# 0: No visualization, 1: loss and learning rate, 2: like 1,
# but additionally weights and biases are saved
parameters.running.logging = 1
import wandb
wandb.init(
project="mala_training",
entity="your_wandb_entity"
)
parameters.running.logger = "wandb"
parameters.running.logging_dir = "mala_vis"
where ``logging_dir`` specifies some directory in which to save the
MALA logging data. Afterwards, you can run the training without any
MALA logging data. You can also select which metrics to record via

.. code-block:: python
parameters.validation_metrics = ["ldos", "dos", "density", "total_energy"]
Full list of available metrics:
- "ldos": MSE of the LDOS.
- "band_energy": Band energy.
- "band_energy_actual_fe": Band energy computed with ground truth Fermi energy.
- "total_energy": Total energy.
- "total_energy_actual_fe": Total energy computed with ground truth Fermi energy.
- "fermi_energy": Fermi energy.
- "density": Electron density.
- "density_relative": Rlectron density (Mean Absolute Percentage Error).
- "dos": Density of states.
- "dos_relative": Density of states (Mean Absolute Percentage Error).

To save time and resources, you can specify the validation interval via

.. code-block:: python
parameters.running.validate_every_n_epochs = 10
If you want to monitor the degree to which the model overfits to the training data,
you can use the option

.. code-block:: python
parameters.running.validate_on_training_data = True
MALA will evaluate the validation metrics on the training set as well as the validation set.
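Taken together, a minimal sketch of the logging options described above (assuming a setup as in the basic training example; attribute names as documented in this section):

.. code-block:: python

    parameters.running.logger = "tensorboard"
    parameters.running.logging_dir = "mala_vis"
    parameters.validation_metrics = ["ldos", "total_energy"]
    parameters.running.validate_every_n_epochs = 10
    parameters.running.validate_on_training_data = True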

Afterwards, you can run the training without any
other modifications. Once training is finished (or during training, in case
you want to use tensorboard to monitor progress), you can launch tensorboard
via
@@ -221,6 +263,7 @@ via
The full path for ``path_to_log_directory`` can be accessed via
``trainer.full_logging_path``.
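For instance (a sketch, assuming a ``trainer`` object as in the examples):

.. code-block:: python

    # Directory to pass to tensorboard's --logdir argument.
    print(trainer.full_logging_path)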

If you're using wandb, you can monitor the training progress on the wandb website.

Training in parallel
********************
15 changes: 9 additions & 6 deletions docs/source/basic_usage/trainingmodel.rst
@@ -28,7 +28,7 @@ options to train a simple network with example data, namely
parameters = mala.Parameters()
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
parameters.network.layer_activations = ["ReLU"]
@@ -43,15 +43,18 @@ sub-objects dealing with the individual aspects of the workflow. In the first
two lines, we specify which data scaling MALA should employ. Scaling data greatly
improves the performance of NN based ML models. Options are

* ``None``: No normalization is applied.
* ``None``: No scaling is applied.

* ``standard``: Standardization (Scale to mean 0, standard deviation 1)
* ``standard``: Standardization (Scale to mean 0, standard deviation 1) is
applied to the entire array.

* ``normal``: Min-Max scaling (Scale to be in range 0...1)
* ``minmax``: Min-Max scaling (Scale to be in range 0...1) is applied to the entire array.

* ``feature-wise-standard``: Row Standardization (Scale to mean 0, standard deviation 1)
* ``feature-wise-standard``: Standardization (Scale to mean 0, standard
deviation 1) is applied to each feature dimension individually.

* ``feature-wise-normal``: Row Min-Max scaling (Scale to be in range 0...1)
* ``feature-wise-minmax``: Min-Max scaling (Scale to be in range 0...1) is
applied to each feature dimension individually.

Here, we specify that MALA should standardize the input (=descriptors)
by feature (i.e., each entry of the vector separately on the grid) and
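To illustrate the difference between these modes, here is a hedged numpy sketch (illustration only, not MALA code; the semantics follow the list above):

.. code-block:: python

    import numpy as np

    # Toy data: rows are grid points, columns are feature dimensions.
    data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

    # "feature-wise-standard": mean 0, std 1 for each column separately.
    feature_wise_standard = (data - data.mean(axis=0)) / data.std(axis=0)

    # "minmax": scale the entire array into the range 0...1.
    minmax = (data - data.min()) / (data.max() - data.min())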
1 change: 0 additions & 1 deletion docs/source/conf.py
@@ -72,7 +72,6 @@
"scipy",
"oapackage",
"matplotlib",
"horovod",
"lammps",
"total_energy",
"pqkmeans",
2 changes: 1 addition & 1 deletion examples/advanced/ex01_checkpoint_training.py
@@ -21,7 +21,7 @@ def initial_setup():
parameters = mala.Parameters()
parameters.data.data_splitting_type = "by_snapshot"
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
parameters.network.layer_activations = ["ReLU"]
parameters.running.max_number_epochs = 9
parameters.running.mini_batch_size = 8
14 changes: 11 additions & 3 deletions examples/advanced/ex03_tensor_board.py
@@ -13,7 +13,7 @@

parameters = mala.Parameters()
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
parameters.targets.ldos_gridsize = 11
parameters.targets.ldos_gridspacing_ev = 2.5
parameters.targets.ldos_gridoffset_ev = -5
@@ -32,11 +32,19 @@

data_handler = mala.DataHandler(parameters)
data_handler.add_snapshot(
"Be_snapshot0.in.npy", data_path, "Be_snapshot0.out.npy", data_path, "tr",
"Be_snapshot0.in.npy",
data_path,
"Be_snapshot0.out.npy",
data_path,
"tr",
calculation_output_file=os.path.join(data_path, "Be_snapshot0.out"),
)
data_handler.add_snapshot(
"Be_snapshot1.in.npy", data_path, "Be_snapshot1.out.npy", data_path, "va",
"Be_snapshot1.in.npy",
data_path,
"Be_snapshot1.out.npy",
data_path,
"va",
calculation_output_file=os.path.join(data_path, "Be_snapshot1.out"),
)
data_handler.prepare_data()
@@ -17,7 +17,7 @@
def initial_setup():
parameters = mala.Parameters()
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
parameters.running.max_number_epochs = 10
parameters.running.mini_batch_size = 40
parameters.running.learning_rate = 0.00001
@@ -24,7 +24,7 @@
parameters = mala.Parameters()
# Specify the data scaling.
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
parameters.running.max_number_epochs = 5
parameters.running.mini_batch_size = 40
parameters.running.learning_rate = 0.00001
@@ -17,7 +17,7 @@ def optimize_hyperparameters(hyper_optimizer):

parameters = mala.Parameters()
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
parameters.running.max_number_epochs = 10
parameters.running.mini_batch_size = 40
parameters.running.learning_rate = 0.00001
8 changes: 3 additions & 5 deletions examples/advanced/ex10_convert_numpy_openpmd.py
@@ -29,7 +29,7 @@
descriptor_save_path="./",
target_save_path="./",
additional_info_save_path="./",
naming_scheme="converted_from_numpy_*.bp5",
naming_scheme="converted_from_numpy_*.h5",
descriptor_calculation_kwargs={"working_directory": "./"},
)

@@ -40,11 +40,9 @@
for snapshot in range(2):
data_converter.add_snapshot(
descriptor_input_type="openpmd",
descriptor_input_path="converted_from_numpy_{}.in.bp5".format(
snapshot
),
descriptor_input_path="converted_from_numpy_{}.in.h5".format(snapshot),
target_input_type="openpmd",
target_input_path="converted_from_numpy_{}.out.bp5".format(snapshot),
target_input_path="converted_from_numpy_{}.out.h5".format(snapshot),
additional_info_input_type=None,
additional_info_input_path=None,
target_units=None,
2 changes: 1 addition & 1 deletion examples/basic/ex01_train_network.py
@@ -20,7 +20,7 @@
# Specify the data scaling. For regular bispectrum and LDOS data,
# these have proven successful.
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
# Specify the used activation function.
parameters.network.layer_activations = ["ReLU"]
# Specify the training parameters.
2 changes: 1 addition & 1 deletion examples/basic/ex04_hyperparameter_optimization.py
@@ -19,7 +19,7 @@
####################
parameters = mala.Parameters()
parameters.data.input_rescaling_type = "feature-wise-standard"
parameters.data.output_rescaling_type = "normal"
parameters.data.output_rescaling_type = "minmax"
parameters.running.max_number_epochs = 20
parameters.running.mini_batch_size = 40
parameters.running.optimizer = "Adam"
3 changes: 2 additions & 1 deletion mala/__init__.py
@@ -40,9 +40,10 @@
HyperparameterOAT,
HyperparameterNASWOT,
HyperparameterOptuna,
HyperparameterACSD,
HyperparameterDescriptorScoring,
ACSDAnalyzer,
Runner,
MutualInformationAnalyzer,
)
from .targets import LDOS, DOS, Density, fermi_function, AtomicForce, Target
from .interfaces import MALA
13 changes: 8 additions & 5 deletions mala/common/parallelizer.py
@@ -5,7 +5,6 @@
import os
import warnings

import torch
import torch.distributed as dist

use_ddp = False
@@ -154,6 +153,11 @@ def get_local_rank():
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
Returns
-------
local_rank : int
The local rank of the current process.
"""
if use_ddp:
return int(os.environ.get("LOCAL_RANK"))
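# Hypothetical usage sketch (not part of this diff): the local rank is
# commonly used to pin each DDP process to its GPU, e.g.
#
#     import torch
#     from mala.common.parallelizer import get_local_rank
#     torch.cuda.set_device(get_local_rank())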
@@ -189,15 +193,14 @@ def get_size():
return comm.Get_size()


# TODO: This is hacky, improve it.
def get_comm():
"""
Return the MPI communicator, if MPI is being used.
Returns
-------
comm : MPI.COMM_WORLD
A MPI communicator.
An MPI communicator.
"""
return comm
@@ -221,7 +224,7 @@ def printout(*values, sep=" ", min_verbosity=0):
Parameters
----------
values
values : object
Values to be printed.
sep : string
@@ -245,7 +248,7 @@ def parallel_warn(warning, min_verbosity=0, category=UserWarning):
Parameters
----------
warning
warning : str
Warning to be printed.
min_verbosity : int
Minimum verbosity level at which this output will still be printed.
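# Hypothetical usage sketch (not part of this diff), assuming the
# documented signatures above:
#
#     from mala.common.parallelizer import printout, parallel_warn
#     printout("Training started.", min_verbosity=1)
#     parallel_warn("Option deprecated.", min_verbosity=0, category=FutureWarning)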
