
XIV. Load and run models I (EngineInfo)


Running a model in JDLL is intended to be simple and generic. The library deals internally with the different Deep Learning frameworks, allowing the user to load and run every model in a uniform manner.

Loading refers to bringing a DL model into computer memory so it can make predictions, while running the model (making inference) is the actual act of producing the predictions. Both loading and running a model are time-consuming tasks; however, once a model has been loaded it can be run repeatedly without loading it again, until it is unloaded from memory or another model is loaded.

This page provides a step-by-step guide on how to load a Deep Learning model from scratch. For more information about running a model, please click here.

io.bioimage.modelrunner.engine.EngineInfo

General description

Important: for Bioimage.io models it is not necessary to create an EngineInfo instance to load them. They can be loaded directly, as all the information needed to do it is contained in the rdf.yaml file that comes inside the model folder. For more information about loading Bioimage.io models directly, click here.

The first step to load a model is to know the Deep Learning framework required to load it. If a model was trained in Pytorch, it cannot be loaded or run with Tensorflow. JDLL requires that the engine information is provided using the EngineInfo class.

The EngineInfo class encapsulates the essential details required to ensure that a model is loaded with its corresponding engine. It encompasses the following information:

  • Framework. The Deep Learning framework that should be used. Currently JDLL supports torchscript for Pytorch models, tensorflow_saved_model_bundle for Tensorflow models and onnx for ONNX models. The list of possible frameworks can be found here.

  • Engine version. The version of the DL framework that should be loaded. Several Deep Learning framework versions can be installed at the same time, and this field helps select the right one and avoid errors. The safest version to load a model with is the one that was used to train it.

  • CPU. Whether the wanted Deep Learning engine has to support CPU inference or not. Normally, all the engines supported by JDLL can run inference on the CPU, without needing a GPU.

  • GPU. Whether the wanted Deep Learning engine has to support GPU inference or not. If a GPU is available, selecting an engine that can run on it speeds up model execution.

  • Engines directory (optional). Directory where all the engines are installed. By default it is set to a folder named engines in the directory from which the software is being run. However, if engines are installed in other locations, the engines directory can be changed.

The framework Strings that can be used to create an EngineInfo instance are:

  • torchscript or pytorch for Pytorch
  • tensorflow_saved_model_bundle or tensorflow for Tensorflow
  • onnx for Onnx

Note that two different tags are equivalent for Pytorch and for Tensorflow: torchscript and tensorflow_saved_model_bundle were adopted from the Bioimage.io weight tags to make their integration in JDLL easier.
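
The equivalence of the tags can be pictured with a minimal sketch. It assumes a Pytorch 1.9.1 engine is installed in the engines directory used throughout this page; both calls resolve to the same installed engine.

// Minimal sketch: the `pytorch` and `torchscript` tags are interchangeable
// (and so are `tensorflow` and `tensorflow_saved_model_bundle`).
// It assumes a Pytorch 1.9.1 engine is installed in the engines directory below.
String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo withTorchscriptTag = EngineInfo.defineDLEngine("torchscript", "1.9.1", true, true, enginesDir);
EngineInfo withPytorchTag = EngineInfo.defineDLEngine("pytorch", "1.9.1", true, true, enginesDir);
// Both instances are valid because both tags point to the same installed engine
System.out.println(withTorchscriptTag != null && withPytorchTag != null);

Output

true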

ALL THE DL FRAMEWORKS AND VERSIONS SUPPORTED BY JDLL ARE DEFINED IN THE VERSIONS JSON FILE.

[Image: Engines directory]

Static methods

The main use of the static methods of the EngineInfo class is to create an instance of the object.

EngineInfo.defineDLEngine(String framework, String version, boolean cpu, boolean gpu, String jarsDirectory)

Method that creates an instance of the EngineInfo class. The arguments define a unique engine from the available installed ones. If the engine defined does not exist or has not been installed the method will return null. For more information about how to know which engines exist and which are installed please click here.

Due to a limitation in some frameworks that prevents loading different versions in the same process (see here), if a different version of the same framework has been loaded previously, the method might throw an exception depending on the framework. Only the version of the framework that has already been loaded can be used during the session. In order to know which versions of each framework have already been loaded, please check the method EngineInfo.getLoadedVersions(String framework, String version).

  • framework: the Deep Learning framework that should be used. Currently JDLL supports torchscript for Pytorch models, tensorflow_saved_model_bundle for Tensorflow models and onnx for ONNX models. The list of possible frameworks can be found here.
  • version: the version of the DL framework that should be loaded. Several Deep Learning framework versions can be installed at the same time, and this field helps select the right one and avoid errors. The safest version to load a model with is the one that was used to train it.
  • cpu (optional): whether the wanted Deep Learning engine has to support CPU inference or not. Normally, all the engines supported by JDLL can run inference on the CPU, without needing a GPU.
  • gpu (optional): whether the wanted Deep Learning engine has to support GPU inference or not. If a GPU is available, selecting an engine that can run on it speeds up model execution.
  • jarsDirectory (optional): directory where all the engines are installed. By default it is set to a folder named engines in the directory from which the software is being run. However, if engines are installed in other locations, the engines directory can be changed.

If the method is called without the cpu and gpu arguments, it will first look for installed engines that support both; then for engines that support only CPU; if no engine that supports CPU exists or is installed, it will look for one supporting only GPU; and if none exists or is installed, it will return null.

For the following example, it is assumed that the installation directory is the one in the image above. Looking at the picture, the engines installed are:

  • Pytorch 1.4.0 for CPU and GPU
  • Pytorch 1.5.0 for CPU and GPU
  • Pytorch 1.6.0 for CPU and GPU
  • Pytorch 1.7.0 for CPU and GPU
  • Pytorch 1.7.1 for CPU and GPU
  • Pytorch 1.8.1 for CPU and GPU
  • Pytorch 1.9.1 for CPU and GPU
  • Pytorch 1.11.0 for CPU and GPU
  • Tensorflow 1.12.0 for CPU
  • Tensorflow 1.15.0 for CPU
// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` tags can be used;
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "torchscript";
String version = "1.9.1";
boolean cpu = true;
boolean gpu = true;
String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo engineInfo = EngineInfo.defineDLEngine(framework, version, cpu, gpu, enginesDir);
boolean isValid = engineInfo != null;
System.out.println(isValid);

Output

true

Now try to reference a non-installed engine:

// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` can be used,
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "torchscript";
// The version 1.13.1 of Pytorch is supported by JDLL but it is not installed
String version = "1.13.1";
boolean cpu = true;
boolean gpu = true;
String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo engineInfo = EngineInfo.defineDLEngine(framework, version, cpu, gpu, enginesDir);
boolean isValid = engineInfo != null;
System.out.println(isValid);

Output

false

Or a non existing engine:

// Define the information for the wanted engine
// The framework 'super_good_and_fast_dl' does not exist
String framework = "super_good_and_fast_dl";
String version = "1.9.1";
boolean cpu = true;
boolean gpu = true;
String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo engineInfo = EngineInfo.defineDLEngine(framework, version, cpu, gpu, enginesDir);
boolean isValid = engineInfo != null;
System.out.println(isValid);

Output

false

If the cpu and gpu arguments are not provided:

For Pytorch, the engine selected will support both GPU and CPU as the installed one does.

// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` can be used,
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "torchscript";
String version = "1.9.1";

String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo engineInfo = EngineInfo.defineDLEngine(framework, version, enginesDir);
System.out.println("Supports CPU: " + engineInfo.isCPU());
System.out.println("Supports GPU: " + engineInfo.isGPU());

Output

Supports CPU: true
Supports GPU: true

For Tensorflow, the selected engine will support only CPU, as the installed one does. If another Tensorflow 1.15.0 engine supporting both CPU and GPU had been installed, that would have been the one referenced by the EngineInfo object.

// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` can be used,
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "tensorflow_saved_model_bundle";
String version = "1.15.0";

String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo engineInfo = EngineInfo.defineDLEngine(framework, version, enginesDir);
System.out.println("Supports CPU: " + engineInfo.isCPU());
System.out.println("Supports GPU: " + engineInfo.isGPU());

Output

Supports CPU: true
Supports GPU: false

EngineInfo.defineCompatibleDLEngine(String framework, String version, boolean cpu, boolean gpu, String jarsDirectory)

Method that creates an instance of the EngineInfo class. Unlike with EngineInfo.defineDLEngine(...), if the DL framework defined by the framework and version arguments provided is not installed, this method will look among the installed engines for one that is compatible instead of returning null. A compatible engine is defined as an engine of the same DL framework with the same major version. For example Pytorch 1.13.1 and Pytorch 1.9.0 are compatible, but Pytorch 1.13.1 and Pytorch 2.0.0 are not. More information here.

This definition of version compatibility comes from JDLL, not from the individual Deep Learning frameworks, and might not be accurate in some cases. The further apart two versions are, the less likely the two engines are to actually be compatible. For example, Pytorch 1.13.1 is more likely to be compatible with 1.12.0 than with 1.5.0. Also note that DL frameworks tend to be backwards-compatible rather than forward-compatible, so using the latest version available is a good way to maximize the chances of running a model smoothly. A short sketch of the major-version rule is included after the argument list below.

  • framework: the Deep Learning framework that should be used. Currently JDLL supports torchscript for Pytorch models, tensorflow_saved_model_bundle for Tensorflow models and onnx for ONNX models. The list of possible frameworks can be found here.
  • version: the version of the DL framework that should be loaded. Several Deep Learning framework versions can be installed at the same time, and this field helps select the right one and avoid errors. The safest version to load a model with is the one that was used to train it. If the selected version is not installed, the method will look for a compatible alternative among the installed engines of the same framework and, if one is found, return its EngineInfo instance.
  • cpu: whether the wanted Deep Learning engine has to support CPU inference or not. Normally, all the engines supported by JDLL can run inference on the CPU, without needing a GPU.
  • gpu: whether the wanted Deep Learning engine has to support GPU inference or not. If a GPU is available, selecting an engine that can run on it speeds up model execution.
  • jarsDirectory (optional): directory where all the engines are installed. By default it is set to a folder named engines in the directory from which the software is being run. However, if engines are installed in other locations, the engines directory can be changed.
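
The major-version rule can be pictured with a small helper. This is only an illustrative sketch of the rule described above, not part of the JDLL API; the method isMajorVersionCompatible is hypothetical.

// Hypothetical helper mirroring JDLL's notion of compatible versions:
// two versions are considered compatible when they share the same major version.
public static boolean isMajorVersionCompatible(String v1, String v2) {
    return v1.split("\\.")[0].equals(v2.split("\\.")[0]);
}

// isMajorVersionCompatible("1.13.1", "1.9.0") -> true  (both have major version 1)
// isMajorVersionCompatible("1.13.1", "2.0.0") -> false (major versions 1 and 2)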

Below there are some examples exploring the use of the method and version compatibility. Note that the examples assume that the engines installed are the ones shown in the picture above.

// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` tags can be used,
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "torchscript";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";


// Define first a version installed
String installedVersion = "1.9.1";
// Define now a version not installed but compatible with the installed ones
String notInstalledVersion = "1.9.0";
// Define now a version not installed and not compatible
String notCompatibleVersion = "2.0.0";

// This method also requires the cpu and gpu arguments
boolean cpu = true;
boolean gpu = true;

EngineInfo installedEngineInfo = EngineInfo.defineCompatibleDLEngine(framework, installedVersion, cpu, gpu, enginesDir);
EngineInfo notInstalledEngineInfo = EngineInfo.defineCompatibleDLEngine(framework, notInstalledVersion, cpu, gpu, enginesDir);
EngineInfo notCompatibleEngineInfo = EngineInfo.defineCompatibleDLEngine(framework, notCompatibleVersion, cpu, gpu, enginesDir);

System.out.println("Installed is valid: " + (installedEngineInfo != null));
System.out.println("Compatible is valid: " + (notInstalledEngineInfo != null));
System.out.println("Not compatible is valid: " + (notCompatibleEngineInfo != null));

Output:

Installed is valid: true
Compatible is valid: true
Not compatible is valid: false

As Pytorch 2.0.0 is neither installed nor compatible with any of the installed versions, the method returns null.

Now check the version of the engines in the EngineInfo instances created:

System.out.println("Version for instance created with 1.9.1: " + installedEngineInfo.getVersion());
System.out.println("Version for instance created with 1.9.0: " + notInstalledEngineInfo.getVersion());

Output:

Version for instance created with 1.9.1: 1.9.1
Version for instance created with 1.9.0: 1.9.1

Note that in both cases the version used is 1.9.1. For the first case this is obvious because it was the argument passed to the method, but for the second case it differs from the argument: since 1.9.0 is not installed, the closest compatible installed version is used instead.

EngineInfo.defineCompatibleDLEngineCPU(String framework, String version, String jarsDirectory)

Method that creates an instance of the EngineInfo class. Unlike with EngineInfo.defineDLEngine(...), if the DL framework defined by the framework and version arguments provided is not installed, this method will look among the installed engines for one that is compatible instead of returning null. A compatible engine is defined as an engine of the same DL framework with the same major version. For example Pytorch 1.13.1 and Pytorch 1.9.0 are compatible, but Pytorch 1.13.1 and Pytorch 2.0.0 are not. More information here.

This definition of version compatibility comes from JDLL, not from the individual Deep Learning frameworks, and might not be accurate in some cases. The further apart two versions are, the less likely the two engines are to actually be compatible. For example, Pytorch 1.13.1 is more likely to be compatible with 1.12.0 than with 1.5.0. Also note that DL frameworks tend to be backwards-compatible rather than forward-compatible, so using the latest version available is a good way to maximize the chances of running a model smoothly.

This method is called without the cpu and gpu arguments; the closest compatible version supported on CPU will always be returned. If there are two engines of the same version, one supporting only CPU and the other supporting both CPU and GPU, the latter will be returned.

Note that the method requires engines that run on CPU, so if the most compatible engine only supports running on GPU, it will be skipped in favour of the next most compatible one. To look for compatible versions that specifically run on CPU and GPU, CPU only, or GPU only, use the method EngineInfo.defineCompatibleDLEngine(String framework, String version, boolean cpu, boolean gpu, String jarsDirectory).

  • framework: the Deep Learning framework that should be used. Currently JDLL supports torchscript for Pytorch models, tensorflow_saved_model_bundle for Tensorflow models and onnx for ONNX models. The list of possible frameworks can be found here.
  • version: the version of the DL framework that should be loaded. Several Deep Learning framework versions can be installed at the same time, and this field helps select the right one and avoid errors. The safest version to load a model with is the one that was used to train it. If the selected version is not installed, the method will look for a compatible alternative among the installed engines of the same framework and, if one is found, return its EngineInfo instance.
  • jarsDirectory (optional): directory where all the engines are installed. By default it is set to a folder named engines in the directory from which the software is being run. However, if engines are installed in other locations, the engines directory can be changed.

Below there are some examples exploring the use of the method and version compatibility. Note that the examples assume that the engines installed are the ones shown in the picture above.

// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` can be used,
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "torchscript";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";


// Define first a version installed
String installedVersion = "1.9.1";
// Define now a version not installed but compatible with the installed ones
String notInstalledVersion = "1.9.0";
// Define now a version not installed and not compatible
String notCompatibleVersion = "2.0.0";


EngineInfo installedEngineInfo = EngineInfo.defineCompatibleDLEngineCPU(framework, installedVersion, enginesDir);
EngineInfo notInstalledEngineInfo = EngineInfo.defineCompatibleDLEngineCPU(framework, notInstalledVersion, enginesDir);
EngineInfo notCompatibleEngineInfo = EngineInfo.defineCompatibleDLEngineCPU(framework, notCompatibleVersion, enginesDir);

System.out.println("Installed is valid: " + (installedEngineInfo != null));
System.out.println("Compatible is valid: " + (notInstalledEngineInfo != null));
System.out.println("Not compatible is valid: " + (notCompatibleEngineInfo != null));

Output:

Installed is valid: true
Compatible is valid: true
Not compatible is valid: false

As Pytorch 2.0.0 is neither installed nor compatible with any of the installed versions, the method returns null.

Now check the version of the engines in the EngineInfo instances created:

System.out.println("Version for instance created with 1.9.1: " + installedEngineInfo.getVersion());
System.out.println("Version for instance created with 1.9.0: " + notInstalledEngineInfo.getVersion());

Output:

Version for instance created with 1.9.1: 1.9.1
Version for instance created with 1.9.0: 1.9.1

Note that in both cases the version used is 1.9.1. For the first case this is obvious because it was the argument passed to the method, but for the second case it differs from the argument: since 1.9.0 is not installed, the closest compatible installed version is used instead.

EngineInfo.defineCompatibleDLEngineGPU(String framework, String version, String jarsDirectory)

Method that creates an instance of the EngineInfo class. Unlike with EngineInfo.defineDLEngine(...), if the DL framework defined by the framework and version arguments provided is not installed, this method will look among the installed engines for one that is compatible instead of returning null. A compatible engine is defined as an engine of the same DL framework with the same major version. For example Pytorch 1.13.1 and Pytorch 1.9.0 are compatible, but Pytorch 1.13.1 and Pytorch 2.0.0 are not. More information here.

This definition of version compatibility comes from JDLL, not from the individual Deep Learning frameworks, and might not be accurate in some cases. The further apart two versions are, the less likely the two engines are to actually be compatible. For example, Pytorch 1.13.1 is more likely to be compatible with 1.12.0 than with 1.5.0. Also note that DL frameworks tend to be backwards-compatible rather than forward-compatible, so using the latest version available is a good way to maximize the chances of running a model smoothly.

This method is called without the cpu and gpu arguments; the closest compatible version supported on GPU will always be returned. If there are two engines of the same version, one supporting only GPU and the other supporting both CPU and GPU, the latter will be returned.

Note that the method requires engines that run on GPU, so if the most compatible engine only supports running on CPU, it will be skipped in favour of the next most compatible one. To look for compatible versions that specifically run on CPU and GPU, CPU only, or GPU only, use the method EngineInfo.defineCompatibleDLEngine(String framework, String version, boolean cpu, boolean gpu, String jarsDirectory).

  • framework: the Deep Learning framework that should be used. Currently JDLL supports torchscript for Pytorch models, tensorflow_saved_model_bundle for Tensorflow models and onnx for ONNX models. The list of possible frameworks can be found here.
  • version: the version of the DL framework that should be loaded. Several Deep Learning framework versions can be installed at the same time, and this field helps select the right one and avoid errors. The safest version to load a model with is the one that was used to train it. If the selected version is not installed, the method will look for a compatible alternative among the installed engines of the same framework and, if one is found, return its EngineInfo instance.
  • jarsDirectory (optional): directory where all the engines are installed. By default it is set to a folder named engines in the directory from which the software is being run. However, if engines are installed in other locations, the engines directory can be changed.

Below there are some examples exploring the use of the method and version compatibility. Note that the examples assume that the engines installed are the ones shown in the picture above.

// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` can be used,
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "torchscript";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";


// Define first a version installed
String installedVersion = "1.9.1";
// Define now a version not installed but compatible with the installed ones
String notInstalledVersion = "1.9.0";
// Define now a version not installed and not compatible
String notCompatibleVersion = "2.0.0";


EngineInfo installedEngineInfo = EngineInfo.defineCompatibleDLEngineGPU(framework, installedVersion, enginesDir);
EngineInfo notInstalledEngineInfo = EngineInfo.defineCompatibleDLEngineGPU(framework, notInstalledVersion, enginesDir);
EngineInfo notCompatibleEngineInfo = EngineInfo.defineCompatibleDLEngineGPU(framework, notCompatibleVersion, enginesDir);

System.out.println("Installed is valid: " + (installedEngineInfo != null));
System.out.println("Compatible is valid: " + (notInstalledEngineInfo != null));
System.out.println("Not compatible is valid: " + (notCompatibleEngineInfo != null));

Output:

Installed is valid: true
Compatible is valid: true
Not compatible is valid: false

As Pytorch 2.0.0 is neither installed nor compatible with any of the installed versions, the method returns null.

Now check the version of the engines in the EngineInfo instances created:

System.out.println("Version for instance created with 1.9.1: " + installedEngineInfo.getVersion());
System.out.println("Version for instance created with 1.9.0: " + notInstalledEngineInfo.getVersion());

Output:

Version for instance created with 1.9.1: 1.9.1
Version for instance created with 1.9.0: 1.9.1

Note that in both cases the version used is 1.9.1. For the first case this is obvious because it was the argument passed to the method, but for the second case it differs from the argument: since 1.9.0 is not installed, the closest compatible installed version is used instead.

Finally, if none of the installed engines of the requested framework supports GPU, the method will return null.

// Define the information for the wanted engine
// Note that in order to specify the `pytorch` framework both the `pytorch` and `torchscript` can be used,
// for tensorflow both `tensorflow` and `tensorflow_saved_model_bundle` work too.
String framework = "tensorflow_saved_model_bundle";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";

// Define first a version installed or compatible with one of the installed ones
String installedVersion = "1.15.0";

EngineInfo installedEngineInfo = EngineInfo.defineCompatibleDLEngineGPU(framework, installedVersion, enginesDir);
System.out.println("Installed is valid: " + (installedEngineInfo != null));

Output:

Installed is valid: false

EngineInfo.getLoadedVersions(String framework, String version)

For the selected framework and version, this method returns null if no compatible version has already been loaded in the process, or a String corresponding to the compatible engine version that was previously loaded.

It is usually not possible to load different versions of the same DL framework in the same process; an error will be thrown, and sometimes the JVM might even crash due to problems such as memory allocation errors (more info here). Thanks to this method it is possible to know whether any engine from the same framework has already been loaded and avoid the related problems, as shown in the sketch after the argument list below.

  • framework: the DL framework of interest. The list of possible frameworks can be found here.
  • version: the version of the framework of interest.
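
As an illustration, a minimal sketch that checks whether a torchscript engine has already been loaded before deciding which version to use (the framework and version values are just examples):

String framework = "torchscript";
String wantedVersion = "1.9.1";
// null if no compatible version of this framework has been loaded in the current process
String alreadyLoaded = EngineInfo.getLoadedVersions(framework, wantedVersion);
if (alreadyLoaded == null) {
    System.out.println("No " + framework + " engine loaded yet, it is safe to load " + wantedVersion);
} else {
    System.out.println("Version " + alreadyLoaded + " is already loaded, reuse it to avoid errors");
}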

Non-static methods

getFramework()

Get the framework name of the specific EngineInfo instance.

getVersion()

Get the version of the framework for the engine defined by the specific EngineInfo instance.

getJavaVersion()

Get the Java version of the framework for the engine defined by the specific EngineInfo instance. The Java version might differ from the version of the framework in Python.

isGPU()

Whether the engine defined by the specific EngineInfo instance supports GPU or not.

isCPU()

Whether the engine defined by the specific EngineInfo instance supports CPU or not.

getDeepLearningVersionJarsDirectory()

For the EngineInfo instance, return its directory of installed engines, that is, the directory where it looked for the engines.
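
For illustration, the following sketch inspects an EngineInfo instance using the getters above. It assumes the Pytorch 1.9.1 engine and the engines directory used in the previous examples.

String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo engineInfo = EngineInfo.defineDLEngine("torchscript", "1.9.1", true, true, enginesDir);

System.out.println("Framework: " + engineInfo.getFramework());
System.out.println("Framework version: " + engineInfo.getVersion());
System.out.println("Java engine version: " + engineInfo.getJavaVersion());
System.out.println("Supports CPU: " + engineInfo.isCPU());
System.out.println("Supports GPU: " + engineInfo.isGPU());
System.out.println("Engines directory: " + engineInfo.getDeepLearningVersionJarsDirectory());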
