XIII. Load and run models (Summary)
In order to load a model, the library first needs to know which framework the model is going to be loaded with, and then where the model of interest is located.
The user needs to give information about the DL framework. For that, the creation of an object called EngineInfo is required. An EngineInfo object has to be created with the framework name as given by the Bioimage.io specs: Tensorflow should be tensorflow_saved_model_bundled, PyTorch for Java should be torchscript and Onnx should be onnx.
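As a quick reference, this correspondence between frameworks and Bioimage.io weight-format names can be sketched as a simple map (the class and variable names here are illustrative, not part of the JDLL API):

```java
import java.util.Map;

public class FrameworkNames {
    // Bioimage.io weight-format name for each framework, as used by EngineInfo
    static final Map<String, String> BIOIMAGEIO_NAMES = Map.of(
            "tensorflow", "tensorflow_saved_model_bundled",
            "pytorch", "torchscript",
            "onnx", "onnx");

    public static void main(String[] args) {
        // Look up the Bioimage.io name for the PyTorch (torchscript) framework
        System.out.println(BIOIMAGEIO_NAMES.get("pytorch")); // torchscript
    }
}
```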
The other required parameters are the version of the framework in Python (it sometimes differs from the Java API version) that is to be loaded (1.15.0, 1.9.1, 15...) and the directory where all the engine folders are stored. Following the previous example, this directory would be C:\Users\carlos\icy\engines.
With this information an example code snippet would be:
String framework = "pytorch";
String version = "1.9.1";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";
EngineInfo engineInfo = EngineInfo.defineDLEngine(framework, version, enginesDir);
The engineInfo object is needed to know which of the engines has to be loaded. Note that EngineInfo.defineDLEngine(...) will only try to load exactly the engine that is specified. If that engine is not installed, the method will fail when trying to load it.
In order to check whether an engine version is installed:
String engine = "tensorflow";
String version = "1.13.1";
String enginesDir = "/path/to/engines";
boolean installed = InstalledEngines.checkEngineVersionInstalled(engine, version, enginesDir);
It is also possible to load an engine version compatible with the wanted one. Compatible engine versions are those from the same DL framework that share the same major version number. For example, Pytorch 1.13.1 and Pytorch 1.11.0 are compatible, but Tensorflow 1.15.0 and Tensorflow 2.7.0 are NOT compatible.
The following method can be used to try to load a compatible engine version if the particular version does not exist:
String framework = "pytorch";
String version = "1.9.1";
String enginesDir = "C:\Users\carlos\icy\engines";
EngineInfo engineInfo = EngineInfo.defineCompatibleDLEngine(framework, version, enginesDir);
In this case, if Pytorch 1.9.1 is not installed but Pytorch 1.13.1 is, the model will be loaded with Pytorch 1.13.1 instead of failing. In order to know which version has actually been loaded:
System.out.println(engineInfo.getVersion());
NOTE THAT THIS MIGHT BE A SOURCE OF ERRORS, AS NOT EVERY ENGINE THAT JDLL DEFINES AS COMPATIBLE IS ACTUALLY COMPATIBLE. If Pytorch 1.12.0 includes a new functionality that was not included in Pytorch 1.9.1 and we try to load a Pytorch 1.12.0 model that uses that functionality with the Pytorch 1.9.1 engine, WE WILL GET AN ERROR IN THE MODEL INFERENCE STEP.
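Because of this, it can be worth comparing the loaded version against the requested one and at least confirming the major version agrees, which is the rule the compatibility notion is based on. A minimal sketch, with the version strings hardcoded for illustration (in practice the loaded version would come from engineInfo.getVersion(), and the class and helper names are not part of the JDLL API):

```java
public class VersionCheck {
    // True if both version strings share the same major version,
    // e.g. "1.13.1" vs "1.9.1" -> true, "1.15.0" vs "2.7.0" -> false
    static boolean sameMajor(String a, String b) {
        return a.split("\\.")[0].equals(b.split("\\.")[0]);
    }

    public static void main(String[] args) {
        String requested = "1.9.1";
        String loaded = "1.13.1"; // in practice: engineInfo.getVersion()
        if (!loaded.equals(requested)) {
            System.out.println("Warning: requested " + requested + " but loaded "
                    + loaded
                    + (sameMajor(requested, loaded) ? " (same major version)" : ""));
        }
    }
}
```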
This engine info must be used to load the corresponding model. Model loading requires 3 parameters: the model folder (the directory where all the files for a model are stored), the model source (the path to the file specified in the weights→source field of the rdf.yaml file) and the EngineInfo object previously created.
An example code to load a model would be:
String modelPath = "C:\\Users\\carlos\\icy\\models\\EnhancerMitochondriaEM2D_13102022_171141";
String modelSource = modelPath + "\\weights-torchscript.pt";
Model model = Model.createDeepLearningModel(modelPath, modelSource, engineInfo);
The above piece of code calls the corresponding engine instance in a separate classloader and loads the model in its corresponding engine. This model can now be used to run inference.
In order to load a model and avoid conflicts with other engines, JDLL creates a separate classloader that loads the JARs needed to run the corresponding DL framework. The new classloader is a URLClassLoader using the context classloader (Thread.currentThread().getContextClassLoader()) as its parent.
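This behaviour can be sketched in plain Java: a URLClassLoader whose parent is the context classloader. In JDLL's case the URL array would point to the engine JARs; it is left empty here, and the class and variable names are purely illustrative:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderSketch {
    public static void main(String[] args) throws Exception {
        // The parent is the current thread's context classloader
        ClassLoader parent = Thread.currentThread().getContextClassLoader();
        // JDLL would fill this array with the JARs of the chosen engine
        URL[] engineJars = new URL[0];
        try (URLClassLoader engineLoader = new URLClassLoader(engineJars, parent)) {
            // Classes not found among the engine JARs are delegated to the parent
            System.out.println(engineLoader.getParent() == parent); // true
        }
    }
}
```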
However, there are some cases in which the application or software using JDLL has designed its own approach to handle classloading. For those cases, the methods used to create Deep Learning models can accept one extra argument at the end: ClassLoader classLoader.
The new call to the method would be the same as at the beginning of the section but with one more argument:
Model model = Model.createBioimageioModel(modelFolder, enginesFolder, classloader);
If classloader=null, the method will fall back to the context classloader (as if the argument had not been provided): Thread.currentThread().getContextClassLoader().
Once the model and tensors have been defined, everything is ready to run inference. The process should be relatively easy to implement in the main software. All the input tensors should be put together in a List, and the same for the output tensors. Then the model should be called as model.runModel(...). The output list of tensors is then updated in place.
// List that will contain the input tensors
List<Tensor> inputTensors = new ArrayList<Tensor>();
// List that will contain the output tensors
List<Tensor> outputTensors = new ArrayList<Tensor>();
inputTensors.add(inputTensor);
outputTensors.add(outputTensor);
model.runModel(inputTensors, outputTensors);
// The results of the inference will be stored
// in the Tensors of the 'outputTensors' list
Please click here for more information about how to load Bioimage.io models more easily.
Note that running inference with Bioimage.io models requires the same steps as with standard models; only loading has a simpler alternative.