This module contains the Deep Java Library (DJL) EngineProvider for DLR.
It is based on Neo DLR.
We don't recommend that developers use the classes in this module directly. Using these classes will couple your code to DLR and make switching between engines difficult.
DLR is a DL library with limited support for NDArray operations. Currently, it only covers the basic NDArray creation methods. To better support the necessary preprocessing and postprocessing, you can use one of the other engines along with it to run in a hybrid mode. For more information, see Hybrid Engine.
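As a rough sketch of what hybrid use looks like, the snippet below loads a model compiled for DLR through the standard Criteria API and runs a single inference. The class name, model path, and input shape are placeholders, and it assumes a full engine (such as PyTorch or MXNet) is also on the classpath to provide the richer NDArray operations.

```java
import java.nio.file.Paths;

import ai.djl.inference.Predictor;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.NDManager;
import ai.djl.ndarray.types.Shape;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;
import ai.djl.translate.NoopTranslator;

public class DlrHybridExample {

    public static void main(String[] args) throws Exception {
        // The model path below is a placeholder for a model already compiled for DLR.
        Criteria<NDList, NDList> criteria =
                Criteria.builder()
                        .setTypes(NDList.class, NDList.class)
                        .optModelPath(Paths.get("/path/to/your/compiled/dlr/model"))
                        .optEngine("DLR") // run inference with the DLR engine
                        .optTranslator(new NoopTranslator())
                        .build();

        try (ZooModel<NDList, NDList> model = criteria.loadModel();
                Predictor<NDList, NDList> predictor = model.newPredictor();
                NDManager manager = model.getNDManager().newSubManager()) {
            // Preprocessing and postprocessing beyond simple creation ops are
            // delegated to the full engine when running in hybrid mode.
            NDArray input = manager.ones(new Shape(1, 3, 224, 224));
            NDList output = predictor.predict(new NDList(input));
            System.out.println(output.singletonOrThrow().getShape());
        }
    }
}
```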
The latest javadocs can be found here.
You can also build the latest javadocs locally using the following command:
```sh
# for Linux/macOS:
./gradlew javadoc
```
The javadoc output is generated in the `build/doc/javadoc` folder.
You can pull the DLR engine from the central Maven repository by including the following dependency:
```xml
<dependency>
    <groupId>ai.djl.dlr</groupId>
    <artifactId>dlr-engine</artifactId>
    <version>0.18.0</version>
    <scope>runtime</scope>
</dependency>
```
By default, DJL will download the DLR native libraries into the cache folder the first time you run DJL. It will automatically determine the appropriate jars for your system based on the platform and GPU support.
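As a quick sanity check that the dependency is set up and the native library can be resolved, you can list the engines DJL discovers and resolve DLR explicitly. A minimal sketch; the class name is just for illustration:

```java
import ai.djl.engine.Engine;

public class DlrEngineCheck {

    public static void main(String[] args) {
        // Engines DJL discovered on the classpath (e.g. "DLR" plus any hybrid engine).
        System.out.println(Engine.getAllEngines());

        // Resolving the engine triggers loading of the DLR native library.
        Engine dlr = Engine.getEngine("DLR");
        System.out.println(dlr.getEngineName() + " " + dlr.getVersion());
    }
}
```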
If you don't have network access at runtime, you can instead choose a native library based on your platform.
For macOS, you can use the following library:
```xml
<dependency>
    <groupId>ai.djl.dlr</groupId>
    <artifactId>dlr-native-cpu</artifactId>
    <version>1.6.0</version>
    <scope>runtime</scope>
    <classifier>osx-x86_64</classifier>
</dependency>
```
For Linux, you can use the following library:
```xml
<dependency>
    <groupId>ai.djl.dlr</groupId>
    <artifactId>dlr-native-cpu</artifactId>
    <version>1.6.0</version>
    <scope>runtime</scope>
    <classifier>linux-x86_64</classifier>
</dependency>
```
You can use an environment variable to specify a custom DLR native library:
```sh
export DLR_LIBRARY_PATH=path/to/your/dlr
```
The DLR engine is still under development. The supported platforms are currently limited to macOS and Linux CPU. If you would like to use other platforms, please let us know.
The TVM runtime itself doesn't support multi-threading. As a result, when you create a new Predictor, we copy the TVM model to avoid sharing state between predictors. We are still actively testing multi-threading capability.
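In practice this means each thread should create and use its own Predictor rather than sharing one. A minimal sketch, assuming a model already loaded as in the example above; the class and method names are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import ai.djl.inference.Predictor;
import ai.djl.ndarray.NDList;
import ai.djl.repository.zoo.ZooModel;

public class PerThreadPredictors {

    // Each worker creates its own Predictor; the underlying TVM model is copied
    // per Predictor, so no state is shared across threads.
    static void runInference(ZooModel<NDList, NDList> model, NDList input) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> {
                try (Predictor<NDList, NDList> predictor = model.newPredictor()) {
                    NDList output = predictor.predict(input);
                    System.out.println(output.singletonOrThrow().getShape());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}
```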