-
“The Netflix observability team's future plans with DJL include trying out its training API, scaling usage of transfer learning inference, and exploring its bindings for PyTorch and MXNet to harness the power and availability of transfer learning.”
-- Stanislav Kirdey, Engineer at Netflix observability team
-
“Using DJL allowed us to run large batch inference on Spark for PyTorch models. DJL helped reduce inference time from over six hours to under two hours.”
-- Xiaoyan Zhang, Data Scientist at TalkingData
-
“DJL enables us to run models built with different ML frameworks side by side in the same JVM without infrastructure changes.”
-- Hermann Burgmeier, Engineer at Amazon Advertising team
-
“Our science team prefers using Python. Our engineering team prefers using Java/Scala. With DJL, the data science team can build models in different Python APIs such as TensorFlow, PyTorch, and MXNet, and the engineering team can run inference on these models using DJL. We found that our batch inference time was reduced by 85% by using DJL.”
-- Vaibhav Goel, Engineer at Amazon Behavior Analytics team