Deep learning is a fast-growing field within artificial intelligence, and deep neural networks have seen rapid adoption in recent years. Successes in machine vision, speech recognition, and natural language processing show the technology is ripe for enterprise adoption.
Eclipse Deeplearning4j targets enterprises looking to implement deep learning technologies. Many large organizations have already adopted big data technologies such as Apache Spark, Apache Hadoop, and Apache Kafka for building large-scale data pipelines and integrating various data warehouses.
Deeplearning4j will integrate with these popular open-source technologies to make it easy for enterprises to adopt deep learning as part of their existing stacks.
Deep learning technologies are also being used at the edge to support Internet of Things (IoT) deployments. Android is an increasingly popular OS for smart edge devices. Deeplearning4j primarily targets large-scale enterprise environments and embedded environments on Android, but in the future it will be deployable in other kinds of environments via C export.
In its current incarnation, Deeplearning4j is a software distribution of several projects targeted at integrating with enterprise environments. The project as submitted to the Eclipse Foundation would encompass everything from reinforcement learning to integrations with various platforms, with the goal of building machine learning workflows from ETL through training to inference.
The main goal of the project is to be a production runtime that imports models from the Python ecosystem and runs them at scale, offering a stable and secure product to large organizations.
Most deep learning frameworks in today's environment are simply tensor libraries implementing automatic differentiation, with the rest of the production stack being left as an "exercise for the reader". Production stack refers to everything from integration with web development frameworks to dealing with messy data.
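The "tensor library implementing automatic differentiation" core mentioned above can be illustrated with a minimal, self-contained sketch. This is not DL4J or ND4J code; it uses forward-mode dual numbers, the simplest form of the technique, whereas production frameworks typically apply reverse-mode differentiation over whole tensors:

```java
// Minimal forward-mode automatic differentiation using dual numbers.
// Illustrative sketch only: real frameworks apply the same chain-rule
// bookkeeping (usually reverse-mode) across entire tensor graphs.
public class Autodiff {
    // A dual number carries a value and its derivative together.
    static final class Dual {
        final double val, grad;
        Dual(double val, double grad) { this.val = val; this.grad = grad; }
        Dual add(Dual o) { return new Dual(val + o.val, grad + o.grad); }
        // Product rule: (uv)' = u'v + uv'
        Dual mul(Dual o) { return new Dual(val * o.val, grad * o.val + val * o.grad); }
    }

    // Differentiate f(x) = x^2 + 3x at x by seeding the input's derivative with 1.
    static double dfdx(double x) {
        Dual dx = new Dual(x, 1.0);
        Dual three = new Dual(3.0, 0.0);          // constants have zero derivative
        Dual y = dx.mul(dx).add(three.mul(dx));   // y = x^2 + 3x
        return y.grad;                            // dy/dx = 2x + 3
    }

    public static void main(String[] args) {
        System.out.println(dfdx(2.0)); // 2*2 + 3 = 7.0
    }
}
```

Everything beyond this kernel (data ingestion, serving, monitoring) is the "production stack" that such frameworks leave to the user.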
Deeplearning4j also targets a mix of legacy big data environments and hybrid big data clusters (an emerging trend now that Apache YARN and Apache Mesos support GPUs).
Capitalizing on this trend, Deeplearning4j aims to provide smooth integration for large enterprise environments while also allowing access to more cutting-edge resources in settings that other deep learning projects don’t consider, including:
- J2EE application servers
- Hadoop and Spark
- Connecting to enterprise data warehouses
- JMX integration
- Application frameworks such as Spring and Play
- Proper Java annotation support
- Apache Camel and Spring Integration (via DataVec)
Deeplearning4j started in late 2013 as a project at Skymind (the company behind Deeplearning4j) and has grown quickly in feature scope and community since the project's inception.
Eclipse Deeplearning4j enables developers and large organizations to build deep learning applications, covering the whole deep learning workflow from data preprocessing through distributed training and hyperparameter optimization and production-grade deployment.
The goal of Eclipse Deeplearning4j is to provide a core set of components for building applications that incorporate AI. AI products within an enterprise often have a wider scope than just machine learning. The overall goal of the distribution is to provide smart defaults for building deep learning applications.
We define the machine learning product lifecycle as securely connecting to enterprise environments via Kerberos™ and other authentication protocols in order to:
- Connect to disparate data sources
- Use that data to build vectors that a neural network is capable of understanding
- Build and tune a neural network
- Deploy to production via REST, Spark, or embedded environments such as Android™ phones or Raspberry Pi devices
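The vectorization step above (turning raw records into numbers a network can consume) can be sketched in a few lines. This standalone example one-hot encodes a categorical field; it is illustrative only and does not use DataVec's actual transform API:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of "building vectors a neural network can understand":
// one-hot encode a categorical value against a known vocabulary.
// (DataVec provides this via transform schemas; this just shows the idea.)
public class Vectorize {
    static double[] oneHot(String value, List<String> vocabulary) {
        double[] vec = new double[vocabulary.size()];
        int idx = vocabulary.indexOf(value);
        if (idx >= 0) vec[idx] = 1.0;  // unknown values map to the zero vector
        return vec;
    }

    public static void main(String[] args) {
        List<String> colors = Arrays.asList("red", "green", "blue");
        System.out.println(Arrays.toString(oneHot("green", colors))); // [0.0, 1.0, 0.0]
    }
}
```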
Deeplearning4j can facilitate the process of building an application without relying on third-party providers for ETL libraries, tensor libraries, etc. Convention over configuration is key for scaling large software projects that will be maintained for long periods.
Most current projects in deep learning don't think about backwards compatibility with large enterprise applications, nor do they facilitate the building of applications. Instead, they optimize for flexibility and loose coupling (which is great for research). Deeplearning4j is the bridge between research in the lab and applications in the real world.
The Deeplearning4j software distribution contains the following components:
- Deeplearning4j: Neural network DSL (facilitates building neural networks integrated with data pipelines and Spark)
- ND4J: N-dimensional arrays for Java, a tensor library: "Eclipse January with C code and wider scope". The goal is to provide tensor operations and optimized support for various hardware platforms
- DataVec: An ETL library that vectorizes and "tensorizes" data, with support for connecting to various data sources and outputting n-dimensional arrays via a series of data transformations
- libnd4j: Pure C++ library for tensor operations, which works closely with the open-source library JavaCPP (JavaCPP was created and is maintained by a Skymind engineer, but it is not part of this project)
- RL4J: Reinforcement learning on the JVM, integrated with Deeplearning4j. Includes Deep Q-learning and A3C
- Jumpy: A Python interface to the ND4J library that integrates with NumPy
- Arbiter: Automatic hyperparameter tuning of neural networks using grid search, random search, and Bayesian optimization
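To give a concrete sense of the ND4J component listed above: a tensor library typically backs an n-dimensional array with a flat buffer plus shape and stride metadata. The following is a hedged, self-contained sketch of row-major strides, not ND4J's actual internals:

```java
// Illustrative sketch of how a tensor library can back an n-dimensional
// array with a flat buffer plus shape and stride metadata.
// Row-major strides: the last axis moves fastest through the buffer.
public class NdArraySketch {
    static int[] rowMajorStrides(int[] shape) {
        int[] strides = new int[shape.length];
        int step = 1;
        for (int i = shape.length - 1; i >= 0; i--) {
            strides[i] = step;
            step *= shape[i];
        }
        return strides;
    }

    // Map an n-dimensional index to an offset into the flat buffer.
    static int offset(int[] index, int[] strides) {
        int off = 0;
        for (int i = 0; i < index.length; i++) off += index[i] * strides[i];
        return off;
    }

    public static void main(String[] args) {
        int[] shape = {2, 3, 4};                     // a 2x3x4 tensor
        int[] strides = rowMajorStrides(shape);      // {12, 4, 1}
        System.out.println(offset(new int[]{1, 2, 3}, strides)); // 12 + 8 + 3 = 23
    }
}
```

Operations like transpose and slicing can then be implemented by rewriting the stride metadata rather than copying the buffer.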
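The grid search that Arbiter automates can be sketched in plain Java. The objective function below is a hypothetical stand-in for model training and validation; it is not Arbiter's API, just the idea behind it:

```java
import java.util.Locale;

// Illustrative grid search over two hyperparameters, with a stand-in
// objective in place of real model training. Arbiter wraps this idea
// with candidate generators, scoring functions, and result storage.
public class GridSearchSketch {
    // Hypothetical "validation loss", lowest near lr = 0.01, layerSize = 64.
    static double loss(double learningRate, int layerSize) {
        return Math.abs(Math.log10(learningRate) + 2) + Math.abs(layerSize - 64) / 64.0;
    }

    public static void main(String[] args) {
        double[] rates = {0.1, 0.01, 0.001};
        int[] sizes = {32, 64, 128};
        double bestLoss = Double.POSITIVE_INFINITY;
        double bestRate = 0;
        int bestSize = 0;
        for (double lr : rates)
            for (int size : sizes)
                if (loss(lr, size) < bestLoss) {   // evaluate every candidate, keep the best
                    bestLoss = loss(lr, size);
                    bestRate = lr;
                    bestSize = size;
                }
        System.out.printf(Locale.ROOT, "best: lr=%s size=%d%n", bestRate, bestSize);
    }
}
```

Random search and Bayesian optimization replace the exhaustive loop with smarter candidate selection, but the evaluate-and-keep-best structure is the same.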
The Eclipse Foundation closely matches the goals of the Deeplearning4j committers. The Eclipse community is also proven ground for Java projects, with a strong set of complementary projects that could benefit from an associated deep learning/AI project.
No known issues.
As soon as possible.
Better Python framework interoperability; automatic differentiation for a TensorFlow/PyTorch-like experience built on the ND4J framework; interpretability; a larger model zoo.
More layers, algorithms, and data pipeline integrations.