Keras is an API (Application Programming Interface) originally designed to let Tensorflow developers focus on the deep-learning problems of interest rather than on the minutiae of implementing their models in Tensorflow. Since its introduction, Keras has become popular not only among Tensorflow developers but also among developers for other platforms such as MXNet. In the upcoming Tensorflow 2.0 (now in beta), the Keras API will be built into Tensorflow itself.
APIs are ubiquitous in the world of software. Like all software tools, there are good APIs and there are, well, the others. Keras is an example of an API that has pretty much got it right.
Keras takes care of the low-level implementation details
Technically, this is the job of any good API. The developer can focus on the job he or she wants to do rather than on the details required by the underlying library. This means that deep-learning code can be developed faster.
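To make the point concrete, here is a minimal sketch of how little code a small Keras classifier takes. The layer sizes and the 28x28 grayscale input shape are illustrative assumptions, not details from the article.

```python
# Minimal sketch of a small image classifier in Keras.
# Layer widths and input shape are illustrative choices.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # flatten 2-D image to a vector
    keras.layers.Dense(128, activation="relu"),    # hidden layer
    keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Every detail of weight allocation, graph construction, and gradient computation is handled by the backend; the developer writes only the model description.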
Keras does not restrict customization
While Keras makes it easier to build the structures most commonly used in deep learning, the developer always has the option of crafting custom code using the underlying library when special features or capabilities are required.
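As a sketch of that escape hatch, the standard Keras subclassing API lets a developer define a layer with custom behavior and weights. The layer name `ScaleShift` and its behavior are illustrative, not from the article.

```python
# Sketch of a custom Keras layer via subclassing.
# "ScaleShift" is an illustrative example, not a built-in layer.
import tensorflow as tf
from tensorflow import keras

class ScaleShift(keras.layers.Layer):
    """Learnable elementwise scale and shift: y = x * w + b."""

    def build(self, input_shape):
        # Weights are created lazily, once the input shape is known.
        self.w = self.add_weight(shape=(input_shape[-1],),
                                 initializer="ones", trainable=True)
        self.b = self.add_weight(shape=(input_shape[-1],),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        return inputs * self.w + self.b

layer = ScaleShift()
out = layer(tf.ones((2, 4)))  # weights are built on first call
```

A custom layer like this drops into a `Sequential` or functional model exactly like a built-in layer, so customization does not force the developer to abandon the rest of the API.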
Keras is not tied to a particular backend
The primary job of most APIs is to provide an interface to a specific library of program code, and the limitations of that library immediately become the limitations of the API. Although Keras was designed to be used with Google’s Tensorflow, it can also serve as the front end for other libraries of interest in the field of deep learning. These include Microsoft’s CNTK, Intel’s PlaidML, and the Python math library Theano. Thanks to an interface contributed by Amazon, Keras can also be used with the Apache deep learning project MXNet.
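In multi-backend Keras, switching libraries does not require changing model code: the backend is selected in the `~/.keras/keras.json` configuration file or via the `KERAS_BACKEND` environment variable, as sketched below (backend names are those documented for the multi-backend releases).

```shell
# Select the backend for one run via an environment variable:
KERAS_BACKEND=theano python -c "import keras; print(keras.backend.backend())"

# Or set it permanently in ~/.keras/keras.json, e.g.:
#   { "backend": "cntk", "image_data_format": "channels_last" }
```

The same model definition then runs unchanged against whichever library is configured.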
Keras’ multiple backends mean extended hardware support
Since Keras can utilize different backends, it can exploit the hardware those backends support. Unfortunately, in the world of deep learning, support for advanced hardware is almost always version- and vendor-specific. For example, Keras’ built-in support for multiple Graphics Processing Units (GPUs) is limited to the Tensorflow backend. In contrast, the PlaidML backend provides OpenCL support and therefore works with AMD GPUs as well as NVidia’s. Unfortunately, PlaidML does not yet support multiple GPUs; the PlaidML team says “real soon now”.
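With the Tensorflow backend, one way to use multiple GPUs is the distribution API introduced with Tensorflow 2.0; the sketch below assumes `tf.distribute.MirroredStrategy`, which replicates the model across available GPUs and falls back to the CPU when none are present.

```python
# Sketch: multi-GPU data parallelism on the Tensorflow backend.
# MirroredStrategy uses all visible GPUs; on a CPU-only machine
# it runs with a single replica.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():  # variables created here are mirrored across devices
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1, input_shape=(4,))
    ])
    model.compile(optimizer="sgd", loss="mse")

print("replicas in sync:", strategy.num_replicas_in_sync)
```

The model code itself is unchanged; only the creation site moves inside the strategy scope.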
Keras provides for effective deployment as well as development
Deep-learning APIs face a difficulty not encountered in many development systems. Deep-learning models must be built on substantial systems with great computing power, yet once a model is created, it might be deployed on a tiny device. Image recognition systems are a good example: they can take days to train on powerful servers, but once training is complete, a user can download an app to a smartphone and start recognizing images.
Keras models can be deployed on smartphones and edge devices, though not all in the same way. Support for Keras models on iOS is provided by Apple, not Keras. A Tensorflow runtime is available for Android, and support is also available for embedded devices and development boards such as the Raspberry Pi.
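For Android and embedded boards, the usual path is to convert a trained Keras model to the Tensorflow Lite format; the sketch below assumes the TF 2.x `TFLiteConverter` API (the model shown is a trivial stand-in for a trained one).

```python
# Sketch: converting a Keras model to Tensorflow Lite for
# deployment on Android or embedded devices. The tiny model
# here stands in for a real trained model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=(3,))
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()  # serialized flat buffer, ready to ship

# In practice the buffer is written out and bundled with the app:
# open("model.tflite", "wb").write(tflite_bytes)
```

The resulting `.tflite` file is loaded on-device by the Tensorflow Lite interpreter, so the heavyweight training framework never ships with the app.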
Tensorflow is at present the most popular deep-learning platform, and Google’s commitment to Tensorflow ensures that Keras will continue to provide a top-quality deep-learning API. Perhaps more importantly, however, developers who take the time to learn Keras are not limiting themselves to Tensorflow; they are mastering skills useful on platforms such as MXNet and Microsoft’s CNTK.