Deep Learning Toolkit considerations for emerging data scientists

Overview

Disclaimer: This blog is my own opinion and not that of my employer; however, it should be noted that I am a Microsoft employee, and this post may reflect that perspective.

Update: New version of these benchmarks is being worked on and can be tracked here: http://dlbench.comp.hkbu.edu.hk/

This post is a departure from my usual focus on Power BI. Enterprise deployment scenarios for Power BI have been a great subject for me. However, in my day job I work on a variety of data platform related subjects. These are my findings on deep learning toolkits and what you should know before getting too deep into them; in particular, pay attention to the section on Keras.

Deep learning is popular for image processing (computer vision, facial recognition, emotion detection), natural language processing (sentiment analysis, translation), and is even starting to find its way into areas such as customer churn. Neural networks with many layers are used to increase the precision of a prediction, as opposed to more traditional statistical algorithms such as linear regression.

There are several popular open source deep learning toolkits, including Caffe, Torch, TensorFlow, CNTK (now the Cognitive Toolkit), and MXNet.

This post will mostly reference TensorFlow and CNTK for reasons established in the section on Keras.

Python vs R

This debate will rage on for probably another decade, similar to how I remember the Java vs C# debate as a developer in the early 2000s. From what I have seen, Python appears to have more support in the area of deep learning than R. All but Torch support Python integration, while only TensorFlow and MXNet support R directly.

Toolkit Performance

One of the most important aspects of a deep learning toolkit is performance.

Let's consider a couple of scenarios:

In the software development cycle, a poorly indexed table could be the difference between 5 seconds and 5 minutes for a database call. This is annoying, but it is not on the critical path to meeting a deadline. It takes many developer hours and iterations to build the code around that database call, making the index issue less significant, though of course still something that should be addressed.

In deep learning, on the other hand, a toolkit that trains a model twice as fast as another could mean the difference between 1 and 2 days of training time. The iteration cycle is greatly impacted: retraining a model 5 times could be the difference between 1 week and 2 weeks to deliver results. This is significant!

Benchmarking Performance among leading toolkits

Benchmarking State-of-the-Art Deep Learning Software Tools is an academic paper (latest revision February 2017) comparing the most popular deep learning toolkits on CNN, FCN, and LSTM workloads. These acronyms are neural network types you will want to familiarize yourself with if you have not already. There is a new edX course just starting where you can learn all about these concepts.

Below I have included some links that may point to other frameworks, but their explanations seemed easier to understand.

Convolutional Neural Network (CNN) – used primarily for image processing. Popular implementations include:
  • AlexNet – an 8-layer CNN, circa 2012, that nearly halved the error rate of earlier approaches
  • ResNet-50/101/152/etc. – a deep residual learning network with 50/101/152/etc. layers respectively, circa 2015, achieving an error rate of 3.57%, roughly a 4x improvement over AlexNet

Fully Convolutional Neural Network (FCN) – a variation of a CNN that doesn’t include the fully connected layers

Recurrent Neural Network (RNN) & Long Short-Term Memory (LSTM) – widely used for natural language processing
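
To make these acronyms a little more concrete, here is a minimal sketch of what a tiny CNN and a tiny LSTM definition look like in Keras (covered later in this post); the layer sizes and input shapes are arbitrary and purely illustrative.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, LSTM

# A tiny CNN for 28x28 grayscale images (illustrative sizes only)
cnn = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation='softmax'),
])

# A tiny LSTM for sequences of 100 steps with 50 features each
rnn = Sequential([
    LSTM(64, input_shape=(100, 50)),
    Dense(1, activation='sigmoid'),
])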

This paper is extremely thorough, and while our instinct is to scroll immediately to page 7 and start interpreting the bar charts, it is important to note how the tests were run and the results gathered, as described on pages 1-6.

One summary table that doesn’t fully represent all results is shown below.

Shaohuai Shi, Qiang Wang, Pengfei Xu, Xiaowen Chu, “Benchmarking State-of-the-Art Deep Learning Software Tools”

As you interpret these results, as well as the rest of those in the paper, you will notice three glaring observations:

  • There is not one toolkit that has the best performance across all neural network types. In fact, there can be wide variation in performance rank for a single toolkit based on the number of CPUs or GPUs used.
  • Google TensorFlow is arguably the most popular of all of these toolkits, yet the results published in this paper show that, apart from a few cases, it is quite average if not consistently slower than the others.
  • CNTK is orders of magnitude better than all of the competition on LSTM

Note on Google TensorFlow

CNTK performs better than TensorFlow overall, and by orders of magnitude in some cases. As emerging data scientists start to pick toolkits for deep learning, TensorFlow seems to be a popular choice. In many cases it will have desirable performance, but putting “all your eggs in one basket”, so to speak, may not be the best approach here.

I actually am a fan of TensorFlow, and picking a toolkit on performance alone would not be wise. TensorFlow has some neat features, one being TensorBoard, which helps visualize the execution graph (note that CNTK also supports TensorBoard). Google has also recently introduced a dedicated TensorFlow processor (TPU) on its cloud platform that will surely speed up processing time. But if you are doing NLP (natural language processing), it is quite obvious you would want to use CNTK for performance reasons…

What is an emerging data scientist to do?

This is where Keras comes in…

Keras

Keras is an abstraction layer that allows you to run the same code on top of both TensorFlow and CNTK (as well as Theano, another deep learning toolkit) as the backend.

For big data people, I would draw an analogy between Keras and the use of Hive as an abstraction layer over MapReduce. It is rare to actually write MapReduce code anymore given the evolution of libraries around big data, and that is what Keras reminds me of compared to writing TensorFlow (or CNTK) code directly. For instance, TensorFlow on its own requires you to write the formula for mean squared error to pass into the model. Although trivial, this is annoying, and Keras provides a lot of shortcuts that make life much easier and often reduce code by 50%.
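
As a rough illustration of that point (a minimal sketch, with y_true, y_pred, and model assumed to already exist), here is mean squared error written by hand in raw TensorFlow versus configured in one line with Keras:

import tensorflow as tf

# Raw TensorFlow: define the mean squared error formula yourself
loss = tf.reduce_mean(tf.square(y_pred - y_true))

# Keras: the same loss is a one-line configuration on the model
model.compile(optimizer='sgd', loss='mean_squared_error')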

In the keras.json file (created under ~/.keras the first time Keras runs), the backend can be configured by changing one line between “tensorflow” and “cntk”:

{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "cntk",
    "image_data_format": "channels_last"
}

To verify which backend is being used, from Python simply enter

from keras import backend

or at any time you can access the _BACKEND variable from the same module to see the result.
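
For example, a quick check from a Python session might look like the sketch below (the backend() helper and the _BACKEND variable are both exposed by recent Keras versions):

from keras import backend as K

print(K.backend())   # e.g. 'cntk' or 'tensorflow'
print(K._BACKEND)    # same value via the module-level variable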

These details are all described clearly on the keras.io site referenced above.

…and for all of the R users, there is a nice CRAN package available too:
https://cran.r-project.org/web/packages/kerasR/vignettes/introduction.html

From my somewhat limited experience, I can say that using Keras on top of TensorFlow or CNTK keeps me from pulling my hair out. Kudos to the creators and contributors to this library. Maybe we can dive deeper into this in a future post.

Transfer Learning

Transfer learning is the ability to take a preexisting model and use it as the base for another model. For instance, you can take a model that has classified millions of images and trained for possibly weeks, and apply it to new images that are more specific to your scenario. This allows for more rapid model development when you can build on preexisting work.
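
As a minimal sketch of the idea in Keras (assuming the ResNet50 model from keras.applications with ImageNet weights, and a purely illustrative 10-class problem):

from keras.applications import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Load a network pre-trained on ImageNet, without its classification head
base = ResNet50(weights='imagenet', include_top=False)

# Freeze the pre-trained layers so only the new head gets trained
for layer in base.layers:
    layer.trainable = False

# Add a small head for our own (illustrative) 10-category problem
x = GlobalAveragePooling2D()(base.output)
predictions = Dense(10, activation='softmax')(x)

model = Model(inputs=base.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy')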

CNTK has a really nice tutorial on this technique here:
https://docs.microsoft.com/en-us/cognitive-toolkit/build-your-own-image-classifier-using-transfer-learning

TensorFlow also has its own “Inception” model that can be used for transfer learning.

This concept is the basis for the next section, “Deep Learning as a Service”.

Deep Learning as a Service

I don’t believe this has actually become a term yet. I am just making it up as I go here 🙂

The concept of transfer learning opens up new capabilities for applying previously trained models to your own scenarios more easily.

Microsoft has developed a few interesting services that make deep learning very accessible to end users.

One is the Custom Vision Service: https://www.customvision.ai/

This allows you to bring your own images to train on and to reinforce the model in an iterative fashion.

Another is QnA Maker: https://qnamaker.ai/

This allows you to build a bot in minutes that works through FAQ and document content on a subject that is important to your organization. The bot can then interact in an intelligent way without you having to use a deep learning toolkit or write a bunch of code.

I did one using the Power BI FAQ pages and it worked really well.

What is interesting about these services is that they are actually training a model on YOUR data, not simply tapping into a pre-existing model. You are able to influence the results.

I believe we will continue to see many more services like this pop up that further “democratize” AI for the masses.

Conclusion

I would never claim to be a data scientist, but many of us are doing more and more data-science-like activities. As a person moving from data and business intelligence into machine learning and artificial intelligence, I feel the above content would have saved me a lot of time. There is plenty of getting-started content out there, so use your Google/Bing search skills to get deeper into it.