Welcome to Part 3 of my series on TensorFlow! If you haven’t had a chance to read Part 1 on TensorFlow and Google Colab or Part 2 on TensorFlow Serving, I highly recommend going back and checking those out. While this blog post can stand on its own, it will help to review those earlier parts, where I cover the details of how to build a model with TensorFlow.
For this post, we will be focusing on running Machine Learning models on constrained devices. More specifically, TensorFlow Lite allows us to deploy machine learning models on mobile and IoT devices. TensorFlow Lite is an open-source deep learning framework for on-device inference. This is huge! No longer do we need connectivity to the internet and a cloud-based system for a device at the edge to react to the data it’s sensing; it can all be done on the device, autonomously from any other system. Let’s dig in and find out how this is possible and how we can use this knowledge in our products and services!
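To make this concrete, here is a minimal sketch of the core TensorFlow Lite workflow: converting a TensorFlow model into a compact .tflite FlatBuffer that ships with a mobile or IoT app. The tiny untrained model below is purely illustrative (it is not the model used later in this post), and the optimization flag is optional.

```python
import tensorflow as tf

# Illustrative stand-in model; in practice you would convert your own
# trained model or a downloaded pre-trained one.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional: default optimizations (e.g. weight quantization) shrink the
# model for memory- and power-constrained edge devices.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

# The resulting FlatBuffer is what gets bundled into the on-device app.
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"Model size: {len(tflite_bytes)} bytes")
```

The conversion step is the same whether the target is a phone or a microcontroller; only the runtime that loads the file differs.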
Why Machine Learning at the Edge?
There are a variety of reasons you might want to deploy intelligence to the edge; just a few of them are:
- Latency – There’s no round-trip to a server, so your product and equipment can respond and react quickly.
- Privacy – No data needs to leave the device. This can be huge in cases where you have regulatory compliance requirements to follow.
- Connectivity – An internet connection isn’t required, so you can deploy your smart product anywhere in the world. It isn’t constrained by needing access to the internet via cellular, Wi-Fi, LoRa, or some other technology.
- Power consumption – Unfortunately, network connections are power-hungry. Without one, your device can run longer between recharges or remain in the field longer.
TensorFlow Lite works with a huge range of devices, from tiny microcontrollers to powerful mobile phones. If you want to dig in further, I highly suggest checking out the TensorFlow Lite Guide.
Let’s Dig In!
While we could try to train a full model in this tutorial, it’s often easiest to use a model that has already been trained for one of the more common tasks. Therefore, let’s plan to use a pre-trained model. For this tutorial, we’ll be focused on digit classification.
Additionally, there is a host of edge hardware we could run our model on, but we will keep it simple and run the model on an Android device. Android is an open-source, easily accessible platform; it’s the most-used mobile operating system in the world, and dozens of device models run it. At Lab651, we even have a client that runs Android on their edge devices to monitor the vehicle and driver performance of their fleet. Let’s dig in!
Have you ever wondered how, when you deposit a check using a picture from a mobile camera, the app is able to recognize the digits on the check so accurately and flag fraud before it hits the banking system? It’s digit classification! The project we’ll load is here:
In this example, you’ll need:
- Android Studio 3.2 (installed on a Linux, Mac, or Windows machine)
- An Android device, or an Android Emulator
Follow along as we create an interactive demonstration of loading a fully trained model to perform the classification of digits written on the screen of an Android device. The steps are pretty straightforward.
Step 1. Clone the TensorFlow examples source code
Clone the TensorFlow examples GitHub repository to your computer to get the demo application.
git clone https://github.com/tensorflow/examples
Step 2. Import the sample app to Android Studio
Open the TensorFlow source code in Android Studio. To do this, open Android Studio, select Import Project (Gradle, Eclipse ADT, etc.), and set the folder to examples/lite/examples/digit_classifier/android
Step 3. Run the Android app
Connect the Android device to the computer and be sure to approve any ADB permission prompts that appear on your phone. Select Run -> Run app. In the deployment target dialog, select the connected device on which the app should be installed. This will install the app on the device.
To test the app, open the app called Digit Classifier on your device. If you’re re-installing, you may need to uninstall the previous installation first.
A few details
The model that the application uses is actually NOT included in the repository. You might notice during the build process that there is a Gradle build file that downloads ‘mnist.tflite’, the model used to classify the digits written on the screen. You can use this model for your initial testing, but as you’ll see below, I also suggest you try training your own model and seeing how it might behave differently.
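To see what the Android app is doing under the hood, here is a sketch of running inference with `tf.lite.Interpreter` in Python. To keep the example self-contained it converts a tiny untrained model in memory; with the demo’s downloaded file you would instead pass `model_path="mnist.tflite"` to the interpreter.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the downloaded model: a tiny untrained classifier
# converted to TensorFlow Lite in memory.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# With the real file: tf.lite.Interpreter(model_path="mnist.tflite")
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A 28x28 grayscale "drawing" (all zeros here) as a stand-in for the
# bitmap the Android app captures from the screen.
digit = np.zeros(inp["shape"], dtype=np.float32)
interpreter.set_tensor(inp["index"], digit)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])
print("Predicted digit:", int(np.argmax(probs)))
```

The Android app follows the same pattern through the TensorFlow Lite Java API: load the model, feed it a preprocessed bitmap, and take the argmax of the ten output probabilities.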
Build Your Own Model
If you would like to go through the steps on Google Colab to build your own model, it’s not very difficult. There is a Python notebook that covers the steps and code to build your own model. Just follow this link.
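If you’d rather see the gist without opening Colab, the flow looks roughly like this. This is a sketch, not the notebook’s exact code: I use a deliberately tiny model and a single epoch to keep it quick, and the notebook may use a different architecture.

```python
import tensorflow as tf

# Load the MNIST handwritten-digit dataset and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A deliberately small model for speed; accuracy improves with more
# capacity and more epochs.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

# Evaluate, then convert the trained model to TensorFlow Lite.
_, accuracy = model.evaluate(x_test, y_test, verbose=0)
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("my_mnist.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"Test accuracy: {accuracy:.3f}, size: {len(tflite_bytes)} bytes")
```

The resulting file (the `my_mnist.tflite` name is my choice, not the app’s) can then replace the downloaded model in the Android project, matching whatever filename the app loads.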
Let’s Play with the App!
Carl Jung said, “The creation of something new is not accomplished by the intellect but by the play instinct.”
I couldn’t agree more. It’s often when we start to play around with technology that we expand our minds and begin to think about what’s possible. Therefore, I suggest you play around with the application after you get it running on your phone! It’s interesting to see how it might originally predict one number, but then change its prediction after you add more lines. Below are screenshots of some of the results I’ve received.
If you look at the results, the digit on the left (the number 3) is CORRECT, with high confidence. However, the screenshot on the right is unfortunately INCORRECT: it’s clearly an 8, but the model predicted the number 2. Whoops! As you can see, Machine Learning is not perfect. It’s your turn now! Play around with the application and see how it works for you. Try training your own model. Is it better or worse? What are some ways you could make it better?
So, what have we done? A lot! We have shown that you can take a public dataset such as MNIST and use TensorFlow running on Google Colab to train a model on it that recognizes numbers drawn on the screen of a mobile phone. So cool! The applications of this are endless. While we looked at numbers, we could have just as easily looked at characters and tried to:
- Verify signatures – for example, whether the person signing for a package with your name really was you.
- Read handwriting on checks in banking, using computer vision to make sure the amount entered when depositing at the ATM matches what is written on the check.
- Verify, in a mobile application, the number of items entered on a physical order form before the order is placed, or validate coupons on a paper receipt.
Anywhere alphanumeric input needs to be validated is a place these algorithms are being applied today.
Thank you for taking the time to learn about TensorFlow and some of the amazing things that can be done with it. This is the final article of our series. If you are just joining now, I suggest you review the prior two posts on using TensorFlow from Jupyter notebooks to train models, and on deployment with TensorFlow Serving and now TensorFlow Lite.
Please reach out to me if you ever run into issues with TensorFlow or would like another set of eyes looking at your data. Free of charge. I am blessed to have had the opportunity to talk with and learn from some of the brightest people in the Upper Midwest when it comes to understanding the value of data. Until next time, I look forward to connecting and learning more from you, your team, and your processes to bring AI & ML from a world of abstraction to real-world use cases that will improve the service and products you create in your business.