How to apply a trained TensorFlow Lite model (.tflite) in an Android application.
Please contact me for a code sample of the Android project.
Training neural networks for use in AI agents is fun, but having them run on a VM, far away from where they are actually useful, is not. It therefore makes sense to run the trained models for prediction directly inside mobile applications.
Here I will show how I integrate a TensorFlow Lite model (.tflite) into an Android application for the task of image recognition.
The first thing we need to do is prepare the Android Gradle file and tell Android not to compress the model. Compression would break the model, as the weights could change slightly in the process.
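A minimal sketch of the relevant part of the module-level build.gradle, assuming the model file uses the .tflite extension:

```groovy
android {
    // Prevent AAPT from compressing the .tflite asset,
    // so it can be memory-mapped unchanged at runtime.
    aaptOptions {
        noCompress "tflite"
    }
}
```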
Load TFLite libraries:
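The dependency goes into the same Gradle file; the version shown here is only a placeholder, use whichever release matches the one you trained and converted with:

```groovy
dependencies {
    // TensorFlow Lite interpreter for Android (version is a placeholder).
    implementation 'org.tensorflow:tensorflow-lite:2.9.0'
}
```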
The model file needs to go into an assets folder. If it does not yet exist, create one in the app's main source set (app/src/main/assets), next to the res folder.
Place the trained .tflite model into this assets folder.
The Android activity is the Java class that will actually run the trained TFLite model. Therefore we need to import the TFLite interpreter:
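At a minimum this means importing the interpreter class:

```java
import org.tensorflow.lite.Interpreter;
```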
This method will load the TFLite model:
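A sketch of such a method, which memory-maps the model from the assets folder; the method and parameter names here are illustrative:

```java
// Requires: android.content.Context, android.content.res.AssetFileDescriptor,
// java.io.FileInputStream, java.io.IOException, java.nio.MappedByteBuffer,
// java.nio.channels.FileChannel
private MappedByteBuffer loadModelFile(Context context, String modelFileName) throws IOException {
    // Open the .tflite file from assets and map it read-only into memory.
    AssetFileDescriptor fileDescriptor = context.getAssets().openFd(modelFileName);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
```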
Calling the method, to load the model into the TFLite interpreter:
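For example (the file name "model.tflite" is a placeholder for whatever you placed into the assets folder):

```java
// Create the interpreter from the memory-mapped model; keep it as a field so it can be reused.
Interpreter tflite;
try {
    tflite = new Interpreter(loadModelFile(this, "model.tflite"));
} catch (IOException e) {
    throw new RuntimeException("Failed to load the TFLite model", e);
}
```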
Change "Activity" to "Context" in LoadModel():
At this point we have to prepare our input data for processing by the TFLite model. As mentioned, this example deals with image input, so we need to preprocess the input image to fit the input layer of the trained neural network.
In my case the input layer of the neural network has dimensions 224x224x3 (a square RGB image with a side length of 224 pixels). The method below crops a 224x224 region of interest out of a larger image:
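A minimal sketch of such a crop, assuming the region of interest is centered and the source bitmap is at least 224x224 pixels; the helper name is mine:

```java
// Crops a centered square of the given size (e.g. 224) out of a larger bitmap.
private Bitmap cropCenter(Bitmap source, int size) {
    int x = (source.getWidth() - size) / 2;
    int y = (source.getHeight() - size) / 2;
    return Bitmap.createBitmap(source, x, y, size, size);
}
```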
The cropped image now fits the input layer dimensions.
Run the image operations, as well as the prediction, in an asynchronous task, as these operations are CPU/GPU intensive. Doing the work on a background thread keeps the UI thread responsive.
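One way to structure this, matching the AsyncTask approach described here, is a sketch like the following; ClassifyTask is a placeholder name, tflite is the interpreter field created above, and convertBitmapToByteBuffer is shown further below:

```java
// Requires: android.os.AsyncTask, android.graphics.Bitmap, java.nio.ByteBuffer
private class ClassifyTask extends AsyncTask<Bitmap, Void, float[][]> {
    @Override
    protected float[][] doInBackground(Bitmap... bitmaps) {
        // Preprocessing and inference run on a background thread, off the UI thread.
        ByteBuffer input = convertBitmapToByteBuffer(bitmaps[0]);
        float[][] output = new float[1][1];
        tflite.run(input, output);
        return output;
    }

    @Override
    protected void onPostExecute(float[][] output) {
        // Back on the UI thread: update views with the prediction here.
    }
}
```

It would be started with something like `new ClassifyTask().execute(croppedBitmap);`. On recent Android versions AsyncTask is deprecated, so an Executor with a background thread serves the same purpose.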
The TFLite interpreter takes either a ByteBuffer or a FloatBuffer as input, not a Bitmap, so we need to convert the RGB bitmap into one of these formats.
In the function below we iterate over the pixels with two nested for-loops and write the RGB channel values into the ByteBuffer:
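A sketch of this conversion, assuming a 224x224x3 float32 input normalized to [0, 1]; adjust the normalization to match whatever your model was trained with:

```java
// Requires: android.graphics.Bitmap, java.nio.ByteBuffer, java.nio.ByteOrder
private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    int inputSize = 224;
    // 4 bytes per float, 3 channels per pixel.
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * inputSize * inputSize * 3);
    byteBuffer.order(ByteOrder.nativeOrder());
    int[] pixels = new int[inputSize * inputSize];
    bitmap.getPixels(pixels, 0, inputSize, 0, 0, inputSize, inputSize);
    int pixelIndex = 0;
    for (int i = 0; i < inputSize; i++) {
        for (int j = 0; j < inputSize; j++) {
            int pixel = pixels[pixelIndex++];
            // Extract the R, G and B channels and scale them to [0, 1].
            byteBuffer.putFloat(((pixel >> 16) & 0xFF) / 255.0f);
            byteBuffer.putFloat(((pixel >> 8) & 0xFF) / 255.0f);
            byteBuffer.putFloat((pixel & 0xFF) / 255.0f);
        }
    }
    return byteBuffer;
}
```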
My model is trained to recognize humans and nothing else, so the output is a single probability. In the result assignment below, only the first element of the output array is needed:
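A minimal sketch of running inference and reading that single probability; the output shape [1][1] assumes a model with one scalar output, and croppedBitmap is the 224x224 image prepared above:

```java
float[][] output = new float[1][1];
tflite.run(convertBitmapToByteBuffer(croppedBitmap), output);
// The single probability that the image contains a human.
float probabilityHuman = output[0][0];
```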
The input buffer and the output float array must match the exact dimensions defined by the input and output layers of the CNN model.