This library wraps TensorFlowLiteSwift with vision pre/post-processing. Use TFLiteSwift-Vision if you want ready-made preprocessing and postprocessing functions instead of implementing them yourself.
You can find more detail here about this repo's background and goals, and how it differs from plain TensorFlowLiteSwift.
- Xcode
- CocoaPods
- iOS 10.0+
TFLiteSwift-Vision is available through CocoaPods. To install it, add the following to your Podfile:
```ruby
target 'MyXcodeProject' do
  use_frameworks!

  # Pods for Your Project
  pod 'TFLiteSwift-Vision', '~> 0.2.6'
end

post_install do |installer|
  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=iphonesimulator*]"] = "arm64"
  end
end
```
And then, run the following:
```shell
pod install
```
Import your .tflite model file into the Xcode project. If you have a label file or other metadata files, import them as well.
Import the TFLiteSwift_Vision framework.
```swift
import TFLiteSwift_Vision
```
Set up the interpreter.
```swift
let options = TFLiteVisionInterpreter.Options(
    modelName: "mobilenet_v2_1.0_224",
    inputRankType: .bwhc,  // if it is a PyTorch model, use `.bchw`
    normalization: .scaled(from: 0.0, to: 1.0)
)
let visionInterpreter = try? TFLiteVisionInterpreter(options: options)
```
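If your model expects ImageNet-style mean/std normalization instead (common for PyTorch-exported models, as noted in the feature list below), the options would look roughly like the sketch below. The `.meanStd(mean:std:)` case name is an assumption here, not confirmed API; check the library's `Options` type for the exact spelling.

```swift
// A sketch: `.meanStd(mean:std:)` is an assumed case name for the
// mean/std normalization mentioned in the feature list below.
let imagenetOptions = TFLiteVisionInterpreter.Options(
    modelName: "mobilenet_v2_1.0_224",
    inputRankType: .bchw,
    normalization: .meanStd(mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225])
)
```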
Run inference with an image. The following shows an image classification case.
```swift
// inference
guard let output: TFLiteFlatArray = try? self.visionInterpreter?.inference(with: uiImage)?.first
else { fatalError("Cannot run inference") }

// postprocess
let predictedIndex: Int = Int(output.argmax())
print("predicted index: \(predictedIndex)")
print(output.dimensions)
```
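If you imported a label file alongside the model (see above), mapping the predicted index to a human-readable label takes only a few lines. A minimal sketch, assuming a bundled plain-text `labels.txt` with one label per line (the file name is an assumption):

```swift
// A sketch: assumes a bundled "labels.txt" with one label per line.
if let labelsURL = Bundle.main.url(forResource: "labels", withExtension: "txt"),
   let contents = try? String(contentsOf: labelsURL, encoding: .utf8) {
    let labels = contents.split(separator: "\n").map(String.init)
    if labels.indices.contains(predictedIndex) {
        print("predicted label: \(labels[predictedIndex])")
    }
}
```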
```shell
git clone https://github.com/tucan9389/TFLiteSwift-Vision
cd TFLiteSwift-Vision/Example
pod install
open TFLiteSwift-Vision.xcworkspace
```
Download a tflite model and label txt, then import the files into the Xcode project. You can also download the files here.
After building and running the project, you can test the model (mobilenet_v2_1.0_224.tflite) with your own image data.
*(screenshot: image classification demo)*
TFLiteSwift-Vision supports (or plans to support) the following features:
- Preprocessing (converting a Cocoa image type into TFLiteSwift's Tensor)
  - Supported input image types:
    - UIImage
    - CVPixelBuffer (see the sketch after this list)
    - CGImage
    - MTLTexture
  - Supported bhwc axis orderings:
    - bwhc (normally used in TF)
    - bchw (normally used in PyTorch)
    - bwh (for gray images)
    - bhw (for gray images)
    - whc
    - hwc
    - wh (for gray input)
  - Supported normalization methods:
    - Normalization with scaling (0...255 → 0.0...1.0)
    - Normalization with mean and std (normally used in PyTorch; first used with ImageNet)
  - Grayscaling (producing a 3-dim tensor from an image, not a 4-dim tensor)
  - Supported cropping methods:
    - Resizing (`vImageScale_ARGB8888`)
    - Centercropping
    - Padding
  - Once the basic functions are implemented, optimize them with Metal or Accelerate (or other domain-specific frameworks)
  - Supported input types:
    - Float32
    - UInt8
  - Supported output types:
    - Float32
    - UInt8
- Inferencing
  - batch size
    - 1 batch
    - n batch
  - CPU or GPU (Metal) selectable
- Postprocessing (converting TFLiteSwift's Tensor into a Cocoa type)
  - Tensor → UIImage
  - Tensor → MTLTexture
- Domain-specific postprocessing examples
  - Image classification
  - Object detection
  - Semantic segmentation
  - Pose estimation
- Replace TensorFlowLiteSwift with TFLiteSwift-Vision in tensorflow/examples
  - image_classification
  - object_detection
  - posenet
  - digit_classification
  - semantic_segmentation
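As an example of the CVPixelBuffer input mentioned above, feeding a camera frame to the interpreter might look like the sketch below. This assumes `inference(with:)` accepts a CVPixelBuffer the same way it accepts a UIImage, which is an assumption based on the feature list, not confirmed API.

```swift
import CoreVideo
import TFLiteSwift_Vision

// A sketch: assumes `inference(with:)` has a CVPixelBuffer overload
// mirroring the UIImage one shown earlier.
func classify(pixelBuffer: CVPixelBuffer,
              interpreter: TFLiteVisionInterpreter) -> Int? {
    guard let output = try? interpreter.inference(with: pixelBuffer)?.first
    else { return nil }
    return Int(output.argmax())
}
```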
TFLiteSwift-Vision is available under the Apache license. See the LICENSE file for more info.