This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

OpenVINO™ integration with TensorFlow v2.2.0

Released by @sspintel on 04 Oct 11:13 · commit feff504

This release overhauls how OpenVINO™ integration with TensorFlow handles operator translations through the new TensorFlow FrontEnd API, and includes additional functional bug fixes over the previous release. It is based on TensorFlow v2.9.2 and OpenVINO™ v2022.2.

  • The TensorFlow upgrade to v2.9.2 provides bug fixes and addresses vulnerabilities over TensorFlow v2.9.1.
  • [Preview] OpenVINO™ integration with TensorFlow's Backend Manager now supports choosing between various GPU backends, such as Intel's integrated graphics, Intel's discrete graphics, Intel® Data Center GPU Flex Series, and Intel® Arc™ GPU, for DL inferencing in intelligent cloud, edge, and media analytics workloads.
  • Added a new notebook that demonstrates the performance benefits OpenVINO™ integration with TensorFlow brings for Object Detection architectures like SSD, FasterRCNN, and EfficientDet from the TensorFlow Hub.
  • [Experimental] Model caching support added to reduce first inference latency on GPUs. It can be enabled by setting the environment variable OPENVINO_TF_MODEL_CACHE_DIR to the corresponding cache directory. For more details, please see USAGE.md.
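The model-caching environment variable and GPU backend selection described above can be sketched as follows. This is a minimal illustration, not an official recipe: the cache path is an arbitrary example, and the `set_backend`/`list_backends` calls assume the `openvino_tensorflow` package is installed (see USAGE.md for the authoritative instructions).

```python
import os

# [Experimental] Point model caching at a writable directory to reduce
# first-inference latency on GPU backends. "/tmp/ovtf_cache" is an
# arbitrary example path; set it before the first inference runs.
os.environ["OPENVINO_TF_MODEL_CACHE_DIR"] = "/tmp/ovtf_cache"

# Backend selection via the openvino_tensorflow Python API; guarded so the
# snippet degrades gracefully where the package is not installed.
try:
    import openvino_tensorflow as ovtf
    print(ovtf.list_backends())  # available devices on this machine
    ovtf.set_backend("GPU")      # names like "GPU.0" address specific devices
except ImportError:
    pass
```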

For the complete list of features offered by OpenVINO™, see the release notes of the OpenVINO™ toolkit v2022.2.

  • Docker Support

    • OpenVINO™ integration with TensorFlow Runtime Dockerfiles for Ubuntu 18.04 and Ubuntu 20.04 have been updated
    • OpenVINO™ integration with TensorFlow Runtime Dockerfiles with TF-Serving for Ubuntu 18.04 and Ubuntu 20.04 have been updated
    • Prebuilt images are updated and can be found on Docker Hub and Azure Marketplace.
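The prebuilt runtime images above can be pulled directly. The image names below are an assumption based on the project's Docker Hub organization; verify the exact repository names and tags on Docker Hub before pulling.

```shell
# Runtime image for Ubuntu 20.04 (image name assumed; confirm on Docker Hub)
docker pull openvino/openvino_tensorflow_ubuntu20_runtime:latest

# Runtime image for Ubuntu 18.04 (image name assumed; confirm on Docker Hub)
docker pull openvino/openvino_tensorflow_ubuntu18_runtime:latest
```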