This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

Release 1.1.0 (#237)
* New TFhub Models  (#217)

* adding new tfhub models to the documentation

* Fix for performance degradation issue with TF2 api  (#213)

* Fix for performance issue with Keras api

* More fixes in the parm->const logic in translate graph

- enabling the env var by default
- adding datatype check for tensors to create tensor data in vector
- whitespace removal

* Formatting fixes

* Fixing TranslateGraph defn for tests

* Fix for sample execution crash on GPU

* GPU specific check for clean up

* OVTF examples/colab samples updated to TF2.x (#214)

* Update classification sample to TF2 api (saved model)

* Updated object detection sample to TF2.x (saved model)

* Updated README, moving TF1 classification sample to TF_1_x

* Updated colab samples, requirements txt

* Updated colab samples

* Formatting fix

* Minor fix in Colab sample

* Minor documentation update

* Add translations for new TF ops (#216)

* Upgrade OpenVINO Opset to version 7

* Add tests for MaxPool3D, Floor

* Add translations for ScatterNd, AvgPool3D

* Enable Conv3DBackpropInputV2; Add Conv3D* Python tests

* Update OCM

* Apply Code Formatting

* Disable Conv3D tests for GPU

* Modify MYD tests

* Update OCM

* Update OCM for binary op type support issue

* Update MYD Python tests list for mac

* Update to OCM master

* Skip redundant Conv3D tests; Disable Conv3D on MYD

* Disable conv3d_transpose_test for MYD

* Replace old method of pad compute in AvgPool op

* Remove Conv3D Gradient Test
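The AvgPool pad-compute change above is only named, not shown. As a hedged illustration of the arithmetic involved (a sketch of TensorFlow-style SAME padding, not the repository's actual code):

```python
import math

def same_pad_1d(input_size: int, kernel: int, stride: int):
    """TensorFlow-style SAME padding along one spatial axis.

    Returns (pad_before, pad_after); any odd remainder goes after,
    which is the TF convention.
    """
    out_size = math.ceil(input_size / stride)
    pad_total = max((out_size - 1) * stride + kernel - input_size, 0)
    pad_before = pad_total // 2
    return pad_before, pad_total - pad_before

# An op like AvgPool3D pads each of the three spatial axes independently:
pads = [same_pad_1d(s, k, st) for s, k, st in ((7, 3, 2), (8, 2, 2), (5, 3, 1))]
```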

* Readme: the introduction text is updated to include the two lines of code (#219)

* Infinite loop bug fixed in dyn to static input check (#218)

* Infinite loop bug fixed in dyn to static input check
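The commit subject above does not show the fix itself. Bugs of this class typically live in a fixed-point loop that refines dynamic shapes until nothing changes; the usual cure is a convergence check plus an iteration cap. A purely illustrative sketch (the `refine` callback and the shape encoding are assumptions, not the project's code):

```python
def propagate_until_stable(shapes, refine, max_iters=100):
    """Repeatedly refine a shape table until a fixed point is reached.

    Without the equality check (or the iteration cap) a loop like this
    can spin forever, which is the bug class referenced above.
    `refine` is a hypothetical callback returning an updated shape dict.
    """
    for _ in range(max_iters):
        new_shapes = refine(shapes)
        if new_shapes == shapes:  # converged: stop instead of spinning
            return new_shapes
        shapes = new_shapes
    raise RuntimeError("shape propagation did not converge")

# Example: replace dynamic (-1) dims with a known static value.
refined = propagate_until_stable(
    {"x": (-1, 224)},
    lambda s: {k: tuple(224 if d == -1 else d for d in v) for k, v in s.items()},
)
```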

* Upgrade OV version to 2021.4.2 and TF version to 2.7.0 (#221)

* Upgrade OV to 2021.4.2 and TF to 2.7.0

* Add fix for attributes which do not have default values in TF 2.7

* One Gather TF test added to skip list

* Remove unused yml files

* Revert mac.yml to cpu-mac.yml

* C++14 fix for CPP Tests

* Update OV version in Mac yml file

Co-authored-by: Suryaprakash Shanmugam <[email protected]>

* Update model in Object Detection sample from YoloV3 to YoloV4 (#220)

* Upgrade Obj Det Samples to YoloV4

* Update Code Formatting

* Convert model in a virtualenv

* Update Colab example with YoloV4 changes

* Update examples documentation for yolov4

* Enable windows support for OVTF (#222)

* Build scripts and code updated for windows support
* Updated code file and OCM submodule branch for windows support
* Added default TRUE condition for wheel and framework packages for windows
* OVTF Python enabled on Windows
* Separate compilation of the Protobuf library is disabled; the required protobuf symbols are added to the TF source code, and the generated wheel package resolves the protobuf linking issues
* C++ samples cmakelist modified to add the tensorflow C++ libs dependencies
* Build script modified for windows related paths
* Fixed adding a new ngraph input/output node in encapsulate clusters, which was not working properly
* _ovtf_static_inputs attribute is added only if there are any static inputs for that node, as reading an empty list value throws an error on Windows
* Enabled Python and TF Python tests for Windows
* Corrected a path for build on linux
* Restore C++ example in CMakeList
* Corrected indentation in TF test runner to avoid build failure on macOS
* Rebased with 1.0.1 release
* Updated build script to copy python framework libs
* Review comments incorporated, and scatter_nd python test added to skip list
* Code format check applied
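One bullet above notes that the `_ovtf_static_inputs` attribute is now attached only when a node actually has static inputs, because reading an empty list attribute throws an error on Windows. A minimal sketch of that guard (dict-based attributes stand in for the real graph API):

```python
def attach_static_inputs(node_attrs: dict, static_inputs: list) -> dict:
    """Attach `_ovtf_static_inputs` only when it is non-empty.

    Serializing an empty list attribute was reported to break attribute
    reads on Windows, so the guard skips the attribute entirely then.
    """
    if static_inputs:  # skip empty lists rather than writing them out
        node_attrs["_ovtf_static_inputs"] = list(static_inputs)
    return node_attrs
```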

* Change OV, TF, and OVTF versions (#223)

* Change OV, TF, and OVTF versions

* Remove py36 from ABI1 TF builds

* New patch for TF ABI1 builds

* Update manylinux2014 patch

* Downgrade TF version in yolov4 conversion scripts

* Refactor CI; Upgrade OCM (#228)

* Update windows documentation  (#229)

* Update cmakelist for C++14 change for Windows

* Update documentation for Windows, add test files and TF symbols patch

* Update examples and their readme for windows

* Apply code format

* Update readme for examples and build based on review comments

* Update READMEs (#230)

* Update READMEs

* Correction on iGPU

* Add new Colab samples to the examples directory

* Add Colab notebook deprecation notice

* Update cmakelist for windows wheel, colab notebook links and readme files (#231)

* Update Cmakelist for windows wheel, colab link in notebooks, and Readme files

* Notebooks directory renamed as notebooks

* Update python and tf unit tests for GPU on windows (#232)

* Fix Obj Det Sample Test Cases; Fix Test Script; Minor Documentation Updates (#235)

* Fix for output_layer arg

* Improve documentation for readability; Fix test script issue

* Code formatting

* Update INSTALL.md

* Backend error check added to the c++ sample (#233)

* Backend error check added to the c++ sample

* Modified Readme for windows release name

* Updated windows release comment in main readme

* Update ObjDet Conversion Scripts (#236)

* Add win conversion script; Patch through git

* Update documentation for Windows conv script

* Updated mandarin documentation (#238)

* Updated mandarin documentation

* Update links in some of the mandarin readme files

* Update models.md (#234)

* Fix issue with tf import (#239)

Co-authored-by: pratiksha123507 <[email protected]>
Co-authored-by: Laxmi Ganesan <[email protected]>
Co-authored-by: Suryaprakash Shanmugam <[email protected]>
Co-authored-by: Mustafa Cavus <[email protected]>
5 people authored Dec 8, 2021
1 parent 831f1e5 commit b31f109
Showing 119 changed files with 6,716 additions and 3,063 deletions.
25 changes: 22 additions & 3 deletions CMakeLists.txt
@@ -17,9 +17,11 @@ include(cmake/sdl.cmake)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

if ("${CMAKE_CXX_COMPILER_ID}" MATCHES "^(Apple)?Clang$")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wno-comment -Wno-sign-compare -Wno-backslash-newline-escape")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -Wall -Wno-comment -Wno-sign-compare -Wno-backslash-newline-escape")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wno-comment -Wno-sign-compare")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -Wall -Wno-comment -Wno-sign-compare")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /std:c++14 /wd4308 /wd4146 /wd4703 /wd4244 /wd4819 /EHsc")
endif()

# In order to compile ngraph-tf with memory leak detection, run `cmake` with `-DCMAKE_BUILD_TYPE=Sanitize`.
@@ -121,6 +123,10 @@ if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(OPENVINO_TF_CXX11_ABI "${TensorFlow_CXX_ABI}")
message( STATUS "nGraph-TensorFlow using CXX11 ABI: ${OPENVINO_TF_CXX11_ABI}" )
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GLIBCXX_USE_CXX11_ABI=${OPENVINO_TF_CXX11_ABI}")
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(OPENVINO_TF_CXX11_ABI "${TensorFlow_CXX_ABI}")
message( STATUS "nGraph-TensorFlow using CXX11 ABI: ${OPENVINO_TF_CXX11_ABI}" )
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GLIBCXX_USE_CXX11_ABI=${OPENVINO_TF_CXX11_ABI}")
endif()

if(APPLE)
@@ -129,10 +135,16 @@ if(APPLE)
else()
set(LIBNGRAPH "libngraph.dylib")
endif()
elseif(WIN32)
set(LIBNGRAPH "ngraph.lib")
else()
set(LIBNGRAPH "libngraph.so")
endif(APPLE)

if(WIN32)
add_definitions(-DNOMINMAX)
endif()

# Build options
option(UNIT_TEST_ENABLE "Control the building of unit tests" FALSE)
option(UNIT_TEST_TF_CC_DIR "Location where TensorFlow CC library is located" FALSE)
@@ -174,7 +186,9 @@ message(STATUS "UNIT_TEST_ENABLE: ${UNIT_TEST_ENABLE}")
message(STATUS "OPENVINO_ARTIFACTS_DIR: ${OPENVINO_ARTIFACTS_DIR}")
message(STATUS "USE_PRE_BUILT_OPENVINO: ${USE_PRE_BUILT_OPENVINO}")
message(STATUS "OPENVINO_VERSION: ${OPENVINO_VERSION}")
if (${OPENVINO_VERSION} MATCHES "2021.4.1")
if (${OPENVINO_VERSION} MATCHES "2021.4.2")
add_definitions(-DOPENVINO_2021_4_2=1)
elseif (${OPENVINO_VERSION} MATCHES "2021.4.1")
add_definitions(-DOPENVINO_2021_4_1=1)
elseif (${OPENVINO_VERSION} MATCHES "2021.4")
add_definitions(-DOPENVINO_2021_4=1)
@@ -185,6 +199,11 @@ elseif (${OPENVINO_VERSION} MATCHES "2021.2")
else()
message(FATAL_ERROR "Unsupported OpenVINO version: ${OPENVINO_VERSION}")
endif()

if(WIN32)
add_definitions(-DBUILD_API=1)
endif()

if(OS_VERSION STREQUAL "\"centos\"")
set(LIB "lib64")
else()
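One detail worth noting in the version check above: the new `2021.4.2` branch is added before the existing `2021.4.1` and `2021.4` branches because CMake's `MATCHES` looks for the pattern anywhere in the string, so the most specific version must be tested first. A Python sketch of the same first-match dispatch (illustrative only; plain substring containment stands in for the regex match, and the list mirrors the branches visible in this diff):

```python
# Ordered most-specific-first, mirroring the if/elseif chain above.
VERSION_DEFINES = [
    ("2021.4.2", "OPENVINO_2021_4_2"),
    ("2021.4.1", "OPENVINO_2021_4_1"),
    ("2021.4", "OPENVINO_2021_4"),
    ("2021.2", "OPENVINO_2021_2"),
]

def define_for(version: str) -> str:
    for pattern, define in VERSION_DEFINES:
        if pattern in version:  # first hit wins, like the if/elseif chain
            return define
    raise ValueError(f"Unsupported OpenVINO version: {version}")
```

If "2021.4" were checked first, every 2021.4.x build would take that branch, which is exactly what the ordering prevents.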
37 changes: 23 additions & 14 deletions README.md
@@ -6,7 +6,13 @@

# **OpenVINO™ integration with TensorFlow**

This repository contains the source code of **OpenVINO™ integration with TensorFlow**, designed for TensorFlow* developers who want to get started with [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) in their inferencing applications. This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow** accelerates inference across many [AI models](docs/MODELS.md) on a variety of Intel<sup>®</sup> silicon such as:
This repository contains the source code of **OpenVINO™ integration with TensorFlow**, designed for TensorFlow* developers who want to get started with [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) in their inferencing applications. TensorFlow* developers can now take advantage of [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) toolkit optimizations with TensorFlow inference applications across a wide range of Intel® compute devices by adding just two lines of code.

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')

This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel<sup>®</sup> silicon such as:

- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units - referred to as VPU
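The two lines above drop into an ordinary TF2 inference script. A hedged sketch of how they are typically wrapped (the import guard and fallback are illustrative additions; backend names such as 'GPU', 'MYRIAD', and 'VAD-M' follow the device list above and may vary):

```python
def enable_openvino(backend: str = "CPU") -> str:
    """Activate the OpenVINO backend for TensorFlow inference if available.

    Wraps the two advertised lines of code so that a script still runs
    on stock TensorFlow when openvino_tensorflow is not installed.
    """
    try:
        import openvino_tensorflow
        openvino_tensorflow.set_backend(backend)
        return f"inference will run on the OpenVINO {backend} backend"
    except ImportError:
        return "openvino_tensorflow not installed; using stock TensorFlow"

print(enable_openvino("CPU"))
```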
@@ -17,22 +23,27 @@ This repository contains the source code of **OpenVINO™ integration with TensorFlow**
## Installation
### Prerequisites

- Ubuntu 18.04, 20.04 or macOS 11.2.3
- Python* 3.6<sup>1</sup>, 3.7, 3.8 or 3.9
- TensorFlow* v2.5.1
- Ubuntu 18.04, 20.04, macOS 11.2.3, or Windows<sup>1</sup> 10 (64-bit)
- Python* 3.7, 3.8 or 3.9
- TensorFlow* v2.7.0

<sup>1</sup>Windows package is released in Beta preview mode and currently supports only Python 3.9

Check our [Interactive Installation Table](https://openvinotoolkit.github.io/openvino_tensorflow/) for a menu of installation options. The table will help you configure the installation process.

The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2021.4.1. The users do not have to install OpenVINO™ separately. This package supports:
The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2021.4.2. The users do not have to install OpenVINO™ separately. This package supports:
- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)


pip3 install pip==21.0.1
pip3 install tensorflow==2.5.1
pip3 install -U pip
pip3 install tensorflow==2.7.0
pip3 install -U openvino-tensorflow

For installation instructions on Windows please refer to [**OpenVINO™ integration with TensorFlow** for Windows ](docs/INSTALL.md#InstallOpenVINOintegrationwithTensorFlowalongsideTensorFlow)

To use Intel<sup>®</sup> integrated GPUs for inference, make sure to install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu)

To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](docs/INSTALL.md#12-install-openvino-integration-with-tensorflow-alongside-the-intel-distribution-of-openvino-toolkit).

@@ -51,10 +62,10 @@ To see if **OpenVINO™ integration with TensorFlow** is properly installed, run

This should produce an output like:

TensorFlow version: 2.5.1
OpenVINO integration with TensorFlow version: b'1.0.1'
OpenVINO version used for this build: b'2021.4.1'
TensorFlow version used for this build: v2.5.1
TensorFlow version: 2.7.0
OpenVINO integration with TensorFlow version: b'1.1.0'
OpenVINO version used for this build: b'2021.4.2'
TensorFlow version used for this build: v2.7.0
CXX11_ABI flag used for this build: 0
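The banner above is produced by importing the package and printing its version strings. A guarded equivalent that degrades gracefully when either package is missing (assuming only the standard `__version__` attribute, not the project's full multi-line banner):

```python
def version_banner() -> str:
    """Best-effort reproduction of the version banner above.

    Falls back to a "not installed" line per package so the check
    can run anywhere, even without TensorFlow or OVTF present.
    """
    lines = []
    for name in ("tensorflow", "openvino_tensorflow"):
        try:
            mod = __import__(name)
            lines.append(f"{name} version: {mod.__version__}")
        except ImportError:
            lines.append(f"{name}: not installed")
    return "\n".join(lines)

print(version_banner())
```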

By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.
@@ -96,6 +107,4 @@ We welcome community contributions to **OpenVINO™ integration with TensorFlow**
We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon our verification of your pull request, we will merge it to the repository provided that the pull request has met the above mentioned requirements and proved acceptable.

---
\* Other names and brands may be claimed as the property of others.

<sup>1</sup> Python 3.6 support is available only for Ubuntu
\* Other names and brands may be claimed as the property of others.
34 changes: 22 additions & 12 deletions README_cn.md
@@ -6,7 +6,12 @@

# **OpenVINO™ integration with TensorFlow**

This repository contains the source code of **OpenVINO™ integration with TensorFlow**, which provides the required [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations and runtime, significantly enhancing TensorFlow compatibility. The product is designed for developers who want to use OpenVINO™ in their own inference applications: with only minor code changes, inference performance improves significantly. **OpenVINO™ integration with TensorFlow** accelerates inference of [AI models](docs/MODELS_cn.md) on a variety of Intel<sup>®</sup> silicon, such as:
This repository contains the source code of **OpenVINO™ integration with TensorFlow**, designed for TensorFlow* developers who want to try out [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) in their inference applications. By adding just two lines of code, TensorFlow* application developers can use [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) to accelerate AI model inference on a variety of Intel<sup>®</sup> silicon.

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')

The product is designed for developers who want to apply [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) in their own inference applications: with only minor code changes, inference performance improves significantly. **OpenVINO™ integration with TensorFlow** accelerates inference of [AI models](docs/MODELS_cn.md) on a variety of Intel<sup>®</sup> silicon, such as:

- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
@@ -18,22 +23,27 @@
## Installation
### Prerequisites

- Ubuntu 18.04, 20.04 or macOS 11.2.3
- Python* 3.6, 3.7, 3.8 or 3.9
- TensorFlow* v2.5.1
- Ubuntu 18.04, 20.04, macOS 11.2.3, or Windows<sup>1</sup> 10 (64-bit)
- Python* 3.7, 3.8 or 3.9
- TensorFlow* v2.7.0

<sup>1</sup>The Windows release is currently in preview and supports only Python 3.9

Check our [Interactive Installation Table](https://openvinotoolkit.github.io/openvino_tensorflow/) for a menu of installation options. The table will guide you through the installation process.

The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ 2021.4.1, which means you do not need to install OpenVINO™ separately. The package supports:
The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ 2021.4.2, which means you do not need to install OpenVINO™ separately. The package supports:
- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)


pip3 install pip==21.0.1
pip3 install tensorflow==2.5.1
pip3 install -U pip
pip3 install tensorflow==2.7.0
pip3 install -U openvino-tensorflow

For installation instructions on Windows, please refer to [**OpenVINO™ integration with TensorFlow** for Windows](docs/INSTALL_cn.md#InstallOpenVINOintegrationwithTensorFlowalongsideTensorFlow)

To use Intel<sup>®</sup> integrated GPUs for inference, first install the [Intel® Graphics Compute Runtime for OpenCL™ drivers](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html#install-gpu)

To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](docs/INSTALL_cn.md#12-install-openvino-integration-with-tensorflow-alongside-the-intel-distribution-of-openvino-toolkit)

@@ -52,10 +62,10 @@

It produces the following output:

TensorFlow version: 2.5.1
OpenVINO integration with TensorFlow version: b'1.0.1'
OpenVINO version used for this build: b'2021.4.1'
TensorFlow version used for this build: v2.5.1
TensorFlow version: 2.7.0
OpenVINO integration with TensorFlow version: b'1.1.0'
OpenVINO version used for this build: b'2021.4.2'
TensorFlow version used for this build: v2.7.0
CXX11_ABI flag used for this build: 0

By default, Intel<sup>®</sup> CPU is used to run inference. You can also change the default option to Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.
@@ -93,4 +103,4 @@

We will review your contribution as soon as possible! If additional fixes or modifications are necessary, we will guide you and provide feedback. Before contributing, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a major feature, create test cases for it. Once your pull request is verified and meets the above requirements, we will merge it into the repository.
---
\* Other names and brands may be claimed as the property of others
2 changes: 1 addition & 1 deletion build_ov.py
@@ -11,7 +11,7 @@


def main():
openvino_version = "2021.4.1"
openvino_version = "2021.4.2"
build_dir = 'build_cmake'
cxx_abi = "1"
print("openVINO version: ", openvino_version)