
Wrong shape returned when compiling an ONNX model #94

Open
saharinqashi opened this issue Oct 24, 2024 · 0 comments


saharinqashi commented Oct 24, 2024

I wrote a simple model with this code:

import tensorflow as tf
from tensorflow.keras import layers, Model
class ObjectDetectionModel(Model):
    def __init__(self, num_classes):
        super(ObjectDetectionModel, self).__init__()
        self.num_classes = num_classes
        self.backbone = tf.keras.Sequential([
            layers.Conv2D(32, (3, 3), activation='relu', padding='same'),
            layers.MaxPooling2D(pool_size=(2, 2)),
            layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
            layers.MaxPooling2D(pool_size=(2, 2)),
            layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
            layers.MaxPooling2D(pool_size=(2, 2)),
            layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
            layers.GlobalAveragePooling2D()])
        
        self.classification_head = layers.Dense(num_classes, activation='softmax')
        self.bbox_head = layers.Dense(4, activation='linear')
        self.score_head = layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        features = self.backbone(inputs)
        class_predictions = self.classification_head(features)
        bbox_predictions = self.bbox_head(features)
        score_predictions = self.score_head(features)
        return {
            'class': class_predictions,
            'bbox': bbox_predictions,
            'score': score_predictions
        }

num_classes = 10  # Example number of classes
model = ObjectDetectionModel(num_classes)

input_shape = (None, 224, 224, 3)  # Example input shape
model.build(input_shape)
model.summary() 

and converted it to an ONNX model:

import onnx
import tf2onnx

model_proto, _ = tf2onnx.convert.from_keras(model, [tf.TensorSpec(shape=(1, 224, 224, 3), dtype=tf.float32),], opset=18, output_path='model.onnx')
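
As a quick sanity check on the export itself, the declared output shapes can be read straight from the ONNX graph. A minimal sketch, assuming the model.onnx produced above:

import onnx

m = onnx.load('model.onnx')
for out in m.graph.output:
    # each dim is either a fixed dim_value or a symbolic dim_param
    dims = [d.dim_value if d.HasField('dim_value') else d.dim_param
            for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)  # the graph declares class (1, 10), bbox (1, 4), score (1, 1)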

When I created a runtime session with onnxruntime, the model's input and outputs had the correct shapes:

import onnxruntime as rt
sess = rt.InferenceSession('model.onnx')
sess.get_inputs()[0].shape
sess.run(None, {sess.get_inputs()[0].name: tf.random.normal((1, 224, 224, 3)).numpy()})
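
Printing the output names and shapes from this plain CPU session shows they match the Keras model. A minimal sketch, again assuming the model.onnx from above:

import onnxruntime as rt
import numpy as np

sess = rt.InferenceSession('model.onnx')
dummy = np.random.randn(1, 224, 224, 3).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    # expected: class (1, 10), bbox (1, 4), score (1, 1)
    print(meta.name, out.shape)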

I want to compile this model and run it on a TI dev board, using this code:

import os
import cv2
import numpy as np
import onnxruntime as rt
import onnx

def preprocess(image_path, size):

    img = cv2.imread(image_path)
    img = img[:, :, ::-1]  # BGR -> RGB
    img = cv2.resize(img, (size[1], size[0]), interpolation=cv2.INTER_CUBIC)
    img = img.astype('float32')
    img = np.expand_dims(img,axis=0)
    
    return img  

calib_images = [
'/home/user/Project/TDA4VM/Compile/sample-images/elephant.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bus.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bicycle.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/zebra.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/elephant.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bus.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bicycle.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/zebra.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/elephant.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bus.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/elephant.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bus.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bicycle.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/zebra.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/elephant.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bus.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bicycle.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/zebra.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/elephant.bmp',
'/home/user/Project/TDA4VM/Compile/sample-images/bus.bmp']

onnx_model_path ='/home/user/Downloads/model.onnx'
output_dir = '/home/user/Downloads/artifacts_16_5'
onnx.shape_inference.infer_shapes_path(onnx_model_path)

compile_options = {
    'tidl_tools_path' : os.environ['TIDL_TOOLS_PATH'],
    'artifacts_folder' : output_dir,
    'tensor_bits' : 16,
    'accuracy_level' : 1,
    'advanced_options:calibration_frames' : len(calib_images),
    'advanced_options:calibration_iterations' : 50,
    'deny_list:layer_name' : 'object_detection_model_3/sequential_3/global_average_pooling2d_1/Mean',
    'max_num_subgraphs': 16
}

if not os.path.exists(output_dir): 
    os.makedirs(output_dir, exist_ok=True)

for root, dirs, files in os.walk(output_dir, topdown=False):
    [os.remove(os.path.join(root, f)) for f in files]
    [os.rmdir(os.path.join(root, d)) for d in dirs]

so = rt.SessionOptions()
EP_list = ['TIDLCompilationProvider','CPUExecutionProvider']
sess = rt.InferenceSession(onnx_model_path ,providers=EP_list, provider_options=[compile_options, {}], sess_options=so)


input_details = sess.get_inputs()
size = [224, 224]

for num in range(len(calib_images)):

    outputs = sess.run(None, {input_details[0].name : preprocess(calib_images[num], size)})

    for output in outputs:
        print(output.shape)

But when I run it, it returns the wrong shapes:

========================= [Model Compilation Started] =========================

Model compilation will perform the following stages:

  1. Parsing
  2. Graph Optimization
  3. Quantization & Calibration
  4. Memory Planning

============================== [Version Summary] ==============================


| TIDL Tools Version   | 10_00_04_00     |
| C7x Firmware Version | 10_00_02_00     |
| Runtime Version      | 1.14.0+10000005 |
| Model Opset Version  | 18              |

NOTE: The runtime version here specifies ONNXRT_VERSION+TIDL_VERSION
Ex: 1.14.0+1000XXXX -> ONNXRT 1.14.0 and a TIDL_VERSION 10.00.XX.XX

============================== [Parsing Started] ==============================

[TIDL Import] [PARSER] WARNING: Network not identified as Object Detection network : (1) Ignore if network is not Object Detection network (2) If network is Object Detection network, please specify "model_type":"OD" as part of OSRT compilation options

------------------------- Subgraph Information Summary -------------------------

| Core | No. of Nodes | Number of Subgraphs |
| C7x  | 17           | 2                   |
| CPU  | 1            | x                   |

| Node | Node Name | Reason |
| ReduceMean | object_detection_model/sequential/global_average_pooling2d/Mean | Layer type not supported by TIDL |

============================= [Parsing Completed] =============================

==================== [Optimization for subgraph_0 Started] ====================

----------------------------- Optimization Summary -----------------------------

| Layer | Nodes before optimization | Nodes after optimization |

| TIDL_ReLULayer | 4 | 0 |
| TIDL_ConvolutionLayer | 4 | 4 |
| TIDL_TransposeLayer | 1 | 0 |
| TIDL_PoolingLayer | 3 | 3 |

=================== [Optimization for subgraph_0 Completed] ===================

The soft limit is 10240
The hard limit is 10240
MEM: Init ... !!!
MEM: Init ... Done !!!
0.0s: VX_ZONE_INIT:Enabled
0.6s: VX_ZONE_ERROR:Enabled
0.9s: VX_ZONE_WARNING:Enabled
0.2179s: VX_ZONE_INIT:[tivxInit:190] Initialization Done !!!
============= [Quantization & Calibration for subgraph_0 Started] =============

==================== [Optimization for subgraph_1 Started] ====================

----------------------------- Optimization Summary -----------------------------

| Layer | Nodes before optimization | Nodes after optimization |

| TIDL_BatchNormLayer | 0 | 1 |
| TIDL_ConstDataLayer | 0 | 3 |
| TIDL_SoftMaxLayer | 1 | 1 |
| TIDL_SigmoidLayer | 1 | 0 |
| TIDL_InnerProductLayer | 3 | 3 |

=================== [Optimization for subgraph_1 Completed] ===================

============= [Quantization & Calibration for subgraph_1 Started] =============

2024-10-24 11:59:07.250321771 [W:onnxruntime:, execution_frame.cc:835 VerifyOutputSizes] Expected shape from model of {1,10} does not match actual shape of {1,1,1,1,28,10} for output class
2024-10-24 11:59:07.250346010 [W:onnxruntime:, execution_frame.cc:835 VerifyOutputSizes] Expected shape from model of {1,4} does not match actual shape of {1,1,1,1,28,4} for output bbox
2024-10-24 11:59:07.250352411 [W:onnxruntime:, execution_frame.cc:835 VerifyOutputSizes] Expected shape from model of {1,1} does not match actual shape of {1,1,1,1,28,1} for output score
(1, 1, 1, 1, 28, 4)
(1, 1, 1, 1, 28, 10)
(1, 1, 1, 1, 28, 1)

The correct shapes should be:
(1, 4)
(1, 10)
(1, 1)
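
For comparison, running the same preprocessed input through a plain CPU-only session alongside the TIDL-compiled session makes it easy to see where the extra dimensions come from. This is only a sketch that reuses names from the compile script above (onnx_model_path, preprocess, calib_images, size, and the TIDL session's outputs):

import onnxruntime as rt

# CPU-only reference session, no TIDL delegation
cpu_sess = rt.InferenceSession(onnx_model_path, providers=['CPUExecutionProvider'])
sample = preprocess(calib_images[0], size)
cpu_outputs = cpu_sess.run(None, {cpu_sess.get_inputs()[0].name: sample})

# 'outputs' holds the last results from the TIDL-compiled session in the loop above
for meta, cpu_out, tidl_out in zip(cpu_sess.get_outputs(), cpu_outputs, outputs):
    print(meta.name, 'CPU:', cpu_out.shape, 'TIDL:', tidl_out.shape)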

What should I do?
How can I solve this?
