How to extract output tensors in batch mode inference? #21191
-
This Q&A describes how to extract multiple outputs at inference time: https://www.intel.com/content/www/us/en/support/articles/000090966/software/development-software.html How should this example be extended when working in batch mode? E.g. with a batch of 4, where each batch element generates 3 tensors of different dimensions, so that 4 x 3 tensors in total are expected at the output.
Replies: 1 comment
-
It turns out that OpenVINO supports rich output data structures (lists, tuples, ...) only in batch-1 mode. For batch sizes above 1, one has to work with plain tensors. Doing that, the regular calls to infer_request.get_output_tensor(0), infer_request.get_output_tensor(1), ... work just fine: each call returns one batched tensor per model output, with the batch dimension first.
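For reference, here is a minimal sketch of that pattern with the OpenVINO Python API. The model path ("model.xml"), device, and input shape are placeholders, and the code assumes the model has a static batch dimension of 4:

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022.1+ Python API

core = Core()
compiled = core.compile_model("model.xml", "CPU")  # placeholder IR path
request = compiled.create_infer_request()

# Batch of 4; the input shape here is a placeholder -- use your model's layout.
batch_input = np.random.rand(4, 3, 224, 224).astype(np.float32)
request.infer({0: batch_input})

# With batch > 1, each model output comes back as ONE batched tensor,
# so we index over the output ports (0..2 for 3 outputs), not over
# batch elements. There are 3 tensors here, not 4 x 3.
for i in range(len(compiled.outputs)):
    tensor = request.get_output_tensor(i)
    print(tensor.get_shape())   # first dimension is the batch size (4)
    data = tensor.data          # numpy view; data[b] is the result for batch element b
```

To recover the per-element results, slice each output tensor along its first (batch) dimension rather than expecting separate tensors per batch element.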