[CPU] Matmul node support quantized outputs for f16 bias to avoid unnecessary reorder following #136543

Triggered via pull request on November 15, 2024, 02:42
Status: Success
Total duration: 4m 27s
Artifacts: 2

Workflow: build_doc.yml
Triggered on: pull_request
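For orientation, a minimal sketch of what a pull_request-triggered Sphinx documentation build like this can look like is shown below. This is an illustrative assumption, not the actual contents of OpenVINO's build_doc.yml; the job name, Python version, paths, and build commands are hypothetical.

```yaml
# Minimal sketch of a pull_request-triggered documentation build.
# Hypothetical content -- not the actual OpenVINO build_doc.yml.
name: Documentation

on: pull_request

jobs:
  build_doc:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Build HTML docs with Sphinx   # produces HTML output similar to the artifact listed below
        run: |
          pip install -r docs/requirements.txt   # hypothetical requirements path
          sphinx-build -b html docs docs/_build/html
      - name: Upload documentation artifact
        uses: actions/upload-artifact@v4
        with:
          name: openvino_docs_html
          path: docs/_build/html
```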

Artifacts

Produced during runtime
Name                          Size
openvino_docs_html_27557.zip  43.1 MB
sphinx_build_log_27557.log    1.2 KB