[bot] update built doc
ifm-csr committed Mar 4, 2025
1 parent 815eeb0 commit ce82083
Showing 11 changed files with 51 additions and 59 deletions.
4 changes: 1 addition & 3 deletions v1.10.13/SoftwareInterfaces/Docker/autostart.html

@@ -93,9 +93,7 @@
 </li>
 <li class="toctree-l3"><a class="reference internal" href="resource_management.html">Resource Management on the VPU</a></li>
 <li class="toctree-l3"><a class="reference internal" href="gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
4 changes: 1 addition & 3 deletions v1.10.13/SoftwareInterfaces/Docker/deployVPU.html

@@ -92,9 +92,7 @@
 <li class="toctree-l3"><a class="reference internal" href="autostart.html">Autostarting the container</a></li>
 <li class="toctree-l3"><a class="reference internal" href="resource_management.html">Resource Management on the VPU</a></li>
 <li class="toctree-l3"><a class="reference internal" href="gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
4 changes: 1 addition & 3 deletions v1.10.13/SoftwareInterfaces/Docker/docker.html

@@ -102,9 +102,7 @@
 <li class="toctree-l3"><a class="reference internal" href="autostart.html">Autostarting the container</a></li>
 <li class="toctree-l3"><a class="reference internal" href="resource_management.html">Resource Management on the VPU</a></li>
 <li class="toctree-l3"><a class="reference internal" href="gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
4 changes: 1 addition & 3 deletions v1.10.13/SoftwareInterfaces/Docker/gpu.html

@@ -95,9 +95,7 @@
 </li>
 </ul>
 </li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
15 changes: 8 additions & 7 deletions v1.10.13/SoftwareInterfaces/Docker/index_docker.html

@@ -84,9 +84,7 @@
 <li class="toctree-l3"><a class="reference internal" href="autostart.html">Autostarting the container</a></li>
 <li class="toctree-l3"><a class="reference internal" href="resource_management.html">Resource Management on the VPU</a></li>
 <li class="toctree-l3"><a class="reference internal" href="gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
@@ -204,7 +202,7 @@ <h1>Docker on O3R<a class="headerlink" href="#docker-on-o3r" title="Link to this
 </li>
 </ul>
 </li>
-<li class="toctree-l1"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a><ul>
+<li class="toctree-l1"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a><ul>
 <li class="toctree-l2"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#building-a-tensorrt-container">Building a TensorRT container</a><ul>
 <li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#nvidia-base-containers">NVIDIA base containers</a><ul>
 <li class="toctree-l4"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#compatibility-matrix">Compatibility Matrix</a></li>
@@ -218,11 +216,14 @@ <h1>Docker on O3R<a class="headerlink" href="#docker-on-o3r" title="Link to this
 <li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#runtime-inference-cycle-times">Runtime inference cycle times</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#calculating-the-inference-on-ovp81x-vpu-using-yolov11-onnx-model-file">Calculating the inference on OVP81x VPU using YOLOv11 ONNX Model file</a></li>
+<li class="toctree-l2"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#calculating-the-inference-on-ovp81x-vpu-using-yolov11-onnx-model-file">Calculating the inference on OVP81x VPU using YOLOv11 ONNX Model file</a><ul>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#example-runs">Example runs</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#inference-timings">Inference timings</a></li>
+</ul>
+</li>
+<li class="toctree-l2"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#deepstream-l4t">Deepstream-l4t</a></li>
 </ul>
 </li>
-<li class="toctree-l1"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l1"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
 </ul>
 </div>
 <p>This section explains the handling and deployment of customer specific containers for the O3R.</p>
4 changes: 1 addition & 3 deletions v1.10.13/SoftwareInterfaces/Docker/logging.html

@@ -93,9 +93,7 @@
 <li class="toctree-l3"><a class="reference internal" href="autostart.html">Autostarting the container</a></li>
 <li class="toctree-l3"><a class="reference internal" href="resource_management.html">Resource Management on the VPU</a></li>
 <li class="toctree-l3"><a class="reference internal" href="gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
4 changes: 1 addition & 3 deletions v1.10.13/SoftwareInterfaces/Docker/resource_management.html

@@ -97,9 +97,7 @@
 </ul>
 </li>
 <li class="toctree-l3"><a class="reference internal" href="gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l3"><a class="reference internal" href="tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
45 changes: 27 additions & 18 deletions v1.10.13/SoftwareInterfaces/Docker/tensorRT/TensorRT_on_a_VPU_hardware.html

@@ -84,7 +84,7 @@
 <li class="toctree-l3"><a class="reference internal" href="../autostart.html">Autostarting the container</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../resource_management.html">Resource Management on the VPU</a></li>
 <li class="toctree-l3"><a class="reference internal" href="../gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l3 current"><a class="current reference internal" href="#">Using TensorRT</a><ul>
+<li class="toctree-l3 current"><a class="current reference internal" href="#">TensortRT: DL / ML model deployment</a><ul>
 <li class="toctree-l4"><a class="reference internal" href="#building-a-tensorrt-container">Building a TensorRT container</a><ul>
 <li class="toctree-l5"><a class="reference internal" href="#nvidia-base-containers">NVIDIA base containers</a><ul>
 <li class="toctree-l6"><a class="reference internal" href="#compatibility-matrix">Compatibility Matrix</a></li>
@@ -98,11 +98,14 @@
 <li class="toctree-l5"><a class="reference internal" href="#runtime-inference-cycle-times">Runtime inference cycle times</a></li>
 </ul>
 </li>
-<li class="toctree-l4"><a class="reference internal" href="#calculating-the-inference-on-ovp81x-vpu-using-yolov11-onnx-model-file">Calculating the inference on OVP81x VPU using YOLOv11 ONNX Model file</a></li>
+<li class="toctree-l4"><a class="reference internal" href="#calculating-the-inference-on-ovp81x-vpu-using-yolov11-onnx-model-file">Calculating the inference on OVP81x VPU using YOLOv11 ONNX Model file</a><ul>
+<li class="toctree-l5"><a class="reference internal" href="#example-runs">Example runs</a></li>
+<li class="toctree-l5"><a class="reference internal" href="#inference-timings">Inference timings</a></li>
+</ul>
+</li>
+<li class="toctree-l4"><a class="reference internal" href="#deepstream-l4t">Deepstream-l4t</a></li>
 </ul>
 </li>
-<li class="toctree-l3"><a class="reference internal" href="#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="../../ifmDiagnostic/index_diagnostic.html">Diagnostic</a></li>
@@ -284,34 +287,37 @@ <h2>Calculating the inference on OVP81x VPU using YOLOv11 ONNX Model file<a clas
 <ol class="arabic simple">
 <li><p>Pull the machine learning base image provided by NVIDIA</p></li>
 </ol>
+<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="w"> </span>$<span class="w"> </span>docker<span class="w"> </span>pull<span class="w"> </span>nvcr.io/nvidia/l4t-ml:r32.7.1-py3
+</pre></div>
+</div>
 <ol class="arabic simple" start="2">
 <li><p>Create a YOLOv11 ONNX model file using python script</p></li>
 </ol>
-</section>
-</section>
-<section id="load-a-pretrained-yolo-model-recommended-for-training">
-<h1>Load a pretrained YOLO model (recommended for training)<a class="headerlink" href="#load-a-pretrained-yolo-model-recommended-for-training" title="Link to this heading"></a></h1>
-<p>model = YOLO(“yolo11n.pt”)</p>
-</section>
-<section id="export-the-model-to-onnx-format">
-<h1>Export the model to ONNX format<a class="headerlink" href="#export-the-model-to-onnx-format" title="Link to this heading"></a></h1>
-<p>model.export(format=”onnx”, imgsz=[480,640])</p>
-<div class="docutils">
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span> <span class="kn">from</span><span class="w"> </span><span class="nn">ultralytics</span><span class="w"> </span><span class="kn">import</span> <span class="n">YOLO</span>
+
+<span class="n">model</span> <span class="o">=</span> <span class="n">YOLO</span><span class="p">(</span><span class="s2">&quot;yolo11n.pt&quot;</span><span class="p">)</span> <span class="c1"># Load a pretrained YOLO model (recommended for training)</span>
+
+<span class="n">model</span><span class="o">.</span><span class="n">export</span><span class="p">(</span><span class="nb">format</span><span class="o">=</span><span class="s2">&quot;onnx&quot;</span><span class="p">,</span> <span class="n">imgsz</span><span class="o">=</span><span class="p">[</span><span class="mi">480</span><span class="p">,</span><span class="mi">640</span><span class="p">])</span> <span class="c1"># Export the model to ONNX format</span>
+</pre></div>
+</div>
 <ol class="arabic simple" start="3">
 <li><p>Copy the Docker container and ONNX model file to VPU</p></li>
 <li><p>Run the docker image in interactive mode</p></li>
 </ol>
 <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>$<span class="w"> </span>docker<span class="w"> </span>run<span class="w"> </span>--runtime<span class="w"> </span>nvidia<span class="w"> </span>-it<span class="w"> </span>--runtime<span class="w"> </span>nvidia<span class="w"> </span>--gpus<span class="w"> </span>all<span class="w"> </span>-v<span class="w"> </span>/path/to/your/model:/workspace/model<span class="w"> </span>nvcr.io/nvidia/l4t-ml:r32.7.1-py3
 </pre></div>
 </div>
-<h3 class="rubric" id="example-runs">Example runs</h3>
+<section id="example-runs">
+<h3>Example runs<a class="headerlink" href="#example-runs" title="Link to this heading"></a></h3>
 <ul class="simple">
 <li><p>Run the command inside a docker container to measure inference timings</p></li>
 </ul>
 <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>$<span class="w"> </span>usr/src/tensorrt/bin/trtexec<span class="w"> </span>--onnx<span class="o">=</span>yolov11/yolov11n.onnx<span class="w"> </span>--verbose<span class="w"> </span>--fp16
 </pre></div>
 </div>
-<h3 class="rubric" id="inference-timings">Inference timings</h3>
+</section>
+<section id="inference-timings">
+<h3>Inference timings<a class="headerlink" href="#inference-timings" title="Link to this heading"></a></h3>
 <table class="docutils align-default">
 <thead>
 <tr class="row-odd"><th class="head"><p>Model</p></th>
@@ -333,7 +339,10 @@ <h3 class="rubric" id="inference-timings">Inference timings</h3>
 </tr>
 </tbody>
 </table>
-<h2 class="rubric" id="deepstream-l4t">Deepstream-l4t</h2>
+</section>
+</section>
+<section id="deepstream-l4t">
+<h2>Deepstream-l4t<a class="headerlink" href="#deepstream-l4t" title="Link to this heading"></a></h2>
 <p>The Deepstream-l4t NGC container is used in this example.</p>
 <ol class="arabic simple">
 <li><p>Pull the Deepstream-l4t NGC container.</p></li>
@@ -441,7 +450,7 @@ <h2 class="rubric" id="deepstream-l4t">Deepstream-l4t</h2>
 </div>
 </li>
 </ol>
-</div>
+</section>
 </section>
20 changes: 8 additions & 12 deletions

@@ -145,22 +145,18 @@
 
 1. Pull the machine learning base image provided by NVIDIA
 
-:::{bash}
+```bash
 $ docker pull nvcr.io/nvidia/l4t-ml:r32.7.1-py3
-:::
-
+```
 2. Create a YOLOv11 ONNX model file using python script
 
-:::{python}
+```python
 from ultralytics import YOLO
-:::
-
-# Load a pretrained YOLO model (recommended for training)
-model = YOLO("yolo11n.pt")
-
-# Export the model to ONNX format
-model.export(format="onnx", imgsz=[480,640])
-:::
+
+model = YOLO("yolo11n.pt") # Load a pretrained YOLO model (recommended for training)
+
+model.export(format="onnx", imgsz=[480,640]) # Export the model to ONNX format
+```
 
 3. Copy the Docker container and ONNX model file to VPU
 4. Run the docker image in interactive mode
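For context, the fix above replaces `:::{bash}` and `:::{python}` colon fences, which MyST does not recognize as code directives, with standard fenced code blocks; that is why the old build leaked the `# Load ...` and `# Export ...` comment lines as top-level headings into every toctree. Taken together, the four steps in this file, plus the trtexec run shown in the built page above, amount to roughly the following end-to-end sketch. Only the docker pull, the Python export snippet, the docker run invocation, and the trtexec call come from the diff itself; the scp transfer and the oem@192.168.0.69 address are illustrative assumptions, not part of this commit.

```bash
# Sketch of the documented workflow (scp line and VPU address are assumptions)

# 1. Pull the machine learning base image provided by NVIDIA
docker pull nvcr.io/nvidia/l4t-ml:r32.7.1-py3

# 2. Create a YOLOv11 ONNX model file using a Python script
python3 - <<'EOF'
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # Load a pretrained YOLO model (recommended for training)
model.export(format="onnx", imgsz=[480, 640])  # Export the model to ONNX format
EOF

# 3. Copy the ONNX model file to the VPU (hypothetical address and login)
scp yolo11n.onnx oem@192.168.0.69:/path/to/your/model/

# 4. Run the docker image in interactive mode on the VPU
docker run --runtime nvidia -it --gpus all \
  -v /path/to/your/model:/workspace/model \
  nvcr.io/nvidia/l4t-ml:r32.7.1-py3

# Inside the container: measure inference timings with trtexec
/usr/src/tensorrt/bin/trtexec --onnx=yolov11/yolov11n.onnx --verbose --fp16
```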
4 changes: 1 addition & 3 deletions v1.10.13/index_software_interfaces.html

@@ -140,9 +140,7 @@
 <li class="toctree-l2"><a class="reference internal" href="SoftwareInterfaces/Docker/autostart.html">Autostarting the container</a></li>
 <li class="toctree-l2"><a class="reference internal" href="SoftwareInterfaces/Docker/resource_management.html">Resource Management on the VPU</a></li>
 <li class="toctree-l2"><a class="reference internal" href="SoftwareInterfaces/Docker/gpu.html">Enabling GPU usage</a></li>
-<li class="toctree-l2"><a class="reference internal" href="SoftwareInterfaces/Docker/tensorRT/TensorRT_on_a_VPU_hardware.html">Using TensorRT</a></li>
-<li class="toctree-l2"><a class="reference internal" href="SoftwareInterfaces/Docker/tensorRT/TensorRT_on_a_VPU_hardware.html#load-a-pretrained-yolo-model-recommended-for-training">Load a pretrained YOLO model (recommended for training)</a></li>
-<li class="toctree-l2"><a class="reference internal" href="SoftwareInterfaces/Docker/tensorRT/TensorRT_on_a_VPU_hardware.html#export-the-model-to-onnx-format">Export the model to ONNX format</a></li>
+<li class="toctree-l2"><a class="reference internal" href="SoftwareInterfaces/Docker/tensorRT/TensorRT_on_a_VPU_hardware.html">TensortRT: DL / ML model deployment</a></li>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="SoftwareInterfaces/ifmDiagnostic/index_diagnostic.html">Diagnostic</a><ul>
2 changes: 1 addition & 1 deletion v1.10.13/searchindex.js

Large diffs are not rendered by default.
