Commit
Deployed 56370d6 to pr-385 with MkDocs 1.4.3 and mike 2.1.0.dev0
github-actions committed Nov 20, 2023
1 parent d2de524 commit 8022ae6
Showing 6 changed files with 4,837 additions and 200 deletions.
21 changes: 2 additions & 19 deletions pr-385/design/autoware-architecture/perception/index.html
@@ -4440,26 +4440,14 @@


<div><h1 id="perception-component-design">Perception component design<a class="headerlink" href="#perception-component-design" title="Permanent link">#</a></h1>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Under Construction</p>
</div>
<h2 id="purpose-of-this-document">Purpose of this document<a class="headerlink" href="#purpose-of-this-document" title="Permanent link">#</a></h2>
<p>This document outlines the high-level design strategies, goals and related rationales in the development of the Perception Component. Through this document, it is expected that all OSS developers will comprehend the design philosophy, goals and constraints under which the Perception Component is designed, and participate seamlessly in the development.</p>
<h2 id="overview">Overview<a class="headerlink" href="#overview" title="Permanent link">#</a></h2>
<p>The Perception Component receives inputs from the Sensing, Localization, and Map components, and adds semantic information (e.g., Object Recognition, Obstacle Segmentation, Traffic Light Recognition, Occupancy Grid Map), which is then passed on to the Planning Component. This component design follows the overarching philosophy of Autoware, defined as the <a href="https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-concepts/">microautonomy concept</a>.</p>
<h2 id="goals-and-non-goals">Goals and non-goals<a class="headerlink" href="#goals-and-non-goals" title="Permanent link">#</a></h2>
<p>The role of the Perception component is to recognize the surrounding environment based on the data obtained through Sensing and acquire sufficient information (such as the presence of dynamic objects, stationary obstacles, blind spots, and traffic signal information) to enable autonomous driving.</p>
<p>In our overall design, we emphasize the concept of <a href="https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-concepts">microautonomy architecture</a>. This term refers to a design approach that focuses on the proper modularization of functions, clear definition of interfaces between these modules, and as a result, high expandability of the system. Given this context, the goal of the Perception component is set not to solve every conceivable complex use case (although we do aim to support basic ones), but rather to provide a platform that can be customized to the user's needs and can facilitate the development of additional features.</p>
<p>To clarify the design concepts, the following points are listed as goals and non-goals.</p>
<p><strong>Goals:</strong></p>
<ul>
<li>The basic functions are provided so that a simple <abbr title="Operational Design Domain">ODD</abbr> can be defined.</li>
@@ -4477,11 +4465,6 @@ <h2 id="goals-and-non-goals">Goals and non-goals<a class="headerlink" href="#goa
<li>The Perception component is not designed to always outperform human drivers.</li>
<li>The Perception component is not capable of achieving "zero overlooks" or "error-free recognition".</li>
</ul>
<h2 id="high-level-architecture">High-level architecture<a class="headerlink" href="#high-level-architecture" title="Permanent link">#</a></h2>
<p>This diagram describes the high-level architecture of the Perception Component.</p>
<p><img alt="overall-perception-architecture" src="image/high-level-perception-diagram.drawio.svg"></p>
@@ -4493,7 +4476,7 @@ <h2 id="high-level-architecture">High-level architecture<a class="headerlink" hr
<li><strong>Traffic Light Recognition</strong>: Recognizes the colors of traffic lights and the directions of arrow signals.</li>
</ul>
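<p>To make this composition concrete, the following is a minimal conceptual sketch in Python. It only illustrates how the sub-components listed above could hand data to one another; every name in it is an assumption made for this document, not an actual Autoware API.</p>
<pre><code class="language-python">
# Conceptual sketch only: all names are illustrative, not Autoware APIs.
from dataclasses import dataclass
from typing import Any, List

@dataclass
class PerceptionOutput:
    dynamic_objects: List[Any]   # detected/tracked objects and predicted paths
    obstacle_points: List[Any]   # point clouds the vehicle should avoid
    occupancy_grid: List[Any]    # obstacle / free-space / blind-spot cells
    traffic_lights: List[Any]    # light colors and arrow directions

def object_recognition(sensing: Any, vector_map: Any) -> List[Any]:
    return []  # placeholder: detection, tracking, prediction

def obstacle_segmentation(sensing: Any, pcd_map: Any) -> List[Any]:
    return []  # placeholder: extraction of points to avoid

def occupancy_grid_map(sensing: Any, localization: Any) -> List[Any]:
    return []  # placeholder: blind-spot / occupancy estimation

def traffic_light_recognition(sensing: Any, vector_map: Any) -> List[Any]:
    return []  # placeholder: color and arrow classification

def run_perception(sensing: Any, localization: Any,
                   vector_map: Any, pcd_map: Any) -> PerceptionOutput:
    """One pass over the sub-components; the result feeds Planning."""
    return PerceptionOutput(
        dynamic_objects=object_recognition(sensing, vector_map),
        obstacle_points=obstacle_segmentation(sensing, pcd_map),
        occupancy_grid=occupancy_grid_map(sensing, localization),
        traffic_lights=traffic_light_recognition(sensing, vector_map),
    )
</code></pre>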
<h2 id="component-interface">Component interface<a class="headerlink" href="#component-interface" title="Permanent link">#</a></h2>
-<p>The following describes the input/output concept between Perception Component and other components. See <a href="../../autoware-interfaces/components/perception.md">the Perception Component Interface (WIP)</a> page for the current implementation.</p>
+<p>The following describes the input/output concept between Perception Component and other components. See <a href="../../autoware-interfaces/components/perception/">the Perception Component Interface (WIP)</a> page for the current implementation.</p>
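<p>For orientation, here is a hypothetical ROS 2 wiring sketch of this input/output concept. Autoware is built on ROS 2, but the topic names and the message-type choices below are illustrative assumptions, not the actual Autoware interface.</p>
<pre><code class="language-python">
# Hypothetical sketch: topic names and message choices are placeholders.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2

class PerceptionIoSketch(Node):
    def __init__(self) -> None:
        super().__init__('perception_io_sketch')
        # Inputs from Sensing: real-time environment data.
        self.create_subscription(PointCloud2, '/sensing/lidar/points', self.on_lidar, 10)
        self.create_subscription(Image, '/sensing/camera/image', self.on_image, 10)
        # Output toward Planning; a real system would publish Autoware's
        # semantic object messages rather than a raw point cloud.
        self.obstacle_pub = self.create_publisher(PointCloud2, '/perception/obstacle_points', 10)

    def on_lidar(self, msg: PointCloud2) -> None:
        # Placeholder: obstacle segmentation would run here.
        self.obstacle_pub.publish(msg)

    def on_image(self, msg: Image) -> None:
        pass  # Placeholder: camera-based recognition.

def main() -> None:
    rclpy.init()
    rclpy.spin(PerceptionIoSketch())
</code></pre>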
<h3 id="input-to-the-perception-component">Input to the perception component<a class="headerlink" href="#input-to-the-perception-component" title="Permanent link">#</a></h3>
<ul>
<li><strong>From Sensing</strong>: This input should provide real-time information about the environment.<ul>
@@ -4508,7 +4491,7 @@ <h3 id="input-to-the-perception-component">Input to the perception component<a c
</li>
<li><strong>From Map</strong>: This input should provide static information about the environment.<ul>
<li>Vector Map: Contains all static information about the environment, including lane area information.</li>
-<li>Point Cloud Map: Contains static point cloud maps, which shoud not include information about the dynamic objects.</li>
+<li>Point Cloud Map: Contains static point cloud maps, which should not include information about the dynamic objects.</li>
</ul>
</li>
<li><strong>From API</strong>:<ul>
@@ -4514,8 +4514,8 @@ <h4 id="object-recognition">Object Recognition<a class="headerlink" href="#objec
<li>The trade-off involved in setting the upper limit is that, while it reduces computational load, recognition becomes impossible if the number of points within the unit region exceeds the limit. This trade-off can be addressed by incorporating a downsample filter in the preprocessing stage, but it comes at the cost of increased computational load.</li>
</ul>
</li>
-<li>A map-dependent detection validator is being utilized.<ul>
-<li>This allows the removal of instances where buildings are falsely detected as dynamic objects.</li>
+<li>A map-dependent detection validator reduces the risk of False Positives.<ul>
+<li>The map-based validator helps to decrease False Positives. For example, it can help eliminate instances where a building is erroneously detected as a vehicle.</li>
<li>Since it relies on the map, if the map is incorrect, objects may disappear. There is a trade-off where reducing False Positives may increase the likelihood of False Negatives.</li>
</ul>
</li>
@@ -4525,13 +4525,13 @@ <h4 id="object-recognition">Object Recognition<a class="headerlink" href="#objec
</ul>
</li>
<li>In the Radar Pipeline, only objects at a distant range and with significant speed are detected.<ul>
-<li>Considering the current standard performance of radar systems, not limiting detection to distant and fast-moving objects could result in numerous False Positives that could interfere with vehicle movement. Instead, this approach may lead to False Negatives for distant stationary objects.</li>
+<li>Considering the current standard performance of radar systems, not limiting detection to distant and fast-moving objects could result in numerous False Positives that could interfere with vehicle movement. In exchange, this approach leads to False Negatives for close-range or stationary objects (a minimal gating sketch follows this list).</li>
<li>With advancements in radar performance and a reduction in false detections expected, it is anticipated that the system will become capable of detecting both close-range and stationary objects.</li>
</ul>
</li>
-<li>Interpolator<ul>
-<li>The use of an interpolator allows for the detection of unknown objects in clustering and, if successfully tracked, helps prevent False Negatives.</li>
-<li>While reducing , there is a trade-off where issues such as inducing vehicle rotation or persistently holding False Positives at a distant range may occur.</li>
+<li>The Interpolator helps to decrease the likelihood of False Negatives.<ul>
+<li>If a target object is detected as an unknown object by LiDAR clustering and tracked by the multi-object tracker, the Interpolator can keep detecting it.</li>
+<li>However, there is a trade-off where issues such as inducing vehicle rotation or persistently holding False Positives at a distant range may occur.</li>
</ul>
</li>
</ul>
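<p>As a worked illustration of the Radar Pipeline gating described in the list above, here is a minimal sketch. The thresholds are illustrative assumptions, not Autoware defaults.</p>
<pre><code class="language-python">
# Minimal sketch of the "distant and fast only" radar gate; thresholds are
# illustrative assumptions, not Autoware defaults.
from dataclasses import dataclass

@dataclass
class RadarObject:
    range_m: float    # distance from the ego vehicle
    speed_mps: float  # estimated ground speed (signed)

def keep_radar_object(obj: RadarObject,
                      min_range_m: float = 80.0,
                      min_speed_mps: float = 3.0) -> bool:
    # Keeping only far, fast detections suppresses near-range clutter
    # (False Positives) at the cost of False Negatives for close-range or
    # stationary objects, the trade-off noted in the list above.
    is_far = obj.range_m >= min_range_m
    is_fast = abs(obj.speed_mps) >= min_speed_mps
    return is_far and is_fast

# Example: the slow object at 120 m is dropped; the fast one at 150 m is kept.
objects = [RadarObject(120.0, 1.0), RadarObject(150.0, 12.0)]
kept = [o for o in objects if keep_radar_object(o)]
</code></pre>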