diff --git a/latest/search/search_index.json b/latest/search/search_index.json index feb9af933e87f..9d328f75a1ff7 100644 --- a/latest/search/search_index.json +++ b/latest/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Autoware Universe","text":""},{"location":"#autoware-universe","title":"Autoware Universe","text":""},{"location":"#welcome-to-autoware-universe","title":"Welcome to Autoware Universe","text":"

Autoware Universe serves as a foundational pillar within the Autoware ecosystem, playing a critical role in enhancing the core functionalities of autonomous driving technologies. This repository is a pivotal element of the Autoware Core/Universe concept, managing a wide array of packages that significantly extend the capabilities of autonomous vehicles.

"},{"location":"#getting-started","title":"Getting Started","text":"

To dive into the vast world of Autoware and understand how Autoware Universe fits into the bigger picture, we recommend starting with the Autoware Documentation. This resource provides a thorough overview of the Autoware ecosystem, guiding you through its components, functionalities, and how to get started with development.

"},{"location":"#explore-autoware-universe-documentation","title":"Explore Autoware Universe documentation","text":"

For those looking to explore the specifics of Autoware Universe components, the Autoware Universe Documentation, deployed with MkDocs, offers detailed insights.

"},{"location":"#code-coverage-metrics","title":"Code Coverage Metrics","text":"

The table below shows the coverage rate of the entire Autoware Universe project and of each sub-component.

"},{"location":"#entire-project-coverage","title":"Entire Project Coverage","text":""},{"location":"#component-wise-coverage","title":"Component-wise Coverage","text":"

You can check more details by clicking a badge and navigating to the Codecov website.

Component Coverage Common Control Evaluator Launch TBD Localization Map Perception Planning Sensing Simulator System Vehicle"},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#contributor-covenant-code-of-conduct","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to a positive environment for our community include:

Examples of unacceptable behavior include:

"},{"location":"CODE_OF_CONDUCT/#enforcement-responsibilities","title":"Enforcement Responsibilities","text":"

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at conduct@autoware.org. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

"},{"location":"CODE_OF_CONDUCT/#enforcement-guidelines","title":"Enforcement Guidelines","text":"

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

"},{"location":"CODE_OF_CONDUCT/#1-correction","title":"1. Correction","text":"

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

"},{"location":"CODE_OF_CONDUCT/#2-warning","title":"2. Warning","text":"

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

"},{"location":"CODE_OF_CONDUCT/#3-temporary-ban","title":"3. Temporary Ban","text":"

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

"},{"location":"CODE_OF_CONDUCT/#4-permanent-ban","title":"4. Permanent Ban","text":"

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

"},{"location":"CONTRIBUTING/","title":"Contributing","text":""},{"location":"CONTRIBUTING/#contributing","title":"Contributing","text":"

See https://autowarefoundation.github.io/autoware-documentation/main/contributing/.

"},{"location":"DISCLAIMER/","title":"DISCLAIMER","text":"

DISCLAIMER

\u201cAutoware\u201d will be provided by The Autoware Foundation under the Apache License 2.0. This \u201cDISCLAIMER\u201d will be applied to all users of Autoware (a \u201cUser\u201d or \u201cUsers\u201d) with the Apache License 2.0 and Users shall hereby approve and acknowledge all the contents specified in this disclaimer below and will be deemed to consent to this disclaimer without any objection upon utilizing or downloading Autoware.

Disclaimer and Waiver of Warranties

  1. AUTOWARE FOUNDATION MAKES NO REPRESENTATION OR WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH RESPECT TO PROVIDING AUTOWARE (the \u201cService\u201d) including but not limited to any representation or warranty (i) of fitness or suitability for a particular purpose contemplated by the Users, (ii) of the expected functions, commercial value, accuracy, or usefulness of the Service, (iii) that the use by the Users of the Service complies with the laws and regulations applicable to the Users or any internal rules established by industrial organizations, (iv) that the Service will be free of interruption or defects, (v) of the non-infringement of any third party's right and (vi) the accuracy of the content of the Services and the software itself.

  2. The Autoware Foundation shall not be liable for any damage incurred by the User that are attributable to the Autoware Foundation for any reasons whatsoever. UNDER NO CIRCUMSTANCES SHALL THE AUTOWARE FOUNDATION BE LIABLE FOR INCIDENTAL, INDIRECT, SPECIAL OR FUTURE DAMAGES OR LOSS OF PROFITS.

  3. A User shall be entirely responsible for the content posted by the User and its use of any content of the Service or the Website. If the User is held responsible in a civil action such as a claim for damages or even in a criminal case, the Autoware Foundation and member companies, governments and academic & non-profit organizations and their directors, officers, employees and agents (collectively, the \u201cIndemnified Parties\u201d) shall be completely discharged from any rights or assertions the User may have against the Indemnified Parties, or from any legal action, litigation or similar procedures.

Indemnity

A User shall indemnify and hold the Indemnified Parties harmless from any of their damages, losses, liabilities, costs or expenses (including attorneys' fees or criminal compensation), or any claims or demands made against the Indemnified Parties by any third party, due to or arising out of, or in connection with utilizing Autoware (including the representations and warranties), the violation of applicable Product Liability Law of each country (including criminal case) or violation of any applicable laws by the Users, or the content posted by the User or its use of any content of the Service or the Website.

"},{"location":"common/","title":"Common","text":""},{"location":"common/#common","title":"Common","text":""},{"location":"common/#getting-started","title":"Getting Started","text":"

The Autoware.Universe Common folder consists of common and testing libraries that are used by other Autoware components, as well as useful plugins for visualization in RViz2.

Note

In addition to the ones listed in this folder, users can also have a look at some of the add-ons on the autoware_tools/common documentation page.

"},{"location":"common/#highlights","title":"Highlights","text":"

Some of the commonly used libraries are:

  1. autoware_universe_utils
  2. autoware_motion_utils
"},{"location":"common/autoware_adapi_specs/","title":"autoware_adapi_specs","text":""},{"location":"common/autoware_adapi_specs/#autoware_adapi_specs","title":"autoware_adapi_specs","text":"

This package is a specification of Autoware AD API.

"},{"location":"common/autoware_auto_common/design/comparisons/","title":"Comparisons","text":""},{"location":"common/autoware_auto_common/design/comparisons/#comparisons","title":"Comparisons","text":"

The float_comparisons.hpp library is a simple set of functions for performing approximate numerical comparisons. There are separate functions for performing comparisons using absolute bounds and relative bounds. Absolute comparison checks are prefixed with abs_ and relative checks are prefixed with rel_.

The bool_comparisons.hpp library additionally contains an XOR operator.

The intent of the library is to improve the readability of code and reduce the likelihood of typographical errors when using numerical and boolean comparisons.

"},{"location":"common/autoware_auto_common/design/comparisons/#target-use-cases","title":"Target use cases","text":"

The approximate comparisons are intended to be used to check whether two numbers lie within some absolute or relative interval. The exclusive_or function will test whether two values cast to different boolean values.

"},{"location":"common/autoware_auto_common/design/comparisons/#assumptions","title":"Assumptions","text":""},{"location":"common/autoware_auto_common/design/comparisons/#example-usage","title":"Example Usage","text":"
#include \"autoware_auto_common/common/bool_comparisons.hpp\"\n#include \"autoware_auto_common/common/float_comparisons.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::qp_interface::helper_functions::comparisons;\n\nstatic constexpr auto epsilon = 0.2;\nstatic constexpr auto relative_epsilon = 0.01;\n\nstd::cout << exclusive_or(true, false) << \"\\n\";\n// Prints: true\n\nstd::cout << rel_eq(1.0, 1.1, relative_epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << approx_eq(10000.0, 10010.0, epsilon, relative_epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_eq(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_ne(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_eq_zero(0.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_lt(4.0, 4.25, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_lte(1.0, 1.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gt(1.25, 1.0, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gte(0.75, 1.0, epsilon) << \"\\n\";\n// Prints: false\n
"},{"location":"common/autoware_component_interface_specs/","title":"autoware_component_interface_specs","text":""},{"location":"common/autoware_component_interface_specs/#autoware_component_interface_specs","title":"autoware_component_interface_specs","text":"

This package is a specification of component interfaces.

"},{"location":"common/autoware_component_interface_tools/","title":"autoware_component_interface_tools","text":""},{"location":"common/autoware_component_interface_tools/#autoware_component_interface_tools","title":"autoware_component_interface_tools","text":"

This package provides the following tools for the component interface.

"},{"location":"common/autoware_component_interface_tools/#service_log_checker","title":"service_log_checker","text":"

Monitors the service log of component_interface_utils and displays a message if the response status is an error.

"},{"location":"common/autoware_component_interface_utils/","title":"autoware_component_interface_utils","text":""},{"location":"common/autoware_component_interface_utils/#autoware_component_interface_utils","title":"autoware_component_interface_utils","text":""},{"location":"common/autoware_component_interface_utils/#features","title":"Features","text":"

This is a utility package that provides the following features:

"},{"location":"common/autoware_component_interface_utils/#design","title":"Design","text":"

This package provides wrappers for the interface classes of rclcpp. The wrappers limit the usage of the original classes to enforce the processing recommended by the component interface. They do not inherit from the rclcpp classes; instead, they forward or wrap only the member functions that are allowed to be used.

"},{"location":"common/autoware_component_interface_utils/#instantiation-of-the-wrapper-class","title":"Instantiation of the wrapper class","text":"

The wrapper class requires interface information in this format.

struct SampleService\n{\nusing Service = sample_msgs::srv::ServiceType;\nstatic constexpr char name[] = \"/sample/service\";\n};\n\nstruct SampleMessage\n{\nusing Message = sample_msgs::msg::MessageType;\nstatic constexpr char name[] = \"/sample/message\";\nstatic constexpr size_t depth = 1;\nstatic constexpr auto reliability = RMW_QOS_POLICY_RELIABILITY_RELIABLE;\nstatic constexpr auto durability = RMW_QOS_POLICY_DURABILITY_TRANSIENT_LOCAL;\n};\n

Create the wrapper using the above definition as follows.

// header file\nautoware::component_interface_utils::Service<SampleService>::SharedPtr srv_;\nautoware::component_interface_utils::Client<SampleService>::SharedPtr cli_;\nautoware::component_interface_utils::Publisher<SampleMessage>::SharedPtr pub_;\nautoware::component_interface_utils::Subscription<SampleMessage>::SharedPtr sub_;\n\n// source file\nconst auto node = autoware::component_interface_utils::NodeAdaptor(this);\nnode.init_srv(srv_, callback);\nnode.init_cli(cli_);\nnode.init_pub(pub_);\nnode.init_sub(sub_, callback);\n
"},{"location":"common/autoware_component_interface_utils/#logging-for-service-and-client","title":"Logging for service and client","text":"

If the wrapper class is used, logging is automatically enabled. The log level is RCLCPP_INFO.

"},{"location":"common/autoware_component_interface_utils/#service-exception-for-response","title":"Service exception for response","text":"

If the wrapper class is used and the service response has a status field, a thrown ServiceException is automatically caught and stored in the status field. This is useful when returning an error from a function called from the service callback.

void service_callback(Request req, Response res)\n{\nfunction();\nres->status.success = true;\n}\n\nvoid function()\n{\nthrow ServiceException(ERROR_CODE, \"message\");\n}\n

If the wrapper class is not used or the service response has no status, manually catch the ServiceException as follows.

void service_callback(Request req, Response res)\n{\ntry {\nfunction();\nres->status.success = true;\n} catch (const ServiceException & error) {\nres->status = error.status();\n}\n}\n
"},{"location":"common/autoware_component_interface_utils/#relays-for-topic-and-service","title":"Relays for topic and service","text":"

There are utilities for relaying services and messages of the same type.

const auto node = autoware::component_interface_utils::NodeAdaptor(this);\nservice_callback_group_ = create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);\nnode.relay_message(pub_, sub_);\nnode.relay_service(cli_, srv_, service_callback_group_);  // group is for avoiding deadlocks\n
"},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/","title":"Fake Test Node","text":""},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#fake-test-node","title":"Fake Test Node","text":""},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#what-this-package-provides","title":"What this package provides","text":"

When writing an integration test for a node in C++ using GTest, quite a lot of boilerplate code needs to be written to set up a fake node that publishes expected messages on an expected topic and subscribes to messages on some other topic. This is usually implemented as a custom GTest fixture.

This package contains a library that introduces two utility classes that can be used in place of custom fixtures described above to write integration tests for a node:

These fixtures take care of initializing and re-initializing rclcpp as well as of checking that all subscribers and publishers have a match, thus reducing the amount of boilerplate code that the user needs to write.

"},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#how-to-use-this-library","title":"How to use this library","text":"

After including the relevant header, the user can define a typedef to give the fixture a custom name and use the provided classes directly as fixtures in TEST_F and TEST_P tests.

"},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#example-usage","title":"Example usage","text":"

Let's say there is a node NodeUnderTest that requires testing. It just subscribes to std_msgs::msg::Int32 messages and publishes a std_msgs::msg::Bool to indicate that the input is positive. To test such a node, the following code can be written using autoware::fake_test_node::FakeTestNode:

using FakeNodeFixture = autoware::fake_test_node::FakeTestNode;\n\n/// @test Test that we can use a non-parametrized test.\nTEST_F(FakeNodeFixture, Test) {\nInt32 msg{};\nmsg.data = 15;\nconst auto node = std::make_shared<NodeUnderTest>();\n\nBool::SharedPtr last_received_msg{};\nauto fake_odom_publisher = create_publisher<Int32>(\"/input_topic\");\nauto result_odom_subscription = create_subscription<Bool>(\"/output_topic\", *node,\n[&last_received_msg](const Bool::SharedPtr msg) {last_received_msg = msg;});\n\nconst auto dt{std::chrono::milliseconds{100LL}};\nconst auto max_wait_time{std::chrono::seconds{10LL}};\nauto time_passed{std::chrono::milliseconds{0LL}};\nwhile (!last_received_msg) {\nfake_odom_publisher->publish(msg);\nrclcpp::spin_some(node);\nrclcpp::spin_some(get_fake_node());\nstd::this_thread::sleep_for(dt);\ntime_passed += dt;\nif (time_passed > max_wait_time) {\nFAIL() << \"Did not receive a message soon enough.\";\n}\n}\nEXPECT_TRUE(last_received_msg->data);\nSUCCEED();\n}\n

Here only the TEST_F example is shown, but TEST_P usage is very similar, with a little more boilerplate to set up the parameter values; see test_fake_test_node.cpp for an example.

"},{"location":"common/autoware_global_parameter_loader/Readme/","title":"Autoware Global Parameter Loader","text":""},{"location":"common/autoware_global_parameter_loader/Readme/#autoware-global-parameter-loader","title":"Autoware Global Parameter Loader","text":"

This package sets common ROS parameters for each node.

"},{"location":"common/autoware_global_parameter_loader/Readme/#usage","title":"Usage","text":"

Add the following lines to the launch file of the node in which you want to get global parameters.

<!-- Global parameters -->\n<include file=\"$(find-pkg-share autoware_global_parameter_loader)/launch/global_params.launch.py\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n</include>\n

The vehicle model parameter is read from config/vehicle_info.param.yaml in the vehicle_model_description package.

"},{"location":"common/autoware_global_parameter_loader/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently only vehicle_info is loaded by this launcher.

"},{"location":"common/autoware_glog_component/","title":"glog_component","text":""},{"location":"common/autoware_glog_component/#glog_component","title":"glog_component","text":"

This package provides the glog (Google logging library) feature as a ROS 2 component library, which is used to dynamically load the glog feature into a container.

See the glog GitHub repository for the details of its features.

"},{"location":"common/autoware_glog_component/#example","title":"Example","text":"

When you load the glog_component into a container, the launch file can be written as below:

glog_component = ComposableNode(\n    package=\"autoware_glog_component\",\n    plugin=\"autoware::glog_component::GlogComponent\",\n    name=\"glog_component\",\n)\n\ncontainer = ComposableNodeContainer(\n    name=\"my_container\",\n    namespace=\"\",\n    package=\"rclcpp_components\",\n    executable=LaunchConfiguration(\"container_executable\"),\n    composable_node_descriptions=[\n        component1,\n        component2,\n        glog_component,\n    ],\n)\n
"},{"location":"common/autoware_goal_distance_calculator/Readme/","title":"autoware_goal_distance_calculator","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#autoware_goal_distance_calculator","title":"autoware_goal_distance_calculator","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes the deviation of the self-pose from the goal pose.

"},{"location":"common/autoware_goal_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/mission_planning/route autoware_planning_msgs::msg::Route Used to get goal pose /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/autoware_goal_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description deviation/lateral tier4_debug_msgs::msg::Float64Stamped publish lateral deviation of self-pose from goal pose[m] deviation/longitudinal tier4_debug_msgs::msg::Float64Stamped publish longitudinal deviation of self-pose from goal pose[m] deviation/yaw tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[rad] deviation/yaw_deg tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[deg]"},{"location":"common/autoware_goal_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period. [Hz]"},{"location":"common/autoware_goal_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation oneshot bool true publish deviations just once or repeatedly"},{"location":"common/autoware_goal_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_grid_map_utils/","title":"Grid Map Utils","text":""},{"location":"common/autoware_grid_map_utils/#grid-map-utils","title":"Grid Map Utils","text":""},{"location":"common/autoware_grid_map_utils/#overview","title":"Overview","text":"

This package contains a re-implementation of the grid_map::PolygonIterator used to iterate over all cells of a grid map contained inside some polygon.

"},{"location":"common/autoware_grid_map_utils/#algorithm","title":"Algorithm","text":"

This implementation uses the scan line algorithm, a common algorithm used to draw polygons on a rasterized image. The main idea of the algorithm, adapted to a grid map, is as follows:
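To make the idea concrete, the following is a simplified sketch of scan-line rasterization on a unit grid, selecting the cells whose centers fall between successive edge intersections; it is an illustration only, not this package's implementation.

#include <algorithm>\n#include <cmath>\n#include <utility>\n#include <vector>\n\nstruct Point\n{\ndouble x;\ndouble y;\n};\n\n// Illustrative scan-line rasterization on a unit grid: returns the (row, col)\n// cells whose centers lie inside the polygon. Not this package's implementation.\nstd::vector<std::pair<int, int>> cells_inside(const std::vector<Point> & poly, int rows, int cols)\n{\nstd::vector<std::pair<int, int>> cells;\nfor (int r = 0; r < rows; ++r) {\nconst double y = r + 0.5;  // ordinate of the cell centers in this row\nstd::vector<double> xs;    // intersections of this scan line with the polygon edges\nfor (size_t i = 0; i < poly.size(); ++i) {\nconst Point & p = poly[i];\nconst Point & q = poly[(i + 1) % poly.size()];\nif ((p.y <= y) != (q.y <= y)) {  // the edge crosses this scan line\nxs.push_back(p.x + (y - p.y) * (q.x - p.x) / (q.y - p.y));\n}\n}\nstd::sort(xs.begin(), xs.end());\nfor (size_t i = 0; i + 1 < xs.size(); i += 2) {  // fill between intersection pairs\nfor (int c = std::max(0, static_cast<int>(std::ceil(xs[i] - 0.5))); c < cols && c + 0.5 <= xs[i + 1]; ++c) {\ncells.emplace_back(r, c);\n}\n}\n}\nreturn cells;\n}\n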

More details on the scan line algorithm can be found in the References.

"},{"location":"common/autoware_grid_map_utils/#api","title":"API","text":"

The autoware::grid_map_utils::PolygonIterator follows the same API as the original grid_map::PolygonIterator.

"},{"location":"common/autoware_grid_map_utils/#assumptions","title":"Assumptions","text":"

The behavior of the autoware::grid_map_utils::PolygonIterator is only guaranteed to match the grid_map::PolygonIterator if the edges of the polygon do not exactly cross any cell center. In such a case, whether the crossed cell is considered inside or outside of the polygon can vary due to floating-point precision errors.

"},{"location":"common/autoware_grid_map_utils/#performances","title":"Performances","text":"

Benchmarking code is implemented in test/benchmarking.cpp and is also used to validate that the autoware::grid_map_utils::PolygonIterator behaves exactly like the grid_map::PolygonIterator.

The following figure shows a comparison of the runtime between the implementation of this package (autoware_grid_map_utils) and the original implementation (grid_map). The time measured includes the construction of the iterator and the iteration over all indexes, and is shown using a logarithmic scale. Results were obtained by varying the side size n of a square grid map with 100 <= n <= 1000 (a size of n means a grid of n x n cells) and using random polygons with a number of vertices 3 <= m <= 100, with each parameter pair (n, m) repeated 10 times.

"},{"location":"common/autoware_grid_map_utils/#future-improvements","title":"Future improvements","text":"

There exist variations of the scan line algorithm for multiple polygons. These can be implemented if we want to iterate over the cells contained in at least one of multiple polygons.

The current implementation imitates the behavior of the original grid_map::PolygonIterator, where a cell is selected if its center position is inside the polygon. This behavior could be changed, for example, to return all cells overlapped by the polygon.

"},{"location":"common/autoware_grid_map_utils/#references","title":"References","text":""},{"location":"common/autoware_interpolation/","title":"Interpolation package","text":""},{"location":"common/autoware_interpolation/#interpolation-package","title":"Interpolation package","text":"

This package supplies linear and spline interpolation functions.

"},{"location":"common/autoware_interpolation/#linear-interpolation","title":"Linear Interpolation","text":"

lerp(src_val, dst_val, ratio) (for scalar interpolation) interpolates src_val and dst_val with ratio. This will be replaced with std::lerp(src_val, dst_val, ratio) in C++20.

lerp(base_keys, base_values, query_keys) (for vector interpolation) applies linear interpolation to each pair of consecutive points whose x values are base_keys and whose y values are base_values. It then calculates the interpolated values on the y-axis for query_keys on the x-axis.
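A minimal usage sketch of both overloads follows; the namespace and header shown here are assumptions for illustration and may differ in your checkout.

// The header path and namespace are assumed for illustration; adjust to your checkout.\n#include \"autoware/interpolation/linear_interpolation.hpp\"\n\n#include <vector>\n\n// scalar: 20% of the way from 0.0 to 10.0 -> 2.0\nconst double v = autoware::interpolation::lerp(0.0, 10.0, 0.2);\n\n// vector: piecewise-linear values at query_keys on the polyline (base_keys, base_values)\nconst std::vector<double> base_keys{0.0, 1.0, 2.0};\nconst std::vector<double> base_values{0.0, 2.0, 4.0};\nconst std::vector<double> query_keys{0.5, 1.5};\nconst auto query_values = autoware::interpolation::lerp(base_keys, base_values, query_keys);  // {1.0, 3.0}\n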

"},{"location":"common/autoware_interpolation/#spline-interpolation","title":"Spline Interpolation","text":"

spline(base_keys, base_values, query_keys) (for vector interpolation) applies spline interpolation to the points whose x values are base_keys and whose y values are base_values. It then calculates the interpolated values on the y-axis for query_keys on the x-axis.
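A usage sketch, under the same assumed namespace as the linear example above:

// Cubic-spline interpolated values at query_keys; boundary conditions follow the algorithm below.\nconst auto query_values = autoware::interpolation::spline(base_keys, base_values, query_keys);\n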

"},{"location":"common/autoware_interpolation/#evaluation-of-calculation-cost","title":"Evaluation of calculation cost","text":"

We evaluated the calculation cost of spline interpolation for 100 points and adopted the fastest method, the tridiagonal matrix algorithm. The other methods existed in the spline_interpolation package, which has been removed from Autoware.

Method Calculation time Tridiagonal Matrix Algorithm 0.007 [ms] Preconditioned Conjugate Gradient 0.024 [ms] Successive Over-Relaxation 0.074 [ms]"},{"location":"common/autoware_interpolation/#spline-interpolation-algorithm","title":"Spline Interpolation Algorithm","text":"

Assuming that the sizes of base_keys (\\(x_i\\)) and base_values (\\(y_i\\)) are \\(N + 1\\), we aim to calculate spline interpolation with the following equation to interpolate between \\(y_i\\) and \\(y_{i+1}\\).

\\[ Y_i(x) = a_i (x - x_i)^3 + b_i (x - x_i)^2 + c_i (x - x_i) + d_i \\ \\ \\ (i = 0, \\dots, N-1) \\]

Constraints on spline interpolation are as follows. The number of constraints is \\(4N\\), which is equal to the number of variables of spline interpolation.

\\[ \\begin{align} Y_i (x_i) & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y_i (x_{i+1}) & = y_{i+1} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y'_i (x_{i+1}) & = Y'_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_i (x_{i+1}) & = Y''_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_0 (x_0) & = 0 \\\\ Y''_{N-1} (x_N) & = 0 \\end{align} \\]

According to this article, spline interpolation is formulated as the following linear equation.

\\[ \\begin{align} \\begin{pmatrix} 2(h_0 + h_1) & h_1 \\\\ h_0 & 2 (h_1 + h_2) & h_2 & & O \\\\ & & & \\ddots \\\\ O & & & & h_{N-2} & 2 (h_{N-2} + h_{N-1}) \\end{pmatrix} \\begin{pmatrix} v_1 \\\\ v_2 \\\\ v_3 \\\\ \\vdots \\\\ v_{N-1} \\end{pmatrix}= \\begin{pmatrix} w_1 \\\\ w_2 \\\\ w_3 \\\\ \\vdots \\\\ w_{N-1} \\end{pmatrix} \\end{align} \\]

where

\\[ \\begin{align} h_i & = x_{i+1} - x_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ w_i & = 6 \\left(\\frac{y_{i+1} - y_i}{h_i} - \\frac{y_i - y_{i-1}}{h_{i-1}}\\right) \\ \\ \\ (i = 1, \\dots, N-1) \\end{align} \\]

The coefficient matrix of this linear equation is a tridiagonal matrix. Therefore, it can be solved with the tridiagonal matrix algorithm, which solves the linear equation directly, without iterative methods such as gradient descent.
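As an illustration of why no iterative solver is needed, below is a minimal sketch of the Thomas algorithm (forward elimination followed by back substitution); it is not this package's implementation.

#include <vector>\n\n// Illustrative Thomas algorithm (not this package's implementation): solves a\n// tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c and\n// right-hand side d, returning the solution vector.\nstd::vector<double> solve_tridiagonal(\nstd::vector<double> a, std::vector<double> b, std::vector<double> c, std::vector<double> d)\n{\nconst size_t n = d.size();\nfor (size_t i = 1; i < n; ++i) {  // forward elimination\nconst double w = a[i - 1] / b[i - 1];\nb[i] -= w * c[i - 1];\nd[i] -= w * d[i - 1];\n}\nd[n - 1] /= b[n - 1];  // back substitution\nfor (size_t i = n - 1; i-- > 0;) {\nd[i] = (d[i] - c[i] * d[i + 1]) / b[i];\n}\nreturn d;\n}\n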

Solving this linear equation with the tridiagonal matrix algorithm, we can calculate the coefficients of the spline interpolation as follows.

\\[ \\begin{align} a_i & = \\frac{v_{i+1} - v_i}{6 (x_{i+1} - x_i)} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ b_i & = \\frac{v_i}{2} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ c_i & = \\frac{y_{i+1} - y_i}{x_{i+1} - x_i} - \\frac{1}{6}(x_{i+1} - x_i)(2 v_i + v_{i+1}) \\ \\ \\ (i = 0, \\dots, N-1) \\\\ d_i & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\end{align} \\]"},{"location":"common/autoware_interpolation/#tridiagonal-matrix-algorithm","title":"Tridiagonal Matrix Algorithm","text":"

We solve the tridiagonal linear equation according to this article, where the variables of the linear equation are expressed as follows in the implementation.

\\[ \\begin{align} \\begin{pmatrix} b_0 & c_0 & & \\\\ a_0 & b_1 & c_2 & O \\\\ & & \\ddots \\\\ O & & a_{N-2} & b_{N-1} \\end{pmatrix} x = \\begin{pmatrix} d_0 \\\\ d_2 \\\\ d_3 \\\\ \\vdots \\\\ d_{N-1} \\end{pmatrix} \\end{align} \\]"},{"location":"common/autoware_motion_utils/","title":"Motion Utils package","text":""},{"location":"common/autoware_motion_utils/#motion-utils-package","title":"Motion Utils package","text":""},{"location":"common/autoware_motion_utils/#definition-of-terms","title":"Definition of terms","text":""},{"location":"common/autoware_motion_utils/#segment","title":"Segment","text":"

A segment in Autoware is the line segment between two successive points, as follows.

The nearest segment index and the nearest point index to a certain position are not always the same. Therefore, we prepare two different utility functions to calculate the nearest index for points and for segments.

"},{"location":"common/autoware_motion_utils/#nearest-index-search","title":"Nearest index search","text":"

In this section, the nearest index search and the nearest segment index search are explained.

We have the same functions for the nearest index search and the nearest segment index search. Taking the nearest index search as an example, there are two types of functions.

The first function finds the nearest index with distance and yaw thresholds.

template <class T>\nsize_t findFirstNearestIndexWithSoftConstraints(\nconst T & points, const geometry_msgs::msg::Pose & pose,\nconst double dist_threshold = std::numeric_limits<double>::max(),\nconst double yaw_threshold = std::numeric_limits<double>::max());\n

This function finds the first local solution within thresholds. The reason to find the first local one is to deal with some edge cases explained in the next subsection.

The threshold arguments have default values, so you can decide which thresholds to pass to the function:

  1. When both the distance and yaw thresholds are given.
  2. When only the distance threshold is given.
  3. When no thresholds are given.

The second function finds the nearest index in the lane whose id is lane_id.

size_t findNearestIndexFromLaneId(\nconst tier4_planning_msgs::msg::PathWithLaneId & path,\nconst geometry_msgs::msg::Point & pos, const int64_t lane_id);\n
"},{"location":"common/autoware_motion_utils/#application-to-various-object","title":"Application to various object","text":"

Many node packages calculate the nearest index to an object. Here we explain the recommended way to calculate it.

"},{"location":"common/autoware_motion_utils/#nearest-index-for-the-ego","title":"Nearest index for the ego","text":"

Assuming that the path length before the ego is short enough, we expect to find the correct nearest index in the following edge cases by using findFirstNearestIndexWithSoftConstraints with both the distance and yaw thresholds. Blue circles describe the distance threshold from the base link position, and the two blue lines describe the yaw threshold against the base link orientation. Among the points in these cases, the correct nearest point, shown in red, can be found.

Therefore, the implementation is as follows.

const size_t ego_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t ego_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\n
"},{"location":"common/autoware_motion_utils/#nearest-index-for-dynamic-objects","title":"Nearest index for dynamic objects","text":"

For the ego nearest index, the orientation is considered in addition to the position, since the ego is supposed to follow the points. However, for dynamic objects (e.g., predicted objects), the orientation may not match the direction of the points, e.g., a dynamic object driving backward while the ego is driving forward.

Therefore, the yaw threshold should not be considered for dynamic objects. The implementation is as follows.

const size_t dynamic_obj_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\nconst size_t dynamic_obj_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\n
"},{"location":"common/autoware_motion_utils/#nearest-index-for-traffic-objects","title":"Nearest index for traffic objects","text":"

In lanelet maps, traffic objects belong to a specific lane. With this lane's id, the correct nearest index can be found.

The implementation is as follows.

// first extract `lane_id` which the traffic object belongs to.\nconst size_t traffic_obj_nearest_idx = findNearestIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\nconst size_t traffic_obj_nearest_seg_idx = findNearestSegmentIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\n
"},{"location":"common/autoware_motion_utils/#for-developers","title":"For developers","text":"

Some of the template functions in trajectory.hpp are mostly used with specific types (autoware_planning_msgs::msg::PathPoint and autoware_planning_msgs::msg::TrajectoryPoint), so they are exported as extern template functions to speed up compilation.

The autoware_motion_utils.hpp header file was removed because the source files that directly or indirectly included it took a long time to preprocess.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/","title":"vehicle utils","text":""},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#vehicle-utils","title":"vehicle utils","text":"

Vehicle utils provides a convenient library for checking the vehicle status.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#feature","title":"Feature","text":"

The library contains the following classes.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#vehicle_stop_checker","title":"vehicle_stop_checker","text":"

This class checks whether the vehicle is stopped, based on the localization result.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#parameters","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s]"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#member-functions","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#example-usage","title":"Example Usage","text":"

Necessary includes:

#include <autoware/universe_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_stop_checker_ = std::make_unique<VehicleStopChecker>(this);\n}\n\nstd::unique_ptr<VehicleStopChecker> vehicle_stop_checker_;\n\nbool sampleFunc();\n\n...\n};\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_stop_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_stop_checker_->isVehicleStopped(3.0);\n\n...\n}\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#vehicle_arrival_checker","title":"vehicle_arrival_checker","text":"

This class checks whether the vehicle has arrived at a stop point, based on the localization and planning results.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#subscribed-topics_1","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry /planning/scenario_planning/trajectory autoware_planning_msgs::msg::Trajectory trajectory"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#parameters_1","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s] th_arrived_distance_m double 1.0 threshold distance to check if vehicle has arrived at target point [m]"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#member-functions_1","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
bool isVehicleStoppedAtStopPoint(const double stop_duration)\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#example-usage_1","title":"Example Usage","text":"

Necessary includes:

#include <autoware/universe_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_arrival_checker_ = std::make_unique<VehicleArrivalChecker>(this);\n}\n\nstd::unique_ptr<VehicleArrivalChecker> vehicle_arrival_checker_;\n\nbool sampleFunc();\n\n...\n};\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_arrival_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_arrival_checker_->isVehicleStopped(3.0);\n\n...\n\nconst auto result_3 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint();\n\n...\n\nconst auto result_4 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint(3.0);\n\n...\n}\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

vehicle_stop_checker and vehicle_arrival_checker cannot check whether the vehicle has been stopped for more than velocity_buffer_time_sec seconds.

"},{"location":"common/autoware_object_recognition_utils/","title":"autoware_object_recognition_utils","text":""},{"location":"common/autoware_object_recognition_utils/#autoware_object_recognition_utils","title":"autoware_object_recognition_utils","text":""},{"location":"common/autoware_object_recognition_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions that are useful across the object recognition module. It includes functions for converting between different data and message types and for performing common operations on them.

"},{"location":"common/autoware_path_distance_calculator/Readme/","title":"autoware_path_distance_calculator","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#autoware_path_distance_calculator","title":"autoware_path_distance_calculator","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes the distance from the path point closest to the self-position to the end point of the path. Note that the distance means the arc length along the path, not the Euclidean distance between the two points.

"},{"location":"common/autoware_path_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/scenario_planning/lane_driving/behavior_planning/path autoware_planning_msgs::msg::Path Reference path /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/autoware_path_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description ~/distance tier4_debug_msgs::msg::Float64Stamped Publish a distance from the closest path point from the self-position to the end point of the path[m]"},{"location":"common/autoware_path_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"common/autoware_path_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"common/autoware_path_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_polar_grid/Readme/","title":"Polar Grid","text":""},{"location":"common/autoware_polar_grid/Readme/#polar-grid","title":"Polar Grid","text":""},{"location":"common/autoware_polar_grid/Readme/#purpose","title":"Purpose","text":"

This plugin displays a polar grid around the ego vehicle in RViz.

"},{"location":"common/autoware_polar_grid/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation Max Range float 200.0f max range for polar grid. [m] Wave Velocity float 100.0f wave ring velocity. [m/s] Delta Range float 10.0f wave ring distance for polar grid. [m]"},{"location":"common/autoware_polar_grid/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_pyplot/","title":"autoware_pyplot","text":""},{"location":"common/autoware_pyplot/#autoware_pyplot","title":"autoware_pyplot","text":"

This package provides a C++ interface for the well-known matplotlib library, using a pybind11 backend.

"},{"location":"common/autoware_pyplot/#usage","title":"usage","text":"

In your main function, set up the Python context and import matplotlib:

#include <autoware/pyplot/pyplot.hpp>\n#include <pybind11/embed.h>\n\n// in main...\npy::scoped_interpreter guard{};\nauto plt = autoware::pyplot::import();\n

Then you can use the major functionalities of matplotlib almost in the same way as in native Python code.

{\nplt.plot(Args(std::vector<int>({1, 3, 2, 4})), Kwargs(\"color\"_a = \"blue\", \"linewidth\"_a = 1.0));\nplt.xlabel(Args(\"x-title\"));\nplt.ylabel(Args(\"y-title\"));\nplt.title(Args(\"title\"));\nplt.xlim(Args(0, 5));\nplt.ylim(Args(0, 5));\nplt.grid(Args(true));\nplt.savefig(Args(\"test_single_plot.png\"));\n}\n\n{\nauto [fig, axes] = plt.subplots(1, 2);\nauto & ax1 = axes[0];\nauto & ax2 = axes[1];\n\nax1.set_aspect(Args(\"equal\"));\nax2.set_aspect(Args(\"equal\"));\n}\n
"},{"location":"common/autoware_signal_processing/","title":"Signal Processing Methods","text":""},{"location":"common/autoware_signal_processing/#signal-processing-methods","title":"Signal Processing Methods","text":"

In this package, we present signal-processing-related methods for Autoware applications. The following functionalities are available in the current version.

The low-pass filter currently supports only 1-D low-pass filtering.
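For reference, a 1-D low-pass filter of this kind is typically a first-order exponential smoother. The sketch below is illustrative only; the class name and API are hypothetical, not this package's actual interface.

// Hypothetical first-order low-pass filter: y_k = g * y_{k-1} + (1 - g) * u_k.\n// The class name and API are illustrative; see the package headers for the real interface.\nclass IllustrativeLowpassFilter1d\n{\npublic:\nexplicit IllustrativeLowpassFilter1d(const double gain) : gain_(gain) {}\ndouble filter(const double u)\n{\ny_ = initialized_ ? gain_ * y_ + (1.0 - gain_) * u : u;\ninitialized_ = true;\nreturn y_;\n}\n\nprivate:\ndouble gain_;\ndouble y_{0.0};\nbool initialized_{false};\n};\n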

"},{"location":"common/autoware_signal_processing/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/","title":"ButterworthFilter","text":""},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#butterworth-low-pass-filter-design-tool-class","title":"Butterworth Low-pass Filter Design Tool Class","text":"

This Butterworth low-pass filter design tool can be used to design a Butterworth filter in continuous and discrete time from the given specifications of the filter performance. The Butterworth filter is implemented as a class; a default constructor creates the object without any arguments.

The filter can be prepared in three ways. If the filter specifications are known, such as the pass-band and stop-band frequencies (Wp and Ws) together with the pass-band and stop-band ripple magnitudes (Ap and As), one can call the filter's Buttord method with these arguments to obtain the recommended filter order (N) and cut-off frequency (Wc_rad_sec [rad/s]).

Figure 1. Butterworth Low-pass filter specification from [1].

An example call is demonstrated below:

ButterworthFilter bf;\n\nconst double Wp = 2.0; // pass-band frequency [rad/sec]\nconst double Ws = 3.0; // stop-band frequency [rad/sec]\nconst double Ap = 6.0; // pass-band ripple mag or loss [dB]\nconst double As = 20.0; // stop-band ripple attenuation [dB]\n\n// Compute the filter coefficients from the specs\nbf.Buttord(Wp, Ws, Ap, As);\n\n// Get the computed order and cut-off frequency\nsOrderCutOff NWc = bf.getOrderCutOff();\n\ncout << \"The computed order is: \" << NWc.N << endl;\ncout << \"The computed cut-off frequency is: \" << NWc.Wc_rad_sec << endl;\n

The filter order and cut-off frequency can be obtained in a struct using the bf.getOrderCutOff() method. These specs can be printed on the screen by calling the PrintFilterSpecs() method. If the user would like to define the order and cut-off frequency manually, the setter methods for these variables can be called to set the filter order (N) and the desired cut-off frequency (Wc_rad_sec [rad/sec]) for the filter.

"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#obtaining-filter-transfer-functions","title":"Obtaining Filter Transfer Functions","text":"

The discrete transfer function of the filter requires the roots and gain of the continuous-time transfer function. Therefore, computeContinuousTimeTF() must be called first to create the continuous-time transfer function of the filter:

bf.computeContinuousTimeTF();\n

The computed continuous-time transfer function roots can be printed on the screen using the methods:

bf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n

The resulting screen output for a 5th order filter is demonstrated below.

 Roots of Continuous Time Filter Transfer Function Denominator are :\n-0.585518 + j 1.80204\n-1.53291 + j 1.11372\n-1.89478 + j 2.32043e-16\n-1.53291 + j -1.11372\n-0.585518 + j -1.80204\n\n\nThe Continuous-Time Transfer Function of the Filter is ;\n\n                                   24.422\n-------------------------------------------------------------------------------\n1.000 *s[5] + 6.132 *s[4] + 18.798 *s[3] + 35.619 *s[2] + 41.711 *s[1] + 24.422\n
"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#discrete-time-transfer-function-difference-equations","title":"Discrete Time Transfer Function (Difference Equations)","text":"

The digital-filter equivalent of the continuous-time definition is produced by using the bilinear transformation. When creating the discrete-time function of the ButterworthFilter object, its numerator (Bn) and denominator (An) coefficients are stored in vectors of filter order size N.

The discrete transfer function method is called using:

bf.computeDiscreteTimeTF();\nbf.PrintDiscreteTimeTF();\n

The results are printed on the screen as follows. The Discrete-Time Transfer Function of the Filter is:

0.191 *z[-5] + 0.956 *z[-4] + 1.913 *z[-3] + 1.913 *z[-2] + 0.956 *z[-1] + 0.191\n--------------------------------------------------------------------------------\n1.000 *z[-5] + 1.885 *z[-4] + 1.888 *z[-3] + 1.014 *z[-2] + 0.298 *z[-1] + 0.037\n

and the associated difference coefficients An and Bn can be obtained within a struct:

sDifferenceAnBn AnBn = bf.getAnBn();\n

The difference coefficients appear in the filtering equation in the following form:

An * Y_filtered = Bn * Y_unfiltered\n

To filter a signal given in vector form, this difference equation is applied to the samples; an illustrative sketch is given below.
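The sketch below applies the normalized difference equation directly (assuming An[0] = 1, as in the printout above); it is an illustration, not the package's filtering helper.

#include <vector>\n\n// Illustrative direct-form IIR filtering (not the package's helper), assuming An[0] == 1:\n//   y[k] = sum_i Bn[i] * u[k - i] - sum_{i >= 1} An[i] * y[k - i]\nstd::vector<double> filter_signal(\nconst std::vector<double> & Bn, const std::vector<double> & An, const std::vector<double> & u)\n{\nstd::vector<double> y(u.size(), 0.0);\nfor (size_t k = 0; k < u.size(); ++k) {\nfor (size_t i = 0; i < Bn.size() && i <= k; ++i) {\ny[k] += Bn[i] * u[k - i];\n}\nfor (size_t i = 1; i < An.size() && i <= k; ++i) {\ny[k] -= An[i] * y[k - i];\n}\n}\nreturn y;\n}\n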

"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#calling-filter-by-a-specified-cut-off-and-sampling-frequencies-in-hz","title":"Calling Filter by a specified cut-off and sampling frequencies [in Hz]","text":"

The Butterworth filter can also be created by defining the desired order (N), a cut-off frequency (fc in [Hz]), and a sampling frequency (fs in [Hz]). In this method, the cut-off frequency is pre-warped with respect to the sampling frequency [1, 2] to match the continuous and digital filter frequencies.
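For reference, the standard bilinear-transform pre-warping relation, with sampling period \\(T = 1/f_s\\) and desired cut-off \\(\\omega_c = 2 \\pi f_c\\), is

\\[ \\Omega_c = \\frac{2}{T} \\tan\\left(\\frac{\\omega_c T}{2}\\right) \\]

where \\(\\Omega_c\\) is the analog cut-off frequency used to design the continuous-time prototype.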

The filter is prepared by the following calling options:

 // 3rd METHOD defining a sampling frequency together with the cut-off fc, fs\n bf.setOrder(2);\n bf.setCutOffFrequency(10, 100);\n

At this step, we define a boolean variable that specifies whether or not to use the pre-warping option.

// Compute Continuous Time TF\nbool use_sampling_frequency = true;\nbf.computeContinuousTimeTF(use_sampling_frequency);\nbf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n\n// Compute Discrete Time TF\nbf.computeDiscreteTimeTF(use_sampling_frequency);\nbf.PrintDiscreteTimeTF();\n

References:

  1. Manolakis, Dimitris G., and Vinay K. Ingle. Applied digital signal processing: theory and practice. Cambridge University Press, 2011.

  2. https://en.wikibooks.org/wiki/Digital_Signal_Processing/Bilinear_Transform

"},{"location":"common/autoware_test_utils/","title":"Test Utils","text":""},{"location":"common/autoware_test_utils/#test-utils","title":"Test Utils","text":""},{"location":"common/autoware_test_utils/#background","title":"Background","text":"

Several Autoware components and modules have already adopted unit testing, so a common library to ease the process of writing unit tests is necessary.

"},{"location":"common/autoware_test_utils/#purpose","title":"Purpose","text":"

The objective of the test_utils is to develop a unit testing library for the Autoware components. This library will include

"},{"location":"common/autoware_test_utils/#available-maps","title":"Available Maps","text":"

The following maps are available here

"},{"location":"common/autoware_test_utils/#common","title":"Common","text":"

The common map contains multiple types of usable inputs, including shoulder lanes, intersections, and some regulatory elements. The common map is named lanelet2_map.osm in the folder.

"},{"location":"common/autoware_test_utils/#2-km-straight","title":"2 km Straight","text":"

The 2 km straight lanelet map consists of two lanes that run in the same direction. The map is named 2km_test.osm.

The following illustrates the design of the map.

"},{"location":"common/autoware_test_utils/#road_shoulders","title":"road_shoulders","text":"

The road_shoulders lanelet map consists of a variety of pick-up/drop-off site maps with road_shoulder tags, including:

You can easily launch planning_simulator by

ros2 launch autoware_test_utils psim_road_shoulder.launch.xml vehicle_model:=<> sensor_model:=<> use_sim_time:=true\n
"},{"location":"common/autoware_test_utils/#intersection","title":"intersection","text":"

The intersections lanelet map consists of a variety of intersections, including:

You can easily launch planning_simulator by

ros2 launch autoware_test_utils psim_intersection.launch.xml vehicle_model:=<> sensor_model:=<> use_sim_time:=true\n
"},{"location":"common/autoware_test_utils/#example-use-cases","title":"Example use cases","text":""},{"location":"common/autoware_test_utils/#autoware-planning-test-manager","title":"Autoware Planning Test Manager","text":"

The goal of the Autoware Planning Test Manager is to test planning module nodes. The PlanningInterfaceTestManager class (source code) creates wrapper functions based on the test_utils functions.

"},{"location":"common/autoware_test_utils/#generate-test-data-for-unit-testing","title":"Generate test data for unit testing","text":"

As presented in this PR description, the user can save a snapshot of the scene to a yaml file while running Planning Simulation on the test map.

ros2 launch autoware_test_utils psim_road_shoulder.launch.xml\nros2 launch autoware_test_utils psim_intersection.launch.xml\n

It uses the Autoware sample_vehicle_description and sample_sensor_kit by default, and autoware_test_utils/config/test_vehicle_info.param.yaml is exactly the same as that of sample_vehicle_description. The vehicle_model and sensor_model arguments can be specified to override the defaults.

ros2 service call /autoware_test_utils/topic_snapshot_saver std_srvs/srv/Empty \\{\\}\n

The list and field names of the topics to be saved are specified in config/sample_topic_snapshot.yaml.

# setting\nfields:\n- name: self_odometry # this is the field name for this topic\ntype: Odometry # the abbreviated type name of this topic\ntopic: /localization/kinematic_state # the name of this topic\n\n# output\nself_odometry:\n- header: ...\n...\n

Each field can be parsed into a ROS message type using the functions defined in autoware_test_utils/mock_data_parser.hpp.
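For illustration, parsing a saved field back into a message could look like the following; the parse function and its template arguments are assumed from the header's purpose and may differ from the actual API.

// Hypothetical usage; check autoware_test_utils/mock_data_parser.hpp for the actual API.\n#include <autoware_test_utils/mock_data_parser.hpp>\n\n#include <nav_msgs/msg/odometry.hpp>\n\n#include <yaml-cpp/yaml.h>\n\nconst auto yaml = YAML::LoadFile(\"topic_snapshot.yaml\");\nconst auto odom = autoware::test_utils::parse<nav_msgs::msg::Odometry>(yaml[\"self_odometry\"][0]);\n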

"},{"location":"common/autoware_testing/design/autoware_testing-design/","title":"autoware_testing","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#autoware_testing","title":"autoware_testing","text":"

This is the design document for the autoware_testing package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

The package aims to provide a unified way to add standard testing functionality to a package, currently supporting:

"},{"location":"common/autoware_testing/design/autoware_testing-design/#design","title":"Design","text":"

This package uses ros_testing (an extension of launch_testing) and provides some parametrized, reusable standard tests to run.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Parametrization is limited to the package name, executable name, parameters filename, and executable arguments. The test namespace is set to 'test'. The parameters file for the package is expected to be in the param directory inside the package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

To add a smoke test to your package tests, add a test dependency on autoware_testing to package.xml:

<test_depend>autoware_testing</test_depend>\n

and add the following two lines to CMakeLists.txt in the IF (BUILD_TESTING) section:

find_package(autoware_testing REQUIRED)\nadd_smoke_test(<package_name> <executable_name> [PARAM_FILENAME <param_filename>] [EXECUTABLE_ARGUMENTS <arguments>])\n

Where

<package_name> - [required] tested node package name.

<executable_name> - [required] tested node executable name.

<param_filename> - [optional] parameter filename. The default value is test.param.yaml. This is mostly needed when a package contains multiple smoke tests, each requiring a different parameter set.

<arguments> - [optional] arguments passed to the executable. By default, no arguments are passed.

This adds an <executable_name>_smoke_test test to the test suite.

Example test result:

build/<package_name>/test_results/<package_name>/<executable_name>_smoke_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
"},{"location":"common/autoware_testing/design/autoware_testing-design/#references-external-links","title":"References / External links","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#related-issues","title":"Related issues","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/","title":"Traffic Light Recognition Marker Publisher","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#traffic-light-recognition-marker-publisher","title":"Traffic Light Recognition Marker Publisher","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#purpose","title":"Purpose","text":"

This node publishes a marker array for visualizing traffic signal recognition results in RViz.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#input","title":"Input","text":"Name Type Description /map/vector_map autoware_map_msgs::msg::LaneletMapBin Vector map for getting traffic signal information /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The result of traffic signal recognition"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals_marker visualization_msgs::msg::MarkerArray Publish a marker array for visualization of traffic signal recognition results"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#parameters","title":"Parameters","text":"

None.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_traffic_light_utils/","title":"autoware_traffic_light_utils","text":""},{"location":"common/autoware_traffic_light_utils/#autoware_traffic_light_utils","title":"autoware_traffic_light_utils","text":""},{"location":"common/autoware_traffic_light_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions that are useful across the traffic light recognition module. This package may include functions for handling ROI types, converting between different data types and message types, as well as common functions related to them.

"},{"location":"common/autoware_trajectory/","title":"Autoware Trajectory","text":""},{"location":"common/autoware_trajectory/#autoware-trajectory","title":"Autoware Trajectory","text":"

This package provides classes to manage and manipulate Trajectory objects.

"},{"location":"common/autoware_trajectory/#example-usage","title":"Example Usage","text":"

This section describes example usage of Trajectory<autoware_planning_msgs::msg::PathPoint>.
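
Since the example content itself is not reproduced in this page, the following is only a rough sketch of the intended usage, assuming a Builder-style construction and arc-length-based interpolation; consult the package source for the actual API.

#include <autoware/trajectory/path_point.hpp>  // assumed header for the PathPoint specialization\n\n#include <autoware_planning_msgs/msg/path_point.hpp>\n\n#include <cstddef>\n#include <vector>\n\nint main()\n{\n  using autoware_planning_msgs::msg::PathPoint;\n\n  std::vector<PathPoint> points(4);\n  for (std::size_t i = 0; i < points.size(); ++i) {\n    points[i].pose.position.x = static_cast<double>(i);  // simple straight line\n  }\n\n  // Assumed Builder-style construction returning an optional-like handle.\n  const auto trajectory = autoware::trajectory::Trajectory<PathPoint>::Builder{}.build(points);\n  if (trajectory) {\n    const double length = trajectory->length();                  // total arc length [m]\n    const PathPoint middle = trajectory->compute(length / 2.0);  // interpolated point\n    (void)middle;\n  }\n  return 0;\n}\n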

"},{"location":"common/autoware_universe_utils/","title":"autoware_universe_utils","text":""},{"location":"common/autoware_universe_utils/#autoware_universe_utils","title":"autoware_universe_utils","text":""},{"location":"common/autoware_universe_utils/#purpose","title":"Purpose","text":"

This package contains many common functions used by other packages, so please refer to them as needed.

"},{"location":"common/autoware_universe_utils/#for-developers","title":"For developers","text":"

The autoware_universe_utils.hpp header file was removed because source files that directly or indirectly included it took a long time to preprocess.

"},{"location":"common/autoware_universe_utils/#autowareuniverse_utils","title":"autoware::universe_utils","text":""},{"location":"common/autoware_universe_utils/#systems","title":"systems","text":""},{"location":"common/autoware_universe_utils/#autowareuniverse_utilstimekeeper","title":"autoware::universe_utils::TimeKeeper","text":""},{"location":"common/autoware_universe_utils/#constructor","title":"Constructor","text":"
template <typename... Reporters>\nexplicit TimeKeeper(Reporters... reporters);\n
"},{"location":"common/autoware_universe_utils/#methods","title":"Methods","text":" "},{"location":"common/autoware_universe_utils/#note","title":"Note","text":" "},{"location":"common/autoware_universe_utils/#example","title":"Example","text":"
#include <autoware/universe_utils/system/time_keeper.hpp>  // for TimeKeeper, ScopedTimeTrack, ProcessingTimeDetail (header path assumed from the package layout)\n\n#include <rclcpp/rclcpp.hpp>\n\n#include <chrono>\n#include <iostream>\n#include <memory>\n#include <thread>\n\nclass ExampleNode : public rclcpp::Node\n{\npublic:\n  ExampleNode() : Node(\"time_keeper_example\")\n  {\n    publisher_ =\n      create_publisher<autoware::universe_utils::ProcessingTimeDetail>(\"processing_time\", 1);\n\n    time_keeper_ = std::make_shared<autoware::universe_utils::TimeKeeper>(publisher_, &std::cerr);\n    // You can also add a reporter later by add_reporter.\n    // time_keeper_->add_reporter(publisher_);\n    // time_keeper_->add_reporter(&std::cerr);\n\n    timer_ = create_wall_timer(std::chrono::seconds(1), std::bind(&ExampleNode::func_a, this));\n  }\n\nprivate:\n  std::shared_ptr<autoware::universe_utils::TimeKeeper> time_keeper_;\n  rclcpp::Publisher<autoware::universe_utils::ProcessingTimeDetail>::SharedPtr publisher_;\n  rclcpp::TimerBase::SharedPtr timer_;\n\n  void func_a()\n  {\n    // Start constructing ProcessingTimeTree (because func_a is the root function)\n    autoware::universe_utils::ScopedTimeTrack st(\"func_a\", *time_keeper_);\n    std::this_thread::sleep_for(std::chrono::milliseconds(1));\n    time_keeper_->comment(\"This is a comment for func_a\");\n    func_b();\n    // End constructing ProcessingTimeTree. After this, the tree will be reported (publishing\n    // message and outputting to std::cerr)\n  }\n\n  void func_b()\n  {\n    autoware::universe_utils::ScopedTimeTrack st(\"func_b\", *time_keeper_);\n    std::this_thread::sleep_for(std::chrono::milliseconds(2));\n    time_keeper_->comment(\"This is a comment for func_b\");\n    func_c();\n  }\n\n  void func_c()\n  {\n    autoware::universe_utils::ScopedTimeTrack st(\"func_c\", *time_keeper_);\n    std::this_thread::sleep_for(std::chrono::milliseconds(3));\n    time_keeper_->comment(\"This is a comment for func_c\");\n  }\n};\n\nint main(int argc, char ** argv)\n{\n  rclcpp::init(argc, argv);\n  auto node = std::make_shared<ExampleNode>();\n  rclcpp::spin(node);\n  rclcpp::shutdown();\n  return 0;\n}\n
"},{"location":"common/autoware_universe_utils/#autowareuniverse_utilsscopedtimetrack","title":"autoware::universe_utils::ScopedTimeTrack","text":""},{"location":"common/autoware_universe_utils/#description","title":"Description","text":"

Class for automatically tracking the processing time of a function within a scope.

"},{"location":"common/autoware_universe_utils/#constructor_1","title":"Constructor","text":"
ScopedTimeTrack(const std::string & func_name, TimeKeeper & time_keeper);\n
"},{"location":"common/autoware_universe_utils/#destructor","title":"Destructor","text":"
~ScopedTimeTrack();\n
"},{"location":"common/autoware_universe_utils/third_party_licenses/opencv-license/","title":"Opencv license","text":"
                             Apache License\n                       Version 2.0, January 2004\n                    http://www.apache.org/licenses/\n

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

  1. Definitions.

    \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

    \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

    \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

    \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.

    \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

    \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

    \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

    \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

    \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"

    \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

  2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

  3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

  4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

    (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

    (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

    (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

    (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

    You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

  5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

  6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

  7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

  8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

  9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

  To apply the Apache License to your work, attach the following\n  boilerplate notice, with the fields enclosed by brackets \"[]\"\n  replaced with your own identifying information. (Don't include\n  the brackets!)  The text should be enclosed in the appropriate\n  comment syntax for the file format. We also recommend that a\n  file or class name and description of purpose be included on the\n  same \"printed page\" as the copyright notice for easier\n  identification within third-party archives.\n

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0\n

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"common/autoware_vehicle_info_utils/Readme/","title":"Vehicle Info Util","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#vehicle-info-util","title":"Vehicle Info Util","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#purpose","title":"Purpose","text":"

This package provides a way to get vehicle info parameters.

"},{"location":"common/autoware_vehicle_info_utils/Readme/#description","title":"Description","text":"

Here you can check the vehicle dimensions in more detail. The current format supports only the Ackermann model. This file defines the model assumed in Autoware path planning, control, etc., and does not represent the exact physical model. If a model other than the Ackermann model is used, it is assumed that a vehicle interface will be designed to adapt the control output to that model.

"},{"location":"common/autoware_vehicle_info_utils/Readme/#versioning-policy","title":"Versioning Policy","text":"

We have implemented a versioning system for the vehicle_info.param.yaml file to ensure clarity and consistency in the file format across different versions of Autoware and its external applications. Please see the discussion for details.

"},{"location":"common/autoware_vehicle_info_utils/Readme/#how-to-operate","title":"How to Operate","text":"
/**:\n  ros__parameters:\n    # version: 0.1.0 # Uncomment and update this line for future format changes.\n    wheel_radius: 0.383\n    ...\n
"},{"location":"common/autoware_vehicle_info_utils/Readme/#why-versioning","title":"Why Versioning?","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#scripts","title":"Scripts","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#minimum-turning-radius","title":"Minimum turning radius","text":"
$ ros2 run autoware_vehicle_info_utils min_turning_radius_calculator.py\nyaml path is /home/autoware/pilot-auto/install/autoware_vehicle_info_utils/share/autoware_vehicle_info_utils/config/vehicle_info.param.yaml\nMinimum turning radius is 3.253042620027102 [m] for rear, 4.253220695862465 [m] for front.\n

You can specify the yaml file with the -y option as follows.

ros2 run autoware_vehicle_info_utils min_turning_radius_calculator.py -y <path-to-yaml>\n
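
For reference, the printed values are consistent with a simple bicycle-model relation (an inference from the example output above, not taken from the script itself): with wheel base \(L\) and maximum steering angle \(\delta_{max}\),

\[ R_{rear} = \frac{L}{\tan(\delta_{max})}, \qquad R_{front} = \sqrt{R_{rear}^2 + L^2} \]

where \(R_{rear}\) is the turning radius of the rear axle center and \(R_{front}\) that of the front axle center; substituting the rear value from the example output with \(L \approx 2.74\) m reproduces the front value.
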
"},{"location":"common/autoware_vehicle_info_utils/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_api_utils/","title":"tier4_api_utils","text":""},{"location":"common/tier4_api_utils/#tier4_api_utils","title":"tier4_api_utils","text":"

This is an old implementation of a class that logs when calling a service. Please use component_interface_utils instead.

"},{"location":"control/autoware_autonomous_emergency_braking/","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autoware_autonomous_emergency_braking/#autonomous-emergency-braking-aeb","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autoware_autonomous_emergency_braking/#purpose-role","title":"Purpose / Role","text":"

autonomous_emergency_braking is a module that prevents collisions with obstacles on the predicted path, which is created either by a control module or from estimated sensor values.

"},{"location":"control/autoware_autonomous_emergency_braking/#assumptions","title":"Assumptions","text":"

This module has the following assumptions.

"},{"location":"control/autoware_autonomous_emergency_braking/#imu-path-generation-steering-angle-vs-imus-angular-velocity","title":"IMU path generation: steering angle vs IMU's angular velocity","text":"

Currently, the IMU-based path is generated using the angular velocity obtained by the IMU itself. It has been suggested that the steering angle could be used instead of the angular velocity.

The pros and cons of both approaches are:

IMU angular velocity:

Steering angle:

For the moment, there are no plans to use the steering angle in the path creation process of the AEB module.

"},{"location":"control/autoware_autonomous_emergency_braking/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

AEB has the following steps before it outputs the emergency stop signal.

  1. Activate AEB if necessary.

  2. Generate a predicted path of the ego vehicle.

  3. Get target obstacles from the input point cloud and/or predicted object data.

  4. Estimate the closest obstacle speed.

  5. Collision check with target obstacles.

  6. Send emergency stop signals to /diagnostics.

We give more details on each step below.

"},{"location":"control/autoware_autonomous_emergency_braking/#1-activate-aeb-if-necessary","title":"1. Activate AEB if necessary","text":"

We do not activate the AEB module if it satisfies the following conditions.

"},{"location":"control/autoware_autonomous_emergency_braking/#2-generate-a-predicted-path-of-the-ego-vehicle","title":"2. Generate a predicted path of the ego vehicle","text":""},{"location":"control/autoware_autonomous_emergency_braking/#21-overview-of-imu-path-generation","title":"2.1 Overview of IMU Path Generation","text":"

AEB generates a predicted footprint path based on the current velocity and the current angular velocity obtained from attached sensors. Note that if use_imu_path is false, this step is skipped. The predicted path is generated as:

\[ x_{k+1} = x_k + v \cos(\theta_k) \, dt \\ y_{k+1} = y_k + v \sin(\theta_k) \, dt \\ \theta_{k+1} = \theta_k + \omega \, dt \]

where \\(v\\) and \\(\\omega\\) are current longitudinal velocity and angular velocity respectively. \\(dt\\) is time interval that users can define in advance with the imu_prediction_time_interval parameter. The IMU path is generated considering a time horizon, defined by the imu_prediction_time_horizon parameter.

"},{"location":"control/autoware_autonomous_emergency_braking/#22-constraints-and-countermeasures-in-imu-path-generation","title":"2.2 Constraints and Countermeasures in IMU Path Generation","text":"

Since the IMU path generation only uses the ego vehicle's current angular velocity and disregards the steering planned by the MPC, the shape of the IMU path tends to get distorted quite easily and can protrude out of the ego vehicle's current lane, possibly causing unwanted emergency stops. There are two countermeasures for this issue:

  1. Control using the max_generated_imu_path_length parameter

  2. Control based on lateral deviation

"},{"location":"control/autoware_autonomous_emergency_braking/#23-advantages-and-limitations-of-lateral-deviation-control","title":"2.3 Advantages and Limitations of Lateral Deviation Control","text":"

The advantage of setting a lateral deviation limit with the limit_imu_path_lat_dev parameter is that the imu_prediction_time_horizon and the max_generated_imu_path_length can be increased without worrying about the IMU predicted path deforming beyond a certain threshold. The downside is that the IMU path will be cut short when the ego has a high angular velocity; in such cases, the AEB module mostly relies on the MPC path to prevent or mitigate collisions.

If the ego vehicle is assumed to mostly travel along the centerline of its lanelets, it can be useful to set the lateral deviation threshold parameter imu_path_lat_dev_threshold equal to or smaller than half the average lanelet width. That way, the chance of the IMU predicted path leaving the current ego lanelet is smaller, and the imu_prediction_time_horizon can be increased to prevent frontal collisions when the ego is mostly traveling in a straight line.

The lateral deviation is measured using the ego vehicle's current position as a reference: it is the distance from the furthermost vertex of the predicted ego footprint to the predicted path. The following image illustrates how the lateral deviation of a given ego pose is measured.
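
One way to read this measurement is as the lateral offset of a footprint vertex expressed in the frame of the ego's current pose; the sketch below follows that interpretation (the names are illustrative, not the module's actual helpers).

#include <cmath>\n\nstruct Point2d\n{\n  double x;\n  double y;\n};\n\n// Lateral offset of a predicted footprint vertex, expressed in the frame of the\n// ego's current pose (x0, y0, yaw0): positive to the left, negative to the right.\n// Its absolute value would be compared against imu_path_lat_dev_threshold.\ndouble lateral_deviation(\n  const Point2d & vertex, const double x0, const double y0, const double yaw0)\n{\n  const double dx = vertex.x - x0;\n  const double dy = vertex.y - y0;\n  return -std::sin(yaw0) * dx + std::cos(yaw0) * dy;\n}\n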

"},{"location":"control/autoware_autonomous_emergency_braking/#24-imu-path-generation-algorithm","title":"2.4 IMU Path Generation Algorithm","text":""},{"location":"control/autoware_autonomous_emergency_braking/#241-selection-of-lateral-deviation-check-points","title":"2.4.1 Selection of Lateral Deviation Check Points","text":"

Select vehicle vertices for lateral deviation checks based on the following conditions:

"},{"location":"control/autoware_autonomous_emergency_braking/#242-path-generation-process","title":"2.4.2 Path Generation Process","text":"

Execute the following steps at each time step:

  1. State Update

  2. Vehicle Footprint Generation

  3. Lateral Deviation Calculation

  4. Evaluation of Termination Conditions

"},{"location":"control/autoware_autonomous_emergency_braking/#243-termination-conditions","title":"2.4.3 Termination Conditions","text":"

Path generation terminates when any of the following conditions are met:

  1. Basic Termination Conditions (both must be satisfied)

  2. Path Length Termination Condition

  3. Lateral Deviation Termination Condition (when limit_imu_path_lat_dev = true)

"},{"location":"control/autoware_autonomous_emergency_braking/#mpc-path-generation","title":"MPC path generation","text":"

If the use_predicted_trajectory parameter is set to true, the AEB module will directly use the predicted path from the MPC as a base to generate a footprint path. It will copy the ego poses generated by the MPC up to a given time horizon. The mpc_prediction_time_horizon parameter dictates how far ahead in the future the MPC path will predict the ego vehicle's movement. Both the IMU footprint path and the MPC footprint path can be used at the same time.

"},{"location":"control/autoware_autonomous_emergency_braking/#3-get-target-obstacles","title":"3. Get target obstacles","text":"

After generating the ego footprint path(s), the target obstacles are identified. There are two methods to find target obstacles: using the input point cloud, or using the predicted object information coming from perception modules.

"},{"location":"control/autoware_autonomous_emergency_braking/#pointcloud-obstacle-filtering","title":"Pointcloud obstacle filtering","text":"

The AEB module can filter the input pointcloud to find target obstacles with which the ego vehicle might collide. This method can be enabled by setting the use_pointcloud_data parameter to true. The pointcloud obstacle filtering has three major steps: rough filtering, noise filtering with clustering, and rigorous filtering.

"},{"location":"control/autoware_autonomous_emergency_braking/#rough-filtering","title":"Rough filtering","text":"

In the rough filtering step, target obstacles are selected with a simple filter: a search area is created up to a certain distance (by default, half of the ego vehicle width plus the path_footprint_extra_margin and expand_width parameters) away from the predicted path of the ego vehicle, and points of the point cloud that are not within it are ignored. The rough filtering step is illustrated below.

"},{"location":"control/autoware_autonomous_emergency_braking/#noise-filtering-with-clustering-and-convex-hulls","title":"Noise filtering with clustering and convex hulls","text":"

To prevent the AEB from considering noisy points, Euclidean clustering is performed on the filtered point cloud. The points in the point cloud that are not close enough to other points to form a cluster are discarded. Furthermore, each point in a cluster is compared against the cluster_minimum_height parameter: if no point inside a cluster has a height/z value greater than cluster_minimum_height, the whole cluster of points is discarded. The parameters cluster_tolerance, minimum_cluster_size and maximum_cluster_size can be used to tune the clustering and the size of objects to be ignored; for more information about the clustering method used by the AEB module, please check the official documentation on Euclidean clustering of the PCL library: https://pcl.readthedocs.io/projects/tutorials/en/master/cluster_extraction.html.

Furthermore, a 2D convex hull is created around each detected cluster, the vertices of each hull represent the most extreme/outside points of the cluster. These vertices are then checked in the next step.
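
A condensed sketch of this stage using the PCL interfaces referenced above follows (parameter names map to those in the parameters table at the end of this page; the real module performs additional bookkeeping).

#include <pcl/point_cloud.h>\n#include <pcl/point_types.h>\n#include <pcl/search/kdtree.h>\n#include <pcl/segmentation/extract_clusters.h>\n#include <pcl/surface/convex_hull.h>\n\n#include <algorithm>\n#include <limits>\n#include <vector>\n\n// Cluster the rough-filtered cloud, drop low/noisy clusters, and return 2D hulls.\nstd::vector<pcl::PointCloud<pcl::PointXYZ>> cluster_and_hull(\n  const pcl::PointCloud<pcl::PointXYZ>::Ptr & cloud, const double cluster_tolerance,\n  const int minimum_cluster_size, const int maximum_cluster_size,\n  const double cluster_minimum_height)\n{\n  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);\n  tree->setInputCloud(cloud);\n\n  std::vector<pcl::PointIndices> cluster_indices;\n  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;\n  ec.setClusterTolerance(cluster_tolerance);   // cluster_tolerance parameter\n  ec.setMinClusterSize(minimum_cluster_size);  // minimum_cluster_size parameter\n  ec.setMaxClusterSize(maximum_cluster_size);  // maximum_cluster_size parameter\n  ec.setSearchMethod(tree);\n  ec.setInputCloud(cloud);\n  ec.extract(cluster_indices);\n\n  std::vector<pcl::PointCloud<pcl::PointXYZ>> hulls;\n  for (const auto & indices : cluster_indices) {\n    pcl::PointCloud<pcl::PointXYZ>::Ptr cluster(new pcl::PointCloud<pcl::PointXYZ>);\n    float max_z = std::numeric_limits<float>::lowest();\n    for (const int idx : indices.indices) {\n      cluster->push_back((*cloud)[idx]);\n      max_z = std::max(max_z, (*cloud)[idx].z);\n    }\n    if (max_z < cluster_minimum_height) {\n      continue;  // no point is tall enough: treat the whole cluster as noise\n    }\n    pcl::ConvexHull<pcl::PointXYZ> hull;\n    hull.setDimension(2);  // 2D hull around the (projected) cluster\n    hull.setInputCloud(cluster);\n    pcl::PointCloud<pcl::PointXYZ> hull_points;\n    hull.reconstruct(hull_points);\n    hulls.push_back(hull_points);\n  }\n  return hulls;\n}\n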

"},{"location":"control/autoware_autonomous_emergency_braking/#rigorous-filtering","title":"Rigorous filtering","text":"

After noise filtering, the module performs a geometric collision check to determine whether the filtered obstacles/hull vertices actually have a possibility of colliding with the ego vehicle. In this check, the ego vehicle is represented as a rectangle, and the point cloud obstacles are represented as points. Only the vertices with a possibility of collision are labeled as target obstacles.

"},{"location":"control/autoware_autonomous_emergency_braking/#obstacle-labeling","title":"Obstacle labeling","text":"

After rigorous filtering, the remaining obstacles are labeled. An obstacle is given a \"target\" label for collision checking only if it falls within the ego vehicle's defined footprint (made using the ego vehicle's width and the expand_width parameter). For an emergency stop to occur, at least one obstacle needs to be labeled as a target.

"},{"location":"control/autoware_autonomous_emergency_braking/#using-predicted-objects-to-get-target-obstacles","title":"Using predicted objects to get target obstacles","text":"

If the use_predicted_object_data parameter is set to true, the AEB can use predicted object data coming from the perception modules to get target obstacle points. This is done by obtaining the 2D intersection points between the ego's predicted footprint path (made using the ego vehicle's width and the expand_width parameter) and each predicted object's enveloping polygon or bounding box. If there is no intersection, all points are discarded.

"},{"location":"control/autoware_autonomous_emergency_braking/#finding-the-closest-target-obstacle","title":"Finding the closest target obstacle","text":"

After identifying all possible obstacles using pointcloud data and/or predicted object data, the AEB module selects the closest point to the ego vehicle as the candidate for collision checking. The \"closest object\" is defined as an obstacle within the ego vehicle's footprint, determined by its width and the expand_width parameter, that is closest to the ego vehicle along the longitudinal axis, using the IMU or MPC path as a reference. Target obstacles are prioritized over those outside the ego path, even if the latter are longitudinally closer. This prioritization ensures that the collision check focuses on objects that pose the highest risk based on the vehicle's trajectory.

If no target obstacles are found, the AEB module considers other nearby obstacles outside the path. In such cases, it skips the collision check but records the position of the closest obstacle to calculate its speed (Step #4). Note that obstacles obtained with predicted object data are all target obstacles, since they are within the ego footprint path, and it is not necessary to calculate their speed (it is already calculated by the perception module). Such obstacles are excluded from Step #4.

"},{"location":"control/autoware_autonomous_emergency_braking/#4-obstacle-velocity-estimation","title":"4. Obstacle velocity estimation","text":"

To begin calculating the target point's velocity, the point must enter the speed calculation area, which is defined by the speed_calculation_expansion_margin parameter plus the ego vehicle's width and the expand_width parameter. Depending on the operational environment, this margin can reduce unnecessary autonomous emergency braking caused by velocity miscalculations during the initial calculation steps.

Once the position of the closest obstacle/point is determined, the AEB module uses the history of previously detected objects to estimate the closest object's relative speed using the following equations:

\[ d_{t} = t_{1} - t_{0} \] \[ d_{x} = \mathrm{norm}(o_{x} - prev_{x}) \] \[ v_{norm} = d_{x} / d_{t} \]

Where \\(t_{1}\\) and \\(t_{0}\\) are the timestamps of the point clouds used to detect the current closest object and the closest object of the previous point cloud frame, and \\(o_{x}\\) and \\(prev_{x}\\) are the positions of those objects, respectively.

Note that, when the closest obstacle/point comes from using predicted object data, \\(v_{norm}\\) is calculated by directly computing the norm of the predicted object's velocity in the x and y axes.

The velocity vector is then compared against the ego's predicted path to get the longitudinal velocity \\(v_{obj}\\):

\[ v_{obj} = v_{norm} \cos(yaw_{diff}) + v_{ego} \]

where \\(yaw_{diff}\\) is the difference in yaw between the ego path and the displacement vector and \\(v_{ego}\\) is the ego's current speed, which accounts for the movement of points caused by the ego moving and not the object. All these equations are performed disregarding the z axis (in 2D).

Note that the object velocity is calculated against the ego's current movement direction. If the object moves in the opposite direction to the ego's movement, the object velocity will be negative, which will reduce the RSS distance in the next step.

The resulting estimated object speed is added to a queue of speeds with timestamps. The AEB then checks for the expiration of past speed estimations and eliminates expired speed measurements from the queue; an estimation expires when the time elapsed since it was first added to the queue is larger than the previous_obstacle_keep_time parameter. Finally, the median speed of the queue is calculated. The median speed will be used to calculate the RSS distance used for collision checking.
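
Putting these estimation steps together, a simplified sketch might look like the following (names are illustrative; the actual module also associates obstacles between frames and handles the predicted-object case separately).

#include <algorithm>\n#include <cmath>\n#include <deque>\n#include <vector>\n\nstruct StampedSpeed\n{\n  double stamp;\n  double speed;\n};\n\n// Keep a time-stamped queue of longitudinal speed estimations for the closest\n// obstacle and return their median, as described above.\nclass ObstacleSpeedEstimator\n{\npublic:\n  // o / prev: closest obstacle positions at t1 / t0; yaw_diff: angle between\n  // the ego path direction and the obstacle displacement vector.\n  double update(\n    const double t0, const double t1, const double prev_x, const double prev_y,\n    const double o_x, const double o_y, const double yaw_diff, const double v_ego,\n    const double previous_obstacle_keep_time)\n  {\n    const double dt = t1 - t0;\n    const double d_x = std::hypot(o_x - prev_x, o_y - prev_y);\n    const double v_norm = d_x / dt;\n    // Longitudinal component along the ego path, compensated for ego motion.\n    const double v_obj = v_norm * std::cos(yaw_diff) + v_ego;\n    queue_.push_back({t1, v_obj});\n\n    // Drop estimations older than previous_obstacle_keep_time.\n    while (!queue_.empty() && t1 - queue_.front().stamp > previous_obstacle_keep_time) {\n      queue_.pop_front();\n    }\n\n    // The median of the remaining speeds feeds the RSS distance calculation.\n    std::vector<double> speeds;\n    for (const auto & s : queue_) {\n      speeds.push_back(s.speed);\n    }\n    std::nth_element(speeds.begin(), speeds.begin() + speeds.size() / 2, speeds.end());\n    return speeds[speeds.size() / 2];\n  }\n\nprivate:\n  std::deque<StampedSpeed> queue_;\n};\n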

"},{"location":"control/autoware_autonomous_emergency_braking/#5-collision-check-with-target-obstacles-using-rss-distance","title":"5. Collision check with target obstacles using RSS distance","text":"

In the fifth step, the AEB module checks for collision with the closest target obstacle using RSS distance. Only the closest target object is evaluated because RSS distance is used to determine collision risk. If the nearest target point is deemed safe, all other potential obstacles within the path are also assumed to be safe.

RSS distance is formulated as:

\[ d = v_{ego} t_{response} + \frac{v_{ego}^2}{2 a_{min}} - \mathrm{sign}(v_{obj}) \frac{v_{obj}^2}{2 a_{obj_{min}}} + offset \]

where \\(v_{ego}\\) and \\(v_{obj}\\) is current ego and obstacle velocity, \\(a_{min}\\) and \\(a_{obj_{min}}\\) is ego and object minimum acceleration (maximum deceleration), \\(t_{response}\\) is response time of the ego vehicle to start deceleration. Therefore the distance from the ego vehicle to the obstacle is smaller than this RSS distance \\(d\\), the ego vehicle send emergency stop signals.

Only obstacles classified as \"targets\" (as defined in Step #3) are considered for RSS distance calculations. Among these \"target\" obstacles, the one closest to the ego vehicle is used for the calculation. If no \"target\" obstacles are present\u2014meaning no obstacles fall within the ego vehicle's predicted path (determined by its width and an expanded margin)\u2014this step is skipped. Instead, the position of the closest obstacle is recorded for future speed calculations (Step #4). In this scenario, no emergency stop diagnostic message is generated. The process is illustrated in the accompanying diagram.

"},{"location":"control/autoware_autonomous_emergency_braking/#6-send-emergency-stop-signals-to-diagnostics","title":"6. Send emergency stop signals to /diagnostics","text":"

If the AEB detects a collision with point cloud obstacles in the previous step, it sends an emergency stop signal to /diagnostics in this step. Note that in order to enable an emergency stop, the emergency has to be sent at the ERROR level. Moreover, the AEB user should modify the settings file to keep the emergency level; otherwise, Autoware does not hold the emergency state.

"},{"location":"control/autoware_autonomous_emergency_braking/#use-cases","title":"Use cases","text":""},{"location":"control/autoware_autonomous_emergency_braking/#front-vehicle-suddenly-brakes","title":"Front vehicle suddenly brakes","text":"

The AEB can activate when a vehicle in front suddenly brakes and a collision is detected by the AEB module. Provided the distance between the ego vehicle and the front vehicle is large enough and the ego's emergency acceleration value is high enough, it is possible to avoid or soften collisions with vehicles in front that suddenly brake. NOTE: the acceleration used by the AEB to calculate rss_distance is NOT necessarily the acceleration used by the ego while performing an emergency brake. The acceleration used by the real vehicle can be tuned by changing the mrm_emergency_stop jerk and acceleration values.

"},{"location":"control/autoware_autonomous_emergency_braking/#stop-for-objects-that-appear-suddenly","title":"Stop for objects that appear suddenly","text":"

When an object appears suddenly, the AEB can act as a fail-safe to stop the ego vehicle when other modules fail to detect the object in time. If sudden object cut-ins are expected, it might be useful for the AEB module to detect collisions with objects BEFORE they enter the real ego vehicle path, by increasing the expand_width parameter.

"},{"location":"control/autoware_autonomous_emergency_braking/#preventing-collisions-with-rear-objects","title":"Preventing Collisions with rear objects","text":"

The AEB module can also prevent collisions when the ego vehicle is moving backwards.

"},{"location":"control/autoware_autonomous_emergency_braking/#preventing-collisions-in-case-of-wrong-odometry-imu-path-only","title":"Preventing collisions in case of wrong Odometry (IMU path only)","text":"

When vehicle odometry information is faulty, it is possible that the MPC fails to predict a correct path for the ego vehicle. If the MPC predicted path is wrong, collision avoidance will not work as intended in the planning modules. However, the AEB's IMU path does not depend on the MPC and may be able to predict a collision when the other modules cannot. As an example, the accompanying figure shows a hypothetical case in which the MPC path is wrong and only the AEB's IMU path detects a collision.

"},{"location":"control/autoware_autonomous_emergency_braking/#parameters","title":"Parameters","text":"Name Unit Type Description Default value publish_debug_markers [-] bool flag to publish debug markers true publish_debug_pointcloud [-] bool flag to publish the point cloud used for debugging false use_predicted_trajectory [-] bool flag to use the predicted path from the control module true use_imu_path [-] bool flag to use the predicted path generated by sensor data true use_object_velocity_calculation [-] bool flag to use the object velocity calculation. If set to false, object velocity is set to 0 [m/s] true check_autoware_state [-] bool flag to enable or disable autoware state check. If set to false, the AEB module will run even when the ego vehicle is not in AUTONOMOUS state. true detection_range_min_height [m] double minimum hight of detection range used for avoiding the ghost brake by false positive point clouds 0.0 detection_range_max_height_margin [m] double margin for maximum hight of detection range used for avoiding the ghost brake by false positive point clouds. detection_range_max_height = vehicle_height + detection_range_max_height_margin 0.0 voxel_grid_x [m] double down sampling parameters of x-axis for voxel grid filter 0.05 voxel_grid_y [m] double down sampling parameters of y-axis for voxel grid filter 0.05 voxel_grid_z [m] double down sampling parameters of z-axis for voxel grid filter 100000.0 cluster tolerance [m] double maximum allowable distance between any two points to be considered part of the same cluster 0.15 cluster_minimum_height [m] double at least one point in a cluster must be higher than this value for the cluster to be included in the set of possible collision targets 0.1 minimum_cluster_size [-] int minimum required amount of points contained by a cluster for it to be considered as a possible target obstacle 10 maximum_cluster_size [-] int maximum amount of points contained by a cluster for it to be considered as a possible target obstacle 10000 min_generated_imu_path_length [m] double minimum distance for a predicted path generated by sensors 0.5 max_generated_imu_path_length [m] double maximum distance for a predicted path generated by sensors 10.0 expand_width [m] double expansion width of the ego vehicle for collision checking, path cropping and speed calculation 0.1 longitudinal_offset_margin [m] double longitudinal offset distance for collision check 2.0 t_response [s] double response time for the ego to detect the front vehicle starting deceleration 1.0 a_ego_min [m/ss] double maximum deceleration value of the ego vehicle -3.0 a_obj_min [m/ss] double maximum deceleration value of objects -3.0 imu_prediction_time_horizon [s] double time horizon of the predicted path generated by sensors 1.5 imu_prediction_time_interval [s] double time interval of the predicted path generated by sensors 0.1 mpc_prediction_time_horizon [s] double time horizon of the predicted path generated by mpc 1.5 mpc_prediction_time_interval [s] double time interval of the predicted path generated by mpc 0.1 aeb_hz [-] double frequency at which AEB operates per second 10 speed_calculation_expansion_margin [m] double expansion width of the ego vehicle footprint used when calculating the closest object's speed 0.7 path_footprint_extra_margin [m] double this parameters expands the ego footprint used to crop the AEB input pointcloud 1.0"},{"location":"control/autoware_autonomous_emergency_braking/#limitations","title":"Limitations","text":" 
"},{"location":"control/autoware_collision_detector/","title":"Collision Detector","text":""},{"location":"control/autoware_collision_detector/#collision-detector","title":"Collision Detector","text":""},{"location":"control/autoware_collision_detector/#purpose","title":"Purpose","text":"

This module subscribes to the required data (ego pose, obstacles, etc.) and publishes diagnostics if an object is within a specific distance.

"},{"location":"control/autoware_collision_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_collision_detector/#flow-chart","title":"Flow chart","text":""},{"location":"control/autoware_collision_detector/#algorithms","title":"Algorithms","text":""},{"location":"control/autoware_collision_detector/#check-data","title":"Check data","text":"

Check that collision_detector receives the no-ground pointcloud and dynamic objects data.

"},{"location":"control/autoware_collision_detector/#object-filtering","title":"Object Filtering","text":""},{"location":"control/autoware_collision_detector/#recognition-assumptions","title":"Recognition Assumptions","text":"
  1. If the classification changes but it's considered the same object, the uuid does not change.
  2. It's possible for the same uuid to be recognized after being lost for a few frames.
  3. Once an object is determined to be excluded, it continues to be excluded for a certain period of time.
"},{"location":"control/autoware_collision_detector/#filtering-process","title":"Filtering Process","text":"
  1. Initial Recognition and Exclusion:

  2. New Object Determination:

  3. Exclusion Mechanism:

"},{"location":"control/autoware_collision_detector/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest object. This function computes the minimum distance between the polygon of the ego vehicle and all points in the pointcloud, as well as the polygons of dynamic objects.
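
A sketch of this computation using boost::geometry, which Autoware commonly uses for 2D geometry, is shown below (types and function names are illustrative, not the module's actual code).

#include <boost/geometry.hpp>\n#include <boost/geometry/geometries/point_xy.hpp>\n#include <boost/geometry/geometries/polygon.hpp>\n\n#include <algorithm>\n#include <limits>\n#include <vector>\n\nnamespace bg = boost::geometry;\nusing Point2d = bg::model::d2::point_xy<double>;\nusing Polygon2d = bg::model::polygon<Point2d>;\n\n// Minimum distance from the ego footprint polygon to any obstacle point or\n// dynamic object polygon; the result is compared against collision_distance.\ndouble distance_to_nearest_object(\n  const Polygon2d & ego_polygon, const std::vector<Point2d> & obstacle_points,\n  const std::vector<Polygon2d> & object_polygons)\n{\n  double min_distance = std::numeric_limits<double>::max();\n  for (const auto & point : obstacle_points) {\n    min_distance = std::min(min_distance, bg::distance(ego_polygon, point));\n  }\n  for (const auto & polygon : object_polygons) {\n    min_distance = std::min(min_distance, bg::distance(ego_polygon, polygon));\n  }\n  return min_distance;\n}\n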

"},{"location":"control/autoware_collision_detector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_collision_detector/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_perception_msgs::msg::PredictedObjects Dynamic objects /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"control/autoware_collision_detector/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics"},{"location":"control/autoware_collision_detector/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true collision_distance double Distance threshold at which an object is considered a collision. [m] 0.15 nearby_filter_radius double Distance range for filtering objects. Objects within this radius are considered. [m] 5.0 keep_ignoring_time double Time to keep filtering objects that first appeared in the vicinity [sec] 10.0 nearby_object_type_filters object of bool values Specifies which object types to filter. Only objects with true value will be filtered. {unknown: true, others: false}"},{"location":"control/autoware_collision_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"control/autoware_control_validator/","title":"Control Validator","text":""},{"location":"control/autoware_control_validator/#control-validator","title":"Control Validator","text":"

The control_validator is a module that checks the validity of the output of the control component. The status of the validation can be viewed in the /diagnostics topic.

"},{"location":"control/autoware_control_validator/#supported-features","title":"Supported features","text":"

The following features are supported for validation, and their thresholds can be set by parameters. The features listed below do not always correspond to the latest implementation.

Description Arguments Diagnostic equation Inverse velocity: Measured velocity has a different sign from the target velocity. measured velocity \\(v\\), target velocity \\(\\hat{v}\\), and velocity parameter \\(c\\) \\(v \\hat{v} < 0, \\quad \\lvert v \\rvert > c\\) Overspeed: Measured speed exceeds target speed significantly. measured velocity \\(v\\), target velocity \\(\\hat{v}\\), ratio parameter \\(r\\), and offset parameter \\(c\\) \\(\\lvert v \\rvert > (1 + r) \\lvert \\hat{v} \\rvert + c\\)
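
The two diagnostic equations translate almost directly into code; a minimal sketch follows (parameter names mirror the thresholds table below).

#include <cmath>\n\n// Inverse velocity: the measured velocity opposes the target with a\n// non-negligible magnitude (v * v_hat < 0 and |v| > c).\nbool is_inverse_velocity(const double v, const double v_target, const double rolling_back_velocity)\n{\n  return v * v_target < 0.0 && std::abs(v) > rolling_back_velocity;\n}\n\n// Overspeed: |v| > (1 + r) * |v_hat| + c.\nbool is_over_velocity(\n  const double v, const double v_target, const double over_velocity_ratio,\n  const double over_velocity_offset)\n{\n  return std::abs(v) > (1.0 + over_velocity_ratio) * std::abs(v_target) + over_velocity_offset;\n}\n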

"},{"location":"control/autoware_control_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"control/autoware_control_validator/#inputs","title":"Inputs","text":"

The control_validator takes in the following inputs:

Name Type Description ~/input/kinematics nav_msgs/Odometry ego pose and twist ~/input/reference_trajectory autoware_planning_msgs/Trajectory reference trajectory output from the planning module, to be followed ~/input/predicted_trajectory autoware_planning_msgs/Trajectory predicted trajectory output from the control module"},{"location":"control/autoware_control_validator/#outputs","title":"Outputs","text":"

It outputs the following:

Name Type Description ~/output/validation_status control_validator/ControlValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"control/autoware_control_validator/#parameters","title":"Parameters","text":"

The following parameters can be set for the control_validator:

"},{"location":"control/autoware_control_validator/#system-parameters","title":"System parameters","text":"Name Type Description Default value publish_diag bool if true, diagnostics msg is published. true diag_error_count_threshold int the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true display_on_terminal bool show error msg on terminal true"},{"location":"control/autoware_control_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"control/autoware_control_validator/#thresholds","title":"Thresholds","text":"

The input trajectory is detected as invalid if its metrics exceed the following thresholds.

Name Type Description Default value thresholds.max_distance_deviation double invalid threshold of the max distance deviation between the predicted path and the reference trajectory [m] 1.0 thresholds.rolling_back_velocity double threshold velocity to validate the vehicle velocity [m/s] 0.5 thresholds.over_velocity_offset double threshold velocity offset to validate the vehicle velocity [m/s] 2.0 thresholds.over_velocity_ratio double threshold ratio to validate the vehicle velocity [*] 0.2"},{"location":"control/autoware_external_cmd_selector/","title":"autoware_external_cmd_selector","text":""},{"location":"control/autoware_external_cmd_selector/#autoware_external_cmd_selector","title":"autoware_external_cmd_selector","text":""},{"location":"control/autoware_external_cmd_selector/#purpose","title":"Purpose","text":"

autoware_external_cmd_selector is a package that publishes external_control_cmd, gear_cmd, hazard_lights_cmd, heartbeat and turn_indicators_cmd, according to the current mode, which is either remote or local.

The current mode is set via a service: remote means the vehicle is remotely operated, and local means the values calculated by Autoware are used.

"},{"location":"control/autoware_external_cmd_selector/#input-output","title":"Input / Output","text":""},{"location":"control/autoware_external_cmd_selector/#input-topics","title":"Input topics","text":"Name Type Description /api/external/set/command/local/control TBD Local. Calculated control value. /api/external/set/command/local/heartbeat TBD Local. Heartbeat. /api/external/set/command/local/shift TBD Local. Gear shift like drive, rear and etc. /api/external/set/command/local/turn_signal TBD Local. Turn signal like left turn, right turn and etc. /api/external/set/command/remote/control TBD Remote. Calculated control value. /api/external/set/command/remote/heartbeat TBD Remote. Heartbeat. /api/external/set/command/remote/shift TBD Remote. Gear shift like drive, rear and etc. /api/external/set/command/remote/turn_signal TBD Remote. Turn signal like left turn, right turn and etc."},{"location":"control/autoware_external_cmd_selector/#output-topics","title":"Output topics","text":"Name Type Description /control/external_cmd_selector/current_selector_mode TBD Current selected mode, remote or local. /diagnostics diagnostic_msgs::msg::DiagnosticArray Check if node is active or not. /external/selected/external_control_cmd TBD Pass through control command with current mode. /external/selected/gear_cmd autoware_vehicle_msgs::msg::GearCommand Pass through gear command with current mode. /external/selected/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand Pass through hazard light with current mode. /external/selected/heartbeat TBD Pass through heartbeat with current mode. /external/selected/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand Pass through turn indicator with current mode."},{"location":"control/autoware_external_cmd_selector/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float The rate in Hz at which the external command selector node updates. 10.0 N/A initial_selector_mode string The initial mode for command selection, either 'local' or 'remote'. local ['local', 'remote']"},{"location":"control/autoware_joy_controller/","title":"autoware_joy_controller","text":""},{"location":"control/autoware_joy_controller/#autoware_joy_controller","title":"autoware_joy_controller","text":""},{"location":"control/autoware_joy_controller/#role","title":"Role","text":"

autoware_joy_controller is a package that converts a joy msg to Autoware commands (e.g. steering wheel, shift, turn signal, engage) for a vehicle.

"},{"location":"control/autoware_joy_controller/#usage","title":"Usage","text":""},{"location":"control/autoware_joy_controller/#ros-2-launch","title":"ROS 2 launch","text":"
# With default config (ds4)\nros2 launch autoware_joy_controller joy_controller.launch.xml\n\n# Default config but select from the existing parameter files\nros2 launch autoware_joy_controller joy_controller_param_selection.launch.xml joy_type:=ds4 # or g29, p65, xbox\n\n# Override the param file\nros2 launch autoware_joy_controller joy_controller.launch.xml config_file:=/path/to/your/param.yaml\n
"},{"location":"control/autoware_joy_controller/#input-output","title":"Input / Output","text":""},{"location":"control/autoware_joy_controller/#input-topics","title":"Input topics","text":"Name Type Description ~/input/joy sensor_msgs::msg::Joy joy controller command ~/input/odometry nav_msgs::msg::Odometry ego vehicle odometry to get twist"},{"location":"control/autoware_joy_controller/#output-topics","title":"Output topics","text":"Name Type Description ~/output/control_command autoware_control_msgs::msg::Control lateral and longitudinal control command ~/output/external_control_command tier4_external_api_msgs::msg::ControlCommandStamped lateral and longitudinal control command ~/output/shift tier4_external_api_msgs::msg::GearShiftStamped gear command ~/output/turn_signal tier4_external_api_msgs::msg::TurnSignalStamped turn signal command ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (Auto or External) ~/output/heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/output/vehicle_engage autoware_vehicle_msgs::msg::Engage vehicle engage"},{"location":"control/autoware_joy_controller/#parameters","title":"Parameters","text":"Parameter Type Description joy_type string joy controller type (default: DS4) update_rate double update rate to publish control commands accel_ratio double ratio to calculate acceleration (commanded acceleration is ratio * operation) brake_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation) steer_ratio double ratio to calculate deceleration (commanded steer is ratio * operation) steering_angle_velocity double steering angle velocity for operation accel_sensitivity double sensitivity to calculate acceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) brake_sensitivity double sensitivity to calculate deceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) raw_control bool skip input odometry if true velocity_gain double ratio to calculate velocity by acceleration max_forward_velocity double absolute max velocity to go forward max_backward_velocity double absolute max velocity to go backward backward_accel_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation)"},{"location":"control/autoware_joy_controller/#p65-joystick-key-map","title":"P65 Joystick Key Map","text":"Action Button Acceleration R2 Brake L2 Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal A Gate Mode B Emergency Stop Select Clear Emergency Stop Start Autoware Engage X Autoware Disengage Y Vehicle Engage PS Vehicle Disengage Right Trigger"},{"location":"control/autoware_joy_controller/#ds4-joystick-key-map","title":"DS4 Joystick Key Map","text":"Action Button Acceleration R2, \u00d7, or Right Stick Up Brake L2, \u25a1, or Right Stick Down Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal SHARE Gate Mode OPTIONS Emergency Stop PS Clear Emergency Stop PS Autoware Engage \u25cb Autoware Disengage \u25cb Vehicle Engage \u25b3 Vehicle Disengage \u25b3"},{"location":"control/autoware_joy_controller/#xbox-joystick-key-map","title":"XBOX Joystick Key Map","text":"Action Button Acceleration RT Brake LT Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive 
Cursor Left Shift Reverse Cursor Right Turn Signal Left LB Turn Signal Right RB Clear Turn Signal A Gate Mode B Emergency Stop View Clear Emergency Stop Menu Autoware Engage X Autoware Disengage Y Vehicle Engage Left Stick Button Vehicle Disengage Right Stick Button"},{"location":"control/autoware_lane_departure_checker/","title":"Lane Departure Checker","text":""},{"location":"control/autoware_lane_departure_checker/#lane-departure-checker","title":"Lane Departure Checker","text":"

The Lane Departure Checker checks whether the vehicle follows the trajectory. If it does not, the checker reports its status via diagnostic_updater.

"},{"location":"control/autoware_lane_departure_checker/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"control/autoware_lane_departure_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_lane_departure_checker/#how-to-extend-footprint-by-covariance","title":"How to extend footprint by covariance","text":"
  1. Calculate the standard deviation of the error ellipse (covariance) in the vehicle coordinate frame.

    1. Transform the covariance into the vehicle coordinate frame to obtain the covariance in vehicle coordinates.

    2. The longitudinal length we want to expand corresponds to the marginal distribution of \\(x_{vehicle}\\), which is represented by \\(Cov_{vehicle}(0,0)\\). In the same way, the lateral length is represented by \\(Cov_{vehicle}(1,1)\\). Wikipedia reference here.

  2. Expand the footprint based on the standard deviation multiplied by footprint_margin_scale, as sketched in the example below.
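A minimal sketch of this computation (an illustration only, not the package's actual implementation; the function name and the assumption of a 2x2 x-y pose covariance are hypothetical):

```python
# Illustrative sketch of the covariance-based footprint expansion described
# above. Assumes `cov_map` is the 2x2 x-y position covariance in map
# coordinates and `yaw` is the ego yaw angle.
import numpy as np

def footprint_margins_from_covariance(cov_map, yaw, footprint_margin_scale=1.0):
    # 1. Transform the covariance into the vehicle coordinate frame:
    #    Cov_vehicle = R^T * Cov_map * R, with R the 2D rotation by yaw.
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[c, -s], [s, c]])
    cov_vehicle = rotation.T @ cov_map @ rotation
    # The marginal standard deviations are the square roots of the diagonal
    # entries Cov_vehicle(0,0) and Cov_vehicle(1,1).
    sigma_longitudinal = np.sqrt(cov_vehicle[0, 0])
    sigma_lateral = np.sqrt(cov_vehicle[1, 1])
    # 2. Expand the footprint by the scaled standard deviations.
    return (footprint_margin_scale * sigma_longitudinal,
            footprint_margin_scale * sigma_lateral)
```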

"},{"location":"control/autoware_lane_departure_checker/#interface","title":"Interface","text":""},{"location":"control/autoware_lane_departure_checker/#input","title":"Input","text":""},{"location":"control/autoware_lane_departure_checker/#output","title":"Output","text":""},{"location":"control/autoware_lane_departure_checker/#parameters","title":"Parameters","text":""},{"location":"control/autoware_lane_departure_checker/#node-parameters","title":"Node Parameters","text":""},{"location":"control/autoware_lane_departure_checker/#general-parameters","title":"General Parameters","text":"Name Type Description Default value will_out_of_lane_checker bool Enable checker whether ego vehicle footprint will depart from lane True out_of_lane_checker bool Enable checker whether ego vehicle footprint is out of lane True boundary_departure_checker bool Enable checker whether ego vehicle footprint wil depart from boundary specified by boundary_types_to_detect False update_rate double Frequency for publishing [Hz] 10.0 visualize_lanelet bool Flag for visualizing lanelet False"},{"location":"control/autoware_lane_departure_checker/#parameters-for-lane-departure","title":"Parameters For Lane Departure","text":"Name Type Description Default value include_right_lanes bool Flag for including right lanelet in borders False include_left_lanes bool Flag for including left lanelet in borders False include_opposite_lanes bool Flag for including opposite lanelet in borders False include_conflicting_lanes bool Flag for including conflicting lanelet in borders False"},{"location":"control/autoware_lane_departure_checker/#parameters-for-road-border-departure","title":"Parameters For Road Border Departure","text":"Name Type Description Default value boundary_types_to_detect std::vector\\<std::string> line_string types to detect with boundary_departure_checker [road_border]"},{"location":"control/autoware_lane_departure_checker/#core-parameters","title":"Core Parameters","text":"Name Type Description Default value footprint_margin_scale double Coefficient for expanding footprint margin. Multiplied by 1 standard deviation. 1.0 footprint_extra_margin double Coefficient for expanding footprint margin. When checking for lane departure 0.0 resample_interval double Minimum Euclidean distance between points when resample trajectory.[m] 0.3 max_deceleration double Maximum deceleration when calculating braking distance. 2.8 delay_time double Delay time which took to actuate brake when calculating braking distance. [second] 1.3 max_lateral_deviation double Maximum lateral deviation in vehicle coordinate. [m] 2.0 max_longitudinal_deviation double Maximum longitudinal deviation in vehicle coordinate. [m] 2.0 max_yaw_deviation_deg double Maximum ego yaw deviation from trajectory. [deg] 60.0"},{"location":"control/autoware_mpc_lateral_controller/","title":"MPC Lateral Controller","text":""},{"location":"control/autoware_mpc_lateral_controller/#mpc-lateral-controller","title":"MPC Lateral Controller","text":"

This is the design document for the lateral controller node in the autoware_trajectory_follower_node package.

"},{"location":"control/autoware_mpc_lateral_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node is used to generate lateral control commands (steering angle and steering rate) when following a path.

"},{"location":"control/autoware_mpc_lateral_controller/#design","title":"Design","text":"

The node uses an implementation of linear model predictive control (MPC) for accurate path tracking. The MPC uses a model of the vehicle to simulate the trajectory resulting from the control command. The optimization of the control command is formulated as a Quadratic Program (QP).

Different vehicle models are implemented:

For the optimization, a Quadratic Programming (QP) solver is used and two options are currently implemented:

"},{"location":"control/autoware_mpc_lateral_controller/#filtering","title":"Filtering","text":"

Filtering is required for good noise reduction. A Butterworth filter is employed for processing the yaw and lateral errors, which are used as inputs for the MPC, as well as for refining the output steering angle. Other filtering methods can be considered as long as the noise reduction performance is good enough. The moving average filter, for example, is not suitable and can yield worse results than no filtering at all.
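As an illustration, a second-order Butterworth low-pass filter of this kind can be constructed with scipy; this is a sketch under assumed example values (the sample rate and function name are hypothetical), not the controller's internal implementation:

```python
# Illustrative 2nd-order Butterworth low-pass filter (assumed example
# values; the 3.0 Hz cutoff mirrors the steering_lpf_cutoff_hz default).
from scipy.signal import butter, lfilter

def butterworth_lowpass(samples, cutoff_hz=3.0, sample_rate_hz=100.0, order=2):
    # Normalized cutoff frequency relative to the Nyquist frequency.
    b, a = butter(order, cutoff_hz / (sample_rate_hz / 2.0))
    return lfilter(b, a, samples)
```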

"},{"location":"control/autoware_mpc_lateral_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The tracking is not accurate if the first point of the reference trajectory is at or in front of the current ego pose.

"},{"location":"control/autoware_mpc_lateral_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_mpc_lateral_controller/#inputs","title":"Inputs","text":"

Set the following from the controller_node

"},{"location":"control/autoware_mpc_lateral_controller/#outputs","title":"Outputs","text":"

Return a LateralOutput, which contains the following, to the controller node:

Publish the following messages.

Name Type Description ~/output/predicted_trajectory autoware_planning_msgs::Trajectory Predicted trajectory calculated by MPC. The trajectory will be empty when the controller is in an emergency state, such as too large a deviation from the planned trajectory."},{"location":"control/autoware_mpc_lateral_controller/#mpc-class","title":"MPC class","text":"

The MPC class (defined in mpc.hpp) provides the interface to the MPC algorithm. Once a vehicle model, a QP solver, and the reference trajectory to follow have been set (using setVehicleModel(), setQPSolver(), setReferenceTrajectory()), a lateral control command can be calculated by providing the current steering angle, velocity, and pose to the function calculateMPC().

"},{"location":"control/autoware_mpc_lateral_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/lateral_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.

"},{"location":"control/autoware_mpc_lateral_controller/#system","title":"System","text":"Name Type Description Default value traj_resample_dist double distance of waypoints in resampling [m] 0.1 use_steer_prediction boolean flag for using steer prediction (do not use steer measurement) false use_delayed_initial_state boolean flag to use x0_delayed as initial state for predicted trajectory true"},{"location":"control/autoware_mpc_lateral_controller/#path-smoothing","title":"Path Smoothing","text":"Name Type Description Default value enable_path_smoothing boolean path smoothing flag. This should be true when uses path resampling to reduce resampling noise. false path_filter_moving_ave_num int number of data points moving average filter for path smoothing 25 curvature_smoothing_num_traj int index distance of points used in curvature calculation for trajectory: p(i-num), p(i), p(i+num). larger num makes less noisy values. 15 curvature_smoothing_num_ref_steer int index distance of points used in curvature calculation for reference steering command: p(i-num), p(i), p(i+num). larger num makes less noisy values. 15"},{"location":"control/autoware_mpc_lateral_controller/#trajectory-extending","title":"Trajectory Extending","text":"Name Type Description Default value extend_trajectory_for_end_yaw_control boolean trajectory extending flag for end yaw control true"},{"location":"control/autoware_mpc_lateral_controller/#mpc-optimization","title":"MPC Optimization","text":"Name Type Description Default value qp_solver_type string QP solver option. described below in detail. \"osqp\" mpc_prediction_horizon int total prediction step for MPC 50 mpc_prediction_dt double prediction period for one step [s] 0.1 mpc_weight_lat_error double weight for lateral error 1.0 mpc_weight_heading_error double weight for heading error 0.0 mpc_weight_heading_error_squared_vel double weight for heading error * velocity 0.3 mpc_weight_steering_input double weight for steering error (steer command - reference steer) 1.0 mpc_weight_steering_input_squared_vel double weight for steering error (steer command - reference steer) * velocity 0.25 mpc_weight_lat_jerk double weight for lateral jerk (steer(i) - steer(i-1)) * velocity 0.1 mpc_weight_steer_rate double weight for steering rate [rad/s] 0.0 mpc_weight_steer_acc double weight for derivatives of the steering rate [rad/ss] 0.000001 mpc_low_curvature_weight_lat_error double [used in a low curvature trajectory] weight for lateral error 0.1 mpc_low_curvature_weight_heading_error double [used in a low curvature trajectory] weight for heading error 0.0 mpc_low_curvature_weight_heading_error_squared_vel double [used in a low curvature trajectory] weight for heading error * velocity 0.3 mpc_low_curvature_weight_steering_input double [used in a low curvature trajectory] weight for steering error (steer command - reference steer) 1.0 mpc_low_curvature_weight_steering_input_squared_vel double [used in a low curvature trajectory] weight for steering error (steer command - reference steer) * velocity 0.25 mpc_low_curvature_weight_lat_jerk double [used in a low curvature trajectory] weight for lateral jerk (steer(i) - steer(i-1)) * velocity 0.0 mpc_low_curvature_weight_steer_rate double [used in a low curvature trajectory] weight for steering rate [rad/s] 0.0 mpc_low_curvature_weight_steer_acc double [used in a low curvature trajectory] weight for derivatives of the steering rate [rad/ss] 0.000001 mpc_low_curvature_thresh_curvature double threshold of curvature to use \"low_curvature\" 
parameter 0.0 mpc_weight_terminal_lat_error double terminal lateral error weight in matrix Q to improve mpc stability 1.0 mpc_weight_terminal_heading_error double terminal heading error weight in matrix Q to improve mpc stability 0.1 mpc_zero_ff_steer_deg double threshold that feed-forward angle becomes zero 0.5 mpc_acceleration_limit double limit on the vehicle's acceleration 2.0 mpc_velocity_time_constant double time constant used for velocity smoothing 0.3 mpc_min_prediction_length double minimum prediction length 5.0"},{"location":"control/autoware_mpc_lateral_controller/#vehicle-model","title":"Vehicle Model","text":"Name Type Description Default value vehicle_model_type string vehicle model type for mpc prediction \"kinematics\" input_delay double steering input delay time for delay compensation 0.24 vehicle_model_steer_tau double steering dynamics time constant (1d approximation) [s] 0.3 steer_rate_lim_dps_list_by_curvature [double] steering angle rate limit list depending on curvature [deg/s] [40.0, 50.0, 60.0] curvature_list_for_steer_rate_lim [double] curvature list for steering angle rate limit interpolation in ascending order [/m] [0.001, 0.002, 0.01] steer_rate_lim_dps_list_by_velocity [double] steering angle rate limit list depending on velocity [deg/s] [60.0, 50.0, 40.0] velocity_list_for_steer_rate_lim [double] velocity list for steering angle rate limit interpolation in ascending order [m/s] [10.0, 15.0, 20.0] acceleration_limit double acceleration limit for trajectory velocity modification [m/ss] 2.0 velocity_time_constant double velocity dynamics time constant for trajectory velocity modification [s] 0.3"},{"location":"control/autoware_mpc_lateral_controller/#lowpass-filter-for-noise-reduction","title":"Lowpass Filter for Noise Reduction","text":"Name Type Description Default value steering_lpf_cutoff_hz double cutoff frequency of lowpass filter for steering output command [hz] 3.0 error_deriv_lpf_cutoff_hz double cutoff frequency of lowpass filter for error derivative [Hz] 5.0"},{"location":"control/autoware_mpc_lateral_controller/#stop-state","title":"Stop State","text":"Name Type Description Default value stop_state_entry_ego_speed *1 double threshold value of the ego vehicle speed used to the stop state entry condition 0.001 stop_state_entry_target_speed *1 double threshold value of the target speed used to the stop state entry condition 0.001 converged_steer_rad double threshold value of the steer convergence 0.1 keep_steer_control_until_converged boolean keep steer control until steer is converged true new_traj_duration_time double threshold value of the time to be considered as new trajectory 1.0 new_traj_end_dist double threshold value of the distance between trajectory ends to be considered as new trajectory 0.3 mpc_converged_threshold_rps double threshold value to be sure output of the optimization is converged, it is used in stopped state 0.01

(*1) To prevent unnecessary steering movement, the steering command is fixed to the previous value in the stop state.

"},{"location":"control/autoware_mpc_lateral_controller/#steer-offset","title":"Steer Offset","text":"

Defined in the steering_offset namespace. This logic is designed to be as simple as possible, with a minimum number of design parameters.

Name Type Description Default value enable_auto_steering_offset_removal boolean Estimate the steering offset and apply compensation true update_vel_threshold double If the velocity is smaller than this value, the data is not used for the offset estimation. 5.56 update_steer_threshold double If the steering angle is larger than this value, the data is not used for the offset estimation. 0.035 average_num int The steering offset is estimated by averaging this number of data samples. 1000 steering_offset_limit double The angle limit to be applied to the offset compensation. 0.02"},{"location":"control/autoware_mpc_lateral_controller/#for-dynamics-model-wip","title":"For dynamics model (WIP)","text":"Name Type Description Default value cg_to_front_m double distance from baselink to the front axle [m] 1.228 cg_to_rear_m double distance from baselink to the rear axle [m] 1.5618 mass_fl double mass applied to front left tire [kg] 600 mass_fr double mass applied to front right tire [kg] 600 mass_rl double mass applied to rear left tire [kg] 600 mass_rr double mass applied to rear right tire [kg] 600 cf double front cornering power [N/rad] 155494.663 cr double rear cornering power [N/rad] 155494.663"},{"location":"control/autoware_mpc_lateral_controller/#debug","title":"Debug","text":"Name Type Description Default value publish_debug_trajectories boolean publish predicted trajectory and resampled reference trajectory for debugging purposes true"},{"location":"control/autoware_mpc_lateral_controller/#how-to-tune-mpc-parameters","title":"How to tune MPC parameters","text":""},{"location":"control/autoware_mpc_lateral_controller/#set-kinematics-information","title":"Set kinematics information","text":"

First, it's important to set the appropriate parameters for vehicle kinematics. This includes parameters like wheelbase, which represents the distance between the front and rear wheels, and max_steering_angle, which indicates the maximum tire steering angle. These parameters should be set in the vehicle_info.param.yaml.

"},{"location":"control/autoware_mpc_lateral_controller/#set-dynamics-information","title":"Set dynamics information","text":"

Next, you need to set the proper parameters for the dynamics model. These include the time constant steering_tau and time delay steering_delay for steering dynamics, and the maximum acceleration mpc_acceleration_limit and the time constant mpc_velocity_time_constant for velocity dynamics.
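As a rough illustration of how these two steering parameters shape the response, the following toy simulation (an assumption-laden sketch, not controller code; all values are illustrative) models a first-order lag with time constant steering_tau plus a pure input delay steering_delay:

```python
# Toy simulation of first-order steering dynamics with a pure input delay.
# steering_tau and steering_delay correspond to the parameters described
# above; the other values are illustrative assumptions.
def simulate_steering_step(delta_des=0.1, steering_tau=0.3,
                           steering_delay=0.24, dt=0.01, t_end=2.0):
    delay_steps = int(steering_delay / dt)
    pending = [0.0] * delay_steps  # commands still "in flight"
    delta, history = 0.0, []
    for _ in range(int(t_end / dt)):
        pending.append(delta_des)
        command = pending.pop(0)  # command issued steering_delay seconds ago
        delta += (command - delta) * dt / steering_tau  # d(delta)/dt = (cmd - delta) / tau
        history.append(delta)
    return history
```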

"},{"location":"control/autoware_mpc_lateral_controller/#confirmation-of-the-input-information","title":"Confirmation of the input information","text":"

It's also important to make sure the input information is accurate. Information such as the velocity of the center of the rear wheel [m/s] and the steering angle of the tire [rad] is required. Please note that there have been frequent reports of performance degradation due to errors in input information. For instance, there are cases where the velocity of the vehicle is offset due to an unexpected difference in tire radius, or the tire angle cannot be accurately measured due to a deviation in the steering gear ratio or midpoint. It is suggested to compare information from multiple sensors (e.g., integrated vehicle speed and GNSS position, steering angle and IMU angular velocity), and ensure the input information for MPC is appropriate.

"},{"location":"control/autoware_mpc_lateral_controller/#mpc-weight-tuning","title":"MPC weight tuning","text":"

Then, tune the weights of the MPC. One simple approach to tuning is to keep the weight for the lateral deviation (weight_lat_error) constant and vary the input weight (weight_steering_input) while observing the trade-off between steering oscillation and control accuracy.

Here, weight_lat_error acts to suppress the lateral error in path following, while weight_steering_input works to adjust the steering angle to a standard value determined by the path's curvature. When weight_lat_error is large, the steering moves significantly to improve accuracy, which can cause oscillations. On the other hand, when weight_steering_input is large, the steering doesn't respond much to tracking errors, providing stable driving but potentially reducing tracking accuracy.

The steps are as follows:

  1. Set weight_lat_error = 0.1, weight_steering_input = 1.0 and other weights to 0.
  2. If the vehicle oscillates when driving, increase weight_steering_input.
  3. If the tracking accuracy is low, decrease weight_steering_input.

If you want to adjust the effect only in the high-speed range, you can use weight_steering_input_squared_vel. This parameter corresponds to the steering weight in the high-speed range.

"},{"location":"control/autoware_mpc_lateral_controller/#descriptions-for-weights","title":"Descriptions for weights","text":""},{"location":"control/autoware_mpc_lateral_controller/#other-tips-for-tuning","title":"Other tips for tuning","text":"

Here are some tips for adjusting other parameters:

"},{"location":"control/autoware_mpc_lateral_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/autoware_mpc_lateral_controller/#related-issues","title":"Related issues","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/","title":"MPC Algorithm","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#mpc-algorithm","title":"MPC Algorithm","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#introduction","title":"Introduction","text":"

Model Predictive Control (MPC) is a control method that solves an optimization problem during each control cycle to determine an optimal control sequence based on a given vehicle model. The calculated sequence of control inputs is used to control the system.

In simpler terms, an MPC controller calculates a series of control inputs that optimize the state and output trajectories to achieve the desired behavior. The key characteristics of an MPC control system can be summarized as follows:

  1. Prediction of Future Trajectories: MPC computes a control sequence by predicting the future state and output trajectories. The first control input is applied to the system, and this process repeats in a receding horizon manner at each control cycle.
  2. Handling of Constraints: MPC is capable of handling constraints on the state and input variables during the optimization phase. This ensures that the system operates within specified limits.
  3. Handling of Complex Dynamics: MPC algorithms can handle complex dynamics, whether they are linear or nonlinear in nature.

The choice between a linear or nonlinear model or constraint equation depends on the specific formulation of the MPC problem. If any nonlinear expressions are present in the motion equation or constraints, the optimization problem becomes nonlinear. In the following sections, we provide a step-by-step explanation of how linear and nonlinear optimization problems are solved within the MPC framework. Note that in this documentation, we utilize the linearization method to accommodate the nonlinear model.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#linear-mpc-formulation","title":"Linear MPC formulation","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#formulate-as-an-optimization-problem","title":"Formulate as an optimization problem","text":"

This section provides an explanation of MPC specifically for linear systems. The following section then demonstrates how a vehicle path-following problem can be formulated as an application.

In the linear MPC formulation, all motion and constraint expressions are linear. For the path following problem, let's assume that the system's motion can be described by a set of equations, denoted as (1). The state evolution and measurements are presented in a discrete state space format, where matrices \\(A\\), \\(B\\), and \\(C\\) represent the state transition, control, and measurement matrices, respectively.

\\[ \\begin{gather} x_{k+1}=Ax_{k}+Bu_{k}+w_{k}, y_{k}=Cx_{k} \\tag{1} \\\\\\ x_{k}\\in R^{n},u_{k}\\in R^{m},w_{k}\\in R^{n}, y_{k}\\in R^{l}, A\\in R^{n\\times n}, B\\in R^{n\\times m}, C\\in R^{l \\times n} \\end{gather} \\]

Equation (1) represents the state-space equation, where \\(x_k\\) represents the internal states, \\(u_k\\) denotes the input, and \\(w_k\\) represents a known disturbance caused by linearization or problem structure. The measurements are indicated by the variable \\(y_k\\).

It's worth noting that another advantage of MPC is its ability to effectively handle the disturbance term \\(w\\). While it is referred to as a disturbance here, it can take various forms as long as it adheres to the equation's structure.

The state transition and measurement equations in (1) are iterative, moving from time \\(k\\) to time \\(k+1\\). By propagating the equation starting from an initial state and control pair \\((x_0, u_0)\\) along with a specified horizon of \\(N\\) steps, one can predict the trajectories of states and measurements.

For simplicity, let's assume the initial state is \\(x_0\\) with \\(k=0\\).

To begin, we can compute the state \\(x_1\\) at \\(k=1\\) using equation (1) by substituting the initial state into the equation. Since we are seeking a solution for the input sequence, we represent the inputs as decision variables in the symbolic expressions.

\\[ \\begin{align} x_{1} = Ax_{0} + Bu_{0} + w_{0} \\tag{2} \\end{align} \\]

Then, when \\(k=2\\), using also equation (2), we get

\\[ \\begin{align} x_{2} & = Ax_{1} + Bu_{1} + w_{1} \\\\\\ & = A(Ax_{0} + Bu_{0} + w_{0}) + Bu_{1} + w_{1} \\\\\\ & = A^{2}x_{0} + ABu_{0} + Aw_{0} + Bu_{1} + w_{1} \\\\\\ & = A^{2}x_{0} + \\begin{bmatrix}AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\end{bmatrix} + \\begin{bmatrix}A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\end{bmatrix} \\tag{3} \\end{align} \\]

When \\(k=3\\) , from equation (3)

\\[ \\begin{align} x_{3} & = Ax_{2} + Bu_{2} + w_{2} \\\\\\ & = A(A^{2}x_{0} + ABu_{0} + Bu_{1} + Aw_{0} + w_{1} ) + Bu_{2} + w_{2} \\\\\\ & = A^{3}x_{0} + A^{2}Bu_{0} + ABu_{1} + A^{2}w_{0} + Aw_{1} + Bu_{2} + w_{2} \\\\\\ & = A^{3}x_{0} + \\begin{bmatrix}A^{2}B & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\\\\\ u_{2} \\end{bmatrix} + \\begin{bmatrix} A^{2} & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\\\\\ w_{2} \\end{bmatrix} \\tag{4} \\end{align} \\]

If \\(k=n\\) , then

\\[ \\begin{align} x_{n} = A^{n}x_{0} + \\begin{bmatrix}A^{n-1}B & A^{n-2}B & \\dots & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\\\\\ \\vdots \\\\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} A^{n-1} & A^{n-2} & \\dots & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\\\\\ \\vdots \\\\\\ w_{n-1} \\end{bmatrix} \\tag{5} \\end{align} \\]

Putting equations (2) to (5) together yields the following matrix equation:

\\[ \\begin{align} \\begin{bmatrix}x_{1}\\\\\\ x_{2} \\\\\\ x_{3} \\\\\\ \\vdots \\\\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix}A^{1}\\\\\\ A^{2} \\\\\\ A^{3} \\\\\\ \\vdots \\\\\\ A^{n} \\end{bmatrix}x_{0} + \\begin{bmatrix}B & 0 & \\dots & & 0 \\\\\\ AB & B & 0 & \\dots & 0 \\\\\\ A^{2}B & AB & B & \\dots & 0 \\\\\\ \\vdots & \\vdots & & & 0 \\\\\\ A^{n-1}B & A^{n-2}B & \\dots & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\\\\\ u_{2} \\\\\\ \\vdots \\\\\\ u_{n-1} \\end{bmatrix} \\\\\\ + \\begin{bmatrix}I & 0 & \\dots & & 0 \\\\\\ A & I & 0 & \\dots & 0 \\\\\\ A^{2} & A & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & & & 0 \\\\\\ A^{n-1} & A^{n-2} & \\dots & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\\\\\ w_{2} \\\\\\ \\vdots \\\\\\ w_{n-1} \\end{bmatrix} \\tag{6} \\end{align} \\]

In this case, the measurements (outputs) become \\(y_{k}=Cx_{k}\\), so

\\[ \\begin{align} \\begin{bmatrix}y_{1}\\\\\\ y_{2} \\\\\\ y_{3} \\\\\\ \\vdots \\\\\\ y_{n} \\end{bmatrix} = \\begin{bmatrix}C & 0 & \\dots & & 0 \\\\\\ 0 & C & 0 & \\dots & 0 \\\\\\ 0 & 0 & C & \\dots & 0 \\\\\\ \\vdots & & & \\ddots & 0 \\\\\\ 0 & \\dots & 0 & 0 & C \\end{bmatrix}\\begin{bmatrix}x_{1}\\\\\\ x_{2} \\\\\\ x_{3} \\\\\\ \\vdots \\\\\\ x_{n} \\end{bmatrix} \\tag{7} \\end{align} \\]

We can combine equations (6) and (7) into the following form:

\\[ \\begin{align} X = Fx_{0} + GU +SW, Y=HX \\tag{8} \\end{align} \\]

This form is similar to the original state-space equations (1), but it introduces new matrices: the state transition matrix \\(F\\), control matrix \\(G\\), disturbance matrix \\(S\\), and measurement matrix \\(H\\). In these equations, \\(X\\) represents the predicted states, given by \\(\\begin{bmatrix}x_{1} & x_{2} & \\dots & x_{n} \\end{bmatrix}^{T}\\).

Now that \\(G\\), \\(S\\), \\(W\\), and \\(H\\) are known, we can express the output behavior \\(Y\\) for the next \\(n\\) steps as a function of the input \\(U\\). This allows us to calculate the control input \\(U\\) so that \\(Y(U)\\) follows the target trajectory \\(Y_{ref}\\).
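As an illustration of equations (6) to (8), the batch matrices can be assembled numerically as follows. This is a hedged numpy sketch with an assumed function name, not the controller's implementation:

```python
# Stack the n-step prediction into X = F x0 + G U + S W and Y = H X,
# following equations (6)-(8), for given A, B, C (illustration only).
import numpy as np

def build_batch_matrices(A, B, C, n):
    nx, nu = B.shape
    F = np.zeros((n * nx, nx))
    G = np.zeros((n * nx, n * nu))
    S = np.zeros((n * nx, n * nx))
    H = np.kron(np.eye(n), C)  # block-diagonal measurement matrix of eq. (7)
    powers = [np.eye(nx)]      # powers[i] = A^i
    for _ in range(n):
        powers.append(powers[-1] @ A)
    for i in range(n):         # block row i corresponds to x_{i+1}
        F[i * nx:(i + 1) * nx, :] = powers[i + 1]
        for j in range(i + 1):
            G[i * nx:(i + 1) * nx, j * nu:(j + 1) * nu] = powers[i - j] @ B
            S[i * nx:(i + 1) * nx, j * nx:(j + 1) * nx] = powers[i - j]
    return F, G, S, H
```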

The next step is to define a cost function. The cost function generally uses the following quadratic form:

\\[ \\begin{align} J = (Y - Y_{ref})^{T}Q(Y - Y_{ref}) + (U - U_{ref})^{T}R(U - U_{ref}) \\tag{9} \\end{align} \\]

where \\(U_{ref}\\) is the target or steady-state input that serves as the reference for \\(U\\).

This cost function is the same as that of the LQR controller. The first term of \\(J\\) penalizes the deviation from the reference trajectory. The second term penalizes the deviation from the reference (or steady-state) control trajectory. \\(Q\\) and \\(R\\) are the cost weight matrices, required to be positive semi-definite and positive definite, respectively.

Note: in some cases, \\(U_{ref}=0\\) is used, but this can mean the steering angle should be set to \\(0\\) even if the vehicle is turning a curve. Thus \\(U_{ref}\\) is used for the explanation here. This \\(U_{ref}\\) can be pre-calculated from the curvature of the target trajectory or the steady-state analyses.

As the resulting trajectory output is now \\(Y=Y(x_{0}, U)\\), the cost function depends only on \\(U\\) and the initial state, which yields the cost \\(J=J(x_{0}, U)\\). Let's find the \\(U\\) that minimizes this.

Substituting equation (8) into equation (9) and tidying up the expression in \\(U\\) gives:

\\[ \\begin{align} J(U) &= (H(Fx_{0}+GU+SW)-Y_{ref})^{T}Q(H(Fx_{0}+GU+SW)-Y_{ref})+(U-U_{ref})^{T}R(U-U_{ref}) \\\\\\ & =U^{T}(G^{T}H^{T}QHG+R)U+2\\lbrace(H(Fx_{0}+SW)-Y_{ref})^{T}QHG-U_{ref}^{T}R\\rbrace U +(\\rm{constant}) \\tag{10} \\end{align} \\]

This equation is quadratic in \\(U\\) (i.e. of the form \\(U^{T}AU+B^{T}U\\)).

The coefficient matrix of the quadratic term of \\(U\\), \\(G^{T}H^{T}QHG+R\\), is positive definite due to the positive semi-definiteness of \\(Q\\) and the positive definiteness of \\(R\\). Therefore, the cost function is a convex quadratic function of \\(U\\), which can be solved efficiently by convex optimization.
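For illustration, in the unconstrained case the optimal \\(U\\) follows in closed form from the first-order optimality condition of equation (10); with constraints, a QP solver such as OSQP (see the qp_solver_type parameter) would be used instead. A hedged numpy sketch with assumed names:

```python
# Unconstrained minimizer of the quadratic cost (10): setting the gradient
# to zero gives (G^T H^T Q H G + R) U* = -q (illustration only).
import numpy as np

def solve_unconstrained_mpc(F, G, S, H, Q, R, x0, W, Y_ref, U_ref):
    P = G.T @ H.T @ Q @ H @ G + R
    q = G.T @ H.T @ Q @ (H @ (F @ x0 + S @ W) - Y_ref) - R @ U_ref
    return np.linalg.solve(P, -q)  # U* = -P^{-1} q
```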

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#apply-to-vehicle-path-following-problem-nonlinear-problem","title":"Apply to vehicle path-following problem (nonlinear problem)","text":"

Because the path-following problem with a kinematic vehicle model is nonlinear, we cannot directly use the linear MPC methods described in the preceding section. There are several ways to deal with the nonlinearity, such as using a nonlinear optimization solver. Here, linearization is applied to the nonlinear vehicle model along the reference trajectory, and consequently the nonlinear model is converted into a linear time-varying model.

For a nonlinear kinematic vehicle model, the discrete-time update equations are as follows:

\\[ \\begin{align} x_{k+1} &= x_{k} + v\\cos\\theta_{k} \\text{d}t \\\\\\ y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t \\tag{11} \\\\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]

The vehicle reference point is the center of the rear axle, and all states are measured at this point. The states, parameters, and control variables are shown in the following table.

Symbol Represent \\(v\\) Vehicle speed measured at the center of rear axle \\(\\theta\\) Yaw (heading angle) in global coordinate system \\(\\delta\\) Vehicle steering angle \\(\\delta_{des}\\) Vehicle target steering angle \\(L\\) Vehicle wheelbase (distance between the rear and front axles) \\(\\tau\\) Time constant for the first order steering dynamics

We assume in this example that the MPC only generates the steering control, and the trajectory generator gives the vehicle speed along the trajectory.

The kinematic vehicle model discrete update equations contain trigonometric functions (sin and cos), and the vehicle coordinates \\(x\\), \\(y\\), and yaw angle are expressed in global coordinates. In path tracking applications, it is common to reformulate the model in error dynamics to convert the control into a regulator problem in which the targets become zero (zero error).

We make small angle assumptions for the following derivations of linear equations. Given the nonlinear dynamics and omitting the longitudinal coordinate \\(x\\), the resulting set of equations becomes:

\\[ \\begin{align} y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t - \\kappa_{r}v\\cos\\theta_{k}\\text{d}t \\tag{12} \\\\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]

where \\(\\kappa_{r}\\left(s\\right)\\) is the curvature along the trajectory, parametrized by the arc length \\(s\\).

There are three expressions in the update equations that are subject to linear approximation: the lateral deviation (or lateral coordinate) \\(y\\), the heading angle (or the heading angle error) \\(\\theta\\), and the steering \\(\\delta\\). We can make a small angle assumption on the heading angle \\(\\theta\\).

In the path tracking problem, the curvature of the trajectory \\(\\kappa_{r}\\) is known in advance. At lower speeds, the Ackermann formula approximates the reference steering angle \\(\\delta_{r}\\) (this value corresponds to the \\(U_{ref}\\) mentioned above). The Ackermann steering expression can be written as;

\\[ \\begin{align} \\delta_{r} = \\arctan\\left(L\\kappa_{r}\\right) \\end{align} \\]

When the vehicle is driving along a curved path, its steering angle \\(\\delta\\) should be close to the value \\(\\delta_{r}\\). Therefore, \\(\\delta\\) can be expressed as

\\[ \\begin{align} \\delta = \\delta_{r} + \\Delta \\delta, \\Delta\\delta \\ll 1 \\end{align} \\]

Substituting this expression into equation (12) and approximating \\(\\Delta\\delta\\) as small gives:

\\[ \\begin{align} \\tan\\delta &\\simeq \\tan\\delta_{r} + \\frac{\\text{d}\\tan\\delta}{\\text{d}\\delta} \\Biggm|_{\\delta=\\delta_{r}}\\Delta\\delta \\\\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\Delta\\delta \\\\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\left(\\delta-\\delta_{r}\\right) \\\\\\ &= \\tan \\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta \\end{align} \\]

Using this, \\(\\theta_{k+1}\\) can be expressed as

\\[ \\begin{align} \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L}\\text{d}t - \\kappa_{r}v\\cos\\theta_{k}\\text{d}t \\\\\\ &\\simeq \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(\\tan\\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\\\ &= \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(L\\kappa_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\\\ &= \\theta_{k} + \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}}\\delta_{k} - \\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\end{align} \\]

Finally, the linearized time-varying model equation becomes:

\\[ \\begin{align} \\begin{bmatrix} y_{k+1} \\\\\\ \\theta_{k+1} \\\\\\ \\delta_{k+1} \\end{bmatrix} = \\begin{bmatrix} 1 & v\\text{d}t & 0 \\\\\\ 0 & 1 & \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\\\ 0 & 0 & 1 - \\tau^{-1}\\text{d}t \\end{bmatrix} \\begin{bmatrix} y_{k} \\\\\\ \\theta_{k} \\\\\\ \\delta_{k} \\end{bmatrix} + \\begin{bmatrix} 0 \\\\\\ 0 \\\\\\ \\tau^{-1}\\text{d}t \\end{bmatrix}\\delta_{des} + \\begin{bmatrix} 0 \\\\\\ -\\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\\\ 0 \\end{bmatrix} \\end{align} \\]

This equation has the same form as equation (1) of the linear MPC formulation, but the matrices \\(A\\) and \\(B\\) and the disturbance \\(w\\) now change along the trajectory. To make this explicit, the entire equation is written as follows

\\[ \\begin{align} x_{k+1} = A_{k}x_{k} + B_{k}u_{k}+w_{k} \\end{align} \\]

Compared with equation (1), \\(A \\rightarrow A_{k}\\). This means that the \\(A_{k}\\) matrix is a linear approximation in the vicinity of the trajectory after \\(k\\) steps (i.e., after \\(k \\cdot \\text{d}t\\) seconds), and it can be obtained in advance if the trajectory is known.

Using this equation, the update equation can be written down in the same way as equations (2) to (6):

\\[ \\begin{align} \\begin{bmatrix} x_{1} \\\\\\ x_{2} \\\\\\ x_{3} \\\\\\ \\vdots \\\\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix} A_{0} \\\\\\ A_{1}A_{0} \\\\\\ A_{2}A_{1}A_{0} \\\\\\ \\vdots \\\\\\ \\prod_{i=0}^{n-1} A_{i} \\end{bmatrix} x_{0} + \\begin{bmatrix} B_{0} & 0 & \\dots & & 0 \\\\\\ A_{1}B_{0} & B_{1} & 0 & \\dots & 0 \\\\\\ A_{2}A_{1}B_{0} & A_{2}B_{1} & B_{2} & \\dots & 0 \\\\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\\\ \\prod_{i=1}^{n-1} A_{i}B_{0} & \\prod_{i=2}^{n-1} A_{i}B_{1} & \\dots & A_{n-1}B_{n-2} & B_{n-1} \\end{bmatrix} \\begin{bmatrix} u_{0} \\\\\\ u_{1} \\\\\\ u_{2} \\\\\\ \\vdots \\\\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} I & 0 & \\dots & & 0 \\\\\\ A_{1} & I & 0 & \\dots & 0 \\\\\\ A_{2}A_{1} & A_{2} & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\\\ \\prod_{i=1}^{n-1} A_{i} & \\prod_{i=2}^{n-1} A_{i} & \\dots & A_{n-1} & I \\end{bmatrix} \\begin{bmatrix} w_{0} \\\\\\ w_{1} \\\\\\ w_{2} \\\\\\ \\vdots \\\\\\ w_{n-1} \\end{bmatrix} \\end{align} \\]

As this has the same form as equation (6), convex optimization is applicable in the same way as for the model in the previous section.
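As an illustration, the time-varying matrices \\(A_{k}\\), \\(B_{k}\\), and \\(w_{k}\\) of the linearized model above can be computed per step from the reference curvature. A hedged numpy sketch with assumed names, following the symbols defined in the text:

```python
# Per-step linearized error dynamics for state [y, theta, delta], matching
# the time-varying matrices above (illustration only).
import numpy as np

def linearized_matrices(v, kappa_r, wheelbase, tau, dt):
    delta_r = np.arctan(wheelbase * kappa_r)  # reference steering (Ackermann)
    cos2 = np.cos(delta_r) ** 2
    A = np.array([[1.0, v * dt, 0.0],
                  [0.0, 1.0, v * dt / (wheelbase * cos2)],
                  [0.0, 0.0, 1.0 - dt / tau]])
    B = np.array([[0.0], [0.0], [dt / tau]])
    w = np.array([0.0, -v * delta_r * dt / (wheelbase * cos2), 0.0])
    return A, B, w
```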

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-functions-and-constraints","title":"The cost functions and constraints","text":"

In this section, we give the details on how to set up the cost function and constraint conditions.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-function","title":"The cost function","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-error-and-input","title":"Weight for error and input","text":"

MPC states and control weights appear in the cost function in a similar way as in LQR (9). In the vehicle path-following problem described above, if \\(C\\) is the unit matrix, the output is \\(y = x = \\left[e, \\theta, \\delta\\right]\\). (To avoid confusion with the y-directional deviation, \\(e\\) is used here for the lateral deviation.)

As an example, let's determine the weight matrix \\(Q_{1}\\) of the evaluation function for a system with \\(n=2\\) prediction steps as follows.

\\[ \\begin{align} Q_{1} = \\begin{bmatrix} q_{e} & 0 & 0 & 0 & 0& 0 \\\\\\ 0 & q_{\\theta} & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & q_{e} & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & q_{\\theta} & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\end{bmatrix} \\end{align} \\]

The first term in the cost function (9) with \\(n=2\\) is shown as follows (\\(Y_{ref}\\) is set to \\(0\\)):

\\[ \\begin{align} q_{e}\\left(e_{0}^{2} + e_{1}^{2} \\right) + q_{\\theta}\\left(\\theta_{0}^{2} + \\theta_{1}^{2} \\right) \\end{align} \\]

This shows that \\(q_{e}\\) is the weight for the lateral error and \\(q_{\\theta}\\) is the weight for the angular error. In this example, \\(q_{e}\\) acts as the proportional (P) gain and \\(q_{\\theta}\\) as the derivative (D) gain for the lateral tracking error. The balance of these factors (including \\(R\\)) is determined through actual experiments.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-non-diagonal-term","title":"Weight for non-diagonal term","text":"

MPC can handle the non-diagonal term in its calculation (as long as the resulting matrix is positive definite).

For instance, write \\(Q_{2}\\) as follows for the \\(n=2\\) system.

\\[ \\begin{align} Q_{2} = \\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & q_{d} & 0 & 0 & -q_{d} \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & -q_{d} & 0 & 0 & q_{d} \\end{bmatrix} \\end{align} \\]

Expanding the first term of the evaluation function using \\(Q_{2}\\) gives:

\\[ \\begin{align} q_{d}\\left(\\delta_{0}^{2} -2\\delta_{0}\\delta_{1} + \\delta_{1}^{2} \\right) = q_{d}\\left( \\delta_{0} - \\delta_{1}\\right)^{2} \\end{align} \\]

The term weighted by \\(q_{d}\\) penalizes the amount of change in \\(\\delta\\), which prevents the tires from moving quickly. By adding this term, the system can evaluate the balance between tracking accuracy and the rate of change of the steering angle.

Since the weight matrix can be added linearly, the final weight can be set as \\(Q = Q_{1} + Q_{2}\\).

Furthermore, since MPC optimizes over a period of time, time-varying weights can also be considered in the optimization.
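A small sketch of assembling \\(Q = Q_{1} + Q_{2}\\) for an \\(n\\)-step horizon with state \\(\\left[e, \\theta, \\delta\\right]\\) (illustrative, with assumed function and variable names):

```python
# Q1 holds the diagonal error weights; Q2 adds the non-diagonal terms that
# penalize the steering change between consecutive steps (illustration only).
import numpy as np

def build_weight_matrix(n, q_e, q_theta, q_d):
    nx = 3
    Q1 = np.kron(np.eye(n), np.diag([q_e, q_theta, 0.0]))
    Q2 = np.zeros((n * nx, n * nx))
    for k in range(n - 1):
        i, j = k * nx + 2, (k + 1) * nx + 2  # delta indices at steps k and k+1
        Q2[i, i] += q_d
        Q2[j, j] += q_d
        Q2[i, j] -= q_d
        Q2[j, i] -= q_d
    return Q1 + Q2
```

For \\(n=2\\) this reproduces exactly the \\(Q_{2}\\) shown above.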

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#constraints","title":"Constraints","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#input-constraint","title":"Input constraint","text":"

The main advantage of MPC controllers is their capability to deal with any state or input constraints. The constraints can be expressed as box constraints, such as "the tire angle must be within \u00b130 degrees", and can be put in the following form:

\\[ \\begin{align} u_{min} < u < u_{max} \\end{align} \\]

The constraints must be linear and convex in linear MPC applications.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#constraints-on-the-derivative-of-the-input","title":"Constraints on the derivative of the input","text":"

We can also put constraints on the derivative of the input. As the derivative of the steering angle is \\(\\dot{u}\\), its box constraint is

\\[ \\begin{align} \\dot u_{min} < \\dot u < \\dot u_{max} \\end{align} \\]

We discretize \\(\\dot{u}\\) as \\(\\left(u_{k} - u_{k-1}\\right)/\\text{d}t\\) and multiply both sides by \\(\\text{d}t\\); the resulting constraint becomes linear and convex:

\\[ \\begin{align} \\dot u_{min}\\text{d}t < u_{k} - u_{k-1} < \\dot u_{max}\\text{d}t \\end{align} \\]

Along the prediction or control horizon, i.e. for \\(n=3\\):

\\[ \\begin{align} \\dot u_{min}\\text{d}t < u_{1} - u_{0} < \\dot u_{max}\\text{d}t \\\\\\ \\dot u_{min}\\text{d}t < u_{2} - u_{1} < \\dot u_{max}\\text{d}t \\end{align} \\]

and aligning the inequality signs

\\[ \\begin{align} u_{1} - u_{0} &< \\dot u_{max}\\text{d}t \\\\\\ - u_{1} + u_{0} &< -\\dot u_{min}\\text{d}t \\\\\\ u_{2} - u_{1} &< \\dot u_{max}\\text{d}t \\\\\\ - u_{2} + u_{1} &< - \\dot u_{min}\\text{d}t \\end{align} \\]

We can obtain a matrix expression for the resulting constraint equation in the form of

\\[ \\begin{align} Ax \\leq b \\end{align} \\]

Thus, by rearranging the inequalities to fit the form above, the constraints on \\(\\dot{u}\\) can be included at the first-order approximation level.

\\[ \\begin{align} \\begin{bmatrix} -1 & 1 & 0 \\\\\\ 1 & -1 & 0 \\\\\\ 0 & -1 & 1 \\\\\\ 0 & 1 & -1 \\end{bmatrix}\\begin{bmatrix} u_{0} \\\\\\ u_{1} \\\\\\ u_{2} \\end{bmatrix} \\leq \\begin{bmatrix} \\dot u_{max}\\text{d}t \\\\\\ -\\dot u_{min}\\text{d}t \\\\\\ \\dot u_{max}\\text{d}t \\\\\\ -\\dot u_{min}\\text{d}t \\end{bmatrix} \\end{align} \\]"},{"location":"control/autoware_obstacle_collision_checker/","title":"obstacle_collision_checker","text":""},{"location":"control/autoware_obstacle_collision_checker/#obstacle_collision_checker","title":"obstacle_collision_checker","text":""},{"location":"control/autoware_obstacle_collision_checker/#purpose","title":"Purpose","text":"

obstacle_collision_checker is a module that checks the predicted trajectory for collisions with obstacles and publishes diagnostic errors if a collision is found.

"},{"location":"control/autoware_obstacle_collision_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_obstacle_collision_checker/#flow-chart","title":"Flow chart","text":""},{"location":"control/autoware_obstacle_collision_checker/#algorithms","title":"Algorithms","text":""},{"location":"control/autoware_obstacle_collision_checker/#check-data","title":"Check data","text":"

Check that obstacle_collision_checker receives the no-ground pointcloud, predicted trajectory, reference trajectory, and current velocity data.

"},{"location":"control/autoware_obstacle_collision_checker/#diagnostic-update","title":"Diagnostic update","text":"

If any collision is found on the predicted path, this module sets the diagnostic status to ERROR; otherwise, it sets it to OK.
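A simplified sketch of such a check (an illustration only, not this package's actual algorithm; `footprint_at` is a hypothetical helper returning the ego footprint polygon at a trajectory pose, and shapely is assumed for the geometry):

```python
# Naive footprint-vs-pointcloud collision check along the predicted
# trajectory, restricted to points within search_radius of each pose.
from shapely.geometry import Point, Polygon

def has_collision(trajectory_poses, obstacle_points, footprint_at,
                  search_radius=5.0):
    for x, y, yaw in trajectory_poses:
        footprint: Polygon = footprint_at(x, y, yaw)
        for px, py in obstacle_points:
            # Only consider points near this trajectory pose.
            if (px - x) ** 2 + (py - y) ** 2 > search_radius ** 2:
                continue
            if footprint.contains(Point(px, py)):
                return True  # collision found -> diagnostic status ERROR
    return False
```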

"},{"location":"control/autoware_obstacle_collision_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_obstacle_collision_checker/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::msg::Trajectory Reference trajectory ~/input/trajectory autoware_planning_msgs::msg::Trajectory Predicted trajectory /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"control/autoware_obstacle_collision_checker/#output","title":"Output","text":"Name Type Description ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"control/autoware_obstacle_collision_checker/#parameters","title":"Parameters","text":"Name Type Description Default value delay_time double Delay time of vehicle [s] 0.3 footprint_margin double Foot print margin [m] 0.0 max_deceleration double Max deceleration for ego vehicle to stop [m/s^2] 2.0 resample_interval double Interval for resampling trajectory [m] 0.3 search_radius double Search distance from trajectory to point cloud [m] 5.0"},{"location":"control/autoware_obstacle_collision_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform a proper collision check, it is necessary to obtain a properly predicted trajectory and noise-free obstacle pointclouds.

"},{"location":"control/autoware_operation_mode_transition_manager/","title":"autoware_operation_mode_transition_manager","text":""},{"location":"control/autoware_operation_mode_transition_manager/#autoware_operation_mode_transition_manager","title":"autoware_operation_mode_transition_manager","text":""},{"location":"control/autoware_operation_mode_transition_manager/#purpose-use-cases","title":"Purpose / Use cases","text":"

This module is responsible for managing the different modes of operation for the Autoware system. The possible modes are:

There is also an In Transition state that occurs during each mode transition. During this state, the transition to the new operator is not yet complete, and the previous operator is still responsible for controlling the system until the transition completes. Some actions may be restricted during the In Transition state, such as sudden braking or steering. (This is restricted by the vehicle_cmd_gate).

"},{"location":"control/autoware_operation_mode_transition_manager/#features","title":"Features","text":" "},{"location":"control/autoware_operation_mode_transition_manager/#design","title":"Design","text":"

A rough design of the relationship between `autoware_operation_mode_transition_manager` and the other nodes is shown below.

A more detailed structure is below.

Here we see that autoware_operation_mode_transition_manager has multiple state transitions as follows

"},{"location":"control/autoware_operation_mode_transition_manager/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_operation_mode_transition_manager/#inputs","title":"Inputs","text":"

For the mode transition:

For the transition availability/completion check:

For the backward compatibility (to be removed):

"},{"location":"control/autoware_operation_mode_transition_manager/#outputs","title":"Outputs","text":" "},{"location":"control/autoware_operation_mode_transition_manager/#parameters","title":"Parameters","text":"Name Type Description Default Range transition_timeout float If the state transition is not completed within this time, it is considered a transition failure. 10.0 \u22650.0 frequency_hz float running hz 10.0 \u22650.0 enable_engage_on_driving boolean Set true if you want to engage the autonomous driving mode while the vehicle is driving. If set to false, it will deny Engage in any situation where the vehicle speed is not zero. Note that if you use this feature without adjusting the parameters, it may cause issues like sudden deceleration. Before using, please ensure the engage condition and the vehicle_cmd_gate transition filter are appropriately adjusted. false N/A check_engage_condition boolean If false, autonomous transition is always available. false N/A nearest_dist_deviation_threshold float distance threshold used to find nearest trajectory point [m] 3.0 \u22650.0 nearest_yaw_deviation_threshold float angle threshold used to find nearest trajectory point [rad] 1.57 \u2265-3.142 engage_acceptable_limits.allow_autonomous_in_stopped boolean If true, autonomous transition is available when the vehicle is stopped even if other checks fail. true N/A engage_acceptable_limits.dist_threshold float The distance between the trajectory and ego vehicle must be within this distance for Autonomous transition. 1.5 \u22650.0 engage_acceptable_limits.yaw_threshold float The yaw angle between trajectory and ego vehicle must be within this threshold for Autonomous transition. 0.524 \u2265-3.142 engage_acceptable_limits.speed_upper_threshold float The velocity deviation between control command and ego vehicle must be within this threshold for Autonomous transition. 10.0 N/A engage_acceptable_limits.speed_lower_threshold float The velocity deviation between control command and ego vehicle must be within this threshold for Autonomous transition. -10.0 N/A engage_acceptable_limits.acc_threshold float The control command acceleration must be less than this threshold for Autonomous transition. 1.5 \u22650.0 engage_acceptable_limits.lateral_acc_threshold float The control command lateral acceleration must be less than this threshold for Autonomous transition. 1.0 \u22650.0 engage_acceptable_limits.lateral_acc_diff_threshold float The lateral acceleration deviation between the control command must be less than this threshold for Autonomous transition. 0.5 \u22650.0 stable_check.duration float The stable condition must be satisfied for this duration to complete the transition. 0.1 \u22650.0 stable_check.dist_threshold float The distance between the trajectory and ego vehicle must be within this distance to complete Autonomous transition. 1.5 \u22650.0 stable_check.speed_upper_threshold float The velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. 2.0 N/A stable_check.speed_lower_threshold float The velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. -2.0 N/A stable_check.yaw_threshold float The yaw angle between trajectory and ego vehicle must be within this threshold to complete Autonomous transition. 
0.262 \u2265-3.142

"},{"location":"control/autoware_operation_mode_transition_manager/#engage-check-behavior-on-each-parameter-setting","title":"Engage check behavior on each parameter setting","text":"

This matrix describes the scenarios in which the vehicle can be engaged based on the combinations of parameter settings:

enable_engage_on_driving check_engage_condition allow_autonomous_in_stopped Scenarios where engage is permitted x x x Only when the vehicle is stationary. x x o Only when the vehicle is stationary. x o x When the vehicle is stationary and all engage conditions are met. x o o Only when the vehicle is stationary. o x x At any time (Caution: Not recommended). o x o At any time (Caution: Not recommended). o o x When all engage conditions are met, regardless of vehicle status. o o o When all engage conditions are met or the vehicle is stationary."},{"location":"control/autoware_operation_mode_transition_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/autoware_pid_longitudinal_controller/","title":"PID Longitudinal Controller","text":""},{"location":"control/autoware_pid_longitudinal_controller/#pid-longitudinal-controller","title":"PID Longitudinal Controller","text":""},{"location":"control/autoware_pid_longitudinal_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

The longitudinal_controller computes the target acceleration to achieve the target velocity set at each point of the target trajectory using feed-forward/feedback control.

It also contains a slope force correction that takes into account road slope information, and a delay compensation function. It is assumed that the target acceleration calculated here will be properly realized by the vehicle interface.

Note that the use of this module is not mandatory for Autoware if the vehicle supports the \"target speed\" interface.

"},{"location":"control/autoware_pid_longitudinal_controller/#design-inner-workings-algorithms","title":"Design / Inner-workings / Algorithms","text":""},{"location":"control/autoware_pid_longitudinal_controller/#states","title":"States","text":"

This module has four states (DRIVE, STOPPING, STOPPED, and EMERGENCY) and transitions between them, as shown below, in order to handle special processing in specific situations.

The state transition diagram is shown below.

"},{"location":"control/autoware_pid_longitudinal_controller/#logics","title":"Logics","text":""},{"location":"control/autoware_pid_longitudinal_controller/#control-block-diagram","title":"Control Block Diagram","text":""},{"location":"control/autoware_pid_longitudinal_controller/#feedforward-ff","title":"FeedForward (FF)","text":"

The reference acceleration set in the trajectory and slope compensation terms are output as a feedforward. Under ideal conditions with no modeling error, this FF term alone should be sufficient for velocity tracking.

Tracking errors caused by modeling or discretization errors are removed by the feedback control (currently a PID controller).

"},{"location":"control/autoware_pid_longitudinal_controller/#brake-keeping","title":"Brake keeping","text":"

From the viewpoint of ride comfort, stopping with 0 acceleration is important because it reduces the impact of braking. However, if the target acceleration when stopping is 0, the vehicle may cross over the stop line or accelerate a little in front of the stop line due to vehicle model error or gradient estimation error.

For reliable stopping, the target acceleration calculated by the FeedForward system is limited to a negative acceleration when stopping.

"},{"location":"control/autoware_pid_longitudinal_controller/#slope-compensation","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.
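As a rough sketch of this relation (assuming the slope angle \\(\\theta\\) is taken as positive uphill), the compensation adds the gravity component along the road surface to the reference acceleration:

\\[ a_{\\text{cmd}} = a_{\\text{ref}} + g \\sin\\theta \\]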

There are two sources of the slope information, which can be switched by a parameter: the pitch angle of the ego pose, or the z-level of the trajectory (see use_trajectory_for_pitch_calculation). The appropriate source can be selected depending on driving conditions.

Note: This function works correctly only in a vehicle system that does not have acceleration feedback in the low-level control system.

This compensation adds gravity correction to the target acceleration, resulting in an output value that is no longer equal to the target acceleration that the autonomous driving system desires. Therefore, it conflicts with the role of the acceleration feedback in the low-level controller. For instance, if the vehicle is attempting to start with an acceleration of 1.0 m/s^2 and a gravity correction of -1.0 m/s^2 is applied, the output value will be 0. If this output value is mistakenly treated as the target acceleration, the vehicle will not start.

A suitable example of a vehicle system for the slope compensation function is one in which the output acceleration from the longitudinal_controller is converted into a target accel/brake pedal input without any feedback. In this case, the output acceleration is just used as a feedforward term to calculate the target pedal, and hence the issue mentioned above does not arise.

Note: The angle of the slope is defined as positive for an uphill slope, while the pitch angle of the ego pose is defined as negative when facing upward. The two conventions have opposite signs.

"},{"location":"control/autoware_pid_longitudinal_controller/#pid-control","title":"PID control","text":"

For deviations that cannot be handled by FeedForward control, such as model errors, PID control is used to construct a feedback system.

This PID control calculates the target acceleration from the deviation between the current ego-velocity and the target velocity.
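In textbook form (a sketch only, with \\(e\\) the velocity deviation and \\(\\mathrm{sat}(\\cdot)\\) the per-term saturation described below), the feedback acceleration is:

\\[ a_{\\text{fb}}(t) = \\mathrm{sat}\\left(K_p e(t)\\right) + \\mathrm{sat}\\left(K_i \\int_0^t e(\\tau) \\, \\mathrm{d}\\tau\\right) + \\mathrm{sat}\\left(K_d \\dot{e}(t)\\right) \\]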

This PID logic has a maximum value for the output of each term. This is to prevent the following:

Note: by default, the integral term in the control system is not accumulated when the vehicle is stationary. This precautionary measure aims to prevent unintended accumulation of the integral term in scenarios where Autoware assumes the vehicle is engaged, but an external system has immobilized the vehicle to initiate startup procedures.

However, certain situations may arise, such as when the vehicle encounters a depression in the road surface during startup or if the slope compensation is inaccurately estimated (lower than necessary), leading to a failure to initiate motion. To address these scenarios, it is possible to activate error integration even when the vehicle is at rest by setting the enable_integration_at_low_speed parameter to true.

When enable_integration_at_low_speed is set to true, the PID controller will initiate integration of the acceleration error after a specified duration defined by the time_threshold_before_pid_integration parameter has elapsed without the vehicle surpassing a minimum velocity set by the current_vel_threshold_pid_integration parameter.

The presence of the time_threshold_before_pid_integration parameter is important for practical PID tuning. Integrating the error when the vehicle is stationary or at low speed can complicate PID tuning. This parameter effectively introduces a delay before the integral part becomes active, preventing it from kicking in immediately. This delay allows for more controlled and effective tuning of the PID controller.
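A minimal sketch of this gating logic, using the parameter names above (the function itself is hypothetical; the actual implementation is in the C++ controller):

# Sketch of the integration gating described above (hypothetical function).\ndef integration_allowed(current_vel, low_speed_elapsed_time,\n                        enable_integration_at_low_speed,\n                        current_vel_threshold_pid_integration,\n                        time_threshold_before_pid_integration):\n    if abs(current_vel) > current_vel_threshold_pid_integration:\n        return True  # the vehicle is moving: integrate as usual\n    if not enable_integration_at_low_speed:\n        return False  # stationary or slow: freeze the I-term by default\n    # slow or stationary, but the grace period has elapsed: start integrating\n    return low_speed_elapsed_time > time_threshold_before_pid_integration\n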

At present, PID control is implemented as a trade-off between development/maintenance cost and performance. It may be replaced by a higher-performance controller (adaptive control or robust control) in future development.

"},{"location":"control/autoware_pid_longitudinal_controller/#time-delay-compensation","title":"Time delay compensation","text":"

At high speeds, the delay of actuator systems such as gas pedals and brakes has a significant impact on driving accuracy. Depending on the actuating principle of the vehicle, the mechanism that physically controls the gas pedal and brake typically has a delay of about a hundred milliseconds.

In this controller, the predicted ego-velocity and the target velocity after the delay time are calculated and used for the feedback to address the time delay problem.
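A common way to realize this (a sketch, assuming a pure input delay \\(\\tau\\) so that the acceleration realized at time \\(t\\) is the command issued at \\(t-\\tau\\)) is to predict the velocity one delay ahead by integrating the already-issued commands:

\\[ v_{\\text{pred}}(t+\\tau) = v(t) + \\int_{t-\\tau}^{t} a_{\\text{cmd}}(s) \\, \\mathrm{d}s \\]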

"},{"location":"control/autoware_pid_longitudinal_controller/#slope-compensation_1","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.

There are two sources of the slope information, which can be switched by a parameter.

"},{"location":"control/autoware_pid_longitudinal_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Smoothed target velocity and its acceleration shall be set in the trajectory
    1. The velocity command is not smoothed inside the controller (only noise may be removed).
    2. For a step-like target signal, tracking is performed as fast as possible.
  2. The vehicle velocity must be an appropriate value
    1. The ego-velocity must be a signed value corresponding to the forward/backward direction
    2. The ego-velocity should be given with appropriate noise processing.
    3. If there is a large amount of noise in the ego-velocity, the tracking performance will be significantly reduced.
  3. The output of this controller must be achieved by later modules (e.g. vehicle interface).
    1. If the vehicle interface does not have the target velocity or acceleration interface (e.g., the vehicle only has a gas pedal and brake interface), an appropriate conversion must be done after this controller.
"},{"location":"control/autoware_pid_longitudinal_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_pid_longitudinal_controller/#input","title":"Input","text":"

Set the following from the controller_node

"},{"location":"control/autoware_pid_longitudinal_controller/#output","title":"Output","text":"

Return a LongitudinalOutput, which contains the following, to the controller node

"},{"location":"control/autoware_pid_longitudinal_controller/#pidcontroller-class","title":"PIDController class","text":"

The PIDController class is straightforward to use. First, gains and limits must be set (using setGains() and setLimits()) for the proportional (P), integral (I), and derivative (D) components. Then, the feedback value can be calculated by providing the current error and the time step duration to the calculate() function.
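A minimal Python sketch of this usage pattern (the real class is C++; the gains, limits, and error values below are placeholders, and the per-term limits are omitted for brevity):

# Python stand-in for the C++ PIDController usage pattern described above.\nclass PIDController:\n    def __init__(self):\n        self.kp = self.ki = self.kd = 0.0\n        self.min_out, self.max_out = float("-inf"), float("inf")\n        self.integral = 0.0\n        self.prev_error = None\n\n    def setGains(self, kp, ki, kd):\n        self.kp, self.ki, self.kd = kp, ki, kd\n\n    def setLimits(self, max_out, min_out):\n        self.max_out, self.min_out = max_out, min_out\n\n    def calculate(self, error, dt):\n        self.integral += error * dt\n        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt\n        self.prev_error = error\n        out = self.kp * error + self.ki * self.integral + self.kd * derivative\n        return min(self.max_out, max(self.min_out, out))\n\npid = PIDController()\npid.setGains(1.0, 0.1, 0.0)  # placeholder gains\npid.setLimits(1.0, -1.0)  # placeholder output limits [m/s^2]\nacc_cmd = pid.calculate(error=0.5, dt=0.03)  # velocity error [m/s], control period [s]\n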

"},{"location":"control/autoware_pid_longitudinal_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/longitudinal_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for driving under 40 km/h.

Name Type Description Default value delay_compensation_time double delay for longitudinal control [s] 0.17 enable_smooth_stop bool flag to enable transition to STOPPING true enable_overshoot_emergency bool flag to enable transition to EMERGENCY when the ego is over the stop line by a certain distance. See emergency_state_overshoot_stop_dist. true enable_large_tracking_error_emergency bool flag to enable transition to EMERGENCY when the closest trajectory point search fails due to a large deviation between trajectory and ego pose. true enable_slope_compensation bool flag to modify output acceleration for slope compensation. The source of the slope angle can be selected from ego-pose or trajectory angle. See use_trajectory_for_pitch_calculation. true enable_brake_keeping_before_stop bool flag to keep a certain acceleration during DRIVE state before the ego stops. See Brake keeping. false enable_keep_stopped_until_steer_convergence bool flag to keep stopped condition until the steer converges. true max_acc double max value of output acceleration [m/s^2] 3.0 min_acc double min value of output acceleration [m/s^2] -5.0 max_jerk double max value of jerk of output acceleration [m/s^3] 2.0 min_jerk double min value of jerk of output acceleration [m/s^3] -5.0 use_trajectory_for_pitch_calculation bool If true, the slope is estimated from trajectory z-level. Otherwise the pitch angle of the ego pose is used. false lpf_pitch_gain double gain of low-pass filter for pitch estimation 0.95 max_pitch_rad double max value of estimated pitch [rad] 0.1 min_pitch_rad double min value of estimated pitch [rad] -0.1"},{"location":"control/autoware_pid_longitudinal_controller/#state-transition","title":"State transition","text":"Name Type Description Default value drive_state_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 0.5 drive_state_offset_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 1.0 stopping_state_stop_dist double The state will transit to STOPPING when the distance to the stop point is shorter than stopping_state_stop_dist [m] 0.5 stopped_state_entry_vel double threshold of the ego velocity in transition to the STOPPED state [m/s] 0.01 stopped_state_entry_acc double threshold of the ego acceleration in transition to the STOPPED state [m/s^2] 0.1 emergency_state_overshoot_stop_dist double If enable_overshoot_emergency is true and the ego is emergency_state_overshoot_stop_dist-meter beyond the stop point, the state will transit to EMERGENCY. 
[m] 1.5"},{"location":"control/autoware_pid_longitudinal_controller/#drive-parameter","title":"DRIVE Parameter","text":"Name Type Description Default value kp double p gain for longitudinal control 1.0 ki double i gain for longitudinal control 0.1 kd double d gain for longitudinal control 0.0 max_out double max value of PID's output acceleration during DRIVE state [m/s^2] 1.0 min_out double min value of PID's output acceleration during DRIVE state [m/s^2] -1.0 max_p_effort double max value of acceleration with p gain 1.0 min_p_effort double min value of acceleration with p gain -1.0 max_i_effort double max value of acceleration with i gain 0.3 min_i_effort double min value of acceleration with i gain -0.3 max_d_effort double max value of acceleration with d gain 0.0 min_d_effort double min value of acceleration with d gain 0.0 lpf_vel_error_gain double gain of low-pass filter for velocity error 0.9 enable_integration_at_low_speed bool Whether to enable integration of acceleration errors when the vehicle speed is lower than current_vel_threshold_pid_integration or not. false current_vel_threshold_pid_integration double Velocity error is integrated for I-term only when the absolute value of current velocity is larger than this parameter. [m/s] 0.5 time_threshold_before_pid_integration double How much time without the vehicle moving must past to enable PID error integration. [s] 5.0 brake_keeping_acc double If enable_brake_keeping_before_stop is true, a certain acceleration is kept during DRIVE state before the ego stops [m/s^2] See Brake keeping. 0.2"},{"location":"control/autoware_pid_longitudinal_controller/#stopping-parameter-smooth-stop","title":"STOPPING Parameter (smooth stop)","text":"

Smooth stop is enabled if enable_smooth_stop is true. In smooth stop, a strong acceleration (strong_acc) is output first to decrease the ego velocity. Then a weak acceleration (weak_acc) is output to stop smoothly by decreasing the ego jerk. If the ego does not stop within a certain time, or goes some meters beyond the stop point, a weak acceleration to stop right now (weak_stop_acc) is output. If the ego is still running after that, a strong acceleration (strong_stop_acc) to stop right now is output.
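The selection logic can be sketched roughly as follows (a simplification with hypothetical names; the actual thresholds are the smooth_stop_* parameters in the table below):

# Simplified sketch of the smooth-stop acceleration selection (hypothetical names).\n# dist_to_stop: signed remaining distance to the stop point (negative when over it).\ndef smooth_stop_acc(vel, dist_to_stop, elapsed_time, p):\n    if dist_to_stop < p.strong_stop_dist:\n        return p.strong_stop_acc  # far over the stop point: stop right now, strongly\n    if elapsed_time > p.weak_stop_time or dist_to_stop < p.weak_stop_dist:\n        return p.weak_stop_acc  # took too long or slightly over: stop right now, weakly\n    if vel > p.max_fast_vel:\n        return p.strong_acc  # still running fast: decelerate strongly first\n    return p.weak_acc  # close to stopping: weak acceleration to reduce jerk\n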

Name Type Description Default value smooth_stop_max_strong_acc double max strong acceleration [m/s^2] -0.5 smooth_stop_min_strong_acc double min strong acceleration [m/s^2] -0.8 smooth_stop_weak_acc double weak acceleration [m/s^2] -0.3 smooth_stop_weak_stop_acc double weak acceleration to stop right now [m/s^2] -0.8 smooth_stop_strong_stop_acc double strong acceleration to be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m/s^2] -3.4 smooth_stop_max_fast_vel double max fast vel to judge the ego is running fast [m/s]. If the ego is running fast, strong acceleration will be output. 0.5 smooth_stop_min_running_vel double min ego velocity to judge if the ego is running or not [m/s] 0.01 smooth_stop_min_running_acc double min ego acceleration to judge if the ego is running or not [m/s^2] 0.01 smooth_stop_weak_stop_time double max time to output weak acceleration [s]. After this, strong acceleration will be output. 0.8 smooth_stop_weak_stop_dist double Weak acceleration will be output when the ego is smooth_stop_weak_stop_dist-meter before the stop point. [m] -0.3 smooth_stop_strong_stop_dist double Strong acceleration will be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m] -0.5"},{"location":"control/autoware_pid_longitudinal_controller/#stopped-parameter","title":"STOPPED Parameter","text":"

The STOPPED state assumes that the vehicle is completely stopped with the brakes fully applied. Therefore, stopped_acc should be set to a value that allows the vehicle to apply the strongest possible brake. If stopped_acc is not sufficiently low, there is a possibility of sliding down on steep slopes.

Name Type Description Default value stopped_vel double target velocity in STOPPED state [m/s] 0.0 stopped_acc double target acceleration in STOPPED state [m/s^2] -3.4 stopped_jerk double target jerk in STOPPED state [m/s^3] -5.0"},{"location":"control/autoware_pid_longitudinal_controller/#emergency-parameter","title":"EMERGENCY Parameter","text":"Name Type Description Default value emergency_vel double target velocity in EMERGENCY state [m/s] 0.0 emergency_acc double target acceleration in an EMERGENCY state [m/s^2] -5.0 emergency_jerk double target jerk in an EMERGENCY state [m/s^3] -3.0"},{"location":"control/autoware_pid_longitudinal_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/autoware_pid_longitudinal_controller/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/autoware_pid_longitudinal_controller/#related-issues","title":"Related issues","text":""},{"location":"control/autoware_pure_pursuit/","title":"Pure Pursuit Controller","text":""},{"location":"control/autoware_pure_pursuit/#pure-pursuit-controller","title":"Pure Pursuit Controller","text":"

The Pure Pursuit Controller module calculates the steering angle for tracking a desired trajectory using the pure pursuit algorithm. This is used as a lateral controller plugin in the autoware_trajectory_follower_node.

"},{"location":"control/autoware_pure_pursuit/#inputs","title":"Inputs","text":"

Set the following from the controller_node

"},{"location":"control/autoware_pure_pursuit/#outputs","title":"Outputs","text":"

Return a LateralOutput, which contains the following, to the controller node

"},{"location":"control/autoware_shift_decider/","title":"Shift Decider","text":""},{"location":"control/autoware_shift_decider/#shift-decider","title":"Shift Decider","text":""},{"location":"control/autoware_shift_decider/#purpose","title":"Purpose","text":"

autoware_shift_decider is a module that decides the gear shift command from the Ackermann control command.
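As a rough sketch of the idea (not the node's exact logic; the function and gear names here are illustrative), the gear is chosen from the sign of the commanded longitudinal velocity:

# Rough sketch: choose the gear from the sign of the commanded longitudinal velocity.\ndef decide_gear(cmd_longitudinal_velocity, current_gear):\n    if cmd_longitudinal_velocity > 0.0:\n        return "DRIVE"  # forward motion requested\n    if cmd_longitudinal_velocity < 0.0:\n        return "REVERSE"  # backward motion requested\n    return current_gear  # zero velocity command: keep the current gear\n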

"},{"location":"control/autoware_shift_decider/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_shift_decider/#flow-chart","title":"Flow chart","text":""},{"location":"control/autoware_shift_decider/#algorithms","title":"Algorithms","text":""},{"location":"control/autoware_shift_decider/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_shift_decider/#input","title":"Input","text":"Name Type Description ~/input/control_cmd autoware_control_msgs::msg::Control Control command for vehicle."},{"location":"control/autoware_shift_decider/#output","title":"Output","text":"Name Type Description ~output/gear_cmd autoware_vehicle_msgs::msg::GearCommand Gear for drive forward / backward."},{"location":"control/autoware_shift_decider/#parameters","title":"Parameters","text":"

none.

"},{"location":"control/autoware_shift_decider/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/","title":"Index","text":""},{"location":"control/autoware_smart_mpc_trajectory_follower/#smart-mpc-trajectory-follower","title":"Smart MPC Trajectory Follower","text":"

Smart MPC (Model Predictive Control) is a control algorithm that combines model predictive control and machine learning. While inheriting the advantages of model predictive control, it solves its disadvantage of modeling difficulty with a data-driven method using machine learning.

This technology makes it relatively easy to operate model predictive control, which is expensive to implement, as long as an environment for collecting data can be prepared.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#provided-features","title":"Provided features","text":"

This package provides smart MPC logic for path-following control as well as mechanisms for learning and evaluation. These features are described below.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#trajectory-following-control-based-on-ilqrmppi","title":"Trajectory following control based on iLQR/MPPI","text":"

The control mode can be selected from \"ilqr\", \"mppi\", or \"mppi_ilqr\", and can be set as mpc_parameter:system:mode in mpc_param.yaml. In \"mppi_ilqr\" mode, the initial value of iLQR is given by the MPPI solution.

[!NOTE] With the default settings, the performance of \"mppi\" mode is limited due to an insufficient number of samples. This issue is being addressed with ongoing work to introduce GPU support.

To perform a simulation, run the following command:

ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-planning vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit trajectory_follower_mode:=smart_mpc_trajectory_follower\n

[!NOTE] When running with the nominal model set in nominal_param.yaml, set trained_model_parameter:control_application:use_trained_model to false in trained_model_param.yaml. To run using the trained model, set trained_model_parameter:control_application:use_trained_model to true, but the trained model must have been generated according to the following procedure.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#training-of-model-and-reflection-in-control","title":"Training of model and reflection in control","text":"

To obtain training data, start autoware, perform a drive, and record rosbag data with the following commands.

ros2 bag record /localization/kinematic_state /localization/acceleration /vehicle/status/steering_status /control/command/control_cmd /control/trajectory_follower/control_cmd /control/trajectory_follower/lane_departure_checker_node/debug/deviation/lateral /control/trajectory_follower/lane_departure_checker_node/debug/deviation/yaw /system/operation_mode/state /vehicle/status/control_mode /sensing/imu/imu_data /debug_mpc_x_des /debug_mpc_y_des /debug_mpc_v_des /debug_mpc_yaw_des /debug_mpc_acc_des /debug_mpc_steer_des /debug_mpc_X_des_converted /debug_mpc_x_current /debug_mpc_error_prediction /debug_mpc_max_trajectory_err /debug_mpc_emergency_stop_mode /debug_mpc_goal_stop_mode /debug_mpc_total_ctrl_time /debug_mpc_calc_u_opt_time\n

Move rosbag2.bash to the rosbag directory recorded above and execute the following command in that directory:

bash rosbag2.bash\n

This converts rosbag data into CSV format for training models.

[!NOTE] Note that a large number of terminals are automatically opened at runtime, but they are automatically closed after rosbag data conversion is completed. From the time you begin this process until all terminals are closed, autoware should not be running.

Instead, the same result can be obtained by executing the following command in a python environment:

from autoware_smart_mpc_trajectory_follower.training_and_data_check import train_drive_NN_model\nmodel_trainer = train_drive_NN_model.train_drive_NN_model()\nmodel_trainer.transform_rosbag_to_csv(rosbag_dir)\n

Here, rosbag_dir represents the rosbag directory. At this time, all CSV files in rosbag_dir are automatically deleted first.

We move on to an explanation of how the model is trained. If trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to true, training is performed on models that include LSTM, and if it is set to false, training is performed on models that do not include LSTM. When using LSTM, cell states and hidden states are updated based on historical time series data and reflected in the prediction.

Given the paths of the rosbag directories used for training and validation, dir_0, dir_1, dir_2,..., dir_val_0, dir_val_1, dir_val_2,..., and the directory save_dir where the models are saved, the model can be trained and saved in the python environment as follows:

from autoware_smart_mpc_trajectory_follower.training_and_data_check import train_drive_NN_model\nmodel_trainer = train_drive_NN_model.train_drive_NN_model()\nmodel_trainer.add_data_from_csv(dir_0, add_mode=\"as_train\")\nmodel_trainer.add_data_from_csv(dir_1, add_mode=\"as_train\")\nmodel_trainer.add_data_from_csv(dir_2, add_mode=\"as_train\")\n...\nmodel_trainer.add_data_from_csv(dir_val_0, add_mode=\"as_val\")\nmodel_trainer.add_data_from_csv(dir_val_1, add_mode=\"as_val\")\nmodel_trainer.add_data_from_csv(dir_val_2, add_mode=\"as_val\")\n...\nmodel_trainer.get_trained_model()\nmodel_trainer.save_models(save_dir)\n

If add_mode is not specified or validation data is not added, the training data is split to be used for training and validation.

After performing the polynomial regression, the NN can be trained on the residuals as follows:

model_trainer.get_trained_model(use_polynomial_reg=True)\n

[!NOTE] In the default setting, regression is performed by several preselected polynomials. When use_selected_polynomial=False is set as the argument of get_trained_model, the deg argument allows setting the maximum degree of the polynomial to be used.

If only polynomial regression is performed and no NN model is used, run the following command:

model_trainer.get_trained_model(use_polynomial_reg=True,force_NN_model_to_zero=True)\n

Move model_for_test_drive.pth and polynomial_reg_info.npz saved in save_dir to the home directory and set trained_model_parameter:control_application:use_trained_model in trained_model_param.yaml to true to reflect the trained model in the control.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#performance-evaluation","title":"Performance evaluation","text":"

Here, as an example, we describe the verification of the adaptive performance when the wheel base of the sample_vehicle is 2.79 m, but an incorrect value of 2.0 m is given to the controller side. To give the controller 2.0 m as the wheel base, set the value of nominal_parameter:vehicle_info:wheel_base in nominal_param.yaml to 2.0, and run the following command:

python3 -m smart_mpc_trajectory_follower.clear_pycache\n
"},{"location":"control/autoware_smart_mpc_trajectory_follower/#test-on-autoware","title":"Test on autoware","text":"

To perform a control test on autoware with the nominal model before training, make sure that trained_model_parameter:control_application:use_trained_model in trained_model_param.yaml is false and launch autoware in the manner described in \"Trajectory following control based on iLQR/MPPI\". This time, the following route will be used for the test:

Record rosbag and train the model in the manner described in "Training of model and reflection in control", and move the generated files model_for_test_drive.pth and polynomial_reg_info.npz to the home directory. Sample models, which work under the condition that trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to true, can be obtained at sample_models/wheel_base_changed.

[!NOTE] For the sake of simplicity, the amount of data used for training is small; we will see how much performance can be improved with this amount of data.

To control using the trained model obtained here, set trained_model_parameter:control_application:use_trained_model to true, start autoware in the same way, and drive the same route while recording a rosbag. After the driving is complete, convert the rosbag file to CSV format using the method described in "Training of model and reflection in control". A plot of the lateral deviation is obtained by running the lateral_error_visualize function in control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/training_and_data_check/data_checker.ipynb for the nominal and trained model rosbag files rosbag_nominal and rosbag_trained, respectively, as follows:

lateral_error_visualize(dir_name=rosbag_nominal,ylim=[-1.2,1.2])\nlateral_error_visualize(dir_name=rosbag_trained,ylim=[-1.2,1.2])\n

The following results were obtained.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#test-on-python-simulator","title":"Test on python simulator","text":"

First, to give a wheel base of 2.79 m to the python simulator, create the following file and save it in control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator with the name sim_setting.json:

{ \"wheel_base\": 2.79 }\n

Next, after moving to control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator, run the following commands to test the slalom driving on the python simulator with the nominal control:

python3 run_python_simulator.py nominal_test\n

The result of the driving is stored in test_python_nominal_sim.

The following results were obtained.

The center of the upper row represents the lateral deviation.

To perform training using figure eight driving data collected under pure pursuit control, and then drive based on the obtained model, run the following command:

python3 run_python_simulator.py\n

The result of the driving is stored in test_python_trined_sim.

When trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to true, the following results were obtained.

When trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to false, the following results were obtained.

It can be seen that the lateral deviation has improved significantly. However, the difference in driving with and without LSTM is not very apparent.

To see the difference, for example, we can experiment with parameters such as steer_time_delay.

First, to restore nominal model settings to default values, set the value of nominal_parameter:vehicle_info:wheel_base in nominal_param.yaml to 2.79, and run the following command:

python3 -m smart_mpc_trajectory_follower.clear_pycache\n

Next, modify sim_setting.json as follows:

{ \"steer_time_delay\": 1.01 }\n

In this way, an experiment is performed when steer_time_delay is set to 1.01 sec.

The result of the driving using the nominal model is as follows:

The result of the driving using the trained model with LSTM is as follows:

The result of the driving using the trained model without LSTM is as follows:

It can be seen that the performance with the model that includes LSTM is significantly better than with the model that does not.

The parameters that can be passed to the python simulator are as follows.

Parameter Type Description steer_bias float steer bias [rad] steer_rate_lim float steer rate limit [rad/s] vel_rate_lim float acceleration limit [m/s^2] wheel_base float wheel base [m] steer_dead_band float steer dead band [rad] adaptive_gear_ratio_coef list[float] List of floats of length 6 specifying information on speed-dependent gear ratios from tire angle to steering wheel angle. acc_time_delay float acceleration time delay [s] steer_time_delay float steer time delay [s] acc_time_constant float acceleration time constant [s] steer_time_constant float steer time constant [s] accel_map_scale float Parameter that magnifies the corresponding distortion from acceleration input values to actual acceleration realizations. Correspondence information is kept in control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator/accel_map.csv. acc_scaling float acceleration scaling steer_scaling float steer scaling vehicle_type int Take values from 0 to 4 for pre-designed vehicle types. A description of each vehicle type is given below.

For example, to give the simulation side 0.01 [rad] of steer bias and 0.001 [rad] of steer dead band, edit the sim_setting.json as follows.

{ \"steer_bias\": 0.01, \"steer_dead_band\": 0.001 }\n
"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_0","title":"vehicle_type_0","text":"

This vehicle type matches the default vehicle type used in the control.

Parameter value wheel_base 2.79 acc_time_delay 0.1 steer_time_delay 0.27 acc_time_constant 0.1 steer_time_constant 0.24 acc_scaling 1.0"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_1","title":"vehicle_type_1","text":"

This vehicle type is intended for a heavy bus.

Parameter value wheel_base 4.76 acc_time_delay 1.0 steer_time_delay 1.0 acc_time_constant 1.0 steer_time_constant 1.0 acc_scaling 0.2"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_2","title":"vehicle_type_2","text":"

This vehicle type is intended for a light bus.

Parameter value wheel_base 4.76 acc_time_delay 0.5 steer_time_delay 0.5 acc_time_constant 0.5 steer_time_constant 0.5 acc_scaling 0.5"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_3","title":"vehicle_type_3","text":"

This vehicle type is intended for a small vehicle.

Parameter value wheel_base 1.335 acc_time_delay 0.3 steer_time_delay 0.3 acc_time_constant 0.3 steer_time_constant 0.3 acc_scaling 1.5"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_4","title":"vehicle_type_4","text":"

This vehicle type is intended for a small robot.

Parameter value wheel_base 0.395 acc_time_delay 0.2 steer_time_delay 0.2 acc_time_constant 0.2 steer_time_constant 0.2 acc_scaling 1.0"},{"location":"control/autoware_smart_mpc_trajectory_follower/#auto-test-on-python-simulator","title":"Auto test on python simulator","text":"

Here, we describe a method for testing adaptive performance by giving the simulation side a predefined range of model parameters while the control side is given constant model parameters.

To run a driving experiment within the parameter change range set in run_sim.py, for example, move to control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator and run the following command:

python3 run_sim.py --param_name steer_bias\n

Here we described the experimental procedure for steer bias, and the same method can be used for other parameters.

To run the test for all parameters except limits at once, run the following command:

python3 run_auto_test.py\n

The results are stored in the auto_test directory. After the executions were completed, the following results were obtained by running plot_auto_test_result.ipynb:

The orange line shows the intermediate model trained using pure pursuit figure eight drive, and the blue line shows the final model trained using data from both the intermediate model and the figure eight drive. In most cases, sufficient performance is obtained, but for vehicle_type_1, which is intended for a heavy bus, a lateral deviation of about 2 m was observed, which is not satisfactory.

In run_sim.py, the following parameters can be set:

Parameter Type Description USE_TRAINED_MODEL_DIFF bool Whether the derivative of the trained model is reflected in the control DATA_COLLECTION_MODE DataCollectionMode Which method will be used to collect the training data. "DataCollectionMode.ff": Straight line driving with feed-forward input. "DataCollectionMode.pp": Figure eight driving with pure pursuit control. "DataCollectionMode.mpc": Slalom driving with MPC. USE_POLYNOMIAL_REGRESSION bool Whether to perform polynomial regression before NN USE_SELECTED_POLYNOMIAL bool When USE_POLYNOMIAL_REGRESSION is True, perform polynomial regression using only some preselected polynomials. The choice of polynomials is intended to be able to absorb the contribution of some parameter shifts based on the nominal model of the vehicle. FORCE_NN_MODEL_TO_ZERO bool Whether to force the NN model to zero (i.e., erase the contribution of the NN model). When USE_POLYNOMIAL_REGRESSION is True, setting FORCE_NN_MODEL_TO_ZERO to True allows the control to reflect the results of polynomial regression only, without using NN models. FIT_INTERCEPT bool Whether to include bias in polynomial regression. If it is False, perform the regression with a polynomial of the first degree or higher. USE_INTERCEPT bool When a polynomial regression including bias is performed, whether to use or discard the resulting bias information. It is meaningful only if FIT_INTERCEPT is True. If it is False, discard the bias in the polynomial regression in the hope that the NN model can remove the bias term, even if the polynomial regression is performed with the bias included.
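For illustration only, these settings might appear as constants in run_sim.py (hypothetical values; DataCollectionMode is the enum defined in run_sim.py itself):

# Hypothetical example of the run_sim.py settings described above.\nUSE_TRAINED_MODEL_DIFF = True\nDATA_COLLECTION_MODE = DataCollectionMode.pp  # figure eight driving with pure pursuit\nUSE_POLYNOMIAL_REGRESSION = True\nUSE_SELECTED_POLYNOMIAL = True\nFORCE_NN_MODEL_TO_ZERO = False\nFIT_INTERCEPT = True\nUSE_INTERCEPT = False\n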

[!NOTE] When run_sim.py is run, the USE_TRAINED_MODEL_DIFF set in run_sim.py takes precedence over the trained_model_parameter:control_application:use_trained_model_diff set in trained_model_param.yaml.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#kernel-density-estimation-of-pure-pursuit-driving-data","title":"Kernel density estimation of pure pursuit driving data","text":"

The distribution of data obtained from pure pursuit runs can be displayed using kernel density estimation. To do this, run density_estimation.ipynb.

The correlation between the minimum value of the density estimate and the lateral deviation of the run results is low. A scalar indicator that better predicts the value of lateral deviation is under development.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#change-of-nominal-parameters-and-their-reloading","title":"Change of nominal parameters and their reloading","text":"

The nominal parameters of the vehicle model can be changed by editing the file nominal_param.yaml. After changing the nominal parameters, the cache must be deleted by running the following command:

python3 -m smart_mpc_trajectory_follower.clear_pycache\n

The nominal parameters include the following:

Parameter Type Description nominal_parameter:vehicle_info:wheel_base float wheel base [m] nominal_parameter:acceleration:acc_time_delay float acceleration time delay [s] nominal_parameter:acceleration:acc_time_constant float acceleration time constant [s] nominal_parameter:steering:steer_time_delay float steer time delay [s] nominal_parameter:steering:steer_time_constant float steer time constant [s]"},{"location":"control/autoware_smart_mpc_trajectory_follower/#change-of-control-parameters-and-their-reloading","title":"Change of control parameters and their reloading","text":"

The control parameters can be changed by editing files mpc_param.yaml and trained_model_param.yaml. Although it is possible to reflect parameter changes by restarting autoware, the following command allows us to do so without leaving autoware running:

ros2 topic pub /pympc_reload_mpc_param_trigger std_msgs/msg/String \"data: ''\" --once\n

The main parameters among the control parameters are as follows.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#mpc_paramyaml","title":"mpc_param.yaml","text":"Parameter Type Description mpc_parameter:system:mode str control mode \"ilqr\": iLQR mode \"mppi\": MPPI mode \"mppi_ilqr\": the initial value of iLQR is given by the MPPI solution. mpc_parameter:cost_parameters:Q list[float] Stage cost for states. List of length 8, in order: straight deviation, lateral deviation, velocity deviation, yaw angle deviation, acceleration deviation, steer deviation, acceleration input deviation, steer input deviation cost weights. mpc_parameter:cost_parameters:Q_c list[float] Cost in the horizon corresponding to the following timing_Q_c for the states. The correspondence of the components of the list is the same as for Q. mpc_parameter:cost_parameters:Q_f list[float] Termination cost for the states. The correspondence of the components of the list is the same as for Q. mpc_parameter:cost_parameters:R list[float] A list of length 2 where R[0] is weight of cost for the change rate of acceleration input value and R[1] is weight of cost for the change rate of steer input value. mpc_parameter:mpc_setting:timing_Q_c list[int] Horizon numbers such that the stage cost for the states is set to Q_c. mpc_parameter:compensation:acc_fb_decay float Coefficient of damping in integrating the error between the observed and predicted acceleration values in the compensator outside the MPC. mpc_parameter:compensation:acc_fb_gain float Gain of acceleration compensation. mpc_parameter:compensation:max_error_acc float Maximum acceleration compensation (m/s^2) mpc_parameter:compensation:steer_fb_decay float Coefficient of damping in integrating the error between the observed and predicted steering values in the compensator outside the MPC. mpc_parameter:compensation:steer_fb_gain float Gain of steering compensation. mpc_parameter:compensation:max_error_steer float Maximum steering compensation (rad)"},{"location":"control/autoware_smart_mpc_trajectory_follower/#trained_model_paramyaml","title":"trained_model_param.yaml","text":"Parameter Type Description trained_model_parameter:control_application:use_trained_model bool Whether the trained model is reflected in the control or not. trained_model_parameter:control_application:use_trained_model_diff bool Whether the derivative of the trained model is reflected on the control or not. It is meaningful only when use_trained_model is True, and if False, the nominal model is used for the derivative of the dynamics, and trained model is used only for prediction. trained_model_parameter:memory_for_training:use_memory_for_training bool Whether to use the model that includes LSTM for learning or not. trained_model_parameter:memory_for_training:use_memory_diff bool Whether the derivative with respect to the cell state and hidden state at the previous time of LSTM is reflected in the control or not."},{"location":"control/autoware_smart_mpc_trajectory_follower/#request-to-release-the-slow-stop-mode","title":"Request to release the slow stop mode","text":"

If the predicted trajectory deviates too far from the target trajectory, the system enters a slow stop mode and the vehicle stops moving. To cancel the slow stop mode and make the vehicle ready to run again, run the following command:

ros2 topic pub /pympc_stop_mode_reset_request std_msgs/msg/String \"data: ''\" --once\n
"},{"location":"control/autoware_smart_mpc_trajectory_follower/#limitation","title":"Limitation","text":" "},{"location":"control/autoware_trajectory_follower_base/","title":"Trajectory Follower","text":""},{"location":"control/autoware_trajectory_follower_base/#trajectory-follower","title":"Trajectory Follower","text":"

This is the design document for the trajectory_follower package.

"},{"location":"control/autoware_trajectory_follower_base/#purpose-use-cases","title":"Purpose / Use cases","text":"

This package provides the interface of longitudinal and lateral controllers used by the node of the autoware_trajectory_follower_node package. We can implement a detailed controller by deriving the longitudinal and lateral base interfaces.

"},{"location":"control/autoware_trajectory_follower_base/#design","title":"Design","text":"

There are lateral and longitudinal base interface classes, and each algorithm inherits from them to implement its control logic. The interface classes have the following base functions.

See the Design of Trajectory Follower Nodes for how these functions work in the node.

"},{"location":"control/autoware_trajectory_follower_base/#separated-lateral-steering-and-longitudinal-velocity-controls","title":"Separated lateral (steering) and longitudinal (velocity) controls","text":"

This longitudinal controller assumes that the roles of lateral and longitudinal control are separated as follows.

Ideally, dealing with the lateral and longitudinal control as a single mixed problem can achieve high performance. In contrast, there are two reasons, described below, to provide the velocity controller as a stand-alone function.

"},{"location":"control/autoware_trajectory_follower_base/#complex-requirements-for-longitudinal-motion","title":"Complex requirements for longitudinal motion","text":"

The longitudinal vehicle behavior that humans expect is difficult to express in a single logic. For example, to achieve a human-like movement, the expected behavior just before stopping differs depending on whether the ego-position is ahead of or behind the stop line, or whether the current speed is higher or lower than the target speed.

In addition, some vehicles have difficulty measuring the ego-speed at extremely low speeds. In such cases, a configuration that can improve the functionality of the longitudinal control without affecting the lateral control is important.

There are many characteristics and needs that are unique to longitudinal control. Designing them separately from the lateral control keeps the modules less coupled and improves maintainability.

"},{"location":"control/autoware_trajectory_follower_base/#nonlinear-coupling-of-lateral-and-longitudinal-motion","title":"Nonlinear coupling of lateral and longitudinal motion","text":"

The lat-lon mixed control problem is very complex and uses nonlinear optimization to achieve high performance. Since it is difficult to guarantee the convergence of the nonlinear optimization, a simple control logic is also necessary for development.

Also, the benefits of simultaneous longitudinal and lateral control are small if the vehicle doesn't move at high speed.

"},{"location":"control/autoware_trajectory_follower_base/#related-issues","title":"Related issues","text":""},{"location":"control/autoware_trajectory_follower_node/","title":"Trajectory Follower Nodes","text":""},{"location":"control/autoware_trajectory_follower_node/#trajectory-follower-nodes","title":"Trajectory Follower Nodes","text":""},{"location":"control/autoware_trajectory_follower_node/#purpose","title":"Purpose","text":"

Generate control commands to follow a given Trajectory.

"},{"location":"control/autoware_trajectory_follower_node/#design","title":"Design","text":"

This node executes the functionalities implemented in controller classes derived from the autoware_trajectory_follower_base package. It has instances of those controllers, gives them input data to perform calculations, and publishes control commands.

By default, a controller instance of the Controller class shown below is used.

The process flow of the Controller class is as follows.

// 1. create input data\nconst auto input_data = createInputData(*get_clock());\nif (!input_data) {\nreturn;\n}\n\n// 2. check if controllers are ready\nconst bool is_lat_ready = lateral_controller_->isReady(*input_data);\nconst bool is_lon_ready = longitudinal_controller_->isReady(*input_data);\nif (!is_lat_ready || !is_lon_ready) {\nreturn;\n}\n\n// 3. run controllers\nconst auto lat_out = lateral_controller_->run(*input_data);\nconst auto lon_out = longitudinal_controller_->run(*input_data);\n\n// 4. sync with each other controllers\nlongitudinal_controller_->sync(lat_out.sync_data);\nlateral_controller_->sync(lon_out.sync_data);\n\n// 5. publish control command\ncontrol_cmd_pub_->publish(out);\n

Giving the longitudinal controller information about steer convergence allows it to keep the vehicle stopped until the steering converges, if the following parameters are true:

"},{"location":"control/autoware_trajectory_follower_node/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_trajectory_follower_node/#inputs","title":"Inputs","text":""},{"location":"control/autoware_trajectory_follower_node/#outputs","title":"Outputs","text":""},{"location":"control/autoware_trajectory_follower_node/#parameter","title":"Parameter","text":""},{"location":"control/autoware_trajectory_follower_node/#debugging","title":"Debugging","text":"

Debug information is published by the lateral and longitudinal controllers using autoware_internal_debug_msgs/Float32MultiArrayStamped messages.

A configuration file for PlotJuggler is provided in the config folder which, when loaded, allows you to automatically subscribe to and visualize information useful for debugging.

In addition, the predicted MPC trajectory is published on topic output/lateral/predicted_trajectory and can be visualized in Rviz.

"},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/","title":"Simple Trajectory Follower","text":""},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#simple-trajectory-follower","title":"Simple Trajectory Follower","text":""},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#purpose","title":"Purpose","text":"

Provide a base trajectory follower code that is simple and flexible to use. This node calculates the control command based on a reference trajectory and the ego vehicle kinematics.

"},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#design","title":"Design","text":""},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#inputs-outputs","title":"Inputs / Outputs","text":"

Inputs

"},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#parameters","title":"Parameters","text":"Name Type Description Default value use_external_target_vel bool use external target velocity defined by parameter when true, else follow the velocity on target trajectory points. false external_target_vel float target velocity used when use_external_target_vel is true. 0.0 lateral_deviation float target lateral deviation when following. 0.0"},{"location":"control/autoware_vehicle_cmd_gate/","title":"vehicle_cmd_gate","text":""},{"location":"control/autoware_vehicle_cmd_gate/#vehicle_cmd_gate","title":"vehicle_cmd_gate","text":""},{"location":"control/autoware_vehicle_cmd_gate/#purpose","title":"Purpose","text":"

vehicle_cmd_gate is the package that receives information from the emergency handler, the planning module, and external controllers, and sends commands to the vehicle.

"},{"location":"control/autoware_vehicle_cmd_gate/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_vehicle_cmd_gate/#input","title":"Input","text":"Name Type Description ~/input/steering autoware_vehicle_msgs::msg::SteeringReport steering status ~/input/auto/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity from planning module ~/input/auto/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from planning module ~/input/auto/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command from planning module ~/input/auto/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command from planning module ~/input/external/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity from external ~/input/external/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from external ~/input/external/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command from external ~/input/external/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command from external ~/input/external_emergency_stop_heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/input/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/input/emergency/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity from emergency handler ~/input/emergency/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command from emergency handler ~/input/emergency/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command from emergency handler ~/input/engage autoware_vehicle_msgs::msg::Engage engage signal ~/input/operation_mode autoware_adapi_v1_msgs::msg::OperationModeState operation mode of Autoware"},{"location":"control/autoware_vehicle_cmd_gate/#output","title":"Output","text":"Name Type Description ~/output/vehicle_cmd_emergency tier4_vehicle_msgs::msg::VehicleEmergencyStamped emergency state which was originally in vehicle command ~/output/command/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity to vehicle ~/output/command/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command to vehicle ~/output/command/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command to vehicle ~/output/command/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command to vehicle ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/output/engage autoware_vehicle_msgs::msg::Engage engage signal ~/output/external_emergency tier4_external_api_msgs::msg::Emergency external emergency signal ~/output/operation_mode tier4_system_msgs::msg::OperationMode current operation mode of the vehicle_cmd_gate"},{"location":"control/autoware_vehicle_cmd_gate/#parameters","title":"Parameters","text":"Parameter Type Description update_period double update period use_emergency_handling bool true when emergency handler is used check_external_emergency_heartbeat bool true when checking heartbeat for emergency stop system_emergency_heartbeat_timeout double timeout for system emergency external_emergency_stop_heartbeat_timeout double timeout for external emergency filter_activated_count_threshold int threshold for filter activation filter_activated_velocity_threshold double velocity threshold for 
filter activation stop_hold_acceleration double longitudinal acceleration cmd when the vehicle should stop emergency_acceleration double longitudinal acceleration cmd when the vehicle stops in an emergency moderate_stop_service_acceleration double longitudinal acceleration cmd when the vehicle stops with the moderate stop service nominal.vel_lim double limit of longitudinal velocity (activated in AUTONOMOUS operation mode) nominal.reference_speed_point array velocity point used as a reference when calculating the control command limit (activated in AUTONOMOUS operation mode). The size of this array must be equivalent to the size of the limit array. nominal.lon_acc_lim array of limits of longitudinal acceleration (activated in AUTONOMOUS operation mode) nominal.lon_jerk_lim array of limits of longitudinal jerk (activated in AUTONOMOUS operation mode) nominal.lat_acc_lim array of limits of lateral acceleration (activated in AUTONOMOUS operation mode) nominal.lat_jerk_lim array of limits of lateral jerk (activated in AUTONOMOUS operation mode) on_transition.vel_lim double limit of longitudinal velocity (activated in TRANSITION operation mode) on_transition.reference_speed_point array velocity point used as a reference when calculating the control command limit (activated in TRANSITION operation mode). The size of this array must be equivalent to the size of the limit array. on_transition.lon_acc_lim array of limits of longitudinal acceleration (activated in TRANSITION operation mode) on_transition.lon_jerk_lim array of limits of longitudinal jerk (activated in TRANSITION operation mode) on_transition.lat_acc_lim array of limits of lateral acceleration (activated in TRANSITION operation mode) on_transition.lat_jerk_lim array of limits of lateral jerk (activated in TRANSITION operation mode)"},{"location":"control/autoware_vehicle_cmd_gate/#filter-function","title":"Filter function","text":"

This module incorporates a limitation filter into the control command right before it is published. Primarily for safety, this filter restricts the output range of all control commands published through Autoware.

The limitation values are calculated based on the 1D interpolation of the limitation array parameters. Here is an example for the longitudinal jerk limit.
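As an illustration (a sketch with made-up numbers, taking nominal.reference_speed_point as the x-axis and nominal.lon_jerk_lim as the y-axis), the limit at the current velocity can be obtained with a simple 1D interpolation:

import numpy as np\n\n# Made-up example: jerk limits defined at two reference speed points.\nreference_speed_point = np.array([20.0, 30.0])  # [m/s]\nlon_jerk_lim = np.array([5.0, 1.5])  # [m/s^3]\n\n# Linear interpolation; outside the range, the nearest limit is held constant.\ncurrent_vel = 25.0  # [m/s]\njerk_lim = float(np.interp(current_vel, reference_speed_point, lon_jerk_lim))\nprint(jerk_lim)  # 3.25\n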

Note: this filter is not designed to enhance ride comfort. Its main purpose is to detect and remove abnormal values in the control outputs during the final stages of Autoware. If this filter is frequently activated, it implies the control module may need tuning. If you're aiming to smoothen the signal via a low-pass filter or similar techniques, that should be handled in the control module. When the filter is activated, the topic ~/is_filter_activated is published.

Note 2: If you use vehicles in which the driving force is controlled by the accelerator/brake pedal, the jerk limit, which acts as the pedal rate limit, must be sufficiently relaxed at low speeds. Otherwise, quick pedal changes at start/stop become impossible, resulting in slow starts and rollback on hills. This start/stop functionality was once embedded in the source code but was removed because it was complex and can be achieved through parameters.

"},{"location":"control/autoware_vehicle_cmd_gate/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"control/autoware_vehicle_cmd_gate/#external-emergency-heartbeat","title":"External Emergency Heartbeat","text":"

The parameter check_external_emergency_heartbeat (true by default) enables an emergency stop request from external modules. This feature requires a ~/input/external_emergency_stop_heartbeat topic for health monitoring of the external module, and the vehicle_cmd_gate module will not start without the topic. The check_external_emergency_heartbeat parameter must be false when the \"external emergency stop\" function is not used.

"},{"location":"control/autoware_vehicle_cmd_gate/#commands-on-mode-changes","title":"Commands on Mode changes","text":"

The output command topics turn_indicators_cmd, hazard_lights_cmd, and gear_cmd are selected based on gate_mode. However, to ensure the continuity of commands, these outputs do not change until the topics of the new input commands arrive, even if a mode change occurs.

"},{"location":"control/control_performance_analysis/","title":"control_performance_analysis","text":""},{"location":"control/control_performance_analysis/#control_performance_analysis","title":"control_performance_analysis","text":""},{"location":"control/control_performance_analysis/#purpose","title":"Purpose","text":"

control_performance_analysis is a package for analyzing the tracking performance of a control module and monitoring the driving status of the vehicle.

This package is a measurement tool that quantifies the results of the control module; it therefore does not interfere with the core logic of autonomous driving.

Based on various inputs from the planning, control, and vehicle modules, it publishes the analysis results as control_performance_analysis::msg::ErrorStamped, defined in this package.

All results in the ErrorStamped message are calculated in the Frenet frame of the reference curve. The errors and error velocities are calculated using the paper below.

Werling, M., Groell, L., & Bretthauer, G. (2010). Invariant Trajectory Tracking With a Full-Size Autonomous Road Vehicle. IEEE Transactions on Robotics, 26, 758–765. doi:10.1109/TRO.2010.2052325.

If you are interested in the calculations, the error and error velocity derivations are given in Section C, Asymptotical Trajectory Tracking With Orientation Control.

The error acceleration is derived from the error velocity calculations above; its calculation is shown below.
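
For instance, assuming a simple backward finite difference over the sampling interval \\(\\Delta t\\) (an illustrative sketch, not necessarily the exact implementation), the lateral error acceleration is:

\\[ \\ddot{e}_{lat,k} \\approx (\\dot{e}_{lat,k} - \\dot{e}_{lat,k-1}) / \\Delta t \\]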

"},{"location":"control/control_performance_analysis/#input-output","title":"Input / Output","text":""},{"location":"control/control_performance_analysis/#input-topics","title":"Input topics","text":"Name Type Description /planning/scenario_planning/trajectory autoware_planning_msgs::msg::Trajectory Output trajectory from planning module. /control/command/control_cmd autoware_control_msgs::msg::Control Output control command from control module. /vehicle/status/steering_status autoware_vehicle_msgs::msg::SteeringReport Steering information from vehicle. /localization/kinematic_state nav_msgs::msg::Odometry Use twist from odometry. /tf tf2_msgs::msg::TFMessage Extract ego pose from tf."},{"location":"control/control_performance_analysis/#output-topics","title":"Output topics","text":"Name Type Description /control_performance/performance_vars control_performance_analysis::msg::ErrorStamped The result of the performance analysis. /control_performance/driving_status control_performance_analysis::msg::DrivingMonitorStamped Driving status (acceleration, jerk etc.) monitoring"},{"location":"control/control_performance_analysis/#outputs","title":"Outputs","text":""},{"location":"control/control_performance_analysis/#control_performance_analysismsgdrivingmonitorstamped","title":"control_performance_analysis::msg::DrivingMonitorStamped","text":"Name Type Description longitudinal_acceleration float \\([ \\mathrm{m/s^2} ]\\) longitudinal_jerk float \\([ \\mathrm{m/s^3} ]\\) lateral_acceleration float \\([ \\mathrm{m/s^2} ]\\) lateral_jerk float \\([ \\mathrm{m/s^3} ]\\) desired_steering_angle float \\([ \\mathrm{rad} ]\\) controller_processing_time float Timestamp between last two control command messages \\([ \\mathrm{ms} ]\\)"},{"location":"control/control_performance_analysis/#control_performance_analysismsgerrorstamped","title":"control_performance_analysis::msg::ErrorStamped","text":"Name Type Description lateral_error float \\([ \\mathrm{m} ]\\) lateral_error_velocity float \\([ \\mathrm{m/s} ]\\) lateral_error_acceleration float \\([ \\mathrm{m/s^2} ]\\) longitudinal_error float \\([ \\mathrm{m} ]\\) longitudinal_error_velocity float \\([ \\mathrm{m/s} ]\\) longitudinal_error_acceleration float \\([ \\mathrm{m/s^2} ]\\) heading_error float \\([ \\mathrm{rad} ]\\) heading_error_velocity float \\([ \\mathrm{rad/s} ]\\) control_effort_energy float \\([ \\mathbf{u}^\\top \\mathbf{R} \\mathbf{u} ]\\) simplified to \\([ R \\cdot u^2 ]\\) error_energy float \\(e_{\\text{lat}}^2 + e_\\theta^2\\) (squared lateral error + squared heading error) value_approximation float \\(V = \\mathbf{x}^\\top \\mathbf{P} \\mathbf{x}\\); Value function from DARE Lyapunov matrix \\(\\mathbf{P}\\) curvature_estimate float \\([ \\mathrm{1/m} ]\\) curvature_estimate_pp float \\([ \\mathrm{1/m} ]\\) vehicle_velocity_error float \\([ \\mathrm{m/s} ]\\) tracking_curvature_discontinuity_ability float Measures the ability to track curvature changes \\(\\frac{\\lvert \\Delta(\\text{curvature}) \\rvert}{1 + \\lvert \\Delta(e_{\\text{lat}}) \\rvert}\\)"},{"location":"control/control_performance_analysis/#parameters","title":"Parameters","text":"Name Type Description curvature_interval_length double Used for estimating current curvature prevent_zero_division_value double Value to avoid zero division. Default is 0.001 odom_interval unsigned integer Interval between odom messages, increase it for smoother curve. 
acceptable_max_distance_to_waypoint double Maximum distance between trajectory point and vehicle [m] acceptable_max_yaw_difference_rad double Maximum yaw difference between trajectory point and vehicle [rad] low_pass_filter_gain double Low pass filter gain"},{"location":"control/control_performance_analysis/#usage","title":"Usage","text":" "},{"location":"control/control_performance_analysis/#future-improvements","title":"Future Improvements","text":""},{"location":"control/predicted_path_checker/","title":"Predicted Path Checker","text":""},{"location":"control/predicted_path_checker/#predicted-path-checker","title":"Predicted Path Checker","text":""},{"location":"control/predicted_path_checker/#purpose","title":"Purpose","text":"

The Predicted Path Checker package is designed for autonomous vehicles to check the predicted path generated by the control module. It handles potential collisions that the planning module may not be able to handle and that lie within the braking distance. If a collision within the braking distance is detected, the package sends a diagnostic message labeled \"ERROR\" so that the system triggers an emergency; for collisions outside the reference trajectory, it sends a pause request to the pause interface to make the vehicle stop.

"},{"location":"control/predicted_path_checker/#algorithm","title":"Algorithm","text":"

The package algorithm evaluates the predicted trajectory against the reference trajectory and the predicted objects in the environment. It checks for potential collisions and, if necessary, generates an appropriate response to avoid them (an emergency or a pause request).

"},{"location":"control/predicted_path_checker/#inner-algorithm","title":"Inner Algorithm","text":"

cutTrajectory() -> Cuts the predicted trajectory to the input length. The length is calculated by multiplying the ego vehicle's velocity by the \"trajectory_check_time\" parameter, bounded below by \"min_trajectory_check_length\"; a minimal sketch is shown below.
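
A minimal sketch of this cutting logic, assuming simple 2D points (illustrative, not the package's actual code):

#include <algorithm>\n#include <cmath>\n#include <vector>\n\nstruct Point { double x, y; };\n\n// Minimal sketch: keep trajectory points until the accumulated arc length\n// exceeds max(ego_velocity * trajectory_check_time, min_trajectory_check_length).\nstd::vector<Point> cutTrajectory(\n  const std::vector<Point> & trajectory, const double ego_velocity,\n  const double trajectory_check_time, const double min_trajectory_check_length)\n{\n  const double check_length =\n    std::max(ego_velocity * trajectory_check_time, min_trajectory_check_length);\n  std::vector<Point> cut;\n  double length = 0.0;\n  for (size_t i = 0; i < trajectory.size(); ++i) {\n    if (i > 0) {\n      length += std::hypot(\n        trajectory[i].x - trajectory[i - 1].x, trajectory[i].y - trajectory[i - 1].y);\n    }\n    cut.push_back(trajectory[i]);\n    if (length >= check_length) break;\n  }\n  return cut;\n}\n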

filterObstacles() -> Filters the predicted objects in the environment, removing objects that are not in front of the vehicle or that are far away from the predicted trajectory.

checkTrajectoryForCollision() -> Checks the predicted trajectory for collisions with the predicted objects. It computes the polygons of both the trajectory points and the predicted objects and checks whether the polygons intersect. If they do, it calculates and returns the nearest collision point between the polygon and the predicted object. It also checks the history of predicted objects that previously intersected the footprint, to avoid unexpected behavior; this history stores objects that were detected less than \"chattering_threshold\" seconds ago.

If the \"enable_z_axis_obstacle_filtering\" parameter is set to true, it filters the predicted objects in the Z-axis by using \"z_axis_filtering_buffer\". If the object does not intersect with the Z-axis, it is filtered out.

calculateProjectedVelAndAcc() -> Projects the velocity and acceleration of the predicted object onto the axes of the predicted trajectory at the collision point.

isInBrakeDistance() -> Checks whether the stop point lies within the braking distance. It takes the relative velocity and acceleration of the ego vehicle with respect to the predicted object, calculates the braking distance, and returns true if the point is within it.
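
The check boils down to the standard constant-deceleration stopping-distance formula with a reaction delay; the sketch below reuses the node's delay_time and max_deceleration parameters but is an assumption about the exact form used.

// Minimal sketch: relative stopping distance with reaction delay.\n// d_brake = v_rel * t_delay + v_rel^2 / (2 * a_max)\nbool isInBrakeDistance(\n  const double distance_to_collision, const double relative_velocity,\n  const double max_deceleration /* [m/s^2] */, const double delay_time /* [s] */)\n{\n  if (relative_velocity <= 0.0) return false;  // moving apart: no stop needed\n  const double brake_distance =\n    relative_velocity * delay_time +\n    (relative_velocity * relative_velocity) / (2.0 * max_deceleration);\n  return distance_to_collision <= brake_distance;\n}\n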

isItDiscretePoint() -> Checks whether the stop point on the predicted trajectory is a discrete point. If it is not, planning should handle the stop.

isThereStopPointOnRefTrajectory() -> Checks whether there is a stop point on the reference trajectory. If there is a stop point before the stop index, it returns true. Otherwise, it returns false, and the node calls the pause interface to make the vehicle stop.

"},{"location":"control/predicted_path_checker/#inputs","title":"Inputs","text":"Name Type Description ~/input/reference_trajectory autoware_planning_msgs::msg::Trajectory Reference trajectory ~/input/predicted_trajectory autoware_planning_msgs::msg::Trajectory Predicted trajectory ~/input/objects autoware_perception_msgs::msg::PredictedObject Dynamic objects in the environment ~/input/odometry nav_msgs::msg::Odometry Odometry message of vehicle to get current velocity ~/input/current_accel geometry_msgs::msg::AccelWithCovarianceStamped Current acceleration /control/vehicle_cmd_gate/is_paused tier4_control_msgs::msg::IsPaused Current pause state of the vehicle"},{"location":"control/predicted_path_checker/#outputs","title":"Outputs","text":"Name Type Description ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization ~/debug/virtual_wall visualization_msgs::msg::MarkerArray Virtual wall marker for visualization /control/vehicle_cmd_gate/set_pause tier4_control_msgs::srv::SetPause Pause service to make the vehicle stop /diagnostics diagnostic_msgs::msg::DiagnosticStatus Diagnostic status of vehicle"},{"location":"control/predicted_path_checker/#parameters","title":"Parameters","text":""},{"location":"control/predicted_path_checker/#node-parameters","title":"Node Parameters","text":"Name Type Description Default value update_rate double The update rate [Hz] 10.0 delay_time double he time delay considered for the emergency response [s] 0.17 max_deceleration double Max deceleration for ego vehicle to stop [m/s^2] 1.5 resample_interval double Interval for resampling trajectory [m] 0.5 stop_margin double The stopping margin [m] 0.5 ego_nearest_dist_threshold double The nearest distance threshold for ego vehicle [m] 3.0 ego_nearest_yaw_threshold double The nearest yaw threshold for ego vehicle [rad] 1.046 min_trajectory_check_length double The minimum trajectory check length in meters [m] 1.5 trajectory_check_time double The trajectory check time in seconds. [s] 3.0 distinct_point_distance_threshold double The distinct point distance threshold [m] 0.3 distinct_point_yaw_threshold double The distinct point yaw threshold [deg] 5.0 filtering_distance_threshold double It ignores the objects if distance is higher than this [m] 1.5 use_object_prediction bool If true, node predicts current pose of the objects wrt delta time [-] true"},{"location":"control/predicted_path_checker/#collision-checker-parameters","title":"Collision Checker Parameters","text":"Name Type Description Default value width_margin double The width margin for collision checking [Hz] 0.2 chattering_threshold double The chattering threshold for collision detection [s] 0.2 z_axis_filtering_buffer double The Z-axis filtering buffer [m] 0.3 enable_z_axis_obstacle_filtering bool A boolean flag indicating if Z-axis obstacle filtering is enabled false"},{"location":"evaluator/autoware_control_evaluator/","title":"Control Evaluator","text":""},{"location":"evaluator/autoware_control_evaluator/#control-evaluator","title":"Control Evaluator","text":""},{"location":"evaluator/autoware_control_evaluator/#purpose","title":"Purpose","text":"

This package provides nodes that generate metrics to evaluate the quality of control.

It publishes metric information about control modules' outputs as well as the ego vehicle's current kinematics and position.

"},{"location":"evaluator/autoware_control_evaluator/#evaluated-metrics","title":"Evaluated metrics","text":"

The control evaluator uses the metrics defined in include/autoware/control_evaluator/metrics/metric.hpp to calculate deviations in yaw and lateral distance from the ego's set-point. The control_evaluator can also be customized to offer metrics/evaluation for other control modules. Currently, it offers a simple metric output based on the autonomous_emergency_braking node's output, but this functionality can be extended to evaluate other control modules' performance.
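
As a rough illustration of what such deviation metrics compute (a sketch under an assumed 2D pose representation, not the evaluator's actual code):

#include <cmath>\n\nstruct Pose2D { double x, y, yaw; };\n\n// Signed lateral distance of ego from the set-point, in the set-point frame.\ndouble lateralDeviation(const Pose2D & set_point, const Pose2D & ego)\n{\n  const double dx = ego.x - set_point.x;\n  const double dy = ego.y - set_point.y;\n  return -std::sin(set_point.yaw) * dx + std::cos(set_point.yaw) * dy;\n}\n\n// Yaw deviation wrapped to (-pi, pi].\ndouble yawDeviation(const Pose2D & set_point, const Pose2D & ego)\n{\n  return std::remainder(ego.yaw - set_point.yaw, 2.0 * M_PI);\n}\n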

"},{"location":"evaluator/autoware_control_evaluator/#kinematics-output","title":"Kinematics output","text":"

The control evaluator module also constantly publishes information regarding the ego vehicle's kinematics and position. It publishes the current ego lane id with the longitudinal s and lateral t arc coordinates. It also publishes the current ego speed, acceleration and jerk in its metric messages.

This information can be used by other nodes to set up automated evaluation with rosbags: by cross-checking the ego position and kinematics against the evaluated control module's output, it is possible to judge whether the evaluated control modules reacted satisfactorily at certain points of interest during rosbag replay.

"},{"location":"evaluator/autoware_planning_evaluator/","title":"Planning Evaluator","text":""},{"location":"evaluator/autoware_planning_evaluator/#planning-evaluator","title":"Planning Evaluator","text":""},{"location":"evaluator/autoware_planning_evaluator/#purpose","title":"Purpose","text":"

This package provides nodes that generate metrics to evaluate the quality of planning and control.

"},{"location":"evaluator/autoware_planning_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The evaluation node calculates metrics each time it receives a trajectory T(0). Metrics are calculated using the following information:

This information is maintained by an instance of the MetricsCalculator class, which is also responsible for calculating the metrics.

"},{"location":"evaluator/autoware_planning_evaluator/#stat","title":"Stat","text":"

Each metric is calculated using an autoware::universe_utils::Accumulator instance, which contains the minimum, maximum, and mean values calculated for the metric, as well as the number of measured values.
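
Conceptually, such an accumulator keeps running statistics without storing every sample. The following self-contained illustration mirrors that idea; it is not the actual autoware::universe_utils API.

#include <algorithm>\n#include <limits>\n\n// Illustrative accumulator keeping min, max, mean, and sample count.\nstruct Accumulator\n{\n  double min = std::numeric_limits<double>::max();\n  double max = std::numeric_limits<double>::lowest();\n  double sum = 0.0;\n  long count = 0;\n\n  void add(const double value)\n  {\n    min = std::min(min, value);\n    max = std::max(max, value);\n    sum += value;\n    ++count;\n  }\n  double mean() const { return count == 0 ? 0.0 : sum / count; }\n};\n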

"},{"location":"evaluator/autoware_planning_evaluator/#metric-calculation-and-adding-more-metrics","title":"Metric calculation and adding more metrics","text":"

All possible metrics are defined in the Metric enumeration in include/planning_evaluator/metrics/metric.hpp. This file also defines conversions from/to string, as well as human-readable descriptions used as the header of the output file.

The MetricsCalculator is responsible for calculating metric statistics through calls to the function:

Accumulator<double> MetricsCalculator::calculate(const Metric metric, const Trajectory & traj) const;\n

Adding a new metric M requires the following steps:

"},{"location":"evaluator/autoware_planning_evaluator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"evaluator/autoware_planning_evaluator/#inputs","title":"Inputs","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::msg::Trajectory Main trajectory to evaluate ~/input/reference_trajectory autoware_planning_msgs::msg::Trajectory Reference trajectory to use for deviation metrics ~/input/objects autoware_perception_msgs::msg::PredictedObjects Obstacles"},{"location":"evaluator/autoware_planning_evaluator/#outputs","title":"Outputs","text":"

Each metric is published on a topic named after the metric name.

Name Type Description ~/metrics tier4_metric_msgs::msg::MetricArray MetricArray with many metrics of tier4_metric_msgs::msg::Metric

If output_metrics = true, the evaluation node writes the statistics of the metrics measured during its lifetime to <ros2_logging_directory>/autoware_metrics/<node_name>-<time_stamp>.json when shut down.

"},{"location":"evaluator/autoware_planning_evaluator/#parameters","title":"Parameters","text":"Name Type Description Default Range output_metrics boolean If true, the evaluation node writes the metrics' statics to <ros2_logging_directory>/autoware_metrics/<node_name>-<time_stamp>.json when the node shut down, false N/A ego_frame string reference frame of ego base_link N/A selected_metrics array metrics to collect/record ['curvature', 'point_interval', 'relative_angle', 'length', 'duration', 'velocity', 'acceleration', 'jerk', 'lateral_deviation', 'yaw_deviation', 'velocity_deviation', 'lateral_trajectory_displacement', 'stability', 'stability_frechet', 'obstacle_distance', 'obstacle_ttc', 'modified_goal_longitudinal_deviation', 'modified_goal_lateral_deviation', 'modified_goal_yaw_deviation'] N/A trajectory.min_point_dist_m float minimum distance between two successive points to use for angle calculation 0.1 N/A lookahead.max_dist_m float maximum distance from ego along the trajectory to use for calculation 5.0 N/A lookahead.max_time_s float maximum time ahead of ego along the trajectory to use for calculation 3.0 N/A obstacle.dist_thr_m float distance between ego and the obstacle below which a collision is considered 1.0 N/A"},{"location":"evaluator/autoware_planning_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is a strong assumption that when receiving a trajectory T(0), it has been generated using the last received reference trajectory and objects. This can be wrong if a new reference trajectory or objects are published while T(0) is being calculated.

Precision is currently limited by the resolution of the trajectories. It is possible to interpolate the trajectory and the reference trajectory to increase precision, but this would make the computation significantly more expensive.

"},{"location":"evaluator/autoware_planning_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"evaluator/kinematic_evaluator/","title":"Kinematic Evaluator","text":""},{"location":"evaluator/kinematic_evaluator/#kinematic-evaluator","title":"Kinematic Evaluator","text":"

TBD

"},{"location":"evaluator/kinematic_evaluator/#parameters","title":"Parameters","text":"Name Type Description Default Range output_file string Name of the file to which kinematic metrics are written. If empty, metrics are not written to a file. kinematic_metrics.results N/A selected_metrics string The specific metrics selected for evaluation. velocity_stats N/A"},{"location":"evaluator/localization_evaluator/","title":"Localization Evaluator","text":""},{"location":"evaluator/localization_evaluator/#localization-evaluator","title":"Localization Evaluator","text":"

The Localization Evaluator evaluates the performance of the localization system and provides metrics.

"},{"location":"evaluator/localization_evaluator/#parameters","title":"Parameters","text":"Name Type Description Default Range output_file string if empty, metrics are not written to file loc_metrics.results N/A selected_metrics array metrics to be calculated ['lateral_error', 'absolute_error'] N/A"},{"location":"evaluator/perception_online_evaluator/","title":"Perception Evaluator","text":""},{"location":"evaluator/perception_online_evaluator/#perception-evaluator","title":"Perception Evaluator","text":"

A node for evaluating the output of perception systems.

"},{"location":"evaluator/perception_online_evaluator/#purpose","title":"Purpose","text":"

This module allows evaluating how accurately perception results are generated without the need for annotations. It can confirm performance by evaluating results from a few seconds prior, which enables online execution.

"},{"location":"evaluator/perception_online_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The evaluated metrics are as follows:

"},{"location":"evaluator/perception_online_evaluator/#predicted-path-deviation-predicted-path-deviation-variance","title":"Predicted Path Deviation / Predicted Path Deviation Variance","text":"

Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from \\(T_N\\) seconds ago, where \\(T_N\\) is the maximum value of the prediction time horizon \\([T_1, T_2, ..., T_N]\\).

[!NOTE] The object from \\(T_N\\) seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.

\\[ \\begin{align} n_{points} = T / dt \\\\ ADE = \\Sigma_{i=1}^{n_{points}} d_i / n_{points} \\\\ Var = \\Sigma_{i=1}^{n_{points}} (d_i - ADE)^2 / n_{points} \\end{align} \\]

The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.

\\[ \\begin{align} ADE_{mean} = \\Sigma_{j=1}^{n_{objects}} ADE_j / n_{objects} \\\\ ADE_{max} = max(ADE_j) \\\\ ADE_{min} = min(ADE_j) \\end{align} \\] \\[ \\begin{align} Var_{mean} = \\Sigma_{j=1}^{n_{objects}} Var_j / n_{objects} \\\\ Var_{max} = max(Var_j) \\\\ Var_{min} = min(Var_j) \\end{align} \\]

The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00

"},{"location":"evaluator/perception_online_evaluator/#lateral-deviation","title":"Lateral Deviation","text":"

Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is \\(T=T_n\\) seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
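
A centered moving average with window size smoothing_window_size (assumed odd) can be sketched as follows; this is illustrative, not the evaluator's exact code.

#include <vector>\n\n// Centered moving average: each output sample averages `window` samples\n// centered on it; edge samples keep the original values for simplicity.\nstd::vector<double> centeredMovingAverage(const std::vector<double> & signal, const int window)\n{\n  const int half = window / 2;  // window is assumed odd\n  std::vector<double> smoothed = signal;\n  for (int i = half; i + half < static_cast<int>(signal.size()); ++i) {\n    double sum = 0.0;\n    for (int j = i - half; j <= i + half; ++j) sum += signal[j];\n    smoothed[i] = sum / window;\n  }\n  return smoothed;\n}\n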

"},{"location":"evaluator/perception_online_evaluator/#yaw-deviation","title":"Yaw Deviation","text":"

Calculates the deviation between the recognized yaw angle of a past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size. The yaw deviation is calculated by comparing the yaw azimuth angle of the smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is \\(T=T_n\\) seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.

"},{"location":"evaluator/perception_online_evaluator/#yaw-rate","title":"Yaw Rate","text":"

Calculates the yaw rate of an object based on the change in its yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, \\(t_2\\) is the timestamp \\(T_n\\) seconds ago.
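
Assuming a simple finite difference over two consecutive recognitions at times \\(t_1 < t_2\\), with the angle difference wrapped to \\((-\\pi, \\pi]\\) (an illustrative formulation, not necessarily the exact implementation):

\\[ \\dot{\\psi} = \\mathrm{wrap}(\\psi(t_2) - \\psi(t_1)) / (t_2 - t_1) \\]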

"},{"location":"evaluator/perception_online_evaluator/#object-counts","title":"Object Counts","text":"

Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent objects, not past objects.

In the provided illustration, the range \\(R\\) is determined by a combination of lists of radii (e.g., \\(r_1, r_2, \\ldots\\)) and heights (e.g., \\(h_1, h_2, \\ldots\\)). For example,

"},{"location":"evaluator/perception_online_evaluator/#total-object-count","title":"Total Object Count","text":"

Counts the number of unique objects for each class within the specified detection range. The total object count is calculated as follows:

\\[ \\begin{align} \\text{Total Object Count (Class, Range)} = \\left| \\bigcup_{t=0}^{T_{\\text{now}}} \\{ \\text{uuid} \\mid \\text{class}(t, \\text{uuid}) = C \\wedge \\text{position}(t, \\text{uuid}) \\in R \\} \\right| \\end{align} \\]

where:

"},{"location":"evaluator/perception_online_evaluator/#average-object-count","title":"Average Object Count","text":"

Counts the average number of objects for each class within the specified detection range. This metric measures how many objects were detected in one frame, without considering uuids. The average object count is calculated as follows:

\\[ \\begin{align} \\text{Average Object Count (Class, Range)} = \\frac{1}{N} \\sum_{t=0}^{T_{\\text{now}}} \\left| \\{ \\text{object} \\mid \\text{class}(t, \\text{object}) = C \\wedge \\text{position}(t, \\text{object}) \\in R \\} \\right| \\end{align} \\]

where:

"},{"location":"evaluator/perception_online_evaluator/#interval-object-count","title":"Interval Object Count","text":"

Counts the average number of objects for each class within the specified detection range over the last objects_count_window_seconds. This metric measures how many objects were detected in one frame, without considering uuids. The interval object count is calculated as follows:

\\[ \\begin{align} \\text{Interval Object Count (Class, Range)} = \\frac{1}{W} \\sum_{t=T_{\\text{now}} - T_W}^{T_{\\text{now}}} \\left| \\{ \\text{object} \\mid \\text{class}(t, \\text{object}) = C \\wedge \\text{position}(t, \\text{object}) \\in R \\} \\right| \\end{align} \\]

where:

"},{"location":"evaluator/perception_online_evaluator/#inputs-outputs","title":"Inputs / Outputs","text":"Name Type Description ~/input/objects autoware_perception_msgs::msg::PredictedObjects The predicted objects to evaluate. ~/metrics tier4_metric_msgs::msg::MetricArray Metric information about perception accuracy. ~/markers visualization_msgs::msg::MarkerArray Visual markers for debugging and visualization."},{"location":"evaluator/perception_online_evaluator/#parameters","title":"Parameters","text":"Name Type Description selected_metrics List Metrics to be evaluated, such as lateral deviation, yaw deviation, and predicted path deviation. smoothing_window_size Integer Determines the window size for smoothing path, should be an odd number. prediction_time_horizons list[double] Time horizons for prediction evaluation in seconds. stopped_velocity_threshold double threshold velocity to check if vehicle is stopped detection_radius_list list[double] Detection radius for objects to be evaluated.(used for objects count only) detection_height_list list[double] Detection height for objects to be evaluated. (used for objects count only) detection_count_purge_seconds double Time window for purging object detection counts. objects_count_window_seconds double Time window for keeping object detection counts. The number of object detections within this time window is stored in detection_count_vector_ target_object.*.check_lateral_deviation bool Whether to check lateral deviation for specific object types (car, truck, etc.). target_object.*.check_yaw_deviation bool Whether to check yaw deviation for specific object types (car, truck, etc.). target_object.*.check_predicted_path_deviation bool Whether to check predicted path deviation for specific object types (car, truck, etc.). target_object.*.check_yaw_rate bool Whether to check yaw rate for specific object types (car, truck, etc.). target_object.*.check_total_objects_count bool Whether to check total object count for specific object types (car, truck, etc.). target_object.*.check_average_objects_count bool Whether to check average object count for specific object types (car, truck, etc.). target_object.*.check_interval_average_objects_count bool Whether to check interval average object count for specific object types (car, truck, etc.). debug_marker.* bool Debugging parameters for marker visualization (history path, predicted path, etc.)."},{"location":"evaluator/perception_online_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

It is assumed that the current positions of PredictedObjects are reasonably accurate.

"},{"location":"evaluator/perception_online_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"evaluator/scenario_simulator_v2_adapter/","title":"scenario_simulator_v2 Adapter","text":""},{"location":"evaluator/scenario_simulator_v2_adapter/#macro-rendering-error","title":"Macro Rendering Error","text":"

File: evaluator/scenario_simulator_v2_adapter/README.md

FileNotFoundError: [Errno 2] No such file or directory: 'src/autoware/universe/evaluator/scenario_simulator_v2_adapter/schema/scenario_simulator_v2_adapter.schema.json'

"},{"location":"launch/tier4_autoware_api_launch/","title":"tier4_autoware_api_launch","text":""},{"location":"launch/tier4_autoware_api_launch/#tier4_autoware_api_launch","title":"tier4_autoware_api_launch","text":""},{"location":"launch/tier4_autoware_api_launch/#description","title":"Description","text":"

This package contains launch files that run nodes to convert Autoware internal topics into a consistent API used by external software (e.g., fleet management systems, simulators).

"},{"location":"launch/tier4_autoware_api_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_autoware_api_launch/#usage","title":"Usage","text":"

To use autoware_api.launch.xml, include it in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_autoware_api_launch)/launch/autoware_api.launch.xml\"/>\n
"},{"location":"launch/tier4_autoware_api_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_control_launch/","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#tier4_control_launch","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_control_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_control_launch/#usage","title":"Usage","text":"

To use control.launch.py, include it in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of control.launch.py.

<include file=\"$(find-pkg-share tier4_control_launch)/launch/control.launch.py\">\n<!-- options for lateral_controller_mode: mpc_follower, pure_pursuit -->\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n  <arg name=\"lateral_controller_mode\" value=\"mpc_follower\" />\n</include>\n
"},{"location":"launch/tier4_control_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_localization_launch/","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#tier4_localization_launch","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_localization_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_localization_launch/#usage","title":"Usage","text":"

Include localization.launch.xml in other launch files as follows.

You can select which localization methods to launch as the pose_estimator or twist_estimator by specifying pose_source and twist_source.

In addition, you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of localization.launch.xml.

  <include file=\"$(find-pkg-share tier4_localization_launch)/launch/localization.launch.xml\">\n<!-- Localization methods -->\n<arg name=\"pose_source\" value=\"...\"/>\n<arg name=\"twist_source\" value=\"...\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_map_launch/","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#tier4_map_launch","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_map_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_map_launch/#usage","title":"Usage","text":"

To use map.launch.py, include it in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of map.launch.py.

<arg name=\"map_path\" description=\"point cloud and lanelet2 map directory path\"/>\n<arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n<arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n\n<include file=\"$(find-pkg-share tier4_map_launch)/launch/map.launch.py\">\n<arg name=\"lanelet2_map_path\" value=\"$(var map_path)/$(var lanelet2_map_file)\" />\n<arg name=\"pointcloud_map_path\" value=\"$(var map_path)/$(var pointcloud_map_file)\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_map_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_perception_launch/","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#tier4_perception_launch","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_perception_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_perception_launch/#usage","title":"Usage","text":"

To use perception.launch.xml, include it in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of perception.launch.xml.

  <include file=\"$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml\">\n<!-- options for mode: camera_lidar_fusion, lidar, camera -->\n<arg name=\"mode\" value=\"lidar\" />\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_planning_launch/","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#tier4_planning_launch","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_planning_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_planning_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of planning.launch.xml.

<include file=\"$(find-pkg-share tier4_planning_launch)/launch/planning.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_sensing_launch/","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#tier4_sensing_launch","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_sensing_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_sensing_launch/#usage","title":"Usage","text":"

To use sensing.launch.xml, include it in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_sensing_launch)/launch/sensing.launch.xml\">\n<arg name=\"launch_driver\" value=\"true\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n<arg name=\"vehicle_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/vehicle_info.param.yaml\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/mirror.param.yaml\"/>\n</include>\n
"},{"location":"launch/tier4_sensing_launch/#launch-directory-structure","title":"Launch Directory Structure","text":"

This package finds the sensor settings of the specified sensor model in its launch directory.

launch/\n\u251c\u2500\u2500 aip_x1 # Sensor model name\n\u2502   \u251c\u2500\u2500 camera.launch.xml # Camera\n\u2502   \u251c\u2500\u2500 gnss.launch.xml # GNSS\n\u2502   \u251c\u2500\u2500 imu.launch.xml # IMU\n\u2502   \u251c\u2500\u2500 lidar.launch.xml # LiDAR\n\u2502   \u2514\u2500\u2500 pointcloud_preprocessor.launch.py # for preprocessing pointcloud\n...\n
"},{"location":"launch/tier4_sensing_launch/#notes","title":"Notes","text":"

This package resolves settings using variables.

Example:

<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/$(var sensor_model)/lidar.launch.xml\">\n
"},{"location":"launch/tier4_simulator_launch/","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#tier4_simulator_launch","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_simulator_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_simulator_launch/#usage","title":"Usage","text":"
  <include file=\"$(find-pkg-share tier4_simulator_launch)/launch/simulator.launch.xml\">\n<arg name=\"vehicle_info_param_file\" value=\"VEHICLE_INFO_PARAM_FILE\" />\n<arg name=\"vehicle_model\" value=\"VEHICLE_MODEL\"/>\n</include>\n

The simulator model used in simple_planning_simulator is loaded from \"config/simulator_model.param.yaml\" in the \"VEHICLE_MODEL_description\" package.

"},{"location":"launch/tier4_system_launch/","title":"tier4_system_launch","text":""},{"location":"launch/tier4_system_launch/#tier4_system_launch","title":"tier4_system_launch","text":""},{"location":"launch/tier4_system_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_system_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_system_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of system.launch.xml.

  <include file=\"$(find-pkg-share tier4_system_launch)/launch/system.launch.xml\">\n<arg name=\"run_mode\" value=\"online\"/>\n<arg name=\"sensor_model\" value=\"SENSOR_MODEL\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_vehicle_launch/","title":"tier4_vehicle_launch","text":""},{"location":"launch/tier4_vehicle_launch/#tier4_vehicle_launch","title":"tier4_vehicle_launch","text":""},{"location":"launch/tier4_vehicle_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_vehicle_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_vehicle_launch/#usage","title":"Usage","text":"

To use vehicle.launch.xml, include it in your *.launch.xml as follows.

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n\n<include file=\"$(find-pkg-share tier4_vehicle_launch)/launch/vehicle.launch.xml\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n</include>\n
"},{"location":"launch/tier4_vehicle_launch/#notes","title":"Notes","text":"

This package resolves some external packages and settings using variables and package names.

Example:

<let name=\"vehicle_model_pkg\" value=\"$(find-pkg-share $(var vehicle_model)_description)\"/>\n
<arg name=\"config_dir\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)\"/>\n
"},{"location":"launch/tier4_vehicle_launch/#vehiclexacro","title":"vehicle.xacro","text":""},{"location":"launch/tier4_vehicle_launch/#arguments","title":"Arguments","text":"Name Type Description Default sensor_model String sensor model name \"\" vehicle_model String vehicle model name \"\""},{"location":"launch/tier4_vehicle_launch/#usage_1","title":"Usage","text":"

You can write as follows in *.launch.xml.

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n<arg name=\"model\" default=\"$(find-pkg-share tier4_vehicle_launch)/urdf/vehicle.xacro\"/>\n\n<node name=\"robot_state_publisher\" pkg=\"robot_state_publisher\" exec=\"robot_state_publisher\">\n<param name=\"robot_description\" value=\"$(command 'xacro $(var model) vehicle_model:=$(var vehicle_model) sensor_model:=$(var sensor_model)')\"/>\n</node>\n
"},{"location":"localization/autoware_ekf_localizer/","title":"Overview","text":""},{"location":"localization/autoware_ekf_localizer/#overview","title":"Overview","text":"

The Extended Kalman Filter Localizer estimates a robust and less noisy robot pose and twist by integrating a 2D vehicle dynamics model with input ego-pose and ego-twist messages. The algorithm is designed especially for fast-moving robots such as autonomous driving systems.

"},{"location":"localization/autoware_ekf_localizer/#flowchart","title":"Flowchart","text":"

The overall flowchart of the autoware_ekf_localizer is described below.

"},{"location":"localization/autoware_ekf_localizer/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"localization/autoware_ekf_localizer/#node","title":"Node","text":""},{"location":"localization/autoware_ekf_localizer/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description measured_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Input pose source with the measurement covariance matrix. measured_twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped Input twist source with the measurement covariance matrix. initialpose geometry_msgs::msg::PoseWithCovarianceStamped Initial pose for EKF. The estimated pose is initialized with zeros at the start. It is initialized with this message whenever published."},{"location":"localization/autoware_ekf_localizer/#published-topics","title":"Published Topics","text":"Name Type Description ekf_odom nav_msgs::msg::Odometry Estimated odometry. ekf_pose geometry_msgs::msg::PoseStamped Estimated pose. ekf_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated pose with covariance. ekf_biased_pose geometry_msgs::msg::PoseStamped Estimated pose including the yaw bias ekf_biased_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated pose with covariance including the yaw bias ekf_twist geometry_msgs::msg::TwistStamped Estimated twist. ekf_twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped The estimated twist with covariance. diagnostics diagnostics_msgs::msg::DiagnosticArray The diagnostic information. debug/processing_time_ms autoware_internal_debug_msgs::msg::Float64Stamped The processing time [ms]."},{"location":"localization/autoware_ekf_localizer/#published-tf","title":"Published TF","text":""},{"location":"localization/autoware_ekf_localizer/#functions","title":"Functions","text":""},{"location":"localization/autoware_ekf_localizer/#predict","title":"Predict","text":"

The current robot state is predicted from previously estimated data using a given prediction model. This calculation is called at a constant interval (predict_frequency [Hz]). The prediction equation is described at the end of this page.

"},{"location":"localization/autoware_ekf_localizer/#measurement-update","title":"Measurement Update","text":"

Before the update, the Mahalanobis distance between the measured input and the predicted state is calculated; the measurement update is not performed for inputs whose Mahalanobis distance exceeds the given threshold.

The predicted state is updated with the latest measured inputs, measured_pose, and measured_twist. The updates are performed with the same frequency as prediction, usually at a high frequency, in order to enable smooth state estimation.

"},{"location":"localization/autoware_ekf_localizer/#parameter-description","title":"Parameter description","text":"

The parameters are set in launch/ekf_localizer.launch.

"},{"location":"localization/autoware_ekf_localizer/#for-node","title":"For Node","text":"Name Type Description Default Range show_debug_info boolean Flag to display debug info 0 N/A predict_frequency float Frequency for filtering and publishing [Hz] 50 N/A tf_rate float Frequency for tf broadcasting [Hz] 50 N/A extend_state_step integer Max delay step which can be dealt with in EKF. Large number increases computational cost. 50 N/A enable_yaw_bias_estimation boolean Flag to enable yaw bias estimation 1 N/A"},{"location":"localization/autoware_ekf_localizer/#for-pose-measurement","title":"For pose measurement","text":"Name Type Description Default Range pose_additional_delay float Additional delay time for pose measurement [s] 0 N/A pose_measure_uncertainty_time float Measured time uncertainty used for covariance calculation [s] 0.01 N/A pose_smoothing_steps integer A value for smoothing steps 5 N/A pose_gate_dist float Limit of Mahalanobis distance used for outliers detection 49.5 N/A"},{"location":"localization/autoware_ekf_localizer/#for-twist-measurement","title":"For twist measurement","text":"Name Type Description Default Range twist_additional_delay float Additional delay time for twist [s] 0 N/A twist_smoothing_steps integer A value for smoothing steps 2 N/A twist_gate_dist float Limit of Mahalanobis distance used for outliers detection 46.1 N/A"},{"location":"localization/autoware_ekf_localizer/#for-process-noise","title":"For process noise","text":"Name Type Description Default Range proc_stddev_vx_c float Standard deviation of process noise in time differentiation expression of linear velocity x, noise for d_vx = 0 10 N/A proc_stddev_wz_c float Standard deviation of process noise in time differentiation expression of angular velocity z, noise for d_wz = 0 5 N/A proc_stddev_yaw_c float Standard deviation of process noise in time differentiation expression of yaw, noise for d_yaw = omega 0.005 N/A

Note: the process noise for positions x and y is calculated automatically from the nonlinear dynamics.

"},{"location":"localization/autoware_ekf_localizer/#simple-1d-filter-parameters","title":"Simple 1D Filter Parameters","text":"Name Type Description Default Range z_filter_proc_dev float Simple1DFilter - Z filter process deviation 1 N/A roll_filter_proc_dev float Simple1DFilter - Roll filter process deviation 0.1 N/A pitch_filter_proc_dev float Simple1DFilter - Pitch filter process deviation 0.1 N/A"},{"location":"localization/autoware_ekf_localizer/#for-diagnostics","title":"For diagnostics","text":"Name Type Description Default Range pose_no_update_count_threshold_warn integer The threshold at which a WARN state is triggered due to the Pose Topic update not happening continuously for a certain number of times 50 N/A pose_no_update_count_threshold_error integer The threshold at which an ERROR state is triggered due to the Pose Topic update not happening continuously for a certain number of times 100 N/A twist_no_update_count_threshold_warn integer The threshold at which a WARN state is triggered due to the Twist Topic update not happening continuously for a certain number of times 50 N/A twist_no_update_count_threshold_error integer The threshold at which an ERROR state is triggered due to the Twist Topic update not happening continuously for a certain number of times 100 N/A ellipse_scale float The scale factor to apply the error ellipse size 3 N/A error_ellipse_size float The long axis size of the error ellipse to trigger a ERROR state 1.5 N/A warn_ellipse_size float The long axis size of the error ellipse to trigger a WARN state 1.2 N/A error_ellipse_size_lateral_direction float The lateral direction size of the error ellipse to trigger a ERROR state 0.3 N/A warn_ellipse_size_lateral_direction float The lateral direction size of the error ellipse to trigger a WARN state 0.25 N/A"},{"location":"localization/autoware_ekf_localizer/#misc","title":"Misc","text":"Name Type Description Default Range threshold_observable_velocity_mps float Minimum value for velocity that will be used for EKF. Mainly used for dead zone in velocity sensor [m/s] (0.0 means disabled) 0.0 N/A pose_frame_id string Parent frame_id of EKF output pose map N/A"},{"location":"localization/autoware_ekf_localizer/#how-to-tune-ekf-parameters","title":"How to tune EKF parameters","text":""},{"location":"localization/autoware_ekf_localizer/#0-preliminaries","title":"0. Preliminaries","text":""},{"location":"localization/autoware_ekf_localizer/#1-tune-sensor-parameters","title":"1. Tune sensor parameters","text":"

Set the standard deviation for each sensor. The pose_measure_uncertainty_time accounts for the uncertainty of the header timestamp data. You can also tune the number of smoothing steps for each observed sensor input via the *_smoothing_steps parameters. Increasing the number improves the smoothness of the estimation but may adversely affect the estimation performance.

"},{"location":"localization/autoware_ekf_localizer/#2-tune-process-model-parameters","title":"2. Tune process model parameters","text":""},{"location":"localization/autoware_ekf_localizer/#3-tune-gate-parameters","title":"3. Tune gate parameters","text":"

The EKF performs gating using the Mahalanobis distance before updating with an observation. The gate size is determined by the pose_gate_dist and twist_gate_dist parameters. If the Mahalanobis distance is larger than this value, the observation is ignored; a sketch of this test is shown below.
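
A minimal sketch of the gating test using Eigen (the package's actual code differs; the gate parameters are interpreted here as squared-distance thresholds):

#include <Eigen/Dense>\n\n// Minimal sketch: reject a measurement whose squared Mahalanobis distance\n// from the predicted state exceeds the gate threshold (e.g. pose_gate_dist).\nbool passesGate(\n  const Eigen::VectorXd & measured, const Eigen::VectorXd & predicted,\n  const Eigen::MatrixXd & covariance, const double gate_dist)\n{\n  const Eigen::VectorXd d = measured - predicted;\n  const double squared_distance = d.dot(covariance.inverse() * d);\n  return squared_distance <= gate_dist;\n}\n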

This gating process is based on a statistical test using the chi-square distribution. As modeled, we assume that the Mahalanobis distance follows a chi-square distribution with 3 degrees of freedom for pose and 2 degrees of freedom for twist.

Currently, the accuracy of covariance estimation itself is not very good, so it is recommended to set the significance level to a very small value to reduce rejection due to false positives.

Significance level Threshold for 2 dof Threshold for 3 dof \\(10 ^ {-2}\\) 9.21 11.3 \\(10 ^ {-3}\\) 13.8 16.3 \\(10 ^ {-4}\\) 18.4 21.1 \\(10 ^ {-5}\\) 23.0 25.9 \\(10 ^ {-6}\\) 27.6 30.7 \\(10 ^ {-7}\\) 32.2 35.4 \\(10 ^ {-8}\\) 36.8 40.1 \\(10 ^ {-9}\\) 41.4 44.8 \\(10 ^ {-10}\\) 46.1 49.5"},{"location":"localization/autoware_ekf_localizer/#kalman-filter-model","title":"Kalman Filter Model","text":""},{"location":"localization/autoware_ekf_localizer/#kinematics-model-in-update-function","title":"kinematics model in update function","text":"

where \\(\\theta_k\\) represents the vehicle's heading angle, including the mounting angle bias. \\(b_k\\) is a correction term for the yaw bias, and it is modeled so that \\((\\theta_k+b_k)\\) becomes the heading angle of the base_link. The pose_estimator is expected to publish the base_link in the map coordinate system. However, the yaw angle may be offset due to calibration errors. This model compensates for this error and improves the estimation accuracy.
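
For reference, a discrete-time kinematic model consistent with this description is sketched below, where \\(v_k\\) is the longitudinal velocity and \\(\\omega_k\\) the yaw rate; see the source for the exact formulation.

\\[ \\begin{align} x_{k+1} = x_k + v_k \\cos(\\theta_k + b_k) \\, dt \\\\ y_{k+1} = y_k + v_k \\sin(\\theta_k + b_k) \\, dt \\\\ \\theta_{k+1} = \\theta_k + \\omega_k \\, dt \\\\ b_{k+1} = b_k, \\quad v_{k+1} = v_k, \\quad \\omega_{k+1} = \\omega_k \\end{align} \\]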

"},{"location":"localization/autoware_ekf_localizer/#time-delay-model","title":"time delay model","text":"

The measurement time delay is handled by an augmented state [1] (see Section 7.3, Fixed-Lag Smoothing).

Note that although the dimension of the state becomes larger, the computational complexity does not change significantly, because an analytical expansion can be applied based on the specific structure of the augmented states.

"},{"location":"localization/autoware_ekf_localizer/#test-result-with-autoware-ndt","title":"Test Result with Autoware NDT","text":""},{"location":"localization/autoware_ekf_localizer/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/autoware_ekf_localizer/#the-conditions-that-result-in-a-warn-state","title":"The conditions that result in a WARN state","text":""},{"location":"localization/autoware_ekf_localizer/#the-conditions-that-result-in-an-error-state","title":"The conditions that result in an ERROR state","text":""},{"location":"localization/autoware_ekf_localizer/#known-issues","title":"Known issues","text":""},{"location":"localization/autoware_ekf_localizer/#reference","title":"reference","text":"

[1] Anderson, B. D. O., & Moore, J. B. (1979). Optimal filtering. Englewood Cliffs, NJ: Prentice-Hall.

"},{"location":"localization/autoware_geo_pose_projector/","title":"autoware_geo_pose_projector","text":""},{"location":"localization/autoware_geo_pose_projector/#autoware_geo_pose_projector","title":"autoware_geo_pose_projector","text":""},{"location":"localization/autoware_geo_pose_projector/#overview","title":"Overview","text":"

This simple node subscribes to the geo-referenced pose topic and publishes the corresponding pose in the map frame.

"},{"location":"localization/autoware_geo_pose_projector/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description input_geo_pose geographic_msgs::msg::GeoPoseWithCovarianceStamped geo-referenced pose /map/map_projector_info autoware_map_msgs::msg::MapProjectedObjectInfo map projector info"},{"location":"localization/autoware_geo_pose_projector/#published-topics","title":"Published Topics","text":"Name Type Description output_pose geometry_msgs::msg::PoseWithCovarianceStamped pose in map frame /tf tf2_msgs::msg::TFMessage tf from parent link to the child link"},{"location":"localization/autoware_geo_pose_projector/#parameters","title":"Parameters","text":"Name Type Description Default Range publish_tf boolean whether to publish tf True N/A parent_frame string parent frame for published tf map N/A child_frame string child frame for published tf pose_estimator_base_link N/A"},{"location":"localization/autoware_geo_pose_projector/#limitations","title":"Limitations","text":"

The covariance conversion may be incorrect depending on the projection type you are using. The covariance of the input topic is expressed in (latitude, longitude, altitude) as a diagonal matrix. Currently, we assume that the x axis points east and the y axis points north. The conversion may therefore be incorrect when this assumption breaks, especially when the latitude and longitude covariances differ.
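A minimal sketch of this assumption (hypothetical helper, not the node's actual code), mapping the diagonal (latitude, longitude, altitude) variances onto (x = east, y = north, z = up):

```python
import numpy as np

def convert_covariance(llh_cov: np.ndarray) -> np.ndarray:
    """Map a diagonal (lat, lon, alt) covariance onto (x, y, z) in the map
    frame, assuming x points east and y points north as stated above."""
    map_cov = np.zeros((3, 3))
    map_cov[0, 0] = llh_cov[1, 1]  # x (east)  <- longitude variance
    map_cov[1, 1] = llh_cov[0, 0]  # y (north) <- latitude variance
    map_cov[2, 2] = llh_cov[2, 2]  # z (up)    <- altitude variance
    return map_cov
```

When the map projection rotates the axes away from east/north, the off-diagonal terms this sketch drops are exactly what makes the conversion incorrect.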

"},{"location":"localization/autoware_gyro_odometer/","title":"autoware_gyro_odometer","text":""},{"location":"localization/autoware_gyro_odometer/#autoware_gyro_odometer","title":"autoware_gyro_odometer","text":""},{"location":"localization/autoware_gyro_odometer/#purpose","title":"Purpose","text":"

autoware_gyro_odometer is a package that estimates twist by combining IMU angular velocity with vehicle speed.
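A minimal sketch of the combination (assuming the IMU angular velocity has already been transformed into base_link; not the node's actual code):

```python
from geometry_msgs.msg import TwistWithCovarianceStamped

def combine(vehicle_twist: TwistWithCovarianceStamped, imu) -> TwistWithCovarianceStamped:
    """Take linear velocity from the vehicle and angular velocity from the IMU."""
    out = TwistWithCovarianceStamped()
    out.header.stamp = imu.header.stamp
    out.header.frame_id = "base_link"
    out.twist.twist.linear.x = vehicle_twist.twist.twist.linear.x
    out.twist.twist.angular = imu.angular_velocity  # sensor_msgs/Imu field
    return out
```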

"},{"location":"localization/autoware_gyro_odometer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_gyro_odometer/#input","title":"Input","text":"Name Type Description vehicle/twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance from vehicle imu sensor_msgs::msg::Imu imu from sensor"},{"location":"localization/autoware_gyro_odometer/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped estimated twist with covariance"},{"location":"localization/autoware_gyro_odometer/#parameters","title":"Parameters","text":"Name Type Description Default Range output_frame string output's frame id base_link N/A message_timeout_sec float delay tolerance time for message 0.2 N/A"},{"location":"localization/autoware_gyro_odometer/#assumptions-known-limits","title":"Assumptions / Known limits","text":" "},{"location":"localization/autoware_gyro_odometer/#diagnostics","title":"Diagnostics","text":"Name Description Transition condition to Warning Transition condition to Error topic_time_stamp the time stamp of service calling. [nano second] none none is_arrived_first_vehicle_twist whether the vehicle twist topic has been received even once. not arrive yet none is_arrived_first_imu whether the imu topic has been received even once. not arrive yet none vehicle_twist_time_stamp_dt the time difference between the current time and the latest vehicle twist topic. [second] none the time is longer than message_timeout_sec imu_time_stamp_dt the time difference between the current time and the latest imu topic. [second] none the time is longer than message_timeout_sec vehicle_twist_queue_size the size of vehicle_twist_queue. none none imu_queue_size the size of gyro_queue. none none is_succeed_transform_imu whether transform imu is succeed or not. none failed"},{"location":"localization/autoware_landmark_based_localizer/","title":"Landmark Based Localizer","text":""},{"location":"localization/autoware_landmark_based_localizer/#landmark-based-localizer","title":"Landmark Based Localizer","text":"

This directory contains packages for landmark-based localization.

Landmarks are, for example, AR tags detected by a camera, reflective boards detected from LiDAR intensity, etc.

Since the pose of such landmarks can be detected and estimated easily, the ego pose can be calculated from the pose of a detected landmark, provided the landmark's pose is registered on the map in advance.

Currently, landmarks are assumed to be flat.

The following figure shows the principle of localization in the case of ar_tag_based_localizer.

This calculated ego pose is passed to the EKF, where it is fused with the twist information and used to estimate a more accurate ego pose.

"},{"location":"localization/autoware_landmark_based_localizer/#node-diagram","title":"Node diagram","text":""},{"location":"localization/autoware_landmark_based_localizer/#landmark_manager","title":"landmark_manager","text":"

The definitions of the landmarks written to the map are introduced in the next section. See Map Specifications.

The landmark_manager is a utility package to load landmarks from the map.

Users can define landmarks as Lanelet2 4-vertex polygons. It is, however, possible to define an arrangement in which the four vertices are not coplanar; in that case the orientation of the landmark is difficult to calculate. Therefore, if the four vertices, regarded as a tetrahedron, enclose a volume exceeding the volume_threshold parameter, the landmark does not publish tf_static.
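A minimal sketch of this volume check (hypothetical helper; vertices are the polygon's four points as NumPy arrays):

```python
import numpy as np

def tetrahedron_volume(p0, p1, p2, p3) -> float:
    """Volume of the tetrahedron spanned by four vertices (each shape (3,))."""
    return abs(np.dot(p1 - p0, np.cross(p2 - p0, p3 - p0))) / 6.0

# If the vertices are far from coplanar, skip publishing tf_static:
# if tetrahedron_volume(*vertices) > volume_threshold: skip this landmark
```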

"},{"location":"localization/autoware_landmark_based_localizer/#landmark-based-localizer-packages","title":"Landmark based localizer packages","text":""},{"location":"localization/autoware_landmark_based_localizer/#map-specifications","title":"Map specifications","text":"

See https://github.com/autowarefoundation/autoware_lanelet2_extension/blob/main/autoware_lanelet2_extension/docs/lanelet2_format_extension.md#localization-landmarks

"},{"location":"localization/autoware_landmark_based_localizer/#about-consider_orientation","title":"About consider_orientation","text":"

The calculate_new_self_pose function in the LandmarkManager class includes a boolean argument named consider_orientation. This argument determines the method used to calculate the new self pose based on detected and mapped landmarks. The following image illustrates the difference between the two methods.

"},{"location":"localization/autoware_landmark_based_localizer/#consider_orientation-true","title":"consider_orientation = true","text":"

In this mode, the new self pose is calculated so that the relative Pose of the \"landmark detected from the current self pose\" is equal to the relative Pose of the \"landmark mapped from the new self pose\". This method can correct for orientation, but is strongly affected by the orientation error of the landmark detection.

"},{"location":"localization/autoware_landmark_based_localizer/#consider_orientation-false","title":"consider_orientation = false","text":"

In this mode, the new self pose is calculated so that only the relative position is correct for x, y, and z.

This method cannot correct the orientation, but it is not affected by the orientation error of the landmark detection.
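A minimal sketch contrasting the two modes, with poses represented as 4x4 homogeneous matrices (hypothetical helper, not the actual LandmarkManager code):

```python
import numpy as np

def calculate_new_self_pose(self_pose, detected_lm, mapped_lm, consider_orientation):
    """detected_lm: landmark pose observed from the current self pose (map frame);
    mapped_lm: landmark pose registered in the map; all are 4x4 matrices."""
    if consider_orientation:
        # Apply the full rigid transform that maps the detected landmark onto
        # the mapped landmark; this also corrects the orientation.
        return mapped_lm @ np.linalg.inv(detected_lm) @ self_pose
    # Shift only the translation so the landmark positions (x, y, z) coincide;
    # the orientation of the self pose is left untouched.
    corrected = self_pose.copy()
    corrected[:3, 3] += mapped_lm[:3, 3] - detected_lm[:3, 3]
    return corrected
```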

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/","title":"AR Tag Based Localizer","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#ar-tag-based-localizer","title":"AR Tag Based Localizer","text":"

ArTagBasedLocalizer is a vision-based localization node.

This node uses the ArUco library to detect AR-Tags from camera images and calculates and publishes the pose of the ego vehicle based on these detections. The positions and orientations of the AR-Tags are assumed to be written in the Lanelet2 format.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#ar_tag_based_localizer-node","title":"ar_tag_based_localizer node","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#input","title":"Input","text":"Name Type Description ~/input/lanelet2_map autoware_map_msgs::msg::LaneletMapBin Data of lanelet2 ~/input/image sensor_msgs::msg::Image Camera Image ~/input/camera_info sensor_msgs::msg::CameraInfo Camera Info ~/input/ekf_pose geometry_msgs::msg::PoseWithCovarianceStamped EKF Pose without IMU correction. It is used to validate detected AR tags by filtering out False Positives. Only if the EKF Pose and the AR tag-detected Pose are within a certain temporal and spatial range, the AR tag-detected Pose is considered valid and published."},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#output","title":"Output","text":"Name Type Description ~/output/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated Pose ~/debug/result sensor_msgs::msg::Image [debug topic] Image in which marker detection results are superimposed on the input image ~/debug/marker visualization_msgs::msg::MarkerArray [debug topic] Loaded landmarks to visualize in Rviz as thin boards /tf geometry_msgs::msg::TransformStamped [debug topic] TF from camera to detected tag /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics outputs"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#parameters","title":"Parameters","text":"Name Type Description Default Range marker_size float marker_size 0.6 N/A target_tag_ids array target_tag_ids ['0','1','2','3','4','5','6'] N/A base_covariance array base_covariance [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02] N/A distance_threshold float distance_threshold(m) 13.0 N/A consider_orientation boolean consider_orientation false N/A detection_mode string detection_mode select from [DM_NORMAL, DM_FAST, DM_VIDEO_FAST] DM_NORMAL N/A min_marker_size float min_marker_size 0.02 N/A ekf_time_tolerance float ekf_time_tolerance(sec) 5.0 N/A ekf_position_tolerance float ekf_position_tolerance(m) 10.0 N/A"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#how-to-launch","title":"How to launch","text":"

When launching Autoware, set artag for pose_source.

ros2 launch autoware_launch ... \\\npose_source:=artag \\\n...\n
"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#rosbag","title":"Rosbag","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#sample-rosbag-and-map-awsim-data","title":"Sample rosbag and map (AWSIM data)","text":"

This data is simulated data created by AWSIM. Essentially, AR tag-based self-localization is not intended for such public road driving, but for driving in a smaller area, so the max driving speed is set at 15 km/h.

A known problem is that the timing at which each AR tag first becomes detected can cause significant jumps in the estimation.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#sample-rosbag-and-map-real-world-data","title":"Sample rosbag and map (Real world data)","text":"

Please remap the topic names and play it.

ros2 bag play /path/to/ar_tag_based_localizer_sample_bag/ -r 0.5 -s sqlite3 \\\n--remap /sensing/camera/front/image:=/sensing/camera/traffic_light/image_raw \\\n/sensing/camera/front/image/info:=/sensing/camera/traffic_light/camera_info\n

This dataset contains issues such as missing IMU data, and the overall accuracy is low. Even when running AR tag-based self-localization, a significant difference from the true trajectory can be observed.

The image below shows the trajectory when the sample is executed and plotted.

The pull request video below should also be helpful.

https://github.com/autowarefoundation/autoware.universe/pull/4347#issuecomment-1663155248

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#principle","title":"Principle","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/","title":"LiDAR Marker Localizer","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#lidar-marker-localizer","title":"LiDAR Marker Localizer","text":"

LiDARMarkerLocalizer is a reflector-detection-based localization node.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#lidar_marker_localizer-node","title":"lidar_marker_localizer node","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#input","title":"Input","text":"Name Type Description ~/input/lanelet2_map autoware_map_msgs::msg::HADMapBin Data of lanelet2 ~/input/pointcloud sensor_msgs::msg::PointCloud2 PointCloud ~/input/ekf_pose geometry_msgs::msg::PoseWithCovarianceStamped EKF Pose"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#output","title":"Output","text":"Name Type Description ~/output/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated pose ~/debug/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped [debug topic] Estimated pose ~/debug/marker_detected geometry_msgs::msg::PoseArray [debug topic] Detected marker poses ~/debug/marker_mapped visualization_msgs::msg::MarkerArray [debug topic] Loaded landmarks to visualize in Rviz as thin boards ~/debug/marker_pointcloud sensor_msgs::msg::PointCloud2 [debug topic] PointCloud of the detected marker /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics outputs"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#parameters","title":"Parameters","text":"Name Type Description Default Range marker_name string The name of the markers listed in the HD map. reflector N/A resolution float Grid size for marker detection algorithm. [m] 0.05 \u22650.0 intensity_pattern array A sequence of high/low intensity to perform pattern matching. 1: high intensity (positive match), 0: not consider, -1: low intensity (negative match) [-1, -1, 0, 1, 1, 1, 1, 1, 0, -1, -1] N/A match_intensity_difference_threshold float Threshold for determining high/low. 20 \u22650 positive_match_num_threshold float Threshold for the number of required matches with the pattern. 3 \u22650 negative_match_num_threshold float Threshold for the number of required matches with the pattern. 3 \u22650 vote_threshold_for_detect_marker float Threshold for the number of rings matching the pattern needed to detect it as a marker. 20 \u22650 marker_height_from_ground float Height from the ground to the center of the marker. [m] 1.075 N/A self_pose_timeout_sec float Timeout for self pose. [sec] 1.0 \u22650.0 self_pose_distance_tolerance_m float Tolerance for the distance between two points when performing linear interpolation. [m] 1.0 \u22650.0 limit_distance_from_self_pose_to_nearest_marker float Distance limit for the purpose of determining whether the node should detect a marker. [m] 2.0 \u22650.0 limit_distance_from_self_pose_to_marker float Distance limit for avoiding miss detection. [m] 2.0 \u22650.0 base_covariance array Output covariance in the base_link coordinate. [0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 7.569e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 7.569e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00030625] N/A marker_width float Width of a marker. 
This param is used for visualizing the detected marker pointcloud. [m] 0.8 \u22650.0 enable_save_log boolean False N/A save_file_directory_path string $(env HOME)/detected_reflector_intensity N/A save_file_name string detected_reflector_intensity N/A save_frame_id string velodyne_top N/A"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#how-to-launch","title":"How to launch","text":"

When launching Autoware, set lidar-marker for pose_source.

ros2 launch autoware_launch ... \\\npose_source:=lidar-marker \\\n...\n
"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#design","title":"Design","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#flowchart","title":"Flowchart","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#detection-algorithm","title":"Detection Algorithm","text":"
  1. Split the LiDAR point cloud into rings along the x-axis of the base_link coordinate system at intervals of the resolution size.
  2. Find the portions whose intensity sequence matches the intensity_pattern.
  3. Perform steps 1 and 2 for each ring, accumulate the matching indices, and detect portions where the count exceeds the vote_threshold_for_detect_marker as markers (see the sketch below).
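A minimal sketch of the per-ring pattern matching in steps 1-3 (simplified: high/low is decided by a single threshold; not the node's actual code):

```python
import numpy as np

def ring_votes(ring_intensity, pattern, hi_threshold, pos_min, neg_min):
    """Accumulate votes for windows of a binned intensity ring that match the
    intensity_pattern (1: expect high, -1: expect low, 0: don't care)."""
    votes = np.zeros(len(ring_intensity), dtype=int)
    n = len(pattern)
    for i in range(len(ring_intensity) - n + 1):
        window = ring_intensity[i : i + n]
        pos = sum(p == 1 and v > hi_threshold for p, v in zip(pattern, window))
        neg = sum(p == -1 and v <= hi_threshold for p, v in zip(pattern, window))
        if pos >= pos_min and neg >= neg_min:
            votes[i + n // 2] += 1  # vote at the window center
    # Summing votes over all rings, bins whose total exceeds
    # vote_threshold_for_detect_marker are reported as markers.
    return votes
```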
"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#sample-dataset","title":"Sample Dataset","text":"

This dataset was acquired at the National Institute for Land and Infrastructure Management's full-scale tunnel experiment facility. The reflectors were installed by Taisei Corporation.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#collaborators","title":"Collaborators","text":""},{"location":"localization/autoware_localization_error_monitor/","title":"autoware_localization_error_monitor","text":""},{"location":"localization/autoware_localization_error_monitor/#autoware_localization_error_monitor","title":"autoware_localization_error_monitor","text":""},{"location":"localization/autoware_localization_error_monitor/#purpose","title":"Purpose","text":"

autoware_localization_error_monitor is a package for diagnosing localization errors by monitoring the uncertainty of the localization results. The package monitors two values: the size of the long radius of the confidence ellipse, and the size of the confidence ellipse along the lateral (body-frame) direction.
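A minimal sketch of how those two values can be derived from the 2x2 xy covariance of the odometry input (not the node's actual code; scale corresponds to the scale parameter below):

```python
import numpy as np

def ellipse_metrics(cov_xy: np.ndarray, yaw: float, scale: float = 3.0):
    """Return (long_radius, lateral_size) of the scaled confidence ellipse."""
    eigenvalues = np.linalg.eigvalsh(cov_xy)
    long_radius = scale * np.sqrt(eigenvalues.max())
    # variance of the estimate projected onto the vehicle's lateral axis
    lateral_axis = np.array([-np.sin(yaw), np.cos(yaw)])
    lateral_size = scale * np.sqrt(lateral_axis @ cov_xy @ lateral_axis)
    return long_radius, lateral_size
```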

"},{"location":"localization/autoware_localization_error_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_localization_error_monitor/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization result"},{"location":"localization/autoware_localization_error_monitor/#output","title":"Output","text":"Name Type Description debug/ellipse_marker visualization_msgs::msg::Marker ellipse marker diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics outputs"},{"location":"localization/autoware_localization_error_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range scale float scale factor for monitored values 3 N/A error_ellipse_size float error threshold for long radius of confidence ellipse [m] 1.5 N/A warn_ellipse_size float warning threshold for long radius of confidence ellipse [m] 1.2 N/A error_ellipse_size_lateral_direction float error threshold for size of confidence ellipse along lateral direction [m] 0.3 N/A warn_ellipse_size_lateral_direction float warning threshold for size of confidence ellipse along lateral direction [m] 0.25 N/A"},{"location":"localization/autoware_ndt_scan_matcher/","title":"autoware_ndt_scan_matcher","text":""},{"location":"localization/autoware_ndt_scan_matcher/#autoware_ndt_scan_matcher","title":"autoware_ndt_scan_matcher","text":""},{"location":"localization/autoware_ndt_scan_matcher/#purpose","title":"Purpose","text":"

autoware_ndt_scan_matcher is a package for position estimation using the NDT scan matching method.

There are two main functions in this package: position estimation by scan matching, and initial position estimation via the ndt_align_srv service.

One optional function is regularization. Please see the regularization section below for details; it is disabled by default.

"},{"location":"localization/autoware_ndt_scan_matcher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_ndt_scan_matcher/#input","title":"Input","text":"Name Type Description ekf_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped initial pose points_raw sensor_msgs::msg::PointCloud2 sensor pointcloud sensing/gnss/pose_with_covariance sensor_msgs::msg::PoseWithCovarianceStamped base position for regularization term

sensing/gnss/pose_with_covariance is required only when regularization is enabled.

"},{"location":"localization/autoware_ndt_scan_matcher/#output","title":"Output","text":"Name Type Description ndt_pose geometry_msgs::msg::PoseStamped estimated pose ndt_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped estimated pose with covariance /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics points_aligned sensor_msgs::msg::PointCloud2 [debug topic] pointcloud aligned by scan matching points_aligned_no_ground sensor_msgs::msg::PointCloud2 [debug topic] no ground pointcloud aligned by scan matching initial_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped [debug topic] initial pose used in scan matching multi_ndt_pose geometry_msgs::msg::PoseArray [debug topic] estimated poses from multiple initial poses in real-time covariance estimation multi_initial_pose geometry_msgs::msg::PoseArray [debug topic] initial poses for real-time covariance estimation exe_time_ms autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] execution time for scan matching [ms] transform_probability autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching no_ground_transform_probability autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching based on no ground LiDAR scan iteration_num autoware_internal_debug_msgs::msg::Int32Stamped [debug topic] number of scan matching iterations initial_to_result_relative_pose geometry_msgs::msg::PoseStamped [debug topic] relative pose between the initial point and the convergence point initial_to_result_distance autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the initial point and the convergence point [m] initial_to_result_distance_old autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the older of the two initial points used in linear interpolation and the convergence point [m] initial_to_result_distance_new autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the newer of the two initial points used in linear interpolation and the convergence point [m] ndt_marker visualization_msgs::msg::MarkerArray [debug topic] markers for debugging monte_carlo_initial_pose_marker visualization_msgs::msg::MarkerArray [debug topic] particles used in initial position estimation"},{"location":"localization/autoware_ndt_scan_matcher/#service","title":"Service","text":"Name Type Description ndt_align_srv autoware_localization_srvs::srv::PoseWithCovarianceStamped service to estimate initial pose"},{"location":"localization/autoware_ndt_scan_matcher/#parameters","title":"Parameters","text":""},{"location":"localization/autoware_ndt_scan_matcher/#core-parameters","title":"Core Parameters","text":""},{"location":"localization/autoware_ndt_scan_matcher/#frame","title":"Frame","text":"Name Type Description Default Range base_frame string Vehicle reference frame. base_link N/A ndt_base_frame string NDT reference frame. ndt_base_link N/A map_frame string Map frame. map N/A"},{"location":"localization/autoware_ndt_scan_matcher/#sensor-points","title":"Sensor Points","text":"Name Type Description Default Range timeout_sec float Tolerance of timestamp difference between current time and sensor pointcloud. [sec] 1 \u22650.0 required_distance float Required distance of input sensor points [m]. If the max distance of input sensor points is lower than this value, the scan matching will not be performed. 
10 N/A"},{"location":"localization/autoware_ndt_scan_matcher/#ndt","title":"Ndt","text":"Name Type Description Default Range trans_epsilon float The maximum difference between two consecutive transformations in order to consider convergence. 0.01 \u22650.0 step_size float The newton line search maximum step length. 0.1 \u22650.0 resolution float The ND voxel grid resolution. 2 \u22650.0 max_iterations float The number of iterations required to calculate alignment. 30 \u22651 num_threads float Number of threads used for parallel computing. 4 \u22651"},{"location":"localization/autoware_ndt_scan_matcher/#initial-pose-estimation","title":"Initial Pose Estimation","text":"Name Type Description Default Range particles_num float The number of particles to estimate initial pose. 200 \u22651 n_startup_trials float The number of initial random trials in the TPE (Tree-Structured Parzen Estimator). This value should be equal to or less than 'initial_estimate_particles_num' and more than 0. If it is equal to 'initial_estimate_particles_num', the search will be the same as a full random search. 100 \u22651"},{"location":"localization/autoware_ndt_scan_matcher/#validation","title":"Validation","text":"Name Type Description Default Range initial_pose_timeout_sec float Tolerance of timestamp difference between initial_pose and sensor pointcloud. [sec] 1 \u22650.0 initial_pose_distance_tolerance_m float Tolerance of distance difference between two initial poses used for linear interpolation. [m] 10 \u22650.0 initial_to_result_distance_tolerance_m float Tolerance of distance difference from initial pose to result pose. [m] 3 \u22650.0 critical_upper_bound_exe_time_ms float The execution time which means probably NDT cannot matches scans properly. [ms] 100 \u22650.0 skipping_publish_num float Tolerance for the number for times rejected estimation results consecutively. 5 \u22651"},{"location":"localization/autoware_ndt_scan_matcher/#score-estimation","title":"Score Estimation","text":"Name Type Description Default Range converged_param_type float Converged param type. 0=TRANSFORM_PROBABILITY, 1=NEAREST_VOXEL_TRANSFORMATION_LIKELIHOOD 1 \u22650\u22641 converged_param_transform_probability float If converged_param_type is 0, threshold for deciding whether to trust the estimation result. 3 \u22650.0 converged_param_nearest_voxel_transformation_likelihood float If converged_param_type is 1, threshold for deciding whether to trust the estimation result. 2.3 \u22650.0"},{"location":"localization/autoware_ndt_scan_matcher/#covariance","title":"Covariance","text":"Name Type Description Default Range output_pose_covariance array The covariance of output pose. Note that this covariance matrix is empirically derived. [0.0225, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0225, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0225, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000625] N/A"},{"location":"localization/autoware_ndt_scan_matcher/#regularization","title":"Regularization","text":""},{"location":"localization/autoware_ndt_scan_matcher/#abstract","title":"Abstract","text":"

This is a function that adds the regularization term to the NDT optimization problem as follows.

\\[ \\begin{align} \\min_{\\mathbf{R},\\mathbf{t}} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) +\\mathrm{scale\\ factor}\\cdot \\left| \\mathbf{R}^\\top (\\mathbf{t_{base}-\\mathbf{t}}) \\cdot \\begin{pmatrix} 1\\\\ 0\\\\ 0 \\end{pmatrix} \\right|^2 \\end{align} \\]

where \(\mathbf{t}_{\mathrm{base}}\) is the base position measured by GNSS or other means, and \(\mathrm{NDT}(\mathbf{R},\mathbf{t})\) is the pure NDT cost function. The regularization term pulls the optimal solution toward the base position along the longitudinal direction of the vehicle. Only errors along the longitudinal direction with respect to the base position are considered; errors along the Z axis and the lateral axis are ignored.

Although the regularization term has the rotation as a parameter, the gradient and Hessian associated with it are not computed, in order to stabilize the optimization. Specifically, the gradients are computed as follows.

\\[ \\begin{align} &g_x=\\nabla_x \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\cos\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_y=\\nabla_y \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\sin\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_z=\\nabla_z \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\\\ &g_\\mathbf{R}=\\nabla_\\mathbf{R} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\end{align} \\]

Regularization is disabled by default. If you wish to use it, please edit the following parameters to enable it.

"},{"location":"localization/autoware_ndt_scan_matcher/#where-is-regularization-available","title":"Where is regularization available","text":"

This feature is effective on feature-less roads where GNSS is available, such as bridges, highways, and farm roads.

By remapping the base position topic to something other than GNSS, as described below, this feature can also be effective outside such environments.

"},{"location":"localization/autoware_ndt_scan_matcher/#using-other-base-position","title":"Using other base position","text":"

Other than GNSS, you can provide other global position topics obtained from magnetic markers, visual markers, etc., if they are available in your environment. (Currently, Autoware does not provide a node that publishes such a pose.) To use your topic for regularization, you need to remap input_regularization_pose_topic to your topic in ndt_scan_matcher.launch.xml. By default, it is remapped to /sensing/gnss/pose_with_covariance.

"},{"location":"localization/autoware_ndt_scan_matcher/#limitations","title":"Limitations","text":"

Since this function determines the base position by linear interpolation from the recently subscribed poses, topics that are published at a low frequency relative to the driving speed cannot be used. Inappropriate linear interpolation may result in bad optimization results.

When using GNSS for the base position, regularization can have negative effects in tunnels, indoors, and near skyscrapers. This is because, if the base position is far off from the true value, NDT scan matching may converge to an inappropriate optimum.

"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_1","title":"Parameters","text":"Name Type Description Default Range enable boolean Regularization switch. 0 N/A scale_factor float Regularization scale factor. 0.01 \u22650.0

Regularization is disabled by default because GNSS is not always accurate enough to provide an appropriate base position in every scene.

If the scale_factor is too large, the NDT will be drawn to the base position and scan matching may fail. Conversely, if it is too small, the regularization benefit will be lost.

Note that setting scale_factor to 0 is equivalent to disabling regularization.

"},{"location":"localization/autoware_ndt_scan_matcher/#example","title":"Example","text":"

The following figures show tested maps.

The following figures show the trajectories estimated on the feature-less map with standard NDT and regularization-enabled NDT, respectively. The color of the trajectory indicates the error (meter) from the reference trajectory, which is computed with the feature-rich map.

"},{"location":"localization/autoware_ndt_scan_matcher/#dynamic-map-loading","title":"Dynamic map loading","text":"

Autoware supports a dynamic map loading feature for ndt_scan_matcher. With this feature, NDT dynamically requests the surrounding pointcloud map from pointcloud_map_loader, then receives and preprocesses the map in an online fashion.

Using this feature, ndt_scan_matcher can theoretically handle maps of any size in terms of memory usage. (Note that limitations may still exist due to other factors, e.g., floating-point error.)

"},{"location":"localization/autoware_ndt_scan_matcher/#additional-interfaces","title":"Additional interfaces","text":""},{"location":"localization/autoware_ndt_scan_matcher/#additional-outputs","title":"Additional outputs","text":"Name Type Description debug/loaded_pointcloud_map sensor_msgs::msg::PointCloud2 pointcloud maps used for localization (for debug)"},{"location":"localization/autoware_ndt_scan_matcher/#additional-client","title":"Additional client","text":"Name Type Description client_map_loader autoware_map_msgs::srv::GetDifferentialPointCloudMap map loading client"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_2","title":"Parameters","text":"Name Type Description Default Range update_distance float Dynamic map loading distance. 20 \u22650.0 map_radius float Dynamic map loading loading radius. 150 \u22650.0 lidar_radius float Radius of input LiDAR range (used for diagnostics of dynamic map loading). 100 \u22650.0"},{"location":"localization/autoware_ndt_scan_matcher/#notes-for-dynamic-map-loading","title":"Notes for dynamic map loading","text":"

To use dynamic map loading feature for ndt_scan_matcher, you also need to split the PCD files into grids (recommended size: 20[m] x 20[m])

Note that dynamic map loading may FAIL if the map is split into two or more large-size pieces (e.g. 1000[m] x 1000[m]). Please provide either a single whole PCD file or PCD files divided into small grids.

Here is a split PCD map for sample-map-rosbag from Autoware tutorial: sample-map-rosbag_split.zip
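A minimal sketch of the grid-splitting idea in NumPy (file I/O omitted; in practice a dedicated map-splitting tool is typically used):

```python
import numpy as np

def split_into_grids(points: np.ndarray, grid_size: float = 20.0):
    """Bucket an Nx3 map point cloud into grid_size x grid_size tiles in xy."""
    keys = np.floor(points[:, :2] / grid_size).astype(int)
    tiles: dict[tuple[int, int], list] = {}
    for key, point in zip(map(tuple, keys), points):
        tiles.setdefault(key, []).append(point)
    # Each tile would then be saved as its own PCD file plus a metadata entry.
    return {k: np.asarray(v) for k, v in tiles.items()}
```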

PCD files How NDT loads map(s) single file at once (standard) multiple files dynamically"},{"location":"localization/autoware_ndt_scan_matcher/#scan-matching-score-based-on-no-ground-lidar-scan","title":"Scan matching score based on no ground LiDAR scan","text":""},{"location":"localization/autoware_ndt_scan_matcher/#abstract_1","title":"Abstract","text":"

This function estimates the scan matching score using a LiDAR scan from which ground points have been removed. This score can reflect the current localization performance more accurately. See the related issue.

"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_3","title":"Parameters","text":"Name Type Description Default Range enable boolean A flag for using scan matching score based on de-grounded LiDAR scan. 0 N/A z_margin_for_ground_removal float If lidar_point.z - base_link.z <= this threshold , the point will be removed. 0.8 \u22650.0"},{"location":"localization/autoware_ndt_scan_matcher/#2d-real-time-covariance-estimation","title":"2D real-time covariance estimation","text":""},{"location":"localization/autoware_ndt_scan_matcher/#abstract_2","title":"Abstract","text":"

Initially, the covariance of NDT scan matching is a fixed value (FIXED_VALUE mode). Three modes are provided for estimating the 2D covariance (xx, xy, yx, yy) in real time: LAPLACE_APPROXIMATION, MULTI_NDT, and MULTI_NDT_SCORE. LAPLACE_APPROXIMATION calculates the inverse of the XY (2x2) block of the Hessian obtained by NDT scan matching and uses it as the covariance matrix. MULTI_NDT and MULTI_NDT_SCORE instead use NDT convergence from multiple initial poses to obtain the 2D covariance. Ideally, the arrangement of the multiple initial poses would be limited efficiently by the Hessian matrix of the NDT score function; in this implementation, the number of initial positions is fixed to simplify the code. To obtain the covariance, MULTI_NDT computes until convergence at each initial position, while MULTI_NDT_SCORE uses the nearest voxel transformation likelihood. The covariance can be seen as an error ellipse by displaying ndt_pose_with_covariance in rviz2. See the original paper.

Note that this function may degrade healthy system behavior if it consumes too many computational resources.
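For illustration, a minimal sketch of the LAPLACE_APPROXIMATION idea (sign conventions depend on whether a score is maximized or a cost minimized; this is not the package's actual code):

```python
import numpy as np

def laplace_approximation_cov(hessian: np.ndarray, scale_factor: float = 1.0):
    """Approximate the 2D (x, y) covariance from the 6x6 Hessian returned by
    NDT scan matching; at a maximum of the score the XY block is
    negative-definite, so its negated inverse acts as a covariance."""
    h_xy = hessian[:2, :2]
    return scale_factor * np.linalg.inv(-h_xy)
```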

"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_4","title":"Parameters","text":"

There are three types of real-time 2D covariance calculation; you can select the method by changing covariance_estimation_type. The initial_pose_offset_model is rotated around (x, y) = (0, 0) in the direction of the first principal component of the Hessian matrix. initial_pose_offset_model_x and initial_pose_offset_model_y must have the same number of elements. In MULTI_NDT_SCORE mode, the scale of the output 2D covariance can be adjusted with the temperature parameter.

Name Type Description Default Range covariance_estimation_type float 2D real-time covariance estimation type. 0=FIXED_VALUE, 1=LAPLACE_APPROXIMATION, 2=MULTI_NDT, 3=MULTI_NDT_SCORE (FIXED_VALUE mode does not perform 2D real-time covariance estimation) 0 \u22650\u22643 initial_pose_offset_model_x array Offset arrangement in covariance estimation [m]. initial_pose_offset_model_x & initial_pose_offset_model_y must have the same number of elements. [0.0, 0.0, 0.5, -0.5, 1.0, -1.0] N/A initial_pose_offset_model_y array Offset arrangement in covariance estimation [m]. initial_pose_offset_model_x & initial_pose_offset_model_y must have the same number of elements. [0.5, -0.5, 0.0, 0.0, 0.0, 0.0] N/A temperature float In MULTI_NDT_SCORE, the parameter that adjusts the estimated 2D covariance 0.05 >0 scale_factor float Scale value for adjusting the estimated covariance by a constant multiplication 1.0 >0"},{"location":"localization/autoware_ndt_scan_matcher/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/autoware_ndt_scan_matcher/#scan_matching_status","title":"scan_matching_status","text":"Name Description Transition condition to Warning Transition condition to Error Whether to reject the estimation result (affects skipping_publish_num) topic_time_stamp the time stamp of input topic none none no sensor_points_size the size of sensor points the size is 0 none yes sensor_points_delay_time_sec the delay time of sensor points the time is longer than sensor_points.timeout_sec none yes is_succeed_transform_sensor_points whether the sensor points transform succeeded or not none failed yes sensor_points_max_distance the max distance of sensor points the max distance is shorter than sensor_points.required_distance none yes is_activated whether the node is in the "activate" state or not not "activate" state none if is_activated is false, then estimation is not executed and skipping_publish_num is set to 0. is_succeed_interpolate_initial_pose whether the interpolation of the initial pose succeeded or not failed. (1) the size of initial_pose_buffer_ is smaller than 2. (2) the timestamp difference between initial_pose and sensor pointcloud is longer than validation.initial_pose_timeout_sec. 
(3) distance difference between two initial poses used for linear interpolation is longer than validation.initial_pose_distance_tolerance_m none yes is_set_map_points whether the map points are set or not not set none yes iteration_num the number of alignment iterations the number of times is larger than ndt.max_iterations none yes local_optimal_solution_oscillation_num the number of times the solution is judged to be oscillating the number of times is larger than 10 none yes transform_probability the score of how well the map aligns with the sensor points the score is smaller than score_estimation.converged_param_transform_probability (only when score_estimation.converged_param_type is 0=TRANSFORM_PROBABILITY) none yes transform_probability_diff the tp score difference for the current ndt optimization none none no transform_probability_before the tp score before the current ndt optimization none none no nearest_voxel_transformation_likelihood the score of how well the map aligns with the sensor points the score is smaller than score_estimation.converged_param_nearest_voxel_transformation_likelihood (only when score_estimation.converged_param_type is 1=NEAREST_VOXEL_TRANSFORMATION_LIKELIHOOD) none yes nearest_voxel_transformation_likelihood_diff the nvtl score difference for the current ndt optimization none none no nearest_voxel_transformation_likelihood_before the nvtl score before the current ndt optimization none none no distance_initial_to_result the distance between the position before convergence processing and the position after it the distance is longer than validation.initial_to_result_distance_tolerance_m none no execution_time the time for convergence processing the time is longer than validation.critical_upper_bound_exe_time_ms none no skipping_publish_num the number of consecutively rejected estimation results the number of times is validation.skipping_publish_num or more none -

\u203bThe sensor_points_callback shares the same callback group as the trigger_node_service and ndt_align_service. Consequently, if the initial pose estimation takes too long, this diagnostic may become stale.

"},{"location":"localization/autoware_ndt_scan_matcher/#initial_pose_subscriber_status","title":"initial_pose_subscriber_status","text":"Name Description Transition condition to Warning Transition condition to Error topic_time_stamp the time stamp of input topic none none is_activated whether the node is in the \"activate\" state or not not \"activate\" state none is_expected_frame_id whether the input frame_id is the same as frame.map_frame or not none not the same"},{"location":"localization/autoware_ndt_scan_matcher/#regularization_pose_subscriber_status","title":"regularization_pose_subscriber_status","text":"Name Description Transition condition to Warning Transition condition to Error topic_time_stamp the time stamp of input topic none none"},{"location":"localization/autoware_ndt_scan_matcher/#trigger_node_service_status","title":"trigger_node_service_status","text":"Name Description Transition condition to Warning Transition condition to Error service_call_time_stamp the time stamp of service calling none none is_activated whether the node is in the \"activate\" state or not none none is_succeed_service whether the process of service is succeed or not none none

\u203b This diagnostic is only published when the service is called, so it becomes stale after the initial pose estimation is completed.

"},{"location":"localization/autoware_ndt_scan_matcher/#ndt_align_service_status","title":"ndt_align_service_status","text":"Name Description Transition condition to Warning Transition condition to Error service_call_time_stamp the time stamp of service calling none none is_succeed_transform_initial_pose whether transform initial pose is succeed or not none failed is_need_rebuild whether it need to rebuild the map. If the map has not been loaded yet or if distance_last_update_position_to_current_position encounters is an Error state, it is considered necessary to reconstruct the map, and is_need_rebuild becomes True. none none maps_size_before the number of maps before update map none none is_succeed_call_pcd_loader whether call pcd_loader service is succeed or not failed none maps_to_add_size the number of maps to be added none none maps_to_remove_size the number of maps to be removed none none map_update_execution_time the time for map updating none none maps_size_after the number of maps after update map none none is_updated_map whether map is updated. If the map update couldn't be performed or there was no need to update the map, it becomes False none is_updated_map is False but is_need_rebuild is True is_set_map_points whether the map points is set or not not set none is_set_sensor_points whether the sensor points is set or not not set none best_particle_score the best score of particle none none is_succeed_service whether the process of service is succeed or not failed none

\u203b This diagnostic is only published when the service is called, so it becomes stale after the initial pose estimation is completed.

"},{"location":"localization/autoware_ndt_scan_matcher/#map_update_status","title":"map_update_status","text":"Name Description Transition condition to Warning Transition condition to Error timer_callback_time_stamp the time stamp of timer_callback calling none none is_activated whether the node is in the \"activate\" state or not not \"activate\" state none is_set_last_update_position whether the last_update_position is set or not not set none distance_last_update_position_to_current_position the distance of last_update_position to current position none (the distance + dynamic_map_loading.lidar_radius) is larger than dynamic_map_loading.map_radius is_need_rebuild whether it need to rebuild the map. If the map has not been loaded yet or if distance_last_update_position_to_current_position encounters is an Error state, it is considered necessary to reconstruct the map, and is_need_rebuild becomes True. none none maps_size_before the number of maps before update map none none is_succeed_call_pcd_loader whether call pcd_loader service is succeed or not failed none maps_to_add_size the number of maps to be added none none maps_to_remove_size the number of maps to be removed none none map_update_execution_time the time for map updating none none maps_size_after the number of maps after update map none none is_updated_map whether map is updated. If the map update couldn't be performed or there was no need to update the map, it becomes False none is_updated_map is False but is_need_rebuild is True"},{"location":"localization/autoware_ndt_scan_matcher/include/autoware/ndt_scan_matcher/ndt_omp/","title":"ndt_omp","text":""},{"location":"localization/autoware_ndt_scan_matcher/include/autoware/ndt_scan_matcher/ndt_omp/#ndt_omp","title":"ndt_omp","text":"cspell:ignore Kenji, Koide <-->

The code in this directory is derived from https://github.com/tier4/ndt_omp, which is a fork of https://github.com/koide3/ndt_omp.

We use this code in accordance with the LICENSE, and we have directly confirmed this with Dr. Kenji Koide.

We sincerely appreciate Dr. Koide\u2019s support and contributions.

"},{"location":"localization/autoware_pose2twist/","title":"autoware_pose2twist","text":""},{"location":"localization/autoware_pose2twist/#autoware_pose2twist","title":"autoware_pose2twist","text":""},{"location":"localization/autoware_pose2twist/#purpose","title":"Purpose","text":"

The autoware_pose2twist package calculates velocity from the input pose history. In addition to the computed twist, this node outputs the linear-x and angular-z components as float messages to simplify debugging.

The twist.linear.x is calculated as sqrt(dx * dx + dy * dy + dz * dz) / dt, and the values in the y and z fields are zero. The twist.angular is calculated as relative_rotation_vector / dt for each field.
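A minimal sketch of this computation using SciPy rotations (not the node's actual code):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_twist(p_prev, q_prev, p_curr, q_curr, dt):
    """p_*: positions (3,); q_*: quaternions [x, y, z, w]; dt in seconds."""
    # linear.x = travelled distance / dt; the y and z fields stay zero
    linear_x = np.linalg.norm(p_curr - p_prev) / dt
    # angular = relative rotation vector / dt, applied field by field
    relative = Rotation.from_quat(q_prev).inv() * Rotation.from_quat(q_curr)
    angular = relative.as_rotvec() / dt
    return linear_x, angular
```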

"},{"location":"localization/autoware_pose2twist/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_pose2twist/#input","title":"Input","text":"Name Type Description pose geometry_msgs::msg::PoseStamped pose source to used for the velocity calculation."},{"location":"localization/autoware_pose2twist/#output","title":"Output","text":"Name Type Description twist geometry_msgs::msg::TwistStamped twist calculated from the input pose history. linear_x autoware_internal_debug_msgs::msg::Float32Stamped linear-x field of the output twist. angular_z autoware_internal_debug_msgs::msg::Float32Stamped angular-z field of the output twist."},{"location":"localization/autoware_pose2twist/#parameters","title":"Parameters","text":"

none.

"},{"location":"localization/autoware_pose2twist/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

none.

"},{"location":"localization/autoware_pose_covariance_modifier/","title":"Autoware Pose Covariance Modifier Node","text":""},{"location":"localization/autoware_pose_covariance_modifier/#autoware-pose-covariance-modifier-node","title":"Autoware Pose Covariance Modifier Node","text":""},{"location":"localization/autoware_pose_covariance_modifier/#purpose","title":"Purpose","text":"

This package makes it possible to use GNSS and NDT poses together in real-time localization.

"},{"location":"localization/autoware_pose_covariance_modifier/#function","title":"Function","text":"

This package takes in GNSS (Global Navigation Satellite System) and NDT (Normal Distribution Transform) poses with covariances.

It outputs a single pose with covariance: the GNSS pose, the NDT pose, or both, depending on the GNSS standard deviation values.

"},{"location":"localization/autoware_pose_covariance_modifier/#assumptions","title":"Assumptions","text":""},{"location":"localization/autoware_pose_covariance_modifier/#requirements","title":"Requirements","text":""},{"location":"localization/autoware_pose_covariance_modifier/#description","title":"Description","text":"

GNSS and NDT nodes provide the pose with covariance data utilized in an Extended Kalman Filter (EKF).

Accurate covariance values are crucial for the effectiveness of the EKF in estimating the state.

The GNSS system generates reliable standard deviation values, which can be transformed into covariance measures.

However, we currently don't have a reliable way to determine the covariance values for the NDT poses, and the NDT matching system in Autoware outputs poses with preset covariance values.

For this reason, this package is designed to manage the selection of the pose source, based on the standard deviation values provided by the GNSS system.

It also tunes the covariance values of the NDT poses, based on the GNSS standard deviation values.

"},{"location":"localization/autoware_pose_covariance_modifier/#flowcharts","title":"Flowcharts","text":""},{"location":"localization/autoware_pose_covariance_modifier/#without-this-package","title":"Without this package","text":"

Only the NDT pose is used in localization; the GNSS pose is used only for initialization.

graph TD\n    ndt_scan_matcher[\"ndt_scan_matcher\"] --> |\"/localization/pose_estimator/pose_with_covariance\"| ekf_localizer[\"ekf_localizer\"]\n\nclassDef cl_node fill:#FFF2CC,stroke-width:3px,stroke:#D6B656;\n\nclass ndt_scan_matcher cl_node;\nclass ekf_localizer cl_node;
"},{"location":"localization/autoware_pose_covariance_modifier/#with-this-package","title":"With this package","text":"

Both NDT and GNSS poses are used in localization, depending on the standard deviation values coming from the GNSS system.

Here is a flowchart depicting the process and the predefined thresholds:

graph TD\n    gnss_poser[\"gnss_poser\"] --> |\"/sensing/gnss/<br/>pose_with_covariance\"| pose_covariance_modifier_node\n    ndt_scan_matcher[\"ndt_scan_matcher\"] --> |\"/localization/pose_estimator/ndt_scan_matcher/<br/>pose_with_covariance\"| pose_covariance_modifier_node\n\n    subgraph pose_covariance_modifier_node [\"Pose Covariance Modifier Node\"]\n        pc1{\"gnss_pose_yaw<br/>stddev\"}\n        pc1 -->|\"<= 0.3 rad\"| pc2{\"gnss_pose_z<br/>stddev\"}\n        pc2 -->|\"<= 0.1 m\"| pc3{\"gnss_pose_xy<br/>stddev\"}\n        pc2 -->|\"&gt; 0.1 m\"| ndt_pose(\"NDT Pose\")\n        pc3 -->|\"<= 0.1 m\"| gnss_pose(\"GNSS Pose\")\n        pc3 -->|\"0.1 m < x <= 0.2 m\"| gnss_ndt_pose(\"`Both GNSS and NDT Pose\n        (_with modified covariance_)`\")\n        pc3 -->|\"&gt; 0.2 m\"| ndt_pose\n        pc1 -->|\"&gt; 0.3 rad\"| ndt_pose\n    end\n\n    pose_covariance_modifier_node -->|\"/localization/pose_estimator/pose_with_covariance\"| ekf_localizer[\"ekf_localizer\"]\n\nclassDef cl_node fill:#FFF2CC,stroke-width:3px,stroke:#D6B656;\nclassDef cl_conditional fill:#FFE6CC,stroke-width:3px,stroke:#D79B00;\nclassDef cl_output fill:#D5E8D4,stroke-width:3px,stroke:#82B366;\n\nclass gnss_poser cl_node;\nclass ndt_scan_matcher cl_node;\nclass ekf_localizer cl_node;\nclass pose_covariance_modifier_node cl_node;\n\nclass pc1 cl_conditional;\nclass pc2 cl_conditional;\nclass pc3 cl_conditional;\n\nclass ndt_pose cl_output;\nclass gnss_pose cl_output;\nclass gnss_ndt_pose cl_output;
"},{"location":"localization/autoware_pose_covariance_modifier/#how-to-use-this-package","title":"How to use this package","text":"

This package is disabled by default in Autoware; you need to enable it manually.

To enable this package, you need to change the use_autoware_pose_covariance_modifier parameter to true within the pose_twist_estimator.launch.xml.

"},{"location":"localization/autoware_pose_covariance_modifier/#without-this-condition-default","title":"Without this condition (default)","text":""},{"location":"localization/autoware_pose_covariance_modifier/#with-this-condition","title":"With this condition","text":""},{"location":"localization/autoware_pose_covariance_modifier/#node","title":"Node","text":""},{"location":"localization/autoware_pose_covariance_modifier/#subscribed-topics","title":"Subscribed topics","text":"Name Type Description input_gnss_pose_with_cov_topic geometry_msgs::msg::PoseWithCovarianceStamped Input GNSS pose topic. input_ndt_pose_with_cov_topic geometry_msgs::msg::PoseWithCovarianceStamped Input NDT pose topic."},{"location":"localization/autoware_pose_covariance_modifier/#published-topics","title":"Published topics","text":"Name Type Description output_pose_with_covariance_topic geometry_msgs::msg::PoseWithCovarianceStamped Output pose topic. This topic is used by the ekf_localizer package. selected_pose_type std_msgs::msg::String Declares which pose sources are used in the output of this package output/ndt_position_stddev std_msgs::msg::Float64 Output pose ndt average standard deviation in position xy. It is published only when the enable_debug_topics is true. output/gnss_position_stddev std_msgs::msg::Float64 Output pose gnss average standard deviation in position xy. It is published only when the enable_debug_topics is true."},{"location":"localization/autoware_pose_covariance_modifier/#parameters","title":"Parameters","text":"

The parameters are set in config/pose_covariance_modifier.param.yaml.

Name Type Description Default Range threshold_gnss_stddev_yaw_deg_max float If GNSS yaw standard deviation values are larger than this, trust only NDT 0.3 N/A threshold_gnss_stddev_z_max float If GNSS position Z standard deviation values are larger than this, trust only NDT 0.1 N/A threshold_gnss_stddev_xy_bound_lower float If GNSS position XY standard deviation values are lower than this, trust only GNSS 0.1 N/A threshold_gnss_stddev_xy_bound_upper float If GNSS position XY standard deviation values are higher than this, trust only NDT 0.25 N/A ndt_std_dev_bound_lower float Lower bound value for standard deviation of NDT positions (x, y, z) when used with GNSS 0.15 N/A ndt_std_dev_bound_upper float Upper bound value for standard deviation of NDT positions (x, y, z) when used with GNSS 0.3 N/A gnss_pose_timeout_sec float If GNSS data is not received for this duration, trust only NDT 1 N/A enable_debug_topics boolean Publish additional debug topics 1 N/A"},{"location":"localization/autoware_pose_covariance_modifier/#faq","title":"FAQ","text":""},{"location":"localization/autoware_pose_covariance_modifier/#how-are-varying-frequency-rates-handled","title":"How are varying frequency rates handled?","text":"

The GNSS and NDT pose topics may have different frequencies; the GNSS pose topic may have a higher frequency than the NDT one.

Let's assume that the inputs have the following frequencies:

Source Frequency GNSS 200 Hz NDT 10 Hz

This package publishes the output poses as they come in, depending on the mode.

End result:

Mode Output Freq GNSS Only 200 Hz GNSS + NDT 210 Hz NDT Only 10 Hz"},{"location":"localization/autoware_pose_covariance_modifier/#how-and-when-are-the-ndt-covariance-values-overwritten","title":"How and when are the NDT covariance values overwritten?","text":"Mode Outputs, Covariance GNSS Only GNSS, Unmodified GNSS + NDT GNSS: Unmodified, NDT: Interpolated NDT Only NDT, Unmodified

NDT covariance values are overwritten only in the GNSS + NDT mode.

This enables a smooth transition between GNSS Only and NDT Only modes.

In this mode, both NDT and GNSS poses are published from this node.

"},{"location":"localization/autoware_pose_covariance_modifier/#ndt-covariance-calculation","title":"NDT covariance calculation","text":"

As the gnss_std_dev increases within its bounds, ndt_std_dev should proportionally decrease within its own bounds.

To achieve this, we first linearly interpolate:
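A minimal sketch of that interpolation using the default parameter values above (not the package's actual code):

```python
def interpolate_ndt_std_dev(gnss_std_dev: float) -> float:
    g_lo, g_hi = 0.1, 0.25  # threshold_gnss_stddev_xy_bound_lower / _upper
    n_lo, n_hi = 0.15, 0.3  # ndt_std_dev_bound_lower / _upper
    # Clamp to the GNSS bounds, then map [g_lo, g_hi] onto [n_hi, n_lo],
    # so ndt_std_dev decreases as gnss_std_dev increases.
    ratio = (min(max(gnss_std_dev, g_lo), g_hi) - g_lo) / (g_hi - g_lo)
    return n_hi - ratio * (n_hi - n_lo)
```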

"},{"location":"localization/autoware_pose_estimator_arbiter/","title":"autoware_pose_estimator_arbiter","text":""},{"location":"localization/autoware_pose_estimator_arbiter/#autoware_pose_estimator_arbiter","title":"autoware_pose_estimator_arbiter","text":"

Table of contents:

"},{"location":"localization/autoware_pose_estimator_arbiter/#abstract","title":"Abstract","text":"

This package launches multiple pose estimators and provides the capability to stop or resume specific pose estimators based on the situation. It provides provisional switching rules and will be adaptable to a wide variety of rules in the future.

Please refer to this discussion about other ideas on implementation.

"},{"location":"localization/autoware_pose_estimator_arbiter/#why-do-we-need-a-stopresume-mechanism","title":"Why do we need a stop/resume mechanism?","text":"

It is possible to launch multiple pose_estimators and fuse them using a Kalman filter by editing launch files. However, this approach is not preferable due to computational costs.

In particular, NDT and YabLoc are computationally intensive, and it is not recommended to run them simultaneously. Also, even if both could be activated at the same time, the Kalman filter may be adversely affected when one of them produces bad output.

[!NOTE] Currently, there is ONLY ONE RULE implemented, which always enables all pose_estimators. If users want to toggle pose_estimators with their own rules, they need to add new rules by referring to example_rule. The example_rule has source code that can be used as a reference for implementing the rules.

"},{"location":"localization/autoware_pose_estimator_arbiter/#supporting-pose_estimators","title":"Supporting pose_estimators","text":""},{"location":"localization/autoware_pose_estimator_arbiter/#demonstration","title":"Demonstration","text":"

The following video demonstrates the switching of four different pose estimators.

Users can reproduce the demonstration using the following data and launch command:

sample data (rosbag & map) The rosbag is simulated data created by AWSIM. The map is an edited version of the original map data published on the AWSIM documentation page to make it suitable for multiple pose_estimators.

ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=<your-map-path> \\\nvehicle_model:=sample_vehicle \\\nsensor_model:=awsim_sensor_kit \\\npose_source:=ndt_yabloc_artag_eagleye\n
"},{"location":"localization/autoware_pose_estimator_arbiter/#interfaces","title":"Interfaces","text":"Click to show details ### Parameters There are no parameters. ### Services | Name | Type | Description | | ---------------- | ------------------------------- | ------------------------------- | | `/config_logger` | logging_demo::srv::ConfigLogger | service to modify logging level | ### Clients | Name | Type | Description | | --------------------- | --------------------- | --------------------------------- | | `/yabloc_suspend_srv` | std_srv::srv::SetBool | service to stop or restart yabloc | ### Subscriptions For pose estimator arbitration: | Name | Type | Description | | ------------------------------------- | --------------------------------------------- | -------------- | | `/input/artag/image` | sensor_msgs::msg::Image | ArTag input | | `/input/yabloc/image` | sensor_msgs::msg::Image | YabLoc input | | `/input/eagleye/pose_with_covariance` | geometry_msgs::msg::PoseWithCovarianceStamped | Eagleye output | | `/input/ndt/pointcloud` | sensor_msgs::msg::PointCloud2 | NDT input | For switching rule: | Name | Type | Description | | ----------------------------- | ------------------------------------------------------------ | --------------------------------- | | `/input/vector_map` | autoware_map_msgs::msg::LaneletMapBin | vector map | | `/input/pose_with_covariance` | geometry_msgs::msg::PoseWithCovarianceStamped | localization final output | | `/input/initialization_state` | autoware_adapi_v1_msgs::msg::LocalizationInitializationState | localization initialization state | ### Publications | Name | Type | Description | | -------------------------------------- | --------------------------------------------- | ------------------------------------------------------ | | `/output/artag/image` | sensor_msgs::msg::Image | relayed ArTag input | | `/output/yabloc/image` | sensor_msgs::msg::Image | relayed YabLoc input | | `/output/eagleye/pose_with_covariance` | geometry_msgs::msg::PoseWithCovarianceStamped | relayed Eagleye output | | `/output/ndt/pointcloud` | sensor_msgs::msg::PointCloud2 | relayed NDT input | | `/output/debug/marker_array` | visualization_msgs::msg::MarkerArray | [debug topic] everything for visualization | | `/output/debug/string` | visualization_msgs::msg::MarkerArray | [debug topic] debug information such as current status |"},{"location":"localization/autoware_pose_estimator_arbiter/#trouble-shooting","title":"Trouble Shooting","text":"

If it does not seem to work, users can get more information in the following ways.

[!TIP]

ros2 service call /localization/autoware_pose_estimator_arbiter/config_logger logging_demo/srv/ConfigLogger \\\n'{logger_name: localization.autoware_pose_estimator_arbiter, level: debug}'\n
"},{"location":"localization/autoware_pose_estimator_arbiter/#architecture","title":"Architecture","text":"Click to show details ### Case of running a single pose estimator When each pose_estimator is run alone, this package does nothing. Following figure shows the node configuration when NDT, YabLoc Eagleye and AR-Tag are run independently. ### Case of running multiple pose estimators When running multiple pose_estimators, autoware_pose_estimator_arbiter is executed. It comprises a **switching rule** and **stoppers** corresponding to each pose_estimator. - Stoppers control the pose_estimator activity by relaying inputs or outputs, or by requesting a suspend service. - Switching rules determine which pose_estimator to use. Which stoppers and switching rules are instantiated depends on the runtime arguments at startup. Following figure shows the node configuration when all pose_estimator are run simultaneously. - **NDT** The NDT stopper relays topics in the front side of the point cloud pre-processor. - **YabLoc** The YabLoc stopper relays input image topics in the frontend of the image pre-processor. YabLoc includes a particle filter process that operates on a timer, and even when image topics are not streamed, the particle prediction process continues to work. To address this, the YabLoc stopper also has a service client for explicitly stopping and resuming YabLoc. - **Eagleye** The Eagleye stopper relays Eagleye's output pose topics in the backend of Eagleye's estimation process. Eagleye performs time-series processing internally, and it can't afford to stop the input stream. Furthermore, Eagleye's estimation process is lightweight enough to be run continuously without a significant load, so the relay is inserted in the backend. - **ArTag** The ArTag stopper relays image topics in the front side of the landmark localizer."},{"location":"localization/autoware_pose_estimator_arbiter/#how-to-launch","title":"How to launch","text":"Click to show details The user can launch the desired pose_estimators by giving the pose_estimator names as a concatenation of underscores for the runtime argument `pose_source`.
ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=<your-map-path> \\\nvehicle_model:=sample_vehicle \\\nsensor_model:=awsim_sensor_kit \\\npose_source:=ndt_yabloc_artag_eagleye\n
Even if `pose_source` includes an unexpected string, it will be filtered appropriately. Please see the table below for details. | given runtime argument | parsed autoware_pose_estimator_arbiter's param (pose_sources) | | ------------------------------------------- | ------------------------------------------------------------- | | `pose_source:=ndt` | `[\"ndt\"]` | | `pose_source:=nan` | `[]` | | `pose_source:=yabloc_ndt` | `[\"ndt\",\"yabloc\"]` | | `pose_source:=yabloc_ndt_ndt_ndt` | `[\"ndt\",\"yabloc\"]` | | `pose_source:=ndt_yabloc_eagleye` | `[\"ndt\",\"yabloc\",\"eagleye\"]` | | `pose_source:=ndt_yabloc_nan_eagleye_artag` | `[\"ndt\",\"yabloc\",\"eagleye\",\"artag\"]` |"},{"location":"localization/autoware_pose_estimator_arbiter/#switching-rules","title":"Switching Rules","text":"Click to show details Currently, **ONLY ONE RULE** (`enable_all_rule`) is implemented. In the future, several rules will be implemented and users will be able to select rules. > [!TIP] > There are presets available to extend the rules. If you want to extend the rules, please see [example_rule](./example_rule/README.md). ### Enable All Rule This is the default and simplest rule. This rule enables all pose_estimators regardless of their current state.
flowchart LR\n  A{ }\n  A --whatever --> _A[enable all pose_estimators]
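For readers planning to add their own rule, the sketch below illustrates the general shape of one: a rule maps the current situation to an enable/disable decision per pose_estimator. All names here are hypothetical; the real extension points are in example_rule.

#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch only: the actual base class and signatures are defined
// in example_rule, not here.
struct RuleDecision
{
  std::unordered_map<std::string, bool> enable;  // pose_estimator name -> on/off
};

class EnableAllRule
{
public:
  // Mirrors enable_all_rule: every pose_estimator is enabled regardless of state.
  RuleDecision update(const std::vector<std::string> & estimator_names) const
  {
    RuleDecision decision;
    for (const auto & name : estimator_names) {
      decision.enable[name] = true;
    }
    return decision;
  }
};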
"},{"location":"localization/autoware_pose_estimator_arbiter/#pose-initialization","title":"Pose Initialization","text":"

When using multiple pose_estimators, it is necessary to appropriately adjust the parameters provided to the pose_initializer.

Click to show details The following table shows, for each value of the runtime argument \"pose_source\", which initial pose estimation method is invoked and the parameters that should be provided to the pose_initialization node. To avoid making the application too complicated, a priority is established so that NDT is always used when it is available. (The pose_initializer will only perform NDT-based initial pose estimation when `ndt_enabled` and `yabloc_enabled` are both `true`). This table's usage is described from three perspectives: - **Autoware Users:** Autoware users do not need to consult this table. They simply provide the desired combinations of pose_estimators, and the appropriate parameters are automatically provided to the pose_initializer. - **Autoware Developers:** Autoware developers can consult this table to know which parameters are assigned. - **Developers Implementing New Pose Estimator Switching:** Developers must extend this table and implement the assignment of appropriate parameters to the pose_initializer. | pose_source | invoked initialization method | `ndt_enabled` | `yabloc_enabled` | `gnss_enabled` | `sub_gnss_pose_cov` | | :-------------------------: | ----------------------------- | ------------- | ---------------- | -------------- | -------------------------------------------- | | ndt | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | yabloc | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | eagleye | vehicle needs to run for a while | false | false | true | /localization/pose_estimator/eagleye/... | | artag | 2D Pose Estimate (RViz) | false | false | true | /sensing/gnss/pose_with_covariance | | ndt, yabloc | ndt | true | true | true | /sensing/gnss/pose_with_covariance | | ndt, eagleye | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | ndt, artag | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | yabloc, eagleye | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | yabloc, artag | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | eagleye, artag | vehicle needs to run for a while | false | false | true | /localization/pose_estimator/eagleye/pose... | | ndt, yabloc, eagleye | ndt | true | true | true | /sensing/gnss/pose_with_covariance | | ndt, eagleye, artag | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | yabloc, eagleye, artag | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | ndt, yabloc, eagleye, artag | ndt | true | true | true | /sensing/gnss/pose_with_covariance |"},{"location":"localization/autoware_pose_estimator_arbiter/#future-plans","title":"Future Plans","text":"Click to show details ### gradually switching In the future, this package will provide not only ON/OFF switching, but also a mechanism for low frequency operation, such as 50% NDT & 50% YabLoc. ### stopper for pose_estimators to be added in the future The basic strategy is to realize ON/OFF switching by relaying the input or output topics of that pose_estimator. If a pose_estimator involves time-series processing with heavy computations, it's not possible to pause and resume with just topic relaying. In such cases, there may not be generally applicable solutions, but the following methods may help: 1. Completely stop and **reinitialize** time-series processing, as seen in the case of YabLoc. 2. 
Subscribe to `localization/kinematic_state` and **keep updating states** to ensure that the estimation does not break (relying on the output of the active pose_estimator). 3. The multi-pose_estimator framework **does not support** that particular pose_estimator. Please note that this issue is fundamental to realizing multiple pose_estimators, and it will arise regardless of the architecture proposed here."},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/","title":"example rule","text":""},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#example-rule","title":"example rule","text":"

The example rule provides sample rules for controlling the arbiter. By combining the provided rules, it is possible to realize demonstrations such as the one shown below. Users can extend the rules as needed by referencing this code, allowing them to control the arbiter as desired.

"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#demonstration","title":"Demonstration","text":"

The following video demonstrates the switching of four different pose estimators.

"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#switching-rules","title":"Switching Rules","text":""},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#pcd-map-based-rule","title":"Pcd Map Based Rule","text":"
flowchart LR\n  A{PCD is dense enough}\n  A --true--> B[enable NDT]\n  A --false--> C[enable YabLoc]
"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#vector-map-based-rule","title":"Vector Map Based Rule","text":"
flowchart LR\n  A{ }\n  A --whatever --> _A[When the ego vehicle is in a predetermined pose_estimator_area,\\n it enables the corresponding pose_estimator.]
"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#rule-helpers","title":"Rule helpers","text":"

Rule helpers are auxiliary tools for describing switching rules.

"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#pcd-occupancy","title":"PCD occupancy","text":""},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#pose-estimator-area","title":"Pose estimator area","text":"

The pose_estimator_area is a planar area described by a polygon in lanelet2. The height of the area is meaningless; the node only judges whether the projection of the ego vehicle's position is contained within the polygon.

A sample pose_estimator_area is shown below. The values provided below are placeholders. To be correctly read, the area should have the type \"pose_estimator_specify\" and the subtype should be one of ndt, yabloc, eagleye, or artag.

  <node id=\"1\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"10.0\"/>\n<tag k=\"local_y\" v=\"10.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n<node id=\"2\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"10.0\"/>\n<tag k=\"local_y\" v=\"20.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n<node id=\"3\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"20.0\"/>\n<tag k=\"local_y\" v=\"20.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n<node id=\"4\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"10.0\"/>\n<tag k=\"local_y\" v=\"20.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n\n...\n\n  <way id=\"5\">\n<nd ref=\"1\"/>\n<nd ref=\"2\"/>\n<nd ref=\"3\"/>\n<nd ref=\"4\"/>\n<tag k=\"type\" v=\"pose_estimator_specify\"/>\n<tag k=\"subtype\" v=\"eagleye\"/>\n<tag k=\"area\" v=\"yes\"/>\n</way>\n\n<way id=\"6\">\n<nd ref=\"7\"/>\n<nd ref=\"8\"/>\n<nd ref=\"9\"/>\n<nd ref=\"10\"/>\n<tag k=\"type\" v=\"pose_estimator_specify\"/>\n<tag k=\"subtype\" v=\"yabloc\"/>\n<tag k=\"area\" v=\"yes\"/>\n</way>\n
"},{"location":"localization/autoware_pose_initializer/","title":"autoware_pose_initializer","text":""},{"location":"localization/autoware_pose_initializer/#autoware_pose_initializer","title":"autoware_pose_initializer","text":""},{"location":"localization/autoware_pose_initializer/#purpose","title":"Purpose","text":"

The autoware_pose_initializer is the package that sends an initial pose to ekf_localizer. It receives a roughly estimated initial pose from GNSS or the user, passes the pose to ndt_scan_matcher, and obtains a refined ego pose from ndt_scan_matcher via a service call. Finally, it publishes the initial pose to ekf_localizer. This node depends on the map height fitter library. See here for more details.

"},{"location":"localization/autoware_pose_initializer/#interfaces","title":"Interfaces","text":""},{"location":"localization/autoware_pose_initializer/#parameters","title":"Parameters","text":"Name Type Description Default Range user_defined_initial_pose.enable string If true, user_defined_initial_pose.pose is set as the initial position. [boolean] false N/A user_defined_initial_pose.pose string initial pose (x, y, z, quat_x, quat_y, quat_z, quat_w). [array] [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0] N/A gnss_pose_timeout float The duration that the GNSS pose is valid. [sec] 3.0 \u22650.0 pose_error_check_enabled boolean If true, check error between initial pose result and GNSS pose. false N/A pose_error_threshold float The error threshold between GNSS and initial pose 5.0 \u22650.0 stop_check_enabled string If true, initialization is accepted only when the vehicle is stopped. true N/A stop_check_duration float The duration used for the stop check above. [sec] 3.0 \u22650.0 ekf_enabled string If true, EKF localizer is activated. true N/A gnss_enabled string If true, use the GNSS pose when no pose is specified. true N/A yabloc_enabled string If true, YabLocModule is used. true N/A ndt_enabled string If true, the pose will be estimated by NDT scan matcher, otherwise it is passed through. true N/A gnss_particle_covariance array gnss particle covariance [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.0] N/A output_pose_covariance array output pose covariance [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2] N/A"},{"location":"localization/autoware_pose_initializer/#services","title":"Services","text":"Name Type Description /localization/initialize tier4_localization_msgs::srv::InitializeLocalization initial pose from api"},{"location":"localization/autoware_pose_initializer/#clients","title":"Clients","text":"Name Type Description /localization/pose_estimator/ndt_align_srv tier4_localization_msgs::srv::PoseWithCovarianceStamped pose estimation service"},{"location":"localization/autoware_pose_initializer/#subscriptions","title":"Subscriptions","text":"Name Type Description /sensing/gnss/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped pose from gnss /sensing/vehicle_velocity_converter/twist_with_covariance geometry_msgs::msg::TwistStamped twist for stop check"},{"location":"localization/autoware_pose_initializer/#publications","title":"Publications","text":"Name Type Description /localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState pose initialization state /initialpose3d geometry_msgs::msg::PoseWithCovarianceStamped calculated initial ego pose /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics"},{"location":"localization/autoware_pose_initializer/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/autoware_pose_initializer/#pose_initializer_status","title":"pose_initializer_status","text":"

If the score of the initial pose estimation result is lower than the score threshold, an ERROR message is output to the /diagnostics topic.
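Schematically, the check is just a threshold comparison on the score returned by the pose estimator (illustrative names, not the node's actual code):

// Illustrative sketch: report ERROR on /diagnostics when the initial pose
// estimation score falls below the configured threshold.
enum class DiagnosticLevel { OK, WARN, ERROR };

DiagnosticLevel check_initial_pose_score(double score, double score_threshold)
{
  return score < score_threshold ? DiagnosticLevel::ERROR : DiagnosticLevel::OK;
}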

"},{"location":"localization/autoware_pose_initializer/#connection-with-default-ad-api","title":"Connection with Default AD API","text":"

The autoware_pose_initializer is used via the default AD API. For a detailed description of the API, please refer to the description of autoware_default_adapi.

"},{"location":"localization/autoware_pose_initializer/#initialize-pose-via-cli","title":"Initialize pose via CLI","text":""},{"location":"localization/autoware_pose_initializer/#using-the-gnss-estimated-position","title":"Using the GNSS estimated position","text":"
ros2 service call /localization/initialize tier4_localization_msgs/srv/InitializeLocalization\n

The GNSS estimated position is used as the initial guess, and the localization algorithm automatically estimates a more accurate position.

"},{"location":"localization/autoware_pose_initializer/#using-the-input-position","title":"Using the input position","text":"
ros2 service call /localization/initialize tier4_localization_msgs/srv/InitializeLocalization \"\npose_with_covariance:\n  - header:\n      frame_id: map\n    pose:\n      pose:\n        position:\n          x: 89571.1640625\n          y: 42301.1875\n          z: -3.1565165519714355\n        orientation:\n          x: 0.0\n          y: 0.0\n          z: 0.28072773940524687\n          w: 0.9597874433062874\n      covariance: [0.25, 0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06853891909122467]\nmethod: 0\n\"\n

The input position is used as the initial guess, and the localization algorithm automatically estimates a more accurate position.

"},{"location":"localization/autoware_pose_initializer/#direct-initial-position-set","title":"Direct initial position set","text":"
ros2 service call /localization/initialize tier4_localization_msgs/srv/InitializeLocalization \"\npose_with_covariance:\n  - header:\n      frame_id: map\n    pose:\n      pose:\n        position:\n          x: 89571.1640625\n          y: 42301.1875\n          z: -3.1565165519714355\n        orientation:\n          x: 0.0\n          y: 0.0\n          z: 0.28072773940524687\n          w: 0.9597874433062874\n      covariance: [0.25, 0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06853891909122467]\nmethod: 1\n\"\n

The initial position is set directly from the input position without going through the localization algorithm.

"},{"location":"localization/autoware_pose_initializer/#via-ros2-topic-pub","title":"Via ros2 topic pub","text":"
ros2 topic pub --once /initialpose geometry_msgs/msg/PoseWithCovarianceStamped \"\nheader:\n  frame_id: map\npose:\n  pose:\n    position:\n      x: 89571.1640625\n      y: 42301.1875\n      z: 0.0\n    orientation:\n      x: 0.0\n      y: 0.0\n      z: 0.28072773940524687\n      w: 0.9597874433062874\n\"\n

It behaves the same as \"initialpose (from rviz)\". The position.z and the covariance will be overwritten by ad_api_adaptors, so there is no need to input them.

"},{"location":"localization/autoware_pose_instability_detector/","title":"autoware_pose_instability_detector","text":""},{"location":"localization/autoware_pose_instability_detector/#autoware_pose_instability_detector","title":"autoware_pose_instability_detector","text":"

The pose_instability_detector is a node designed to monitor the stability of /localization/kinematic_state, which is an output topic of the Extended Kalman Filter (EKF).

This node triggers periodic timer callbacks to compare two poses:

The results of this comparison are then output to the /diagnostics topic.

If this node outputs WARN messages to /diagnostics, it means that the EKF output is significantly different from the integrated twist values. In other words, WARN outputs indicate that the vehicle has moved to a place outside the expected range based on the twist values. This discrepancy suggests that there may be an issue with either the estimated pose or the input twist.

The following diagram provides an overview of the procedure:

"},{"location":"localization/autoware_pose_instability_detector/#dead-reckoning-algorithm","title":"Dead reckoning algorithm","text":"

Dead reckoning is a method of estimating the position of a vehicle based on its previous position and velocity. The procedure for dead reckoning is as follows:

  1. Capture the necessary twist values from the /input/twist topic.
  2. Integrate the twist values to calculate the pose transition.
  3. Apply the pose transition to the previous pose to obtain the current pose.
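A minimal 2D sketch of steps 2 and 3, assuming a piecewise-constant twist between samples (the actual node integrates in 3D with full orientation):

#include <cmath>

struct Pose2d { double x, y, yaw; };
struct Twist2d { double vx, wz; };  // longitudinal velocity, yaw rate

// Steps 2 and 3: integrate one twist sample over dt and apply the resulting
// transition to the previous pose.
Pose2d dead_reckon_step(const Pose2d & prev, const Twist2d & twist, double dt)
{
  Pose2d next = prev;
  next.x += twist.vx * std::cos(prev.yaw) * dt;
  next.y += twist.vx * std::sin(prev.yaw) * dt;
  next.yaw += twist.wz * dt;
  return next;
}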
"},{"location":"localization/autoware_pose_instability_detector/#collecting-twist-values","title":"Collecting twist values","text":"

The pose_instability_detector node collects the twist values from the ~/input/twist topic to perform dead reckoning. Ideally, pose_instability_detector needs the twist values between the previous pose and the current pose. Therefore, pose_instability_detector snips the twist buffer and applies interpolation and extrapolation to obtain the twist values at the desired times.
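For example, a twist value at an arbitrary query time between two buffered samples can be obtained per component by linear interpolation (a sketch; values of t outside the sample interval give linear extrapolation from the same formula):

// Linear interpolation of one twist component between samples (t0, v0) and
// (t1, v1); t outside [t0, t1] extrapolates linearly.
double interpolate_component(double t0, double v0, double t1, double v1, double t)
{
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}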

"},{"location":"localization/autoware_pose_instability_detector/#linear-transition-and-angular-transition","title":"Linear transition and angular transition","text":"

After the twist values are collected, the node calculates the linear transition and angular transition based on the twist values and adds them to the previous pose.

"},{"location":"localization/autoware_pose_instability_detector/#threshold-definition","title":"Threshold definition","text":"

The pose_instability_detector node compares the pose calculated by dead reckoning with the latest pose from the EKF output. These two poses are ideally the same, but in reality they differ due to errors in the twist values and in the pose observation. If the two poses are so different that the absolute difference exceeds the threshold, the node outputs a WARN message to the /diagnostics topic. There are six thresholds (x, y, z, roll, pitch, and yaw) to determine whether the poses are significantly different, and these thresholds are defined in the following subsections.

"},{"location":"localization/autoware_pose_instability_detector/#diff_position_x","title":"diff_position_x","text":"

This threshold examines the difference in the longitudinal axis between the two poses, and checks whether the vehicle goes beyond the expected error. This threshold is a sum of \"maximum longitudinal error due to velocity scale factor error\" and \"pose estimation error tolerance\".

\\[ \\tau_x = v_{\\rm max}\\frac{\\beta_v}{100} \\Delta t + \\epsilon_x\\\\ \\] Symbol Description Unit \\(\\tau_x\\) Threshold for the difference in the longitudinal axis \\(m\\) \\(v_{\\rm max}\\) Maximum velocity \\(m/s\\) \\(\\beta_v\\) Scale factor tolerance for the maximum velocity \\(\\%\\) \\(\\Delta t\\) Time interval \\(s\\) \\(\\epsilon_x\\) Pose estimator (e. g. ndt_scan_matcher) error tolerance in the longitudinal axis \\(m\\)"},{"location":"localization/autoware_pose_instability_detector/#diff_position_y-and-diff_position_z","title":"diff_position_y and diff_position_z","text":"

These thresholds examine the difference in the lateral and vertical axes between the two poses, and check whether the vehicle goes beyond the expected error. The pose_instability_detector calculates the possible range that the vehicle can reach, and gets the maximum difference between the nominal dead reckoning pose and the maximum limit pose.

In addition to this, the pose_instability_detector node considers the pose estimation error tolerance to determine the threshold.

\\[ \\tau_y = l + \\epsilon_y \\] Symbol Description Unit \\(\\tau_y\\) Threshold for the difference in the lateral axis \\(m\\) \\(l\\) Maximum lateral distance described in the image above (See the appendix how this is calculated) \\(m\\) \\(\\epsilon_y\\) Pose estimator (e. g. ndt_scan_matcher) error tolerance in the lateral axis \\(m\\)

Note that pose_instability_detector sets the threshold for the vertical axis to the same value as for the lateral axis. Only the pose estimator error tolerance is different.

"},{"location":"localization/autoware_pose_instability_detector/#diff_angle_x-diff_angle_y-and-diff_angle_z","title":"diff_angle_x, diff_angle_y, and diff_angle_z","text":"

These thresholds examine the difference in the roll, pitch, and yaw angles between the two poses. This threshold is a sum of \"maximum angular error due to velocity scale factor error and bias error\" and \"pose estimation error tolerance\".

\\[ \\tau_\\phi = \\tau_\\theta = \\tau_\\psi = \\left(\\omega_{\\rm max}\\frac{\\beta_\\omega}{100} + b \\right) \\Delta t + \\epsilon_\\psi \\] Symbol Description Unit \\(\\tau_\\phi\\) Threshold for the difference in the roll angle \\({\\rm rad}\\) \\(\\tau_\\theta\\) Threshold for the difference in the pitch angle \\({\\rm rad}\\) \\(\\tau_\\psi\\) Threshold for the difference in the yaw angle \\({\\rm rad}\\) \\(\\omega_{\\rm max}\\) Maximum angular velocity \\({\\rm rad}/s\\) \\(\\beta_\\omega\\) Scale factor tolerance for the maximum angular velocity \\(\\%\\) \\(b\\) Bias tolerance of the angular velocity \\({\\rm rad}/s\\) \\(\\Delta t\\) Time interval \\(s\\) \\(\\epsilon_\\psi\\) Pose estimator (e. g. ndt_scan_matcher) error tolerance in the yaw angle \\({\\rm rad}\\)"},{"location":"localization/autoware_pose_instability_detector/#parameters","title":"Parameters","text":"Name Type Description Default Range timer_period float The period of timer_callback (sec). 0.5 >0 heading_velocity_maximum float The maximum of heading velocity (m/s). 16.667 \u22650.0 heading_velocity_scale_factor_tolerance float The tolerance of heading velocity scale factor (%). 3 \u22650.0 angular_velocity_maximum float The maximum of angular velocity (rad/s). 0.523 \u22650.0 angular_velocity_scale_factor_tolerance float The tolerance of angular velocity scale factor (%). 0.2 \u22650.0 angular_velocity_bias_tolerance float The tolerance of angular velocity bias (rad/s). 0.00698 \u22650.0 pose_estimator_longitudinal_tolerance float The tolerance of longitudinal position of pose estimator (m). 0.11 \u22650.0 pose_estimator_lateral_tolerance float The tolerance of lateral position of pose estimator (m). 0.11 \u22650.0 pose_estimator_vertical_tolerance float The tolerance of vertical position of pose estimator (m). 0.11 \u22650.0 pose_estimator_angular_tolerance float The tolerance of roll angle of pose estimator (rad). 0.0175 \u22650.0"},{"location":"localization/autoware_pose_instability_detector/#input","title":"Input","text":"Name Type Description ~/input/odometry nav_msgs::msg::Odometry Pose estimated by EKF ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped Twist"},{"location":"localization/autoware_pose_instability_detector/#output","title":"Output","text":"Name Type Description ~/debug/diff_pose geometry_msgs::msg::PoseStamped diff_pose /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics"},{"location":"localization/autoware_pose_instability_detector/#appendix","title":"Appendix","text":"

To calculate the maximum lateral distance \\(l\\), the pose_instability_detector node estimates the following poses.

Pose heading velocity \\(v\\) angular velocity \\(\\omega\\) Nominal dead reckoning pose \\(v_{\\rm max}\\) \\(\\omega_{\\rm max}\\) Dead reckoning pose of corner A \\(\\left(1+\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1+\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} + b\\) Dead reckoning pose of corner B \\(\\left(1-\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1+\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} + b\\) Dead reckoning pose of corner C \\(\\left(1-\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1-\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} - b\\) Dead reckoning pose of corner D \\(\\left(1+\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1-\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} - b\\)

Given a heading velocity \\(v\\) and an angular velocity \\(\\omega\\), the 2D theoretical variation seen from the previous pose is calculated as follows:

\\[ \\begin{align*} \\left[ \\begin{matrix} \\Delta x\\\\ \\Delta y \\end{matrix} \\right] &= \\left[ \\begin{matrix} \\int_{0}^{\\Delta t} v \\cos(\\omega t) dt\\\\ \\int_{0}^{\\Delta t} v \\sin(\\omega t) dt \\end{matrix} \\right] \\\\ &= \\left[ \\begin{matrix} \\frac{v}{\\omega} \\sin(\\omega \\Delta t)\\\\ \\frac{v}{\\omega} \\left(1 - \\cos(\\omega \\Delta t)\\right) \\end{matrix} \\right] \\end{align*} \\]

We calculate this variation for each corner and get the maximum value of the lateral distance \\(l\\) by comparing the distance between the nominal dead reckoning pose and the corner poses.
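A self-contained sketch of that computation, plugging the four corner \(v, \omega\) pairs from the table into the displacement formula above (our illustration; parameter meanings follow the symbol tables):

#include <algorithm>
#include <cmath>
#include <initializer_list>

struct Delta { double x, y; };  // 2D displacement seen from the previous pose

// Displacement over dt under constant v and omega (the integral above).
Delta variation(double v, double omega, double dt)
{
  if (std::abs(omega) < 1e-9) {
    return {v * dt, 0.0};  // straight-line limit as omega -> 0
  }
  return {v / omega * std::sin(omega * dt),
          v / omega * (1.0 - std::cos(omega * dt))};
}

// Maximum deviation l between the nominal dead reckoning pose and the four
// corner poses (beta_v, beta_omega in percent, bias in rad/s).
double max_lateral_distance(double v_max, double omega_max, double beta_v,
                            double beta_omega, double bias, double dt)
{
  const Delta nominal = variation(v_max, omega_max, dt);
  double l = 0.0;
  for (const double sv : {+1.0, -1.0}) {
    for (const double so : {+1.0, -1.0}) {
      const double v = (1.0 + sv * beta_v / 100.0) * v_max;
      const double omega = (1.0 + so * beta_omega / 100.0) * omega_max + so * bias;
      const Delta corner = variation(v, omega, dt);
      l = std::max(l, std::hypot(corner.x - nominal.x, corner.y - nominal.y));
    }
  }
  return l;
}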

"},{"location":"localization/autoware_stop_filter/","title":"stop_filter","text":""},{"location":"localization/autoware_stop_filter/#stop_filter","title":"stop_filter","text":""},{"location":"localization/autoware_stop_filter/#purpose","title":"Purpose","text":"

Before this node existed, each node used a different criterion to determine whether the vehicle was stopped, with the result that some nodes behaved as if the vehicle were stopped while others kept running in drive mode. This node aims to:

"},{"location":"localization/autoware_stop_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_stop_filter/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry"},{"location":"localization/autoware_stop_filter/#output","title":"Output","text":"Name Type Description output/odom nav_msgs::msg::Odometry odometry with suppressed longitudinal and yaw twist debug/stop_flag autoware_internal_debug_msgs::msg::BoolStamped flag to represent whether the vehicle is stopping or not"},{"location":"localization/autoware_stop_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range vx_threshold float Longitudinal velocity threshold to determine if the vehicle is stopping. [m/s] 0.01 \u22650.0 wz_threshold float Yaw velocity threshold to determine if the vehicle is stopping. [rad/s] 0.01 \u22650.0"},{"location":"localization/autoware_twist2accel/","title":"autoware_twist2accel","text":""},{"location":"localization/autoware_twist2accel/#autoware_twist2accel","title":"autoware_twist2accel","text":""},{"location":"localization/autoware_twist2accel/#purpose","title":"Purpose","text":"

This package is responsible for estimating acceleration using the output of ekf_localizer. It uses a lowpass filter to mitigate noise.
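A sketch of one update step under these assumptions (finite-difference the velocity, then smooth with a first-order lowpass whose gain corresponds to accel_lowpass_gain):

// One acceleration update: differentiate velocity over dt, then apply a
// first-order IIR lowpass (a gain close to 1.0 means heavier smoothing).
double update_acceleration(double v_now, double v_prev, double dt,
                           double accel_prev, double lowpass_gain)
{
  const double raw_accel = (v_now - v_prev) / dt;
  return lowpass_gain * accel_prev + (1.0 - lowpass_gain) * raw_accel;
}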

"},{"location":"localization/autoware_twist2accel/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_twist2accel/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry input/twist geometry_msgs::msg::TwistWithCovarianceStamped twist"},{"location":"localization/autoware_twist2accel/#output","title":"Output","text":"Name Type Description output/accel geometry_msgs::msg::AccelWithCovarianceStamped estimated acceleration"},{"location":"localization/autoware_twist2accel/#parameters","title":"Parameters","text":"Name Type Description Default Range use_odom boolean use odometry if true, else use twist input. 1 N/A accel_lowpass_gain float lowpass gain for lowpass filter in estimating acceleration. 0.9 \u22650.0"},{"location":"localization/autoware_twist2accel/#future-work","title":"Future work","text":"

Future work includes integrating acceleration into the EKF state.

"},{"location":"localization/yabloc/","title":"YabLoc","text":""},{"location":"localization/yabloc/#yabloc","title":"YabLoc","text":"

YabLoc is a vision-based localization method that uses a vector map. https://youtu.be/Eaf6r_BNFfk

It estimates position by matching road surface markings extracted from images with a vector map. Point cloud maps and LiDAR are not required. YabLoc enables users to localize vehicles that are not equipped with LiDAR, and to operate in environments where point cloud maps are not available.

"},{"location":"localization/yabloc/#packages","title":"Packages","text":""},{"location":"localization/yabloc/#how-to-launch-yabloc-instead-of-ndt","title":"How to launch YabLoc instead of NDT","text":"

When launching Autoware, if you set pose_source:=yabloc as an argument, YabLoc will be launched instead of NDT. By default, pose_source is ndt.

A sample command to run YabLoc is as follows:

ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=$HOME/autoware_map/sample-map-rosbag \\\nvehicle_model:=sample_vehicle \\\nsensor_model:=sample_sensor_kit \\\npose_source:=yabloc\n
"},{"location":"localization/yabloc/#architecture","title":"Architecture","text":""},{"location":"localization/yabloc/#principle","title":"Principle","text":"

The diagram below illustrates the basic principle of YabLoc. It extracts road surface markings by selecting line segments within the road area obtained from graph-based segmentation. The red line at the center-top of the diagram represents the line segments identified as road surface markings. YabLoc transforms these segments for each particle and determines the particle's weight by comparing them with the cost map generated from Lanelet2.
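Schematically, the per-particle weighting can be pictured as follows (a conceptual sketch; the real scoring is implemented in yabloc_particle_filter's camera_particle_corrector):

#include <cmath>
#include <vector>

struct Pose2d { double x, y, yaw; };
struct Point2d { double x, y; };

// Conceptual sketch: transform the extracted line-segment points by the
// particle pose and accumulate the cost map response as the weight.
double weigh_particle(const Pose2d & particle,
                      const std::vector<Point2d> & segment_points,
                      double (*cost_map_at)(double x, double y))
{
  const double c = std::cos(particle.yaw);
  const double s = std::sin(particle.yaw);
  double weight = 0.0;
  for (const auto & p : segment_points) {
    const double mx = particle.x + c * p.x - s * p.y;  // map-frame x
    const double my = particle.y + s * p.x + c * p.y;  // map-frame y
    weight += cost_map_at(mx, my);  // high where lanelet2 markings are nearby
  }
  return weight;
}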

"},{"location":"localization/yabloc/#visualization","title":"Visualization","text":""},{"location":"localization/yabloc/#core-visualization-topics","title":"Core visualization topics","text":"

These topics are not visualized by default.

index topic name description 1 /localization/yabloc/pf/predicted_particle_marker particle distribution of the particle filter. Red particles are probable candidates. 2 /localization/yabloc/pf/scored_cloud 3D projected line segments. The color indicates how well they match the map. 3 /localization/yabloc/image_processing/lanelet2_overlay_image overlay of lanelet2 (yellow lines) onto the image based on the estimated pose. If they match well with the actual road markings, it means that the localization performs well."},{"location":"localization/yabloc/#image-topics-for-debug","title":"Image topics for debug","text":"

These topics are not visualized by default.

index topic name description 1 /localization/yabloc/pf/cost_map_image cost map made from lanelet2 2 /localization/yabloc/pf/match_image projected line segments 3 /localization/yabloc/image_processing/image_with_colored_line_segment classified line segments. green line segments are used in particle correction 4 /localization/yabloc/image_processing/lanelet2_overlay_image overlay of lanelet2 5 /localization/yabloc/image_processing/segmented_image graph based segmentation result"},{"location":"localization/yabloc/#limitation","title":"Limitation","text":""},{"location":"localization/yabloc/yabloc_common/","title":"yabloc_common","text":""},{"location":"localization/yabloc/yabloc_common/#yabloc_common","title":"yabloc_common","text":"

This package contains some executable nodes related to maps. It also provides the YabLoc common libraries.

"},{"location":"localization/yabloc/yabloc_common/#ground_server","title":"ground_server","text":""},{"location":"localization/yabloc/yabloc_common/#purpose","title":"Purpose","text":"

It estimates the height and tilt of the ground from lanelet2.

"},{"location":"localization/yabloc/yabloc_common/#input-outputs","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input","title":"Input","text":"Name Type Description input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map input/pose geometry_msgs::msg::PoseStamped estimated self pose"},{"location":"localization/yabloc/yabloc_common/#output","title":"Output","text":"Name Type Description output/ground std_msgs::msg::Float32MultiArray estimated ground parameters. it contains x, y, z, normal_x, normal_y, normal_z. output/ground_markers visualization_msgs::msg::Marker visualization of estimated ground plane output/ground_status std_msgs::msg::String status log of ground plane estimation output/height std_msgs::msg::Float32 altitude output/near_cloud sensor_msgs::msg::PointCloud2 point cloud extracted from lanelet2 and used for ground tilt estimation"},{"location":"localization/yabloc/yabloc_common/#parameters","title":"Parameters","text":"Name Type Description Default Range force_zero_tilt boolean if true, the tilt is always determined to be horizontal False N/A K float the number of neighbors for ground search on a map 50 N/A R float radius for ground search on a map [m] 10 N/A"},{"location":"localization/yabloc/yabloc_common/#ll2_decomposer","title":"ll2_decomposer","text":""},{"location":"localization/yabloc/yabloc_common/#purpose_1","title":"Purpose","text":"

This node extracts the lanelet2 elements used by YabLoc, such as those related to road surface markings.

"},{"location":"localization/yabloc/yabloc_common/#input-outputs_1","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input_1","title":"Input","text":"Name Type Description input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"localization/yabloc/yabloc_common/#output_1","title":"Output","text":"Name Type Description output/ll2_bounding_box sensor_msgs::msg::PointCloud2 bounding boxes extracted from lanelet2 output/ll2_road_marking sensor_msgs::msg::PointCloud2 road surface markings extracted from lanelet2 output/ll2_sign_board sensor_msgs::msg::PointCloud2 traffic sign boards extracted from lanelet2 output/sign_board_marker visualization_msgs::msg::MarkerArray visualized traffic sign boards"},{"location":"localization/yabloc/yabloc_common/#parameters_1","title":"Parameters","text":"Name Type Description Default Range road_marking_labels array line string types that indicating road surface markings in lanelet2 ['cross_walk', 'zebra_marking', 'line_thin', 'line_thick', 'pedestrian_marking', 'stop_line', 'road_border'] N/A sign_board_labels array line string types that indicating traffic sign boards in lanelet2 ['sign-board'] N/A bounding_box_labels array line string types that indicating not mapped areas in lanelet2 ['none'] N/A"},{"location":"localization/yabloc/yabloc_image_processing/","title":"yabloc_image_processing","text":""},{"location":"localization/yabloc/yabloc_image_processing/#yabloc_image_processing","title":"yabloc_image_processing","text":"

This package contains some executable nodes related to image processing.

"},{"location":"localization/yabloc/yabloc_image_processing/#line_segment_detector","title":"line_segment_detector","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose","title":"Purpose","text":"

This node extracts all line segments from a grayscale image.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input","title":"Input","text":"Name Type Description input/image_raw sensor_msgs::msg::Image undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output","title":"Output","text":"Name Type Description output/image_with_line_segments sensor_msgs::msg::Image image with line segments highlighted output/line_segments_cloud sensor_msgs::msg::PointCloud2 detected line segments as point cloud. each point contains x,y,z, normal_x, normal_y, normal_z and z, and normal_z are always empty."},{"location":"localization/yabloc/yabloc_image_processing/#graph_segmentation","title":"graph_segmentation","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_1","title":"Purpose","text":"

This node extracts the road surface region using graph-based segmentation.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_1","title":"Input","text":"Name Type Description input/image_raw sensor_msgs::msg::Image undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output_1","title":"Output","text":"Name Type Description output/mask_image sensor_msgs::msg::Image image with masked segments determined as road surface area output/segmented_image sensor_msgs::msg::Image segmented image for visualization"},{"location":"localization/yabloc/yabloc_image_processing/#parameters","title":"Parameters","text":"Name Type Description Default Range target_height_ratio float height on the image to retrieve the candidate road surface 0.85 N/A target_candidate_box_width float size of the square area to search for candidate road surfaces 15 N/A pickup_additional_graph_segment boolean if this is true, additional regions of similar color are retrieved 1 N/A similarity_score_threshold float threshold for picking up additional areas 0.8 N/A sigma float parameter for cv::ximgproc::segmentation::GraphSegmentation 0.5 N/A k float parameter for cv::ximgproc::segmentation::GraphSegmentation 300 N/A min_size float parameter for cv::ximgproc::segmentation::GraphSegmentation 100 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#segment_filter","title":"segment_filter","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_2","title":"Purpose","text":"

This is a node that integrates the results of graph_segment and lsd to extract road surface markings.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_2","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_2","title":"Input","text":"Name Type Description input/line_segments_cloud sensor_msgs::msg::PointCloud2 detected line segment input/mask_image sensor_msgs::msg::Image image with masked segments determined as road surface area input/camera_info sensor_msgs::msg::CameraInfo undistorted camera info"},{"location":"localization/yabloc/yabloc_image_processing/#output_2","title":"Output","text":"Name Type Description output/line_segments_cloud sensor_msgs::msg::PointCloud2 filtered line segments for visualization output/projected_image sensor_msgs::msg::Image projected filtered line segments for visualization output/projected_line_segments_cloud sensor_msgs::msg::PointCloud2 projected filtered line segments"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_1","title":"Parameters","text":"Name Type Description Default Range min_segment_length float min length threshold (if it is negative, it is unlimited) 1.5 N/A max_segment_distance float max distance threshold (if it is negative, it is unlimited) 30 N/A max_lateral_distance float max lateral distance threshold (if it is negative, it is unlimited) 10 N/A publish_image_with_segment_for_debug boolean toggle whether to publish the filtered line segment for debug 1 N/A max_range float range of debug projection visualization 20 N/A image_size float image size of debug projection visualization 800 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#undistort","title":"undistort","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_3","title":"Purpose","text":"

This node performs image resizing and undistortion at the same time.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_3","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_3","title":"Input","text":"Name Type Description input/camera_info sensor_msgs::msg::CameraInfo camera info input/image_raw sensor_msgs::msg::Image raw camera image input/image_raw/compressed sensor_msgs::msg::CompressedImage compressed camera image

This node subscribes to both the compressed image and raw image topics. If a raw image is received even once, the compressed image topic will no longer be subscribed to. This avoids redundant decompression within Autoware.
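The preference logic amounts to a latched flag (a sketch with illustrative names, not the node's actual code):

// Sketch of the subscription preference: once a raw image has arrived, the
// compressed stream is ignored to avoid redundant decompression.
bool raw_image_seen = false;

void on_raw_image(/* const sensor_msgs::msg::Image & msg */)
{
  raw_image_seen = true;
  // ... resize and undistort the raw image ...
}

void on_compressed_image(/* const sensor_msgs::msg::CompressedImage & msg */)
{
  if (raw_image_seen) {
    return;  // the raw stream takes priority from now on
  }
  // ... decompress, then resize and undistort ...
}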

"},{"location":"localization/yabloc/yabloc_image_processing/#output_3","title":"Output","text":"Name Type Description output/camera_info sensor_msgs::msg::CameraInfo resized camera info output/image_raw sensor_msgs::msg::CompressedImage undistorted and resized image"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_2","title":"Parameters","text":"Name Type Description Default Range use_sensor_qos boolean whether to use sensor qos or not True N/A width float resized image width size 800 N/A override_frame_id string value for overriding the camera's frame_id. if blank, frame_id of static_tf is not overwritten N/A"},{"location":"localization/yabloc/yabloc_image_processing/#about-tf_static-overriding","title":"about tf_static overriding","text":"click to open Some nodes requires `/tf_static` from `/base_link` to the frame_id of `/sensing/camera/traffic_light/image_raw/compressed` (e.g. `/traffic_light_left_camera/camera_optical_link`). You can verify that the tf_static is correct with the following command.
ros2 run tf2_ros tf2_echo base_link traffic_light_left_camera/camera_optical_link\n
If a wrong `/tf_static` is broadcast due to using a prototype vehicle, not having accurate calibration data, or some other unavoidable reason, it is useful to override the frame_id with `override_camera_frame_id`. If you give it a non-empty string, `/image_processing/undistort_node` will rewrite the frame_id in camera_info. For example, you can give a different tf_static as follows.
ros2 launch yabloc_launch sample_launch.xml override_camera_frame_id:=fake_camera_optical_link\nros2 run tf2_ros static_transform_publisher \\\n--frame-id base_link \\\n--child-frame-id fake_camera_optical_link \\\n--roll -1.57 \\\n--yaw -1.570\n
"},{"location":"localization/yabloc/yabloc_image_processing/#lanelet2_overlay","title":"lanelet2_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_4","title":"Purpose","text":"

This node overlays lanelet2 on the camera image based on the estimated self-position.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_4","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_4","title":"Input","text":"Name Type Description input/pose geometry_msgs::msg::PoseStamped estimated self pose input/projected_line_segments_cloud sensor_msgs::msg::PointCloud2 projected line segments including non-road markings input/camera_info sensor_msgs::msg::CameraInfo undistorted camera info input/image_raw sensor_msgs::msg::Image undistorted camera image input/ground std_msgs::msg::Float32MultiArray ground tilt input/ll2_road_marking sensor_msgs::msg::PointCloud2 lanelet2 elements regarding road surface markings input/ll2_sign_board sensor_msgs::msg::PointCloud2 lanelet2 elements regarding traffic sign boards"},{"location":"localization/yabloc/yabloc_image_processing/#output_4","title":"Output","text":"Name Type Description output/lanelet2_overlay_image sensor_msgs::msg::Image lanelet2 overlaid image output/projected_marker visualization_msgs::msg::Marker 3d projected line segments including non-road markings"},{"location":"localization/yabloc/yabloc_image_processing/#line_segments_overlay","title":"line_segments_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_5","title":"Purpose","text":"

This node visualizes the classified line segments on the camera image.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_5","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_5","title":"Input","text":"Name Type Description input/line_segments_cloud sensor_msgs::msg::PointCloud2 classified line segments input/image_raw sensor_msgs::msg::Image undistorted camera image"},{"location":"localization/yabloc/yabloc_image_processing/#output_5","title":"Output","text":"Name Type Description output/image_with_colored_line_segments sensor_msgs::msg::Image image with highlighted line segments"},{"location":"localization/yabloc/yabloc_monitor/","title":"yabloc_monitor","text":""},{"location":"localization/yabloc/yabloc_monitor/#yabloc_monitor","title":"yabloc_monitor","text":"

YabLoc monitor is a wrapper node that monitors the status of the YabLoc localization system and publishes the status as diagnostics.

"},{"location":"localization/yabloc/yabloc_monitor/#feature","title":"Feature","text":""},{"location":"localization/yabloc/yabloc_monitor/#availability","title":"Availability","text":"

The node monitors the final output pose of YabLoc to verify the availability of YabLoc.
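The availability check reduces to comparing the age of the newest output pose against availability/timestamp_tolerance (an illustrative sketch):

// Sketch: YabLoc is considered available while its latest output pose is
// fresher than the configured timestamp tolerance.
bool is_yabloc_available(double now_sec, double latest_pose_stamp_sec,
                         double timestamp_tolerance_sec)
{
  return (now_sec - latest_pose_stamp_sec) <= timestamp_tolerance_sec;
}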

"},{"location":"localization/yabloc/yabloc_monitor/#others","title":"Others","text":"

To be added,

"},{"location":"localization/yabloc/yabloc_monitor/#interfaces","title":"Interfaces","text":""},{"location":"localization/yabloc/yabloc_monitor/#input","title":"Input","text":"Name Type Description ~/input/yabloc_pose geometry_msgs/PoseStamped The final output pose of YabLoc"},{"location":"localization/yabloc/yabloc_monitor/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"localization/yabloc/yabloc_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range availability/timestamp_tolerance float tolerable time difference between current time and latest estimated pose 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/","title":"yabLoc_particle_filter","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#yabloc_particle_filter","title":"yabLoc_particle_filter","text":"

This package contains some executable nodes related to the particle filter.

"},{"location":"localization/yabloc/yabloc_particle_filter/#particle_predictor","title":"particle_predictor","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#input","title":"Input","text":"Name Type Description input/initialpose geometry_msgs::msg::PoseWithCovarianceStamped to specify the initial position of particles input/twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped linear velocity and angular velocity of prediction update input/height std_msgs::msg::Float32 ground height input/weighted_particles yabloc_particle_filter::msg::ParticleArray particles weighted by corrector nodes"},{"location":"localization/yabloc/yabloc_particle_filter/#output","title":"Output","text":"Name Type Description output/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped particle centroid with covariance output/pose geometry_msgs::msg::PoseStamped particle centroid with covariance output/predicted_particles yabloc_particle_filter::msg::ParticleArray particles weighted by predictor nodes debug/init_marker visualization_msgs::msg::Marker debug visualization of initial position debug/particles_marker_array visualization_msgs::msg::MarkerArray particles visualization. published if visualize is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range visualize boolean whether particles are also published in visualization_msgs or not True N/A static_linear_covariance float overriding covariance of /twist_with_covariance 0.04 N/A static_angular_covariance float overriding covariance of /twist_with_covariance 0.006 N/A resampling_interval_seconds float the interval of particle resampling 1.0 N/A num_of_particles float the number of particles 500 N/A prediction_rate float frequency of forecast updates, in Hz 50.0 N/A cov_xx_yy array the covariance of initial pose [2.0, 0.25] N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services","title":"Services","text":"Name Type Description yabloc_trigger_srv std_srvs::srv::SetBool activation and deactivation of yabloc estimation"},{"location":"localization/yabloc/yabloc_particle_filter/#gnss_particle_corrector","title":"gnss_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_1","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#input_1","title":"Input","text":"Name Type Description input/height std_msgs::msg::Float32 ground height input/predicted_particles yabloc_particle_filter::msg::ParticleArray predicted particles input/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped gnss measurement. used if use_ublox_msg is false input/navpvt ublox_msgs::msg::NavPVT gnss measurement. used if use_ublox_msg is true"},{"location":"localization/yabloc/yabloc_particle_filter/#output_1","title":"Output","text":"Name Type Description output/weighted_particles yabloc_particle_filter::msg::ParticleArray weighted particles debug/gnss_range_marker visualization_msgs::msg::MarkerArray gnss weight distribution debug/particles_marker_array visualization_msgs::msg::MarkerArray particles visualization. 
published if visualize is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_1","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether publish particles as marker_array or not 0 N/A mahalanobis_distance_threshold float if the Mahalanobis distance to the GNSS for particle exceeds this, the correction skips. 30 N/A for_fixed/max_weight float gnss weight distribution used when observation is fixed 5 N/A for_fixed/flat_radius float gnss weight distribution used when observation is fixed 0.5 N/A for_fixed/max_radius float gnss weight distribution used when observation is fixed 10 N/A for_fixed/min_weight float gnss weight distribution used when observation is fixed 0.5 N/A for_not_fixed/max_weight float gnss weight distribution used when observation is not fixed 1 N/A for_not_fixed/flat_radius float gnss weight distribution used when observation is not fixed 5 N/A for_not_fixed/max_radius float gnss weight distribution used when observation is not fixed 20 N/A for_not_fixed/min_weight float gnss weight distribution used when observation is not fixed 0.5 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#camera_particle_corrector","title":"camera_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_2","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#inputs-outputs_2","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#input_2","title":"Input","text":"Name Type Description input/predicted_particles yabloc_particle_filter::msg::ParticleArray predicted particles input/ll2_bounding_box sensor_msgs::msg::PointCloud2 road surface markings converted to line segments input/ll2_road_marking sensor_msgs::msg::PointCloud2 road surface markings converted to line segments input/projected_line_segments_cloud sensor_msgs::msg::PointCloud2 projected line segments input/pose geometry_msgs::msg::PoseStamped reference to retrieve the area map around the self location"},{"location":"localization/yabloc/yabloc_particle_filter/#output_2","title":"Output","text":"Name Type Description output/weighted_particles yabloc_particle_filter::msg::ParticleArray weighted particles debug/cost_map_image sensor_msgs::msg::Image cost map created from lanelet2 debug/cost_map_range visualization_msgs::msg::MarkerArray cost map boundary debug/match_image sensor_msgs::msg::Image projected line segments image debug/scored_cloud sensor_msgs::msg::PointCloud2 weighted 3d line segments debug/scored_post_cloud sensor_msgs::msg::PointCloud2 weighted 3d line segments which are iffy debug/state_string std_msgs::msg::String string describing the node state debug/particles_marker_array visualization_msgs::msg::MarkerArray particles visualization. 
published if visualize is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_2","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether to publish particles as a marker_array 0 N/A image_size float image size of debug/cost_map_image 800 N/A max_range float width of hierarchical cost map 40 N/A gamma float gamma value of the intensity gradient of the cost map 5 N/A min_prob float minimum particle weight the corrector node gives 0.1 N/A far_weight_gain float the weight gain is exp(-far_weight_gain * squared_distance_from_camera); if this is large, nearby road markings become more important 0.001 N/A enabled_at_first boolean if false, this node is not activated at startup; it can be activated later via the service call 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services_1","title":"Services","text":"Name Type Description switch_srv std_srvs::srv::SetBool activation and deactivation of correction"},{"location":"localization/yabloc/yabloc_pose_initializer/","title":"yabloc_pose_initializer","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#yabloc_pose_initializer","title":"yabloc_pose_initializer","text":"

This package contains a node related to initial pose estimation.

This package requires a pre-trained semantic segmentation model at runtime. This model is usually downloaded by Ansible during the environment preparation phase of the installation. It is also possible to download it manually. Even if the model is not downloaded, initialization will still complete, but accuracy may be degraded.

To download and extract the model manually:

$ mkdir -p ~/autoware_data/yabloc_pose_initializer/\n$ wget -P ~/autoware_data/yabloc_pose_initializer/ \\\nhttps://s3.ap-northeast-2.wasabisys.com/pinto-model-zoo/136_road-segmentation-adas-0001/resources.tar.gz\n$ tar xzf ~/autoware_data/yabloc_pose_initializer/resources.tar.gz -C ~/autoware_data/yabloc_pose_initializer/\n
"},{"location":"localization/yabloc/yabloc_pose_initializer/#note","title":"Note","text":"

This package makes use of external code. The trained model files originate from the Open Model Zoo (see the links below) and are automatically downloaded during environment preparation.

Original model URL

https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/road-segmentation-adas-0001

Open Model Zoo is licensed under Apache License Version 2.0.

Converted model URL

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/136_road-segmentation-adas-0001

The model conversion scripts are released under the MIT license.

"},{"location":"localization/yabloc/yabloc_pose_initializer/#special-thanks","title":"Special thanks","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#camera_pose_initializer","title":"camera_pose_initializer","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#purpose","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#input","title":"Input","text":"Name Type Description input/camera_info sensor_msgs::msg::CameraInfo undistorted camera info input/image_raw sensor_msgs::msg::Image undistorted camera image input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"localization/yabloc/yabloc_pose_initializer/#output","title":"Output","text":"Name Type Description output/candidates visualization_msgs::msg::MarkerArray initial pose candidates"},{"location":"localization/yabloc/yabloc_pose_initializer/#parameters","title":"Parameters","text":"Name Type Description Default Range angle_resolution float how many divisions of 1 sigma angle range 30 N/A"},{"location":"localization/yabloc/yabloc_pose_initializer/#services","title":"Services","text":"Name Type Description yabloc_align_srv tier4_localization_msgs::srv::PoseWithCovarianceStamped initial pose estimation request"},{"location":"map/autoware_lanelet2_map_visualizer/","title":"autoware_lanelet2_map_visualizer package","text":""},{"location":"map/autoware_lanelet2_map_visualizer/#autoware_lanelet2_map_visualizer-package","title":"autoware_lanelet2_map_visualizer package","text":"

This package provides features for visualizing lanelet2 maps.

"},{"location":"map/autoware_lanelet2_map_visualizer/#lanelet2_map_visualization","title":"lanelet2_map_visualization","text":""},{"location":"map/autoware_lanelet2_map_visualizer/#feature","title":"Feature","text":"

lanelet2_map_visualization converts autoware_map_msgs/LaneletMapBin messages into visualization_msgs/MarkerArray for visualization.

"},{"location":"map/autoware_lanelet2_map_visualizer/#how-to-run","title":"How to Run","text":"

ros2 run autoware_lanelet2_map_visualizer lanelet2_map_visualization

"},{"location":"map/autoware_lanelet2_map_visualizer/#subscribed-topics","title":"Subscribed Topics","text":""},{"location":"map/autoware_lanelet2_map_visualizer/#published-topics","title":"Published Topics","text":""},{"location":"map/autoware_map_height_fitter/","title":"autoware_map_height_fitter","text":""},{"location":"map/autoware_map_height_fitter/#autoware_map_height_fitter","title":"autoware_map_height_fitter","text":"

This library fits the height (z) of a given point to the ground of the point cloud map. The map loading mode is selected by the enable_partial_load parameter of the node specified by map_loader_name. A node using this library must run on a multi-threaded executor.
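As a rough illustration of the fitting idea (this is not the package's C++ implementation; the function name and the ground heuristic below are assumptions of the sketch), the height fit can be expressed as looking up map points near the query in the xy-plane and taking their ground height:

```python
import numpy as np

def fit_height(point_xy, map_points, search_radius=1.0):
    """Fit the z of a query (x, y) to the ground of a pointcloud map.

    map_points: (N, 3) array of map points.
    Returns a fitted z, or None if no map point lies within
    search_radius of the query in the xy-plane.
    """
    d_xy = np.hypot(map_points[:, 0] - point_xy[0],
                    map_points[:, 1] - point_xy[1])
    near = map_points[d_xy < search_radius]
    if near.size == 0:
        return None  # caller keeps the original height
    # Treat the lowest nearby points as ground and average their z.
    z = np.sort(near[:, 2])
    return float(z[: max(1, len(z) // 10)].mean())
```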

"},{"location":"map/autoware_map_height_fitter/#parameters","title":"Parameters","text":"Name Type Description Default Range map_loader_name string Node name of the map loader from which this map_height_fitter will retrieve its parameters /map/pointcloud_map_loader N/A target string Target map to fit (choose from 'pointcloud_map', 'vector_map') pointcloud_map N/A"},{"location":"map/autoware_map_height_fitter/#topic-subscription","title":"Topic subscription","text":"Topic Name Description ~/pointcloud_map The topic containing the whole pointcloud map (only used when enable_partial_load = false)"},{"location":"map/autoware_map_height_fitter/#service-client","title":"Service client","text":"Service Name Description ~/partial_map_load The service to load the partial map"},{"location":"map/autoware_map_loader/","title":"autoware_map_loader package","text":""},{"location":"map/autoware_map_loader/#autoware_map_loader-package","title":"autoware_map_loader package","text":"

This package provides features for loading various types of maps.

"},{"location":"map/autoware_map_loader/#pointcloud_map_loader","title":"pointcloud_map_loader","text":""},{"location":"map/autoware_map_loader/#feature","title":"Feature","text":"

pointcloud_map_loader provides pointcloud maps to the other Autoware nodes in various configurations. Currently, it supports two interface types: publishing the map on ROS 2 topics and serving submaps via ROS 2 services.

NOTE: We strongly recommend using divided maps for large pointcloud maps in order to enable partial and differential loading. Please go through the prerequisites section for more details, and follow the instructions for dividing the map and preparing the metadata.

"},{"location":"map/autoware_map_loader/#prerequisites","title":"Prerequisites","text":""},{"location":"map/autoware_map_loader/#prerequisites-on-pointcloud-map-files","title":"Prerequisites on pointcloud map file(s)","text":"

You may provide either a single .pcd file or multiple .pcd files. If you are using multiple PCD files, they MUST obey the following rules:

  1. The pointcloud map must be projected in the same coordinate system defined in map_projection_loader, in order to be consistent with the lanelet2 map and other packages that convert between local and geodetic coordinates. For more information, please refer to the readme of map_projection_loader.
  2. It must be divided by straight lines parallel to the x-axis and y-axis. The system does not support division by diagonal or curved lines.
  3. The division size along each axis should be equal.
  4. The division size should be about 20m x 20m. In particular, note that an overly large division size (for example, more than 100m) may adversely affect the dynamic map loading features in ndt_scan_matcher and autoware_compare_map_segmentation.
  5. The split maps must not overlap with each other.
  6. A metadata file must also be provided. Its structure is described below.
"},{"location":"map/autoware_map_loader/#metadata-structure","title":"Metadata structure","text":"

The metadata should look like this:

x_resolution: 20.0\ny_resolution: 20.0\nA.pcd: [1200, 2500] # -> 1200 < x < 1220, 2500 < y < 2520\nB.pcd: [1220, 2500] # -> 1220 < x < 1240, 2500 < y < 2520\nC.pcd: [1200, 2520] # -> 1200 < x < 1220, 2520 < y < 2540\nD.pcd: [1240, 2520] # -> 1240 < x < 1260, 2520 < y < 2540\n

where x_resolution and y_resolution give the grid size in meters, and each file entry lists the minimum x and y coordinates of the grid cell covered by that file (as annotated in the comments above).

You may use pointcloud_divider to divide a pointcloud map and generate the compatible metadata.yaml.
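To make the cell convention concrete, here is a minimal Python sketch (illustrative only; the helper names are not part of the package) that parses the metadata and finds the file whose grid cell contains a given position:

```python
import yaml

def load_metadata(path):
    """Parse pointcloud_map_metadata.yaml into grid cells.

    Returns (x_res, y_res, cells), where cells maps each file name to
    its minimum (x, y) corner; a cell spans [x, x + x_res) x [y, y + y_res).
    """
    with open(path) as f:
        meta = yaml.safe_load(f)
    x_res = meta.pop("x_resolution")
    y_res = meta.pop("y_resolution")
    return x_res, y_res, meta

def file_for_position(x, y, x_res, y_res, cells):
    """Return the .pcd file whose grid cell contains (x, y), if any."""
    for name, (x_min, y_min) in cells.items():
        if x_min <= x < x_min + x_res and y_min <= y < y_min + y_res:
            return name
    return None
```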

"},{"location":"map/autoware_map_loader/#directory-structure-of-these-files","title":"Directory structure of these files","text":"

If you only have one pointcloud map, Autoware will assume the following directory structure by default.

sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n

If you have multiple pointcloud map files, an example directory structure would be as follows. Note that you need to have a metadata file when you have multiple pointcloud map files.

sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n\u2502 \u251c\u2500\u2500 A.pcd\n\u2502 \u251c\u2500\u2500 B.pcd\n\u2502 \u251c\u2500\u2500 C.pcd\n\u2502 \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 map_projector_info.yaml\n\u2514\u2500\u2500 pointcloud_map_metadata.yaml\n
"},{"location":"map/autoware_map_loader/#specific-features","title":"Specific features","text":""},{"location":"map/autoware_map_loader/#publish-raw-pointcloud-map-ros-2-topic","title":"Publish raw pointcloud map (ROS 2 topic)","text":"

The node publishes the raw pointcloud map loaded from the .pcd file(s).

"},{"location":"map/autoware_map_loader/#publish-downsampled-pointcloud-map-ros-2-topic","title":"Publish downsampled pointcloud map (ROS 2 topic)","text":"

The node publishes the downsampled pointcloud map loaded from the .pcd file(s). You can specify the downsample resolution by changing the leaf_size parameter.

"},{"location":"map/autoware_map_loader/#publish-metadata-of-pointcloud-map-ros-2-topic","title":"Publish metadata of pointcloud map (ROS 2 topic)","text":"

The node publishes the pointcloud map metadata, with an ID attached to each map file. Metadata is loaded from the .yaml file. Please see the description of PointCloudMapMetaData.msg for details.

"},{"location":"map/autoware_map_loader/#send-partial-pointcloud-map-ros-2-service","title":"Send partial pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query from a client node, the node sends the set of pointcloud maps that overlap with the queried area. Please see the description of GetPartialPointCloudMap.srv for details.
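The overlap test itself reduces to axis-aligned rectangle intersection against the metadata grid. A minimal sketch, reusing the grid-cell representation from the metadata sketch above and simplifying the query to a square area:

```python
def overlaps(cell_min, x_res, y_res, center, radius):
    """Axis-aligned overlap between one grid cell and a square query
    area centered at `center` with half-width `radius`."""
    (cx, cy), (x_min, y_min) = center, cell_min
    return (x_min < cx + radius and cx - radius < x_min + x_res and
            y_min < cy + radius and cy - radius < y_min + y_res)

def partial_map_load(cells, x_res, y_res, center, radius):
    """Return the names of all cells overlapping the query area."""
    return [name for name, cell_min in cells.items()
            if overlaps(cell_min, x_res, y_res, center, radius)]
```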

"},{"location":"map/autoware_map_loader/#send-differential-pointcloud-map-ros-2-service","title":"Send differential pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query and set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the set of map IDs. Please see the description of GetDifferentialPointCloudMap.srv for details.
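Conceptually, the differential variant is the partial query minus the IDs the client already holds; a sketch reusing partial_map_load from above:

```python
def differential_map_load(cells, x_res, y_res, center, radius, loaded_ids):
    """Overlapping cells that the client has not loaded yet."""
    needed = set(partial_map_load(cells, x_res, y_res, center, radius))
    return sorted(needed - set(loaded_ids))
```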

"},{"location":"map/autoware_map_loader/#send-selected-pointcloud-map-ros-2-service","title":"Send selected pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given an ID query from a client node, the node sends the set of pointcloud maps (each with its unique ID) specified by the query. Please see the description of GetSelectedPointCloudMap.srv for details.

"},{"location":"map/autoware_map_loader/#parameters","title":"Parameters","text":"Name Type Description Default Range enable_whole_load boolean Enable raw pointcloud map publishing True N/A enable_downsampled_whole_load boolean Enable downsampled pointcloud map publishing False N/A enable_partial_load boolean Enable partial pointcloud map server True N/A enable_selected_load boolean Enable selected pointcloud map server False N/A leaf_size float Downsampling leaf size (only used when enable_downsampled_whole_load is set true) 3.0 N/A pcd_paths_or_directory array Path(s) to pointcloud map file or directory [] N/A pcd_metadata_path string Path to pointcloud metadata file N/A"},{"location":"map/autoware_map_loader/#interfaces","title":"Interfaces","text":""},{"location":"map/autoware_map_loader/#lanelet2_map_loader","title":"lanelet2_map_loader","text":""},{"location":"map/autoware_map_loader/#feature_1","title":"Feature","text":"

lanelet2_map_loader loads a Lanelet2 file and publishes the map data as an autoware_map_msgs/LaneletMapBin message. The node projects lat/lon coordinates into the coordinate system defined in /map/map_projector_info from map_projection_loader. Please see autoware_map_msgs/msg/MapProjectorInfo.msg for supported projector types.

"},{"location":"map/autoware_map_loader/#how-to-run","title":"How to run","text":"

ros2 run autoware_map_loader lanelet2_map_loader --ros-args -p lanelet2_map_path:=path/to/map.osm

"},{"location":"map/autoware_map_loader/#subscribed-topics","title":"Subscribed Topics","text":""},{"location":"map/autoware_map_loader/#published-topics","title":"Published Topics","text":""},{"location":"map/autoware_map_loader/#parameters_1","title":"Parameters","text":"Name Type Description Default Range allow_unsupported_version boolean Flag to load unsupported format_version anyway. If true, just prints warning. true N/A center_line_resolution float Resolution of the Lanelet center line [m] 5.0 N/A use_waypoints boolean If true, centerline in the Lanelet2 map will be used as a waypoints tag. True N/A lanelet2_map_path string The lanelet2 map path pointing to the .osm file N/A

use_waypoints decides how to handle the centerline. When this flag is enabled, the overwriteLaneletsCenterlineWithWaypoints function is used instead of overwriteLaneletsCenterline. Please see the documentation of the autoware_lanelet2_extension package for details.

"},{"location":"map/autoware_map_projection_loader/","title":"autoware_map_projection_loader","text":""},{"location":"map/autoware_map_projection_loader/#autoware_map_projection_loader","title":"autoware_map_projection_loader","text":""},{"location":"map/autoware_map_projection_loader/#feature","title":"Feature","text":"

autoware_map_projection_loader is responsible for publishing map_projector_info, which defines the coordinate system in which Autoware operates. This information is necessary especially when you want to convert between global (geoid) and local coordinates.

"},{"location":"map/autoware_map_projection_loader/#map-projector-info-file-specification","title":"Map projector info file specification","text":"

You need to provide a YAML file named map_projector_info.yaml under the map_path directory. For pointcloud_map_metadata.yaml, please refer to the README of autoware_map_loader.

sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n\u251c\u2500\u2500 map_projector_info.yaml\n\u2514\u2500\u2500 pointcloud_map_metadata.yaml\n

There are three types of transformations from latitude and longitude to XYZ coordinate system as shown in the figure below. Please refer to the following details for the necessary parameters for each projector type.

"},{"location":"map/autoware_map_projection_loader/#using-local-coordinate","title":"Using local coordinate","text":"
# map_projector_info.yaml\nprojector_type: Local\n
"},{"location":"map/autoware_map_projection_loader/#limitation","title":"Limitation","text":"

Any functionality that requires latitude and longitude becomes unavailable.

The currently identified unavailable functionalities are:

"},{"location":"map/autoware_map_projection_loader/#using-mgrs","title":"Using MGRS","text":"

If you want to use MGRS, please specify the MGRS grid as well.

# map_projector_info.yaml\nprojector_type: MGRS\nvertical_datum: WGS84\nmgrs_grid: 54SUE\n
"},{"location":"map/autoware_map_projection_loader/#limitation_1","title":"Limitation","text":"

It cannot be used with maps that span two or more MGRS grids. Please use it only when the map falls within a single MGRS grid.

"},{"location":"map/autoware_map_projection_loader/#using-localcartesianutm","title":"Using LocalCartesianUTM","text":"

If you want to use local cartesian UTM, please specify the map origin as well.

# map_projector_info.yaml\nprojector_type: LocalCartesianUTM\nvertical_datum: WGS84\nmap_origin:\n  latitude: 35.6762 # [deg]\n  longitude: 139.6503 # [deg]\n  altitude: 0.0 # [m]\n
"},{"location":"map/autoware_map_projection_loader/#using-transversemercator","title":"Using TransverseMercator","text":"

If you want to use Transverse Mercator projection, please specify the map origin as well.

# map_projector_info.yaml\nprojector_type: TransverseMercator\nvertical_datum: WGS84\nmap_origin:\n  latitude: 35.6762 # [deg]\n  longitude: 139.6503 # [deg]\n  altitude: 0.0 # [m]\n
"},{"location":"map/autoware_map_projection_loader/#published-topics","title":"Published Topics","text":""},{"location":"map/autoware_map_projection_loader/#parameters","title":"Parameters","text":"

Note that these parameters are assumed to be passed from launch arguments, and it is not recommended to directly write them in map_projection_loader.param.yaml.

Name Type Description Default Range map_projector_info_path string The path where map_projector_info.yaml is located $(var map_projector_info_path) N/A lanelet2_map_path string The path where the lanelet2 map file (.osm) is located $(var lanelet2_map_path) N/A"},{"location":"map/autoware_map_tf_generator/","title":"autoware_map_tf_generator","text":""},{"location":"map/autoware_map_tf_generator/#autoware_map_tf_generator","title":"autoware_map_tf_generator","text":""},{"location":"map/autoware_map_tf_generator/#purpose","title":"Purpose","text":"

The nodes in this package broadcast the viewer frame for visualization of the map in RViz.

Note that no module requires the viewer frame; it is used only for visualization.

The following are the supported methods to calculate the position of the viewer frame:

"},{"location":"map/autoware_map_tf_generator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"map/autoware_map_tf_generator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"map/autoware_map_tf_generator/#input","title":"Input","text":""},{"location":"map/autoware_map_tf_generator/#autoware_pcd_map_tf_generator","title":"autoware_pcd_map_tf_generator","text":"Name Type Description /map/pointcloud_map sensor_msgs::msg::PointCloud2 Subscribe pointcloud map to calculate position of viewer frames"},{"location":"map/autoware_map_tf_generator/#autoware_vector_map_tf_generator","title":"autoware_vector_map_tf_generator","text":"Name Type Description /map/vector_map autoware_map_msgs::msg::LaneletMapBin Subscribe vector map to calculate position of viewer frames"},{"location":"map/autoware_map_tf_generator/#output","title":"Output","text":"Name Type Description /tf_static tf2_msgs/msg/TFMessage Broadcast viewer frames"},{"location":"map/autoware_map_tf_generator/#parameters","title":"Parameters","text":""},{"location":"map/autoware_map_tf_generator/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"map/autoware_map_tf_generator/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range map_frame string The parent frame name of viewer frame map N/A viewer_frame string Name of viewer frame viewer N/A"},{"location":"map/autoware_map_tf_generator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"map/util/lanelet2_map_preprocessor/","title":"Index","text":"

This package has been moved to https://github.com/autowarefoundation/autoware_tools/tree/main/map/autoware_lanelet2_map_utils .

"},{"location":"perception/autoware_bytetrack/","title":"bytetrack","text":""},{"location":"perception/autoware_bytetrack/#bytetrack","title":"bytetrack","text":""},{"location":"perception/autoware_bytetrack/#purpose","title":"Purpose","text":"

The core algorithm, named ByteTrack, mainly aims to perform multi-object tracking. Because the algorithm associates almost every detection box, including those with low detection scores, the number of false negatives is expected to decrease.

demo video

"},{"location":"perception/autoware_bytetrack/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_bytetrack/#cite","title":"Cite","text":""},{"location":"perception/autoware_bytetrack/#2d-tracking-modification-from-original-codes","title":"2d tracking modification from original codes","text":"

The paper only says that the 2D tracking algorithm is a simple Kalman filter. The original code uses the top-left corner, aspect ratio, and size as the state vector.

This is sometimes unstable because occlusion can change the aspect ratio, so we use the top-left corner and size as the state vector instead.

Kalman filter settings can be controlled by the parameters in config/bytetrack_node.param.yaml.
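As a sketch of the difference (illustrative, not the package's code), the two box-state parameterizations and a constant-velocity transition matrix of the kind such a filter uses look like this:

```python
import numpy as np

# Original ByteTrack state (per the description above): top-left corner,
# aspect ratio, and size, plus their velocities.
#   state = [x, y, a, s, vx, vy, va, vs]
# Modified state used here: top-left corner and size, which avoids the
# occlusion-sensitive aspect ratio.
#   state = [x, y, w, h, vx, vy, vw, vh]

def constant_velocity_transition(dim=4, dt=1.0):
    """Transition matrix F for a constant-velocity model over `dim`
    box parameters: the position part advances by velocity * dt."""
    f = np.eye(2 * dim)
    f[:dim, dim:] = dt * np.eye(dim)
    return f
```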

"},{"location":"perception/autoware_bytetrack/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_bytetrack/#bytetrack_node","title":"bytetrack_node","text":""},{"location":"perception/autoware_bytetrack/#input","title":"Input","text":"Name Type Description in/rect tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes"},{"location":"perception/autoware_bytetrack/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/objects/debug/uuid tier4_perception_msgs/DynamicObjectArray The universally unique identifiers (UUID) for each object"},{"location":"perception/autoware_bytetrack/#bytetrack_visualizer","title":"bytetrack_visualizer","text":""},{"location":"perception/autoware_bytetrack/#input_1","title":"Input","text":"Name Type Description in/image sensor_msgs/Image or sensor_msgs/CompressedImage The input image on which object detection is performed in/rect tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes in/uuid tier4_perception_msgs/DynamicObjectArray The universally unique identifiers (UUID) for each object"},{"location":"perception/autoware_bytetrack/#output_1","title":"Output","text":"Name Type Description out/image sensor_msgs/Image The image that detection bounding boxes and their UUIDs are drawn"},{"location":"perception/autoware_bytetrack/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_bytetrack/#bytetrack_node_1","title":"bytetrack_node","text":"Name Type Default Value Description track_buffer_length int 30 The frame count that a tracklet is considered to be lost"},{"location":"perception/autoware_bytetrack/#bytetrack_visualizer_1","title":"bytetrack_visualizer","text":"Name Type Default Value Description use_raw bool false The flag for the node to switch sensor_msgs/Image or sensor_msgs/CompressedImage as input"},{"location":"perception/autoware_bytetrack/#assumptionsknown-limits","title":"Assumptions/Known limits","text":""},{"location":"perception/autoware_bytetrack/#reference-repositories","title":"Reference repositories","text":""},{"location":"perception/autoware_bytetrack/#license","title":"License","text":"

The code under the lib directory is copied from the original repository and modified. The original code is under the MIT license as stated below, while this ported package is provided under the Apache License 2.0:

MIT License

Copyright (c) 2021 Yifu Zhang

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

"},{"location":"perception/autoware_cluster_merger/","title":"autoware cluster merger","text":""},{"location":"perception/autoware_cluster_merger/#autoware-cluster-merger","title":"autoware cluster merger","text":""},{"location":"perception/autoware_cluster_merger/#purpose","title":"Purpose","text":"

autoware_cluster_merger is a package for merging pointcloud clusters represented as detected objects with the feature type.

"},{"location":"perception/autoware_cluster_merger/#inner-working-algorithms","title":"Inner-working / Algorithms","text":"

The output clusters are simply the concatenation of the clusters from the input topics.

"},{"location":"perception/autoware_cluster_merger/#input-output","title":"Input / Output","text":""},{"location":"perception/autoware_cluster_merger/#input","title":"Input","text":"Name Type Description input/cluster0 tier4_perception_msgs::msg::DetectedObjectsWithFeature pointcloud clusters input/cluster1 tier4_perception_msgs::msg::DetectedObjectsWithFeature pointcloud clusters"},{"location":"perception/autoware_cluster_merger/#output","title":"Output","text":"Name Type Description output/clusters tier4_perception_msgs::msg::DetectedObjectsWithFeature merged clusters"},{"location":"perception/autoware_cluster_merger/#parameters","title":"Parameters","text":"Name Type Description Default value output_frame_id string The header frame_id of output topic. base_link"},{"location":"perception/autoware_cluster_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_cluster_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_cluster_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_cluster_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_cluster_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_compare_map_segmentation/","title":"autoware_compare_map_segmentation","text":""},{"location":"perception/autoware_compare_map_segmentation/#autoware_compare_map_segmentation","title":"autoware_compare_map_segmentation","text":""},{"location":"perception/autoware_compare_map_segmentation/#purpose","title":"Purpose","text":"

The autoware_compare_map_segmentation package filters ground points from the input pointcloud using map information (e.g., PCD, elevation map, or split map pointcloud from the map_loader interface).

"},{"location":"perception/autoware_compare_map_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_compare_map_segmentation/#compare-elevation-map-filter","title":"Compare Elevation Map Filter","text":"

Compare the z of the input points with the value of the elevation_map. The height difference is calculated by interpolating neighboring cells. Points whose height difference is below height_diff_thresh are removed as ground.
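A simplified Python sketch of the idea (nearest-cell lookup stands in for the interpolation, and the grid layout is an assumption of the sketch):

```python
import numpy as np

def filter_by_elevation(points, elev, origin, res, height_diff_thresh=0.15):
    """Remove points whose height above the elevation map is below
    height_diff_thresh, i.e. points lying on the ground.

    points: (N, 3) array; elev: 2D elevation grid [m];
    origin: (x0, y0) of cell (0, 0); res: cell size [m].
    """
    ix = ((points[:, 0] - origin[0]) / res).astype(int)
    iy = ((points[:, 1] - origin[1]) / res).astype(int)
    inside = (ix >= 0) & (ix < elev.shape[0]) & (iy >= 0) & (iy < elev.shape[1])
    ground_z = np.full(len(points), -np.inf)  # points off the map are kept
    ground_z[inside] = elev[ix[inside], iy[inside]]
    return points[points[:, 2] - ground_z >= height_diff_thresh]
```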

"},{"location":"perception/autoware_compare_map_segmentation/#distance-based-compare-map-filter","title":"Distance Based Compare Map Filter","text":"

This filter compares the input pointcloud with the map pointcloud using the nearestKSearch function of a k-d tree and removes input points that are close to the map pointcloud. The map pointcloud can be loaded statically once at startup or dynamically as the vehicle moves.
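The same check in a few lines of Python, using SciPy's k-d tree as a stand-in for PCL's nearestKSearch (a sketch, not the node's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_based_filter(points, map_points, distance_threshold=0.5):
    """Drop input points whose nearest map point is closer than
    distance_threshold."""
    tree = cKDTree(map_points)      # built once per (re)loaded map
    d, _ = tree.query(points, k=1)  # nearest-neighbor distance per point
    return points[d >= distance_threshold]
```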

"},{"location":"perception/autoware_compare_map_segmentation/#voxel-based-approximate-compare-map-filter","title":"Voxel Based Approximate Compare Map Filter","text":"

The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them.
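A sketch of the voxel-membership test (hash-set membership of integer voxel coordinates approximates the getCentroidIndexAt check; the leaf size is illustrative):

```python
import numpy as np

def voxel_keys(points, leaf_size):
    """Integer voxel coordinates per point (cf. getGridCoordinates)."""
    return np.floor(points / leaf_size).astype(int)

def voxel_approximate_filter(points, map_points, leaf_size=0.5):
    """Drop input points that fall into a voxel occupied by the map."""
    occupied = {tuple(k) for k in voxel_keys(map_points, leaf_size)}
    keep = np.array([tuple(k) not in occupied
                     for k in voxel_keys(points, leaf_size)])
    return points[keep]
```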

"},{"location":"perception/autoware_compare_map_segmentation/#voxel-based-compare-map-filter","title":"Voxel Based Compare Map Filter","text":"

The filter loads the map pointcloud (either the whole map statically at startup or dynamically while the vehicle moves) and uses VoxelGrid to downsample the map pointcloud.

For each point of the input pointcloud, the filter uses getCentroidIndexAt combined with the getGridCoordinates function from the VoxelGrid class to check whether a downsampled map point exists around the input point. Input points that have a downsampled map point in their containing voxel or in nearby voxels are removed.

"},{"location":"perception/autoware_compare_map_segmentation/#voxel-distance-based-compare-map-filter","title":"Voxel Distance based Compare Map Filter","text":"

This filter is a combination of the distance_based_compare_map_filter and voxel_based_approximate_compare_map_filter. The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid and a k-d tree of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them. Points that do not belong to any voxel are then compared against the map point cloud using the radiusSearch function of the k-d tree and are removed if they are close enough to the map.
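Combining the two previous sketches, the cascade looks roughly like this (again a sketch; the real node uses PCL's VoxelGrid and radiusSearch):

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_distance_filter(points, map_points, leaf_size=0.5, radius=0.5):
    """Voxel membership first, then a radius check on the leftovers."""
    occupied = {tuple(k) for k in np.floor(map_points / leaf_size).astype(int)}
    in_voxel = np.array([tuple(k) in occupied
                         for k in np.floor(points / leaf_size).astype(int)])
    survivors = points[~in_voxel]
    if len(survivors) == 0:
        return survivors
    tree = cKDTree(map_points)
    near = np.array([len(n) > 0
                     for n in tree.query_ball_point(survivors, r=radius)])
    return survivors[~near]
```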

"},{"location":"perception/autoware_compare_map_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_compare_map_segmentation/#compare-elevation-map-filter_1","title":"Compare Elevation Map Filter","text":""},{"location":"perception/autoware_compare_map_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/elevation_map grid_map::msg::GridMap elevation map"},{"location":"perception/autoware_compare_map_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/autoware_compare_map_segmentation/#parameters","title":"Parameters","text":"Name Type Description Default value map_layer_name string elevation map layer name elevation map_frame float frame_id of the map that is temporarily used before elevation_map is subscribed map height_diff_thresh float Remove points whose height difference is below this value [m] 0.15"},{"location":"perception/autoware_compare_map_segmentation/#other-filters","title":"Other Filters","text":""},{"location":"perception/autoware_compare_map_segmentation/#input_1","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/map sensor_msgs::msg::PointCloud2 map (in case static map loading) /localization/kinematic_state nav_msgs::msg::Odometry current ego-vehicle pose (in case dynamic map loading)"},{"location":"perception/autoware_compare_map_segmentation/#output_1","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/autoware_compare_map_segmentation/#parameters_1","title":"Parameters","text":"Name Type Description Default value use_dynamic_map_loading bool map loading mode selection, true for dynamic map loading, false for static map loading, recommended for no-split map pointcloud true distance_threshold float Threshold distance to compare input points with map points [m] 0.5 map_update_distance_threshold float Threshold of vehicle movement distance when map update is necessary (in dynamic map loading) [m] 10.0 map_loader_radius float Radius of map need to be loaded (in dynamic map loading) [m] 150.0 timer_interval_ms int Timer interval to check if the map update is necessary (in dynamic map loading) [ms] 100 publish_debug_pcd bool Enable to publish voxelized updated map in debug/downsampled_map/pointcloud for debugging. 
It might cause additional computation cost false downsize_ratio_z_axis double Positive ratio to reduce voxel_leaf_size and neighbor point distance threshold in z axis 0.5"},{"location":"perception/autoware_compare_map_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/","title":"autoware_crosswalk_traffic_light_estimator","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#autoware_crosswalk_traffic_light_estimator","title":"autoware_crosswalk_traffic_light_estimator","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#purpose","title":"Purpose","text":"

autoware_crosswalk_traffic_light_estimator estimates pedestrian traffic signals, which can be summarized as the following two tasks:

This module works without ~/input/route; in that case, it simply outputs the subscribed results as-is.

"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#input","title":"Input","text":"Name Type Description ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/route autoware_planning_msgs::msg::LaneletRoute optional: route ~/input/classified/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray classified signals"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray output that contains estimated pedestrian traffic signals"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#parameters","title":"Parameters","text":"Name Type Description Default value use_last_detect_color bool If this parameter is true, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is false, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) true last_detect_color_hold_time double The time threshold to hold for last detect color. The unit is second. 2.0 last_colors_hold_time double The time threshold to hold for history detected pedestrian traffic light color. The unit is second. 1.0"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

When the pedestrian traffic signals are detected by the perception pipeline

When the pedestrian traffic signals are NOT detected by the perception pipeline

"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#estimate-whether-pedestrian-traffic-signals-are-flashing","title":"Estimate whether pedestrian traffic signals are flashing","text":"
start\nif (the pedestrian traffic light classification result exists)then\n    : update the flashing flag according to the classification result(in_signal) and last_signals\n    if (the traffic light is flashing?)then(yes)\n      : update the traffic light state\n    else(no)\n      : the traffic light state is the same with the classification result\nif (the classification result not exists)\n    : the traffic light state is the same with the estimation\n : output the current traffic light state\nend\n
"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#update-flashing-flag","title":"Update flashing flag","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#update-traffic-light-status","title":"Update traffic light status","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#estimate-the-color-of-pedestrian-traffic-signals","title":"Estimate the color of pedestrian traffic signals","text":"

If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal may be RED in order to prevent pedestrians from crossing when the following conditions are satisfied.

"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#situation1","title":"Situation1","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#situation2","title":"Situation2","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_detected_object_feature_remover/","title":"autoware_detected_object_feature_remover","text":""},{"location":"perception/autoware_detected_object_feature_remover/#autoware_detected_object_feature_remover","title":"autoware_detected_object_feature_remover","text":""},{"location":"perception/autoware_detected_object_feature_remover/#purpose","title":"Purpose","text":"

The autoware_detected_object_feature_remover is a package that converts the topic type from DetectedObjectWithFeatureArray to DetectedObjects.

"},{"location":"perception/autoware_detected_object_feature_remover/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_detected_object_feature_remover/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_feature_remover/#input","title":"Input","text":"Name Type Description ~/input tier4_perception_msgs::msg::DetectedObjectWithFeatureArray detected objects with feature field"},{"location":"perception/autoware_detected_object_feature_remover/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/autoware_detected_object_feature_remover/#parameters","title":"Parameters","text":"

None

"},{"location":"perception/autoware_detected_object_feature_remover/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_detected_object_validation/","title":"detected_object_validation","text":""},{"location":"perception/autoware_detected_object_validation/#detected_object_validation","title":"detected_object_validation","text":""},{"location":"perception/autoware_detected_object_validation/#purpose","title":"Purpose","text":"

The purpose of this package is to eliminate obvious false positives of DetectedObjects.

"},{"location":"perception/autoware_detected_object_validation/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/autoware_detected_object_validation/#node-parameters","title":"Node Parameters","text":""},{"location":"perception/autoware_detected_object_validation/#object_lanelet_filter","title":"object_lanelet_filter","text":"Name Type Description Default Range filter_target_label.UNKNOWN boolean If true, unknown objects are filtered. 1 N/A filter_target_label.CAR boolean If true, car objects are filtered. 0 N/A filter_target_label.TRUCK boolean If true, truck objects are filtered. 0 N/A filter_target_label.BUS boolean If true, bus objects are filtered. 0 N/A filter_target_label.TRAILER boolean If true, trailer objects are filtered. 0 N/A filter_target_label.MOTORCYCLE boolean If true, motorcycle objects are filtered. 0 N/A filter_target_label.BICYCLE boolean If true, bicycle objects are filtered. 0 N/A filter_target_label.PEDESTRIAN boolean If true, pedestrian objects are filtered. 0 N/A polygon_overlap_filter.enabled boolean If true, objects that are not in the lanelet polygon are filtered. 1 N/A lanelet_direction_filter.enabled boolean If true, objects that are not in the same direction as the lanelet are filtered. 0 N/A lanelet_direction_filter.velocity_yaw_threshold float If the yaw difference between the object and the lanelet is greater than this value, the object is filtered. 0.785398 N/A lanelet_direction_filter.object_speed_threshold float If the object speed is greater than this value, the object is filtered. 3 N/A"},{"location":"perception/autoware_detected_object_validation/#object_position_filter","title":"object_position_filter","text":"Name Type Description Default Range filter_target_label.UNKNOWN boolean Filter for UNKNOWN label 1 N/A filter_target_label.CAR boolean Filter for CAR label 0 N/A filter_target_label.TRUCK boolean Filter for TRUCK label 0 N/A filter_target_label.BUS boolean Filter for BUS label 0 N/A filter_target_label.TRAILER boolean Filter for TRAILER label 0 N/A filter_target_label.MOTORCYCLE boolean Filter for MOTORCYCLE label 0 N/A filter_target_label.BICYCLE boolean Filter for BICYCLE label 0 N/A filter_target_label.PEDESTRIAN boolean Filter for PEDESTRIAN label 0 N/A upper_bound_x float Upper bound for X coordinate 100 N/A lower_bound_x float Lower bound for X coordinate 0 N/A upper_bound_y float Upper bound for Y coordinate 10 N/A lower_bound_y float Lower bound for Y coordinate -10 N/A"},{"location":"perception/autoware_detected_object_validation/#obstacle_pointcloud_based_validator","title":"obstacle_pointcloud_based_validator","text":"Name Type Description Default Range min_points_num array The minimum number of obstacle point clouds in DetectedObjects [10, 10, 10, 10, 10, 10, 10, 10] N/A max_points_num array The max number of obstacle point clouds in DetectedObjects [10, 10, 10, 10, 10, 10, 10, 10] N/A min_points_and_distance_ratio array Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. [800, 800, 800, 800, 800, 800, 800, 800] N/A validate_max_distance_m float The maximum distance from the baselink to the object to be validated 70.0 N/A using_2d_validator boolean The xy-plane projected (2D) obstacle point clouds will be used for validation False N/A enable_debugger boolean Whether to create debug topics or not? 
False N/A"},{"location":"perception/autoware_detected_object_validation/#occupancy_grid_based_validator","title":"occupancy_grid_based_validator","text":"Name Type Description Default Range mean_threshold float The percentage threshold of allowed non-freespace. 0.6 N/A enable_debug boolean Whether to display debug images or not? 0 N/A"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/","title":"object_lanelet_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#object_lanelet_filter","title":"object_lanelet_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#purpose","title":"Purpose","text":"

The object_lanelet_filter is a node that filters detected objects using a vector map. Only objects inside the vector map will be published.

"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#input","title":"Input","text":"Name Type Description input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map input/object autoware_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#output","title":"Output","text":"Name Type Description output/object autoware_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#parameters","title":"Parameters","text":"Name Type Description debug bool if true, publish debug markers lanelet_extra_margin double if > 0 lanelet polygons are expanded by extra margin, if <= 0 margin is disabled"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered."},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The lanelet filter is performed based on the shape polygon and bounding box of the objects.

"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/","title":"object_position_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#object_position_filter","title":"object_position_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#purpose","title":"Purpose","text":"

The object_position_filter is a node that filters detected objects based on their x and y positions. Only objects inside the x, y bounds will be published.

"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#input","title":"Input","text":"Name Type Description input/object autoware_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#output","title":"Output","text":"Name Type Description output/object autoware_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered. upper_bound_x float 100.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_x float 0.00 Bound for filtering. Only used if filter_by_xy_position is true upper_bound_y float 50.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_y float -50.00 Bound for filtering. Only used if filter_by_xy_position is true"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Filtering is performed based on the center position of the object.

"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/","title":"obstacle pointcloud based validator","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#obstacle-pointcloud-based-validator","title":"obstacle pointcloud based validator","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If the number of obstacle points inside a DetectedObject is small, the object is considered a false positive and removed. The obstacle pointcloud can be a pointcloud after compare-map filtering or a ground-filtered pointcloud.
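The distance-aware threshold implied by min_points_num and min_points_and_distance_ratio can be sketched as follows (illustrative values and names; the actual per-class values come from the parameter file):

```python
def required_points(distance_m, min_points_num=10, ratio=800.0):
    """Points required at 1 m scale down with distance, but never
    below min_points_num (cf. min_points_and_distance_ratio)."""
    return max(min_points_num, int(ratio / max(distance_m, 1e-3)))

def is_valid(num_points_in_object, distance_from_baselink_m):
    """Keep the object only if enough obstacle points fall inside it."""
    return num_points_in_object >= required_points(distance_from_baselink_m)
```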

In the debug image above, the red DetectedObject is the validated object. The blue object is the deleted object.

"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/obstacle_pointcloud sensor_msgs::msg::PointCloud2 Obstacle point cloud of dynamic objects"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#parameters","title":"Parameters","text":"Name Type Description using_2d_validator bool The xy-plane projected (2D) obstacle point clouds will be used for validation min_points_num int The minimum number of obstacle point clouds in DetectedObjects max_points_num int The max number of obstacle point clouds in DetectedObjects min_points_and_distance_ratio float Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. enable_debugger bool Whether to create debug topics or not?"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only objects represented as BoundingBox or Cylinder are supported.

"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/","title":"occupancy grid based validator","text":""},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#occupancy-grid-based-validator","title":"occupancy grid based validator","text":""},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Compare the occupancy grid map with each DetectedObject, and delete objects for which a large percentage of the object lies in freespace.

Basically, it takes an occupancy grid map as input and generates a binary image of freespace or other.

A mask image is generated for each DetectedObject, and the average value (percentage of non-freespace) within the mask is calculated. If the percentage is lower than mean_threshold, the object is deleted.
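In sketch form (assuming the occupancy grid has been rasterized into the same image frame as the object mask, with 0 marking freespace):

```python
import numpy as np

def validate_by_occupancy(mask, occupancy, mean_threshold=0.6, free_value=0):
    """mask: boolean footprint of one DetectedObject; occupancy: grid
    image aligned with the mask. Keep the object only if the ratio of
    non-freespace cells inside the mask reaches mean_threshold."""
    cells = occupancy[mask]
    if cells.size == 0:
        return False  # no overlap with the grid: treat as invalid
    return np.mean(cells != free_value) >= mean_threshold
```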

"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/occupancy_grid_map nav_msgs::msg::OccupancyGrid OccupancyGrid with no time series calculation is preferred."},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#parameters","title":"Parameters","text":"Name Type Description mean_threshold float The percentage threshold of allowed non-freespace. enable_debug bool Whether to display debug images or not?"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only vehicles represented as BoundingBox are supported.

"},{"location":"perception/autoware_detection_by_tracker/","title":"autoware_detection_by_tracker","text":""},{"location":"perception/autoware_detection_by_tracker/#autoware_detection_by_tracker","title":"autoware_detection_by_tracker","text":""},{"location":"perception/autoware_detection_by_tracker/#purpose","title":"Purpose","text":"

This package feeds back the tracked objects to the detection module to keep it stable and keep detecting objects.

The autoware detection by tracker takes as input an unknown object containing a cluster of points and a tracker. The unknown object is optimized to fit the size of the tracker so that it can continue to be detected.

"},{"location":"perception/autoware_detection_by_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The autoware detection by tracker receives an unknown object containing a point cloud together with a tracker; the unknown object is mainly shape-fitted using euclidean clustering. Shape fitting with euclidean clustering and similar methods suffers from problems known as under-segmentation and over-segmentation.

Adapted from [3]

Simply looking at the overlap between the unknown object and the tracker does not work, so measures against under-segmentation and over-segmentation are needed.

"},{"location":"perception/autoware_detection_by_tracker/#policy-for-dealing-with-over-segmentation","title":"Policy for dealing with over segmentation","text":"
  1. Merge the unknown objects inside the tracker into a single object.
  2. Fit the shape using tracker information, such as angle and size, as reference.
"},{"location":"perception/autoware_detection_by_tracker/#policy-for-dealing-with-under-segmentation","title":"Policy for dealing with under segmentation","text":"
  1. Compare the tracker and unknown objects, and determine that those with large recall and small precision are under-segmented objects (a sketch of this test follows the list).
  2. To divide the cluster of an under-segmented object, iterate over the clustering parameters to produce smaller clusters.
  3. Adjust the parameters several times and adopt the result with the highest IoU.
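
As a rough sketch of the test in step 1 (the helper name and the use of 2D footprint areas are assumptions for illustration, not the package's actual API):

// Hypothetical under-segmentation test based on 2D footprint overlap.
// cluster_area and tracker_area are the footprint areas [m^2] of the
// unknown cluster and the tracker; intersection_area is their overlap.
bool is_under_segmented(
  double cluster_area, double tracker_area, double intersection_area,
  double recall_min, double precision_max)
{
  const double recall = intersection_area / tracker_area;     // how much of the tracker is covered
  const double precision = intersection_area / cluster_area;  // how much of the cluster lies on the tracker
  // Large recall with small precision suggests one big cluster spans the
  // tracker and more: a candidate for re-clustering with tighter parameters.
  return recall > recall_min && precision < precision_max;
}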
"},{"location":"perception/autoware_detection_by_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detection_by_tracker/#input","title":"Input","text":"Name Type Description ~/input/initial_objects tier4_perception_msgs::msg::DetectedObjectsWithFeature unknown objects ~/input/tracked_objects tier4_perception_msgs::msg::TrackedObjects trackers"},{"location":"perception/autoware_detection_by_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::DetectedObjects objects"},{"location":"perception/autoware_detection_by_tracker/#parameters","title":"Parameters","text":"Name Type Description Default value tracker_ignore_label.UNKNOWN bool If true, the node will ignore the tracker if its label is unknown. true tracker_ignore_label.CAR bool If true, the node will ignore the tracker if its label is CAR. false tracker_ignore_label.PEDESTRIAN bool If true, the node will ignore the tracker if its label is pedestrian. false tracker_ignore_label.BICYCLE bool If true, the node will ignore the tracker if its label is bicycle. false tracker_ignore_label.MOTORCYCLE bool If true, the node will ignore the tracker if its label is MOTORCYCLE. false tracker_ignore_label.BUS bool If true, the node will ignore the tracker if its label is bus. false tracker_ignore_label.TRUCK bool If true, the node will ignore the tracker if its label is truck. false tracker_ignore_label.TRAILER bool If true, the node will ignore the tracker if its label is TRAILER. false"},{"location":"perception/autoware_detection_by_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_detection_by_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_detection_by_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_detection_by_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] M. Himmelsbach, et al. \"Tracking and classification of arbitrary objects with bottom-up/top-down detection.\" (2012).

[2] Arya Senna Abdul Rachman, Arya. \"3D-LIDAR Multi Object Tracking for Autonomous Driving: Multi-target Detection and Tracking under Urban Road Uncertainties.\" (2017).

[3] David Held, et al. \"A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues.\" (2016).

"},{"location":"perception/autoware_detection_by_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_elevation_map_loader/","title":"autoware_elevation_map_loader","text":""},{"location":"perception/autoware_elevation_map_loader/#autoware_elevation_map_loader","title":"autoware_elevation_map_loader","text":""},{"location":"perception/autoware_elevation_map_loader/#purpose","title":"Purpose","text":"

This package provides an elevation map for autoware_compare_map_segmentation.

"},{"location":"perception/autoware_elevation_map_loader/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node generates an elevation_map from the subscribed pointcloud_map and vector_map and publishes it. The generated elevation_map is saved locally and loaded on subsequent runs.

The elevation value of each cell is the average z value of the points in the lowest cluster. Cells with no elevation value can be inpainted using the values of neighboring cells.
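
A minimal sketch of that per-cell rule (the data layout here is a simplifying assumption; the real node clusters points per cell with PCL):

#include <algorithm>
#include <limits>
#include <vector>

// Hypothetical per-cell input: height clusters, each a list of z values.
// The cell elevation is the mean z of the lowest cluster; an empty cell
// yields NaN and becomes a candidate for inpainting from its neighbors.
double cell_elevation(const std::vector<std::vector<double>> & clusters_z)
{
  double best_mean = std::numeric_limits<double>::quiet_NaN();
  double best_min = std::numeric_limits<double>::infinity();
  for (const auto & cluster : clusters_z) {
    if (cluster.empty()) continue;
    double sum = 0.0;
    double min_z = std::numeric_limits<double>::infinity();
    for (const double z : cluster) {
      sum += z;
      min_z = std::min(min_z, z);
    }
    if (min_z < best_min) {  // keep the lowest cluster
      best_min = min_z;
      best_mean = sum / static_cast<double>(cluster.size());
    }
  }
  return best_mean;
}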

"},{"location":"perception/autoware_elevation_map_loader/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_elevation_map_loader/#input","title":"Input","text":"Name Type Description input/pointcloud_map sensor_msgs::msg::PointCloud2 The point cloud map input/vector_map autoware_map_msgs::msg::LaneletMapBin (Optional) The binary data of lanelet2 map input/pointcloud_map_metadata autoware_map_msgs::msg::PointCloudMapMetaData (Optional) The metadata of point cloud map"},{"location":"perception/autoware_elevation_map_loader/#output","title":"Output","text":"Name Type Description output/elevation_map grid_map_msgs::msg::GridMap The elevation map output/elevation_map_cloud sensor_msgs::msg::PointCloud2 (Optional) The point cloud generated from the value of elevation map"},{"location":"perception/autoware_elevation_map_loader/#service","title":"Service","text":"Name Type Description service/get_selected_pcd_map autoware_map_msgs::srv::GetSelectedPointCloudMap (Optional) service to request point cloud map. If pointcloud_map_loader uses selected pointcloud map loading via ROS 2 service, use this."},{"location":"perception/autoware_elevation_map_loader/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_elevation_map_loader/#node-parameters","title":"Node parameters","text":"Name Type Description Default value map_layer_name std::string elevation_map layer name elevation param_file_path std::string GridMap parameters config path_default elevation_map_directory std::string elevation_map file (bag2) path_default map_frame std::string map_frame when loading elevation_map file map use_inpaint bool Whether to inpaint empty cells true inpaint_radius float Radius of a circular neighborhood of each point inpainted that is considered by the algorithm [m] 0.3 use_elevation_map_cloud_publisher bool Whether to publish output/elevation_map_cloud false use_lane_filter bool Whether to filter elevation_map with vector_map false lane_margin float Margin distance from the lane polygon of the area to be included in the inpainting mask [m]. Used only when use_lane_filter=True. 0.0 use_sequential_load bool Whether to get point cloud map by service false sequential_map_load_num int The number of point cloud maps to load at once (only used when use_sequential_load is set true). This should not be larger than number of all point cloud map cells. 1"},{"location":"perception/autoware_elevation_map_loader/#gridmap-parameters","title":"GridMap parameters","text":"

The parameters are described in config/elevation_map_parameters.yaml.

"},{"location":"perception/autoware_elevation_map_loader/#general-parameters","title":"General parameters","text":"Name Type Description Default value pcl_grid_map_extraction/num_processing_threads int Number of threads for processing grid map cells. Filtering of the raw input point cloud is not parallelized. 12"},{"location":"perception/autoware_elevation_map_loader/#grid-map-parameters","title":"Grid map parameters","text":"

See: https://github.com/ANYbotics/grid_map/tree/ros2/grid_map_pcl

Resulting grid map parameters.

Name Type Description Default value pcl_grid_map_extraction/grid_map/min_num_points_per_cell int Minimum number of points in the point cloud that have to fall within any of the grid map cells. Otherwise the cell elevation will be set to NaN. 3 pcl_grid_map_extraction/grid_map/resolution float Resolution of the grid map. Width and length are computed automatically. 0.3 pcl_grid_map_extraction/grid_map/height_type int The parameter that determines the elevation of a cell. 0: Smallest value among the average values of each cluster, 1: Mean value of the cluster with the most points 1 pcl_grid_map_extraction/grid_map/height_thresh float Height range from the smallest cluster (only for height_type 1) 1.0"},{"location":"perception/autoware_elevation_map_loader/#point-cloud-pre-processing-parameters","title":"Point Cloud Pre-processing Parameters","text":""},{"location":"perception/autoware_elevation_map_loader/#rigid-body-transform-parameters","title":"Rigid body transform parameters","text":"

Rigid body transform that is applied to the point cloud before computing elevation.

Name Type Description Default value pcl_grid_map_extraction/cloud_transform/translation float Translation (xyz) that is applied to the input point cloud before computing elevation. 0.0 pcl_grid_map_extraction/cloud_transform/rotation float Rotation (intrinsic rotation, convention X-Y'-Z'') that is applied to the input point cloud before computing elevation. 0.0"},{"location":"perception/autoware_elevation_map_loader/#cluster-extraction-parameters","title":"Cluster extraction parameters","text":"

Cluster extraction is based on PCL algorithms. See https://pointclouds.org/documentation/tutorials/cluster_extraction.html for more details.

Name Type Description Default value pcl_grid_map_extraction/cluster_extraction/cluster_tolerance float Distance between points below which they will still be considered part of one cluster. 0.2 pcl_grid_map_extraction/cluster_extraction/min_num_points int Min number of points that a cluster needs to have (otherwise it will be discarded). 3 pcl_grid_map_extraction/cluster_extraction/max_num_points int Max number of points that a cluster can have (otherwise it will be discarded). 1000000"},{"location":"perception/autoware_elevation_map_loader/#outlier-removal-parameters","title":"Outlier removal parameters","text":"

See https://pointclouds.org/documentation/tutorials/statistical_outlier.html for more explanation on outlier removal.

Name Type Description Default value pcl_grid_map_extraction/outlier_removal/is_remove_outliers float Whether to perform statistical outlier removal. false pcl_grid_map_extraction/outlier_removal/mean_K float Number of neighbors to analyze for estimating statistics of a point. 10 pcl_grid_map_extraction/outlier_removal/stddev_threshold float Number of standard deviations under which points are considered to be inliers. 1.0"},{"location":"perception/autoware_elevation_map_loader/#subsampling-parameters","title":"Subsampling parameters","text":"

See https://pointclouds.org/documentation/tutorials/voxel_grid.html for more explanation on point cloud downsampling.

Name Type Description Default value pcl_grid_map_extraction/downsampling/is_downsample_cloud bool Whether to perform downsampling or not. false pcl_grid_map_extraction/downsampling/voxel_size float Voxel sizes (xyz) in meters. 0.02"},{"location":"perception/autoware_euclidean_cluster/","title":"autoware_euclidean_cluster","text":""},{"location":"perception/autoware_euclidean_cluster/#autoware_euclidean_cluster","title":"autoware_euclidean_cluster","text":""},{"location":"perception/autoware_euclidean_cluster/#purpose","title":"Purpose","text":"

autoware_euclidean_cluster is a package for clustering points into smaller parts to classify objects.

This package has two clustering methods: euclidean_cluster and voxel_grid_based_euclidean_cluster.

"},{"location":"perception/autoware_euclidean_cluster/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_euclidean_cluster/#euclidean_cluster","title":"euclidean_cluster","text":"

pcl::EuclideanClusterExtraction is applied to the points. See the official documentation for details.

"},{"location":"perception/autoware_euclidean_cluster/#voxel_grid_based_euclidean_cluster","title":"voxel_grid_based_euclidean_cluster","text":"
  1. A centroid in each voxel is calculated by pcl::VoxelGrid.
  2. The centroids are clustered by pcl::EuclideanClusterExtraction.
  3. The input points are clustered based on the clustered centroids (a condensed sketch follows this list).
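
A condensed sketch of the three steps with PCL (the leaf sizes and tolerances are illustrative placeholders, not the package defaults, and step 3 is only outlined):

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <vector>

void voxel_grid_based_clustering(const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)
{
  // 1. One centroid per voxel via pcl::VoxelGrid.
  pcl::PointCloud<pcl::PointXYZ>::Ptr centroids(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;
  voxel_grid.setInputCloud(input);
  voxel_grid.setLeafSize(0.3f, 0.3f, 100.0f);  // a large z leaf approximates 2D clustering
  voxel_grid.filter(*centroids);

  // 2. Cluster the (much smaller) centroid cloud.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(centroids);
  std::vector<pcl::PointIndices> cluster_indices;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> clustering;
  clustering.setClusterTolerance(0.7);
  clustering.setMinClusterSize(1);
  clustering.setSearchMethod(tree);
  clustering.setInputCloud(centroids);
  clustering.extract(cluster_indices);

  // 3. Map each input point back to the cluster of the voxel centroid it
  //    belongs to (omitted: the real node keeps the voxel-to-point mapping).
}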
"},{"location":"perception/autoware_euclidean_cluster/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_euclidean_cluster/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/autoware_euclidean_cluster/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature cluster pointcloud debug/clusters sensor_msgs::msg::PointCloud2 colored cluster pointcloud for visualization"},{"location":"perception/autoware_euclidean_cluster/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_euclidean_cluster/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/autoware_euclidean_cluster/#euclidean_cluster_1","title":"euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space"},{"location":"perception/autoware_euclidean_cluster/#voxel_grid_based_euclidean_cluster_1","title":"voxel_grid_based_euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space voxel_leaf_size float the voxel leaf size of x and y min_points_number_per_voxel int the minimum number of points for a voxel"},{"location":"perception/autoware_euclidean_cluster/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The use_height option of voxel_grid_based_euclidean_cluster isn't implemented yet.

"},{"location":"perception/autoware_ground_segmentation/","title":"autoware_ground_segmentation","text":""},{"location":"perception/autoware_ground_segmentation/#autoware_ground_segmentation","title":"autoware_ground_segmentation","text":""},{"location":"perception/autoware_ground_segmentation/#purpose","title":"Purpose","text":"

The autoware_ground_segmentation is a node that removes the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A detailed description of each ground segmentation algorithm is available at the following links.

Filter Name Description Detail ray_ground_filter A method of removing the ground based on the geometrical relationship between points lined up along each ray link scan_ground_filter Almost the same method as ray_ground_filter, but with slightly improved performance link ransac_ground_filter A method of removing the ground by approximating the ground with a plane link"},{"location":"perception/autoware_ground_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_ground_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"perception/autoware_ground_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/autoware_ground_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id has_static_tf_only bool false flag to listen to TF only once max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use the approximate sync option"},{"location":"perception/autoware_ground_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

autoware::pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"perception/autoware_ground_segmentation/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/","title":"RANSAC Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#ransac-ground-filter","title":"RANSAC Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A plane is fitted to the input points, and points farther than a certain distance from the plane are treated as non-ground points. Normally, when using this method, the input points are filtered beforehand so that they are almost flat. Since the drivable area is often flat, there are methods such as filtering by lane.
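
A minimal sketch of the plane fit with PCL (the iteration count and distance threshold are placeholders standing in for the max_iterations and outlier_threshold parameters):

#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit a single plane with RANSAC; inliers are treated as ground, and
// points beyond the distance threshold as non-ground.
void fit_ground_plane(const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)
{
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  pcl::PointIndices inliers;
  pcl::ModelCoefficients coefficients;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setMaxIterations(1000);     // cf. max_iterations
  seg.setDistanceThreshold(0.2);  // [m], cf. outlier_threshold
  seg.setInputCloud(input);
  seg.segment(inliers, coefficients);
  // A fitted plane whose slope exceeds plane_slope_threshold would be rejected.
}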

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame string base_link frame has_static_tf_only bool Flag to listen TF only once unit_axis string The axis which we need to search ground plane max_iterations int The maximum number of iterations outlier_threshold double The distance threshold to the model [m] plane_slope_threshold double The slope threshold to prevent mis-fitting [deg] voxel_size_x double voxel size x [m] voxel_size_y double voxel size y [m] voxel_size_z double voxel size z [m] height_threshold double The height threshold from ground plane for no ground points [m] debug bool whether to output debug information"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#referencesexternal-links","title":"References/External links","text":"

https://pcl.readthedocs.io/projects/tutorials/en/latest/planar_segmentation.html

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/","title":"Ray Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#ray-ground-filter","title":"Ray Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The points are divided radially into rays, and on each ray the ground is classified sequentially, starting from the point closest to the ego vehicle, based on geometric information such as the distance and angle between points.
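
A minimal sketch of the radial binning that forms the rays (the struct and helper are illustrative; the fixed divider angle corresponds to the radial_divider_angle parameter):

#include <cmath>
#include <cstddef>

struct PointXY { double x; double y; };

// Assign a point to a ray by azimuth; points within a ray are then
// sorted by xy-distance and classified from near to far.
std::size_t ray_index(const PointXY & p, double radial_divider_angle_rad)
{
  double azimuth = std::atan2(p.y, p.x);  // [-pi, pi]
  if (azimuth < 0.0) {
    azimuth += 2.0 * M_PI;                // wrap to [0, 2*pi)
  }
  return static_cast<std::size_t>(azimuth / radial_divider_angle_rad);
}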

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description input_frame string frame id of input pointcloud output_frame string frame id of output pointcloud has_static_tf_only bool Flag to listen TF only once general_max_slope double The triangle created by general_max_slope is called the global cone. If the point is outside the global cone, it is judged to be a point that is no on the ground initial_max_slope double Generally, the point where the object first hits is far from ego-vehicle because of sensor blind spot, so resolution is different from that point and thereafter, so this parameter exists to set a separate local_max_slope local_max_slope double The triangle created by local_max_slope is called the local cone. This parameter for classification based on the continuity of points min_height_threshold double This parameter is used instead of height_threshold because it's difficult to determine continuity in the local cone when the points are too close to each other. radial_divider_angle double The angle of ray concentric_divider_distance double Only check points which radius is larger than concentric_divider_distance reclass_distance_threshold double To check if point is to far from previous one, if so classify again min_x double The parameter to set vehicle footprint manually max_x double The parameter to set vehicle footprint manually min_y double The parameter to set vehicle footprint manually max_y double The parameter to set vehicle footprint manually"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/","title":"Scan Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#scan-ground-filter","title":"Scan Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm works in the following steps:

  1. Divide the whole pointcloud into groups (so-called rays) by azimuth angle.
  2. Sort the points on each ray by radial distance (xy-distance).
  3. Divide the pointcloud on each ray into grids.
  4. Classify each point (a condensed sketch follows this list):
    1. Check the radial distance to the previous point; if the distance is large, the previous point is \"non-ground\", and the current point is higher than the previous one, the current point is classified as non-ground.
    2. Check the vertical angle of the point compared with the previous ground grid.
    3. Check the height of the point compared with the predicted ground level.
    4. If the vertical angle is greater than local_slope_max and the height relative to the predicted ground level is greater than the non-ground height threshold, the point is classified as \"non-ground\".
    5. If the vertical angle is in the range [-local_slope_max, local_slope_max] or the height relative to the predicted ground level is smaller than non_ground_height_threshold, the point is classified as \"ground\".
    6. If the vertical angle is lower than -local_slope_max or the height relative to the ground level is greater than detection_range_z_max, the point is classified as out of range.
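
The per-point decision in step 4 can be condensed as follows. This is a sketch under the assumption that the height dz relative to the predicted ground level and the radial distance dr from the previous ground grid have already been computed; the enum and function are illustrative, not the node's actual code:

#include <cmath>

enum class PointLabel { Ground, NonGround, OutOfRange };

// Hypothetical condensed form of rules 4-6 above.
PointLabel classify(
  double dz, double dr, double local_slope_max_rad,
  double non_ground_height_threshold, double detection_range_z_max)
{
  const double vertical_angle = std::atan2(dz, dr);
  if (vertical_angle < -local_slope_max_rad || dz > detection_range_z_max) {
    return PointLabel::OutOfRange;  // rule 6
  }
  if (vertical_angle > local_slope_max_rad && dz > non_ground_height_threshold) {
    return PointLabel::NonGround;   // rule 4
  }
  return PointLabel::Ground;        // rule 5 (remaining cases)
}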
"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description input_frame string \"base_link\" frame id of input pointcloud output_frame string \"base_link\" frame id of output pointcloud has_static_tf_only bool false Flag to listen TF only once global_slope_max_angle_deg double 8.0 The global angle to classify as the ground or object [deg].A large threshold may reduce false positive of high slope road classification but it may lead to increase false negative of non-ground classification, particularly for small objects. local_slope_max_angle_deg double 10.0 The local angle to classify as the ground or object [deg] when comparing with adjacent point.A small value enhance accuracy classification of object with inclined surface. This should be considered together with split_points_distance_tolerance value. radial_divider_angle_deg double 1.0 The angle which divide the whole pointcloud to sliced group [deg] split_points_distance_tolerance double 0.2 The xy-distance threshold to distinguish far and near [m] split_height_distance double 0.2 The height threshold to distinguish ground and non-ground pointcloud when comparing with adjacent points [m]. A small threshold improves classification of non-ground point, especially for high elevation resolution pointcloud lidar. However, it might cause false positive for small step-like road surface or misaligned multiple lidar configuration. use_virtual_ground_point bool true whether to use the ground center of front wheels as the virtual ground point. detection_range_z_max float 2.5 Maximum height of detection range [m], applied only for elevation_grid_mode center_pcl_shift float 0.0 The x-axis offset of addition LiDARs from vehicle center of mass [m], recommended to use only for additional LiDARs in elevation_grid_mode non_ground_height_threshold float 0.2 Height threshold of non ground objects [m] as split_height_distance and applied only for elevation_grid_mode grid_mode_switch_radius float 20.0 The distance where grid division mode change from by distance to by vertical angle [m], applied only for elevation_grid_mode grid_size_m float 0.5 The first grid size [m], applied only for elevation_grid_mode.A large value enhances the prediction stability for ground surface. suitable for rough surface or multiple lidar configuration. gnd_grid_buffer_size uint16 4 Number of grids using to estimate local ground slope, applied only for elevation_grid_mode low_priority_region_x float -20.0 The non-zero x threshold in back side from which small objects detection is low priority [m] elevation_grid_mode bool true Elevation grid scan mode option use_recheck_ground_cluster bool true Enable recheck ground cluster use_lowest_point bool true to select lowest point for reference in recheck ground cluster, otherwise select middle point"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

The elevation grid idea is referred from \"Shen Z, Liang H, Lin L, Wang Z, Huang W, Yu J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sensing. 2021; 13(16):3239. https://doi.org/10.3390/rs13163239\"

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/","title":"autoware_image_projection_based_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/#autoware_image_projection_based_fusion","title":"autoware_image_projection_based_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/#purpose","title":"Purpose","text":"

The autoware_image_projection_based_fusion is a package that fuses detected obstacles (bounding boxes or segmentation) from images with 3D pointclouds or obstacles (bounding boxes, clusters, or segmentation).

"},{"location":"perception/autoware_image_projection_based_fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_image_projection_based_fusion/#sync-algorithm","title":"Sync Algorithm","text":""},{"location":"perception/autoware_image_projection_based_fusion/#matching","title":"matching","text":"

The offset between each camera and the lidar is set according to their shutter timing. After applying the offset to the timestamp, if the interval between the timestamp of the pointcloud topic and that of the ROI message is less than the match threshold, the two messages are matched.

The current default values in autoware.universe for the TIER IV Robotaxi are: - input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] - match_threshold_ms: 30.0
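
A sketch of the matching rule (times in milliseconds; the helper name and the sign convention for applying the offset to the ROI stamp are assumptions here, not the node's actual code):

#include <cmath>

// Returns true if an ROI message matches the pointcloud after
// compensating for the per-camera shutter offset.
bool is_matched(
  double pointcloud_stamp_ms, double roi_stamp_ms,
  double input_offset_ms, double match_threshold_ms)
{
  // Assumed sign convention: the camera stamp is shifted by its offset.
  const double compensated_roi_stamp_ms = roi_stamp_ms - input_offset_ms;
  return std::fabs(pointcloud_stamp_ms - compensated_roi_stamp_ms) <= match_threshold_ms;
}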

"},{"location":"perception/autoware_image_projection_based_fusion/#fusion-and-timer","title":"fusion and timer","text":"

The subscription status of each message is marked with 'O'.

1. If a pointcloud message is received under the condition below:

pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O O

If the roi msgs can be matched, fuse them and postprocess the pointcloud message. Otherwise, fuse the matched roi msgs and cache the pointcloud.

2. If a pointcloud message is received under the condition below:

pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O

If the roi msgs can be matched, fuse them and cache the pointcloud.

3. If a pointcloud message is received under the condition below:

pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O O

If roi msg 3 is received before the next pointcloud message or the timeout, fuse it if matched; otherwise, wait for the next roi msg 3. If roi msg 3 is not received before the next pointcloud message or the timeout, postprocess the pointcloud message as it is.

The timeout threshold should be set according to the postprocessing time. E.g., if the postprocessing time is around 50 ms, the timeout threshold should be set smaller than 50 ms, so that the whole processing time can stay under 100 ms. The current default value in autoware.universe for XX1 is: - timeout_ms: 50.0

"},{"location":"perception/autoware_image_projection_based_fusion/#the-build_only-option","title":"The build_only option","text":"

The pointpainting_fusion node has a build_only option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all the ROS parameters to the .param.yaml file in Autoware Universe, the build_only option has not been moved to the .param.yaml file for now, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:

ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml model_name:=pointpainting model_path:=/home/autoware/autoware_data/image_projection_based_fusion model_param_path:=$(ros2 pkg prefix autoware_image_projection_based_fusion --share)/config/pointpainting.param.yaml build_only:=true\n
"},{"location":"perception/autoware_image_projection_based_fusion/#known-limits","title":"Known Limits","text":"

The rclcpp::TimerBase timer cannot break out of a for loop; therefore, even if the timeout occurs while fusing an roi msg partway through, the program will run until all msgs are fused.

"},{"location":"perception/autoware_image_projection_based_fusion/#approximate-camera-projection","title":"Approximate camera projection","text":"

For algorithms like pointpainting_fusion, the computation required to project points onto an unrectified (raw) image can be substantial. To address this, an option is provided to reduce the computational load. Set the approximate_camera_projection parameter to true for each camera (ROIs). If the corresponding point_project_to_unrectified_image parameter is also set to true, the projections will be pre-cached.

The cached projections are calculated using a grid, with the grid size specified by the approximation_grid_width_size and approximation_grid_height_size parameters in the configuration file. The centers of the grid cells are used as the projected points.
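
A sketch of the cache lookup (the cache layout, struct, and helper are assumptions for illustration; the real node builds the cache from the camera's distortion model):

#include <cstddef>
#include <vector>

struct Pixel { float u; float v; };

// Pre-computed projection cache: one distorted pixel per grid cell,
// evaluated at the cell center. Looking up a point then costs one index
// computation instead of a full distortion-model projection.
Pixel approximate_projection(
  const std::vector<Pixel> & cache, std::size_t grid_cols,
  float cell_width_px, float cell_height_px, float u_ideal, float v_ideal)
{
  // u_ideal / v_ideal: the cheap pinhole (undistorted) projection.
  const std::size_t col = static_cast<std::size_t>(u_ideal / cell_width_px);
  const std::size_t row = static_cast<std::size_t>(v_ideal / cell_height_px);
  return cache[row * grid_cols + col];
}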

"},{"location":"perception/autoware_image_projection_based_fusion/#detail-description-of-each-fusions-algorithm-is-in-the-following-links","title":"Detail description of each fusion's algorithm is in the following links","text":"Fusion Name Description Detail roi_cluster_fusion Overwrite a classification label of clusters by that of ROIs from a 2D object detector. link roi_detected_object_fusion Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. link pointpainting_fusion Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. link roi_pointcloud_fusion Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects link"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/","title":"pointpainting_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#pointpainting_fusion","title":"pointpainting_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#purpose","title":"Purpose","text":"

The pointpainting_fusion is a package for utilizing the class information detected by a 2D object detector in 3D object detection.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The lidar points are projected onto the output of an image-only 2D object detection network, and the class scores are appended to each point. The painted point cloud can then be fed to the CenterPoint network.
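
Conceptually, painting one point looks like the following (the structs and the three-class score array are illustrative assumptions; the real node handles multiple cameras, unrectified projection, and a configurable class set):

#include <array>
#include <vector>

// A painted point: xyz plus one score per 2D class.
struct PaintedPoint { float x; float y; float z; std::array<float, 3> class_scores; };

struct Roi { float x_min; float y_min; float x_max; float y_max; std::array<float, 3> scores; };

// If the projected pixel (u, v) falls inside a 2D detection ROI,
// append that ROI's class scores to the point.
void paint(PaintedPoint & point, float u, float v, const std::vector<Roi> & rois)
{
  for (const auto & roi : rois) {
    if (u >= roi.x_min && u <= roi.x_max && v >= roi.y_min && v <= roi.y_max) {
      point.class_scores = roi.scores;
      break;
    }
  }
}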

"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 painted pointcloud ~/output/objects autoware_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file build_only bool false shutdown the node after TensorRT engine file is built trt_precision string fp16 TensorRT inference precision: fp32 or fp16 post_process_params.score_threshold double 0.4 detected objects with score less than threshold are ignored post_process_params.yaw_norm_thresholds list[double] [0.3, 0.3, 0.3, 0.3, 0.0] An array of distance threshold values of norm of yaw [rad]. post_process_params.iou_nms_search_distance_2d double 10.0 A maximum distance value to search the nearest objects. post_process_params.iou_nms_threshold double 0.1 A threshold value of NMS using IoU score. post_process_params.has_twist boolean false Indicates whether the model outputs twist value. densification_params.world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_params.num_past_frames int 1 the number of past frames to fuse with the current frame"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#referencesexternal-links","title":"References/External links","text":"

[1] Vora, Sourabh, et al. \"PointPainting: Sequential fusion for 3d object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[2] CVPR'20 Workshop on Scalability in Autonomous Driving, Waymo Open Dataset Challenge: https://youtu.be/9g9GsI33ol8?t=535 ; Ding, Zhuangzhuang, et al. \"1st Place Solution for Waymo Open Dataset Challenge--3D Detection and Domain Adaptation.\" arXiv preprint arXiv:2006.15505 (2020).

"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/","title":"roi_cluster_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#roi_cluster_fusion","title":"roi_cluster_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#purpose","title":"Purpose","text":"

The roi_cluster_fusion is a package for filtering clusters that are less likely to be objects and overwriting the labels of clusters with those of Regions Of Interest (ROIs) from a 2D object detector.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The clusters are projected onto the image planes, and if the ROIs of the clusters overlap with the ROIs from a detector, the cluster labels are overwritten with the detector ROI labels. Intersection over Union (IoU) is used to determine whether they overlap.
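
The overlap test is the standard axis-aligned IoU; a minimal sketch follows (the iou_x and iou_y modes in the parameter table below restrict the same computation to a single axis):

#include <algorithm>

struct Rect { double x_min; double y_min; double x_max; double y_max; };

// Standard 2D IoU between a projected cluster ROI and a detector ROI.
double iou(const Rect & a, const Rect & b)
{
  const double ix = std::max(0.0, std::min(a.x_max, b.x_max) - std::max(a.x_min, b.x_min));
  const double iy = std::max(0.0, std::min(a.y_max, b.y_max) - std::max(a.y_min, b.y_min));
  const double intersection = ix * iy;
  const double area_a = (a.x_max - a.x_min) * (a.y_max - a.y_min);
  const double area_b = (b.x_max - b.x_min) * (b.y_max - b.y_min);
  return intersection / (area_a + area_b - intersection);
}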

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature clustered pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature labeled cluster pointcloud ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#parameters","title":"Parameters","text":"

The following figure gives an overview of the inner pipeline of the RoI cluster fusion node. Please refer to it when setting parameters.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description fusion_distance double If the detected object's distance to frame_id is less than the threshold, the fusion will be processed trust_object_distance double if the detected object's distance is less than the trust_object_distance, trust_object_iou_mode will be used, otherwise non_trust_object_iou_mode will be used trust_object_iou_mode string select mode from 3 options {iou, iou_x, iou_y} to calculate IoU in range of [0, trust_distance]. iou: IoU along x-axis and y-axis iou_x: IoU along x-axis iou_y: IoU along y-axis non_trust_object_iou_mode string the IOU mode using in range of [trust_distance, fusion_distance] if trust_distance < fusion_distance use_cluster_semantic_type bool if false, the labels of clusters are overwritten by UNKNOWN before fusion only_allow_inside_cluster bool if true, the only clusters contained inside RoIs by a detector roi_scale_factor double the scale factor for offset of detector RoIs if only_allow_inside_cluster=true iou_threshold double the IoU threshold to overwrite a label of clusters with a label of roi unknown_iou_threshold double the IoU threshold to fuse cluster with unknown label of roi remove_unknown bool if true, remove all UNKNOWN labeled objects from output rois_number int the number of input rois debug_mode bool If true, subscribe and publish images for visualization."},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/","title":"roi_detected_object_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#roi_detected_object_fusion","title":"roi_detected_object_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#purpose","title":"Purpose","text":"

The roi_detected_object_fusion is a package that overwrites the labels of detected objects with those of Regions Of Interest (ROIs) from a 2D object detector.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In what follows, we describe the algorithm utilized by roi_detected_object_fusion (the meaning of each parameter can be found in the Parameters section):

  1. If the existence_probability of a detected object is greater than the threshold, it is accepted without any further processing and published in output.
  2. The remaining detected objects are projected onto image planes, and if the resulting ROIs overlap with the ones from the 2D detector, they are published as fused objects in output. The Intersection over Union (IoU) is used to determine if there are overlaps between the detections from input and the ROIs from input/rois.

The DetectedObject has three possible shape choices/implementations, where the polygon's vertices for each case are defined as follows:

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#input","title":"Input","text":"Name Type Description input autoware_perception_msgs::msg::DetectedObjects input detected objects input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes. input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image. input/image_raw[0-7] sensor_msgs::msg::Image images for visualization."},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#output","title":"Output","text":"Name Type Description output autoware_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization, ~/debug/fused_objects autoware_perception_msgs::msg::DetectedObjects fused detected objects ~/debug/ignored_objects autoware_perception_msgs::msg::DetectedObjects not fused detected objects"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number int the number of input rois debug_mode bool If set to true, the node subscribes to the image topic and publishes an image with debug drawings. passthrough_lower_bound_probability_thresholds vector[double] If the existence_probability of a detected object is greater than the threshold, it is published in output. trust_distances vector[double] If the distance of a detected object from the origin of frame_id is greater than the threshold, it is published in output. min_iou_threshold double If the iou between detected objects and rois is greater than min_iou_threshold, the objects are classified as fused. use_roi_probability float If set to true, the algorithm uses existence_probability of ROIs to match with the that of detected objects. roi_probability_threshold double If the existence_probability of ROIs is greater than the threshold, matched detected objects are published in output. can_assign_matrix vector[int] association matrix between rois and detected_objects to check that two rois on images can be match"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

POLYGON, which is one possible shape of a detected object, isn't supported yet.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/","title":"roi_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#roi_pointcloud_fusion","title":"roi_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#purpose","title":"Purpose","text":"

The roi_pointcloud_fusion node clusters the pointcloud based on Regions Of Interest (ROIs) detected by a 2D object detector, specifically for unknown-labeled ROIs.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 output pointcloud as default of interface output_clusters tier4_perception_msgs::msg::DetectedObjectsWithFeature output clusters debug/clusters sensor_msgs/msg/PointCloud2 colored cluster pointcloud for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid cluster_2d_tolerance double cluster tolerance measured in radial direction rois_number int the number of input rois"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/","title":"segmentation_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#segmentation_pointcloud_fusion","title":"segmentation_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#purpose","title":"Purpose","text":"

The segmentation_pointcloud_fusion node filters out pointcloud points that belong to less interesting regions, as defined by semantic or instance segmentation from a 2D image segmentation model.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] sensor_msgs::msg::Image A gray-scale image of semantic segmentation mask, the pixel value is semantic class id input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 output filtered pointcloud"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range filter_semantic_label_target.UNKNOWN boolean If true, UNKNOWN class of semantic will be filtered. 0 N/A filter_semantic_label_target.BUILDING boolean If true, BUILDING class of semantic will be filtered. 1 N/A filter_semantic_label_target.WALL boolean If true, WALL class of semantic will be filtered. 1 N/A filter_semantic_label_target.OBSTACLE boolean If true, OBSTACLE class of semantic will be filtered. 0 N/A filter_semantic_label_target.TRAFFIC_LIGHT boolean If true, TRAFFIC_LIGHT class of semantic will be filtered. 0 N/A filter_semantic_label_target.TRAFFIC_SIGN boolean If true, TRAFFIC_SIGN class of semantic will be filtered. 0 N/A filter_semantic_label_target.PERSON boolean If true, PERSON class of semantic will be filtered. 0 N/A filter_semantic_label_target.VEHICLE boolean If true, VEHICLE class of semantic will be filtered. 0 N/A filter_semantic_label_target.BIKE boolean If true, BIKE class of semantic will be filtered. 0 N/A filter_semantic_label_target.ROAD boolean If true, ROAD class of semantic will be filtered. 1 N/A filter_semantic_label_target.SIDEWALK boolean If true, SIDEWALK class of semantic will be filtered. 0 N/A filter_semantic_label_target.ROAD_PAINT boolean If true, ROAD_PAINT class of semantic will be filtered. 0 N/A filter_semantic_label_target.CURBSTONE boolean If true, CURBSTONE class of semantic will be filtered. 0 N/A filter_semantic_label_target.CROSSWALK boolean If true, CROSSWALK class of semantic will be filtered. 0 N/A filter_semantic_label_target.VEGETATION boolean If true, VEGETATION class of semantic will be filtered. 1 N/A filter_semantic_label_target.SKY boolean If true, SKY class of semantic will be filtered. 0 N/A filter_distance_threshold float A maximum distance of pointcloud to apply filter [m]. 60 \u22650.0 is_publish_debug_mask boolean If true, debug mask image will be published. 
0 N/A"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/","title":"autoware_lidar_apollo_instance_segmentation","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#autoware_lidar_apollo_instance_segmentation","title":"autoware_lidar_apollo_instance_segmentation","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#purpose","title":"Purpose","text":"

This node segments 3D pointcloud data from lidar sensors into obstacles, e.g., cars, trucks, bicycles, and pedestrians, based on a CNN-based model and an obstacle clustering method.

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

See the original design by Apollo.

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#input","title":"Input","text":"Name Type Description input/pointcloud sensor_msgs/PointCloud2 Pointcloud data from lidar sensors"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#output","title":"Output","text":"Name Type Description output/labeled_clusters tier4_perception_msgs/DetectedObjectsWithFeature Detected objects with labeled pointcloud cluster. debug/instance_pointcloud sensor_msgs/PointCloud2 Segmented pointcloud for visualization."},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold double 0.8 If the score of a detected object is lower than this value, the object is ignored. range int 60 Half of the length of feature map sides. [m] width int 640 The grid width of feature map. height int 640 The grid height of feature map. engine_file string \"vls-128.engine\" The name of TensorRT engine file for CNN model. prototxt_file string \"vls-128.prototxt\" The name of prototxt file for CNN model. caffemodel_file string \"vls-128.caffemodel\" The name of caffemodel file for CNN model. use_intensity_feature bool true The flag to use intensity feature of pointcloud. use_constant_feature bool false The flag to use direction and distance feature of pointcloud. target_frame string \"base_link\" Pointcloud data is transformed into this frame. z_offset int 2 z offset from target frame. [m] build_only bool false shutdown the node after TensorRT engine file is built"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is no training code for the CNN model.

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#note","title":"Note","text":"

This package makes use of three external code bases. The trained files are provided by Apollo and are automatically downloaded during the build.

Original URL

Supported lidars are Velodyne 16, 64, and 128, but Velodyne 32 and other lidars can also be used with good accuracy.

  1. apollo 3D Obstacle Perception description

    /******************************************************************************\n* Copyright 2017 The Apollo Authors. All Rights Reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*****************************************************************************/\n
  2. tensorRTWrapper : It is used under the lib directory.

    MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
  3. autoware_perception description

    /*\n* Copyright 2018-2019 Autoware Foundation. All rights reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n
"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#special-thanks","title":"Special thanks","text":""},{"location":"perception/autoware_lidar_centerpoint/","title":"autoware_lidar_centerpoint","text":""},{"location":"perception/autoware_lidar_centerpoint/#autoware_lidar_centerpoint","title":"autoware_lidar_centerpoint","text":""},{"location":"perception/autoware_lidar_centerpoint/#purpose","title":"Purpose","text":"

autoware_lidar_centerpoint is a package for detecting dynamic 3D objects.

"},{"location":"perception/autoware_lidar_centerpoint/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In this implementation, CenterPoint [1] uses a PointPillars-based [2] network and performs inference with TensorRT.

We trained the models using https://github.com/open-mmlab/mmdetection3d.

"},{"location":"perception/autoware_lidar_centerpoint/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_lidar_centerpoint/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/autoware_lidar_centerpoint/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects detected objects debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped cyclic time (msg) debug/processing_time_ms tier4_debug_msgs::msg::Float64Stamped processing time (ms)"},{"location":"perception/autoware_lidar_centerpoint/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_lidar_centerpoint/#ml-model-parameters","title":"ML Model Parameters","text":"

Note that these parameters are associated with the ONNX file and are predefined during the training phase. When changing any of these parameters, be sure to change the ONNX file accordingly. Conversely, whenever you update the ONNX file, do not forget to check these values.

Name Type Default Value Description model_params.class_names list[string] [\"CAR\", \"TRUCK\", \"BUS\", \"BICYCLE\", \"PEDESTRIAN\"] list of class names for model outputs model_params.point_feature_size int 4 number of features per point in the point cloud model_params.max_voxel_size int 40000 maximum number of voxels model_params.point_cloud_range list[double] [-76.8, -76.8, -4.0, 76.8, 76.8, 6.0] detection range [min_x, min_y, min_z, max_x, max_y, max_z] [m] model_params.voxel_size list[double] [0.32, 0.32, 10.0] size of each voxel [x, y, z] [m] model_params.downsample_factor int 1 downsample factor for coordinates model_params.encoder_in_feature_size int 9 number of input features to the encoder model_params.has_variance bool false true if the model outputs pose variance as well as pose for each bbox model_params.has_twist bool false true if the model outputs velocity as well as pose for each bbox"},{"location":"perception/autoware_lidar_centerpoint/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file build_only bool false shutdown the node after TensorRT engine file is built trt_precision string fp16 TensorRT inference precision: fp32 or fp16 post_process_params.score_threshold double 0.4 detected objects with score less than threshold are ignored post_process_params.yaw_norm_thresholds list[double] [0.3, 0.3, 0.3, 0.3, 0.0] An array of distance threshold values of norm of yaw [rad]. post_process_params.iou_nms_search_distance_2d double - If two objects are farther than the value, NMS isn't applied. post_process_params.iou_nms_threshold double - IoU threshold for the IoU-based Non Maximum Suppression post_process_params.has_twist boolean false Indicates whether the model outputs twist value. densification_params.world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_params.num_past_frames int 1 the number of past frames to fuse with the current frame"},{"location":"perception/autoware_lidar_centerpoint/#the-build_only-option","title":"The build_only option","text":"

The autoware_lidar_centerpoint node has a build_only option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all ROS parameters into the .param.yaml file in Autoware Universe, the build_only option has not been moved there for now, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:

ros2 launch autoware_lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_tiny model_path:=/home/autoware/autoware_data/lidar_centerpoint model_param_path:=$(ros2 pkg prefix autoware_lidar_centerpoint --share)/config/centerpoint_tiny.param.yaml build_only:=true\n
"},{"location":"perception/autoware_lidar_centerpoint/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_lidar_centerpoint/#trained-models","title":"Trained Models","text":"

You can download the trained models in ONNX format by clicking the links below.

Centerpoint was trained on nuScenes (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. Centerpoint tiny was trained on Argoverse 2 (~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.

"},{"location":"perception/autoware_lidar_centerpoint/#training-centerpoint-model-and-deploying-to-the-autoware","title":"Training CenterPoint Model and Deploying to the Autoware","text":""},{"location":"perception/autoware_lidar_centerpoint/#overview","title":"Overview","text":"

This guide provides instructions on training a CenterPoint model using the mmdetection3d repository and seamlessly deploying it within Autoware.

"},{"location":"perception/autoware_lidar_centerpoint/#installation","title":"Installation","text":""},{"location":"perception/autoware_lidar_centerpoint/#install-prerequisites","title":"Install prerequisites","text":"

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-centerpoint python=3.8 -y\nconda activate train-centerpoint\n

Step 3. Install PyTorch

Please ensure you have PyTorch installed and that it is compatible with CUDA 11.6, as this is a requirement for the current version of Autoware.

conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia\n
"},{"location":"perception/autoware_lidar_centerpoint/#install-mmdetection3d","title":"Install mmdetection3d","text":"

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim\nmim install mmengine\nmim install 'mmcv>=2.0.0rc4'\nmim install 'mmdet>=3.0.0rc5, <3.3.0'\n

Step 2. Install the forked mmdetection3d repository

We have introduced several valuable enhancements in our fork of the mmdetection3d repository. Notably, we've made the PointPillars z voxel feature input optional to maintain compatibility with the original paper. In addition, we've integrated a PyTorch-to-ONNX converter and a T4 format reader for added functionality.

git clone https://github.com/autowarefoundation/mmdetection3d.git\ncd mmdetection3d\npip install -v -e .\n
"},{"location":"perception/autoware_lidar_centerpoint/#use-training-repository-with-docker","title":"Use Training Repository with Docker","text":"

Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.

Clone the fork of the mmdetection3d repository:

git clone https://github.com/autowarefoundation/mmdetection3d.git\n

Build the Docker image by running the following command:

cd mmdetection3d\ndocker build -t mmdetection3d -f docker/Dockerfile .\n

Run the Docker container:

docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d\n
"},{"location":"perception/autoware_lidar_centerpoint/#preparing-nuscenes-dataset-for-training","title":"Preparing NuScenes dataset for training","text":"

Step 1. Download the NuScenes dataset from the official website and extract the dataset to a folder of your choice.

Note: The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.

Step 2. Create a symbolic link to the dataset folder

ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/\n

Step 3. Prepare the NuScenes data by running:

cd mmdetection3d\npython tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes\n
"},{"location":"perception/autoware_lidar_centerpoint/#training-centerpoint-with-nuscenes-dataset","title":"Training CenterPoint with NuScenes Dataset","text":""},{"location":"perception/autoware_lidar_centerpoint/#prepare-the-config-file","title":"Prepare the config file","text":"

The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is located at mmdetection3d/projects/AutowareCenterPoint/configs. This configuration file is a derived version of this centerpoint configuration file from mmdetection3d. In this custom configuration, the use_voxel_center_z parameter is set to False to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set to [32, 32].

The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file. This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.

"},{"location":"perception/autoware_lidar_centerpoint/#start-training","title":"Start training","text":"
python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom\n
"},{"location":"perception/autoware_lidar_centerpoint/#evaluation-of-the-trained-model","title":"Evaluation of the trained model","text":"

For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors: 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes with 3D annotations: 6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks. In the sample dataset, frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes, including training, evaluation, and fine-tuning of models. It is organized in the T4 format.

"},{"location":"perception/autoware_lidar_centerpoint/#download-the-sample-dataset","title":"Download the sample dataset","text":"
wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz\n#Extract the dataset to a folder of your choice\ntar -xvf lidar_detection_sample_dataset.tar.gz\n#Create a symbolic link to the dataset folder\nln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/\n
"},{"location":"perception/autoware_lidar_centerpoint/#prepare-dataset-and-evaluate-trained-model","title":"Prepare dataset and evaluate trained model","text":"

Create .pkl files for training, evaluation, and testing.

The dataset was formatted according to T4Dataset specifications, with 'sample_dataset' designated as one of its versions.

python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2\n

Run evaluation

python tools/test.py projects/AutowareCenterPoint/configs/centerpoint_custom_test.py /PATH/OF/THE/CHECKPOINT  --task lidar_det\n

Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at the base link location of the vehicle.

"},{"location":"perception/autoware_lidar_centerpoint/#deploying-centerpoint-model-to-autoware","title":"Deploying CenterPoint model to Autoware","text":""},{"location":"perception/autoware_lidar_centerpoint/#convert-centerpoint-pytorch-model-to-onnx-format","title":"Convert CenterPoint PyTorch model to ONNX Format","text":"

The autoware_lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network, such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository, we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format. You can find it under mmdetection3d/projects/AutowareCenterPoint.

python projects/AutowareCenterPoint/centerpoint_onnx_converter.py --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models\n
"},{"location":"perception/autoware_lidar_centerpoint/#create-the-config-file-for-the-custom-model","title":"Create the config file for the custom model","text":"

Create a new config file named centerpoint_custom.param.yaml under the config file directory of the autoware_lidar_centerpoint node. Set the parameters of the config file, such as point_cloud_range, point_feature_size, and voxel_size, according to the training config file.

/**:\n  ros__parameters:\n    class_names: [\"CAR\", \"TRUCK\", \"BUS\", \"BICYCLE\", \"PEDESTRIAN\"]\n    point_feature_size: 4\n    max_voxel_size: 40000\n    point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]\n    voxel_size: [0.2, 0.2, 8.0]\n    downsample_factor: 1\n    encoder_in_feature_size: 9\n    # post-process params\n    circle_nms_dist_threshold: 0.5\n    iou_nms_search_distance_2d: 10.0\n    iou_nms_threshold: 0.1\n    yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0]\n
"},{"location":"perception/autoware_lidar_centerpoint/#launch-the-lidar_centerpoint-node","title":"Launch the lidar_centerpoint node","text":"
cd /YOUR/AUTOWARE/PATH/Autoware\nsource install/setup.bash\nros2 launch autoware_lidar_centerpoint lidar_centerpoint.launch.xml  model_name:=centerpoint_custom  model_path:=/PATH/TO/ONNX/FILE/\n
"},{"location":"perception/autoware_lidar_centerpoint/#changelog","title":"Changelog","text":""},{"location":"perception/autoware_lidar_centerpoint/#v1-20220706","title":"v1 (2022/07/06)","text":"Name URLs Description centerpoint pts_voxel_encoder pts_backbone_neck_head There is a single change due to the limitation in the implementation of this package. num_filters=[32, 32] of PillarFeatureNet centerpoint_tiny pts_voxel_encoder pts_backbone_neck_head The same model as default of v0.

These changes are compared with this configuration.

"},{"location":"perception/autoware_lidar_centerpoint/#v0-20211203","title":"v0 (2021/12/03)","text":"Name URLs Description default pts_voxel_encoder pts_backbone_neck_head There are two changes from the original CenterPoint architecture. num_filters=[32] of PillarFeatureNet and ds_layer_strides=[2, 2, 2] of RPN"},{"location":"perception/autoware_lidar_centerpoint/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_lidar_centerpoint/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_lidar_centerpoint/#referencesexternal-links","title":"References/External links","text":"

[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).

[2] Lang, Alex H., et al. \"PointPillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[3] https://github.com/tianweiy/CenterPoint

[4] https://github.com/open-mmlab/mmdetection3d

[5] https://github.com/open-mmlab/OpenPCDet

[6] https://github.com/yukkysaito/autoware_perception

[7] https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars

[8] https://www.nuscenes.org/nuscenes

[9] https://www.argoverse.org/av2.html

"},{"location":"perception/autoware_lidar_centerpoint/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_lidar_centerpoint/#acknowledgment-deepenais-3d-annotation-tools-contribution","title":"Acknowledgment: deepen.ai's 3D Annotation Tools Contribution","text":"

Special thanks to Deepen AI for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset.

"},{"location":"perception/autoware_lidar_centerpoint/#legal-notice","title":"Legal Notice","text":"

The nuScenes dataset is released publicly for non-commercial use under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. Additional Terms of Use can be found at https://www.nuscenes.org/terms-of-use. To inquire about a commercial license please contact nuscenes@motional.com.

"},{"location":"perception/autoware_lidar_transfusion/","title":"autoware_lidar_transfusion","text":""},{"location":"perception/autoware_lidar_transfusion/#autoware_lidar_transfusion","title":"autoware_lidar_transfusion","text":""},{"location":"perception/autoware_lidar_transfusion/#purpose","title":"Purpose","text":"

The autoware_lidar_transfusion package is used for 3D object detection based on lidar data (x, y, z, intensity).

"},{"location":"perception/autoware_lidar_transfusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The implementation is based on the TransFusion [1] work. It uses the TensorRT library for data processing and network inference.

We trained the models using https://github.com/open-mmlab/mmdetection3d.

"},{"location":"perception/autoware_lidar_transfusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_lidar_transfusion/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 Input pointcloud."},{"location":"perception/autoware_lidar_transfusion/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects Detected objects. debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped Cyclic time (ms). debug/pipeline_latency_ms tier4_debug_msgs::msg::Float64Stamped Pipeline latency time (ms). debug/processing_time/preprocess_ms tier4_debug_msgs::msg::Float64Stamped Preprocess (ms). debug/processing_time/inference_ms tier4_debug_msgs::msg::Float64Stamped Inference time (ms). debug/processing_time/postprocess_ms tier4_debug_msgs::msg::Float64Stamped Postprocess time (ms). debug/processing_time/total_ms tier4_debug_msgs::msg::Float64Stamped Total processing time (ms)."},{"location":"perception/autoware_lidar_transfusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_lidar_transfusion/#transfusion-node","title":"TransFusion node","text":"Name Type Description Default Range trt_precision string A precision of TensorRT engine. fp16 ['fp16', 'fp32'] cloud_capacity integer Capacity of the point cloud buffer (should be set to at least the maximum theoretical number of points). 2000000 \u22651 onnx_path string A path to ONNX model file. $(var model_path)/transfusion.onnx N/A engine_path string A path to TensorRT engine file. $(var model_path)/transfusion.engine N/A densification_num_past_frames integer A number of past frames to be considered as same input frame. 1 \u22650 densification_world_frame_id string A name of frame id where world coordinates system is defined with respect to. map N/A circle_nms_dist_threshold float A distance threshold between detections in NMS. 0.5 \u22650.0 iou_nms_search_distance_2d float A maximum distance value to search the nearest objects. 10.0 \u22650.0 iou_nms_threshold float A threshold value of NMS using IoU score. 0.1 \u22650.0\u22641.0 yaw_norm_thresholds array A thresholds array of direction vectors norm, all of objects with vector norm less than this threshold are ignored. [0.3, 0.3, 0.3, 0.3, 0.0] N/A score_threshold float A threshold value of confidence score, all of objects with score less than this threshold are ignored. 0.2 \u22650.0"},{"location":"perception/autoware_lidar_transfusion/#transfusion-model","title":"TransFusion model","text":"Name Type Description Default Range class_names array An array of class names will be predicted. ['CAR', 'TRUCK', 'BUS', 'BICYCLE', 'PEDESTRIAN'] N/A voxels_num array A maximum number of voxels [min, opt, max]. [5000, 30000, 60000] N/A point_cloud_range array An array of distance ranges of each class. [-76.8, -76.8, -3.0, 76.8, 76.8, 5.0] N/A voxel_size array Voxels size [x, y, z]. [0.3, 0.3, 8.0] N/A num_proposals integer Number of proposals at TransHead. 500 \u22651"},{"location":"perception/autoware_lidar_transfusion/#detection-class-remapper","title":"Detection class remapper","text":"Name Type Description Default Range allow_remapping_by_area_matrix array Whether to allow remapping of classes. The order of 8x8 matrix classes comes from ObjectClassification msg. 
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] N/A min_area_matrix array Minimum area for specific class to consider class remapping. [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.1, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] N/A max_area_matrix array Maximum area for specific class to consider class remapping. [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] N/A"},{"location":"perception/autoware_lidar_transfusion/#the-build_only-option","title":"The build_only option","text":"

The autoware_lidar_transfusion node has a build_only option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all ROS parameters into the .param.yaml file in Autoware Universe, the build_only option has not been moved there for now, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:

ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml build_only:=true\n
"},{"location":"perception/autoware_lidar_transfusion/#the-log_level-option","title":"The log_level option","text":"

The default logging severity level for autoware_lidar_transfusion is info. For debugging purposes, the developer may decrease the severity level using the log_level parameter:

ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml log_level:=debug\n
"},{"location":"perception/autoware_lidar_transfusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This library operates on raw cloud data (bytes). It is assumed that the input pointcloud message has the following format:

[\n  sensor_msgs.msg.PointField(name='x', offset=0, datatype=7, count=1),\n  sensor_msgs.msg.PointField(name='y', offset=4, datatype=7, count=1),\n  sensor_msgs.msg.PointField(name='z', offset=8, datatype=7, count=1),\n  sensor_msgs.msg.PointField(name='intensity', offset=12, datatype=2, count=1)\n]\n

The input may contain other fields as well; the format shown above is the required minimum. For debugging purposes, you can validate your pointcloud topic with a simple command:

ros2 topic echo <input_topic> --field fields\n
"},{"location":"perception/autoware_lidar_transfusion/#trained-models","title":"Trained Models","text":"

You can download the trained models in ONNX format by clicking the links below.

The model was trained on TIER IV's internal database (~11k lidar frames) for 50 epochs.

"},{"location":"perception/autoware_lidar_transfusion/#changelog","title":"Changelog","text":""},{"location":"perception/autoware_lidar_transfusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_lidar_transfusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_lidar_transfusion/#referencesexternal-links","title":"References/External links","text":"

[1] Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu and Chiew-Lan Tai. \"TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers.\" arXiv preprint arXiv:2203.11496 (2022).

[2] https://github.com/wep21/CUDA-TransFusion

[3] https://github.com/open-mmlab/mmdetection3d

[4] https://github.com/open-mmlab/OpenPCDet

[5] https://www.nuscenes.org/nuscenes

"},{"location":"perception/autoware_lidar_transfusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_map_based_prediction/","title":"map_based_prediction","text":""},{"location":"perception/autoware_map_based_prediction/#map_based_prediction","title":"map_based_prediction","text":""},{"location":"perception/autoware_map_based_prediction/#role","title":"Role","text":"

map_based_prediction is a module to predict the future paths (and their probabilities) of other vehicles and pedestrians according to the shape of the map and the surrounding environment.

"},{"location":"perception/autoware_map_based_prediction/#assumptions","title":"Assumptions","text":""},{"location":"perception/autoware_map_based_prediction/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_map_based_prediction/#flow-chart","title":"Flow chart","text":""},{"location":"perception/autoware_map_based_prediction/#path-prediction-for-road-users","title":"Path prediction for road users","text":""},{"location":"perception/autoware_map_based_prediction/#remove-old-object-history","title":"Remove old object history","text":"

Store time-series object data to determine the vehicle's route and to detect lane changes over a certain duration. The object data contains the object's position, speed, and time information.

"},{"location":"perception/autoware_map_based_prediction/#get-current-lanelet-and-update-object-history","title":"Get current lanelet and update Object history","text":"

Search one or more lanelets satisfying the following conditions for each target object and store them in the ObjectData.

"},{"location":"perception/autoware_map_based_prediction/#get-predicted-reference-path","title":"Get predicted reference path","text":"

The conditions for left lane change detection are:

The lane change logic is illustrated in the figure below. An example of how to tune the parameters is described later.

"},{"location":"perception/autoware_map_based_prediction/#tuning-lane-change-detection-logic","title":"Tuning lane change detection logic","text":"

Currently we provide three parameters to tune lane change detection:

You can change these parameters via rosparam; see the table below.

param name default value dist_threshold_for_lane_change_detection 1.0 [m] time_threshold_for_lane_change_detection 5.0 [s] cutoff_freq_of_velocity_for_lane_change_detection 0.1 [Hz]"},{"location":"perception/autoware_map_based_prediction/#tuning-threshold-parameters","title":"Tuning threshold parameters","text":"

Increasing these two parameters will slow down and stabilize the lane change estimation.

Normally, we recommend tuning only time_threshold_for_lane_change_detection because it is the more important factor for the lane change decision.
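
As a rough sketch of how these two thresholds interact (the condition shown is illustrative; the actual logic in the package may differ):

def detect_left_lane_change(dist_to_left_boundary, lateral_velocity,\n                            dist_threshold=1.0, time_threshold=5.0):\n    # Hypothetical condition: the object is close to the left lane boundary\n    # and is predicted to reach it soon (defaults taken from the table above).\n    if lateral_velocity <= 0.0:\n        return False  # moving away from (or parallel to) the left boundary\n    time_to_boundary = dist_to_left_boundary / lateral_velocity\n    return dist_to_left_boundary < dist_threshold and time_to_boundary < time_threshold\n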

"},{"location":"perception/autoware_map_based_prediction/#tuning-lateral-velocity-calculation","title":"Tuning lateral velocity calculation","text":"

The lateral velocity calculation is also a very important factor for the lane change decision because it is used in the time-domain decision.

The predicted time to reach the lane boundary is calculated by

\\[ t_{predicted} = \\dfrac{d_{lat}}{v_{lat}} \\]

where \\(d_{lat}\\) and \\(v_{lat}\\) represent the lateral distance to the lane boundary and the lateral velocity, respectively.

Lowering the cutoff frequency of the low-pass filter for lateral velocity will make the lane change decision more stable but slower. Our setting is very conservative, so you may increase this parameter if you want to make the lane change decision faster.

For additional information, here is how we calculate the lateral velocity.

lateral velocity calculation method equation description [applied] time derivative of lateral distance \\(\\dfrac{\\Delta d_{lat}}{\\Delta t}\\) Currently, we use this method to deal with winding roads. Since this time differentiation easily becomes noisy, we also use a low-pass filter to get smoothed velocity. [not applied] Object Velocity Projection to Lateral Direction \\(v_{obj} \\sin(\\theta)\\) Normally, object velocities are less noisy than the time derivative of lateral distance. But the yaw difference \\(\\theta\\) between the lane and object directions sometimes becomes discontinuous, so we did not adopt this method.

Currently, we use the former method (the time derivative of lateral distance) with a low-pass filter to calculate the lateral velocity.
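
A minimal sketch of this computation, assuming a first-order low-pass filter (the function names are illustrative; the filter implementation in the package may differ):

import math\n\ndef lowpass_alpha(cutoff_freq_hz, dt):\n    # Smoothing factor of a first-order low-pass filter.\n    rc = 1.0 / (2.0 * math.pi * cutoff_freq_hz)\n    return dt / (dt + rc)\n\ndef update_lateral_velocity(v_lat_prev, d_lat_now, d_lat_prev, dt, cutoff_freq_hz=0.1):\n    # Time derivative of the lateral distance, smoothed by the low-pass filter.\n    v_raw = (d_lat_now - d_lat_prev) / dt\n    a = lowpass_alpha(cutoff_freq_hz, dt)\n    return a * v_raw + (1.0 - a) * v_lat_prev\n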

"},{"location":"perception/autoware_map_based_prediction/#path-generation","title":"Path generation","text":"

Paths are generated on the Frenet frame by the following steps:

  1. Get the Frenet frame of the reference path.
  2. Generate the Frenet coordinates of the object's current position and its final position.
  3. Optimize the path in each of the longitudinal and lateral coordinates of the Frenet frame. (A quintic polynomial is used to fit the start and end conditions.)
  4. Convert the path to the global coordinate system.

See paper [2] for more details.
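
As a rough sketch of step 3, the quintic polynomial for one Frenet coordinate can be obtained by solving the start and end conditions (an illustrative fragment, not the package's implementation):

import numpy as np\n\ndef quintic_coefficients(x0, v0, a0, xT, vT, aT, T):\n    # Solve for c0..c5 of x(t) = c0 + c1*t + ... + c5*t**5 so that position,\n    # velocity and acceleration match the start (t=0) and end (t=T) conditions.\n    A = np.array([\n        [1, 0, 0, 0, 0, 0],\n        [0, 1, 0, 0, 0, 0],\n        [0, 0, 2, 0, 0, 0],\n        [1, T, T**2, T**3, T**4, T**5],\n        [0, 1, 2 * T, 3 * T**2, 4 * T**3, 5 * T**4],\n        [0, 0, 2, 6 * T, 12 * T**2, 20 * T**3],\n    ])\n    b = np.array([x0, v0, a0, xT, vT, aT])\n    return np.linalg.solve(A, b)\n\n# Example: lateral offset of 1.0 m converging to the lane center within 4 s.\nlat_coeffs = quintic_coefficients(1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0)\n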

"},{"location":"perception/autoware_map_based_prediction/#tuning-lateral-path-shape","title":"Tuning lateral path shape","text":"

The lateral_control_time_horizon parameter supports tuning of the lateral path shape. This parameter is used to calculate the time to reach the reference path. The smaller the value, the more quickly the generated path converges to the reference path (mostly the center of the lane).

"},{"location":"perception/autoware_map_based_prediction/#pruning-predicted-paths-with-lateral-acceleration-constraint-for-vehicle-obstacles","title":"Pruning predicted paths with lateral acceleration constraint (for vehicle obstacles)","text":"

It is possible to apply a maximum lateral acceleration constraint to generated vehicle paths. This check verifies whether the vehicle can follow the predicted path without surpassing the lateral acceleration threshold max_lateral_accel when taking a curve. If it cannot, the check verifies whether the vehicle can slow down in time to take the curve with a deceleration of min_acceleration_before_curve and comply with the constraint. If that is also not possible, the path is eliminated.
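
A minimal sketch of this check, assuming the lateral acceleration at a waypoint is approximated by \(v^2 \kappa\) (the helper and argument names are hypothetical):

import math\n\ndef passes_lateral_accel_check(speeds, curvatures, arc_lengths,\n                               max_lateral_accel=2.0, min_accel=-2.0):\n    # For each waypoint, v^2 * kappa must stay below the limit, or the vehicle\n    # must be able to decelerate to a feasible speed before reaching the curve.\n    for i, (v, kappa) in enumerate(zip(speeds, curvatures)):\n        if v * v * abs(kappa) <= max_lateral_accel:\n            continue\n        v_feasible = math.sqrt(max_lateral_accel / max(abs(kappa), 1e-6))\n        # distance needed to slow from the current speed to the feasible speed\n        braking_dist = (v_feasible ** 2 - speeds[0] ** 2) / (2.0 * min_accel)\n        if braking_dist > arc_lengths[i]:\n            return False  # cannot slow down in time: prune this path\n    return True\n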

Currently we provide three parameters to tune the lateral acceleration constraint:

You can change these parameters via rosparam; see the table below.

param name default value check_lateral_acceleration_constraints false [bool] max_lateral_accel 2.0 [m/s^2] min_acceleration_before_curve -2.0 [m/s^2]"},{"location":"perception/autoware_map_based_prediction/#using-vehicle-acceleration-for-path-prediction-for-vehicle-obstacles","title":"Using Vehicle Acceleration for Path Prediction (for Vehicle Obstacles)","text":"

By default, the map_based_prediction module uses the current obstacle's velocity to compute its predicted path length. However, it is possible to use the obstacle's current acceleration to calculate its predicted path's length.

"},{"location":"perception/autoware_map_based_prediction/#decaying-acceleration-model","title":"Decaying Acceleration Model","text":"

Since this module tries to predict the vehicle's path several seconds into the future, it is not practical to consider the current vehicle's acceleration as constant (it is not assumed the vehicle will be accelerating for prediction_time_horizon seconds after detection). Instead, a decaying acceleration model is used. With the decaying acceleration model, a vehicle's acceleration is modeled as:

\[ a(t) = a_{t0} \cdot e^{-\lambda \cdot t} \]

where \(a_{t0}\) is the vehicle acceleration at the time of detection \(t_0\), \(\lambda\) is the decay constant \(\lambda = \ln(2) / h_l\), and \(h_l\) is the exponential's half-life.

Furthermore, the integration of \(a(t)\) over time gives us equations for the velocity \(v(t)\) and the distance \(x(t)\) as:

\[ v(t) = v_{t0} + \dfrac{a_{t0}}{\lambda} \left( 1 - e^{-\lambda \cdot t} \right) \]

and

\[ x(t) = x_{t0} + \left( v_{t0} + \dfrac{a_{t0}}{\lambda} \right) t + \dfrac{a_{t0}}{\lambda^{2}} \left( e^{-\lambda \cdot t} - 1 \right) \]

With this model, the influence of the vehicle's detected instantaneous acceleration on the predicted path's length is diminished but still considered. This feature also considers that the obstacle might not accelerate past its road's speed limit (multiplied by a tunable factor).
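
As an illustrative sketch (not the package's code), the velocity and traveled distance under this model can be evaluated as:

import math\n\ndef decaying_accel_state(v0, a0, t, half_life):\n    # Velocity and traveled distance under a(t) = a0 * exp(-lam * t),\n    # with lam = ln(2) / half_life, following the equations above.\n    lam = math.log(2.0) / half_life\n    decay = math.exp(-lam * t)\n    v = v0 + (a0 / lam) * (1.0 - decay)\n    x = (v0 + a0 / lam) * t + (a0 / lam ** 2) * (decay - 1.0)\n    return v, x\n\n# Example: 10 m/s and 2 m/s^2 at detection, 2.5 s half-life, 5 s ahead.\nv, x = decaying_accel_state(10.0, 2.0, 5.0, 2.5)\n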

Currently, we provide three parameters to tune the use of obstacle acceleration for path prediction:

You can change these parameters via rosparam; see the table below.

Param Name Default Value use_vehicle_acceleration false [bool] acceleration_exponential_half_life 2.5 [s] speed_limit_multiplier 1.5 []"},{"location":"perception/autoware_map_based_prediction/#path-prediction-for-crosswalk-users","title":"Path prediction for crosswalk users","text":"

This module treats pedestrians and bicycles as objects that use the crosswalk, and outputs predicted paths based on the map and the estimated object velocity, assuming the object intends to cross the crosswalk, if the object satisfies at least one of the following conditions:

If there is a reachable crosswalk entry point within the prediction_time_horizon and the object satisfies the above condition, this module outputs an additional predicted path crossing to the opposite side via the crosswalk entry point.

This module takes the corresponding traffic light information into account. When a RED signal is indicated, we assume the target object will not walk across. In addition, if the target object is stopped (not moving) despite a GREEN signal, we assume the target object will not walk across either. This prediction comes from the assumption that the object should be moving if the traffic light is green and the object intends to cross.

If the target object is inside the road or crosswalk, this module outputs one or two additional predicted paths to reach the exit point of the crosswalk. The number of predicted paths depends on whether the object is moving or not. If the object is moving, this module outputs one predicted path toward the exit point in the direction of the object's movement. On the other hand, if the object has stopped, it is impossible to infer which exit point the object wants to reach, so this module outputs two predicted paths, one toward each exit point.
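
The exit-point selection can be sketched as follows (an illustrative fragment; moving_speed and the helper name are hypothetical):

import numpy as np\n\ndef crosswalk_exit_candidates(position, velocity, exit_a, exit_b, moving_speed=0.3):\n    # Choose the crosswalk exit point(s) to predict paths toward.\n    pos = np.asarray(position, dtype=float)\n    vel = np.asarray(velocity, dtype=float)\n    speed = np.linalg.norm(vel)\n    if speed < moving_speed:\n        return [exit_a, exit_b]  # stopped: intention unknown, keep both exits\n    def alignment(exit_pt):\n        d = np.asarray(exit_pt, dtype=float) - pos\n        n = np.linalg.norm(d)\n        return float(np.dot(d, vel)) / (n * speed) if n > 0.0 else -1.0\n    return [max([exit_a, exit_b], key=alignment)]  # exit in direction of travel\n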

"},{"location":"perception/autoware_map_based_prediction/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_map_based_prediction/#input","title":"Input","text":"Name Type Description ~/perception/object_recognition/tracking/objects autoware_perception_msgs::msg::TrackedObjects tracking objects without predicted path. ~/vector_map autoware_map_msgs::msg::LaneletMapBin binary data of Lanelet2 Map. ~/perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray rearranged information on the corresponding traffic lights"},{"location":"perception/autoware_map_based_prediction/#output","title":"Output","text":"Name Type Description ~/input/objects autoware_perception_msgs::msg::TrackedObjects tracking objects. Default is set to /perception/object_recognition/tracking/objects ~/output/objects autoware_perception_msgs::msg::PredictedObjects tracking objects with predicted path. ~/objects_path_markers visualization_msgs::msg::MarkerArray marker for visualization. ~/debug/processing_time_ms std_msgs::msg::Float64 processing time of this module. ~/debug/cyclic_time_ms std_msgs::msg::Float64 cyclic time of this module."},{"location":"perception/autoware_map_based_prediction/#parameters","title":"Parameters","text":"Parameter Unit Type Description enable_delay_compensation [-] bool flag to enable the time delay compensation for the position of the object prediction_time_horizon [s] double predict time duration for predicted path lateral_control_time_horizon [s] double time duration for predicted path will reach the reference path (mostly center of the lane) prediction_sampling_delta_time [s] double sampling time for points in predicted path min_velocity_for_map_based_prediction [m/s] double apply map-based prediction to the objects with higher velocity than this value min_crosswalk_user_velocity [m/s] double minimum velocity used when crosswalk user's velocity is calculated max_crosswalk_user_delta_yaw_threshold_for_lanelet [rad] double maximum yaw difference between crosswalk user and lanelet to use in path prediction for crosswalk users dist_threshold_for_searching_lanelet [m] double The threshold of the angle used when searching for the lane to which the object belongs delta_yaw_threshold_for_searching_lanelet [rad] double The threshold of the angle used when searching for the lane to which the object belongs sigma_lateral_offset [m] double Standard deviation for lateral position of objects sigma_yaw_angle_deg [deg] double Standard deviation yaw angle of objects object_buffer_time_length [s] double Time span of object history to store the information history_time_length [s] double Time span of object information used for prediction prediction_time_horizon_rate_for_validate_shoulder_lane_length [-] double prediction path will disabled when the estimated path length exceeds lanelet length. This parameter control the estimated path length"},{"location":"perception/autoware_map_based_prediction/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_map_based_prediction/#reference","title":"Reference","text":"
  1. M. Werling, J. Ziegler, S. Kammel, and S. Thrun, \u201cOptimal trajectory generation for dynamic street scenario in a frenet frame,\u201d IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA, May 2010.
  2. A. Houenou, P. Bonnifait, V. Cherfaoui, and Wen Yao, \u201cVehicle trajectory prediction based on motion model and maneuver recognition,\u201d in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, nov 2013, pp. 4363-4369.
"},{"location":"perception/autoware_multi_object_tracker/","title":"multi_object_tracker","text":""},{"location":"perception/autoware_multi_object_tracker/#multi_object_tracker","title":"multi_object_tracker","text":""},{"location":"perception/autoware_multi_object_tracker/#purpose","title":"Purpose","text":"

The detection results are processed over a time series. The main purpose is to assign IDs and estimate velocities.

"},{"location":"perception/autoware_multi_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This multi-object tracker consists of data association and an EKF.

"},{"location":"perception/autoware_multi_object_tracker/#data-association","title":"Data association","text":"

The data association performs maximum-score matching, formulated as a min-cost max-flow problem. In this package, muSSP [1] is used as the solver. In addition, when associating observations with trackers, the data association applies gates, such as the object's BEV area, the Mahalanobis distance, and a maximum distance, depending on the class label.
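
A sketch of what such gating might look like before solving the assignment problem (the function and threshold names are illustrative, not the package's API):

import numpy as np\n\ndef passes_gates(innovation, innovation_cov, dist, bev_area,\n                 max_dist, min_area, max_area, gate_chi2=9.21):\n    # Gate a (tracker, observation) pair by Euclidean distance, BEV area and\n    # Mahalanobis distance; the real package uses class-dependent tables.\n    if dist > max_dist:\n        return False\n    if not (min_area <= bev_area <= max_area):\n        return False\n    d2 = float(innovation @ np.linalg.inv(innovation_cov) @ innovation)\n    return d2 <= gate_chi2\n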

"},{"location":"perception/autoware_multi_object_tracker/#ekf-tracker","title":"EKF Tracker","text":"

Models for pedestrians, bicycles (motorcycles), cars, and unknown objects are available. The pedestrian and bicycle trackers run at the same time as their respective EKF models in order to enable transitions between pedestrian and bicycle tracking. Big vehicles such as trucks and buses are difficult to distinguish from passenger cars and are unstable to track; therefore, separate models are prepared for passenger cars and big vehicles, and these models are run at the same time as their respective EKF models to ensure stability.
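
For illustration, the prediction step of such an EKF with the CTRV model (see the models page) reduces to a sketch like the following; covariance propagation via the Jacobian is omitted:

import numpy as np\n\ndef ctrv_predict(state, dt):\n    # One prediction step of the CTRV model; state = [x, y, yaw, v, yaw_rate].\n    x, y, yaw, v, yaw_rate = state\n    return np.array([\n        x + v * np.cos(yaw) * dt,\n        y + v * np.sin(yaw) * dt,\n        yaw + yaw_rate * dt,\n        v,\n        yaw_rate,\n    ])\n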

"},{"location":"perception/autoware_multi_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_multi_object_tracker/#input","title":"Input","text":"

Multiple inputs are pre-defined in the input channel parameters (described below), and the inputs can be configured:

Name Type Description selected_input_channels std::vector<std::string> array of channel names "},{"location":"perception/autoware_multi_object_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::TrackedObjects tracked objects"},{"location":"perception/autoware_multi_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_multi_object_tracker/#input-channel-parameters","title":"Input Channel parameters","text":"Name Type Description Default Range topic string The ROS topic name for the input channel. /perception/object_recognition/detection/objects N/A can_spawn_new_tracker boolean Indicates if the input channel can spawn new trackers. True N/A optional.name string The name of the input channel. detected_objects N/A optional.short_name string The short name of the input channel. all N/A"},{"location":"perception/autoware_multi_object_tracker/#core-parameters","title":"Core Parameters","text":" Name Type Description Default Range car_tracker string Tracker model for car class. multi_vehicle_tracker N/A truck_tracker string Tracker model for truck class. multi_vehicle_tracker N/A bus_tracker string Tracker model for bus class. multi_vehicle_tracker N/A trailer_tracker string Tracker model for trailer class. multi_vehicle_tracker N/A pedestrian_tracker string Tracker model for pedestrian class. pedestrian_and_bicycle_tracker N/A bicycle_tracker string Tracker model for bicycle class. pedestrian_and_bicycle_tracker N/A motorcycle_tracker string Tracker model for motorcycle class. pedestrian_and_bicycle_tracker N/A publish_rate float Timer frequency to output with delay compensation. 10.0 N/A world_frame_id string Object kinematics definition frame. map N/A enable_delay_compensation boolean If True, tracker use timers to schedule publishers and use prediction step to extrapolate object state at desired timestamp. False N/A consider_odometry_uncertainty boolean If True, consider odometry uncertainty in tracking. False N/A tracker_lifetime float Lifetime of the tracker in seconds. 1.0 N/A min_known_object_removal_iou float Minimum IOU between associated objects with known label to remove younger tracker 0.1 N/A min_unknown_object_removal_iou float Minimum IOU between associated objects with unknown label to remove unknown tracker 0.001 N/A distance_threshold float Distance threshold in meters. 5.0 N/A confident_count_threshold.UNKNOWN float Number of measurements to consider tracker as confident for unknown object classes. 3 N/A confident_count_threshold.CAR float Number of measurements to consider tracker as confident for car object classes. 3 N/A confident_count_threshold.TRUCK float Number of measurements to consider tracker as confident for truck object classes. 3 N/A confident_count_threshold.BUS float Number of measurements to consider tracker as confident for bus object classes. 3 N/A confident_count_threshold.TRAILER float Number of measurements to consider tracker as confident for trailer object classes. 3 N/A confident_count_threshold.MOTORBIKE float Number of measurements to consider tracker as confident for motorbike object classes. 3 N/A confident_count_threshold.BICYCLE float Number of measurements to consider tracker as confident for bicycle object classes. 3 N/A confident_count_threshold.PEDESTRIAN float Number of measurements to consider tracker as confident for pedestrian object classes. 3 N/A publish_processing_time boolean Enable to publish debug message of process time information. 
False N/A publish_tentative_objects boolean Enable to publish tentative tracked objects, which have lower confidence. False N/A publish_debug_markers boolean Enable to publish debug markers, which indicates association of multi-inputs, existence probability of each detection. False N/A diagnostics_warn_delay float Delay threshold for warning diagnostics in seconds. 0.5 N/A diagnostics_error_delay float Delay threshold for error diagnostics in seconds. 1.0 N/A Name Type Description Default Range can_assign_matrix array Assignment table for data association. [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1] N/A max_dist_matrix array Maximum distance table for data association. [4.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 3.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0, 3.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0] N/A max_area_matrix array Maximum area table for data association. [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 12.1, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 36.0, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 60.0, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 60.0, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 2.5, 10000.0, 10000.0, 10000.0, 10000.0, 2.5, 2.5, 1.0, 2.5, 10000.0, 10000.0, 10000.0, 10000.0, 2.5, 2.5, 1.0, 2.0, 10000.0, 10000.0, 10000.0, 10000.0, 1.5, 1.5, 1.0] N/A min_area_matrix array Minimum area table for data association. [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.6, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 6.0, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 10.0, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 10.0, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1, 0.001, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1, 0.001, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1] N/A max_rad_matrix array Maximum angle table for data association. [3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15] N/A min_iou_matrix array A matrix that represents the minimum Intersection over Union (IoU) limit allowed for assignment. [0.0001, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.0001] N/A"},{"location":"perception/autoware_multi_object_tracker/#simulation-parameters","title":"Simulation parameters","text":"Name Type Description Default Range car_tracker string Tracker model for car class. pass_through_tracker N/A truck_tracker string Tracker model for truck class. pass_through_tracker N/A bus_tracker string Tracker model for bus class. pass_through_tracker N/A pedestrian_tracker string Tracker model for pedestrian class. pass_through_tracker N/A bicycle_tracker string Tracker model for bicycle class. 
pass_through_tracker N/A motorcycle_tracker string Tracker model for motorcycle class. pass_through_tracker N/A"},{"location":"perception/autoware_multi_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

See the model explanations.

"},{"location":"perception/autoware_multi_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_multi_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_multi_object_tracker/#evaluation-of-mussp","title":"Evaluation of muSSP","text":"

According to our evaluation, muSSP is faster than normal SSP when the matrix size is more than 100.

Execution time for varying matrix size at 95% sparsity. In real data, the sparsity was often around 95%.

Execution time for varying the sparsity with matrix size 100.

"},{"location":"perception/autoware_multi_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

This package makes use of external code.

Name License Original Repository muSSP Apache-2.0 https://github.com/yu-lab-vt/muSSP

[1] C. Wang, Y. Wang, Y. Wang, C.-t. Wu, and G. Yu, \u201cmuSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking,\u201d NeurIPS, 2019

"},{"location":"perception/autoware_multi_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_multi_object_tracker/models/","title":"Models used in this module","text":""},{"location":"perception/autoware_multi_object_tracker/models/#models-used-in-this-module","title":"Models used in this module","text":""},{"location":"perception/autoware_multi_object_tracker/models/#tracking-model","title":"Tracking model","text":""},{"location":"perception/autoware_multi_object_tracker/models/#ctrv-model-1","title":"CTRV model [1]","text":"

The CTRV model assumes a constant turn rate and a constant velocity magnitude.
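
As an illustration, here is a minimal prediction-step sketch of this model (the struct and function names are hypothetical, not the package's actual implementation); the corresponding state equations and Jacobian follow.

#include <cmath>\n\n// Hypothetical CTRV state: position, yaw, speed, and yaw rate.\nstruct CtrvState\n{\ndouble x, y, yaw, v, yaw_rate;\n};\n\n// One CTRV prediction step over dt seconds, mirroring the state equations below.\nCtrvState predictCtrv(const CtrvState & s, const double dt)\n{\nCtrvState next = s;\nnext.x = s.x + s.v * std::cos(s.yaw) * dt;\nnext.y = s.y + s.v * std::sin(s.yaw) * dt;\nnext.yaw = s.yaw + s.yaw_rate * dt;\n// v and yaw_rate are assumed constant by the model.\nreturn next;\n}\n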

\\[ \\begin{aligned} x_{k+1} & = x_{k} + v_{k} \\cos(\\psi_k) \\cdot {d t} \\\\ y_{k+1} & = y_{k} + v_{k} \\sin(\\psi_k) \\cdot {d t} \\\\ \\psi_{k+1} & = \\psi_k + \\dot\\psi_{k} \\cdot {d t} \\\\ v_{k+1} & = v_{k} \\\\ \\dot\\psi_{k+1} & = \\dot\\psi_{k} \\\\ \\end{aligned} \\] \\[ A = \\begin{bmatrix} 1 & 0 & -v \\sin(\\psi) \\cdot dt & \\cos(\\psi) \\cdot dt & 0 \\\\ 0 & 1 & v \\cos(\\psi) \\cdot dt & \\sin(\\psi) \\cdot dt & 0 \\\\ 0 & 0 & 1 & 0 & dt \\\\ 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 1 \\\\ \\end{bmatrix} \\]"},{"location":"perception/autoware_multi_object_tracker/models/#kinematic-bicycle-model-2","title":"Kinematic bicycle model [2]","text":"

The kinematic bicycle model uses the slip angle \(\beta\) and the velocity \(v\) to calculate the yaw update. A merit of this model is that it prevents unintended yaw rotation when the vehicle is stopped.

\\[ \\begin{aligned} x_{k+1} & = x_{k} + v_{k} \\cos \\left( \\psi_{k}+\\beta_{k} \\right) {d t} -\\frac{1}{2} \\left\\lbrace w_k v_k \\sin \\left(\\psi_{k}+\\beta_{k} \\right) \\right\\rbrace {d t}^2\\\\ y_{k+1} & = y_{k} + v_{k} \\sin \\left( \\psi_{k}+\\beta_{k} \\right) {d t} +\\frac{1}{2} \\left\\lbrace w_k v_k \\cos \\left(\\psi_{k}+\\beta_{k} \\right) \\right\\rbrace {d t}^2\\\\ \\psi_{k+1} & =\\psi_{k} + w_k {d t} \\\\ v_{k+1} & = v_{k} \\\\ \\beta_{k+1} & =\\beta_{k} \\end{aligned} \\] \\[ \\frac{\\partial f}{\\partial \\mathrm x}=\\left[\\begin{array}{ccccc} 1 & 0 & v \\cos (\\psi+\\beta) {d t} - \\frac{1}{2} \\left\\lbrace w v \\cos \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & \\sin (\\psi+\\beta) {d t} - \\left\\lbrace w \\sin \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & -v \\sin (\\psi+\\beta) {d t} - \\frac{v^2}{2l_r} \\left\\lbrace \\cos(\\beta)\\sin(\\psi+\\beta)+\\sin(\\beta)\\cos(\\psi+\\beta) \\right\\rbrace {d t}^2 \\\\ 0 & 1 & v \\sin (\\psi+\\beta) {d t} - \\frac{1}{2} \\left\\lbrace w v \\sin \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & \\cos (\\psi+\\beta) {d t} + \\left\\lbrace w \\cos \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & v \\cos (\\psi+\\beta) {d t} + \\frac{v^2}{2l_r} \\left\\lbrace \\cos(\\beta)\\cos(\\psi+\\beta)-\\sin(\\beta)\\sin(\\psi+\\beta) \\right\\rbrace {d t}^2 \\\\ 0 & 0 & 1 & \\frac{1}{l_r} \\sin \\beta {d t} & \\frac{v}{l_r} \\cos \\beta d t \\\\ 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 1 \\end{array}\\right] \\]"},{"location":"perception/autoware_multi_object_tracker/models/#remarks-on-the-output-twist","title":"remarks on the output twist","text":"

Note that the velocity \(v_{k}\) is the norm of the vehicle velocity, not the longitudinal velocity. The output twist in the object coordinate frame \((x,y)\) is therefore calculated as follows.

\\[ \\begin{aligned} v_{x} &= v_{k} \\cos \\left(\\beta_{k}\\right) \\\\ v_{y} &= v_{k} \\sin \\left(\\beta_{k}\\right) \\end{aligned} \\]"},{"location":"perception/autoware_multi_object_tracker/models/#anchor-point-based-estimation","title":"Anchor point based estimation","text":"

To separate the estimation of the position from that of the shape, we use anchor-point-based position estimation.

"},{"location":"perception/autoware_multi_object_tracker/models/#anchor-point-and-tracking-relationships","title":"Anchor point and tracking relationships","text":"

The anchor point is set when tracking is initialized; its position is the center of the first tracked bounding box.

The following shows how the anchor point is used in tracking.

The raw detection is converted to the anchor point coordinate frame, and tracking is performed in that frame.

"},{"location":"perception/autoware_multi_object_tracker/models/#manage-anchor-point-offset","title":"Manage anchor point offset","text":"

The anchor point should be kept at the same position on the object. In other words, the offset value must be adjusted so that the plane of the input bounding box closest to the ego vehicle and the corresponding plane of the output bounding box are at the same position.

"},{"location":"perception/autoware_multi_object_tracker/models/#known-limits-drawbacks","title":"Known limits, drawbacks","text":""},{"location":"perception/autoware_multi_object_tracker/models/#references","title":"References","text":"

[1] Schubert, Robin & Richter, Eric & Wanielik, Gerd. (2008). Comparison and evaluation of advanced motion models for vehicle tracking. 1 - 6. 10.1109/ICIF.2008.4632283.

[2] Kong, Jason & Pfeiffer, Mark & Schildbach, Georg & Borrelli, Francesco. (2015). Kinematic and dynamic vehicle models for autonomous driving control design. 1094-1099. 10.1109/IVS.2015.7225830.

"},{"location":"perception/autoware_object_merger/","title":"object_merger","text":""},{"location":"perception/autoware_object_merger/#object_merger","title":"object_merger","text":""},{"location":"perception/autoware_object_merger/#purpose","title":"Purpose","text":"

object_merger is a package for merging detected objects from two detection methods by data association.

"},{"location":"perception/autoware_object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The successive shortest path algorithm is used to solve the data association problem (formulated as a minimum-cost flow problem). The cost is calculated from the distance between two objects, and gate functions such as a maximum distance, a maximum area, and a minimum area are applied to reset the cost, as sketched below.
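
As a rough sketch of this gating (hypothetical function and parameter names, not the actual implementation), the cost of an implausible pair is reset so the solver never selects it:

#include <cmath>\n#include <limits>\n\n// Hypothetical gate: return the association cost between two objects,\n// or infinity when any gate (max distance, max/min area) fails.\ndouble gatedCost(\nconst double distance, const double area, const double max_dist, const double max_area,\nconst double min_area)\n{\nif (distance > max_dist) return std::numeric_limits<double>::infinity();\nif (area > max_area || area < min_area) return std::numeric_limits<double>::infinity();\nreturn distance;  // otherwise the cost is the inter-object distance\n}\n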

"},{"location":"perception/autoware_object_merger/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_object_merger/#input","title":"Input","text":"Name Type Description input/object0 autoware_perception_msgs::msg::DetectedObjects detection objects input/object1 autoware_perception_msgs::msg::DetectedObjects detection objects"},{"location":"perception/autoware_object_merger/#output","title":"Output","text":"Name Type Description output/object autoware_perception_msgs::msg::DetectedObjects modified Objects"},{"location":"perception/autoware_object_merger/#parameters","title":"Parameters","text":" Name Type Description Default Range sync_queue_size integer The size of the synchronization queue. 20 N/A precision_threshold_to_judge_overlapped float The precision threshold to judge if objects are overlapped. 0.4 N/A recall_threshold_to_judge_overlapped float The recall threshold to judge if objects are overlapped. 0.5 N/A remove_overlapped_unknown_objects boolean Flag to remove overlapped unknown objects. True N/A base_link_frame_id string The frame ID of the association frame. base_link N/A priority_mode integer Index for the priority_mode. 3 [0, 1, 2, 3] Name Type Description Default Range can_assign_matrix array Assignment table for data association. [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1] N/A max_dist_matrix array Maximum distance table for data association. [4.0, 4.0, 5.0, 5.0, 5.0, 2.0, 2.0, 2.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0] N/A max_rad_matrix array Maximum angle table for data association. If value is greater than pi, it will be ignored. [3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15] N/A min_iou_matrix array Minimum IoU threshold matrix for data association. If value is negative, it will be ignored. [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.3, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1] N/A Name Type Description Default Range distance_threshold_list array Distance threshold for each class used in judging overlap. [9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0] N/A generalized_iou_threshold array Generalized IoU threshold for each class. 
[-0.1, -0.1, -0.1, -0.6, -0.6, -0.1, -0.1, -0.1] N/A"},{"location":"perception/autoware_object_merger/#tips","title":"Tips","text":""},{"location":"perception/autoware_object_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_object_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_object_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_object_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_object_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The data association algorithm was originally the same as that of multi_object_tracker, but the multi_object_tracker algorithm has since been updated.

"},{"location":"perception/autoware_object_range_splitter/","title":"`autoware_object_range_splitter`","text":""},{"location":"perception/autoware_object_range_splitter/#autoware_object_range_splitter","title":"autoware_object_range_splitter","text":""},{"location":"perception/autoware_object_range_splitter/#purpose","title":"Purpose","text":"

autoware_object_range_splitter is a package to divide detected objects into two messages by the distance from the origin.

"},{"location":"perception/autoware_object_range_splitter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_object_range_splitter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_object_range_splitter/#input","title":"Input","text":"Name Type Description input/object autoware_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/autoware_object_range_splitter/#output","title":"Output","text":"Name Type Description output/long_range_object autoware_perception_msgs::msg::DetectedObjects long range detected objects output/short_range_object autoware_perception_msgs::msg::DetectedObjects short range detected objects"},{"location":"perception/autoware_object_range_splitter/#parameters","title":"Parameters","text":"Name Type Description Default Range split_range float object_range_splitter is a package to divide detected objects into two messages by the distance from the origin 30 N/A"},{"location":"perception/autoware_object_range_splitter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_object_range_splitter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_object_range_splitter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_object_range_splitter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_object_range_splitter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_object_velocity_splitter/","title":"autoware_object_velocity_splitter","text":""},{"location":"perception/autoware_object_velocity_splitter/#autoware_object_velocity_splitter","title":"autoware_object_velocity_splitter","text":"

This package contains an object filter module for autoware_perception_msgs/msg/DetectedObject. It can split DetectedObjects into two messages based on each object's speed.

"},{"location":"perception/autoware_object_velocity_splitter/#interface","title":"Interface","text":""},{"location":"perception/autoware_object_velocity_splitter/#input","title":"Input","text":""},{"location":"perception/autoware_object_velocity_splitter/#output","title":"Output","text":""},{"location":"perception/autoware_object_velocity_splitter/#parameters","title":"Parameters","text":"

This parameter is the velocity threshold used to split the objects.
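
Conceptually, the split can be sketched as below (hypothetical names; whether the node uses the twist norm or only the longitudinal component is an implementation detail of the actual code):

#include <cmath>\n\n// Hypothetical: objects whose speed is at or above the threshold go to the\n// high-speed output message, the rest to the low-speed output message.\nbool isHighSpeed(const double vx, const double vy, const double velocity_threshold)\n{\nreturn std::hypot(vx, vy) >= velocity_threshold;\n}\n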

"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/","title":"autoware_occupancy_grid_map_outlier_filter","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#autoware_occupancy_grid_map_outlier_filter","title":"autoware_occupancy_grid_map_outlier_filter","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#purpose","title":"Purpose","text":"

This node is an outlier filter based on an occupancy grid map. Since the occupancy grid map expresses occupancy probabilities as a time series, this node can be regarded as a time-series outlier filter, depending on the implementation of the occupancy grid map.

"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"
  1. Use the occupancy grid map to separate the point cloud into points with low occupancy probability and points with high occupancy probability.

  2. Points with low occupancy probability are not necessarily outliers. In particular, the top of a moving object tends to have low occupancy probability. Therefore, if use_radius_search_2d_filter is true, a radius search 2D outlier filter is applied to the points judged to have low occupancy probability (a minimal sketch follows this list).

    1. For each low occupancy probability point, the outlier decision is made from the radius (radius_search_2d_filter/search_radius) and the number of nearby points. The reference point cloud is not only the low occupancy probability points but all points, including those with high occupancy probability.
    2. The required number of points is scaled by radius_search_2d_filter/min_points_and_distance_ratio and the distance from the base link, and is clamped between a minimum and a maximum.
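
A minimal sketch of this radius-search check using PCL is shown below (illustrative names and thresholds, not the node's actual code):

#include <pcl/kdtree/kdtree_flann.h>\n#include <pcl/point_types.h>\n\n#include <vector>\n\n// Hypothetical check: a low occupancy probability point is an outlier when too\n// few neighbors exist within search_radius. The k-d tree must be built on the\n// 2D projection (z = 0) of ALL points, including high occupancy probability ones.\nbool isOutlier(\nconst pcl::KdTreeFLANN<pcl::PointXYZ> & kdtree_all_points_2d,\nconst pcl::PointXYZ & low_occupancy_point_2d, const double search_radius,\nconst int min_points_threshold)\n{\nstd::vector<int> indices;\nstd::vector<float> squared_distances;\nconst int num_neighbors = kdtree_all_points_2d.radiusSearch(\nlow_occupancy_point_2d, search_radius, indices, squared_distances);\nreturn num_neighbors < min_points_threshold;\n}\n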

The following video is a sample. Yellow points have high occupancy probability, green points have low occupancy probability but are not outliers, and red points are outliers. At around 0:15 and 1:16 in the first video, a bird crosses the road and is treated as an outlier.

"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs/PointCloud2 Obstacle point cloud with ground removed. ~/input/occupancy_grid_map nav_msgs/OccupancyGrid A map in which the probability of the presence of an obstacle is occupancy probability map"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#output","title":"Output","text":"Name Type Description ~/output/pointcloud sensor_msgs/PointCloud2 Point cloud with outliers removed. trajectory ~/output/debug/outlier/pointcloud sensor_msgs/PointCloud2 Point clouds removed as outliers. ~/output/debug/low_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a low probability of occupancy in the occupancy grid map. However, it is not considered as an outlier. ~/output/debug/high_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a high probability of occupancy in the occupancy grid map. trajectory"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range radius_search_2d_filter.search_radius float Radius when calculating the density 1.0 N/A radius_search_2d_filter.min_points_and_distance_ratio float Threshold value of the number of point clouds per radius when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink 400.0 N/A radius_search_2d_filter.min_points float Minimum number of point clouds per radius 4 N/A radius_search_2d_filter.max_points float Maximum number of point clouds per radius 70 N/A radius_search_2d_filter.max_filter_points_nb float Maximum number of point clouds to be filtered 15000 N/A map_frame string map frame id map N/A base_link_frame string base link frame id base_link N/A cost_threshold float Cost threshold of occupancy grid map (0~100). 
100 means 100% probability that there is an obstacle, close to 50 means that it is indistinguishable whether it is an obstacle or free space, 0 means that there is no obstacle 45 N/A use_radius_search_2d_filter boolean Whether or not to apply density-based outlier filters to objects that are judged to have low probability of occupancy on the occupancy grid map True N/A enable_debugger boolean Whether to output the point cloud for debugging False N/A"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/","title":"autoware_probabilistic_occupancy_grid_map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#autoware_probabilistic_occupancy_grid_map","title":"autoware_probabilistic_occupancy_grid_map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#purpose","title":"Purpose","text":"

This package outputs the probability that an obstacle exists as an occupancy grid map.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#settings","title":"Settings","text":"

The occupancy grid map is generated in map_frame, and the grid orientation is fixed.

You may need to choose scan_origin_frame and gridmap_origin_frame, which mean the sensor origin and the grid map origin respectively. In particular, setting your main LiDAR sensor frame (e.g. velodyne_top in sample_vehicle) as scan_origin_frame tends to result in better performance.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#parameters","title":"Parameters","text":" "},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#downsample-input-pointcloudoptional","title":"Downsample input pointcloud(Optional)","text":"

If you set downsample_input_pointcloud to true, the input pointcloud will be downsampled and the following topics are also used. This feature is currently available only for the pointcloud-based occupancy grid map.

# downsampled raw and obstacle pointcloud\n/perception/occupancy_grid_map/obstacle/downsample/pointcloud\n/perception/occupancy_grid_map/raw/downsample/pointcloud\n
# downsampled raw and obstacle pointcloud\n/perception/occupancy_grid_map/obstacle/downsample/pointcloud\n/perception/occupancy_grid_map/<sensor_name>/raw/downsample/pointcloud\n
"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#test","title":"Test","text":"

This package provides unit tests using gtest. You can run the tests with the following command.

colcon test --packages-select autoware_probabilistic_occupancy_grid_map --event-handlers console_direct+\n

The tests contain the following.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/","title":"laserscan based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#laserscan-based-occupancy-grid-map","title":"laserscan based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The basic idea is to take a 2D laserscan and ray-trace it to create an occupancy grid map that is processed as a time series.

  1. The node takes a laserscan and makes an occupancy grid map from one frame. Ray tracing is done with Bresenham's line algorithm.
  2. Optionally, obstacle point clouds and raw point clouds can be received and reflected in the occupancy grid map. The reason is that a laserscan keeps only the most foreground point in the polar coordinate system, so it throws away a lot of information; as a result, the occupancy grid map becomes almost entirely UNKNOWN cells. Therefore, the obstacle point cloud and the raw point cloud are used to reflect what is judged to be ground and what is judged to be an obstacle in the occupancy grid map. The black and red dots represent the raw point cloud, and the red dots alone represent the obstacle point cloud. In other words, the black points are judged to be ground, and the red points are judged to be obstacles. The gray cells are represented as UNKNOWN cells.

  3. Using the previous occupancy grid map, the existence probability is updated with a binary Bayesian filter (1). Unobserved cells are also time-decayed, similar to the system noise of a Kalman filter (2).

\\[ \\hat{P_{o}} = \\frac{(P_{o} *P_{z})}{(P_{o}* P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1} \\] \\[ \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2} \\]"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/laserscan sensor_msgs::LaserScan laserscan ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame input_obstacle_pointcloud bool whether to use the optional obstacle point cloud? If this is true, ~/input/obstacle_pointcloud topics will be received. input_obstacle_and_raw_pointcloud bool whether to use the optional obstacle and raw point cloud? If this is true, ~/input/obstacle_pointcloud and ~/input/raw_pointcloud topics will be received. use_height_filter bool whether to height filter for ~/input/obstacle_pointcloud and ~/input/raw_pointcloud? By default, the height is set to -1~2m. map_length double The length of the map. -100 if it is 50~50[m] map_resolution double The map cell resolution [m]"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places we have modified external code that is distributed under the BSD-3-Clause license.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

Bresenham's line algorithm

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/","title":"pointcloud based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#pointcloud-based-occupancy-grid-map","title":"pointcloud based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#1st-step","title":"1st step","text":"

First of all, the input obstacle/raw pointclouds are transformed into polar coordinates centered on scan_origin and divided into circular bins per angle_increment. Each point belonging to a bin is stored as range data. In addition, the x, y information in the map coordinate frame is also stored for ray tracing on the map. Each bin contains the following information for each point

The following figure shows each of the bins from a side view.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#2nd-step","title":"2nd step","text":"

Ray tracing is performed in three steps for each cell, using Bresenham's line algorithm.

  1. Initialize freespace to the farthest point of each bin.

  2. Fill in the unknown cells. Based on the assumption that UNKNOWN exists behind an obstacle, the cells that are more than a distance margin away from each obstacle point are filled with UNKNOWN.

    There are three reasons for setting a distance margin.

    When the parameter grid_map_type is \"OccupancyGridMapProjectiveBlindSpot\" and scan_origin is a sensor frame (e.g. velodyne_top), then for each obstacle point, if there are no visible raw points located above the ray projected from scan_origin to that obstacle point, the cells between the obstacle point and the projected point are filled with UNKNOWN. Note that scan_origin should not be base_link in this mode, because otherwise all the cells behind the obstacle points would be filled with UNKNOWN.

  3. Fill in the occupied cells. The cell where an obstacle point is located is filled with OCCUPIED. In addition, if the distance between obstacle points is less than or equal to the distance margin, that interval is also filled with OCCUPIED, because the input may be inaccurate and obstacle points may not always be judged as obstacles.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#3rd-step","title":"3rd step","text":"

Using the previous occupancy grid map, the existence probability is updated with a binary Bayesian filter (1). Unobserved cells are also time-decayed, similar to the system noise of a Kalman filter (2).
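
A minimal sketch of these two updates is shown below (hypothetical function names; formulas (1) and (2), which follow, are the reference):

// Binary Bayes filter update (1): fuse the prior occupancy probability p_o\n// with the measurement probability p_z.\ndouble bayesUpdate(const double p_o, const double p_z)\n{\nreturn (p_o * p_z) / (p_o * p_z + (1.0 - p_o) * (1.0 - p_z));\n}\n\n// Time decay (2): pull an unobserved cell back toward 0.5 (unknown);\n// ratio controls the decay speed.\ndouble timeDecay(const double p_o, const double ratio)\n{\nreturn (p_o + 0.5 / ratio) / (1.0 / ratio + 1.0);\n}\n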

\\[ \\hat{P_{o}} = \\frac{(P_{o} *P_{z})}{(P_{o}* P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1} \\] \\[ \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2} \\]"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#related-topics","title":"Related topics","text":"

If you set downsample_input_pointcloud to true, the input pointcloud will be downsampled and the following topics are also used.

# downsampled raw and obstacle pointcloud\n/perception/occupancy_grid_map/obstacle/downsample/pointcloud\n/perception/occupancy_grid_map/raw/downsample/pointcloud\n
"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame use_height_filter bool whether to height filter for ~/input/obstacle_pointcloud and ~/input/raw_pointcloud? By default, the height is set to -1~2m. map_length double The length of the map. -100 if it is 50~50[m] map_resolution double The map cell resolution [m] grid_map_type string The type of grid map for estimating UNKNOWN region behind obstacle point clouds scan_origin string The origin of the scan. It should be a sensor frame. pub_debug_grid bool Whether to publish debug grid maps downsample_input_pointcloud bool Whether to downsample the input pointclouds. The downsampled pointclouds are used for the ray tracing. downsample_voxel_size double The voxel size for the downsampled pointclouds."},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places we have modified external code that is distributed under the BSD-3-Clause license.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#how-to-debug","title":"How to debug","text":"

If grid_map_type is \"OccupancyGridMapProjectiveBlindSpot\" and pub_debug_grid is true, you can check each step of the grid map generation by running

ros2 launch autoware_probabilistic_occupancy_grid_map debug.launch.xml\n

and visualizing the following occupancy grid map topics (which are listed in config/grid_map_param.yaml):

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/","title":"synchronized OGM fusion","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#synchronized-ogm-fusion","title":"synchronized OGM fusion","text":"

For simplicity, we use OGM to mean the occupancy grid map.

This package fuses the OGMs from synchronized sensors, especially LiDARs.

The following shows example OGMs for this synchronized OGM fusion.

left lidar OGM right lidar OGM top lidar OGM

This package is not suitable for fusing OGMs from asynchronous sensor outputs; asynchronous OGM fusion is under construction.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#processing-flow","title":"Processing flow","text":"

The processing flow of this package is shown in the following figure.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#io","title":"I/O","text":"Input topic name Type Description input_ogm_topics list of nav_msgs::msg::OccupancyGrid List of input topics for Occupancy Grid Maps. This parameter is given in list, so Output topic name Type Description ~/output/occupancy_grid_map nav_msgs::msg::OccupancyGrid Output topic name of the fused Occupancy Grid Map. ~/debug/single_frame_map nav_msgs::msg::OccupancyGrid (debug topic) Output topic name of the single frame fused Occupancy Grid Map."},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#parameters","title":"Parameters","text":"

Synchronized OGM fusion node parameters are shown in the following table. The main parameters to consider in the fusion node are shown in bold.

Ros param name Sample value Description input_ogm_topics [\"topic1\", \"topic2\"] List of input topics for Occupancy Grid Maps input_ogm_reliabilities [0.8, 0.2] Weights for the reliability of each input topic fusion_method \"overwrite\" Method of fusion (\"overwrite\", \"log-odds\", \"dempster-shafer\") match_threshold_sec 0.01 Matching threshold in seconds timeout_sec 0.1 Timeout duration in seconds input_offset_sec [0.0, 0.0] Offset time in seconds for each input topic mapframe \"map\" Frame name for the fused map baselink_frame \"base_link\" Frame name for the base link gridmap_origin_frame \"base_link\" Frame name for the origin of the grid map fusion_map_length_x 100.0 Length of the fused map along the X-axis fusion_map_length_y 100.0 Length of the fused map along the Y-axis fusion_map_resolution 0.5 Resolution of the fused map

Since this node assumes that the OGMs from synchronized sensors are generated at the same time, the match_threshold_sec, timeout_sec, and input_offset_sec parameters need to be tuned to successfully fuse the OGMs.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#fusion-methods","title":"Fusion methods","text":"

For the single frame fusion, the following fusion methods are supported.

Fusion Method in parameter Description overwrite The value of the cell in the fused OGM is overwritten by the value of the cell in the OGM with the highest priority. We set the priority as Occupied > Free > Unknown. log-odds The value of the cell in the fused OGM is calculated by the log-odds ratio method, which is known as a Bayesian fusion method. The log-odds of a probability \(p\) can be written as \(l_p = \log(\frac{p}{1-p})\), and the fused log-odds is the sum of the individual log-odds: \(l_f = \Sigma l_p\) dempster-shafer The value of the cell in the fused OGM is calculated by Dempster-Shafer theory [1]. This is also a popular method for handling multiple pieces of evidence. This package applies the conflict escape logic from [2] for performance. See the references for algorithm details.

For multi-frame fusion, only the log-odds fusion method is currently supported.
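
For illustration, a minimal single-cell log-odds fusion looks like the following (hypothetical names; the actual implementation also handles unknown cells and per-topic reliabilities):

#include <cmath>\n#include <vector>\n\n// Log-odds of a probability p, and its inverse.\ndouble logOdds(const double p) { return std::log(p / (1.0 - p)); }\ndouble fromLogOdds(const double l) { return 1.0 / (1.0 + std::exp(-l)); }\n\n// Fuse the occupancy probabilities of one cell from multiple OGMs\n// by summing their log-odds.\ndouble fuseCell(const std::vector<double> & probabilities)\n{\ndouble l_sum = 0.0;\nfor (const double p : probabilities) {\nl_sum += logOdds(p);\n}\nreturn fromLogOdds(l_sum);\n}\n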

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#how-to-use","title":"How to use","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#launch-fusion-node","title":"launch fusion node","text":"

A minimal node launch looks like the following.

<?xml version=\"1.0\"?>\n<launch>\n<arg name=\"output_topic\" default=\"~/output/occupancy_grid_map\"/>\n<arg name=\"fusion_node_param_path\" default=\"$(find-pkg-share autoware_probabilistic_occupancy_grid_map)/config/synchronized_grid_map_fusion_node.param.yaml\"/>\n\n<node name=\"synchronized_grid_map_fusion_node\" exec=\"synchronized_grid_map_fusion_node\" pkg=\"autoware_probabilistic_occupancy_grid_map\" output=\"screen\">\n<remap from=\"~/output/occupancy_grid_map\" to=\"$(var output_topic)\"/>\n<param from=\"$(var fusion_node_param_path)\"/>\n</node>\n</launch>\n
"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#optional-generate-ogms-in-each-sensor-frame","title":"(Optional) Generate OGMs in each sensor frame","text":"

You need to generate OGMs in each sensor frame before performing the grid map fusion.

The autoware_probabilistic_occupancy_grid_map package supports generating OGMs for each sensor frame from point cloud data.

Example launch.xml
<include file=\"$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml\">\n<arg name=\"input/obstacle_pointcloud\" value=\"/perception/obstacle_segmentation/single_frame/pointcloud\"/>\n<arg name=\"input/raw_pointcloud\" value=\"/sensing/lidar/right/outlier_filtered/pointcloud_synchronized\"/>\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/right_lidar/map\"/>\n<arg name=\"map_frame\" value=\"base_link\"/>\n<arg name=\"scan_origin\" value=\"velodyne_right\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"param_file\" value=\"$(find-pkg-share autoware_probabilistic_occupancy_grid_map)/config/pointcloud_based_occupancy_grid_map_fusion.param.yaml\"/>\n</include>\n\n\nThe minimum parameter for the OGM generation in each frame is shown in the following table.\n\n|Parameter|Description|\n|--|--|\n|`input/obstacle_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is segmented as the obstacle.|\n|`input/raw_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is not segmented as the obstacle. |\n|`output`| The output topic of the OGM. |\n|`map_frame`| The tf frame for the OGM center origin. |\n|`scan_origin`| The tf frame for the sensor origin. |\n|`method`| The method for the OGM generation. Currently we support `pointcloud_based_occupancy_grid_map` and `laser_scan_based_occupancy_grid_map`. The pointcloud based method is recommended. |\n|`param_file`| The parameter file for the OGM generation. See [example parameter file](config/pointcloud_based_occupancy_grid_map_for_fusion.param.yaml) |\n

We recommend using the same map_frame, size, and resolution for the OGMs from synchronized sensors. Also, remember to set enable_single_frame_mode and filter_obstacle_pointcloud_by_raw_pointcloud to true in the autoware_probabilistic_occupancy_grid_map package (you do not need to set these parameters if you use the example config file above).

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#run-both-ogm-generation-node-and-fusion-node","title":"Run both OGM generation node and fusion node","text":"

We provide a launch file, grid_map_fusion_with_synchronized_pointclouds.launch.py, that runs both the OGM generation nodes and the fusion node.

You can include this launch file like the following.

<include file=\"$(find-pkg-share autoware_probabilistic_occupancy_grid_map)/launch/grid_map_fusion_with_synchronized_pointclouds.launch.py\">\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/fusion/map\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"fusion_config_file\" value=\"$(var fusion_config_file)\"/>\n<arg name=\"ogm_config_file\" value=\"$(var ogm_config_file)\"/>\n</include>\n

The minimum parameters for the launch file are shown in the following table.

Parameter Description output The output topic of the finally fused OGM. method The method for the OGM generation. Currently we support pointcloud_based_occupancy_grid_map and laser_scan_based_occupancy_grid_map. The pointcloud based method is recommended. fusion_config_file The parameter file for the grid map fusion. See example parameter file ogm_config_file The parameter file for the OGM generation. See example parameter file"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#references","title":"References","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/","title":"autoware_radar_crossing_objects_noise_filter","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#autoware_radar_crossing_objects_noise_filter","title":"autoware_radar_crossing_objects_noise_filter","text":"

This package contains a radar noise filter module for autoware_perception_msgs/msg/DetectedObject. It can filter out noise objects that move crosswise relative to the ego vehicle.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#design","title":"Design","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#background","title":"Background","text":"

This package aims to filter out noise objects that move crosswise relative to the ego vehicle. The reasons why these objects are noise are as follows.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#1-the-objects-with-doppler-velocity-can-be-trusted-more-than-those-with-vertical-velocity-to-it","title":"1. The objects with doppler velocity can be trusted more than those with vertical velocity to it","text":"

Radars measure object velocity as doppler velocity, but cannot directly measure the velocity component perpendicular to the doppler direction. Some radars output objects with not only doppler velocity but also an estimated perpendicular velocity. If the perpendicular velocity estimation is poor, noise objects are produced. In other words, objects with crosswise twist as viewed from the ego vehicle tend to be noise objects.

An example is the figure below: velocity estimation fails on static objects, resulting in ghost objects crossing in front of the ego vehicle.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#2-turning-around-by-ego-vehicle-affect-the-output-from-radar","title":"2. Turning around by ego vehicle affect the output from radar","text":"

When the ego vehicle turns, radars that output at the object level sometimes fail to estimate the twist of objects correctly, even if radar_tracks_msgs_converter compensates with the ego vehicle twist. So if an object detected by radar appears to have circular motion when viewed from base_link, it is likely that the speed was estimated incorrectly and that the object is actually static.

An example is the figure below: when the ego vehicle turns right, the surrounding static objects appear to have leftward circular motion.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#algorithm","title":"Algorithm","text":"

To filter objects crossing relative to the ego vehicle, this package applies the following algorithm.

  // If the velocity of an object is larger than velocity_threshold,\n// and crossing_yaw is near to vertical, i.e.\n// angle_threshold < crossing_yaw < pi - angle_threshold\nif (\nvelocity > node_param_.velocity_threshold &&\nabs(std::cos(crossing_yaw)) < abs(std::cos(node_param_.angle_threshold))) {\n// Object is judged to be a noise object;\n} else {\n// Object is not a noise object;\n}\n
"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#interface","title":"Interface","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#input","title":"Input","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#output","title":"Output","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#parameters","title":"Parameters","text":"

This parameter is the angle threshold for filtering, with the condition 0 < angle_threshold < pi / 2. If the crossing angle is larger than this parameter, the object can be a candidate noise object; if it is smaller, the object passes the filter. The smaller this parameter is set, the more objects are considered noise. For details, see the algorithm section.

This parameter is the velocity threshold for filtering. If the velocity of an object is larger than this parameter, the object can be a candidate noise object; if it is smaller, the object passes the filter. The smaller this parameter is set, the more objects are considered noise. For details, see the algorithm section.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/","title":"`autoware_radar_fusion_to_detected_object`","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#autoware_radar_fusion_to_detected_object","title":"autoware_radar_fusion_to_detected_object","text":"

This package contains a sensor fusion module for radar-detected objects and 3D detected objects.

The fusion node can:

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#design","title":"Design","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#background","title":"Background","text":"

This package fuses LiDAR-based 3D detection output with radar data. LiDAR-based 3D detection can estimate the position and size of objects with high precision, but it cannot estimate their velocity. Radar data can estimate the doppler velocity of objects, but cannot estimate their position and size with high precision. This fusion package aims to combine these complementary data and estimate the position, size, and velocity of objects with high precision.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#algorithm","title":"Algorithm","text":"

The documentation of the core algorithm is here

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#interface-for-core-algorithm","title":"Interface for core algorithm","text":"

The parameters for the core algorithm can be set as core_params.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#parameters-for-sensor-fusion","title":"Parameters for sensor fusion","text":"

This parameter is the distance to extend the 2D bird's-eye view bounding box on each side. This parameter is used as a threshold to find radar centroids falling inside the extended box.

This parameter is the velocity threshold used to decide whether to split one object into two based on radar information. Note that this feature is not currently implemented.

This parameter is the yaw orientation threshold. If the yaw difference between a LiDAR-based detected object and the radar velocity direction is smaller than this threshold, the radar information is attached to the output object.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#weight-parameters-for-velocity-estimation","title":"Weight parameters for velocity estimation","text":"

To tune these weight parameters, please see the documentation for details.

This parameter is the coefficient for the average twist of the radar data in the velocity estimation.

This parameter is the coefficient for the median twist of the radar data in the velocity estimation.

This parameter is the coefficient for the twist of the radar data nearest to the center of the bounding box in the velocity estimation.

This parameter is the coefficient for the target-value-weighted average twist in the velocity estimation. The target value is amplitude when using a radar pointcloud and probability when using radar objects.

This parameter is the coefficient for the twist of the radar data with the top target value in the velocity estimation. The target value is amplitude when using a radar pointcloud and probability when using radar objects.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#parameters-for-fixed-object-information","title":"Parameters for fixed object information","text":"

This parameter is the flag that determines whether to convert doppler velocity to twist using the yaw information of a detected object.

This parameter is the threshold used to filter output objects. If the probability of an output object is lower than this parameter and the object has no associated radar points/objects, the object is deleted.

This parameter is the flag to use probability compensation. If true, the probability of objects is compensated up to the threshold probability.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#interface-for-autoware_radar_object_fusion_to_detected_object","title":"Interface for autoware_radar_object_fusion_to_detected_object","text":"

Sensor fusion with radar objects and a detected object.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_fusion_to_detected_object radar_object_to_detected_object.launch.xml\n
"},{"location":"perception/autoware_radar_fusion_to_detected_object/#input","title":"Input","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#output","title":"Output","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#parameters","title":"Parameters","text":"

The parameters for the core algorithm can be set as node_params.

This parameter is the update rate for the onTimer function. It should be the same as the frame rate of the input topics.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#interface-for-radar_scan_fusion_to_detected_object-tbd","title":"Interface for radar_scan_fusion_to_detected_object (TBD)","text":"

Under implementation

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/","title":"Algorithm","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#common-algorithm","title":"Common Algorithm","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#1-link-between-3d-bounding-box-and-radar-data","title":"1. Link between 3d bounding box and radar data","text":"

Choose the radar pointcloud/objects within the 3D bounding box from LiDAR-based detection, with margin space, in the bird's-eye view.
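
A rough sketch of this gate is shown below (hypothetical names; axis-aligned for brevity, whereas the actual check should account for the box yaw):

#include <cmath>\n\n// Hypothetical gate: does a radar centroid fall inside the detection's\n// bird's-eye-view bounding box extended by margin on each side?\nbool isInsideExtendedBox(\nconst double radar_x, const double radar_y, const double box_center_x,\nconst double box_center_y, const double box_length, const double box_width,\nconst double margin)\n{\nconst double half_length = box_length / 2.0 + margin;\nconst double half_width = box_width / 2.0 + margin;\nreturn std::abs(radar_x - box_center_x) <= half_length &&\nstd::abs(radar_y - box_center_y) <= half_width;\n}\n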

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#2-feature-support-split-the-object-going-in-a-different-direction","title":"2. [Feature support] Split the object going in a different direction","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#3-estimate-twist-of-object","title":"3. Estimate twist of object","text":"

Estimate the twist from the chosen radar pointcloud/objects using the twist and the target value (the target value is amplitude when using a radar pointcloud and probability when using radar objects). First, the estimation function calculates

Second, the estimation function calculates the weighted average of these values, as sketched below. Third, the estimated twist is attached to the object.
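
The weighted average of the second step can be sketched as follows (hypothetical names; the candidates correspond to the values listed above and the weights to the velocity weight parameters):

#include <vector>\n\n// Hypothetical: combine the candidate twist estimates with their\n// weight coefficients into a single velocity estimate.\ndouble weightedAverage(\nconst std::vector<double> & candidates, const std::vector<double> & weights)\n{\ndouble weighted_sum = 0.0;\ndouble weight_sum = 0.0;\nfor (size_t i = 0; i < candidates.size(); ++i) {\nweighted_sum += weights[i] * candidates[i];\nweight_sum += weights[i];\n}\nreturn weight_sum > 0.0 ? weighted_sum / weight_sum : 0.0;\n}\n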

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#4-feature-support-option-convert-doppler-velocity-to-twist","title":"4. [Feature support] [Option] Convert doppler velocity to twist","text":"

If the twist information from the radar is doppler velocity, it is converted to twist using the yaw angle of the DetectedObject. Because a radar pointcloud has only doppler velocity information, radar pointcloud fusion should use this feature. On the other hand, because radar objects have twist information, radar object fusion should not use this feature.
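
As an illustration of the conversion (a simplified sketch assuming the object moves along its own yaw direction; names are hypothetical):

#include <cmath>\n\n// Hypothetical: recover the object speed along its heading from the measured\n// doppler (radial) velocity. yaw_diff is the angle between the object yaw and\n// the radar ray. Returns 0 when the ray is nearly perpendicular to the heading.\ndouble dopplerToLongitudinalVelocity(const double doppler_velocity, const double yaw_diff)\n{\nconst double c = std::cos(yaw_diff);\nif (std::abs(c) < 1e-3) {\nreturn 0.0;\n}\nreturn doppler_velocity / c;\n}\n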

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#5-delete-objects-with-low-probability","title":"5. Delete objects with low probability","text":""},{"location":"perception/autoware_radar_object_clustering/","title":"`autoware_radar_object_clustering`","text":""},{"location":"perception/autoware_radar_object_clustering/#autoware_radar_object_clustering","title":"autoware_radar_object_clustering","text":"

This package contains a radar object clustering for autoware_perception_msgs/msg/DetectedObject input.

This package creates clustered objects from radar DetectedObjects (objects converted from RadarTracks by radar_tracks_msgs_converter and processed by a noise filter). In other words, it can combine multiple radar detections from one object into one and adjust the class and size.

"},{"location":"perception/autoware_radar_object_clustering/#design","title":"Design","text":""},{"location":"perception/autoware_radar_object_clustering/#background","title":"Background","text":"

With radars that output at the object level, multiple detection results are sometimes obtained from one object, especially for large vehicles such as trucks and trailers. These multiple detection results cause object separation in the tracking module. Therefore, this package clusters the multiple detection results into one object in advance.

"},{"location":"perception/autoware_radar_object_clustering/#algorithm","title":"Algorithm","text":""},{"location":"perception/autoware_radar_object_clustering/#1-sort-by-distance-from-base_link","title":"1. Sort by distance from base_link","text":"

First, to prevent the result from depending on the order of objects in DetectedObjects, the input objects are sorted by distance from base_link. This also allows matching to proceed in order of closeness, which accounts for occlusion.

"},{"location":"perception/autoware_radar_object_clustering/#2-clustering","title":"2. Clustering","text":"

If two radar objects are near each other, and their yaw directions and velocities are similar (the allowed differences are defined by parameters), they are clustered. Note that radar characteristics affect the parameters for this matching. For example, if the resolution of range or angle is low and the velocity accuracy is high, the distance_threshold parameter should be larger and the matching should rely more strongly on velocity similarity.

After grouping all radar objects, if multiple radar objects belong to one group, the kinematics of the new clustered object are calculated as their average, and the label and shape of the new clustered object are taken from the radar object with the highest confidence.

"},{"location":"perception/autoware_radar_object_clustering/#3-fixed-label-correction","title":"3. Fixed label correction","text":"

When the label information from the radar output lacks accuracy, it is recommended to set the is_fixed_label parameter to true. If the parameter is true, the label of a clustered object is overwritten by the label set in the fixed_label parameter. If this package is used for faraway dynamic object detection with radar, it is recommended to set the label to VEHICLE.

"},{"location":"perception/autoware_radar_object_clustering/#4-fixed-size-correction","title":"4. Fixed size correction","text":"

When the size information from the radar output lacks accuracy, it is recommended to set the is_fixed_size parameter to true. If the parameter is true, the size of a clustered object is overwritten by the values set in the size_x, size_y, and size_z parameters. If this package is used for faraway dynamic object detection with radar, it is recommended to set size_x, size_y, and size_z to the average size of vehicles. Note that to use the output with multi_object_tracker, the size parameters need to exceed its min_area_matrix parameters.

"},{"location":"perception/autoware_radar_object_clustering/#limitation","title":"Limitation","text":"

For now, size estimation for the clustered object is not implemented, so it is recommended to set the is_fixed_size parameter to true and set the size parameters to values close to the average vehicle size.

"},{"location":"perception/autoware_radar_object_clustering/#interface","title":"Interface","text":""},{"location":"perception/autoware_radar_object_clustering/#input","title":"Input","text":""},{"location":"perception/autoware_radar_object_clustering/#output","title":"Output","text":""},{"location":"perception/autoware_radar_object_clustering/#parameter","title":"Parameter","text":"

These parameters are the angle, distance, and velocity thresholds used to judge whether two radar detections come from one object in the "clustering" processing, which is described in detail in the algorithm section. If the angle, distance, and velocity differences between two objects are all less than the thresholds, the two objects are merged into one clustered object. The larger these parameters are, the more objects are merged into one clustered object.

These are used in isSameObject function as below.

bool RadarObjectClusteringNode::isSameObject(\nconst DetectedObject & object_1, const DetectedObject & object_2)\n{\n// difference of heading (yaw) between the two objects\nconst double angle_diff = std::abs(autoware::universe_utils::normalizeRadian(\ntf2::getYaw(object_1.kinematics.pose_with_covariance.pose.orientation) -\ntf2::getYaw(object_2.kinematics.pose_with_covariance.pose.orientation)));\n// difference of longitudinal velocity\nconst double velocity_diff = std::abs(\nobject_1.kinematics.twist_with_covariance.twist.linear.x -\nobject_2.kinematics.twist_with_covariance.twist.linear.x);\n// 2D distance between the two object positions\nconst double distance = autoware::universe_utils::calcDistance2d(\nobject_1.kinematics.pose_with_covariance.pose.position,\nobject_2.kinematics.pose_with_covariance.pose.position);\n\n// the objects are judged as the same only when all three thresholds hold\nif (\ndistance < node_param_.distance_threshold && angle_diff < node_param_.angle_threshold &&\nvelocity_diff < node_param_.velocity_threshold) {\nreturn true;\n} else {\nreturn false;\n}\n}\n

is_fixed_label is the flag to use a fixed label. If it is true, the label of a clustered object is overwritten by the label set in the fixed_label parameter. If the radar objects do not have label information, using a fixed label is recommended.

is_fixed_size is the flag to use fixed size parameters. If it is true, the size of a clustered object is overwritten by the values set in the size_x, size_y, and size_z parameters.

"},{"location":"perception/autoware_radar_object_tracker/","title":"`autoware_radar_object_tracker`","text":""},{"location":"perception/autoware_radar_object_tracker/#autoware_radar_object_tracker","title":"autoware_radar_object_tracker","text":""},{"location":"perception/autoware_radar_object_tracker/#purpose","title":"Purpose","text":"

This package provides a radar object tracking node that processes sequences of detected objects to assign consistent identities to them and estimate their velocities.

"},{"location":"perception/autoware_radar_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This radar object tracker is a combination of data association and tracking algorithms.

"},{"location":"perception/autoware_radar_object_tracker/#data-association","title":"Data Association","text":"

The data association algorithm matches detected objects to existing tracks.
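
As a simplified sketch of the idea (the actual node gates candidate pairs with the association matrices listed under Node Parameters and solves a global assignment problem; the greedy nearest-neighbor loop and the 2D types below are illustrative assumptions):

#include <cmath>\n#include <vector>\n\nstruct Detection { double x, y; };\nstruct Track { double x, y; };\n\n// Greedy nearest-neighbor association with a distance gate.\nvoid associate(std::vector<Track> & tracks, const std::vector<Detection> & detections, double gate)\n{\nfor (const auto & det : detections) {\nTrack * best = nullptr;\ndouble best_dist = gate;  // pairs farther than the gate are never matched\nfor (auto & trk : tracks) {\nconst double d = std::hypot(det.x - trk.x, det.y - trk.y);\nif (d < best_dist) { best_dist = d; best = &trk; }\n}\nif (best) { best->x = det.x; best->y = det.y; }  // update the matched track\nelse { tracks.push_back({det.x, det.y}); }  // spawn a new track\n}\n}\n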

"},{"location":"perception/autoware_radar_object_tracker/#tracker-models","title":"Tracker Models","text":"

The tracker models used in this package vary based on the class of the detected object. See more details in the models.md.

"},{"location":"perception/autoware_radar_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_radar_object_tracker/#input","title":"Input","text":"Name Type Description ~/input autoware_perception_msgs::msg::DetectedObjects Detected objects /vector/map autoware_map_msgs::msg::LaneletMapBin Map data"},{"location":"perception/autoware_radar_object_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::TrackedObjects Tracked objects"},{"location":"perception/autoware_radar_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_radar_object_tracker/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description publish_rate double 10.0 The rate at which to publish the output messages world_frame_id string \"map\" The frame ID of the world coordinate system enable_delay_compensation bool false Whether to enable delay compensation. If set to true, output topic is published by timer with publish_rate. tracking_config_directory string \"./config/tracking/\" The directory containing the tracking configuration files enable_logging bool false Whether to enable logging logging_file_path string \"/tmp/association_log.json\" The path to the file where logs should be written tracker_lifetime double 1.0 The lifetime of the tracker in seconds use_distance_based_noise_filtering bool true Whether to use distance based filtering minimum_range_threshold double 70.0 Minimum distance threshold for filtering in meters use_map_based_noise_filtering bool true Whether to use map based filtering max_distance_from_lane double 5.0 Maximum distance from lane for filtering in meters max_angle_diff_from_lane double 0.785398 Maximum angle difference from lane for filtering in radians max_lateral_velocity double 5.0 Maximum lateral velocity for filtering in m/s can_assign_matrix array An array of integers used in the data association algorithm max_dist_matrix array An array of doubles used in the data association algorithm max_area_matrix array An array of doubles used in the data association algorithm min_area_matrix array An array of doubles used in the data association algorithm max_rad_matrix array An array of doubles used in the data association algorithm min_iou_matrix array An array of doubles used in the data association algorithm

See more details in the models.md.

"},{"location":"perception/autoware_radar_object_tracker/#tracker-parameters","title":"Tracker parameters","text":"

Currently, this package supports the following trackers:

Default settings for each tracker are defined in ./config/tracking/ and described in models.md.

"},{"location":"perception/autoware_radar_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_radar_object_tracker/models/","title":"models","text":""},{"location":"perception/autoware_radar_object_tracker/models/#models","title":"models","text":"

Tracking models can be selected via the ROS parameter ~tracking_model:

Each model has its own parameters, which can be set in the ros parameter server.

"},{"location":"perception/autoware_radar_object_tracker/models/#linear-constant-acceleration-model","title":"linear constant acceleration model","text":" \\[ \\begin{bmatrix} x_{k+1} \\\\ y_{k+1} \\\\ v_{x_{k+1}} \\\\ v_{y_{k+1}} \\\\ a_{x_{k+1}} \\\\ a_{y_{k+1}} \\end{bmatrix} = \\begin{bmatrix} 1 & 0 & dt & 0 & \\frac{1}{2}dt^2 & 0 \\\\ 0 & 1 & 0 & dt & 0 & \\frac{1}{2}dt^2 \\\\ 0 & 0 & 1 & 0 & dt & 0 \\\\ 0 & 0 & 0 & 1 & 0 & dt \\\\ 0 & 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 1 \\\\ \\end{bmatrix} \\begin{bmatrix} x_k \\\\ y_k \\\\ v_{x_k} \\\\ v_{y_k} \\\\ a_{x_k} \\\\ a_{y_k} \\end{bmatrix} + noise \\] "},{"location":"perception/autoware_radar_object_tracker/models/#constant-turn-rate-and-velocity-model","title":"constant turn rate and velocity model","text":"

This is just an idea and has not been implemented yet.

\\[ \\begin{align} x_{k+1} &= x_k + \\frac{v_k}{\\omega_k} (sin(\\theta_k + \\omega_k dt) - sin(\\theta_k)) \\\\ y_{k+1} &= y_k + \\frac{v_k}{\\omega_k} (cos(\\theta_k) - cos(\\theta_k + \\omega_k dt)) \\\\ v_{k+1} &= v_k \\\\ \\theta_{k+1} &= \\theta_k + \\omega_k dt \\\\ \\omega_{k+1} &= \\omega_k \\end{align} \\]"},{"location":"perception/autoware_radar_object_tracker/models/#noise-filtering","title":"Noise filtering","text":"

Radar sensors often produce noisy measurements, so we use the following filters to reduce false positive objects.

The figure below shows the current noise filtering process.

"},{"location":"perception/autoware_radar_object_tracker/models/#minimum-range-filter","title":"minimum range filter","text":"

In most cases, radar sensors are used together with other sensors such as LiDAR and camera, and are responsible for detecting objects far away. Therefore, we can filter out objects that are too close to the sensor.

The use_distance_based_noise_filtering parameter is used to enable/disable this filter, and the minimum_range_threshold parameter is used to set the threshold.
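
A minimal sketch of this check (an illustration, not the node's implementation), assuming object positions are expressed in a sensor-centered frame:

#include <cmath>\n\n// Returns true when the object survives the minimum range filter.\nbool keep_object(bool use_distance_based_noise_filtering, double x, double y, double minimum_range_threshold)\n{\nif (!use_distance_based_noise_filtering) { return true; }\n// objects closer than the threshold are treated as noise and dropped\nreturn std::hypot(x, y) >= minimum_range_threshold;\n}\n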

"},{"location":"perception/autoware_radar_object_tracker/models/#lanelet-based-filter","title":"lanelet based filter","text":"

With lanelet map information, we can filter out false positive objects that are unlikely to be important obstacles.

We filter out objects that satisfy the following conditions:

Each condition can be set by the following parameters:
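
As a hypothetical sketch, the conditions could be combined as below using the max_distance_from_lane, max_angle_diff_from_lane, and max_lateral_velocity parameters from the node parameter table; the real implementation derives the lane-relative quantities from the lanelet map:

#include <cmath>\n\n// Returns true when the object is judged as noise by the lanelet-based filter.\nbool is_lanelet_noise(\ndouble distance_from_lane, double angle_diff_from_lane, double lateral_velocity,\ndouble max_distance_from_lane, double max_angle_diff_from_lane, double max_lateral_velocity)\n{\nreturn distance_from_lane > max_distance_from_lane ||\nstd::abs(angle_diff_from_lane) > max_angle_diff_from_lane ||\nstd::abs(lateral_velocity) > max_lateral_velocity;\n}\n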

"},{"location":"perception/autoware_radar_tracks_msgs_converter/","title":"radar_tracks_msgs_converter","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#radar_tracks_msgs_converter","title":"radar_tracks_msgs_converter","text":"

This package converts radar_msgs/msg/RadarTracks into autoware_perception_msgs/msg/DetectedObject and autoware_perception_msgs/msg/TrackedObject.

"},{"location":"perception/autoware_radar_tracks_msgs_converter/#design","title":"Design","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#background","title":"Background","text":"

Autoware uses radar_msgs/msg/RadarTracks.msg as the radar object input data. To make radar object data easy to use in the Autoware perception module, radar_tracks_msgs_converter converts the message type from radar_msgs/msg/RadarTracks.msg to autoware_perception_msgs/msg/DetectedObject. In addition, because many detection modules assume the base_link frame, radar_tracks_msgs_converter also provides frame_id transformation.

"},{"location":"perception/autoware_radar_tracks_msgs_converter/#note","title":"Note","text":"

radar_tracks_msgs_converter converts the label from radar_msgs/msg/RadarTrack.msg to the Autoware label. The label IDs are defined as below.

Label RadarTrack ID Autoware ID UNKNOWN 32000 0 CAR 32001 1 TRUCK 32002 2 BUS 32003 3 TRAILER 32004 4 MOTORCYCLE 32005 5 BICYCLE 32006 6 PEDESTRIAN 32007 7

Additional vendor-specific classifications are permitted starting from 32000 in radar_msgs/msg/RadarTrack.msg. Autoware object labels are defined in ObjectClassification.
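
A minimal sketch of this mapping using the numeric IDs from the table above (the actual node uses the message constants rather than raw literals):

#include <cstdint>\n\nuint8_t convert_label(uint16_t radar_track_classification)\n{\nswitch (radar_track_classification) {\ncase 32001: return 1;  // CAR\ncase 32002: return 2;  // TRUCK\ncase 32003: return 3;  // BUS\ncase 32004: return 4;  // TRAILER\ncase 32005: return 5;  // MOTORCYCLE\ncase 32006: return 6;  // BICYCLE\ncase 32007: return 7;  // PEDESTRIAN\ndefault: return 0;  // UNKNOWN (32000 and vendor-specific IDs)\n}\n}\n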

"},{"location":"perception/autoware_radar_tracks_msgs_converter/#interface","title":"Interface","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#input","title":"Input","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#output","title":"Output","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#parameter-summary","title":"Parameter Summary","text":"Name Type Description Default Range update_rate_hz float The update rate [hz] of the output topic 20.0 \u22650.0 new_frame_id string The header frame_id of the output topic base_link N/A use_twist_compensation boolean Flag to enable the linear compensation of ego vehicle's twist false N/A use_twist_yaw_compensation boolean Flag to enable the compensation of yaw rotation of ego vehicle's twist false N/A static_object_speed_threshold float Threshold to treat detected objects as static objects 1.0 N/A"},{"location":"perception/autoware_radar_tracks_msgs_converter/#parameter-description","title":"Parameter Description","text":"

This parameter is the update rate for the onTimer function. It should be the same as the frame rate of the input topics.

This parameter is the header frame_id of the output topic.

This parameter is the flag to compensate for the linear motion of the ego vehicle. If the parameter is true, the twist of the output objects is compensated by the ego vehicle's linear motion.

This parameter is the flag to compensate for the yaw rotation of the ego vehicle. If the parameter is true, the ego motion compensation also considers the yaw motion of the ego vehicle.

This parameter is the threshold to determine the is_stationary flag. If the velocity is lower than this parameter, the is_stationary flag of DetectedObject is set to true and the object is treated as static.
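
A rough sketch of how these parameters interact, reduced to a single longitudinal velocity and assuming the radar reports velocity relative to the ego vehicle (the actual node handles the full 2D geometry and the yaw-rate term enabled by use_twist_yaw_compensation):

#include <cmath>\n\nstruct CompensatedTwist { double velocity; bool is_stationary; };\n\nCompensatedTwist compensate(\ndouble object_vx, double ego_vx, bool use_twist_compensation,\ndouble static_object_speed_threshold)\n{\ndouble v = object_vx;\nif (use_twist_compensation) {\nv += ego_vx;  // add back the ego motion to obtain the ground-relative velocity\n}\nreturn {v, std::abs(v) < static_object_speed_threshold};\n}\n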

"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/","title":"low_intensity_cluster_filter","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#low_intensity_cluster_filter","title":"low_intensity_cluster_filter","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#purpose","title":"Purpose","text":"

The low_intensity_cluster_filter is a node that filters clusters based on the intensity of their pointcloud.

It mainly focuses on filtering out unknown objects with a very low intensity point cloud, such as false detections caused by raindrops or water splashes from the ego vehicle or other fast-moving vehicles.

"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#input","title":"Input","text":"Name Type Description input/object tier4_perception_msgs::msg::DetectedObjectsWithFeature input detected objects"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#output","title":"Output","text":"Name Type Description output/object tier4_perception_msgs::msg::DetectedObjectsWithFeature filtered detected objects"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered. max_x float 60.00 Maximum of x of the filter effective range min_x float -20.00 Minimum of x of the filter effective range max_y float 20.00 Maximum of y of the filter effective range min_y float -20.00 Minium of y of the filter effective range intensity_threshold float 1.0 The threshold of average intensity for filter existence_probability_threshold float 0.2 The existence probability threshold to apply the filter"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_shape_estimation/","title":"autoware_shape_estimation","text":""},{"location":"perception/autoware_shape_estimation/#autoware_shape_estimation","title":"autoware_shape_estimation","text":""},{"location":"perception/autoware_shape_estimation/#purpose","title":"Purpose","text":"

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

"},{"location":"perception/autoware_shape_estimation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_shape_estimation/#fitting-algorithms","title":"Fitting algorithms","text":" "},{"location":"perception/autoware_shape_estimation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_shape_estimation/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster"},{"location":"perception/autoware_shape_estimation/#output","title":"Output","text":"Name Type Description output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape"},{"location":"perception/autoware_shape_estimation/#parameters","title":"Parameters","text":"Name Type Description Default Range use_corrector boolean The flag to apply rule-based corrector. true N/A use_filter boolean The flag to apply rule-based filter true N/A use_vehicle_reference_yaw boolean The flag to use vehicle reference yaw for corrector false N/A use_vehicle_reference_shape_size boolean The flag to use vehicle reference shape size false N/A use_boost_bbox_optimizer boolean The flag to use boost bbox optimizer false N/A model_params.use_ml_shape_estimator boolean The flag to apply use ml bounding box estimator. true N/A model_params.minimum_points integer The minimum number of points to fit a bounding box. 16 N/A model_params.precision string The precision of the model. fp32 N/A model_params.batch_size integer The batch size of the model. 32 N/A model_params.build_only boolean The flag to build the model only. false N/A"},{"location":"perception/autoware_shape_estimation/#ml-based-shape-implementation","title":"ML Based Shape Implementation","text":"

The model takes a point cloud and an object label (provided by camera detections or Apollo instance segmentation) as input and outputs the 3D bounding box of the object.

The ML-based shape estimation algorithm uses a PointNet model as a backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model combines an STN (Spatial Transformer Network), which learns the transformation of the input point cloud into a canonical space, with a PointNet that predicts the 3D bounding box of the object. The bounding box estimation part of the Frustum PointNets for 3D Object Detection from RGB-D Data paper was used as a reference.

The model predicts the following outputs for each object:

"},{"location":"perception/autoware_shape_estimation/#training-ml-based-shape-estimation-model","title":"Training ML Based Shape Estimation Model","text":"

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

"},{"location":"perception/autoware_shape_estimation/#preparing-the-dataset","title":"Preparing the Dataset","text":""},{"location":"perception/autoware_shape_estimation/#install-mmdetection3d-prerequisites","title":"Install MMDetection3D prerequisites","text":"

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y\nconda activate train-shape-estimation\n

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch\n
"},{"location":"perception/autoware_shape_estimation/#install-mmdetection3d","title":"Install mmdetection3d","text":"

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim\nmim install mmengine\nmim install 'mmcv>=2.0.0rc4'\nmim install 'mmdet>=3.0.0rc5, <3.3.0'\n

Step 2. Install Autoware's MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git\ncd mmdetection3d\npip install -v -e .\n
"},{"location":"perception/autoware_shape_estimation/#preparing-nuscenes-dataset-for-training","title":"Preparing NuScenes dataset for training","text":"

Step 1. Download the NuScenes dataset from the official website and extract the dataset to a folder of your choice.

Note: The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.

Step 2. Create a symbolic link to the dataset folder

ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/\n

Step 3. Prepare the NuScenes data by running:

cd mmdetection3d\npython tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --only-gt-database True\n
"},{"location":"perception/autoware_shape_estimation/#clone-bounding-box-estimator-model","title":"Clone Bounding Box Estimator model","text":"
git clone https://github.com/autowarefoundation/bbox_estimator.git\n
"},{"location":"perception/autoware_shape_estimation/#split-the-dataset-into-training-and-validation-sets","title":"Split the dataset into training and validation sets","text":"
cd bbox_estimator\npython3 utils/split_dbinfos.py --dataset_path /path/to/mmdetection3d/data/nuscenes --classes 'car' 'truck' 'bus' 'trailer'  --train_ratio 0.8\n
"},{"location":"perception/autoware_shape_estimation/#training-and-deploying-the-model","title":"Training and Deploying the model","text":""},{"location":"perception/autoware_shape_estimation/#training-the-model","title":"Training the model","text":"
# Detailed training options can be found in the training script\n# For more details, run `python3 train.py --help`\npython3 train.py --dataset_path /path/to/mmdetection3d/data/nuscenes\n
"},{"location":"perception/autoware_shape_estimation/#deploying-the-model","title":"Deploying the model","text":"
# Convert the trained model to ONNX format\npython3 onnx_converter.py --weight_path /path/to/best_checkpoint.pth --output_path /path/to/output.onnx\n

Give the output path of the ONNX model to the model_path parameter in the shape_estimation node launch file.

"},{"location":"perception/autoware_shape_estimation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD

"},{"location":"perception/autoware_shape_estimation/#referencesexternal-links","title":"References/External links","text":"

L-shape fitting implementation of the paper:

@conference{Zhang-2017-26536,\nauthor = {Xiao Zhang and Wenda Xu and Chiyu Dong and John M. Dolan},\ntitle = {Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners},\nbooktitle = {2017 IEEE Intelligent Vehicles Symposium},\nyear = {2017},\nmonth = {June},\nkeywords = {autonomous driving, laser scanner, perception, segmentation},\n}\n

Frustum PointNets for 3D Object Detection from RGB-D Data:

@inproceedings{qi2018frustum,\ntitle={Frustum pointnets for 3d object detection from rgb-d data},\nauthor={Qi, Charles R and Liu, Wei and Wu, Chenxia and Su, Hao and Guibas, Leonidas J},\nbooktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},\npages={918--927},\nyear={2018}\n}\n
"},{"location":"perception/autoware_simple_object_merger/","title":"autoware_simple_object_merger","text":""},{"location":"perception/autoware_simple_object_merger/#autoware_simple_object_merger","title":"autoware_simple_object_merger","text":"

This package can merge multiple autoware_perception_msgs/msg/DetectedObjects topics with low computational cost.

"},{"location":"perception/autoware_simple_object_merger/#design","title":"Design","text":""},{"location":"perception/autoware_simple_object_merger/#background","title":"Background","text":"

object_merger is mainly used to merge DetectedObjects. It has two characteristics. First, object_merger solves a data association problem, for example with the Hungarian algorithm, which incurs computational cost. Second, object_merger can handle only two DetectedObjects topics per node. To merge six DetectedObjects topics, six object_merger nodes currently need to be launched.

Therefore, autoware_simple_object_merger aims to merge multiple DetectedObjects with low computational cost. The package does not use a data association algorithm, which reduces the computational cost, and it can handle more than two topics in one node, which avoids launching a large number of nodes.

"},{"location":"perception/autoware_simple_object_merger/#use-case","title":"Use case","text":"

autoware_simple_object_merger can be used for multiple radar detections. By combining multiple radar topics into one topic, the pipeline for faraway detection with radar becomes simpler.

"},{"location":"perception/autoware_simple_object_merger/#limitation","title":"Limitation","text":"

Merged objects are not published until data from all topics has been received at initialization. In addition, to handle dropped or delayed sensor data, this package has a parameter to judge timeout. When the latest data of a topic is older than the timeout parameter, that topic is not merged into the output objects. Under the current specification, if all topic data is received at first and some data drops afterward, the merged objects are published without the objects judged as timed out. The timeout parameter should be determined from the sensor cycle time.

Because this package does not perform matching, the output may contain overlapping objects depending on the input objects. Therefore, the output objects should only be used with post-processing. For now, clustering can be used as post-processing.

"},{"location":"perception/autoware_simple_object_merger/#interface","title":"Interface","text":""},{"location":"perception/autoware_simple_object_merger/#input","title":"Input","text":"

Input topics are defined by the input_topics parameter (List[string]). The type of the input topics is std::vector<autoware_perception_msgs/msg/DetectedObjects.msg>.

"},{"location":"perception/autoware_simple_object_merger/#output","title":"Output","text":""},{"location":"perception/autoware_simple_object_merger/#parameters","title":"Parameters","text":"

This parameter is the update rate for the onTimer function. It should be the same as the frame rate of the input topics.

This parameter is the header frame_id of the output topic. If the output topics are used by the perception module, it should be set to "base_link".

This parameter is the threshold for the timeout judgment. If the time difference between the first topic in input_topics and another input topic exceeds this parameter, the objects from that topic are not merged into the output objects.

  for (size_t i = 0; i < input_topic_size; i++) {\ndouble time_diff = rclcpp::Time(objects_data_.at(i)->header.stamp).seconds() -\nrclcpp::Time(objects_data_.at(0)->header.stamp).seconds();\nif (std::abs(time_diff) < node_param_.timeout_threshold) {\n// merge objects\n}\n}\n

This parameter is the list of input topic names. For example, when this package is used for radar objects, a configuration like

input_topics:\n[\n\"/sensing/radar/front_center/detected_objects\",\n\"/sensing/radar/front_left/detected_objects\",\n\"/sensing/radar/rear_left/detected_objects\",\n\"/sensing/radar/rear_center/detected_objects\",\n\"/sensing/radar/rear_right/detected_objects\",\n\"/sensing/radar/front_right/detected_objects\",\n]\n

can be set in the config yaml file. For now, the time difference is calculated from the header time between the first topic in input_topics and each other input topic, so the most important objects to detect should be placed first in the input_topics list.

"},{"location":"perception/autoware_tensorrt_classifier/","title":"TensorRT Classification for Efficient Dynamic Batched Inference","text":""},{"location":"perception/autoware_tensorrt_classifier/#tensorrt-classification-for-efficient-dynamic-batched-inference","title":"TensorRT Classification for Efficient Dynamic Batched Inference","text":""},{"location":"perception/autoware_tensorrt_classifier/#purpose","title":"Purpose","text":"

This package classifies arbitrary categories using TensorRT for efficient and faster inference. Specifically, it optimizes preprocessing for efficient inference on embedded platforms. Moreover, it supports dynamic batched inference on GPUs and DLAs.

"},{"location":"perception/autoware_tensorrt_common/","title":"autoware_tensorrt_common","text":""},{"location":"perception/autoware_tensorrt_common/#autoware_tensorrt_common","title":"autoware_tensorrt_common","text":"

This package provides a high-level API to work with TensorRT. This library simplifies the process of loading, building, and executing TensorRT inference engines using ONNX models. It also includes utilities for profiling and managing TensorRT execution contexts, making it easier to integrate TensorRT-based packages in Autoware.

"},{"location":"perception/autoware_tensorrt_common/#usage","title":"Usage","text":"

Here is an example usage of the library. For the full API documentation, please refer to the doxygen documentation (see header file).

#include <autoware/tensorrt_common/tensorrt_common.hpp>\n\n#include <memory>\n#include <utility>\n#include <vector>\n\nusing autoware::tensorrt_common::TrtCommon;\nusing autoware::tensorrt_common::TrtCommonConfig;\nusing autoware::tensorrt_common::TensorInfo;\nusing autoware::tensorrt_common::NetworkIO;\nusing autoware::tensorrt_common::ProfileDims;\n\nstd::unique_ptr<TrtCommon> trt_common_;\n
"},{"location":"perception/autoware_tensorrt_common/#create-a-tensorrt-common-instance-and-setup-engine","title":"Create a tensorrt common instance and setup engine","text":"
trt_common_ = std::make_unique<TrtCommon>(TrtCommonConfig(\"/path/to/onnx/model.onnx\"));\ntrt_common_->setup();\n
trt_common_ = std::make_unique<TrtCommon>(TrtCommonConfig(\"/path/to/onnx/model.onnx\", \"fp16\", \"/path/to/engine/model.engine\", (1ULL << 30U), -1, false));\n\nstd::vector<NetworkIO> network_io{\nNetworkIO(\"sample_input\", {3, {-1, 64, 512}}), NetworkIO(\"sample_output\", {1, {50}})};\nstd::vector<ProfileDims> profile_dims{\nProfileDims(\"sample_input\", {3, {1, 64, 512}}, {3, {3, 64, 512}}, {3, {9, 64, 512}})};\n\nauto network_io_ptr = std::make_unique<std::vector<NetworkIO>>(network_io);\nauto profile_dims_ptr = std::make_unique<std::vector<ProfileDims>>(profile_dims);\n\ntrt_common_->setup(std::move(profile_dims_ptr), std::move(network_io_ptr));\n

By defining network IO names and dimensions, an extra shape validation is performed after building or loading the engine. This is useful to ensure that the model is compatible with the current preprocessing code, which might consist of operations dependent on tensor shapes.

Profile dimensions are used to specify the min, opt, and max dimensions for dynamic shapes.

Network IO and/or profile dimensions can be omitted if not needed.

"},{"location":"perception/autoware_tensorrt_common/#setting-input-and-output-tensors","title":"Setting input and output tensors","text":"
bool success = true;\nsuccess &= trt_common_->setTensor(\"sample_input\", sample_input_d_.get(), nvinfer1::Dims{3, {var_size, 64, 512}});\nsuccess &= trt_common_->setTensor(\"sample_output\", sample_output_d_.get());\nreturn success;\n
"},{"location":"perception/autoware_tensorrt_common/#execute-inference","title":"Execute inference","text":"
auto success = trt_common_->enqueueV3(stream_);\nreturn success;\n
"},{"location":"perception/autoware_tensorrt_yolox/","title":"autoware_tensorrt_yolox","text":""},{"location":"perception/autoware_tensorrt_yolox/#autoware_tensorrt_yolox","title":"autoware_tensorrt_yolox","text":""},{"location":"perception/autoware_tensorrt_yolox/#purpose","title":"Purpose","text":"

This package detects target objects (e.g., cars, trucks, bicycles, and pedestrians) and segments target classes (e.g., cars, trucks, buses, pedestrians, buildings, vegetation, roads, and sidewalks) in an image, based on a YOLOX model with a multi-header structure.

"},{"location":"perception/autoware_tensorrt_yolox/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_tensorrt_yolox/#cite","title":"Cite","text":"

Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, \"YOLOX: Exceeding YOLO Series in 2021\", arXiv preprint arXiv:2107.08430, 2021 [ref]

"},{"location":"perception/autoware_tensorrt_yolox/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_tensorrt_yolox/#input","title":"Input","text":"Name Type Description in/image sensor_msgs/Image The input image"},{"location":"perception/autoware_tensorrt_yolox/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/image sensor_msgs/Image The image with 2D bounding boxes for visualization out/mask sensor_msgs/Image The semantic segmentation mask out/color_mask sensor_msgs/Image The colorized image of semantic segmentation mask for visualization"},{"location":"perception/autoware_tensorrt_yolox/#parameters","title":"Parameters","text":"Name Type Description Default Range model_path string Path to onnx model. \\((var data_path)/tensorrt_yolox/\\)(var model_name).onnx N/A label_path string Path to label file. $(var data_path)/tensorrt_yolox/label.txt N/A score_threshold float A threshold value of existence probability score, all of objects with score less than this threshold are ignored. 0.35 \u22650.0\u22641.0 nms_threshold float A threshold value of NMS. 0.7 \u22650.0\u22641.0 precision string Operation precision to be used on inference. Valid value is one of: [fp32, fp16, int8]. int8 N/A calibration_algorithm string Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: [Entropy, (Legacy Percentile), MinMax]. Entropy dla_core_id float If positive ID value is specified, the node assign inference task to the DLA core. -1 N/A quantize_first_layer boolean If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8. False N/A quantize_last_layer boolean If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8. False N/A profile_per_layer boolean If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. False N/A clip_value float If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration. 6.0 N/A preprocess_on_gpu boolean If true, pre-processing is performed on GPU. True N/A gpu_id integer GPU ID for selecting CUDA device 0 N/A calibration_image_list_path string Path to a file which contains path to images. Those images will be used for int8 quantization. N/A Name Type Description Default Range ----------------------------- --------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------- --------------- model_path string Path to onnx model. \\((var data_path)/tensorrt_yolox/\\)(var model_name).onnx N/A label_path string Path to label file. $(var data_path)/tensorrt_yolox/label.txt N/A score_threshold float A threshold value of existence probability score, all of objects with score less than this threshold are ignored. 0.35 \u22650.0\u22641.0 nms_threshold float A threshold value of NMS. 
0.7 \u22650.0\u22641.0 precision string Operation precision to be used on inference. Valid value is one of: [fp32, fp16, int8]. fp16 N/A calibration_algorithm string Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: [Entropy, (Legacy Percentile), MinMax]. MinMax dla_core_id float If positive ID value is specified, the node assign inference task to the DLA core. -1 N/A quantize_first_layer boolean If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8. False N/A quantize_last_layer boolean If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8. False N/A profile_per_layer boolean If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. False N/A clip_value float If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration. 0.0 N/A preprocess_on_gpu boolean If true, pre-processing is performed on GPU. True N/A gpu_id integer GPU ID for selecting CUDA device 0 N/A calibration_image_list_path string Path to a file which contains path to images. Those images will be used for int8 quantization. N/A"},{"location":"perception/autoware_tensorrt_yolox/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The label contained in the detected 2D bounding boxes (i.e., out/objects) will be one of the following:

If other labels (case insensitive) are contained in the file specified via the label_file parameter, those are labeled as UNKNOWN, while detected rectangles are drawn in the visualization result (out/image).

The semantic segmentation mask is a grayscale image in which each pixel is the index of one of the following classes:

index semantic name 0 road 1 building 2 wall 3 obstacle 4 traffic_light 5 traffic_sign 6 person 7 vehicle 8 bike 9 road 10 sidewalk 11 roadPaint 12 curbstone 13 crosswalk_others 14 vegetation 15 sky"},{"location":"perception/autoware_tensorrt_yolox/#onnx-model","title":"Onnx model","text":"

A sample model (named yolox-tiny.onnx) is downloaded by the ansible script during the environment preparation stage; if it is not, please follow Manual downloading of artifacts. To accelerate non-maximum suppression (NMS), which is a common post-process after object detection inference, the EfficientNMS_TRT module is attached after the original YOLOX (tiny) network. The EfficientNMS_TRT module contains fixed values for score_threshold and nms_threshold, hence these parameters are ignored when users specify ONNX models that include this module.

This package accepts both ONNX models with EfficientNMS_TRT attached and models published from the official YOLOX repository (we refer to the latter as "plain" models).

In addition to yolox-tiny.onnx, a custom model named yolox-sPlus-opt-pseudoV2-T4-960x960-T4-seg16cls is also available. This model has a multi-header structure based on YOLOX-s and is tuned to perform more accurate detection at an execution speed almost comparable with yolox-tiny. To get better results with this model, users are recommended to use specific running arguments such as precision:=int8, calibration_algorithm:=Entropy, clip_value:=6.0. Users can refer to launch/yolox_sPlus_opt.launch.xml to see how this model can be used. Besides the detection result, this model also outputs an image semantic segmentation result for point cloud filtering purposes.

All models are automatically converted to TensorRT format. The converted files are saved in the same directory as the specified ONNX files with the .engine filename extension and are reused from the next run. The conversion process may take a while (typically 10 to 20 minutes), and the inference process is blocked until the conversion completes, so it will take some time until detection results are published (or even appear in the topic list) on the first run.

"},{"location":"perception/autoware_tensorrt_yolox/#package-acceptable-model-generation","title":"Package acceptable model generation","text":"

To convert a user's own model saved in PyTorch's pth format into ONNX, users can use the converter offered by the official repository. For convenience, only the procedure is described below. Please refer to the official document for more detail.

"},{"location":"perception/autoware_tensorrt_yolox/#for-plain-models","title":"For plain models","text":"
  1. Install dependency

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\n
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth\n
"},{"location":"perception/autoware_tensorrt_yolox/#for-efficientnms_trt-embedded-models","title":"For EfficientNMS_TRT embedded models","text":"
  1. Install dependency

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\npip3 install git+ssh://git@github.com/wep21/yolox_onnx_modifier.git --user\n
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth \\\n--decode_in_inference\n
  3. Embed EfficientNMS_TRT to the end of YOLOX

    yolox_onnx_modifier YOUR_YOLOX.onnx -o YOUR_YOLOX_WITH_NMS.onnx\n
"},{"location":"perception/autoware_tensorrt_yolox/#label-file","title":"Label file","text":"

A sample label file (named label.txt) and a semantic segmentation color map file (named semseg_color_map.csv) are also downloaded automatically during the environment preparation process (NOTE: this file is incompatible with models that output labels for the COCO dataset, e.g., models from the official YOLOX repository).

This file represents the correspondence between the class index (an integer output by the YOLOX network) and the class label (a human-readable string). This package maps class IDs (starting from 0) to labels according to the order in this file.
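
A minimal sketch of this mapping: line i of the label file is taken as the label of class ID i, counting from 0 (the function name is illustrative, not the package's actual API).

#include <fstream>\n#include <string>\n#include <vector>\n\nstd::vector<std::string> load_labels(const std::string & label_path)\n{\nstd::vector<std::string> labels;\nstd::ifstream file(label_path);\nstd::string line;\nwhile (std::getline(file, line)) {\nlabels.push_back(line);  // labels[class_id] is the human-readable name\n}\nreturn labels;\n}\n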

"},{"location":"perception/autoware_tensorrt_yolox/#reference-repositories","title":"Reference repositories","text":""},{"location":"perception/autoware_tracking_object_merger/","title":"Tracking Object Merger","text":""},{"location":"perception/autoware_tracking_object_merger/#tracking-object-merger","title":"Tracking Object Merger","text":""},{"location":"perception/autoware_tracking_object_merger/#purpose","title":"Purpose","text":"

This package tries to merge two sets of tracking objects from different sensors.

"},{"location":"perception/autoware_tracking_object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Merging tracking objects from different sensors is a combination of data association and state fusion algorithms.

Detailed process depends on the merger policy.

"},{"location":"perception/autoware_tracking_object_merger/#decorative_tracker_merger","title":"decorative_tracker_merger","text":"

In decorative_tracker_merger, we assume there are dominant tracking objects and sub tracking objects. The name decorative means that sub tracking objects are used to complement the main objects.

Usually the dominant tracking objects are from LiDAR and sub tracking objects are from Radar or Camera.

The processing pipeline is shown below.

"},{"location":"perception/autoware_tracking_object_merger/#time-sync","title":"time sync","text":"

Sub objects (Radar or Camera) often have a higher frequency than the dominant objects (LiDAR), so the time of the sub objects needs to be synchronized to that of the dominant objects.
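
The core of this synchronization can be sketched as a constant-velocity prediction of the sub object to the dominant object's stamp (the types and field names below are illustrative, not the node's actual API):

struct State2D { double x, y, vx, vy; };\n\n// dt = dominant object stamp - sub object stamp [s]\nState2D predict_to(const State2D & sub, double dt)\n{\nreturn {sub.x + sub.vx * dt, sub.y + sub.vy * dt, sub.vx, sub.vy};\n}\n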

"},{"location":"perception/autoware_tracking_object_merger/#data-association","title":"data association","text":"

In the data association, we use the following rules to determine whether two tracking objects are the same object.

"},{"location":"perception/autoware_tracking_object_merger/#tracklet-update","title":"tracklet update","text":"

Sub tracking objects are merged into dominant tracking objects.

Depending on the input sensor state of the tracklet, we update the tracklet state with different rules, as shown in the table below.

state\\priority 1st 2nd 3rd Kinematics except velocity LiDAR Radar Camera Forward velocity Radar LiDAR Camera Object classification Camera LiDAR Radar"},{"location":"perception/autoware_tracking_object_merger/#tracklet-management","title":"tracklet management","text":"

We use the existence_probability to manage tracklets.

These parameters can be set in config/decorative_tracker_merger.param.yaml.

tracker_state_parameter:\nremove_probability_threshold: 0.3\npublish_probability_threshold: 0.6\ndefault_lidar_existence_probability: 0.7\ndefault_radar_existence_probability: 0.6\ndefault_camera_existence_probability: 0.6\ndecay_rate: 0.1\nmax_dt: 1.0\n
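
A hypothetical sketch of how these values could drive the tracklet life cycle: the probability decays while no measurement arrives, and the result is compared against the publish and remove thresholds (the linear decay form below is an assumption, not the node's exact formula):

#include <algorithm>\n\ndouble decayed_probability(double probability, double decay_rate, double dt, double max_dt)\n{\nconst double effective_dt = std::min(dt, max_dt);  // cap the elapsed time at max_dt\nreturn std::max(0.0, probability - decay_rate * effective_dt);\n}\n// publish while p >= publish_probability_threshold (0.6);\n// remove once p < remove_probability_threshold (0.3).\n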
"},{"location":"perception/autoware_tracking_object_merger/#inputparameters","title":"input/parameters","text":"topic name message type description ~input/main_object autoware_perception_msgs::TrackedObjects Dominant tracking objects. Output will be published with this dominant object stamps. ~input/sub_object autoware_perception_msgs::TrackedObjects Sub tracking objects. output/object autoware_perception_msgs::TrackedObjects Merged tracking objects. debug/interpolated_sub_object autoware_perception_msgs::TrackedObjects Interpolated sub tracking objects.

Default parameters are set in config/decorative_tracker_merger.param.yaml.

parameter name description default value base_link_frame_id base link frame id. This is used to transform the tracking object. \"base_link\" time_sync_threshold time sync threshold. If the time difference between two tracking objects is smaller than this value, we consider these two tracking objects to be the same object. 0.05 sub_object_timeout_sec sub object timeout. If the sub object is not updated for this time, we consider that this object no longer exists. 0.5 main_sensor_type main sensor type. This is used to determine the dominant tracking object. \"lidar\" sub_sensor_type sub sensor type. This is used to determine the sub tracking object. \"radar\" tracker_state_parameter tracker state parameter. This is used to manage the tracklet. "},{"location":"perception/autoware_tracking_object_merger/#tuning","title":"tuning","text":"

As explained in tracklet management, this tracker merger tends to maintain both input tracking objects.

If there are many false positive tracking objects,

"},{"location":"perception/autoware_tracking_object_merger/#equivalent_tracker_merger","title":"equivalent_tracker_merger","text":"

This is future work.

"},{"location":"perception/autoware_traffic_light_arbiter/","title":"autoware_traffic_light_arbiter","text":""},{"location":"perception/autoware_traffic_light_arbiter/#autoware_traffic_light_arbiter","title":"autoware_traffic_light_arbiter","text":""},{"location":"perception/autoware_traffic_light_arbiter/#purpose","title":"Purpose","text":"

This package receives traffic signals from perception and external (e.g., V2X) components and combines them using either a confidence-based or an external-preference-based approach.

"},{"location":"perception/autoware_traffic_light_arbiter/#trafficlightarbiter","title":"TrafficLightArbiter","text":"

A node that merges the traffic light/signal states from image recognition and external (e.g., V2X) systems and provides the result to the planning component.

"},{"location":"perception/autoware_traffic_light_arbiter/#signal-match-validator","title":"Signal Match Validator","text":"

When enable_signal_matching is set to true, this node validates the match between perception signals and external signals. The table below outlines how the matching process determines the output based on the combination of perception and external signal colors. Each cell represents the outcome when a specific color from a perception signal (columns) intersects with a color from an external signal (rows).

External \\ Perception RED AMBER GREEN UNKNOWN Not Received RED RED UNKNOWN UNKNOWN UNKNOWN UNKNOWN AMBER UNKNOWN AMBER UNKNOWN UNKNOWN UNKNOWN GREEN UNKNOWN UNKNOWN GREEN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN Not Received UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN"},{"location":"perception/autoware_traffic_light_arbiter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_arbiter/#input","title":"Input","text":"Name Type Description ~/sub/vector_map autoware_map_msgs::msg::LaneletMapBin The vector map to get valid traffic signal ids. ~/sub/perception_traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The traffic signals from the image recognition pipeline. ~/sub/external_traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The traffic signals from an external system."},{"location":"perception/autoware_traffic_light_arbiter/#output","title":"Output","text":"Name Type Description ~/pub/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The merged traffic signal state."},{"location":"perception/autoware_traffic_light_arbiter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_arbiter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description external_delay_tolerance double 5.0 The duration in seconds an external message is considered valid for merging in comparison with current time external_time_tolerance double 5.0 The duration in seconds an external message is considered valid for merging in comparison with a timestamp of perception message perception_time_tolerance double 1.0 The duration in seconds a perception message is considered valid for merging in comparison with a timestamp of external message external_priority bool false Whether or not externals signals take precedence over perception-based ones. If false, the merging uses confidence as a criteria enable_signal_matching bool false Decide whether to validate the match between perception signals and external signals. If set to true, verify that the colors match and only publish them if they are identical"},{"location":"perception/autoware_traffic_light_classifier/","title":"autoware_traffic_light_classifier","text":""},{"location":"perception/autoware_traffic_light_classifier/#autoware_traffic_light_classifier","title":"autoware_traffic_light_classifier","text":""},{"location":"perception/autoware_traffic_light_classifier/#purpose","title":"Purpose","text":"

autoware_traffic_light_classifier is a package for classifying traffic light labels using a cropped image around the traffic light. This package has two classifier models: cnn_classifier and hsv_classifier.

"},{"location":"perception/autoware_traffic_light_classifier/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If the height and width of ~/input/rois are 0, the color, shape, and confidence of ~/output/traffic_signals become UNKNOWN, CIRCLE, and 0.0, respectively. If ~/input/rois is judged as backlight, the color, shape, and confidence of ~/output/traffic_signals become UNKNOWN, UNKNOWN, and 0.0.
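
This fallback behavior can be sketched as follows (the types and function below are illustrative, not the node's actual API):

#include <string>\n\nstruct Result { std::string color, shape; double confidence; };\n\nResult classify_roi(unsigned width, unsigned height, bool is_backlight)\n{\nif (width == 0 || height == 0) { return {\"UNKNOWN\", \"CIRCLE\", 0.0}; }\nif (is_backlight) { return {\"UNKNOWN\", \"UNKNOWN\", 0.0}; }\n// ... otherwise run cnn_classifier or hsv_classifier on the cropped image ...\nreturn {\"GREEN\", \"CIRCLE\", 0.9};  // placeholder result\n}\n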

"},{"location":"perception/autoware_traffic_light_classifier/#cnn_classifier","title":"cnn_classifier","text":"

Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. We trained classifiers for vehicular signals and pedestrian signals separately. For vehicular signals, a total of 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning.

Name Input Size Test Accuracy EfficientNet-b1 128 x 128 99.76% MobileNet-v2 224 x 224 99.81%

For pedestrian signals, a total of 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here:

Name Input Size Test Accuracy EfficientNet-b1 128 x 128 97.89% MobileNet-v2 224 x 224 99.10%"},{"location":"perception/autoware_traffic_light_classifier/#hsv_classifier","title":"hsv_classifier","text":"

Traffic light colors (green, yellow, and red) are classified in the HSV color model.

"},{"location":"perception/autoware_traffic_light_classifier/#about-label","title":"About Label","text":"

The message type is designed to comply with the unified road signs proposed at the Vienna Convention. This idea has also been proposed in Autoware.Auto.

There are rules for naming the labels that nodes receive. One traffic light is represented by the following comma-separated string: color1-shape1, color2-shape2.

For example, the simple red and red cross traffic light label must be expressed as \"red-circle, red-cross\".
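
A small sketch of parsing this naming convention (a helper written for illustration, not the package's actual parser):

#include <sstream>\n#include <string>\n#include <utility>\n#include <vector>\n\n// \"red-circle, red-cross\" -> {(\"red\", \"circle\"), (\"red\", \"cross\")}\nstd::vector<std::pair<std::string, std::string>> parse_label(const std::string & label)\n{\nstd::vector<std::pair<std::string, std::string>> elements;\nstd::stringstream ss(label);\nstd::string token;\nwhile (std::getline(ss, token, ',')) {\nconst auto lead = token.find_first_not_of(' ');\nif (lead == std::string::npos) { continue; }  // skip empty tokens\ntoken = token.substr(lead);  // trim leading spaces\nconst auto dash = token.find('-');\nif (dash == std::string::npos) { continue; }  // malformed element\nelements.emplace_back(token.substr(0, dash), token.substr(dash + 1));\n}\nreturn elements;\n}\n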

These colors and shapes are assigned to the message as follows:

"},{"location":"perception/autoware_traffic_light_classifier/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_classifier/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs::msg::Image input image ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray rois of traffic lights"},{"location":"perception/autoware_traffic_light_classifier/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals tier4_perception_msgs::msg::TrafficLightArray classified signals ~/output/debug/image sensor_msgs::msg::Image image for debugging"},{"location":"perception/autoware_traffic_light_classifier/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_classifier/#node-parameters","title":"Node Parameters","text":"Name Type Description classifier_type int If the value is 1, cnn_classifier is used data_path str Packages data and artifacts directory path backlight_threshold double If the intensity of light is grater than this threshold, the color and shape of the corresponding ROI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to 0.0. The value should be set in the range of [0.0, 1.0]. If you wouldn't like to use this feature, please set it to 1.0. classify_traffic_light_type int If the value is 0, vehicular signals are classified. If the value is 1, pedestrian signals are classified."},{"location":"perception/autoware_traffic_light_classifier/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/autoware_traffic_light_classifier/#cnn_classifier_1","title":"cnn_classifier","text":"Name Type Description classifier_label_path str path to the model file classifier_model_path str path to the label file classifier_precision str TensorRT precision, fp16 or int8 classifier_mean vector\\ 3-channel input image mean classifier_std vector\\ 3-channel input image std apply_softmax bool whether or not apply softmax"},{"location":"perception/autoware_traffic_light_classifier/#hsv_classifier_1","title":"hsv_classifier","text":"Name Type Description green_min_h int the minimum hue of green color green_min_s int the minimum saturation of green color green_min_v int the minimum value (brightness) of green color green_max_h int the maximum hue of green color green_max_s int the maximum saturation of green color green_max_v int the maximum value (brightness) of green color yellow_min_h int the minimum hue of yellow color yellow_min_s int the minimum saturation of yellow color yellow_min_v int the minimum value (brightness) of yellow color yellow_max_h int the maximum hue of yellow color yellow_max_s int the maximum saturation of yellow color yellow_max_v int the maximum value (brightness) of yellow color red_min_h int the minimum hue of red color red_min_s int the minimum saturation of red color red_min_v int the minimum value (brightness) of red color red_max_h int the maximum hue of red color red_max_s int the maximum saturation of red color red_max_v int the maximum value (brightness) of red color"},{"location":"perception/autoware_traffic_light_classifier/#training-traffic-light-classifier-model","title":"Training Traffic Light Classifier Model","text":""},{"location":"perception/autoware_traffic_light_classifier/#overview","title":"Overview","text":"

This guide provides detailed instructions on training a traffic light classifier model using the mmlab/mmpretrain repository and deploying it using mmlab/mmdeploy. If you wish to create a custom traffic light classifier model with your own dataset, please follow the steps outlined below.

"},{"location":"perception/autoware_traffic_light_classifier/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/autoware_traffic_light_classifier/#use-sample-dataset","title":"Use Sample Dataset","text":"

Autoware offers a sample dataset that illustrates the training procedures for traffic light classification. This dataset comprises 1045 images categorized into red, green, and yellow labels. To utilize this sample dataset, please download it from link and extract it to a designated folder of your choice.

"},{"location":"perception/autoware_traffic_light_classifier/#use-your-custom-dataset","title":"Use Your Custom Dataset","text":"

To train a traffic light classifier, adopt a structured subfolder format where each subfolder represents a distinct class. Below is an illustrative example of the dataset structure:

DATASET_ROOT\n    \u251c\u2500\u2500 TRAIN\n    \u2502    \u251c\u2500\u2500 RED\n    \u2502    \u2502   \u251c\u2500\u2500 001.png\n    \u2502    \u2502   \u251c\u2500\u2500 002.png\n    \u2502    \u2502   \u2514\u2500\u2500 ...\n    \u2502    \u2502\n    \u2502    \u251c\u2500\u2500 GREEN\n    \u2502    \u2502    \u251c\u2500\u2500 001.png\n    \u2502    \u2502    \u251c\u2500\u2500 002.png\n    \u2502    \u2502    \u2514\u2500\u2500...\n    \u2502    \u2502\n    \u2502    \u251c\u2500\u2500 YELLOW\n    \u2502    \u2502    \u251c\u2500\u2500 001.png\n    \u2502    \u2502    \u251c\u2500\u2500 002.png\n    \u2502    \u2502    \u2514\u2500\u2500...\n    \u2502    \u2514\u2500\u2500 ...\n    \u2502\n    \u251c\u2500\u2500 VAL\n    \u2502       \u2514\u2500\u2500...\n    \u2502\n    \u2502\n    \u2514\u2500\u2500 TEST\n           \u2514\u2500\u2500 ...\n
"},{"location":"perception/autoware_traffic_light_classifier/#installation","title":"Installation","text":""},{"location":"perception/autoware_traffic_light_classifier/#prerequisites","title":"Prerequisites","text":"

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name tl-classifier python=3.8 -y\nconda activate tl-classifier\n

Step 3. Install PyTorch

Please install a PyTorch version compatible with CUDA 11.6, as this is required by the current Autoware.

conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia\n
"},{"location":"perception/autoware_traffic_light_classifier/#install-mmlabmmpretrain","title":"Install mmlab/mmpretrain","text":"

Step 1. Install mmpretrain from source

cd ~/\ngit clone https://github.com/open-mmlab/mmpretrain.git\ncd mmpretrain\npip install -U openmim && mim install -e .\n
"},{"location":"perception/autoware_traffic_light_classifier/#training","title":"Training","text":"

MMPretrain offers a training script that is controlled through a configuration file. Leveraging an inheritance design pattern, you can effortlessly tailor the training script using Python files as configuration files.

In this example, we demonstrate the training steps for the MobileNetV2 model, but you have the flexibility to employ alternative classification models such as EfficientNet, EfficientNetV2, ResNet, and more.

"},{"location":"perception/autoware_traffic_light_classifier/#create-a-config-file","title":"Create a config file","text":"

Generate a configuration file for your preferred model within the configs folder

touch ~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n

Open the configuration file in your preferred text editor and paste in the content provided below. Adjust the data_root variable to the path of your dataset. You are welcome to customize the configuration parameters for the model, dataset, and scheduler to suit your needs.

# Inherit model, schedule and default_runtime from base model\n_base_ = [\n    '../_base_/models/mobilenet_v2_1x.py',\n    '../_base_/schedules/imagenet_bs256_epochstep.py',\n    '../_base_/default_runtime.py'\n]\n\n# Set the number of classes to the model\n# You can also change other model parameters here\n# For detailed descriptions of model parameters, please refer to the link below\n# [Customize model](https://mmpretrain.readthedocs.io/en/latest/advanced_guides/modules.html)\nmodel = dict(head=dict(num_classes=3, topk=(1, 3)))\n\n# Set max epochs and validation interval\ntrain_cfg = dict(by_epoch=True, max_epochs=50, val_interval=5)\n\n# Set optimizer and lr scheduler\noptim_wrapper = dict(\n    optimizer=dict(type='SGD', lr=0.001, momentum=0.9))\nparam_scheduler = dict(type='StepLR', by_epoch=True, step_size=1, gamma=0.98)\n\ndataset_type = 'CustomDataset'\ndata_root = \"/PATH/OF/YOUR/DATASET\"\n\n# Customize the data preprocessor\n# These parameters were calculated for the sample dataset\ndata_preprocessor = dict(\n    mean=[0.2888 * 256, 0.2570 * 256, 0.2329 * 256],\n    std=[0.2106 * 256, 0.2037 * 256, 0.1864 * 256],\n    num_classes=3,\n    to_rgb=True,\n)\n\n# Customize data preprocessing and dataloader pipeline for the training set\n# For detailed descriptions of the data pipeline, please refer to the link below\n# [Customize data pipeline](https://mmpretrain.readthedocs.io/en/latest/advanced_guides/pipeline.html)\ntrain_pipeline = [\n    dict(type='LoadImageFromFile'),\n    dict(type='Resize', scale=224),\n    dict(type='RandomFlip', prob=0.5, direction='horizontal'),\n    dict(type='PackInputs'),\n]\ntrain_dataloader = dict(\n    dataset=dict(\n        type=dataset_type,\n        data_root=data_root,\n        ann_file='',\n        data_prefix='train',\n        with_label=True,\n        pipeline=train_pipeline,\n    ),\n    num_workers=8,\n    batch_size=32,\n    sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\n# Customize data preprocessing and dataloader pipeline for the test set\ntest_pipeline = [\n    dict(type='LoadImageFromFile'),\n    dict(type='Resize', scale=224),\n    dict(type='PackInputs'),\n]\n\n# Customize data preprocessing and dataloader pipeline for the validation set\nval_cfg = dict()\nval_dataloader = dict(\n    dataset=dict(\n        type=dataset_type,\n        data_root=data_root,\n        ann_file='',\n        data_prefix='val',\n        with_label=True,\n        pipeline=test_pipeline,\n    ),\n    num_workers=8,\n    batch_size=32,\n    sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\nval_evaluator = dict(topk=(1, 3,), type='Accuracy')\n\ntest_dataloader = val_dataloader\ntest_evaluator = val_evaluator\n
"},{"location":"perception/autoware_traffic_light_classifier/#start-training","title":"Start training","text":"
cd ~/mmpretrain\npython tools/train.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n

Training logs and weights will be saved in the work_dirs/mobilenet-v2_8xb32_custom folder.
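
Optionally, you can sanity-check the trained checkpoint before converting it. The snippet below is a minimal sketch that assumes mmpretrain's ImageClassificationInferencer API; the checkpoint epoch and the test image path are placeholders you should adapt to your own run.

from mmpretrain import ImageClassificationInferencer\n\n# Assumed paths; replace with your own config, checkpoint, and image\nconfig = 'configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py'\ncheckpoint = 'work_dirs/mobilenet-v2_8xb32_custom/epoch_50.pth'\n\ninferencer = ImageClassificationInferencer(model=config, pretrained=checkpoint)\nresult = inferencer('/PATH/TO/A/TEST/IMAGE.png')[0]\nprint(result['pred_class'], result['pred_score'])\n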

"},{"location":"perception/autoware_traffic_light_classifier/#convert-pytorch-model-to-onnx-model","title":"Convert PyTorch model to ONNX model","text":""},{"location":"perception/autoware_traffic_light_classifier/#install-mmdeploy","title":"Install mmdeploy","text":"

The 'mmdeploy' toolset is designed for deploying your trained model onto various target devices. With its capabilities, you can seamlessly convert PyTorch models into the ONNX format.

# Activate your conda environment\nconda activate tl-classifier\n\n# Install mmengine and mmcv\nmim install mmengine\nmim install \"mmcv>=2.0.0rc2\"\n\n# Install mmdeploy\npip install mmdeploy==1.2.0\n\n# Support onnxruntime\npip install mmdeploy-runtime==1.2.0\npip install mmdeploy-runtime-gpu==1.2.0\npip install onnxruntime-gpu==1.8.1\n\n# Clone the mmdeploy repository\ncd ~/\ngit clone -b main https://github.com/open-mmlab/mmdeploy.git\n
"},{"location":"perception/autoware_traffic_light_classifier/#convert-pytorch-model-to-onnx-model_1","title":"Convert PyTorch model to ONNX model","text":"
cd ~/mmdeploy\n\n# Run deploy.py script\n# deploy.py script takes 5 main arguments in this order: config file path, train config file path,\n# checkpoint file path, demo image path, and work directory path\npython tools/deploy.py \\\n~/mmdeploy/configs/mmpretrain/classification_onnxruntime_static.py \\\n~/mmpretrain/configs/mobilenet_v2/train_mobilenet_v2.py \\\n~/mmpretrain/work_dirs/train_mobilenet_v2/epoch_300.pth \\\n/SAMPLE/IMAGE/DIRECTORY \\\n--work-dir mmdeploy_model/mobilenet_v2\n

The converted ONNX model will be saved in the mmdeploy/mmdeploy_model/mobilenet_v2 folder.

After obtaining your ONNX model, update the parameters defined in the launch file (e.g. model_file_path, label_file_path, input_h, input_w...). Note that we only support labels defined in tier4_perception_msgs::msg::TrafficLightElement.
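
Before wiring the model into the launch file, it can help to inspect the converted network with onnxruntime and confirm the input resolution and class count. This is a minimal sketch; the end2end.onnx file name is an assumption based on mmdeploy's default output layout.

import onnxruntime as ort\n\n# Path produced by the deploy step above (assumed default file name)\nsession = ort.InferenceSession('mmdeploy_model/mobilenet_v2/end2end.onnx')\n\n# Print the input/output signatures to cross-check input_h, input_w, and the label count\nfor i in session.get_inputs():\n    print('input :', i.name, i.shape, i.type)\nfor o in session.get_outputs():\n    print('output:', o.name, o.shape, o.type)\n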

"},{"location":"perception/autoware_traffic_light_classifier/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_traffic_light_classifier/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_traffic_light_classifier/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_traffic_light_classifier/#referencesexternal-links","title":"References/External links","text":"

[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, \"MobileNetV2: Inverted Residuals and Linear Bottlenecks,\" 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

[2] Tan, Mingxing, and Quoc Le. \"EfficientNet: Rethinking model scaling for convolutional neural networks.\" International conference on machine learning. PMLR, 2019.

"},{"location":"perception/autoware_traffic_light_classifier/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_traffic_light_fine_detector/","title":"autoware_traffic_light_fine_detector","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#autoware_traffic_light_fine_detector","title":"autoware_traffic_light_fine_detector","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#purpose","title":"Purpose","text":"

This package performs traffic light detection using YOLOX-s.

"},{"location":"perception/autoware_traffic_light_fine_detector/#training-information","title":"Training Information","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#pretrained-model","title":"Pretrained Model","text":"

The model is based on YOLOX, and the pretrained model can be downloaded from here.

"},{"location":"perception/autoware_traffic_light_fine_detector/#training-data","title":"Training Data","text":"

The model was fine-tuned on around 17,000 TIER IV internal images of Japanese traffic lights.

"},{"location":"perception/autoware_traffic_light_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":"

You can download the ONNX file using these instructions. Please visit autoware-documentation for more information.

"},{"location":"perception/autoware_traffic_light_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Based on the camera image and the global ROI array detected by the map_based_detector node, a CNN-based detection method enables highly accurate traffic light detection. If a traffic light cannot be detected, the x_offset, y_offset, height, and width of the output ROI become 0. The ROIs detected by YOLOX are selected by combining them with expect/rois; at this time, the combination is evaluated as a whole set of ROIs, not each ROI alone.
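
The sketch below illustrates the selection idea with plain (x_offset, y_offset, width, height) tuples and a standard IoU score. It is a simplified greedy version for illustration only; the actual node scores candidate combinations as a whole rather than each ROI independently.

def iou(a, b):\n    # a, b: (x_offset, y_offset, width, height) boxes in image coordinates\n    x1, y1 = max(a[0], b[0]), max(a[1], b[1])\n    x2 = min(a[0] + a[2], b[0] + b[2])\n    y2 = min(a[1] + a[3], b[1] + b[3])\n    inter = max(0, x2 - x1) * max(0, y2 - y1)\n    union = a[2] * a[3] + b[2] * b[3] - inter\n    return inter / union if union > 0 else 0.0\n\ndef select_rois(expect_rois, detections):\n    # Greedy sketch: for each expected ROI, keep the YOLOX detection\n    # that overlaps it best; detections without overlap are dropped\n    selected = []\n    for expect in expect_rois:\n        best = max(detections, key=lambda d: iou(expect, d), default=None)\n        if best is not None and iou(expect, best) > 0.0:\n            selected.append(best)\n    return selected\n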

"},{"location":"perception/autoware_traffic_light_fine_detector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs::msg::Image The full size camera image ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray The array of ROIs detected by map_based_detector ~/expect/rois tier4_perception_msgs::msg::TrafficLightRoiArray The array of ROIs detected by map_based_detector without any offset, used to select the best detection results"},{"location":"perception/autoware_traffic_light_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois tier4_perception_msgs::msg::TrafficLightRoiArray The detected accurate rois ~/debug/exe_time_ms tier4_debug_msgs::msg::Float32Stamped The time taken for inference"},{"location":"perception/autoware_traffic_light_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description fine_detector_score_thresh double 0.3 If the objectness score is less than this value, the object is ignored fine_detector_nms_thresh double 0.65 IoU threshold to perform Non-Maximum Suppression"},{"location":"perception/autoware_traffic_light_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description data_path string \"$(env HOME)/autoware_data\" packages data and artifacts directory path fine_detector_model_path string \"\" The onnx file name for yolo model fine_detector_label_path string \"\" The label file with label names for detected objects written on it fine_detector_precision string \"fp16\" The inference mode: \"fp32\", \"fp16\" approximate_sync bool false Flag for whether to ues approximate sync policy gpu_id integer 0 ID for the selecting CUDA GPU device"},{"location":"perception/autoware_traffic_light_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#reference-repositories","title":"Reference repositories","text":"

YOLOX github repository

"},{"location":"perception/autoware_traffic_light_map_based_detector/","title":"autoware_traffic_light_map_based_detector","text":""},{"location":"perception/autoware_traffic_light_map_based_detector/#autoware_traffic_light_map_based_detector","title":"autoware_traffic_light_map_based_detector","text":""},{"location":"perception/autoware_traffic_light_map_based_detector/#overview","title":"Overview","text":"

autoware_traffic_light_map_based_detector calculates where the traffic lights will appear in the image based on the HD map.

Calibration and vibration errors can be entered as parameters, and the size of the detected RegionOfInterest will change according to the error.

If the node receives route information, it considers only the traffic lights on that route. If no route information is received, it considers all traffic lights within a radius of max_detection_range whose angle to the camera is less than car_traffic_light_max_angle_range or pedestrian_traffic_light_max_angle_range.
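
A rough sketch of this visibility filter, assuming each traffic light is reduced to a 2D position plus a facing angle (a simplification of the actual lanelet-based geometry):

import math\n\ndef visible_traffic_lights(camera_xy, traffic_lights,\n                           max_detection_range, max_angle_range_deg):\n    # traffic_lights: list of (x, y, facing_deg) tuples (assumed structure)\n    visible = []\n    for x, y, facing_deg in traffic_lights:\n        if math.hypot(x - camera_xy[0], y - camera_xy[1]) > max_detection_range:\n            continue\n        # Angle between the light's facing direction and the direction\n        # from the light back to the camera, wrapped to [0, 180)\n        back_deg = math.degrees(math.atan2(camera_xy[1] - y, camera_xy[0] - x))\n        diff = abs((back_deg - facing_deg + 180.0) % 360.0 - 180.0)\n        if diff < max_angle_range_deg:\n            visible.append((x, y, facing_deg))\n    return visible\n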

"},{"location":"perception/autoware_traffic_light_map_based_detector/#input-topics","title":"Input topics","text":"Name Type Description ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/camera_info sensor_msgs::msg::CameraInfo target camera parameter ~/input/route autoware_planning_msgs::msg::LaneletRoute optional: route"},{"location":"perception/autoware_traffic_light_map_based_detector/#output-topics","title":"Output topics","text":"Name Type Description ~/output/rois tier4_perception_msgs::msg::TrafficLightRoiArray location of traffic lights in image corresponding to the camera info ~/expect/rois tier4_perception_msgs::msg::TrafficLightRoiArray location of traffic lights in image without any offset ~/debug/markers visualization_msgs::msg::MarkerArray markers which show a line that combines from camera to each traffic light"},{"location":"perception/autoware_traffic_light_map_based_detector/#node-parameters","title":"Node parameters","text":"Parameter Type Description max_vibration_pitch double Maximum error in pitch direction. If -5~+5, it will be 10. max_vibration_yaw double Maximum error in yaw direction. If -5~+5, it will be 10. max_vibration_height double Maximum error in height direction. If -5~+5, it will be 10. max_vibration_width double Maximum error in width direction. If -5~+5, it will be 10. max_vibration_depth double Maximum error in depth direction. If -5~+5, it will be 10. max_detection_range double Maximum detection range in meters. Must be positive. min_timestamp_offset double Minimum timestamp offset when searching for corresponding tf. max_timestamp_offset double Maximum timestamp offset when searching for corresponding tf. timestamp_sample_len double Sampling length between min_timestamp_offset and max_timestamp_offset. car_traffic_light_max_angle_range double Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive. pedestrian_traffic_light_max_angle_range double Maximum angle between the pedestrian traffic light and the camera in degrees. Must be positive."},{"location":"perception/autoware_traffic_light_multi_camera_fusion/","title":"autoware_traffic_light_multi_camera_fusion","text":""},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#autoware_traffic_light_multi_camera_fusion","title":"autoware_traffic_light_multi_camera_fusion","text":""},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#overview","title":"Overview","text":"

autoware_traffic_light_multi_camera_fusion performs traffic light signal fusion, which can be summarized as the following two tasks:

  1. Multi-Camera-Fusion: fuses each traffic light signal detected by different cameras.
  2. Group-Fusion: fuses traffic light signals within the same group, i.e. traffic lights that share the same regulatory element ID defined in the lanelet2 map.

The results are fused according to the following rules (a code sketch follows the list):

  1. Use the result with the newer timestamp when both results come from the same sensor
  2. Prefer results that are not completely unknown, i.e. not elements.size() == 1 && color == UNKNOWN && shape == UNKNOWN
  3. Prefer results whose ROI vertices are all away from the edge of the image
  4. Prefer the result with the higher confidence
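
The sketch below expresses these rules as a pairwise preference between two classification records. The Record structure and the UNKNOWN constant are simplified stand-ins for the actual message types.

from dataclasses import dataclass\n\nUNKNOWN = 0  # stand-in for the UNKNOWN constant of TrafficLightElement\n\n@dataclass\nclass Record:  # simplified per-camera classification record\n    sensor_id: str\n    stamp: float\n    elements: list        # list of (color, shape, confidence) tuples\n    roi_at_image_edge: bool\n\ndef is_all_unknown(r):\n    return (len(r.elements) == 1 and r.elements[0][0] == UNKNOWN\n            and r.elements[0][1] == UNKNOWN)\n\ndef prefer(a, b):\n    if a.sensor_id == b.sensor_id and a.stamp != b.stamp:\n        return a if a.stamp > b.stamp else b      # rule 1: newer result wins\n    if is_all_unknown(a) != is_all_unknown(b):\n        return b if is_all_unknown(a) else a      # rule 2: known beats unknown\n    if a.roi_at_image_edge != b.roi_at_image_edge:\n        return b if a.roi_at_image_edge else a    # rule 3: fully visible ROI wins\n    confidence = lambda r: max(e[2] for e in r.elements)\n    return a if confidence(a) >= confidence(b) else b  # rule 4: higher confidence\n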
"},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#input-topics","title":"Input topics","text":"

For every camera, the following three topics are subscribed:

Name Type Description ~/<camera_namespace>/camera_info sensor_msgs::msg::CameraInfo camera info from map_based_detector ~/<camera_namespace>/detection/rois tier4_perception_msgs::msg::TrafficLightRoiArray detection roi from fine_detector ~/<camera_namespace>/classification/traffic_signals tier4_perception_msgs::msg::TrafficLightArray classification result from classifier

You don't need to configure these topics manually. Just provide the camera_namespaces parameter and the node will automatically extract the <camera_namespace> and create the subscribers.

"},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#output-topics","title":"Output topics","text":"Name Type Description ~/output/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray traffic light signal fusion result"},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#node-parameters","title":"Node parameters","text":"Parameter Type Description camera_namespaces vector\\ Camera Namespaces to be fused message_lifespan double The maximum timestamp span to be fused approximate_sync bool Whether work in Approximate Synchronization Mode perform_group_fusion bool Whether perform Group Fusion"},{"location":"perception/autoware_traffic_light_occlusion_predictor/","title":"autoware_traffic_light_occlusion_predictor","text":""},{"location":"perception/autoware_traffic_light_occlusion_predictor/#autoware_traffic_light_occlusion_predictor","title":"autoware_traffic_light_occlusion_predictor","text":""},{"location":"perception/autoware_traffic_light_occlusion_predictor/#overview","title":"Overview","text":"

autoware_traffic_light_occlusion_predictor receives the detected traffic light ROIs and calculates the occlusion ratio of each ROI using the point cloud. If an ROI is judged to be occluded, the color, shape, and confidence of ~/output/traffic_signals become UNKNOWN, UNKNOWN, and 0.0, respectively. This node publishes whenever the car and pedestrian traffic_signals messages arrive and are processed.

For each traffic light ROI, hundreds of pixels are selected and projected into 3D space. Then, from the camera's point of view, the number of projected pixels occluded by the point cloud is counted and used to calculate the occlusion ratio of the ROI. As shown in the following image, the red pixels are occluded, and the occlusion ratio is the number of red pixels divided by the total number of pixels.

If no point cloud is received, or all point clouds have a very large timestamp difference from the camera image, the occlusion ratio of each ROI is set to 0.
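
A minimal sketch of this decision, assuming the ratio is expressed in percent to match the integer max_occlusion_ratio parameter and that the signal object exposes mutable elements:

UNKNOWN = 0  # stand-in for the UNKNOWN constant of TrafficLightElement\n\ndef apply_occlusion(signal, n_occluded_pixels, n_total_pixels, max_occlusion_ratio):\n    # Occlusion ratio in percent: occluded sample pixels / total sample pixels.\n    # With no usable point cloud, n_occluded_pixels is 0 and the ratio stays 0.\n    ratio = 100.0 * n_occluded_pixels / n_total_pixels if n_total_pixels else 0.0\n    if ratio > max_occlusion_ratio:\n        for element in signal.elements:\n            element.color = UNKNOWN\n            element.shape = UNKNOWN\n            element.confidence = 0.0\n    return signal\n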

"},{"location":"perception/autoware_traffic_light_occlusion_predictor/#input-topics","title":"Input topics","text":"Name Type Description ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/car/traffic_signals tier4_perception_msgs::msg::TrafficLightArray vehicular traffic light signals ~/input/pedestrian/traffic_signals tier4_perception_msgs::msg::TrafficLightArray pedestrian traffic light signals ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray traffic light detections ~/input/camera_info sensor_msgs::msg::CameraInfo target camera parameter ~/input/cloud sensor_msgs::msg::PointCloud2 LiDAR point cloud"},{"location":"perception/autoware_traffic_light_occlusion_predictor/#output-topics","title":"Output topics","text":"Name Type Description ~/output/traffic_signals tier4_perception_msgs::msg::TrafficLightArray traffic light signals which occluded image results overwritten"},{"location":"perception/autoware_traffic_light_occlusion_predictor/#node-parameters","title":"Node parameters","text":"Parameter Type Description azimuth_occlusion_resolution_deg double azimuth resolution of LiDAR point cloud (degree) elevation_occlusion_resolution_deg double elevation resolution of LiDAR point cloud (degree) max_valid_pt_dist double The points within this distance would be used for calculation max_image_cloud_delay double The maximum delay between LiDAR point cloud and camera image max_wait_t double The maximum time waiting for the LiDAR point cloud max_occlusion_ratio int The maximum occlusion ratio for setting signal as unknown"},{"location":"perception/autoware_traffic_light_visualization/","title":"autoware_traffic_light_visualization","text":""},{"location":"perception/autoware_traffic_light_visualization/#autoware_traffic_light_visualization","title":"autoware_traffic_light_visualization","text":""},{"location":"perception/autoware_traffic_light_visualization/#purpose","title":"Purpose","text":"

The autoware_traffic_light_visualization is a package that includes two visualization nodes:

"},{"location":"perception/autoware_traffic_light_visualization/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_traffic_light_visualization/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_map_visualizer","title":"traffic_light_map_visualizer","text":""},{"location":"perception/autoware_traffic_light_visualization/#input","title":"Input","text":"Name Type Description ~/input/tl_state tier4_perception_msgs::msg::TrafficLightGroupArray status of traffic lights ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"perception/autoware_traffic_light_visualization/#output","title":"Output","text":"Name Type Description ~/output/traffic_light visualization_msgs::msg::MarkerArray marker array that indicates status of traffic lights"},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_roi_visualizer","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/autoware_traffic_light_visualization/#input_1","title":"Input","text":"Name Type Description ~/input/tl_state tier4_perception_msgs::msg::TrafficLightArray status of traffic lights ~/input/image sensor_msgs::msg::Image the image captured by perception cameras ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_fine_detector ~/input/rough/rois (option) tier4_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_map_based_detector"},{"location":"perception/autoware_traffic_light_visualization/#output_1","title":"Output","text":"Name Type Description ~/output/image sensor_msgs::msg::Image output image with ROIs"},{"location":"perception/autoware_traffic_light_visualization/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_map_visualizer_1","title":"traffic_light_map_visualizer","text":"

None

"},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_roi_visualizer_1","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/autoware_traffic_light_visualization/#node-parameters","title":"Node Parameters","text":"Name Type Description Default Range enable_fine_detection boolean whether to visualize result of the traffic light fine detection false N/A use_image_transport boolean whether to apply image transport to compress the output debugging image in the traffic light fine detection true N/A"},{"location":"perception/autoware_traffic_light_visualization/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/perception_utils/","title":"perception_utils","text":""},{"location":"perception/perception_utils/#perception_utils","title":"perception_utils","text":""},{"location":"perception/perception_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions that are useful across the perception module.

"},{"location":"planning/","title":"Planning Components","text":""},{"location":"planning/#planning-components","title":"Planning Components","text":""},{"location":"planning/#getting-started","title":"Getting Started","text":"

The Autoware.Universe Planning Modules represent a cutting-edge component within the broader open-source autonomous driving software stack. These modules play a pivotal role in autonomous vehicle navigation, skillfully handling route planning, dynamic obstacle avoidance, and real-time adaptation to varied traffic conditions.

"},{"location":"planning/#planning-module","title":"Planning Module","text":"

The Module in the Planning Component refers to the various components that collectively form the planning system of the software. These modules cover a range of functionalities necessary for autonomous vehicle planning. Autoware's planning modules are modularized, meaning users can customize which functions are enabled by changing the configuration. This modular design allows for flexibility and adaptability to different scenarios and requirements in autonomous vehicle operations.

"},{"location":"planning/#how-to-enable-or-disable-planning-module","title":"How to Enable or Disable Planning Module","text":"

Enabling and disabling modules involves managing settings in key configuration and launch files.

"},{"location":"planning/#key-files-for-configuration","title":"Key Files for Configuration","text":"

The default_preset.yaml file acts as the primary configuration file, where planning modules can be enabled or disabled. Furthermore, users can also select the type of motion planner from among the various motion planners. For example:

Note

Click here to view the default_preset.yaml.

The launch files reference the settings defined in default_preset.yaml to apply the configurations when the behavior path planner's node is running. For instance, the parameter avoidance.enable_module in

<param name=\"avoidance.enable_module\" value=\"$(var launch_avoidance_module)\"/>\n

corresponds to launch_avoidance_module from default_preset.yaml.

"},{"location":"planning/#parameter-configuration","title":"Parameter Configuration","text":"

There are multiple parameters available for configuration, and users can modify them here. Note that not all parameters are adjustable via rqt_reconfigure. To ensure changes take effect, modify the parameters and then restart Autoware. Detailed information about each parameter is available in the corresponding documents under the planning tab.

"},{"location":"planning/#integrating-a-custom-module-into-autoware-a-step-by-step-guide","title":"Integrating a Custom Module into Autoware: A Step-by-Step Guide","text":"

This guide outlines the steps for integrating your custom module into Autoware:

- arg:\n    name: launch_intersection_module\n    default: \"true\"\n
<arg name=\"launch_intersection_module\" default=\"true\"/>\n\n<let\nname=\"behavior_velocity_planner_launch_modules\"\nvalue=\"$(eval &quot;'$(var behavior_velocity_planner_launch_modules)' + 'behavior_velocity_planner::IntersectionModulePlugin, '&quot;)\"\nif=\"$(var launch_intersection_module)\"\n/>\n
<arg name=\"behavior_velocity_planner_intersection_module_param_path\" value=\"$(var behavior_velocity_config_path)/intersection.param.yaml\"/>\n
<param from=\"$(var behavior_velocity_planner_intersection_module_param_path)\"/>\n

Note

Depending on the specific module you wish to add, the relevant files and steps may vary. This guide provides a general overview and serves as a starting point. It's important to adapt these instructions to the specifics of your module.

"},{"location":"planning/#join-our-community-driven-effort","title":"Join Our Community-Driven Effort","text":"

Autoware thrives on community collaboration. Every contribution, big or small, is invaluable to us. Whether it's reporting bugs, suggesting improvements, offering new ideas, or anything else you can think of \u2013 we welcome all contributions with open arms.

"},{"location":"planning/#how-to-contribute","title":"How to Contribute?","text":"

Ready to contribute? Great! To get started, simply visit our Contributing Guidelines where you'll find all the information you need to jump in. This includes instructions on submitting bug reports, proposing feature enhancements, and even contributing to the codebase.

"},{"location":"planning/#join-our-planning-control-working-group-meetings","title":"Join Our Planning & Control Working Group Meetings","text":"

The Planning & Control working group is an integral part of our community. We meet bi-weekly to discuss our current progress, upcoming challenges, and brainstorm new ideas. These meetings are a fantastic opportunity to directly contribute to our discussions and decision-making processes.

Meeting Details:

Interested in joining our meetings? We\u2019d love to have you! For more information on how to participate, visit the following link: How to participate in the working group.

"},{"location":"planning/#citations","title":"Citations","text":"

Occasionally, we publish papers specific to the Planning Component in Autoware. We encourage you to explore these publications and find valuable insights for your work. If you find them useful and incorporate any of our methodologies or algorithms in your projects, citing our papers would be immensely helpful. This support allows us to reach a broader audience and continue contributing to the field.

If you use the Jerk Constrained Velocity Planning algorithm in the Motion Velocity Smoother module in the Planning Component, we kindly request you cite the relevant paper.

Y. Shimizu, T. Horibe, F. Watanabe and S. Kato, \"Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach,\" 2022 International Conference on Robotics and Automation (ICRA)

@inproceedings{shimizu2022,\n  author={Shimizu, Yutaka and Horibe, Takamasa and Watanabe, Fumiya and Kato, Shinpei},\n  booktitle={2022 International Conference on Robotics and Automation (ICRA)},\n  title={Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach},\n  year={2022},\n  pages={5814-5820},\n  doi={10.1109/ICRA46639.2022.9812155}}\n
"},{"location":"planning/autoware_costmap_generator/","title":"costmap_generator","text":""},{"location":"planning/autoware_costmap_generator/#costmap_generator","title":"costmap_generator","text":""},{"location":"planning/autoware_costmap_generator/#costmap_generator_node","title":"costmap_generator_node","text":"

This node reads PointCloud and/or DynamicObjectArray and creates an OccupancyGrid and GridMap. VectorMap (Lanelet2) is optional.

"},{"location":"planning/autoware_costmap_generator/#input-topics","title":"Input topics","text":"Name Type Description ~input/objects autoware_perception_msgs::PredictedObjects predicted objects, for obstacles areas ~input/points_no_ground sensor_msgs::PointCloud2 ground-removed points, for obstacle areas which can't be detected as objects ~input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map, for drivable areas ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/autoware_costmap_generator/#output-topics","title":"Output topics","text":"Name Type Description ~output/grid_map grid_map_msgs::GridMap costmap as GridMap, values are from 0.0 to 1.0 ~output/occupancy_grid nav_msgs::OccupancyGrid costmap as OccupancyGrid, values are from 0 to 100"},{"location":"planning/autoware_costmap_generator/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/autoware_costmap_generator/#how-to-launch","title":"How to launch","text":"
  1. Execute the command source install/setup.bash to set up the environment

  2. Run ros2 launch costmap_generator costmap_generator.launch.xml to launch the node

"},{"location":"planning/autoware_costmap_generator/#parameters","title":"Parameters","text":"Name Type Description update_rate double timer's update rate activate_by_scenario bool if true, activate by scenario = parking. Otherwise, activate if vehicle is inside parking lot. use_objects bool whether using ~input/objects or not use_points bool whether using ~input/points_no_ground or not use_wayarea bool whether using wayarea from ~input/vector_map or not use_parkinglot bool whether using parkinglot from ~input/vector_map or not costmap_frame string created costmap's coordinate vehicle_frame string vehicle's coordinate map_frame string map's coordinate grid_min_value double minimum cost for gridmap grid_max_value double maximum cost for gridmap grid_resolution double resolution for gridmap grid_length_x int size of gridmap for x direction grid_length_y int size of gridmap for y direction grid_position_x int offset from coordinate in x direction grid_position_y int offset from coordinate in y direction maximum_lidar_height_thres double maximum height threshold for pointcloud data (relative to the vehicle_frame) minimum_lidar_height_thres double minimum height threshold for pointcloud data (relative to the vehicle_frame) expand_rectangle_size double expand object's rectangle with this value size_of_expansion_kernel int kernel size for blurring effect on object's costmap"},{"location":"planning/autoware_costmap_generator/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_external_velocity_limit_selector/","title":"External Velocity Limit Selector","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#external-velocity-limit-selector","title":"External Velocity Limit Selector","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#purpose","title":"Purpose","text":"

The external_velocity_limit_selector_node is a node that keeps external velocity limits consistent. This module subscribes to:

  1. velocity limit command sent by API,
  2. velocity limit command sent by Autoware internal modules.

VelocityLimit.msg contains not only the max velocity but also information about the acceleration/jerk constraints on deceleration. The external_velocity_limit_selector_node integrates the lowest velocity limit and the highest jerk constraint to calculate the hardest velocity limit, which protects all of the deceleration points and max velocities sent by the API and Autoware internal modules.
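
One plausible reading of this integration, sketched with a simplified stand-in for the VelocityLimit message: take the lowest max velocity together with the constraints that still allow the strongest deceleration, so that every requested deceleration point remains protected.

from dataclasses import dataclass\n\n@dataclass\nclass VelocityLimit:  # simplified stand-in for tier4_planning_msgs::VelocityLimit\n    max_velocity: float  # [m/s]\n    min_acc: float       # [m/ss], negative\n    min_jerk: float      # [m/sss], negative\n    max_jerk: float      # [m/sss], positive\n\ndef hardest_limit(limits):\n    # Lowest velocity plus the strongest deceleration capability\n    return VelocityLimit(\n        max_velocity=min(l.max_velocity for l in limits),\n        min_acc=min(l.min_acc for l in limits),\n        min_jerk=min(l.min_jerk for l in limits),\n        max_jerk=max(l.max_jerk for l in limits),\n    )\n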

"},{"location":"planning/autoware_external_velocity_limit_selector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

WIP

"},{"location":"planning/autoware_external_velocity_limit_selector/#inputs","title":"Inputs","text":"Name Type Description ~input/velocity_limit_from_api tier4_planning_msgs::VelocityLimit velocity limit from api ~input/velocity_limit_from_internal tier4_planning_msgs::VelocityLimit velocity limit from autoware internal modules ~input/velocity_limit_clear_command_from_internal tier4_planning_msgs::VelocityLimitClearCommand velocity limit clear command"},{"location":"planning/autoware_external_velocity_limit_selector/#outputs","title":"Outputs","text":"Name Type Description ~output/max_velocity tier4_planning_msgs::VelocityLimit current information of the hardest velocity limit"},{"location":"planning/autoware_external_velocity_limit_selector/#parameters","title":"Parameters","text":"Parameter Type Description max_velocity double default max velocity [m/s] normal.min_acc double minimum acceleration [m/ss] normal.max_acc double maximum acceleration [m/ss] normal.min_jerk double minimum jerk [m/sss] normal.max_jerk double maximum jerk [m/sss] limit.min_acc double minimum acceleration to be observed [m/ss] limit.max_acc double maximum acceleration to be observed [m/ss] limit.min_jerk double minimum jerk to be observed [m/sss] limit.max_jerk double maximum jerk to be observed [m/sss]"},{"location":"planning/autoware_external_velocity_limit_selector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_freespace_planner/","title":"The `autoware_freespace_planner`","text":""},{"location":"planning/autoware_freespace_planner/#the-autoware_freespace_planner","title":"The autoware_freespace_planner","text":""},{"location":"planning/autoware_freespace_planner/#freespace_planner_node","title":"freespace_planner_node","text":"

freespace_planner_node is a global path planner node that plans a trajectory in spaces containing static and dynamic obstacles. The node is currently based on the Hybrid A* search algorithm in the freespace_planning_algorithms package. Other algorithms, such as RRT*, will also be added and made selectable in the future.

Note Due to the constraints of trajectory following, the output trajectory is split so that each piece contains only a single-direction path. In other words, the output trajectory does not include both forward and backward trajectories at once.
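
A minimal sketch of such a split, assuming trajectory points carry a signed longitudinal velocity (negative for reverse driving):

def split_by_direction(points):\n    # points: list of (x, y, v); a sign change of v starts a new segment\n    segments, current = [], []\n    for p in points:\n        if current and (current[-1][2] < 0.0) != (p[2] < 0.0):\n            segments.append(current)\n            current = []\n        current.append(p)\n    if current:\n        segments.append(current)\n    return segments\n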

"},{"location":"planning/autoware_freespace_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/route autoware_planning_msgs::Route route and goal pose ~input/occupancy_grid nav_msgs::OccupancyGrid costmap, for drivable areas ~input/odometry nav_msgs::Odometry vehicle velocity, for checking whether vehicle is stopped ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/autoware_freespace_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_planning_msgs::Trajectory trajectory to be followed is_completed bool (implemented as rosparam) whether all split trajectory are published"},{"location":"planning/autoware_freespace_planner/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/autoware_freespace_planner/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in freespace_planner.launch.xml or add remappings when executing ros2 launch
  2. Run ros2 launch freespace_planner freespace_planner.launch.xml to launch the node
"},{"location":"planning/autoware_freespace_planner/#parameters","title":"Parameters","text":"Name Type Description Default Range planning_algorithm string Planning algorithm to use, options: astar, rrtstar. astar ['astar', 'rrtstar'] waypoints_velocity float velocity in output trajectory (currently, only constant velocity is supported. 5.0 N/A update_rate float timer's update rate 10.0 N/A th_arrived_distance_m float Threshold for considering the vehicle has arrived at its goal. 1.0 N/A th_stopped_time_sec float Time threshold for considering the vehicle as stopped. 1.0 N/A th_stopped_velocity_mps float Velocity threshold for considering the vehicle as stopped. 0.01 N/A th_course_out_distance_m float Threshold distance for considering the vehicle has deviated from its course. 1.0 N/A th_obstacle_time_sec float Time threshold for checking obstacles along the trajectory. 1.0 N/A vehicle_shape_margin_m float Margin around the vehicle shape for planning. 1.0 N/A replan_when_obstacle_found boolean Replan path when an obstacle is found. True N/A replan_when_course_out boolean Replan path when the vehicle deviates from its course. True N/A time_limit float Time limit for the planner. 30000.0 N/A max_turning_ratio float Maximum turning ratio, relative to the actual turning limit of the vehicle. 0.5 N/A turning_steps float Number of steps for turning. 1 N/A theta_size float Number of discretized angles for search. 144 N/A angle_goal_range float Range of acceptable angles at the goal. 6.0 N/A lateral_goal_range float Lateral range of acceptable goal positions. 0.5 N/A longitudinal_goal_range float Longitudinal range of acceptable goal positions. 2.0 N/A curve_weight float Weight for curves in the cost function. 0.5 N/A reverse_weight float Weight for reverse movement in the cost function. 1.0 N/A direction_change_weight float Weight for direction changes in the cost function. 1.5 N/A obstacle_threshold float Threshold for considering an obstacle in the costmap. 100 N/A astar.search_method string Search method to use, options: forward, backward. forward ['forward', 'backward'] astar.only_behind_solutions boolean Consider only solutions behind the vehicle. False N/A astar.use_back boolean Allow reverse motion in A* search. True N/A astar.adapt_expansion_distance boolean Allow varying A* expansion distance based on space configuration. True N/A astar.expansion_distance float Distance for expanding nodes in A* search. 0.5 N/A astar.near_goal_distance float Distance threshold to consider near goal. 4.0 N/A astar.distance_heuristic_weight float Weight for the distance heuristic in A* search. 1.0 N/A astar.smoothness_weight float Weight for the smoothness (change in curvature) in A* search. 0.5 N/A astar.obstacle_distance_weight float Weight for distance to obstacle in A* search. 0.5 N/A astar.goal_lat_distance_weight float Weight for lateral distance from original goal. 0.5 N/A rrtstar.enable_update boolean Enable updates in RRT*. True N/A rrtstar.use_informed_sampling boolean Use informed sampling in RRT*. True N/A rrtstar.max_planning_time float Maximum planning time for RRT*. 150.0 N/A rrtstar.neighbor_radius float Radius for neighboring nodes in RRT*. 8.0 N/A rrtstar.margin float Margin for RRT* sampling. 
0.1 N/A"},{"location":"planning/autoware_freespace_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description planning_algorithms string algorithms used in the node vehicle_shape_margin_m float collision margin in planning algorithm update_rate double timer's update rate waypoints_velocity double velocity in output trajectory (currently, only constant velocity is supported) th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped th_course_out_distance_m double threshold distance to check if vehicle is out of course th_obstacle_time_sec double threshold time to check if obstacle is on the trajectory vehicle_shape_margin_m double vehicle margin replan_when_obstacle_found bool whether replanning when obstacle has found on the trajectory replan_when_course_out bool whether replanning when vehicle is out of course"},{"location":"planning/autoware_freespace_planner/#planner-common-parameters","title":"Planner common parameters","text":"Parameter Type Description time_limit double time limit of planning maximum_turning_ratio double max ratio of actual turning range to use turning_steps double number of turning steps within turning range theta_size double the number of angle's discretization lateral_goal_range double goal range of lateral position longitudinal_goal_range double goal range of longitudinal position angle_goal_range double goal range of angle curve_weight double additional cost factor for curve actions reverse_weight double additional cost factor for reverse actions direction_change_weight double additional cost factor for switching direction obstacle_threshold double threshold for regarding a certain grid as obstacle"},{"location":"planning/autoware_freespace_planner/#a-search-parameters","title":"A* search parameters","text":"Parameter Type Description search_method string method of searching, start to goal or vice versa only_behind_solutions bool whether restricting the solutions to be behind the goal use_back bool whether using backward trajectory adapt_expansion_distance bool if true, adapt expansion distance based on environment expansion_distance double length of expansion for node transitions near_goal_distance double near goal distance threshold distance_heuristic_weight double heuristic weight for estimating node's cost smoothness_weight double cost factor for change in curvature obstacle_distance_weight double cost factor for distance to obstacle goal_lat_distance_weight double cost factor for lateral distance from goal"},{"location":"planning/autoware_freespace_planner/#rrt-search-parameters","title":"RRT* search parameters","text":"Parameter Type Description max planning time double maximum planning time [msec] (used only when enable_update is set true) enable_update bool whether update after feasible solution found until max_planning time elapse use_informed_sampling bool Use informed RRT* (of Gammell et al.) 
neighbor_radius double neighbor radius of RRT* algorithm margin double safety margin ensured in path's collision checking in RRT* algorithm"},{"location":"planning/autoware_freespace_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_freespace_planning_algorithms/","title":"freespace planning algorithms","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#freespace-planning-algorithms","title":"freespace planning algorithms","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#role","title":"Role","text":"

This package is for development of path planning algorithms in free space.

"},{"location":"planning/autoware_freespace_planning_algorithms/#implemented-algorithms","title":"Implemented algorithms","text":"

Please see rrtstar.md for a note on the implementation for informed-RRT*.

NOTE: For RRT*, one can choose whether to keep updating the solution after a feasible solution is found. If not, the algorithm is almost the same as vanilla RRT (not exactly the same, because of the rewiring procedure). If updating is enabled, you can further choose whether the sampling after a feasible solution is found is "informed". If set to true, the algorithm is equivalent to the informed RRT\* of Gammell et al. 2014.

"},{"location":"planning/autoware_freespace_planning_algorithms/#algorithm-selection","title":"Algorithm selection","text":"

There is a trade-off between algorithm speed and resulting solution quality. When we sort the algorithms along the spectrum of (high-quality solution / slow) -> (low-quality solution / fast), the order is A* -> informed RRT* -> RRT. Note that in almost all cases informed RRT* gives better solution quality than RRT* given the same computational time budget, so RRT* is omitted from the comparison.

Some selection criteria would be:

"},{"location":"planning/autoware_freespace_planning_algorithms/#guide-to-implement-a-new-algorithm","title":"Guide to implement a new algorithm","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#running-the-standalone-tests-and-visualization","title":"Running the standalone tests and visualization","text":"

Build the package with ros-test and run the tests:

colcon build --packages-select autoware_freespace_planning_algorithms\ncolcon test --packages-select autoware_freespace_planning_algorithms\n

Inside the test, simulation results are stored in /tmp/fpalgos-{algorithm_type}-case{scenario_number} as rosbags. By loading these result files with test/debug_plot.py, one can create plots visualizing the path and obstacles, as shown in the figures below. The created figures are then saved in /tmp.

"},{"location":"planning/autoware_freespace_planning_algorithms/#a-single-curvature-case","title":"A* (single curvature case)","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#informed-rrt-with-200-msec-time-budget","title":"informed RRT* with 200 msec time budget","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#rrt-without-update-almost-same-as-rrt","title":"RRT* without update (almost same as RRT)","text":"

The black cells, green box, and red box, respectively, indicate obstacles, the start configuration, and the goal configuration. The sequence of blue boxes indicates the solution path.

"},{"location":"planning/autoware_freespace_planning_algorithms/#extension-to-python-module-only-a-supported","title":"Extension to Python module (only A* supported)","text":"

There is an implementation of the extension to the Python module. You can try the A* search via Python with the following settings:

Then, you can get

The example code is scripts/example/example.py. Note that you need to build this package and source the setup shell script in advance.

"},{"location":"planning/autoware_freespace_planning_algorithms/#license-notice","title":"License notice","text":"

Files src/reeds_shepp.cpp and include/astar_search/reeds_shepp.h are fetched from pyReedsShepp. Note that the implementation in pyReedsShepp is also heavily based on the code in ompl. Both pyReedsShepp and ompl are distributed under 3-clause BSD license.

"},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/","title":"Rrtstar","text":""},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/#note-on-implementation-of-informed-rrt","title":"Note on implementation of informed RRT*","text":""},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/#preliminary-knowledge-on-informed-rrt","title":"Preliminary knowledge on informed-RRT*","text":"

Let us define \\(f(x)\\) as the minimum cost of a path constrained to pass through \\(x\\) (so the path is \\(x_{\\mathrm{start}} \\to x \\to x_{\\mathrm{goal}}\\)). Also, let us define \\(c_{\\mathrm{best}}\\) as the current minimum cost among the feasible paths. Let us define the set \\(X(f) = \\left\\{ x \\in X \\mid f(x) < c_{\\mathrm{best}} \\right\\}\\). If we could sample a new point from \\(X(f)\\) instead of \\(X\\) as in vanilla RRT*, the chance that \\(c_{\\mathrm{best}}\\) is updated increases, and thus the convergence rate improves.

In most cases, \\(f(x)\\) is unknown, so it is natural to approximate the function \\(f\\) by a heuristic function \\(\\hat{f}\\). A heuristic function is admissible if \\(\\forall x \\in X, \\hat{f}(x) < f(x)\\), which is a sufficient condition for convergence to the optimal path. A good heuristic function \\(\\hat{f}\\) has two properties: 1) it is an admissible, tight lower bound of \\(f\\), and 2) sampling from \\(X(\\hat{f})\\) is easy.

According to Gammell et al. [1], a good heuristic function when the path is always straight is \\(\\hat{f}(x) = ||x_{\\mathrm{start}} - x|| + ||x - x_{\\mathrm{goal}}||\\). If we do not assume any obstacle information, this heuristic is the tightest. Also, \\(X(\\hat{f})\\) is a hyper-ellipsoid, and hence sampling from it can be done analytically.
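
A minimal sketch of this straight-line heuristic and the membership test for the informed subset:

import math\n\ndef f_hat(x, x_start, x_goal):\n    # Admissible straight-line heuristic: cost through x ignoring obstacles\n    return math.dist(x_start, x) + math.dist(x, x_goal)\n\ndef in_informed_set(x, x_start, x_goal, c_best):\n    # x belongs to X(f_hat) iff its heuristic cost beats the incumbent c_best;\n    # geometrically this set is an ellipse with foci x_start and x_goal\n    return f_hat(x, x_start, x_goal) < c_best\n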

"},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/#modification-to-fit-reeds-sheep-path-case","title":"Modification to fit reeds-sheep path case","text":"

In the vehicle case, the state is \\(x = (x_{1}, x_{2}, \\theta)\\). Unlike normal informed RRT*, where we can connect the path by a straight line, here we connect vehicle poses by a Reeds-Shepp path, so the original algorithm needs some modification. One might first consider the heuristic function \\(\\hat{f}_{\\mathrm{RS}}(x) = \\mathrm{RS}(x_{\\mathrm{start}}, x) + \\mathrm{RS}(x, x_{\\mathrm{goal}}) < f(x)\\), where \\(\\mathrm{RS}\\) computes the Reeds-Shepp distance. Though it is good in the sense of tightness, sampling from \\(X(\\hat{f}_{\\mathrm{RS}})\\) is really difficult. Therefore, we use \\(\\hat{f}_{\\mathrm{euc}}(x) = ||\\mathrm{pos}(x_{\\mathrm{start}}) - \\mathrm{pos}(x)|| + ||\\mathrm{pos}(x) - \\mathrm{pos}(x_{\\mathrm{goal}})||\\), which is admissible because \\(\\forall x \\in X, \\hat{f}_{\\mathrm{euc}}(x) < \\hat{f}_{\\mathrm{RS}}(x) < f(x)\\). Here, the \\(\\mathrm{pos}\\) function returns the position \\((x_{1}, x_{2})\\) of the vehicle.

Sampling from \\(X(\\hat{f}_{\\mathrm{euc}})\\) is easy because \\(X(\\hat{f}_{\\mathrm{euc}}) = \\mathrm{Ellipse} \\times (-\\pi, \\pi]\\). Here, the \\(\\mathrm{Ellipse}\\)'s focal points are \\(x_{\\mathrm{start}}\\) and \\(x_{\\mathrm{goal}}\\), and its minor-axis diameter is \\(\\sqrt{c_{\\mathrm{best}}^{2} - ||\\mathrm{pos}(x_{\\mathrm{start}}) - \\mathrm{pos}(x_{\\mathrm{goal}})||^{2}}\\) (similar to the normal informed RRT* ellipsoid). Please notice that \\(\\theta\\) can be arbitrary because \\(\\hat{f}_{\\mathrm{euc}}\\) is independent of \\(\\theta\\).
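
A rejection-sampling sketch of drawing a pose from this set: positions are sampled from the bounding box of the ellipse until the euclidean heuristic bound holds, and the heading is drawn uniformly since it is unconstrained.

import math\nimport random\n\ndef sample_informed(pos_start, pos_goal, c_best):\n    # Sample (x1, x2, theta) from Ellipse x (-pi, pi]\n    (sx, sy), (gx, gy) = pos_start, pos_goal\n    cx, cy = (sx + gx) / 2.0, (sy + gy) / 2.0\n    a = c_best / 2.0  # semi-major axis; the ellipse fits in a 2a x 2a box\n    while True:\n        x = random.uniform(cx - a, cx + a)\n        y = random.uniform(cy - a, cy + a)\n        if math.dist((sx, sy), (x, y)) + math.dist((x, y), (gx, gy)) < c_best:\n            theta = random.uniform(-math.pi, math.pi)\n            return (x, y, theta)\n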

[1] Gammell et al., \"Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic.\" IROS (2014)

"},{"location":"planning/autoware_mission_planner/","title":"Mission Planner","text":""},{"location":"planning/autoware_mission_planner/#mission-planner","title":"Mission Planner","text":""},{"location":"planning/autoware_mission_planner/#purpose","title":"Purpose","text":"

Mission Planner calculates a route that navigates from the current ego pose to the goal pose following the given check points. The route consists of a sequence of lanes on a static map. Dynamic objects (e.g. pedestrians and other vehicles) and dynamic map information (e.g. road construction that blocks some lanes) are not considered during route planning. Therefore, the output topic is published only when a goal pose or check points are given, and it is latched until a new goal pose or check points are given.

The core implementation does not depend on a map format. Any planning algorithm can be added as a plugin module. In the current Autoware.universe, only the plugin for the Lanelet2 map format is supported.

This package also manages routes for MRM. The route_selector node duplicates the mission_planner interface and provides it separately for normal operation and MRM. It distributes route requests and planning results according to the current MRM operation state.

"},{"location":"planning/autoware_mission_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/autoware_mission_planner/#parameters","title":"Parameters","text":"Name Type Description map_frame string The frame name for map arrival_check_angle_deg double Angle threshold for goal check arrival_check_distance double Distance threshold for goal check arrival_check_duration double Duration threshold for goal check goal_angle_threshold double Max goal pose angle for goal approve enable_correct_goal_pose bool Enabling correction of goal pose according to the closest lanelet orientation reroute_time_threshold double If the time to the rerouting point at the current velocity is greater than this threshold, rerouting is possible minimum_reroute_length double Minimum Length for publishing a new route consider_no_drivable_lanes bool This flag is for considering no_drivable_lanes in planning or not. allow_reroute_in_autonomous_mode bool This is a flag to allow reroute in autonomous driving mode. If false, reroute fails. If true, only safe reroute is allowed"},{"location":"planning/autoware_mission_planner/#services","title":"Services","text":"Name Type Description /planning/mission_planning/mission_planner/clear_route tier4_planning_msgs/srv/ClearRoute route clear request /planning/mission_planning/mission_planner/set_waypoint_route tier4_planning_msgs/srv/SetWaypointRoute route request with lanelet waypoints. /planning/mission_planning/mission_planner/set_lanelet_route tier4_planning_msgs/srv/SetLaneletRoute route request with pose waypoints. /planning/mission_planning/route_selector/main/clear_route tier4_planning_msgs/srv/ClearRoute main route clear request /planning/mission_planning/route_selector/main/set_waypoint_route tier4_planning_msgs/srv/SetWaypointRoute main route request with lanelet waypoints. /planning/mission_planning/route_selector/main/set_lanelet_route tier4_planning_msgs/srv/SetLaneletRoute main route request with pose waypoints. /planning/mission_planning/route_selector/mrm/clear_route tier4_planning_msgs/srv/ClearRoute mrm route clear request /planning/mission_planning/route_selector/mrm/set_waypoint_route tier4_planning_msgs/srv/SetWaypointRoute mrm route request with lanelet waypoints. 
/planning/mission_planning/route_selector/mrm/set_lanelet_route tier4_planning_msgs/srv/SetLaneletRoute mrm route request with pose waypoints."},{"location":"planning/autoware_mission_planner/#subscriptions","title":"Subscriptions","text":"Name Type Description input/vector_map autoware_map_msgs/msg/LaneletMapBin vector map of Lanelet2 input/modified_goal geometry_msgs/PoseWithUuidStamped modified goal pose input/operation_mode_state autoware_adapi_v1_msgs/OperationModeState operation mode state input/odometry nav_msgs/msg/Odometry vehicle odometry"},{"location":"planning/autoware_mission_planner/#publications","title":"Publications","text":"Name Type Description /planning/mission_planning/state tier4_planning_msgs/msg/RouteState route state /planning/mission_planning/route autoware_planning_msgs/LaneletRoute route /planning/mission_planning/route_selector/main/state tier4_planning_msgs/msg/RouteState main route state /planning/mission_planning/route_selector/main/route autoware_planning_msgs/LaneletRoute main route /planning/mission_planning/route_selector/mrm/state tier4_planning_msgs/msg/RouteState mrm route state /planning/mission_planning/route_selector/mrm/route autoware_planning_msgs/LaneletRoute mrm route ~/debug/route_marker visualization_msgs/msg/MarkerArray route marker for debug ~/debug/goal_footprint visualization_msgs/msg/MarkerArray goal footprint for debug"},{"location":"planning/autoware_mission_planner/#route-section","title":"Route section","text":"

A route section, whose type is autoware_planning_msgs/LaneletSegment, is a \"slice\" of a road that bundles lane-changeable lanes. Note that the most atomic unit of a route is autoware_planning_msgs/LaneletPrimitive, which has the unique id of a lane in a vector map and its type. Therefore, the route message does not contain geometric information about the lanes, since we did not want the planning module's messages to depend on the map data structure.

The ROS message of a route section contains the following three elements for each route section.

"},{"location":"planning/autoware_mission_planner/#goal-validation","title":"Goal Validation","text":"

The mission planner has a control mechanism to validate the given goal pose and create a route. If the angle between the yaw of the goal pose and the orientation of the lanelet containing the goal is greater than the goal_angle_threshold parameter, the goal is rejected. Another control mechanism creates a footprint of the goal pose according to the dimensions of the vehicle and checks whether this footprint is within the lanelets. If the goal footprint exceeds the lanelets, the goal is rejected.
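
As an illustration, the angle check can be sketched as follows. This is a minimal sketch, not the actual implementation; tf2::getYaw comes from tf2/utils, while getLaneletYawAt is a hypothetical helper that returns the orientation of the goal's lanelet at the goal position.

bool isGoalAngleValid(const geometry_msgs::msg::Pose & goal_pose, const double goal_angle_threshold)\n{\nconst double lanelet_yaw = getLaneletYawAt(goal_pose.position);  // hypothetical helper: lanelet orientation at the goal\nconst double goal_yaw = tf2::getYaw(goal_pose.orientation);\n// normalize the difference to [-pi, pi] before comparing\nconst double diff = std::atan2(std::sin(goal_yaw - lanelet_yaw), std::cos(goal_yaw - lanelet_yaw));\nreturn std::abs(diff) <= goal_angle_threshold;  // false means the goal is rejected\n}\n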

The image below shows sample goal pose validation cases.

"},{"location":"planning/autoware_mission_planner/#implementation","title":"Implementation","text":""},{"location":"planning/autoware_mission_planner/#mission-planner_1","title":"Mission Planner","text":"

Two callbacks (goal and check points) trigger route planning. The routing graph, which plans routes in Lanelet2, must be created before these callbacks run, and it is created in the vector map callback.

plan route is explained in detail in the following section.

"},{"location":"planning/autoware_mission_planner/#route-planner","title":"Route Planner","text":"

plan route is executed with check points, including the current ego pose and the goal pose.

plan path between each check points first calculates the lanes closest to the start and goal poses. Then the routing graph of Lanelet2 plans the shortest path between them.

initialize route lanelets initializes the route handler and calculates route_lanelets. route_lanelets, all of which will be registered in route sections, are the lanelets adjacent to those in the planned path, and they are used when planning lane changes. route_lanelets are calculated as follows (a sketch follows the list):

  1. All the neighboring (right and left) lanes of the planned path that are lane-changeable are memorized as route_lanelets.
  2. All the neighboring (right and left) lanes of the planned path that are not lane-changeable are memorized as candidate_lanelets.
  3. If both the following and previous lanelets of a candidate_lanelet are route_lanelets, the candidate_lanelet is registered as a route_lanelet.
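
A minimal sketch of these three steps, assuming hypothetical helpers neighbors, isLaneChangeable, previousOf, followingOf and contains:

// step 1 and step 2: sort neighbors into route_lanelets and candidate_lanelets\nstd::vector<lanelet::ConstLanelet> route_lanelets;\nstd::vector<lanelet::ConstLanelet> candidate_lanelets;\nfor (const auto & lane : planned_path_lanelets) {\nfor (const auto & neighbor : neighbors(lane)) {  // right and left lanes\nif (isLaneChangeable(lane, neighbor)) {\nroute_lanelets.push_back(neighbor);\n} else {\ncandidate_lanelets.push_back(neighbor);\n}\n}\n}\n// step 3: promote candidates whose previous and following lanelets are already route_lanelets\nfor (const auto & candidate : candidate_lanelets) {\nif (contains(route_lanelets, previousOf(candidate)) && contains(route_lanelets, followingOf(candidate))) {\nroute_lanelets.push_back(candidate);\n}\n}\n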

get preferred lanelets extracts preferred_primitive from route_lanelets with the route handler.

create route sections extracts primitives from route_lanelets for each route section with the route handler, and creates route sections.

"},{"location":"planning/autoware_mission_planner/#rerouting","title":"Rerouting","text":"

Reroute here means changing the route while driving. Unlike initial route setting, a certain distance must be kept between the vehicle and the point where the route changes. If the ego vehicle is not in the autonomous driving state, the safety checking process is skipped.

There are three use cases that require rerouting.

"},{"location":"planning/autoware_mission_planner/#route-change-api","title":"Route change API","text":"

It is used when changing the destination while driving or when driving a divided loop route. When the vehicle is driving on an MRM route, normal rerouting via this interface is not allowed.

"},{"location":"planning/autoware_mission_planner/#emergency-route","title":"Emergency route","text":"

This is the interface for the MRM that pulls over to the road shoulder. The vehicle has to stop as soon as possible, so a reroute is required. The MRM route has priority over the normal route. When the MRM route is cleared, the planner tries to return to the normal route, also with a rerouting safety check.

"},{"location":"planning/autoware_mission_planner/#goal-modification","title":"Goal modification","text":"

This is a goal change by a planning component, e.g. to pull over or to avoid parked vehicles. If the modified goal is outside the calculated route, a reroute is required. This goal modification is executed by checking the local environment and path safety as the vehicle actually approaches the destination. The modification is allowed for both the normal_route and the mrm_route. The new route generated here is sent to the AD API so that it can also be referenced by the application. Note, however, that these specifications are subject to change in the future.

"},{"location":"planning/autoware_mission_planner/#rerouting-limitations","title":"Rerouting Limitations","text":""},{"location":"planning/autoware_mission_planner/#limitations","title":"Limitations","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/","title":"Objects Of Interest Marker Interface","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#objects-of-interest-marker-interface","title":"Objects Of Interest Marker Interface","text":"

Warning

Under Construction

"},{"location":"planning/autoware_objects_of_interest_marker_interface/#purpose","title":"Purpose","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_obstacle_cruise_planner/","title":"Obstacle Cruise Planner","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#obstacle-cruise-planner","title":"Obstacle Cruise Planner","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#overview","title":"Overview","text":"

The autoware_obstacle_cruise_planner package has the following modules.

"},{"location":"planning/autoware_obstacle_cruise_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::Trajectory input trajectory ~/input/objects autoware_perception_msgs::PredictedObjects dynamic objects ~/input/odometry nav_msgs::msg::Odometry ego odometry"},{"location":"planning/autoware_obstacle_cruise_planner/#output-topics","title":"Output topics","text":"Name Type Description ~/output/trajectory autoware_planning_msgs::Trajectory output trajectory ~/output/velocity_limit tier4_planning_msgs::VelocityLimit velocity limit for cruising ~/output/clear_velocity_limit tier4_planning_msgs::VelocityLimitClearCommand clear command for velocity limit"},{"location":"planning/autoware_obstacle_cruise_planner/#design","title":"Design","text":"

Design for the following functions is defined here.

A data structure for cruise and stop planning is as follows. This planner data is created first, and then sent to the planning algorithm.

struct PlannerData\n{\nrclcpp::Time current_time;\nautoware_planning_msgs::msg::Trajectory traj;\ngeometry_msgs::msg::Pose current_pose;\ndouble ego_vel;\ndouble current_acc;\nstd::vector<Obstacle> target_obstacles;\n};\n
struct Obstacle\n{\nrclcpp::Time stamp;  // This is not the current stamp, but when the object was observed.\ngeometry_msgs::msg::Pose pose;  // interpolated with the current stamp\nbool orientation_reliable;\nTwist twist;\nbool twist_reliable;\nObjectClassification classification;\nstd::string uuid;\nShape shape;\nstd::vector<PredictedPath> predicted_paths;\n};\n
"},{"location":"planning/autoware_obstacle_cruise_planner/#behavior-determination-against-obstacles","title":"Behavior determination against obstacles","text":"

Obstacles for cruising, stopping and slowing down are selected in this order based on their pose and velocity. Obstacles that are not in front of the ego are ignored.

The behavior determination flowchart is shown below.

"},{"location":"planning/autoware_obstacle_cruise_planner/#determine-cruise-vehicles","title":"Determine cruise vehicles","text":"

The obstacles meeting the following condition are determined as obstacles for cruising.

Parameter Type Description common.cruise_obstacle_type.inside.unknown bool flag to consider unknown objects for cruising common.cruise_obstacle_type.inside.car bool flag to consider car objects for cruising common.cruise_obstacle_type.inside.truck bool flag to consider truck objects for cruising ... bool ... common.cruise_obstacle_type.outside.unknown bool flag to consider unknown objects for cruising common.cruise_obstacle_type.outside.car bool flag to consider car objects for cruising common.cruise_obstacle_type.outside.truck bool flag to consider truck objects for cruising ... bool ... behavior_determination.cruise.max_lat_margin double maximum lateral margin for cruise obstacles behavior_determination.obstacle_velocity_threshold_from_cruise_to_stop double maximum obstacle velocity for cruise obstacle inside the trajectory behavior_determination.cruise.outside_obstacle.obstacle_velocity_threshold double maximum obstacle velocity for cruise obstacle outside the trajectory behavior_determination.cruise.outside_obstacle.ego_obstacle_overlap_time_threshold double maximum overlap time of the collision between the ego and obstacle"},{"location":"planning/autoware_obstacle_cruise_planner/#yield-for-vehicles-that-might-cut-in-into-the-egos-lane","title":"Yield for vehicles that might cut in into the ego's lane","text":"

It is also possible to yield (cruise) behind vehicles in neighboring lanes if those vehicles might cut into the ego vehicle's current lane.

The obstacles meeting the following condition are determined as obstacles for yielding (cruising).

If the above conditions are met, the ego vehicle will cruise behind the moving obstacle, yielding to it so that it can cut into the ego's lane to avoid the stopped obstacle.

"},{"location":"planning/autoware_obstacle_cruise_planner/#determine-stop-vehicles","title":"Determine stop vehicles","text":"

Among the obstacles that are not for cruising, those meeting the following condition are determined as obstacles for stopping.

Parameter Type Description common.stop_obstacle_type.unknown bool flag to consider unknown objects for stopping common.stop_obstacle_type.car bool flag to consider car objects for stopping common.stop_obstacle_type.truck bool flag to consider truck objects for stopping ... bool ... behavior_determination.stop.max_lat_margin double maximum lateral margin for stop obstacles behavior_determination.crossing_obstacle.obstacle_velocity_threshold double maximum crossing obstacle velocity to ignore behavior_determination.obstacle_velocity_threshold_from_stop_to_cruise double maximum obstacle velocity for stop"},{"location":"planning/autoware_obstacle_cruise_planner/#determine-slow-down-vehicles","title":"Determine slow down vehicles","text":"

Among the obstacles that are neither for cruising nor for stopping, those meeting the following condition are determined as obstacles for slowing down.

Parameter Type Description common.slow_down_obstacle_type.unknown bool flag to consider unknown objects for slowing down common.slow_down_obstacle_type.car bool flag to consider car objects for slowing down common.slow_down_obstacle_type.truck bool flag to consider truck objects for slowing down ... bool ... behavior_determination.slow_down.max_lat_margin double maximum lateral margin for slow down obstacles"},{"location":"planning/autoware_obstacle_cruise_planner/#note","title":"NOTE","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#1-crossing-obstacles","title":"*1: Crossing obstacles","text":"

A crossing obstacle is an object whose yaw angle relative to the ego's trajectory is smaller than behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold.

Parameter Type Description behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold double maximum angle against the ego's trajectory to judge the obstacle is crossing the trajectory [rad]"},{"location":"planning/autoware_obstacle_cruise_planner/#2-enough-collision-time-margin","title":"*2: Enough collision time margin","text":"

We predict the collision area and the collision time, assuming constant-velocity motion for the ego and the predicted path for the obstacle. Then, we calculate the collision time margin, which is the difference between the time when the ego will be inside the collision area and the time when the obstacle will be inside it. When this time margin is smaller than behavior_determination.stop.crossing_obstacle.collision_time_margin, the margin is not considered sufficient.
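
In pseudo-form, assuming the entry times into the predicted collision area have already been computed, the judgment reduces to:

// the margin is the gap between the ego's and the obstacle's entry times into the collision area\nconst double collision_time_margin = std::abs(t_ego_enters_collision_area - t_obstacle_enters_collision_area);\n// if the margin is not enough, the crossing obstacle becomes a stop target\nconst bool margin_is_enough = collision_time_margin >= collision_time_margin_param;  // behavior_determination.stop.crossing_obstacle.collision_time_margin\n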

Parameter Type Description behavior_determination.stop.crossing_obstacle.collision_time_margin double maximum collision time margin of the ego and obstacle"},{"location":"planning/autoware_obstacle_cruise_planner/#stop-planning","title":"Stop planning","text":"Parameter Type Description common.min_strong_accel double ego's minimum acceleration to stop [m/ss] common.safe_distance_margin double distance with obstacles for stop [m] common.terminal_safe_distance_margin double terminal_distance with obstacles for stop, which cannot be exceed safe distance margin [m]

The role of stop planning is to keep a safe distance from static vehicle objects and from dynamic/static non-vehicle objects.

Stop planning simply inserts a stop point into the trajectory to keep a distance from obstacles. The safe distance is parameterized as common.safe_distance_margin. When the ego stops at the end of the trajectory and an obstacle is at the same point, the safe distance becomes terminal_safe_distance_margin.

When inserting the stop point, the acceleration required for the ego to stop in front of the stop point is calculated. If this acceleration is less than common.min_strong_accel, stop planning is cancelled, since this package does not assume strong sudden braking for emergencies.
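
Under a constant-deceleration assumption, this cancel condition can be sketched as follows (variable names are illustrative):

// required (negative) acceleration to stop over dist_to_stop from ego_vel, assuming constant deceleration: 0 = v^2 + 2 a d\nconst double required_accel = -(ego_vel * ego_vel) / (2.0 * dist_to_stop);\nif (required_accel < min_strong_accel) {  // common.min_strong_accel is a negative value\n// cancel stop planning: this package does not assume strong sudden braking\n}\n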

"},{"location":"planning/autoware_obstacle_cruise_planner/#cruise-planning","title":"Cruise planning","text":"Parameter Type Description common.safe_distance_margin double minimum distance with obstacles for cruise [m]

The role of cruise planning is to keep a safe distance from dynamic vehicle objects with a smooth velocity transition. This includes not only cruising behind a front vehicle, but also reacting to cut-in and cut-out vehicles.

The safe distance is calculated dynamically based on Responsibility-Sensitive Safety (RSS) with the following equation.

\\[ d_{rss} = v_{ego} t_{idling} + \\frac{1}{2} a_{ego} t_{idling}^2 + \\frac{v_{ego}^2}{2 a_{ego}} - \\frac{v_{obstacle}^2}{2 a_{obstacle}}, \\]

where \\(d_{rss}\\) is the calculated safe distance, \\(t_{idling}\\) is the idling time for the ego to detect the front vehicle's deceleration, \\(v_{ego}\\) is the ego's current velocity, \\(v_{obstacle}\\) is the front obstacle's current velocity, \\(a_{ego}\\) is the ego's acceleration, and \\(a_{obstacle}\\) is the obstacle's acceleration. These values are parameterized as follows. Other common values, such as the ego's minimum acceleration, are defined in common.param.yaml.

Parameter Type Description common.idling_time double idling time for the ego to detect the front vehicle starting deceleration [s] common.min_ego_accel_for_rss double ego's acceleration for RSS [m/ss] common.min_object_accel_for_rss double front obstacle's acceleration for RSS [m/ss]
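
For reference, the RSS equation above transcribes directly into code; this is a sketch, with the arguments corresponding to the parameters in the table above.

// d_rss = v_ego * t_idling + 1/2 * a_ego * t_idling^2 + v_ego^2 / (2 a_ego) - v_obstacle^2 / (2 a_obstacle)\ndouble calcRssDistance(const double v_ego, const double v_obstacle, const double t_idling, const double a_ego, const double a_obstacle)\n{\nreturn v_ego * t_idling + 0.5 * a_ego * t_idling * t_idling + (v_ego * v_ego) / (2.0 * a_ego) - (v_obstacle * v_obstacle) / (2.0 * a_obstacle);\n}\n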

The detailed formulation is as follows.

\\[ \\begin{align} d_{error} & = d - d_{rss} \\\\ d_{normalized} & = lpf(d_{error} / d_{obstacle}) \\\\ d_{quad, normalized} & = sign(d_{normalized}) \\cdot d_{normalized} \\cdot d_{normalized} \\\\ v_{pid} & = pid(d_{quad, normalized}) \\\\ v_{add} & = v_{pid} > 0 ? v_{pid} \\cdot w_{acc} : v_{pid} \\\\ v_{target} & = max(v_{ego} + v_{add}, v_{min, cruise}) \\end{align} \\] Variable Description d actual distance to obstacle d_{rss} ideal distance to obstacle based on RSS v_{min, cruise} min_cruise_target_vel w_{acc} output_ratio_during_accel lpf(val) apply low-pass filter to val pid(val) apply pid to val"},{"location":"planning/autoware_obstacle_cruise_planner/#block-diagram","title":"Block diagram","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#slow-down-planning","title":"Slow down planning","text":"Parameter Type Description slow_down.labels vector(string) A vector of labels for customizing obstacle-label-based slow down behavior. Each label represents an obstacle type that will be treated differently when applying slow down. The possible labels are (\"default\" (Mandatory), \"unknown\",\"car\",\"truck\",\"bus\",\"trailer\",\"motorcycle\",\"bicycle\" or \"pedestrian\") slow_down.default.static.min_lat_velocity double minimum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_velocity double maximum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.min_lat_margin double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_margin double maximum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.moving.min_lat_velocity double minimum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_velocity double maximum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.min_lat_margin double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_margin double maximum lateral margin to linearly calculate slow down velocity [m]. 
Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving (optional) slow_down.\"label\".(static & moving).min_lat_velocity double minimum velocity to linearly calculate slow down velocity [m/s]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value (optional) slow_down.\"label\".(static & moving).max_lat_velocity double maximum velocity to linearly calculate slow down velocity [m/s]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value (optional) slow_down.\"label\".(static & moving).min_lat_margin double minimum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value (optional) slow_down.\"label\".(static & moving).max_lat_margin double maximum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value

The role of slow down planning is to insert a slow down velocity into the trajectory where the trajectory points are close to obstacles. The parameters can be customized depending on the obstacle type (see slow_down.labels), making it possible to adjust the slow down behavior depending on whether the obstacle is a pedestrian, bicycle, car, etc. Each obstacle type has a static and a moving parameter set, so it is possible to customize the slow down response of the ego vehicle according to the obstacle type and whether it is moving or not. If an obstacle is determined to be moving, the corresponding moving set of parameters is used to compute the vehicle slow down; otherwise, the static parameters are used. The static/moving separation is useful for customizing the ego vehicle's slow down behavior, for example to slow down more significantly when passing stopped vehicles that might cause occlusion or suddenly open their doors.

An obstacle is classified as static if its total speed is less than the moving_object_speed_threshold parameter. Furthermore, a hysteresis-based approach is used to avoid chattering: it uses the moving_object_hysteresis_range parameter and the obstacle's previous state (moving or static) to determine whether the obstacle is moving. In other words, if an obstacle was previously classified as static, it will not change its classification to moving unless its total speed is greater than moving_object_speed_threshold + moving_object_hysteresis_range. Likewise, an obstacle previously classified as moving will only change to static if its speed is lower than moving_object_speed_threshold - moving_object_hysteresis_range.
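
This hysteresis logic can be sketched as follows (a minimal illustration, not the actual implementation):

bool isObstacleMoving(const double speed, const bool was_moving, const double speed_threshold, const double hysteresis_range)\n{\nif (was_moving) {\nreturn speed > speed_threshold - hysteresis_range;  // keeps the moving state until the speed is clearly low\n}\nreturn speed > speed_threshold + hysteresis_range;  // keeps the static state until the speed is clearly high\n}\n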

The closest point on the obstacle to the ego's trajectory is found. Then, the slow down velocity is calculated by linear interpolation based on the distance between that point and the trajectory, as follows.

Variable Description v_{out} calculated velocity for slow down v_{min} slow_down.min_lat_velocity v_{max} slow_down.max_lat_velocity l_{min} slow_down.min_lat_margin l_{max} slow_down.max_lat_margin l'_{max} behavior_determination.slow_down.max_lat_margin

The calculated velocity is inserted into the trajectory where the obstacle is inside the area defined by behavior_determination.slow_down.max_lat_margin.
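
A sketch of the linear interpolation, assuming the distance is clamped to the [l_min, l_max] range implied by the variables above (the actual implementation may differ):

// maps the lateral distance l in [l_min, l_max] linearly to a velocity in [v_min, v_max]\ndouble calcSlowDownVelocity(const double l, const double v_min, const double v_max, const double l_min, const double l_max)\n{\nconst double ratio = std::clamp((l - l_min) / (l_max - l_min), 0.0, 1.0);\nreturn v_min + ratio * (v_max - v_min);  // a smaller lateral margin leads to a lower velocity\n}\n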

"},{"location":"planning/autoware_obstacle_cruise_planner/#implementation","title":"Implementation","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#flowchart","title":"Flowchart","text":"

autoware_obstacle_cruise_planner consists of the following successive functions.

Various algorithms for stop and cruise planning will be implemented, and one of them is selected depending on the use case. The core algorithm implementation, generateTrajectory, depends on the selected algorithm.

"},{"location":"planning/autoware_obstacle_cruise_planner/#algorithm-selection-for-cruise-planner","title":"Algorithm selection for cruise planner","text":"

Currently, only a PID-based planner is supported. Each planner is explained in the following sections.

Parameter Type Description common.planning_method string cruise and stop planning algorithm, selected from \"pid_base\""},{"location":"planning/autoware_obstacle_cruise_planner/#pid-based-planner","title":"PID-based planner","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#stop-planning_1","title":"Stop planning","text":"

In the pid_based_planner namespace,

Parameter Type Description obstacle_velocity_threshold_from_cruise_to_stop double obstacle velocity threshold to switch from cruise to stop [m/s]

Only one obstacle is targeted for stop planning. It is the obstacle among the candidates whose velocity is less than obstacle_velocity_threshold_from_cruise_to_stop and which is nearest to the ego along the trajectory. A stop point is inserted keeping common.safe_distance_margin distance between the ego and the obstacle.

Note that, as explained in the stop planning design, a stop plan that requires a strong acceleration (less than common.min_strong_accel) will be canceled.

"},{"location":"planning/autoware_obstacle_cruise_planner/#cruise-planning_1","title":"Cruise planning","text":"

In the pid_based_planner namespace,

Parameter Type Description kp double p gain for pid control [-] ki double i gain for pid control [-] kd double d gain for pid control [-] output_ratio_during_accel double The output velocity will be multiplied by the ratio during acceleration to follow the front vehicle. [-] vel_to_acc_weight double target acceleration is target velocity * vel_to_acc_weight [-] min_cruise_target_vel double minimum target velocity during cruise [m/s]

In order to keep the safe distance, the target velocity and acceleration are calculated and sent as an external velocity limit to the velocity smoothing package (motion_velocity_smoother by default). The target velocity and acceleration are calculated with the PID controller according to the error between the reference safe distance and the actual distance.
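
Following the formulation in the design section above, the target velocity update can be sketched as follows; lpf and pid are hypothetical placeholders for the low-pass filter and PID controller objects.

const double d_error = dist_to_obstacle - d_rss;\nconst double d_normalized = lpf.filter(d_error / dist_to_obstacle);  // low-pass filtered, normalized error\nconst double d_quad_normalized = std::copysign(d_normalized * d_normalized, d_normalized);  // sign-preserving square\nconst double v_pid = pid.calc(d_quad_normalized);\nconst double v_add = v_pid > 0.0 ? v_pid * output_ratio_during_accel : v_pid;\nconst double v_target = std::max(ego_vel + v_add, min_cruise_target_vel);  // sent as an external velocity limit\n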

"},{"location":"planning/autoware_obstacle_cruise_planner/#optimization-based-planner","title":"Optimization-based planner","text":"

under construction

"},{"location":"planning/autoware_obstacle_cruise_planner/#minor-functions","title":"Minor functions","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#prioritization-of-behavior-modules-stop-point","title":"Prioritization of behavior module's stop point","text":"

When stopping for a pedestrian walking on a crosswalk, the behavior module inserts zero velocity into the trajectory in front of the crosswalk. autoware_obstacle_cruise_planner's stop planning also runs, and the ego may not reach the behavior module's stop point, since the safe distance defined in autoware_obstacle_cruise_planner may be longer than the behavior module's. To resolve this misalignment of the stop point between the behavior module and autoware_obstacle_cruise_planner, common.min_behavior_stop_margin is defined. In the crosswalk case described above, autoware_obstacle_cruise_planner inserts the stop point with a distance of at least common.min_behavior_stop_margin between the ego and the obstacle.

Parameter Type Description common.min_behavior_stop_margin double minimum stop margin when stopping with the behavior module enabled [m]"},{"location":"planning/autoware_obstacle_cruise_planner/#a-function-to-keep-the-closest-stop-obstacle-in-target-obstacles","title":"A function to keep the closest stop obstacle in target obstacles","text":"

In order to keep the closest stop obstacle in the target obstacles, we check in the checkConsistency function whether it has disappeared from the target obstacles. If the previous closest stop obstacle has been removed from the list, we keep it in the list for stop_obstacle_hold_time_threshold seconds. Note that if a new stop obstacle appears while the previous closest obstacle is removed from the list, we do not add the previous obstacle to the target obstacles again.

Parameter Type Description behavior_determination.stop_obstacle_hold_time_threshold double maximum time for holding closest stop obstacle [s]"},{"location":"planning/autoware_obstacle_cruise_planner/#how-to-debug","title":"How To Debug","text":"

How to debug can be seen here.

"},{"location":"planning/autoware_obstacle_cruise_planner/#known-limits","title":"Known Limits","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/","title":"Debug","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#debug","title":"Debug","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#debug-visualization","title":"Debug visualization","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#detection-area","title":"Detection area","text":"

Green polygons, which represent the detection area, are visualized by detection_polygons in the ~/debug/marker topic. When determining each behavior (cruise, stop, and slow down), if behavior_determination.*.max_lat_margin is not zero, the polygons are expanded by the additional width.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#collision-points","title":"Collision points","text":"

Red points, which are collision points with obstacles, are visualized by *_collision_points for each behavior in the ~/debug/marker topic.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#obstacle-for-cruise","title":"Obstacle for cruise","text":"

An orange sphere, which represents an obstacle for cruising, is visualized by obstacles_to_cruise in the ~/debug/marker topic.

An orange wall, which represents the safe cruise distance when the ego's front reaches it, is visualized in the ~/debug/cruise/virtual_wall topic.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#obstacle-for-stop","title":"Obstacle for stop","text":"

A red sphere, which represents an obstacle for stopping, is visualized by obstacles_to_stop in the ~/debug/marker topic.

A red wall, which represents the safe stop distance when the ego's front reaches it, is visualized in the ~/virtual_wall topic.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#obstacle-for-slow-down","title":"Obstacle for slow down","text":"

A yellow sphere, which represents an obstacle for slowing down, is visualized by obstacles_to_slow_down in the ~/debug/marker topic.

A yellow wall, which represents the safe slow down distance when the ego's front reaches it, is visualized in the ~/debug/slow_down/virtual_wall topic.

"},{"location":"planning/autoware_obstacle_stop_planner/","title":"Obstacle Stop Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-stop-planner","title":"Obstacle Stop Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#overview","title":"Overview","text":"

obstacle_stop_planner has the following modules.

"},{"location":"planning/autoware_obstacle_stop_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/trajectory autoware_planning_msgs::Trajectory trajectory ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/odometry nav_msgs::Odometry vehicle velocity ~/input/dynamic_objects autoware_perception_msgs::PredictedObjects dynamic objects ~/input/expand_stop_range tier4_planning_msgs::msg::ExpandStopRange expand stop range"},{"location":"planning/autoware_obstacle_stop_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/autoware_obstacle_stop_planner/#common-parameter","title":"Common Parameter","text":"Name Type Description Default Range max_vel float max velocity limit [m/s] 11.1 N/A normal.min_acc float min deceleration [m/ss] -0.5 N/A normal.max_acc float max acceleration [m/ss] 1 N/A normal.min_jerk float min jerk [m/sss] -0.5 N/A normal.max_jerk float max jerk [m/sss] 1 N/A limit.min_acc float min deceleration limit [m/ss] -2.5 N/A limit.max_acc float max acceleration [m/ss] 1 N/A limit.min_jerk float min jerk [m/sss] -1.5 N/A limit.max_jerk float max jerk [m/sss] 1.5 N/A Parameter Type Description enable_slow_down bool enable slow down planner [-] max_velocity double max velocity [m/s] chattering_threshold double even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] enable_z_axis_obstacle_filtering bool filter obstacles in z axis (height) [-] z_axis_filtering_buffer double additional buffer for z axis filtering [m] use_predicted_objects bool whether to use predicted objects for collision and slowdown detection [-] predicted_object_filtering_threshold double threshold for filtering predicted objects [valid only publish_obstacle_polygon true] [m] publish_obstacle_polygon bool if use_predicted_objects is true, node publishes collision polygon [-]"},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-stop-planner_1","title":"Obstacle Stop Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#role","title":"Role","text":"

This module inserts a stop point before the obstacle, with a margin. In the nominal case, the margin is the sum of baselink_to_front and max_longitudinal_margin, where baselink_to_front is the distance between the base link (the center of the rear-wheel axle) and the front of the car. The detection area is generated along the processed trajectory as in the following figure. (This module cuts off the input trajectory behind the ego position and decimates the trajectory points to reduce computational cost.)

parameters for obstacle stop planner

target for obstacle stop planner

If another stop point has already been inserted by other modules within max_longitudinal_margin, the margin is instead the sum of baselink_to_front and min_longitudinal_margin. This feature exists to avoid stopping at an unnatural position. (For example, without this feature the ego would stop unnaturally far away from the stop line of a crosswalk that pedestrians are crossing.)

minimum longitudinal margin

The module searches for the obstacle pointcloud within the detection area. When a pointcloud is found, the Adaptive Cruise Controller module starts to work. The stop point is inserted into the trajectory only when the Adaptive Cruise Controller module does not insert a target velocity. The stop point is the point with zero velocity.

"},{"location":"planning/autoware_obstacle_stop_planner/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it will overrun the stopping position, which should be strictly observed, when it starts moving in order to approach a nearby stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters from the module's stop point, the module judges that the vehicle has already stopped for that stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.
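
A minimal sketch of this judgment (variable names are illustrative; the stopped check threshold is an assumption):

const bool ego_is_stopped = std::abs(ego_vel) < 0.01;  // assumption: near-zero velocity counts as stopped\nif (ego_is_stopped && dist_to_module_stop_point < hold_stop_margin_distance) {\n// the vehicle is regarded as already stopped for this stop point: keep the current position\ninsertStopAtCurrentPosition();  // hypothetical helper\n}\n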

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/autoware_obstacle_stop_planner/#parameters","title":"Parameters","text":"Name Type Description Default Range chattering_threshold float even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] 0.5 N/A lowpass_gain float gain parameter for low pass filter [-] 0.9 N/A max_velocity float max velocity [m/s] 20.0 N/A enable_slow_down boolean whether to use slow down planner [-] false N/A enable_z_axis_obstacle_filtering boolean filter obstacles in z axis (height) [-] true N/A z_axis_filtering_buffer float additional buffer for z axis filtering [m] 0.0 N/A voxel_grid_x float voxel grid x parameter for filtering pointcloud [m] 0.05 N/A voxel_grid_y float voxel grid y parameter for filtering pointcloud [m] 0.05 N/A voxel_grid_z float voxel grid z parameter for filtering pointcloud [m] 100000.0 N/A use_predicted_objects boolean whether to use predicted objects [-] false N/A publish_obstacle_polygon boolean whether to publish obstacle polygon [-] false N/A predicted_object_filtering_threshold float threshold for filtering predicted objects (valid only publish_obstacle_polygon true) [m] 1.5 N/A stop_position.max_longitudinal_margin float stop margin distance from obstacle on the path [m] 5.0 N/A stop_position.max_longitudinal_margin_behind_goal float stop margin distance from obstacle behind the goal on the path [m] 3.0 N/A stop_position.min_longitudinal_margin float stop margin distance when any other stop point is inserted in stop margin [m] 2.0 N/A stop_position.hold_stop_margin_distance float the ego keeps stopping if the ego is in this margin [m] 0.0 N/A detection_area.lateral_margin float margin [m] 0.0 N/A detection_area.vehicle_lateral_margin float margin of vehicle footprint [m] 0.0 N/A detection_area.pedestrian_lateral_margin float margin of pedestrian footprint [m] 0.0 N/A detection_area.unknown_lateral_margin float margin of unknown footprint [m] 0.0 N/A detection_area.step_length float step length for pointcloud search range [m] 1.0 N/A detection_area.enable_stop_behind_goal_for_obstacle boolean enable extending the trajectory after the goal lane for obstacle detection true N/A slow_down_section.longitudinal_forward_margin float margin distance from slow down point to vehicle front [m] 5.0 N/A slow_down_section.longitudinal_backward_margin float margin distance from slow down point to vehicle rear [m] 5.0 N/A slow_down_section.longitudinal_margin_span float fineness param for relaxing slow down margin (use this param if consider_constraints is True) [m] -0.1 N/A slow_down_section.min_longitudinal_forward_margin float min margin for relaxing slow down margin (use this param if consider_constraints is True) [m] 1.0 N/A detection_area.lateral_margin float offset from vehicle side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A detection_area.vehicle_lateral_margin float offset from vehicle side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A detection_area.pedestrian_lateral_margin float offset from pedestrian side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A detection_area.unknown_lateral_margin float offset from unknown side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A target_velocity.max_slow_down_velocity float max slow down velocity (use this param if consider_constraints is False) [m/s] 1.38 N/A target_velocity.min_slow_down_velocity float min slow down velocity (use this param if consider_constraints is False) [m/s] 0.28 N/A 
target_velocity.slow_down_velocity float target slow down velocity (use this param if consider_constraints is True) [m/s] 1.38 N/A constraints.jerk_min_slow_down float min slow down jerk constraint [m/sss] -0.3 N/A constraints.jerk_span float fineness param for planning deceleration jerk [m/sss] -0.01 N/A constraints.jerk_start float init jerk used for deceleration planning [m/sss] -0.1 N/A slow_down_planner.consider_constraints boolean set 'True', if no decel plan is found under the jerk/decel constraints, relax the target slow down velocity [-] false N/A slow_down_planner.velocity_threshold_decel_complete float used to judge whether the ego velocity converges to the target slow down velocity [m/s] 0.2 N/A slow_down_planner.acceleration_threshold_decel_complete float used to judge whether the ego velocity converges to the target slow down velocity [m/ss] 0.1 N/A"},{"location":"planning/autoware_obstacle_stop_planner/#stop-position","title":"Stop position","text":"Parameter Type Description max_longitudinal_margin double margin between obstacle and the ego's front [m] max_longitudinal_margin_behind_goal double margin between obstacle and the ego's front when the stop point is behind the goal [m] min_longitudinal_margin double if any obstacle exists within max_longitudinal_margin, this module sets the stop margin to min_longitudinal_margin [m] hold_stop_margin_distance double parameter for restart prevention (see the section above) [m]"},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-detection-area","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin double lateral margin from the vehicle footprint for collision obstacle detection area [m] step_length double step length for pointcloud search range [m] enable_stop_behind_goal_for_obstacle bool enable extending the trajectory after the goal lane for obstacle detection"},{"location":"planning/autoware_obstacle_stop_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_obstacle_stop_planner/#slow-down-planner","title":"Slow Down Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#role_1","title":"Role","text":"

This module inserts a slow down section before the obstacle, with a forward margin and a backward margin. The forward margin is the sum of baselink_to_front and longitudinal_forward_margin, and the backward margin is the sum of baselink_to_front and longitudinal_backward_margin. The ego keeps the slow down velocity within the slow down section. The velocity is calculated with the following equation.

\\(v_{target} = v_{min} + \\frac{l_{ld} - l_{vw}/2}{l_{margin}} (v_{max} - v_{min} )\\)

The above equation means that the smaller the lateral deviation of the pointcloud, the lower the velocity of the slow down section.
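
For reference, the equation transcribes directly into code as follows; l_ld is the lateral deviation of the pointcloud, l_vw the vehicle width, and l_margin the lateral margin.

// v_target = v_min + (l_ld - l_vw / 2) / l_margin * (v_max - v_min)\ndouble calcSlowDownTargetVelocity(const double v_min, const double v_max, const double l_ld, const double l_vw, const double l_margin)\n{\nreturn v_min + (l_ld - l_vw / 2.0) / l_margin * (v_max - v_min);\n}\n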

parameters for slow down planner

target for slow down planner"},{"location":"planning/autoware_obstacle_stop_planner/#parameters_1","title":"Parameters","text":"Name Type Description Default Range adaptive_cruise_control.use_object_to_estimate_vel boolean use tracking objects for estimating object velocity or not true N/A adaptive_cruise_control.use_pcl_to_estimate_vel boolean use pcl for estimating object velocity or not true N/A adaptive_cruise_control.consider_obj_velocity boolean consider forward vehicle velocity to ACC or not true N/A adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc float start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] 1.5 N/A adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc float stop adaptive cruise control when the velocity of the forward obstacle falls below this value [m/s] 1.0 N/A adaptive_cruise_control.emergency_stop_acceleration float supposed minimum acceleration (deceleration) in emergency stop [m/ss] -5.0 N/A adaptive_cruise_control.emergency_stop_idling_time float supposed idling time to start emergency stop [s] 0.5 N/A adaptive_cruise_control.min_dist_stop float minimum distance of emergency stop [m] 4.0 N/A adaptive_cruise_control.obstacle_emergency_stop_acceleration float supposed minimum acceleration (deceleration) in emergency stop [m/ss] -5.0 N/A adaptive_cruise_control.max_standard_acceleration float supposed maximum acceleration in active cruise control [m/ss] 0.5 N/A adaptive_cruise_control.min_standard_acceleration float supposed minimum acceleration (deceleration) in active cruise control -1.0 N/A adaptive_cruise_control.standard_idling_time float supposed idling time to react object in active cruise control [s] 0.5 N/A adaptive_cruise_control.min_dist_standard float minimum distance in active cruise control [m] 4.0 N/A adaptive_cruise_control.obstacle_min_standard_acceleration float supposed minimum acceleration of forward obstacle [m/ss] -1.5 N/A adaptive_cruise_control.margin_rate_to_change_vel float margin to insert upper velocity [-] 0.3 N/A adaptive_cruise_control.use_time_compensation_to_calc_distance boolean use time-compensation to calculate distance to forward vehicle true N/A adaptive_cruise_control.p_coefficient_positive float coefficient P in PID control (used when target dist -current_dist >=0) [-] 0.1 N/A adaptive_cruise_control.p_coefficient_negative float coefficient P in PID control (used when target dist -current_dist <0) [-] 0.3 N/A adaptive_cruise_control.d_coefficient_positive float coefficient D in PID control (used when delta_dist >=0) [-] 0.0 N/A adaptive_cruise_control.d_coefficient_negative float coefficient D in PID control (used when delta_dist <0) [-] 0.2 N/A adaptive_cruise_control.object_polygon_length_margin float The distance to extend the polygon length the object in pointcloud-object matching [m] 2.0 N/A adaptive_cruise_control.object_polygon_width_margin float The distance to extend the polygon width the object in pointcloud-object matching [m] 0.5 N/A adaptive_cruise_control.valid_estimated_vel_diff_time float Maximum time difference treated as continuous points in speed estimation using a point cloud [s] 1.0 N/A adaptive_cruise_control.valid_vel_que_time float Time width of information used for speed estimation in speed estimation using a point cloud [s] 0.5 N/A adaptive_cruise_control.valid_estimated_vel_max float Maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] 20.0 N/A adaptive_cruise_control.valid_estimated_vel_min 
float Minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] -20.0 N/A adaptive_cruise_control.thresh_vel_to_stop float Embed a stop line if the maximum speed calculated by ACC is lower than this speed [m/s] 1.5 N/A adaptive_cruise_control.lowpass_gain_of_upper_velocity float Lowpass-gain of upper velocity 0.75 N/A adaptive_cruise_control.use_rough_velocity_estimation boolean Use rough estimated velocity if the velocity estimation fails (WARNING: if this parameter is true, the vehicle may collide with the front car; be careful) false N/A adaptive_cruise_control.rough_velocity_rate float In the rough velocity estimation, the velocity of the front car is estimated as the ego's current velocity multiplied by this value 0.9 N/A"},{"location":"planning/autoware_obstacle_stop_planner/#slow-down-section","title":"Slow down section","text":"Parameter Type Description longitudinal_forward_margin double margin between obstacle and the ego's front [m] longitudinal_backward_margin double margin between obstacle and the ego's rear [m]"},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-detection-area_1","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin double lateral margin from the vehicle footprint for slow down obstacle detection area [m]"},{"location":"planning/autoware_obstacle_stop_planner/#slow-down-target-velocity","title":"Slow down target velocity","text":"Parameter Type Description max_slow_down_velocity double max slow down velocity [m/s] min_slow_down_velocity double min slow down velocity [m/s]"},{"location":"planning/autoware_obstacle_stop_planner/#flowchart_1","title":"Flowchart","text":""},{"location":"planning/autoware_obstacle_stop_planner/#adaptive-cruise-controller","title":"Adaptive Cruise Controller","text":""},{"location":"planning/autoware_obstacle_stop_planner/#role_2","title":"Role","text":"

The Adaptive Cruise Controller module embeds a maximum velocity in the trajectory when there is a dynamic point cloud on the trajectory. The value of the maximum velocity depends on the ego velocity, the velocity of the point cloud (= the velocity of the front car), and the distance to the point cloud (= the distance to the front car).

Parameter Type Description adaptive_cruise_control.use_object_to_estimate_vel bool use dynamic objects for estimating object velocity or not (valid only if osp.use_predicted_objects false) adaptive_cruise_control.use_pcl_to_estimate_vel bool use raw pointclouds for estimating object velocity or not (valid only if osp.use_predicted_objects false) adaptive_cruise_control.consider_obj_velocity bool consider forward vehicle velocity to calculate target velocity in adaptive cruise or not adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc double start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc double stop acc when the velocity of the forward obstacle falls below this value [m/s] adaptive_cruise_control.emergency_stop_acceleration double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.emergency_stop_idling_time double supposed idling time to start emergency stop [s] adaptive_cruise_control.min_dist_stop double minimum distance of emergency stop [m] adaptive_cruise_control.obstacle_emergency_stop_acceleration double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.max_standard_acceleration double supposed maximum acceleration in active cruise control [m/ss] adaptive_cruise_control.min_standard_acceleration double supposed minimum acceleration (deceleration) in active cruise control [m/ss] adaptive_cruise_control.standard_idling_time double supposed idling time to react object in active cruise control [s] adaptive_cruise_control.min_dist_standard double minimum distance in active cruise control [m] adaptive_cruise_control.obstacle_min_standard_acceleration double supposed minimum acceleration of forward obstacle [m/ss] adaptive_cruise_control.margin_rate_to_change_vel double rate of margin distance to insert target velocity [-] adaptive_cruise_control.use_time_compensation_to_calc_distance bool use time-compensation to calculate distance to forward vehicle adaptive_cruise_control.p_coefficient_positive double coefficient P in PID control (used when target dist -current_dist >=0) [-] adaptive_cruise_control.p_coefficient_negative double coefficient P in PID control (used when target dist -current_dist <0) [-] adaptive_cruise_control.d_coefficient_positive double coefficient D in PID control (used when delta_dist >=0) [-] adaptive_cruise_control.d_coefficient_negative double coefficient D in PID control (used when delta_dist <0) [-] adaptive_cruise_control.object_polygon_length_margin double The distance to extend the polygon length the object in pointcloud-object matching [m] adaptive_cruise_control.object_polygon_width_margin double The distance to extend the polygon width the object in pointcloud-object matching [m] adaptive_cruise_control.valid_estimated_vel_diff_time double Maximum time difference treated as continuous points in speed estimation using a point cloud [s] adaptive_cruise_control.valid_vel_que_time double Time width of information used for speed estimation in speed estimation using a point cloud [s] adaptive_cruise_control.valid_estimated_vel_max double Maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.valid_estimated_vel_min double Minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.thresh_vel_to_stop double Embed a stop line if the maximum 
speed calculated by ACC is lower than this speed [m/s] adaptive_cruise_control.lowpass_gain_of_upper_velocity double Lowpass-gain of target velocity adaptive_cruise_control.use_rough_velocity_estimation bool Use rough estimated velocity if the velocity estimation fails (valid only if osp.use_predicted_objects false) adaptive_cruise_control.rough_velocity_rate double In the rough velocity estimation, the velocity of the front car is estimated as the ego's current velocity multiplied by this value"},{"location":"planning/autoware_obstacle_stop_planner/#flowchart_2","title":"Flowchart","text":"

(*1) The target vehicle point is calculated as the closest obstacle point cloud to the ego along the trajectory.

(*2) The sources of velocity estimation can be changed by the following ROS parameters.

This module works only when the target point is found in the detection area of the Obstacle stop planner module.

The first process of this module is to estimate the velocity of the target vehicle point. The velocity estimation uses the velocity information of dynamic objects or the travel distance of the target vehicle point from the previous step. The dynamic object information is primary, and the travel distance estimation is used as a backup in case of perception failure. If the target vehicle point is geometrically contained in the bounding box of a dynamic object, the velocity of the dynamic object is used as the target point velocity. Otherwise, the target point velocity is calculated from the travel distance of the target point since the previous step, i.e. (current_position - previous_position) / dt. Note that this travel-distance-based estimation fails when the target point is detected for the first time (this mainly happens in cut-in situations). To improve the stability of the estimation, the median of the calculation results over several steps is used.
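
A sketch of the travel-distance-based backup estimation with the median smoothing described above (helper names and the history size are illustrative):

std::deque<double> velocity_history;  // raw estimates from the last few steps\nconst double raw_velocity = distanceAlongTrajectory(current_position, previous_position) / dt;  // hypothetical helper\nvelocity_history.push_back(raw_velocity);\nif (velocity_history.size() > history_size) velocity_history.pop_front();\nstd::vector<double> sorted(velocity_history.begin(), velocity_history.end());\nstd::sort(sorted.begin(), sorted.end());\nconst double estimated_velocity = sorted[sorted.size() / 2];  // median for stability\n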

If the calculated velocity is within the threshold range, it is used as the target point velocity.

Only when the estimation succeeds and the estimated velocity exceeds the value of obstacle_velocity_thresh_to_*_acc is the distance from the ego position to the pointcloud calculated. To prevent chattering in the mode transition, obstacle_velocity_thresh_to_start_acc is used as the threshold to start adaptive cruise, and obstacle_velocity_thresh_to_stop_acc is used as the threshold to stop it. When the calculated distance exceeds the emergency distance \\(d\\_{emergency}\\) calculated from the emergency_stop parameters, the target velocity to insert is calculated.

The emergency distance \\(d\\_{emergency}\\) is calculated as follows.

\\(d_{emergency} = d_{margin_{emergency}} + t_{idling_{emergency}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_{emergency}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{emergency}}})\\)

The target velocity is determined to keep the distance from the ego vehicle to the obstacle pointcloud at the standard distance \\(d\\_{standard}\\), calculated as follows. Therefore, if the distance to the obstacle pointcloud is longer than the standard distance, the target velocity becomes higher than the current velocity, and vice versa. A PID controller is used to keep the distance.

\\(d_{standard} = d_{margin_{standard}} + t_{idling_{standard}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_{standard}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{standard}}})\\)
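
Both threshold distances share the same form and can be sketched with a single function; the accelerations are the (negative) deceleration parameters, matching the formulas above.

// d = d_margin + t_idling * v_ego + (-v_ego^2 / (2 a_ego)) - (-v_obj^2 / (2 a_obj)), with negative accelerations\ndouble calcThresholdDistance(const double d_margin, const double t_idling, const double v_ego, const double v_obj, const double a_ego, const double a_obj)\n{\nreturn d_margin + t_idling * v_ego + (-(v_ego * v_ego) / (2.0 * a_ego)) - (-(v_obj * v_obj) / (2.0 * a_obj));\n}\n// d_emergency uses the emergency_stop parameters; d_standard uses the standard (active cruise) parameters\n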

If the target velocity exceeds the value of thresh_vel_to_stop, the target velocity is embedded in the trajectory.

"},{"location":"planning/autoware_obstacle_stop_planner/#known-limits","title":"Known Limits","text":" "},{"location":"planning/autoware_path_optimizer/","title":"Path optimizer","text":""},{"location":"planning/autoware_path_optimizer/#path-optimizer","title":"Path optimizer","text":""},{"location":"planning/autoware_path_optimizer/#purpose","title":"Purpose","text":"

This package generates a trajectory that is kinematically feasible to drive and collision-free, based on the input path and drivable area. Only the position and orientation of the trajectory are updated in this module; the velocity is taken over from the input path.

"},{"location":"planning/autoware_path_optimizer/#feature","title":"Feature","text":"

This package is able to

Note that the velocity is just taken over from the input path.

"},{"location":"planning/autoware_path_optimizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_path_optimizer/#input","title":"input","text":"Name Type Description ~/input/path autoware_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry nav_msgs/msg/Odometry Current Velocity of ego vehicle"},{"location":"planning/autoware_path_optimizer/#output","title":"output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/msg/Trajectory Optimized trajectory that is feasible to drive and collision-free"},{"location":"planning/autoware_path_optimizer/#flowchart","title":"Flowchart","text":"

Flowchart of functions is explained here.

"},{"location":"planning/autoware_path_optimizer/#createplannerdata","title":"createPlannerData","text":"

The following data for planning is created.

struct PlannerData\n{\n// input\nHeader header;\nstd::vector<TrajectoryPoint> traj_points; // converted from the input path\nstd::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n\n// ego\ngeometry_msgs::msg::Pose ego_pose;\ndouble ego_vel;\n};\n
"},{"location":"planning/autoware_path_optimizer/#check-replan","title":"check replan","text":"

When any of the following conditions is met, trajectory optimization is executed. Otherwise, the previously optimized trajectory is reused, with its velocity updated from the latest input path.

max_path_shape_around_ego_lat_dist

"},{"location":"planning/autoware_path_optimizer/#getmodelpredictivetrajectory","title":"getModelPredictiveTrajectory","text":"

This module makes the trajectory kinematically feasible and collision-free. We define the vehicle pose in the Frenet coordinate system and minimize tracking errors by optimization. The optimization considers the vehicle kinematics and collision checking with the road boundary and obstacles. To decrease the computation cost, the optimization is applied to a shorter part of the trajectory (default: 50 [m]) rather than the whole trajectory, and the remaining part is concatenated to the optimized one at the end.

The trajectory just in front of the ego must not change much so that the steering remains stable. Therefore, the previously generated trajectory is reused in front of the ego.

The optimization center on the vehicle, which tries to stay right on the trajectory, can be tuned along the vehicle's longitudinal axis. This parameter, mpt.kinematics.optimization_center_offset, is defined as the signed length from the rear-wheel center to the optimization center. Some examples are shown in the following figure, which shows that the swept shape of the vehicle differs according to the optimization center even if the reference trajectory (the green one) is the same.

More details can be seen here.

"},{"location":"planning/autoware_path_optimizer/#applyinputvelocity","title":"applyInputVelocity","text":"

The velocity in the optimized trajectory is assigned from the velocity in the behavior path. Since the shapes of the optimized trajectory and the path are different, the nearest path point to each trajectory point is searched, and the velocity is interpolated with zero-order hold.
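
A minimal sketch of this zero-order-hold assignment is shown below, assuming simple placeholder types; the actual module uses the common trajectory utilities and additionally handles resampling and edge cases.

#include <cmath>\n#include <limits>\n#include <vector>\n\nstruct TrajPoint { double x, y, velocity; };\n\n// For each optimized trajectory point, copy the velocity of the nearest\n// behavior-path point (zero-order hold).\nvoid applyInputVelocity(std::vector<TrajPoint> & optimized, const std::vector<TrajPoint> & path)\n{\nfor (auto & p : optimized) {\ndouble min_dist = std::numeric_limits<double>::max();\nsize_t nearest = 0;\nfor (size_t i = 0; i < path.size(); ++i) {\nconst double d = std::hypot(path[i].x - p.x, path[i].y - p.y);\nif (d < min_dist) { min_dist = d; nearest = i; }\n}\nif (!path.empty()) p.velocity = path[nearest].velocity; // zero-order hold\n}\n}\n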

"},{"location":"planning/autoware_path_optimizer/#insertzerovelocityoutsidedrivablearea","title":"insertZeroVelocityOutsideDrivableArea","text":"

The optimized trajectory is too short for velocity planning, so the trajectory is extended by concatenating the optimized trajectory and the behavior path, considering drivability. The generated trajectory is checked to see whether it is inside the drivable area; if it is outside, a trajectory inside the drivable area is output based on the behavior path or the previously generated trajectory.

As described above, the behavior path is separated into two parts: one for optimization and the remainder. The first part becomes the optimized trajectory, and the second part is simply converted to a trajectory. Then a trajectory inside the drivable area is calculated as follows.

Optimization failure is handled the same way as when the optimized trajectory is outside the drivable area. The output trajectory is memorized as the previously generated trajectory for the next cycle.
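
This selection logic can be sketched as follows; the types and names are placeholders for the actual implementation.

// Illustrative fallback selection (names are placeholders).\n// Optimization failure is treated the same as leaving the drivable area.\ntemplate <class Trajectory>\nTrajectory selectOutputTrajectory(\nconst bool optimization_ok, const bool inside_drivable_area,\nconst Trajectory & optimized, const Trajectory & behavior_path_traj,\nconst Trajectory * previous)\n{\nif (!optimization_ok || !inside_drivable_area) {\nreturn previous ? *previous : behavior_path_traj;\n}\nreturn optimized; // memorized as the previous trajectory for the next cycle\n}\n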

Rationale: In the current design, since there are some modeling errors, the constraints are treated as soft constraints. Therefore, we have to check whether the optimized trajectory is inside the drivable area after optimization.

"},{"location":"planning/autoware_path_optimizer/#limitation","title":"Limitation","text":""},{"location":"planning/autoware_path_optimizer/#comparison-to-other-methods","title":"Comparison to other methods","text":"

A trajectory planning problem that satisfies kinematic feasibility and collision-freeness has two main characteristics that make it hard to solve: it is non-convex and high-dimensional. Based on these characteristics, we investigate the pros and cons of the typical planning methods: optimization-based, sampling-based, and learning-based.

"},{"location":"planning/autoware_path_optimizer/#optimization-based-method","title":"Optimization-based method","text":""},{"location":"planning/autoware_path_optimizer/#sampling-based-method","title":"Sampling-based method","text":""},{"location":"planning/autoware_path_optimizer/#learning-based-method","title":"Learning-based method","text":"

Based on these pros and cons, we chose the optimization-based planner first. Although it has the drawback of converging to local minima, it can obtain a good solution by preprocessing that approximates the problem with a convex one almost equal to the original non-convex problem.

"},{"location":"planning/autoware_path_optimizer/#how-to-tune-parameters","title":"How to Tune Parameters","text":""},{"location":"planning/autoware_path_optimizer/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":" "},{"location":"planning/autoware_path_optimizer/#computation-time","title":"Computation time","text":""},{"location":"planning/autoware_path_optimizer/#robustness","title":"Robustness","text":""},{"location":"planning/autoware_path_optimizer/#other-options","title":"Other options","text":""},{"location":"planning/autoware_path_optimizer/#how-to-debug","title":"How To Debug","text":"

How to debug can be seen here.

"},{"location":"planning/autoware_path_optimizer/docs/debug/","title":"Debug","text":""},{"location":"planning/autoware_path_optimizer/docs/debug/#debug","title":"Debug","text":""},{"location":"planning/autoware_path_optimizer/docs/debug/#debug-visualization","title":"Debug visualization","text":"

The visualization markers of the planning flow (Input, Model Predictive Trajectory, and Output) are explained here.

All the following markers can be visualized by

ros2 launch autoware_path_optimizer launch_visualiation.launch.xml vehicle_model:=sample_vehicle\n

The vehicle_model argument must be specified to create footprints with the vehicle's size.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#input","title":"Input","text":" "},{"location":"planning/autoware_path_optimizer/docs/debug/#model-predictive-trajectory-mpt","title":"Model Predictive Trajectory (MPT)","text":" "},{"location":"planning/autoware_path_optimizer/docs/debug/#output","title":"Output","text":" "},{"location":"planning/autoware_path_optimizer/docs/debug/#calculation-time","title":"Calculation time","text":"

The path_optimizer consists of many functions such as boundaries' width calculation, collision-free planning, etc. We can see the calculation time for each function as follows.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#raw-data","title":"Raw data","text":"

Enable option.enable_calculation_time_info or echo the topic as follows.

$ ros2 topic echo /planning/scenario_planning/lane_driving/motion_planning/path_optimizer/debug/calculation_time --field data\n---\n        insertFixedPoint:= 0.008 [ms]\ngetPaddedTrajectoryPoints:= 0.002 [ms]\nupdateConstraint:= 0.741 [ms]\noptimizeTrajectory:= 0.101 [ms]\nconvertOptimizedPointsToTrajectory:= 0.014 [ms]\ngetEBTrajectory:= 0.991 [ms]\nresampleReferencePoints:= 0.058 [ms]\nupdateFixedPoint:= 0.237 [ms]\nupdateBounds:= 0.22 [ms]\nupdateVehicleBounds:= 0.509 [ms]\ncalcReferencePoints:= 1.649 [ms]\ncalcMatrix:= 0.209 [ms]\ncalcValueMatrix:= 0.015 [ms]\ncalcObjectiveMatrix:= 0.305 [ms]\ncalcConstraintMatrix:= 0.641 [ms]\ninitOsqp:= 6.896 [ms]\nsolveOsqp:= 2.796 [ms]\ncalcOptimizedSteerAngles:= 9.856 [ms]\ncalcMPTPoints:= 0.04 [ms]\ngetModelPredictiveTrajectory:= 12.782 [ms]\noptimizeTrajectory:= 12.981 [ms]\napplyInputVelocity:= 0.577 [ms]\ninsertZeroVelocityOutsideDrivableArea:= 0.81 [ms]\ngetDebugMarker:= 0.684 [ms]\npublishDebugMarker:= 4.354 [ms]\npublishDebugMarkerOfOptimization:= 5.047 [ms]\ngenerateOptimizedTrajectory:= 20.374 [ms]\nextendTrajectory:= 0.326 [ms]\npublishDebugData:= 0.008 [ms]\nonPath:= 20.737 [ms]\n
"},{"location":"planning/autoware_path_optimizer/docs/debug/#plot","title":"Plot","text":"

With the following script, the calculation time of any of the above functions can be plotted.

ros2 run autoware_path_optimizer calculation_time_plotter.py\n

You can specify functions to plot with the -f option.

ros2 run autoware_path_optimizer calculation_time_plotter.py -f \"onPath, generateOptimizedTrajectory, calcReferencePoints\"\n
"},{"location":"planning/autoware_path_optimizer/docs/debug/#qa-for-debug","title":"Q&A for Debug","text":""},{"location":"planning/autoware_path_optimizer/docs/debug/#the-output-frequency-is-low","title":"The output frequency is low","text":"

Check which function is comparatively heavy according to this information.

For your information, the following functions for optimization and its initialization may be heavy in some complicated cases.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#when-a-part-of-the-trajectory-has-high-curvature","title":"When a part of the trajectory has high curvature","text":"

Some of the following may have an issue. Please check whether anything looks wrong in the visualization.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#when-the-trajectorys-shape-is-zigzag","title":"When the trajectory's shape is zigzag","text":"

Some of the following may have an issue. Please check whether anything looks wrong in the visualization.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/","title":"Model predictive trajectory","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#model-predictive-trajectory","title":"Model predictive trajectory","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#abstract","title":"Abstract","text":"

Model Predictive Trajectory (MPT) calculates the trajectory that meets the following conditions.

The conditions for being collision-free are treated not as hard constraints but as soft constraints. When the optimization fails or the optimized trajectory is not collision-free, the output trajectory will be the previously generated trajectory.

The trajectory near the ego must be stable. Therefore, the condition that the trajectory points near the ego are the same as in the previously generated trajectory is imposed, and this is the only hard constraint in MPT.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#vehicle-kinematics","title":"Vehicle kinematics","text":"

As shown in the following figure, we consider the bicycle kinematics model in the Frenet frame to track the reference path. At time step \\(k\\), we define the lateral distance to the reference path, the heading angle relative to the reference path, and the steer angle as \\(y_k\\), \\(\\theta_k\\), and \\(\\delta_k\\), respectively.

Assuming that the commanded steer angle is \\(\\delta_{des, k}\\), the kinematics model in the Frenet frame is formulated as follows. We also assume that the steer angle \\(\\delta_k\\) is a first-order lag of the commanded one.

\\[ \\begin{align} y_{k+1} & = y_{k} + v \\sin \\theta_k dt \\\\\\ \\theta_{k+1} & = \\theta_k + \\frac{v \\tan \\delta_k}{L}dt - \\kappa_k v \\cos \\theta_k dt \\\\\\ \\delta_{k+1} & = \\delta_k - \\frac{\\delta_k - \\delta_{des,k}}{\\tau}dt \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#linearization","title":"Linearization","text":"

Then we linearize these equations. \\(y_k\\) and \\(\\theta_k\\) are tracking errors, so we assume that those are small enough. Therefore \\(\\sin \\theta_k \\approx \\theta_k\\).

Since \\(\\delta_k\\) is a steer angle, it is not always small. By using a reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) calculated by the reference path curvature \\(\\kappa_k\\), we express \\(\\delta_k\\) with a small value \\(\\Delta \\delta_k\\).

Note that the steer angle \\(\\delta_k\\) is within the steer angle limitation \\(\\delta_{\\max}\\). If the reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) were larger than the limitation \\(\\delta_{\\max}\\) and were used for the linearization as is, then \\(\\Delta \\delta_k = \\delta_{\\max} - \\delta_{\\mathrm{ref}, k}\\) would no longer be small, i.e. its absolute value would get larger. Therefore, we have to apply the steer angle limitation to \\(\\delta_{\\mathrm{ref}, k}\\) as well.

\\[ \\begin{align} \\delta_{\\mathrm{ref}, k} & = \\mathrm{clamp}(\\arctan(L \\kappa_k), -\\delta_{\\max}, \\delta_{\\max}) \\\\\\ \\delta_k & = \\delta_{\\mathrm{ref}, k} + \\Delta \\delta_k, \\ \\Delta \\delta_k \\ll 1 \\\\\\ \\end{align} \\]

\\(\\mathrm{clamp}(v, v_{\\min}, v_{\\max})\\) is a function that limits \\(v\\) to the range between \\(v_{\\min}\\) and \\(v_{\\max}\\).

Using this \\(\\delta_{\\mathrm{ref}, k}\\), \\(\\tan \\delta_k\\) is linearized as follows.

\\[ \\begin{align} \\tan \\delta_k & \\approx \\tan \\delta_{\\mathrm{ref}, k} + \\lbrace\\frac{d \\tan \\delta}{d \\delta}\\rbrace|_{\\delta = \\delta_{\\mathrm{ref}, k}} \\Delta \\delta_k \\\\\\ & = \\tan \\delta_{\\mathrm{ref}, k} + \\lbrace\\frac{d \\tan \\delta}{d \\delta}\\rbrace|_{\\delta = \\delta_{\\mathrm{ref}, k}} (\\delta_k - \\delta_{\\mathrm{ref}, k}) \\\\\\ & = \\tan \\delta_{\\mathrm{ref}, k} - \\frac{\\delta_{\\mathrm{ref}, k}}{\\cos^2 \\delta_{\\mathrm{ref}, k}} + \\frac{1}{\\cos^2 \\delta_{\\mathrm{ref}, k}} \\delta_k \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#one-step-state-equation","title":"One-step state equation","text":"

Based on the linearization, the error kinematics is formulated with the following linear equations,

\\[ \\begin{align} \\begin{pmatrix} y_{k+1} \\\\\\ \\theta_{k+1} \\end{pmatrix} = \\begin{pmatrix} 1 & v dt \\\\\\ 0 & 1 \\end{pmatrix} \\begin{pmatrix} y_k \\\\\\ \\theta_k \\end{pmatrix} + \\begin{pmatrix} 0 \\\\\\ \\frac{v dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} \\end{pmatrix} \\delta_{k} + \\begin{pmatrix} 0 \\\\\\ \\frac{v \\tan(\\delta_{\\mathrm{ref}, k}) dt}{L} - \\frac{v \\delta_{\\mathrm{ref}, k} dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} - \\kappa_k v dt \\end{pmatrix} \\end{align} \\]
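
For reference, a minimal sketch (assuming Eigen; the function name is illustrative) that constructs the matrices \\(A_k\\), \\(\\mathbf{b}_k\\), and \\(\\mathbf{w}_k\\) of this equation:

#include <Eigen/Dense>\n#include <cmath>\n\n// Build the matrices of the one-step state equation above.\n// v: velocity, delta_ref: reference steer angle, kappa: path curvature,\n// L: wheelbase, dt: time step.\nvoid calcStateMatrices(\nconst double v, const double delta_ref, const double kappa, const double L, const double dt,\nEigen::Matrix2d & A, Eigen::Vector2d & b, Eigen::Vector2d & w)\n{\nconst double cos2 = std::cos(delta_ref) * std::cos(delta_ref);\nA << 1.0, v * dt, 0.0, 1.0;\nb << 0.0, v * dt / (L * cos2);\nw << 0.0, v * std::tan(delta_ref) * dt / L - v * delta_ref * dt / (L * cos2) - kappa * v * dt;\n}\n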

This can be written compactly with the state \\(\\mathbf{x}_k = (y_k, \\theta_k)^T\\), the control input \\(u_k = \\delta_k\\), and appropriate matrices as

\\[ \\begin{align} \\mathbf{x}_{k+1} = A_k \\mathbf{x}_k + \\mathbf{b}_k u_k + \\mathbf{w}_k \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#time-series-state-equation","title":"Time-series state equation","text":"

Then, we formulate time-series state equation by concatenating states, control inputs and matrices respectively as

\\[ \\begin{align} \\mathbf{x} = A \\mathbf{x}_0 + B \\mathbf{u} + \\mathbf{w} \\end{align} \\]

where

\\[ \\begin{align} \\mathbf{x} = (\\mathbf{x}^T_1, \\mathbf{x}^T_2, \\mathbf{x}^T_3, \\dots, \\mathbf{x}^T_{n-1})^T \\\\\\ \\mathbf{u} = (u_0, u_1, u_2, \\dots, u_{n-2})^T \\\\\\ \\mathbf{w} = (\\mathbf{w}^T_0, \\mathbf{w}^T_1, \\mathbf{w}^T_2, \\dots, \\mathbf{w}^T_{n-2})^T. \\\\\\ \\end{align} \\]

In detail, each matrix is constructed as follows.

\\[ \\begin{align} \\begin{pmatrix} \\mathbf{x}_1 \\\\\\ \\mathbf{x}_2 \\\\\\ \\mathbf{x}_3 \\\\\\ \\vdots \\\\\\ \\mathbf{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} A_0 \\\\\\ A_1 A_0 \\\\\\ A_2 A_1 A_0\\\\\\ \\vdots \\\\\\ \\prod\\limits_{k=0}^{n-1} A_{k} \\end{pmatrix} \\mathbf{x}_0 + \\begin{pmatrix} B_0 & 0 & & \\dots & 0 \\\\\\ A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\\\ A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} u_0 \\\\\\ u_1 \\\\\\ u_2 \\\\\\ \\vdots \\\\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} I & 0 & & \\dots & 0 \\\\\\ A_0 & I & 0 & \\dots & 0 \\\\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\mathbf{w}_0 \\\\\\ \\mathbf{w}_1 \\\\\\ \\mathbf{w}_2 \\\\\\ \\vdots \\\\\\ \\mathbf{w}_{n-2} \\end{pmatrix} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#free-boundary-conditioned-time-series-state-equation","title":"Free-boundary-conditioned time-series state equation","text":"

For path planning that does not start from the current ego pose, \\(\\mathbf{x}_0\\) should be a design variable of the optimization. Therefore, we form \\(\\mathbf{u}'\\) by concatenating \\(\\mathbf{x}_0\\) and \\(\\mathbf{u}\\), and redefine \\(\\mathbf{x}\\) as follows.

\\[ \\begin{align} \\mathbf{u}' & = (\\mathbf{x}^T_0, \\mathbf{u}^T)^T \\\\\\ \\mathbf{x} & = (\\mathbf{x}^T_0, \\mathbf{x}^T_1, \\mathbf{x}^T_2, \\dots, \\mathbf{x}^T_{n-1})^T \\end{align} \\]

Then we get the following state equation

\\[ \\begin{align} \\mathbf{x} = B \\mathbf{u}' + \\mathbf{w}, \\end{align} \\]

which is in detail

\\[ \\begin{align} \\begin{pmatrix} \\mathbf{x}_0 \\\\\\ \\mathbf{x}_1 \\\\\\ \\mathbf{x}_2 \\\\\\ \\mathbf{x}_3 \\\\\\ \\vdots \\\\\\ \\mathbf{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} I & 0 & \\dots & & & 0 \\\\\\ A_0 & B_0 & 0 & & \\dots & 0 \\\\\\ A_1 A_0 & A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\\\ A_2 A_1 A_0 & A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\\\ \\vdots & \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-1} A_k & \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} \\mathbf{x}_0 \\\\\\ u_0 \\\\\\ u_1 \\\\\\ u_2 \\\\\\ \\vdots \\\\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} 0 & \\dots & & & 0 \\\\\\ I & 0 & & \\dots & 0 \\\\\\ A_0 & I & 0 & \\dots & 0 \\\\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\mathbf{w}_0 \\\\\\ \\mathbf{w}_1 \\\\\\ \\mathbf{w}_2 \\\\\\ \\vdots \\\\\\ \\mathbf{w}_{n-2} \\end{pmatrix}. \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#objective-function","title":"Objective function","text":"

The objective function for smoothing and tracking is shown as follows, which can be formulated with value function matrices \\(Q, R\\).

\\[ \\begin{align} J_1 (\\mathbf{x}', \\mathbf{u}') & = w_y \\sum_{k} y_k^2 + w_{\\theta} \\sum_{k} \\theta_k^2 + w_{\\delta} \\sum_k \\delta_k^2 + w_{\\dot \\delta} \\sum_k \\dot \\delta_k^2 + w_{\\ddot \\delta} \\sum_k \\ddot \\delta_k^2 \\\\\\ & = \\mathbf{x}'^T Q \\mathbf{x}' + \\mathbf{u}'^T R \\mathbf{u}' \\\\\\ & = \\mathbf{u}'^T H \\mathbf{u}' + \\mathbf{u}'^T \\mathbf{f} \\end{align} \\]

As mentioned before, the constraints on being collision-free with obstacles and road boundaries are formulated as soft constraints. Assuming that the lateral distances to the road boundaries or obstacles from the rear wheel center, the front wheel center, and the midpoint between them are \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) respectively, and that the slack variables for each point are \\(\\lambda_{\\mathrm{base}}, \\lambda_{\\mathrm{top}}, \\lambda_{\\mathrm{mid}}\\), the soft constraints can be formulated as follows.

\\[ y_{\\mathrm{base}, k, \\min} - \\lambda_{\\mathrm{base}, k} \\leq y_{\\mathrm{base}, k} (y_k) \\leq y_{\\mathrm{base}, k, \\max} + \\lambda_{\\mathrm{base}, k}\\\\\\ y_{\\mathrm{top}, k, \\min} - \\lambda_{\\mathrm{top}, k} \\leq y_{\\mathrm{top}, k} (y_k) \\leq y_{\\mathrm{top}, k, \\max} + \\lambda_{\\mathrm{top}, k}\\\\\\ y_{\\mathrm{mid}, k, \\min} - \\lambda_{\\mathrm{mid}, k} \\leq y_{\\mathrm{mid}, k} (y_k) \\leq y_{\\mathrm{mid}, k, \\max} + \\lambda_{\\mathrm{mid}, k} \\\\\\ 0 \\leq \\lambda_{\\mathrm{base}, k} \\\\\\ 0 \\leq \\lambda_{\\mathrm{top}, k} \\\\\\ 0 \\leq \\lambda_{\\mathrm{mid}, k} \\]

Since \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) are formulated as linear functions of \\(y_k\\), the objective function for the soft constraints is formulated as follows.

\\[ \\begin{align} J_2 & (\\mathbf{\\lambda}_\\mathrm{base}, \\mathbf{\\lambda}_\\mathrm{top}, \\mathbf {\\lambda}_\\mathrm{mid})\\\\\\ & = w_{\\mathrm{base}} \\sum_{k} \\lambda_{\\mathrm{base}, k} + w_{\\mathrm{mid}} \\sum_k \\lambda_{\\mathrm{mid}, k} + w_{\\mathrm{top}} \\sum_k \\lambda_{\\mathrm{top}, k} \\end{align} \\]

Slack variables are also design variables for the optimization. We define a vector \\(\\mathbf{v}\\) that concatenates all the design variables.

\\[ \\begin{align} \\mathbf{v} = \\begin{pmatrix} \\mathbf{u}'^T & \\mathbf{\\lambda}_\\mathrm{base}^T & \\mathbf{\\lambda}_\\mathrm{top}^T & \\mathbf{\\lambda}_\\mathrm{mid}^T \\end{pmatrix}^T \\end{align} \\]

The summation of these two objective functions is the objective function for the optimization problem.

\\[ \\begin{align} \\min_{\\mathbf{v}} J (\\mathbf{v}) = \\min_{\\mathbf{v}} J_1 (\\mathbf{u}') + J_2 (\\mathbf{\\lambda}_\\mathrm{base}, \\mathbf{\\lambda}_\\mathrm{top}, \\mathbf{\\lambda}_\\mathrm{mid}) \\end{align} \\]

As mentioned before, we use hard constraints requiring that some trajectory points in front of the ego be the same as the previously generated trajectory points. This hard constraint is formulated as follows.

\\[ \\begin{align} \\delta_k = \\delta_{k}^{\\mathrm{prev}} \\ (0 \\leq k \\leq N_{\\mathrm{fix}}) \\end{align} \\]

Finally, we transform these objective functions and constraints into the following QP problem and solve it.
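
As an illustration only, the resulting problem (formulated below) could be handed to OSQP through the osqp-eigen wrapper roughly as in the following sketch; the module's actual solver interface differs.

#include <OsqpEigen/OsqpEigen.h>\n\n// Sketch: solve min 1/2 v^T H v + f^T v  s.t.  b_lower <= A v <= b_upper\n// using the osqp-eigen wrapper (illustrative, not the module's interface).\nEigen::VectorXd solveQp(\nEigen::SparseMatrix<double> H, Eigen::VectorXd f,\nEigen::SparseMatrix<double> A, Eigen::VectorXd lb, Eigen::VectorXd ub)\n{\nOsqpEigen::Solver solver;\nsolver.data()->setNumberOfVariables(H.rows());\nsolver.data()->setNumberOfConstraints(A.rows());\nsolver.data()->setHessianMatrix(H);\nsolver.data()->setGradient(f);\nsolver.data()->setLinearConstraintsMatrix(A);\nsolver.data()->setLowerBound(lb);\nsolver.data()->setUpperBound(ub);\nsolver.initSolver();\nsolver.solveProblem();\nreturn solver.getSolution();\n}\n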

\\[ \\begin{align} \\min_{\\mathbf{v}} \\ & \\frac{1}{2} \\mathbf{v}^T \\mathbf{H} \\mathbf{v} + \\mathbf{f}^T \\mathbf{v} \\\\\\ \\mathrm{s.t.} \\ & \\mathbf{b}_{lower} \\leq \\mathbf{A} \\mathbf{v} \\leq \\mathbf{b}_{upper} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#constraints","title":"Constraints","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#steer-angle-limitation","title":"Steer angle limitation","text":"

The steer angle is limited by \\(\\delta_{max}\\) and \\(\\delta_{min}\\). Therefore, we add the following linear inequality constraints.

\\[ \\begin{align} \\delta_{min} \\leq \\delta_i \\leq \\delta_{max} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#collision-free","title":"Collision free","text":"

To realize collision-free trajectory planning, we have to formulate, as linear equations, the constraints that the vehicle stays inside the road and does not collide with obstacles. For linearity, we implemented several methods that approximate the vehicle shape with a set of circles, which are reliable and easy to implement.

  1. Bicycle Model
  2. Uniform Circles
  3. Fitting Uniform Circles

Now we formulate the linear constraints so that the set of circles on each trajectory point is collision-free. Using the drivable area, we calculate the upper and lower boundaries along the reference points, which can be interpolated at any position on the trajectory. Note that the upper and lower boundaries correspond to the left and right sides, respectively.

Assuming that the lower and upper boundaries are \\(b_l\\) and \\(b_u\\) respectively, and that \\(r\\) is the radius of a circle, the lateral deviation of the circle center \\(y'\\) has to satisfy

\\[ b_l + r \\leq y' \\leq b_u - r. \\]

Based on the following figure, \\(y'\\) can be formulated as follows.

\\[ \\begin{align} y' & = L \\sin(\\theta + \\beta) + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\\\ & = L \\sin \\theta \\cos \\beta + L \\cos \\theta \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\\\ & \\approx L \\theta \\cos \\beta + L \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\end{align} \\] \\[ b_l + r - \\lambda \\leq y' \\leq b_u - r + \\lambda. \\] \\[ \\begin{align} y' & = C_1 \\mathbf{x} + C_2 \\\\\\ & = C_1 (B \\mathbf{v} + \\mathbf{w}) + C_2 \\\\\\ & = C_1 B \\mathbf{v} + C_1 \\mathbf{w} + C_2 \\end{align} \\]

Note that the longitudinal position of the circle center differs from that of the trajectory point used to calculate the boundaries. However, since each boundary is perpendicular to the trajectory and the road boundaries do not change much, the distortion caused by this longitudinal difference is small. If, on the contrary, the boundaries were not perpendicular to the trajectory, a certain longitudinal offset between the circle center and the trajectory point would cause much more distortion when comparing the lateral deviation with the boundaries.

\\[ \\begin{align} A_{blk} & = \\begin{pmatrix} C_1 B & O & \\dots & O & I_{N_{ref} \\times N_{ref}} & O \\dots & O\\\\\\ -C_1 B & O & \\dots & O & I & O \\dots & O\\\\\\ O & O & \\dots & O & I & O \\dots & O \\end{pmatrix} \\in \\mathbf{R}^{3 N_{ref} \\times D_v + N_{circle} N_{ref}} \\\\\\ \\mathbf{b}_{lower, blk} & = \\begin{pmatrix} \\mathbf{b}_{lower} - C_1 \\mathbf{w} - C_2 \\\\\\ -\\mathbf{b}_{upper} + C_1 \\mathbf{w} + C_2 \\\\\\ O \\end{pmatrix} \\in \\mathbf{R}^{3 N_{ref}} \\\\\\ \\mathbf{b}_{upper, blk} & = \\mathbf{\\infty} \\in \\mathbf{R}^{3 N_{ref}} \\end{align} \\]

We will explain options for optimization.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#l-infinity-optimization","title":"L-infinity optimization","text":"

The above formulation uses the L2 norm for the slack variables. Instead, the L-infinity norm, where a single slack variable is shared among the constraints, can be used by enabling l_inf_norm.

\\[ \\begin{align} A_{blk} = \\begin{pmatrix} C_1 B & I_{N_{ref} \\times N_{ref}} \\\\\\ -C_1 B & I \\\\\\ O & I \\end{pmatrix} \\in \\mathbf{R}^{3N_{ref} \\times D_v + N_{ref}} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#tips-for-stable-trajectory-planning","title":"Tips for stable trajectory planning","text":"

In order to make the trajectory optimization problem more stable to solve, the boundary constraints, inside which the trajectory footprints should stay, and the optimization weights are modified.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#keep-minimum-boundary-width","title":"Keep minimum boundary width","text":"

The width of the drivable area is sometimes smaller than the vehicle width, since the behavior module does not consider the width. To realize stable trajectory optimization, the drivable area's width is guaranteed, in a rule-based way, to be larger than the vehicle width plus an additional margin.

Since we cannot distinguish in the motion planner between boundaries formed by the road and boundaries formed by obstacles to avoid, the drivable area is modified in the following multiple steps, assuming that \\(l_{width}\\) is the vehicle width and \\(l_{margin}\\) is an additional margin.
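
A minimal sketch of the widening rule, assuming symmetric expansion and placeholder names (the actual rule-based modification is multi-step as described):

// Sketch: widen the boundary interval so that it is at least the vehicle\n// width plus a margin (symmetric expansion; names are placeholders).\nvoid keepMinimumBoundaryWidth(double & b_lower, double & b_upper, const double l_width, const double l_margin)\n{\nconst double required = l_width + l_margin;\nconst double width = b_upper - b_lower;\nif (width < required) {\nconst double expand = 0.5 * (required - width);\nb_lower -= expand;\nb_upper += expand;\n}\n}\n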

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#extend-violated-boundary","title":"Extend violated boundary","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#avoid-sudden-steering","title":"Avoid sudden steering","text":"

When an obstacle that the behavior module decides to avoid suddenly appears, the shape of the drivable area just in front of the ego changes, which results in sudden steering. To prevent this, the shape of the drivable area close to the ego is fixed to the previous drivable area's shape.

Assume that \\(v_{ego}\\) is the ego velocity, and \\(t_{fix}\\) is the time to fix the forward drivable area's shape.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#calculate-avoidance-cost","title":"Calculate avoidance cost","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#change-optimization-weights","title":"Change optimization weights","text":"\\[ \\begin{align} r & = \\mathrm{lerp}(w^{\\mathrm{steer}}_{\\mathrm{normal}}, w^{\\mathrm{steer}}_{\\mathrm{avoidance}}, c) \\\\\\ w^{\\mathrm{lat}} & = \\mathrm{lerp}(w^{\\mathrm{lat}}_{\\mathrm{normal}}, w^{\\mathrm{lat}}_{\\mathrm{avoidance}}, r) \\\\\\ w^{\\mathrm{yaw}} & = \\mathrm{lerp}(w^{\\mathrm{yaw}}_{\\mathrm{normal}}, w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}, r) \\end{align} \\]

Assume that \\(c\\) is the normalized avoidance cost, \\(w^{\\mathrm{lat}}\\) is the weight for lateral error, \\(w^{\\mathrm{yaw}}\\) is the weight for yaw error, and other variables are as follows.

| Parameter | Type | Description |
| --- | --- | --- |
| \\(w^{\\mathrm{steer}}_{\\mathrm{normal}}\\) | double | weight for steering minimization in normal cases |
| \\(w^{\\mathrm{steer}}_{\\mathrm{avoidance}}\\) | double | weight for steering minimization in avoidance cases |
| \\(w^{\\mathrm{lat}}_{\\mathrm{normal}}\\) | double | weight for lateral error minimization in normal cases |
| \\(w^{\\mathrm{lat}}_{\\mathrm{avoidance}}\\) | double | weight for lateral error minimization in avoidance cases |
| \\(w^{\\mathrm{yaw}}_{\\mathrm{normal}}\\) | double | weight for yaw error minimization in normal cases |
| \\(w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}\\) | double | weight for yaw error minimization in avoidance cases |

"},{"location":"planning/autoware_path_smoother/","title":"Path Smoothing","text":""},{"location":"planning/autoware_path_smoother/#path-smoothing","title":"Path Smoothing","text":""},{"location":"planning/autoware_path_smoother/#purpose","title":"Purpose","text":"

This package contains code to smooth a path or trajectory.

"},{"location":"planning/autoware_path_smoother/#features","title":"Features","text":""},{"location":"planning/autoware_path_smoother/#elastic-band","title":"Elastic Band","text":"

More details about the elastic band can be found here.

"},{"location":"planning/autoware_path_smoother/docs/eb/","title":"Elastic band","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#elastic-band","title":"Elastic band","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#abstract","title":"Abstract","text":"

The elastic band smooths the input path. Since the subsequent optimization (model predictive trajectory) is calculated in the Frenet frame, path smoothing is applied here so that the subsequent optimization will be stable.

Note that this smoothing process does not consider collision checking. Therefore, the output path may collide with road boundaries or obstacles.

"},{"location":"planning/autoware_path_smoother/docs/eb/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#general-parameters","title":"General parameters","text":"Parameter Type Description eb.common.num_points int points for elastic band optimization eb.common.delta_arc_length double delta arc length for elastic band optimization"},{"location":"planning/autoware_path_smoother/docs/eb/#parameters-for-optimization","title":"Parameters for optimization","text":"Parameter Type Description eb.option.enable_warm_start bool flag to use warm start eb.weight.smooth_weight double weight for smoothing eb.weight.lat_error_weight double weight for minimizing the lateral error"},{"location":"planning/autoware_path_smoother/docs/eb/#parameters-for-validation","title":"Parameters for validation","text":"Parameter Type Description eb.option.enable_optimization_validation bool flag to validate optimization eb.validation.max_error double max lateral error by optimization"},{"location":"planning/autoware_path_smoother/docs/eb/#formulation","title":"Formulation","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#objective-function","title":"Objective function","text":"

We formulate a quadratic problem minimizing the diagonal length of the rhombus on each point generated by the current point and its previous and next points, shown as the red vector's length.

Assuming that \\(k\\)'th point is \\(\\mathbf{p}_k = (x_k, y_k)\\), the objective function is as follows.
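
As an aside, the banded matrix appearing in the formula below equals \\(D^T D\\) for the second-difference operator \\(D\\) applied to each coordinate axis; a minimal sketch (assuming Eigen; the function name is illustrative) that builds it for one axis:

#include <Eigen/Dense>\n\n// Build P = D^T D for one axis, where D is the (n-2) x n second-difference\n// operator, so that x^T P x equals the smoothness objective for that axis.\nEigen::MatrixXd makeSmoothnessMatrix(const int n)\n{\nEigen::MatrixXd D = Eigen::MatrixXd::Zero(n - 2, n);\nfor (int k = 0; k < n - 2; ++k) {\nD(k, k) = 1.0;\nD(k, k + 1) = -2.0;\nD(k, k + 2) = 1.0;\n}\nreturn D.transpose() * D; // reproduces the banded matrix below\n}\n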

\\[ \\begin{align} \\ J & = \\min \\sum_{k=1}^{n-2} ||(\\mathbf{p}_{k+1} - \\mathbf{p}_{k}) - (\\mathbf{p}_{k} - \\mathbf{p}_{k-1})||^2 \\\\\\ \\ & = \\min \\sum_{k=1}^{n-2} ||\\mathbf{p}_{k+1} - 2 \\mathbf{p}_{k} + \\mathbf{p}_{k-1}||^2 \\\\\\ \\ & = \\min \\sum_{k=1}^{n-2} \\{(x_{k+1} - 2 x_k + x_{k-1})^2 + (y_{k+1} - 2 y_k + y_{k-1})^2\\} \\\\\\ \\ & = \\min \\begin{pmatrix} \\ x_0 \\\\\\ \\ x_1 \\\\\\ \\ x_2 \\\\\\ \\vdots \\\\\\ \\ x_{n-3}\\\\\\ \\ x_{n-2} \\\\\\ \\ x_{n-1} \\\\\\ \\ y_0 \\\\\\ \\ y_1 \\\\\\ \\ y_2 \\\\\\ \\vdots \\\\\\ \\ y_{n-3}\\\\\\ \\ y_{n-2} \\\\\\ \\ y_{n-1} \\\\\\ \\end{pmatrix}^T \\begin{pmatrix} 1 & -2 & 1 & 0 & \\dots& \\\\\\ -2 & 5 & -4 & 1 & 0 &\\dots \\\\\\ 1 & -4 & 6 & -4 & 1 & \\\\\\ 0 & 1 & -4 & 6 & -4 & \\\\\\ \\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\\\ & \\vdots & & & \\\\\\ & & & 1 & -4 & 6 & -4 & 1 \\\\\\ & & & & 1 & -4 & 5 & -2 \\\\\\ & & & & & 1 & -2 & 1& \\\\\\ & & & & & & & &1 & -2 & 1 & 0 & \\dots& \\\\\\ & & & & & & & &-2 & 5 & -4 & 1 & 0 &\\dots \\\\\\ & & & & & & & &1 & -4 & 6 & -4 & 1 & \\\\\\ & & & & & & & &0 & 1 & -4 & 6 & -4 & \\\\\\ & & & & & & & &\\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\\\ & & & & & & & & & \\vdots & & & \\\\\\ & & & & & & & & & & & 1 & -4 & 6 & -4 & 1 \\\\\\ & & & & & & & & & & & & 1 & -4 & 5 & -2 \\\\\\ & & & & & & & & & & & & & 1 & -2 & 1& \\\\\\ \\end{pmatrix} \\begin{pmatrix} \\ x_0 \\\\\\ \\ x_1 \\\\\\ \\ x_2 \\\\\\ \\vdots \\\\\\ \\ x_{n-3}\\\\\\ \\ x_{n-2} \\\\\\ \\ x_{n-1} \\\\\\ \\ y_0 \\\\\\ \\ y_1 \\\\\\ \\ y_2 \\\\\\ \\vdots \\\\\\ \\ y_{n-3}\\\\\\ \\ y_{n-2} \\\\\\ \\ y_{n-1} \\\\\\ \\end{pmatrix} \\end{align} \\]"},{"location":"planning/autoware_path_smoother/docs/eb/#constraint","title":"Constraint","text":"

The distance that each point can move is limited so that the path does not change too much but becomes smoother. In detail, the longitudinal distance that each point can move is zero, and the lateral distance is parameterized by eb.clearance.clearance_for_fix, eb.clearance.clearance_for_joint and eb.clearance.clearance_for_smooth.

The following figure describes how to constrain the lateral distance to move. The red line is where the point can move. The points for the upper and lower bound are described as \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), respectively.

Based on the line equation whose slope angle is \\(\\theta_k\\) and that passes through \\((x_k, y_k)\\), \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), the lateral constraint is formulated as follows.

\\[ C_k^l \\leq C_k \\leq C_k^u \\]

In addition, the beginning point is fixed, and the end point as well if it is considered the goal. This constraint can be applied using the above equation by changing the distance that each point can move.

"},{"location":"planning/autoware_path_smoother/docs/eb/#debug","title":"Debug","text":" "},{"location":"planning/autoware_planning_factor_interface/","title":"planning factor interface","text":""},{"location":"planning/autoware_planning_test_manager/","title":"Autoware Planning Test Manager","text":""},{"location":"planning/autoware_planning_test_manager/#autoware-planning-test-manager","title":"Autoware Planning Test Manager","text":""},{"location":"planning/autoware_planning_test_manager/#background","title":"Background","text":"

In each node of the planning module, when exceptional input, such as an unusual route or a significantly deviated ego position, is given, the node may not be prepared for it and could crash. As a result, debugging node crashes can be time-consuming. For example, if an empty trajectory is given as input and this was not anticipated during implementation, the node might crash due to the unaddressed exceptional input when changes are merged, during scenario testing, or while the system is running on an actual vehicle.

"},{"location":"planning/autoware_planning_test_manager/#purpose","title":"Purpose","text":"

The purpose is to provide a utility for implementing tests that ensure a node operates correctly when receiving exceptional input. By using this utility to implement tests for exceptional input, and by requiring countermeasures for exceptional input before merging PRs, we aim to reduce the number of bugs that are only discovered when actually running the system.

"},{"location":"planning/autoware_planning_test_manager/#features","title":"Features","text":""},{"location":"planning/autoware_planning_test_manager/#confirmation-of-normal-operation","title":"Confirmation of normal operation","text":"

For the test target node, confirm that the node operates correctly and publishes the messages required by subsequent nodes. To do this, the test node publishes the necessary messages and confirms that the target node's output is being published.

"},{"location":"planning/autoware_planning_test_manager/#robustness-confirmation-for-special-inputs","title":"Robustness confirmation for special inputs","text":"

After confirming normal operation, ensure that the test target node does not crash when given exceptional input. To do this, provide exceptional input from the test_node and confirm that the node does not crash.

(WIP)

"},{"location":"planning/autoware_planning_test_manager/#usage","title":"Usage","text":"
TEST(PlanningModuleInterfaceTest, NodeTestWithExceptionTrajectory)\n{\nrclcpp::init(0, nullptr);\n\n// instantiate test_manager with PlanningInterfaceTestManager type\nauto test_manager = std::make_shared<autoware::planning_test_manager::PlanningInterfaceTestManager>();\n\n// get package directories for necessary configuration files\nconst auto autoware_test_utils_dir =\nament_index_cpp::get_package_share_directory(\"autoware_test_utils\");\nconst auto autoware_planning_validator_dir =\nament_index_cpp::get_package_share_directory(\"autoware_planning_validator\");\n\n// set arguments to get the config files\nrclcpp::NodeOptions node_options;\nnode_options.arguments(\n{\"--ros-args\", \"--params-file\",\nautoware_test_utils_dir + \"/config/test_vehicle_info.param.yaml\", \"--params-file\",\nautoware_planning_validator_dir + \"/config/planning_validator.param.yaml\"});\n\n// instantiate the TargetNode with node_options\nauto test_target_node = std::make_shared<TargetNode>(node_options);\n\n// publish the necessary topics from test_manager; the second argument is the topic name\ntest_manager->publishOdometry(test_target_node, \"/localization/kinematic_state\");\ntest_manager->publishMaxVelocity(\ntest_target_node, \"velocity_smoother/input/external_velocity_limit_mps\");\n\n// set scenario_selector's input topic name (this topic is changed to test the node)\ntest_manager->setTrajectoryInputTopicName(\"input/parking/trajectory\");\n\n// test with normal trajectory\nASSERT_NO_THROW(test_manager->testWithNominalTrajectory(test_target_node));\n\n// make sure target_node is running\nEXPECT_GE(test_manager->getReceivedTopicNum(), 1);\n\n// test with trajectory input with empty/one point/overlapping point\nASSERT_NO_THROW(test_manager->testWithAbnormalTrajectory(test_target_node));\n\n// shutdown ROS context\nrclcpp::shutdown();\n}\n
"},{"location":"planning/autoware_planning_test_manager/#implemented-tests","title":"Implemented tests","text":"Node Test name exceptional input output Exceptional input pattern autoware_planning_validator NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points velocity_smoother NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_cruise_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_stop_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_velocity_limiter NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points path_optimizer NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points scenario_selector NodeTestWithExceptionTrajectoryLaneDrivingMode NodeTestWithExceptionTrajectoryParkingMode trajectory scenario Empty, single point, path with duplicate points for scenarios:LANEDRIVING and PARKING freespace_planner NodeTestWithExceptionRoute route trajectory Empty route behavior_path_planner NodeTestWithExceptionRoute NodeTestWithOffTrackEgoPose route route odometry Empty route Off-lane ego-position behavior_velocity_planner NodeTestWithExceptionPathWithLaneID path_with_lane_id path Empty path"},{"location":"planning/autoware_planning_test_manager/#important-notes","title":"Important Notes","text":"

During test execution, when a node is launched, parameters are loaded from the parameter file in each package. Therefore, when adding parameters, you need to add them to the parameter file in the target node's package. This prevents the node from failing to launch due to missing parameters when they are retrieved from the parameter file at launch time.

"},{"location":"planning/autoware_planning_test_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"

(WIP)

"},{"location":"planning/autoware_planning_topic_converter/","title":"Planning Topic Converter","text":""},{"location":"planning/autoware_planning_topic_converter/#planning-topic-converter","title":"Planning Topic Converter","text":""},{"location":"planning/autoware_planning_topic_converter/#purpose","title":"Purpose","text":"

This package provides tools that convert topics among the types defined in https://github.com/autowarefoundation/autoware_msgs.

"},{"location":"planning/autoware_planning_topic_converter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_planning_topic_converter/#usage-example","title":"Usage example","text":"

The tools in this package are provided as composable ROS 2 component nodes, so that they can be spawned into an existing process, launched from launch files, or invoked from the command line.

<load_composable_node target=\"container_name\">\n<composable_node pkg=\"planning_topic_converter\" plugin=\"autoware::planning_topic_converter::PathToTrajectory\" name=\"path_to_trajectory_converter\" namespace=\"\">\n<!-- params -->\n<param name=\"input_topic\" value=\"foo\"/>\n<param name=\"output_topic\" value=\"bar\"/>\n<!-- composable node config -->\n<extra_arg name=\"use_intra_process_comms\" value=\"false\"/>\n</composable_node>\n</load_composable_node>\n
"},{"location":"planning/autoware_planning_topic_converter/#parameters","title":"Parameters","text":"Name Type Description input_topic string input topic name. output_topic string output topic name."},{"location":"planning/autoware_planning_topic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_planning_topic_converter/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_planning_validator/","title":"Planning Validator","text":""},{"location":"planning/autoware_planning_validator/#planning-validator","title":"Planning Validator","text":"

The autoware_planning_validator is a module that checks the validity of a trajectory before it is published. The status of the validation can be viewed in the /diagnostics and /validation_status topics. When an invalid trajectory is detected, the autoware_planning_validator will process the trajectory following the selected option: \"0. publish the trajectory as it is\", \"1. stop publishing the trajectory\", \"2. publish the last validated trajectory\".

"},{"location":"planning/autoware_planning_validator/#supported-features","title":"Supported features","text":"

The following features are supported for trajectory validation and can have thresholds set by parameters:

The following features are to be implemented.

"},{"location":"planning/autoware_planning_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"planning/autoware_planning_validator/#inputs","title":"Inputs","text":"

The autoware_planning_validator takes in the following inputs:

| Name | Type | Description |
| --- | --- | --- |
| ~/input/kinematics | nav_msgs/Odometry | ego pose and twist |
| ~/input/trajectory | autoware_planning_msgs/Trajectory | target trajectory to be validated in this node |

"},{"location":"planning/autoware_planning_validator/#outputs","title":"Outputs","text":"

It outputs the following:

| Name | Type | Description |
| --- | --- | --- |
| ~/output/trajectory | autoware_planning_msgs/Trajectory | validated trajectory |
| ~/output/validation_status | planning_validator/PlanningValidatorStatus | validator status to inform the reason why the trajectory is valid/invalid |
| /diagnostics | diagnostic_msgs/DiagnosticStatus | diagnostics to report errors |

"},{"location":"planning/autoware_planning_validator/#parameters","title":"Parameters","text":"

The following parameters can be set for the autoware_planning_validator:

"},{"location":"planning/autoware_planning_validator/#system-parameters","title":"System parameters","text":"Name Type Description Default value invalid_trajectory_handling_type int set the operation when the invalid trajectory is detected. 0: publish the trajectory even if it is invalid, 1: stop publishing the trajectory, 2: publish the last validated trajectory. 0 publish_diag bool the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true diag_error_count_threshold int if true, diagnostics msg is published. true display_on_terminal bool show error msg on terminal true"},{"location":"planning/autoware_planning_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"planning/autoware_planning_validator/#thresholds","title":"Thresholds","text":"

The input trajectory is detected as invalid if any of the following metrics exceeds its threshold.

| Name | Type | Description | Default value |
| --- | --- | --- | --- |
| thresholds.interval | double | invalid threshold of the distance between two neighboring trajectory points [m] | 100.0 |
| thresholds.relative_angle | double | invalid threshold of the relative angle of two neighboring trajectory points [rad] | 2.0 |
| thresholds.curvature | double | invalid threshold of the curvature at each trajectory point [1/m] | 1.0 |
| thresholds.lateral_acc | double | invalid threshold of the lateral acceleration at each trajectory point [m/s^2] | 9.8 |
| thresholds.longitudinal_max_acc | double | invalid threshold of the maximum longitudinal acceleration at each trajectory point [m/s^2] | 9.8 |
| thresholds.longitudinal_min_acc | double | invalid threshold of the minimum longitudinal deceleration at each trajectory point [m/s^2] | -9.8 |
| thresholds.steering | double | invalid threshold of the steering angle at each trajectory point [rad] | 1.414 |
| thresholds.steering_rate | double | invalid threshold of the steering angle rate at each trajectory point [rad/s] | 10.0 |
| thresholds.velocity_deviation | double | invalid threshold of the velocity deviation between the ego velocity and the trajectory point closest to ego [m/s] | 100.0 |
| thresholds.distance_deviation | double | invalid threshold of the distance deviation between the ego position and the trajectory point closest to ego [m] | 100.0 |
| parameters.longitudinal_distance_deviation | double | invalid threshold of the longitudinal distance deviation between the ego position and the trajectory [m] | 2.0 |

"},{"location":"planning/autoware_planning_validator/#parameters_1","title":"Parameters","text":"

Parameters used, e.g., to calculate the thresholds:

| Name | Type | Description | Default value |
| --- | --- | --- | --- |
| parameters.forward_trajectory_length_acceleration | double | This value is used to calculate the required trajectory length. | -5.0 |
| parameters.forward_trajectory_length_margin | double | A margin of the required trajectory length not to raise an error when the ego slightly exceeds the trajectory end point. | 2.0 |
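
As a worked example — assuming, as an interpretation rather than a statement of the exact implementation, that the required length is the stopping distance under this deceleration plus the margin — an ego velocity of 10 [m/s] would give:

\\[ L_{required} = \\frac{v_{ego}^2}{2 |a|} + l_{margin} = \\frac{10^2}{2 \\cdot 5.0} + 2.0 = 12.0 \\ [m] \\]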

"},{"location":"planning/autoware_remaining_distance_time_calculator/","title":"Index","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#remaining-distance-and-time-calculator","title":"Remaining Distance and Time Calculator","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#role","title":"Role","text":"

This package aims to provide mission remaining distance and remaining time calculations.

"},{"location":"planning/autoware_remaining_distance_time_calculator/#activation-and-timing","title":"Activation and Timing","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#module-parameters","title":"Module Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period. [Hz]"},{"location":"planning/autoware_remaining_distance_time_calculator/#inner-workings","title":"Inner-workings","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#remaining-distance-calculation","title":"Remaining Distance Calculation","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#remaining-time-calculation","title":"Remaining Time Calculation","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#future-work","title":"Future Work","text":""},{"location":"planning/autoware_route_handler/","title":"route handler","text":""},{"location":"planning/autoware_route_handler/#route-handler","title":"route handler","text":"

route_handler is a library for calculating the driving route on the lanelet map.

"},{"location":"planning/autoware_route_handler/#unit-testing","title":"Unit Testing","text":"

The unit tests depend on the autoware_test_utils package. autoware_test_utils is a library that provides several common functions to simplify unit test creation.

By default, a route file is necessary to create tests. The following illustrates the routes that are used in the unit tests.

"},{"location":"planning/autoware_route_handler/#lane-change-test-route","title":"Lane change test route","text":""},{"location":"planning/autoware_rtc_interface/","title":"RTC Interface","text":""},{"location":"planning/autoware_rtc_interface/#rtc-interface","title":"RTC Interface","text":""},{"location":"planning/autoware_rtc_interface/#purpose","title":"Purpose","text":"

The RTC Interface is an interface to publish the decision status of behavior planning modules and to receive execution commands from outside the autonomous driving system.

"},{"location":"planning/autoware_rtc_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_rtc_interface/#usage-example","title":"Usage example","text":"
// Generate instance (in this example, \"intersection\" is selected)\nautoware::rtc_interface::RTCInterface rtc_interface(node, \"intersection\");\n\n// Generate UUID\nconst unique_identifier_msgs::msg::UUID uuid = generateUUID(getModuleId());\n\n// Repeat while module is running\nwhile (...) {\n// Get safety status of the module corresponding to the module id\nconst bool safe = ...\n\n// Get distance to the object corresponding to the module id\nconst double start_distance = ...\nconst double finish_distance = ...\n\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Update status\nrtc_interface.updateCooperateStatus(uuid, safe, start_distance, finish_distance, stamp);\n\nif (rtc_interface.isActivated(uuid)) {\n// Execute planning\n} else {\n// Stop planning\n}\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Publish status topic\nrtc_interface.publishCooperateStatus(stamp);\n}\n\n// Remove the status from array\nrtc_interface.removeCooperateStatus(uuid);\n
"},{"location":"planning/autoware_rtc_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_rtc_interface/#rtcinterface-constructor","title":"RTCInterface (Constructor)","text":"
autoware::rtc_interface::RTCInterface(rclcpp::Node & node, const std::string & name);\n
"},{"location":"planning/autoware_rtc_interface/#description","title":"Description","text":"

A constructor for autoware::rtc_interface::RTCInterface.

"},{"location":"planning/autoware_rtc_interface/#input","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output","title":"Output","text":"

An instance of RTCInterface

"},{"location":"planning/autoware_rtc_interface/#publishcooperatestatus","title":"publishCooperateStatus","text":"
autoware::rtc_interface::publishCooperateStatus(const rclcpp::Time & stamp)\n
"},{"location":"planning/autoware_rtc_interface/#description_1","title":"Description","text":"

Publish registered cooperate status.

"},{"location":"planning/autoware_rtc_interface/#input_1","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_1","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#updatecooperatestatus","title":"updateCooperateStatus","text":"
autoware::rtc_interface::updateCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid, const bool safe, const double start_distance, const double finish_distance, const rclcpp::Time & stamp)\n
"},{"location":"planning/autoware_rtc_interface/#description_2","title":"Description","text":"

Update cooperate status corresponding to uuid. If cooperate status corresponding to uuid is not registered yet, add new cooperate status.

"},{"location":"planning/autoware_rtc_interface/#input_2","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_2","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#removecooperatestatus","title":"removeCooperateStatus","text":"
autoware::rtc_interface::removeCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/autoware_rtc_interface/#description_3","title":"Description","text":"

Remove cooperate status corresponding to uuid from registered statuses.

"},{"location":"planning/autoware_rtc_interface/#input_3","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_3","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#clearcooperatestatus","title":"clearCooperateStatus","text":"
autoware::rtc_interface::clearCooperateStatus()\n
"},{"location":"planning/autoware_rtc_interface/#description_4","title":"Description","text":"

Remove all cooperate statuses.

"},{"location":"planning/autoware_rtc_interface/#input_4","title":"Input","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#output_4","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#isactivated","title":"isActivated","text":"
autoware::rtc_interface::isActivated(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/autoware_rtc_interface/#description_5","title":"Description","text":"

Return received command status corresponding to uuid.

"},{"location":"planning/autoware_rtc_interface/#input_5","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_5","title":"Output","text":"

If auto mode is enabled, the result is based on the safety status. Otherwise, it returns true if the received command is ACTIVATED, and false if not.
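
This decision can be sketched as follows (simplified; the function and argument names are placeholders):

// Simplified decision logic of isActivated (illustrative).\nbool isActivatedImpl(const bool auto_mode_enabled, const bool safe, const bool command_is_activated)\n{\nif (auto_mode_enabled) {\nreturn safe; // follow the module's own safety status\n}\nreturn command_is_activated; // follow the external cooperate command\n}\n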

"},{"location":"planning/autoware_rtc_interface/#isregistered","title":"isRegistered","text":"
autoware::rtc_interface::isRegistered(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/autoware_rtc_interface/#description_6","title":"Description","text":"

Return true if uuid is registered.

"},{"location":"planning/autoware_rtc_interface/#input_6","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_6","title":"Output","text":"

If uuid is registered, return true. If not, return false.

"},{"location":"planning/autoware_rtc_interface/#debugging-tools","title":"Debugging Tools","text":"

There is a debugging tool called RTC Replayer for the RTC interface.

"},{"location":"planning/autoware_rtc_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_rtc_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_scenario_selector/","title":"autoware_scenario_selector","text":""},{"location":"planning/autoware_scenario_selector/#autoware_scenario_selector","title":"autoware_scenario_selector","text":""},{"location":"planning/autoware_scenario_selector/#scenario_selector_node","title":"scenario_selector_node","text":"

scenario_selector_node is a node that switches trajectories from each scenario.

"},{"location":"planning/autoware_scenario_selector/#input-topics","title":"Input topics","text":"Name Type Description ~input/lane_driving/trajectory autoware_planning_msgs::Trajectory trajectory of LaneDriving scenario ~input/parking/trajectory autoware_planning_msgs::Trajectory trajectory of Parking scenario ~input/lanelet_map autoware_map_msgs::msg::LaneletMapBin ~input/route autoware_planning_msgs::LaneletRoute route and goal pose ~input/odometry nav_msgs::Odometry for checking whether vehicle is stopped is_parking_completed bool (implemented as rosparam) whether all split trajectory of Parking are published"},{"location":"planning/autoware_scenario_selector/#output-topics","title":"Output topics","text":"Name Type Description ~output/scenario tier4_planning_msgs::Scenario current scenario and scenarios to be activated ~output/trajectory autoware_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/autoware_scenario_selector/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/autoware_scenario_selector/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in scenario_selector.launch or add args when executing roslaunch.
  2. Run roslaunch autoware_scenario_selector scenario_selector.launch.
"},{"location":"planning/autoware_scenario_selector/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float timer's update rate 10 \u22650.0 th_max_message_delay_sec float threshold time of input messages' maximum delay 1 \u22650.0 th_arrived_distance_m float threshold distance to check if vehicle has arrived at the trajectory's endpoint 1 \u22650.0 th_stopped_time_sec float threshold time to check if vehicle is stopped 1 \u22650.0 th_stopped_velocity_mps float threshold velocity to check if vehicle is stopped 0.01 \u22650.0 enable_mode_switching boolean enable switching between scenario modes when ego is stuck in parking area 1 N/A"},{"location":"planning/autoware_scenario_selector/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_surround_obstacle_checker/","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/#surround-obstacle-checker","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/#purpose","title":"Purpose","text":"

This module subscribes to the required data (ego pose, obstacles, etc.), and publishes a zero velocity limit to keep the vehicle stopped if any of the stop conditions are satisfied.
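
For illustration, the "zero velocity limit" is a tier4_planning_msgs::msg::VelocityLimit message published on ~/output/max_velocity (see the Output section below); a minimal sketch, with the publisher name assumed:

tier4_planning_msgs::msg::VelocityLimit velocity_limit;\nvelocity_limit.stamp = this->now();\nvelocity_limit.max_velocity = 0.0;  // zero velocity limit keeps the vehicle stopped\nvelocity_limit.sender = \"surround_obstacle_checker\";\npub_velocity_limit_->publish(velocity_limit);  // publisher on ~/output/max_velocity (name assumed)\n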

"},{"location":"planning/autoware_surround_obstacle_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_surround_obstacle_checker/#algorithms","title":"Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/#check-data","title":"Check data","text":"

Check that surround_obstacle_checker has received the obstacle pointcloud (with the ground removed), the dynamic objects, and the current velocity data.

"},{"location":"planning/autoware_surround_obstacle_checker/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest object. This function computes the minimum distance from the polygon of the ego vehicle to all points in the pointcloud and to the polygons of the dynamic objects.
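
A minimal sketch of this computation, assuming boost::geometry types for the footprint and obstacle shapes (not the actual implementation):

#include <boost/geometry.hpp>\n#include <boost/geometry/geometries/point_xy.hpp>\n#include <boost/geometry/geometries/polygon.hpp>\n#include <algorithm>\n#include <limits>\n#include <vector>\n\nnamespace bg = boost::geometry;\nusing Point2d = bg::model::d2::point_xy<double>;\nusing Polygon2d = bg::model::polygon<Point2d>;\n\n// Minimum distance from the ego footprint to pointcloud points and object polygons.\ndouble getDistanceToNearestObject(const Polygon2d & ego_polygon, const std::vector<Point2d> & pointcloud, const std::vector<Polygon2d> & object_polygons)\n{\n  double min_dist = std::numeric_limits<double>::max();\n  for (const auto & p : pointcloud) {\n    min_dist = std::min(min_dist, bg::distance(ego_polygon, p));  // polygon-to-point\n  }\n  for (const auto & poly : object_polygons) {\n    min_dist = std::min(min_dist, bg::distance(ego_polygon, poly));  // polygon-to-polygon\n  }\n  return min_dist;\n}\n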

"},{"location":"planning/autoware_surround_obstacle_checker/#stop-requirement","title":"Stop requirement","text":"

If all of the following conditions are satisfied, the module plans to stop.

"},{"location":"planning/autoware_surround_obstacle_checker/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages two states. As mentioned in the stop condition section, it prevents chattering by changing the threshold used to detect surrounding obstacles depending on the state: once an obstacle has been detected within surround_check_*_distance, the module returns to the non-surrounding-obstacle state only when no object exists within that distance plus surround_check_hysteresis_distance.

"},{"location":"planning/autoware_surround_obstacle_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_surround_obstacle_checker/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/autoware_surround_obstacle_checker/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization ~/debug/footprint geometry_msgs::msg::PolygonStamped Ego vehicle base footprint for visualization ~/debug/footprint_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_distance offset for visualization ~/debug/footprint_recover_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_recover_distance offset for visualization"},{"location":"planning/autoware_surround_obstacle_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range pointcloud.enable_check boolean enable to check surrounding pointcloud false N/A pointcloud.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.enable_check boolean enable to check surrounding unknown objects true N/A unknown.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.enable_check boolean enable to check surrounding car true N/A car.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.enable_check boolean enable to check surrounding truck true N/A truck.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. 
[m] 0.5 \u22650.0 truck.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.enable_check boolean enable to check surrounding bus true N/A bus.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.enable_check boolean enable to check surrounding trailer true N/A trailer.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.enable_check boolean enable to check surrounding motorcycle true N/A motorcycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.enable_check boolean enable to check surrounding bicycle true N/A bicycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.enable_check boolean enable to check surrounding pedestrian true N/A pedestrian.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 surround_check_hysteresis_distance float If no object exists in this hysteresis distance added to the above distance, transit to \"non-surrounding-obstacle\" status [m] 0.3 \u22650.0 state_clear_time float Threshold to clear stop state [s] 2.0 \u22650.0 stop_state_ego_speed float Threshold to check ego vehicle stopped [m/s] 0.1 \u22650.0 publish_debug_footprints boolean Publish vehicle footprint & footprints with surround_check_distance and surround_check_recover_distance offsets. true N/A debug_footprint_label string select the label for debug footprint car ['pointcloud', 'unknown', 'car', 'truck', 'bus', 'trailer', 'motorcycle', 'bicycle', 'pedestrian'] Name Type Description Default value enable_check bool Indicates whether each object is considered in the obstacle check target. 
true for objects; false for point clouds surround_check_front_distance double If there are objects or point clouds within this distance in front, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_side_distance double If there are objects or point clouds within this side distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_back_distance double If there are objects or point clouds within this back distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_hysteresis_distance double If no object exists within surround_check_xxx_distance plus this additional distance, transition to the \"non-surrounding-obstacle\" status [m]. 0.3 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Threshold to check ego vehicle stopped [s] 0.1 publish_debug_footprints bool Publish vehicle footprint with/without offsets true"},{"location":"planning/autoware_surround_obstacle_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform stop planning, obstacle pointcloud data is required. Hence, the module does not plan to stop if the obstacle is in a blind spot.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#surround-obstacle-checker","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#purpose","title":"Purpose","text":"

surround_obstacle_checker is a module that plans a stop so that the ego vehicle, while stopped, does not start moving when obstacles exist in its surroundings.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#algorithms","title":"Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#check-data","title":"Check data","text":"

Check whether pointcloud, dynamic object, and ego velocity data have been acquired.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest obstacle. Here, the ego vehicle's polygon is computed, and the distances from it to each point in the pointcloud and to each dynamic object's polygon are calculated to obtain the distance to the nearest obstacle.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#stop-condition","title":"Stop condition","text":"

When all of the following conditions are satisfied, the ego vehicle plans to stop.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages its state. As described in the Stop condition section, chattering is prevented by changing the obstacle detection threshold depending on the state.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true surround_check_distance double If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance double If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Threshold to check ego vehicle stopped [s] 0.1"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since this function requires observation of the obstacle pointcloud to work, no stop is planned when the obstacle is in a blind spot.

"},{"location":"planning/autoware_velocity_smoother/","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/#velocity-smoother","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/#purpose","title":"Purpose","text":"

autoware_velocity_smoother outputs a desired velocity profile on a reference trajectory. This module plans the velocity profile within the given limits on velocity, acceleration, and jerk to achieve both maximization of velocity and ride quality. We call this module autoware_velocity_smoother because limiting the acceleration and the jerk corresponds to smoothing the velocity profile.

"},{"location":"planning/autoware_velocity_smoother/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_velocity_smoother/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_velocity_smoother/#extract-trajectory","title":"Extract trajectory","text":"

Starting from the point on the reference trajectory closest to the center of the vehicle's rear wheel axle, it extracts the reference path from extract_behind_dist behind to extract_ahead_dist ahead.

"},{"location":"planning/autoware_velocity_smoother/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

It applies the velocity limit input from outside autoware_velocity_smoother. Note that the external velocity limit is different from the velocity limits already set on the map and the reference trajectory. The external velocity limit is applied from the position at which the limit can be reached within the deceleration and jerk constraints set as parameters.
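
As a simplified illustration that ignores the jerk constraint, decelerating from the current velocity v_0 to the external limit v_limit at a constant deceleration a requires a distance d = (v_0^2 - v_limit^2) / (2 * |a|), so the limit is inserted at least that far ahead of the ego (plus margin_to_insert_external_velocity_limit; see the parameters below).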

"},{"location":"planning/autoware_velocity_smoother/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

It applies a velocity limit near the stopping point. This function is used to approach close to an obstacle or to improve stopping accuracy.

"},{"location":"planning/autoware_velocity_smoother/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

It applies a velocity limit to decelerate at curves. It calculates the velocity limit from the curvature of the reference trajectory and the maximum lateral acceleration max_lateral_accel. The velocity limit is set so as not to fall below min_curve_velocity.

Note: a velocity limit that would require a jerk larger than nominal.jerk is not applied. In other words, even if a sharp curve is planned just in front of the ego, no deceleration is performed.
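
Since the lateral acceleration at curvature kappa satisfies a_lat = v^2 * kappa, the limit above can be sketched as follows (an illustrative sketch, not the actual implementation):

#include <algorithm>\n#include <cmath>\n#include <limits>\n\n// Illustrative sketch of the curve velocity limit described above.\ndouble calcCurveVelocityLimit(const double curvature, const double max_lateral_accel, const double min_curve_velocity)\n{\n  if (std::abs(curvature) < 1e-6) {\n    return std::numeric_limits<double>::max();  // effectively straight: no curve limit\n  }\n  const double v_limit = std::sqrt(max_lateral_accel / std::abs(curvature));  // from a_lat = v^2 * kappa\n  return std::max(v_limit, min_curve_velocity);  // never fall below min_curve_velocity\n}\n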

"},{"location":"planning/autoware_velocity_smoother/#apply-steering-rate-limit","title":"Apply steering rate limit","text":"

It calculates the desired steering angles of the trajectory points and applies the steering rate limit. If steering_angle_rate > max_steering_angle_rate, it decreases the velocity of the trajectory point to an acceptable velocity.
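
For reference, with a kinematic bicycle model of wheelbase L, the desired steering angle at curvature kappa is delta = atan(L * kappa); traveling at velocity v gives a steering rate of d(delta)/dt = v * d(delta)/ds, so the acceptable velocity at a point is roughly v <= max_steering_angle_rate / |d(delta)/ds| (an illustrative relation, not the exact implementation).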

"},{"location":"planning/autoware_velocity_smoother/#resample-trajectory","title":"Resample trajectory","text":"

It resamples the points on the reference trajectory at a designated time interval. Note that the length of the trajectory is kept between min_trajectory_length and max_trajectory_length, and the distance between two points is not shorter than min_trajectory_interval_distance. It samples densely up to the distance traveled within resample_time at the current velocity, then samples sparsely after that. By sampling according to the velocity, both low computational load and accuracy are achieved, since points are sampled finely at low velocity and coarsely at high velocity.
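
Illustratively, resampling at a time interval dt while moving at velocity v yields a spatial interval of roughly v * dt, floored by the corresponding minimum interval distance; the dense region uses dense_dt and dense_min_interval_distance, and the sparse region uses sparse_dt and sparse_min_interval_distance (see the resampling parameters below).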

"},{"location":"planning/autoware_velocity_smoother/#calculate-initial-state","title":"Calculate initial state","text":"

Calculate initial values for velocity planning. The initial values are calculated according to the situation as shown in the following table.

Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviate between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/autoware_velocity_smoother/#smooth-velocity","title":"Smooth velocity","text":"

It plans the velocity. The velocity planning algorithm is chosen from JerkFiltered, L2, and Linf, and is set in the launch file. All of these algorithms use OSQP[1] as the optimization solver.
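
For reference, OSQP solves quadratic programs in the standard form: minimize (1/2) x^T P x + q^T x subject to l <= A x <= u. Each algorithm below encodes its cost terms into P and q, and the velocity, acceleration, and jerk limits (with slack variables for their violations) into A, l, and u.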

"},{"location":"planning/autoware_velocity_smoother/#jerkfiltered","title":"JerkFiltered","text":"

It minimizes the sum of the negative square of the velocity (for velocity maximization), the square of the jerk, and the squares of the violations of the velocity limit, the acceleration limit, and the jerk limit.
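
Schematically, with the weights listed in the parameter section, the cost can be written as J = sum_i ( -v_i^2 + jerk_weight * j_i^2 + over_v_weight * sigma_v,i^2 + over_a_weight * sigma_a,i^2 + over_j_weight * sigma_j,i^2 ), where the sigma_* terms are slack variables measuring how much each limit is violated (a schematic form, not the exact implementation).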

"},{"location":"planning/autoware_velocity_smoother/#l2","title":"L2","text":"

It minimizes the sum of the negative square of the velocity, the square of the pseudo-jerk[2], and the squares of the violations of the velocity limit and the acceleration limit.

"},{"location":"planning/autoware_velocity_smoother/#linf","title":"Linf","text":"

It minimizes the sum of the negative square of the velocity, the maximum absolute value of the pseudo-jerk[2], and the squares of the violations of the velocity limit and the acceleration limit.

"},{"location":"planning/autoware_velocity_smoother/#post-process","title":"Post process","text":"

It performs post-processing of the planned velocity.

After the optimization, a resampling called post resampling is performed before passing the optimized trajectory to the next node. Since the required path interval from optimization may be different from the one for the next module, post resampling helps to fill this gap. Therefore, in post resampling, it is necessary to check the path specification of the following module to determine the parameters. Note that if the computational load of the optimization algorithm is high and the path interval is sparser than the path specification of the following module in the first resampling, post resampling would resample the trajectory densely. On the other hand, if the computational load of the optimization algorithm is small and the path interval is denser than the path specification of the following module in the first resampling, the path is sparsely resampled according to the specification of the following module.

"},{"location":"planning/autoware_velocity_smoother/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_velocity_smoother/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/autoware_velocity_smoother/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_steering_rate_limited autoware_planning_msgs/Trajectory Steering angle rate limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/autoware_velocity_smoother/#parameters","title":"Parameters","text":""},{"location":"planning/autoware_velocity_smoother/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/autoware_velocity_smoother/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/autoware_velocity_smoother/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value enable_lateral_acc_limit bool To toggle the lateral acceleration filter on and off. You can switch it dynamically at runtime. true max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0 min_decel_for_lateral_acc_lim_filter double Deceleration limit to avoid sudden braking by the lateral acceleration filter [m/ss]. 
Strong limitation degrades the deceleration response to the appearance of sharp curves due to obstacle avoidance, etc. -2.5"},{"location":"planning/autoware_velocity_smoother/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/autoware_velocity_smoother/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/autoware_velocity_smoother/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/autoware_velocity_smoother/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/autoware_velocity_smoother/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/autoware_velocity_smoother/#limit-steering-angle-rate-parameters","title":"Limit steering angle rate parameters","text":"Name Type Description Default value enable_steering_rate_limit bool To toggle the steer rate filter on and off. You can switch it dynamically at runtime. 
true max_steering_angle_rate double Maximum steering angle rate [degree/s] 40.0 resample_ds double Distance between trajectory points [m] 0.1 curvature_threshold double If curvature > curvature_threshold, steeringRateLimit is triggered [1/m] 0.02 curvature_calculation_distance double Distance of points while curvature is calculating [m] 1.0"},{"location":"planning/autoware_velocity_smoother/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/autoware_velocity_smoother/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/autoware_velocity_smoother/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_velocity_smoother/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/autoware_velocity_smoother/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/autoware_velocity_smoother/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185

"},{"location":"planning/autoware_velocity_smoother/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#velocity-smoother","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#purpose","title":"Purpose","text":"

autoware_velocity_smoother is a module that plans and outputs the desired velocity at each point on the target trajectory. To achieve both maximization of velocity and ride comfort, this module plans the velocity within the pre-specified limits on velocity, acceleration, and jerk. Since limiting the acceleration and jerk corresponds to smoothing the change in velocity, this module is called autoware_velocity_smoother.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#extract-trajectory","title":"Extract trajectory","text":"

From the point on the reference path closest to the center of the ego vehicle's rear wheel axle, the reference path is extracted from extract_behind_dist behind that point to extract_ahead_dist ahead of it.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

Apply the velocity limit specified from outside the module. The external velocity limit handled here is passed via the /planning/scenario_planning/max_velocity topic and is separate from the velocity limits already set on the reference path, such as those set on the map. The externally specified velocity limit is applied from the position where the vehicle can decelerate to it within the deceleration and jerk limits specified by the parameters.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

Set the velocity for approaching a stop point. This is used for purposes such as approaching close to an obstacle or improving the accuracy of stopping at a precise position.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

According to the curvature of the path, a velocity that does not exceed the specified maximum lateral acceleration max_lateral_accel is set as the velocity limit. However, the velocity limit is set so as not to fall below min_curve_velocity.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#resample-trajectory","title":"Resample trajectory","text":"

Resample the points of the path at a specified time interval. The resampling is performed so that the total length of the path lies between min_trajectory_length and max_trajectory_length, and the interval between points does not become smaller than min_trajectory_interval_distance. Points are sampled densely up to the distance traveled at the current velocity during resample_time, and sparsely beyond that. Sampling this way gives dense points at low speed and sparse points at high speed, achieving both stopping accuracy and reduced computational load.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#calculate-initial-state","title":"Calculate initial state","text":"

Calculate the initial values for velocity planning. The initial values are calculated according to the situation as shown in the table below.

Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviation between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/autoware_velocity_smoother/README.ja/#smooth-velocity","title":"Smooth velocity","text":"

Plan the velocity. The velocity planning algorithm is chosen in the configuration from three types: JerkFiltered, L2, and Linf. OSQP[1] is used as the optimization solver.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#jerkfiltered","title":"JerkFiltered","text":"

Minimize the sum of the square of the velocity (expressed as a negative value for minimization), the square of the velocity limit violation, the square of the acceleration limit violation, the square of the jerk limit violation, and the square of the jerk.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#l2","title":"L2","text":"

Minimize the sum of the square of the velocity (expressed as a negative value for minimization), the square of the velocity limit violation, the square of the acceleration limit violation, and the square of the pseudo-jerk[2].

"},{"location":"planning/autoware_velocity_smoother/README.ja/#linf","title":"Linf","text":"

Minimize the sum of the square of the velocity (expressed as a negative value for minimization), the square of the velocity limit violation, and the square of the acceleration limit violation, plus the maximum absolute value of the pseudo-jerk[2].

"},{"location":"planning/autoware_velocity_smoother/README.ja/#post-process","title":"Post process","text":"

Perform post-processing of the planned trajectory.

After the optimization calculation is finished, a resampling called post resampling is performed before passing the path to the next node. The reason for resampling again here is that the path interval required before the optimization does not necessarily match the path interval to be passed to the subsequent module, and the resampling fills this gap. Therefore, in post resampling, the parameters must be determined by checking the path specification of the subsequent module. Note that if the computational load of the optimization algorithm is high and the path interval of the first resampling is sparser than the path specification of the subsequent module, post resampling resamples the path densely. Conversely, if the computational load of the optimization algorithm is low and the path interval of the first resampling is denser than the path specification of the subsequent module, post resampling resamples the path sparsely to match that specification.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/autoware_velocity_smoother/README.ja/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/autoware_velocity_smoother/README.ja/#parameters","title":"Parameters","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/autoware_velocity_smoother/README.ja/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/autoware_velocity_smoother/README.ja/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this 
velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/autoware_velocity_smoother/README.ja/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/autoware_velocity_smoother/README.ja/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_resample_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_resample_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_resample_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_resample_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 
1000.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/autoware_velocity_smoother/README.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185

"},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/","title":"Avoidance by lane change design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#avoidance-by-lane-change-design","title":"Avoidance by lane change design","text":"

This is a sub-module that avoids obstacles by a lane change maneuver.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#purpose-role","title":"Purpose / Role","text":"

This module is designed as one of the obstacle avoidance features and generates a lane change path if the following conditions are satisfied.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Basically, this module is implemented by reusing the avoidance target filtering logic of the existing Static Object Avoidance Module and the path generation logic of the Normal Lane Change Module. On the other hand, the conditions under which the module is activated differ from those of a normal avoidance module.

Check that the following conditions are satisfied after the filtering process for the avoidance target.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#number-of-the-avoidance-target-objects","title":"Number of the avoidance target objects","text":"

This module is launched when the number of avoidance target objects on the ego driving lane is greater than execute_object_num. If there are no avoidance targets in the ego driving lane, or their number is less than the parameter, the obstacle is avoided by the normal avoidance behavior (if the normal avoidance module is registered).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#lane-change-end-point-condition","title":"Lane change end point condition","text":"

Unlike the normal avoidance module, which specifies the shift line end point, this module does not specify its end point when generating a lane change path. On the other hand, setting execute_only_when_lane_change_finish_before_object to true will activate this module only if the lane change can be completed before the avoidance target object.

Although setting the parameter to false would increase the number of scenes where avoidance by lane change is performed, sufficient lateral margin may not be ensured in some cases, because the vehicle passes by the side of obstacles during the lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#parameters","title":"Parameters","text":"Name Unit Type Description Default value execute_object_num [-] int Number of avoidance target objects on ego driving lane is greater than this value, this module will be launched. 1 execute_object_longitudinal_margin [m] double [maybe unused] Only when distance between the ego and avoidance target object is longer than this value, this module will be launched. 0.0 execute_only_when_lane_change_finish_before_object [-] bool If this flag set true, this module will be launched only when the lane change end point is NOT behind the avoidance target object. true"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/","title":"Avoidance module for dynamic objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#avoidance-module-for-dynamic-objects","title":"Avoidance module for dynamic objects","text":"

This module is under development.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#purpose-role","title":"Purpose / Role","text":"

This module provides avoidance functions for vehicles, pedestrians, and obstacles in the vicinity of the ego's path, in combination with autoware_path_optimizer. Each module performs the following role: the Dynamic Avoidance module cuts off the drivable area according to the position and velocity of the target to be avoided, and the Obstacle Avoidance module modifies the path to be followed so that it fits within the received drivable area.

Avoidance of static obstacles is also provided by the Static Obstacle Avoidance module, but the two modules have different roles. The Static Obstacle Avoidance module performs avoidance through the outside of the ego's own lanes but cannot avoid moving objects. On the other hand, this module can avoid moving objects. For this reason, the word "dynamic" is used in the module's name. The table below lists the avoidance modules that can handle each situation.

avoid within the own lane avoid through the outside of own lanes avoid not-moving objects Avoidance Module Dynamic Avoidance Module + Obstacle Avoidance Module Avoidance Module avoid moving objects Dynamic Avoidance Module + Obstacle Avoidance Module No Module (Under Development)"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#policy-of-algorithms","title":"Policy of algorithms","text":"

Here, we describe the policy of inner algorithms. The inner algorithms can be separated into two parts: The first decides whether to avoid the obstacles and the second cuts off the drivable area against the corresponding obstacle.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#select-obstacles-to-avoid","title":"Select obstacles to avoid","text":"

To decide whether to avoid an object, both the predicted path and the state (pose and twist) of each object are used. The type of objects the user wants this module to avoid is also required. Using this information, the module decides to avoid objects that obstruct the ego's passage and can be avoided.

The definition of "obstructs the ego's passage" is implemented as an object that would collide with the ego within a few seconds. The other condition, "can be avoided", denotes whether the object can be avoided without risk to the passengers or other vehicles. For this purpose, the module judges whether the obstacle can be avoided while satisfying the constraints on lateral acceleration and lateral jerk. For example, the module decides not to avoid an object that is too close or too fast in the lateral direction.
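To make this judgment concrete, the following is a minimal C++ sketch assuming a symmetric jerk-limited lateral profile and hypothetical names (the module's actual formulation may differ). It checks whether the lateral shift required for avoidance is achievable before the predicted collision while respecting the lateral acceleration and jerk limits.

#include <algorithm>
#include <cmath>

// Hypothetical feasibility check: can the ego shift laterally by
// `required_shift` [m] within `time_to_collision` [s] under the lateral
// acceleration and jerk limits? (illustrative sketch, not the module's code)
bool isAvoidanceFeasible(
  double required_shift, double time_to_collision,
  double max_lat_acc, double max_lat_jerk)
{
  if (time_to_collision <= 0.0) {
    return false;
  }
  const double t = time_to_collision;
  // Max shift of a four-segment constant-jerk profile in time t: j * t^3 / 32
  const double shift_by_jerk = max_lat_jerk * t * t * t / 32.0;
  // Max shift of a bang-bang acceleration profile in time t: a * t^2 / 4
  const double shift_by_acc = max_lat_acc * t * t / 4.0;
  const double achievable_shift = std::min(shift_by_jerk, shift_by_acc);
  return std::abs(required_shift) <= achievable_shift;
}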

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#cuts-off-the-drivable-area-against-the-selected-vehicles","title":"Cuts off the drivable area against the selected vehicles","text":"

For the selected obstacles to be avoided, the module cuts off the drivable area. As inputs to decide the shapes of the cut-off polygons, the poses of the obstacles are mainly used, assuming they move parallel to the ego's path, instead of their predicted paths. This design arises from the fact that the predicted paths of objects are not accurate enough to use for path modification (at least currently). Furthermore, the output drivable-area shape is designed as a rectangular cutout along the ego's path so that the computation is scalar rather than planar.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#determination-of-lateral-dimension","title":"Determination of lateral dimension","text":"

The lateral dimensions of the polygon are calculated as follows. The polygon's width to extract from the drivable area is the obstacle width plus drivable_area_generation.lat_offset_from_obstacle. The resulting lateral shift of the ego path can be limited by drivable_area_generation.max_lat_offset_to_avoid.
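A minimal sketch of this sizing rule (hypothetical types and function name; the parameter names mirror drivable_area_generation.*):

#include <algorithm>

struct LateralDimension
{
  double polygon_width;  // width cut out of the drivable area [m]
  double ego_lat_shift;  // resulting lateral shift of the ego path [m]
};

// Illustrative only; the actual implementation may differ.
LateralDimension calcLateralDimension(
  double obstacle_width, double required_shift,
  double lat_offset_from_obstacle /* default 0.8 */,
  double max_lat_offset_to_avoid /* default 0.5 */)
{
  LateralDimension d;
  // Polygon width = obstacle width plus the configured lateral offset.
  d.polygon_width = obstacle_width + lat_offset_from_obstacle;
  // The ego path's lateral shift is clamped by max_lat_offset_to_avoid.
  d.ego_lat_shift =
    std::clamp(required_shift, -max_lat_offset_to_avoid, max_lat_offset_to_avoid);
  return d;
}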

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#determination-of-longitudinal-dimension","title":"Determination of longitudinal dimension","text":"

Then, the same-directional and opposite-directional obstacles are extracted from the drivable area as follows, considering TTC (time to collision).

Regarding the same-directional obstacles, obstacles whose TTC is negative will be ignored (e.g., the obstacle is in front of the ego and its velocity is larger than the ego's velocity).
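For illustration, a minimal TTC computation for a same-directional obstacle might look as follows (hypothetical function name; a negative result corresponds to the ignored case above):

#include <cmath>
#include <optional>

// Illustrative TTC for an obstacle ahead of the ego moving in the same
// direction (not the module's exact code).
std::optional<double> calcSameDirectionTTC(
  double lon_distance_to_obstacle,  // [m], positive if the obstacle is ahead
  double ego_velocity,              // [m/s]
  double obstacle_velocity)         // [m/s]
{
  const double closing_speed = ego_velocity - obstacle_velocity;
  if (std::abs(closing_speed) < 1e-3) {
    return std::nullopt;  // no closing motion, TTC undefined
  }
  // Negative when the obstacle is faster than the ego => ignored.
  return lon_distance_to_obstacle / closing_speed;
}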

Same directional obstacles (Parameter names may differ from implementation)

Opposite directional obstacles (Parameter names may differ from implementation)

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#cuts-off-the-drivable-area-against-the-selected-pedestrians","title":"Cuts off the drivable area against the selected pedestrians","text":"

Then, we describe the logic to generate the drivable areas against pedestrians to be avoided. Objects of this type are considered to have priority right of way over the ego vehicle, while a minimum level of safety for the ego vehicle is still ensured. In other words, the module assigns a restriction area to an obstacle with a specific margin based on the predicted paths with specific confidences for a specific time interval, as shown in the following figure.

Restriction areas are generated from each pedestrian's predicted paths

Apart from the polygons for objects, the module also generates another polygon to ensure the ego's safety, i.e., to avoid abrupt steering or significant changes from the path. This is similar to the avoidance against vehicles and takes precedence over keeping a safe distance from the object to be avoided. As a result, as shown in the figure below, the polygons around the objects, reduced by the ego's secured polygon, are subtracted from the ego's drivable area.

Ego's minimum requirements are prioritized against object margin"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#example","title":"Example","text":"Avoidance for the bus departure Avoidance on curve Avoidance against the opposite direction vehicle Avoidance for multiple vehicles"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#future-works","title":"Future works","text":"

Currently, the path shifting length is limited to 0.5 meters or less by drivable_area_generation.max_lat_offset_to_avoid. This limitation stems from the lack of cooperation with other modules and from the structure of the planning component; due to this, the module can only handle situations where a small avoidance width is sufficient. This is the most significant issue for this module. In addition, the ability of this module to extend the drivable area as needed is also required.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#parameters","title":"Parameters","text":"

Under development

Name Unit Type Description Default value target_object.car [-] bool The flag whether to avoid cars or not true target_object.truck [-] bool The flag whether to avoid trucks or not true ... [-] bool ... ... target_object.min_obstacle_vel [m/s] double Minimum obstacle velocity to avoid 1.0 drivable_area_generation.lat_offset_from_obstacle [m] double Lateral offset to avoid from obstacles 0.8 drivable_area_generation.max_lat_offset_to_avoid [m] double Maximum lateral offset to avoid 0.5 drivable_area_generation.overtaking_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.overtaking_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 4.0 drivable_area_generation.overtaking_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 5.0 drivable_area_generation.overtaking_object.duration_to_hold_avoidance [s] double Duration to hold avoidance after passing by obstacles 3.0 drivable_area_generation.oncoming_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.oncoming_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 9.0 drivable_area_generation.oncoming_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#goal-planner-design","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#purpose-role","title":"Purpose / Role","text":"

Plan path around the goal.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#design","title":"Design","text":"

If goal modification is not allowed, park at the designated fixed goal (fixed_goal_planner in the figure below). When allowed, park in accordance with the specified policy (e.g., pull over on the left/right side of the lane) (rough_goal_planner in the figure below). Currently the rough goal planner only supports the pull_over feature, but it would be desirable to accommodate various parking policies in the future.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#start-condition","title":"start condition","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#fixed_goal_planner","title":"fixed_goal_planner","text":"

This is a very simple function that plans a smooth path to a specified goal. This function does not require approval and always runs with the other modules. NOTE: this planner does not perform several of the features described below, such as "goal search", "collision check", "safety check", etc.

Executed when both conditions are met.

If the target path contains a goal, modify the points of the path so that the path and the goal are connected smoothly. This process changes the shape of the path within the distance of refine_goal_search_radius_range from the goal. Note that this logic depends on the interpolation algorithm executed in a later module (at the moment spline interpolation is used), so it needs to be updated in the future.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#rough_goal_planner","title":"rough_goal_planner","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#pull-over-on-road-lane","title":"pull over on road lane","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#pull-over-on-shoulder-lane","title":"pull over on shoulder lane","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#finish-condition","title":"finish condition","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#general-parameters-for-goal_planner","title":"General parameters for goal_planner","text":"Name Unit Type Description Default value th_arrived_distance [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time [s] double time threshold for arrival of path termination 2.0 center_line_path_interval [m] double reference center line path point interval 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#goal-search","title":"Goal Search","text":"

To realize pull over even when an obstacle exists near the original goal, a collision-free area is searched within a certain range around the original goal. The goal found is published as /planning/scenario_planning/modified_goal.

goal search video

  1. The original goal is set, and the refined goal pose is obtained by moving in the direction normal to the lane center line and keeping margin_from_boundary from the edge of the lane.

  2. Using refined_goal as the base goal, search for candidate goals in the range of -forward_goal_search_length to backward_goal_search_length in the longitudinal direction and longitudinal_margin to longitudinal_margin+max_lateral_offset in the lateral direction based on refined_goal.

  3. Each candidate goal is prioritized, and a path is generated by each planner for each goal. The priority of a candidate goal is determined by its distance from the base goal; the ego vehicle tries to park at the highest-priority feasible goal. The distance is determined by the selected policy. In case of minimum_longitudinal_distance, candidates are sorted with smaller longitudinal distances taking precedence over smaller lateral distances. In case of minimum_weighted_distance, candidates are sorted by the sum of the weighted lateral distance and the longitudinal distance; that is, the distance is calculated by longitudinal_distance + lateral_cost*lateral_distance (a minimal sorting sketch is shown after this list). The following figure is an example of minimum_weighted_distance. The white number indicates the goal candidate priority, and the smaller the number, the higher the priority. The goal numbered 0 indicates the base goal.

  4. If the footprint at a goal candidate is within object_recognition_collision_check_margin of an object's footprint, the goal is determined to be unsafe. Such goals are not selected. If use_occupancy_grid_for_goal_search is enabled, collision detection on the grid is also performed with occupancy_grid_collision_check_margin.

Red goal candidates in the image indicate unsafe ones.
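A minimal sketch of the two sorting policies from step 3 above (hypothetical types and function name; the module's actual code may differ):

#include <algorithm>
#include <string>
#include <vector>

struct GoalCandidate
{
  double lon_distance;  // longitudinal distance from the base goal [m]
  double lat_distance;  // lateral distance from the base goal [m]
};

// Sorts candidates in ascending priority cost (illustrative only).
void sortGoalCandidates(
  std::vector<GoalCandidate> & goals, const std::string & goal_priority,
  double lateral_weight /* default 40.0 */)
{
  if (goal_priority == "minimum_longitudinal_distance") {
    // Smaller longitudinal distance wins; lateral distance breaks ties.
    std::sort(goals.begin(), goals.end(), [](const auto & a, const auto & b) {
      if (a.lon_distance != b.lon_distance) {
        return a.lon_distance < b.lon_distance;
      }
      return a.lat_distance < b.lat_distance;
    });
  } else {  // "minimum_weighted_distance"
    // cost = longitudinal_distance + lateral_cost * lateral_distance
    std::sort(goals.begin(), goals.end(), [&](const auto & a, const auto & b) {
      return a.lon_distance + lateral_weight * a.lat_distance <
             b.lon_distance + lateral_weight * b.lat_distance;
    });
  }
}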

Separately from the collision margin, it is possible to keep a distance of longitudinal_margin between the goal candidate and obstacles in the longitudinal direction. This is intended to provide natural spacing for parking and efficient departure.

Also, if prioritize_goals_before_objects is enabled, the number of objects that need to be avoided to arrive at each goal is counted, and the goals with the lowest count are given priority.

The images represent a count of objects to be avoided at each range, with priority given to those with the lowest number, regardless of the aforementioned distances.

The gray numbers represent objects to avoid, and you can see that the goal in front has a higher priority in this case.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-goal-search","title":"Parameters for goal search","text":"Name Unit Type Description Default value goal_priority [-] string In case minimum_longitudinal_distance, sort with smaller longitudinal distances taking precedence over smaller lateral distances. In case minimum_weighted_distance, sort with the sum of weighted lateral distance and longitudinal distance minimum_weighted_distance lateral_weight [-] double Weight for lateral distance used when minimum_weighted_distance 40.0 prioritize_goals_before_objects [-] bool If there are objects that may need to be avoided, prioritize the goal in front of them true forward_goal_search_length [m] double length of forward range to be explored from the original goal 20.0 backward_goal_search_length [m] double length of backward range to be explored from the original goal 20.0 goal_search_interval [m] double distance interval for goal search 2.0 longitudinal_margin [m] double margin between ego-vehicle at the goal position and obstacles 3.0 max_lateral_offset [m] double maximum offset of goal search in the lateral direction 0.5 lateral_offset_interval [m] double distance interval of goal search in the lateral direction 0.25 ignore_distance_from_lane_start [m] double This parameter ensures that the distance between the start of the shoulder lane and the goal is not less than the specified value. It's used to prevent setting goals too close to the beginning of the shoulder lane, which might lead to unsafe or impractical pull-over maneuvers. Increasing this value will force the system to ignore potential goal positions near the start of the shoulder lane, potentially leading to safer and more comfortable pull-over locations. 0.0 margin_from_boundary [m] double distance margin from edge of the shoulder lane 0.5"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#pull-over","title":"Pull Over","text":"

There are three path generation methods. The path is generated with a certain margin (default: 0.75 m) from the boundary of the shoulder lane.

The process is time-consuming because multiple planners are used to generate a path for each candidate goal. Therefore, in this module, path generation is performed in a thread separate from the main thread. Path generation is performed when the shape of the output path of the previous module changes. If a new module is launched, it is expected to go to a stage before the goal planner, in which case the goal planner regenerates the path. The goal planner is expected to run at the end of multiple modules, which is achieved by keep_last in the planner manager.

Threads in the goal planner are shown below.

The main thread will be the one called from the planner manager flow.

Name Unit Type Description Default value pull_over_minimum_request_length [m] double when the ego-vehicle approaches the goal by this distance or a safe distance to stop, pull over is activated. 100.0 pull_over_velocity [m/s] double decelerate to this speed by the goal search area 3.0 pull_over_minimum_velocity [m/s] double speed of pull_over after stopping once. this prevents excessive acceleration. 1.38 decide_path_distance [m] double decide path if it approaches this distance relative to the parking position. after that, no path planning and goal search are performed 10.0 maximum_deceleration [m/s2] double maximum deceleration. it prevents sudden deceleration when a parking path cannot be found suddenly 1.0 path_priority [-] string In case efficient_path use a goal that can generate an efficient path which is set in efficient_path_order. In case close_goal use the closest goal to the original one. efficient_path efficient_path_order [-] string efficient order of pull over planner along lanes excluding freespace pull over [\"SHIFT\", \"ARC_FORWARD\", \"ARC_BACKWARD\"] lane_departure_check_expansion_margin [m] double margin to expand the ego vehicle footprint when doing lane departure checks 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#shift-parking","title":"shift parking","text":"

The pull over distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched between the predetermined minimum and maximum values, and a value that satisfies the ready conditions described above is output.

  1. Apply a uniform offset to the centerline of the shoulder lane to ensure the margin
  2. In the section between the merge start and end, shift the path using the method used to generate the avoidance path (four segmental constant-jerk polynomials; see the sketch after this list)
  3. Combine this path with the centerline of the road lane
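As a sketch of step 2, assuming the standard four-segment constant-jerk lateral profile (jerk +j, -j, -j, +j over equal quarters), the total lateral shift L over duration T satisfies L = j * T^3 / 32. This relation can be inverted to obtain the shift duration and, at roughly constant speed, the longitudinal length of the shift section (hypothetical function names; the actual implementation may differ):

#include <cmath>

// Duration T of a four-segment constant-jerk shift: T = cbrt(32 * L / j).
double calcShiftDurationFromJerk(double lateral_shift, double lateral_jerk)
{
  return std::cbrt(32.0 * std::abs(lateral_shift) / lateral_jerk);
}

// Longitudinal length of the shift section at (roughly) constant velocity.
double calcShiftLength(double velocity, double lateral_shift, double lateral_jerk)
{
  return velocity * calcShiftDurationFromJerk(lateral_shift, lateral_jerk);
}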

shift_parking video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-shift-parking","title":"Parameters for shift parking","text":"Name Unit Type Description Default value enable_shift_parking [-] bool flag whether to enable shift parking true shift_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 deceleration_interval [m] double distance of deceleration section 15.0 after_shift_straight_distance [m] double straight line distance after pull over end point 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#geometric-parallel-parking","title":"geometric parallel parking","text":"

Generate two arc paths with discontinuous curvature. The vehicle stops twice in the middle of the path to steer on the spot. There are two path generation methods: forward and backward. See also [1] for details of the algorithm. There is also a simple python implementation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-geometric-parallel-parking","title":"Parameters geometric parallel parking","text":"Name Unit Type Description Default value arc_path_interval [m] double interval between arc path points 1.0 pull_over_max_steer_rad [rad] double maximum steer angle for path generation. it may not be possible to control steer up to max_steer_angle in vehicle_info when stopped 0.35"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#arc-forward-parking","title":"arc forward parking","text":"

Generate two forward arc paths.

arc_forward_parking video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-arc-forward-parking","title":"Parameters arc forward parking","text":"Name Unit Type Description Default value enable_arc_forward_parking [-] bool flag whether to enable arc forward parking true after_forward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 forward_parking_velocity [m/s] double velocity when forward parking 1.38 forward_parking_lane_departure_margin [m/s] double lane departure margin for front left corner of ego-vehicle when forward parking 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#arc-backward-parking","title":"arc backward parking","text":"

Generate two backward arc paths.


arc_backward_parking video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-arc-backward-parking","title":"Parameters arc backward parking","text":"Name Unit Type Description Default value enable_arc_backward_parking [-] bool flag whether to enable arc backward parking true after_backward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 backward_parking_velocity [m/s] double velocity when backward parking -1.38 backward_parking_lane_departure_margin [m/s] double lane departure margin for front right corner of ego-vehicle when backward 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#freespace-parking","title":"freespace parking","text":"

If the vehicle gets stuck with lane_parking, run freespace_parking. To run this feature, you need to add the parking_lot to the map, set activate_by_scenario of costmap_generator to false, and set enable_freespace_parking to true.

Simultaneous execution with avoidance_module in the flowchart is under development.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-freespace-parking","title":"Parameters freespace parking","text":"Name Unit Type Description Default value enable_freespace_parking [-] bool This flag enables freespace parking, which runs when the vehicle is stuck due to e.g. obstacles in the parking area. true

See freespace_planner for other parameters.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#collision-check-for-path-generation","title":"collision check for path generation","text":"

To select a safe one from the path candidates, a collision check with obstacles is performed.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#occupancy-grid-based-collision-check","title":"occupancy grid based collision check","text":"

Generate footprints from ego-vehicle path points and determine obstacle collision from the value of occupancy_grid of the corresponding cell.
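As an illustration, the following minimal C++ sketch (hypothetical names; footprint rasterization omitted) shows the cell-value comparison described above, using the obstacle_threshold parameter (default 60):

#include <cstdint>
#include <vector>

struct Cell { int x; int y; };

// Returns true if any footprint cell holds an occupancy value at or above
// obstacle_threshold. `footprint_cells` is assumed to be the set of grid
// cells covered by the ego footprint at a path pose (illustrative only).
bool hasCollision(
  const std::vector<int8_t> & occupancy_grid, int grid_width,
  const std::vector<Cell> & footprint_cells, int obstacle_threshold /* default 60 */)
{
  for (const auto & c : footprint_cells) {
    const int8_t value = occupancy_grid[c.y * grid_width + c.x];
    if (value >= obstacle_threshold) {
      return true;  // occupied cell => collision
    }
  }
  return false;
}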

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-occupancy-grid-based-collision-check","title":"Parameters for occupancy grid based collision check","text":"Name Unit Type Description Default value use_occupancy_grid_for_goal_search [-] bool flag whether to use occupancy grid for goal search collision check true use_occupancy_grid_for_goal_longitudinal_margin [-] bool flag whether to use occupancy grid for keeping longitudinal margin false use_occupancy_grid_for_path_collision_check [-] bool flag whether to use occupancy grid for collision check false occupancy_grid_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.0 theta_size [-] int size of theta angle to be considered. angular resolution for collision check will be 2\\(\\pi\\) / theta_size [rad]. 360 obstacle_threshold [-] int threshold of cell values to be considered as obstacles 60"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#object-recognition-based-collision-check","title":"object recognition based collision check","text":"

A collision decision is made for each of the path candidates, and a collision-free path is selected. There are three main margins at this point.

Then there is the concept of soft and hard margins. Although not currently parameterized, if a collision-free path can be generated by a margin several times larger than object_recognition_collision_check_margin, then the priority is higher.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-object-recognition-based-collision-check","title":"Parameters for object recognition based collision check","text":"Name Unit Type Description Default value use_object_recognition [-] bool flag whether to use object recognition for collision check true object_recognition_collision_check_soft_margins [m] vector[double] soft margins for collision check when path generation. It is not strictly the distance between footprints, but the maximum distance when ego and objects are oriented. [5.0, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0] object_recognition_collision_check_hard_margins [m] vector[double] hard margins for collision check when path generation [0.6] object_recognition_collision_check_max_extra_stopping_margin [m] double maximum value when adding longitudinal distance margin for collision check considering stopping distance 1.0 detection_bound_offset [m] double expand pull over lane with this offset to make detection area for collision check of path generation 15.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#safety-check","title":"safety check","text":"

Perform safety checks on moving objects. If an object is determined to be dangerous, no path decision is made and no approval is given.

This module has two methods of safety check, RSS and integral_predicted_polygon.

RSS method is a method commonly used by other behavior path planner modules, see RSS based safety check utils explanation.

integral_predicted_polygon is a more safety-oriented method. This method is implemented because speeds during pull over are lower than during driving, and fewer objects travel along the edge of the lane. (It is sometimes too conservative, which may reduce availability.) This method integrates the footprints of the ego and objects over a given time and checks for collisions between them.

In addition, the safety check has a time hysteresis: if the path is judged \"safe\" for a certain period of time (keep_unsafe_time), it is finally treated as \"safe\".

                         ==== is_safe\n                         ---- current_is_safe\n       is_safe\n        ^\n        |\n        |                   time\n      1 +--+    +---+       +---=========   +--+\n        |  |    |   |       |           |   |  |\n        |  |    |   |       |           |   |  |\n        |  |    |   |       |           |   |  |\n        |  |    |   |       |           |   |  |\n      0 =========================-------==========--> t\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-safety-check","title":"Parameters for safety check","text":"Name Unit Type Description Default value enable_safety_check [-] bool flag whether to use safety check true method [-] string method for safety check. RSS or integral_predicted_polygon integral_predicted_polygon keep_unsafe_time [s] double safety check Hysteresis time. if the path is judged \"safe\" for the time it is finally treated as \"safe\". 3.0 check_all_predicted_path - bool Flag to check all predicted paths true publish_debug_marker - bool Flag to publish debug markers false collision_check_yaw_diff_threshold [rad] double Maximum yaw difference between ego and object when executing rss-based collision checking 3.1416"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-rss-safety-check","title":"Parameters for RSS safety check","text":"Name Unit Type Description Default value rear_vehicle_reaction_time [s] double Reaction time for rear vehicles 2.0 rear_vehicle_safety_time_margin [s] double Safety time margin for rear vehicles 1.0 lateral_distance_max_threshold [m] double Maximum lateral distance threshold 2.0 longitudinal_distance_min_threshold [m] double Minimum longitudinal distance threshold 3.0 longitudinal_velocity_delta_time [s] double Delta time for longitudinal velocity 0.8"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-integral_predicted_polygon-safety-check","title":"Parameters for integral_predicted_polygon safety check","text":"Name Unit Type Description Default value forward_margin [m] double forward margin for ego footprint 1.0 backward_margin [m] double backward margin for ego footprint 1.0 lat_margin [m] double lateral margin for ego footprint 1.0 time_horizon [s] double Time width to integrate each footprint 10.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#path-deciding","title":"path deciding","text":"

When the ego is closer than decide_path_distance to the start of the pull over, if the path is collision-free at that time and safe against the predicted paths of the objects, the module transitions to DECIDING. If it remains safe for a certain period of time, the module moves to DECIDED.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#unimplemented-parts-limitations","title":"Unimplemented parts / limitations","text":"

Unimplemented parts / limitations for freespace parking

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/","title":"Lane Change design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-design","title":"Lane Change design","text":"

The Lane Change module is activated when lane change is needed and can be safely executed.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-requirement","title":"Lane Change Requirement","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#generating-lane-change-candidate-path","title":"Generating Lane Change Candidate Path","text":"

The lane change candidate path is divided into two phases: preparation and lane-changing. The following figure illustrates each phase of the lane change candidate path.

The following chart illustrates the process of sampling candidate paths for lane change.

The following chart demonstrates the process of generating a valid candidate path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#preparation-phase","title":"Preparation phase","text":"

The preparation trajectory is the first, straight portion of the candidate path, generated along the ego vehicle's current lane. The length of the preparation trajectory is computed as follows.

lane_change_prepare_distance = current_speed * lane_change_prepare_duration + 0.5 * deceleration * lane_change_prepare_duration^2\n

During the preparation phase, the turn signal will be activated when the remaining distance is equal to or less than lane_change_search_distance.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-changing-phase","title":"Lane-changing phase","text":"

The lane-changing phase consists of the shifted path that moves the ego from the current lane to the target lane. The total distance of the lane-changing phase is as follows. Note that during the lane-changing phase, the ego vehicle travels at a constant speed.

lane_change_prepare_velocity = std::max(current_speed + deceleration * lane_change_prepare_duration, minimum_lane_changing_velocity)\nlane_changing_distance = lane_change_prepare_velocity * lane_changing_duration\n

The backward_length_buffer_for_end_of_lane is added to allow some window for any possible delay, such as control or mechanical delay during brake lag.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#multiple-candidate-path-samples-longitudinal-acceleration","title":"Multiple candidate path samples (longitudinal acceleration)","text":"

Lane change velocity is affected by the ego vehicle's current velocity. High velocity requires longer preparation and lane-changing distances. However, we also need to plan lane-changing trajectories for the case where the ego vehicle slows down. Candidate paths that assume the ego vehicle slows down are computed by substituting predetermined deceleration values into the prepare_length, prepare_velocity, and lane_changing_length equations.

The predetermined longitudinal acceleration values are a set of values that starts from longitudinal_acceleration = maximum_longitudinal_acceleration and decreases by longitudinal_acceleration_resolution until it reaches longitudinal_acceleration = -maximum_longitudinal_deceleration. Both maximum_longitudinal_acceleration and maximum_longitudinal_deceleration are calculated by comparing the values defined in the common.param file (e.g., normal.min_acc) with the lane change module's parameters, as follows:

maximum_longitudinal_acceleration = min(common_param.max_acc, lane_change_param.max_acc)\nmaximum_longitudinal_deceleration = max(common_param.min_acc, lane_change_param.min_acc)\n

where common_param holds the vehicle common parameters, which define the vehicle's common maximum longitudinal acceleration and deceleration, whereas lane_change_param holds the maximum longitudinal acceleration and deceleration for the lane change module. For example, if a user sets common_param.max_acc=1.0 and lane_change_param.max_acc=0.0, maximum_longitudinal_acceleration becomes 0.0, and the lane change does not accelerate in the lane change phase.

The longitudinal_acceleration_resolution is determined by the following:

longitudinal_acceleration_resolution = (maximum_longitudinal_acceleration - minimum_longitudinal_acceleration) / longitudinal_acceleration_sampling_num\n
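Putting the resolution formula together with the sampling rule described above gives the following minimal sketch (hypothetical function name; illustrative only):

#include <vector>

// Samples accelerations from the maximum down to the minimum in
// `sampling_num` steps, i.e. sampling_num + 1 values in total.
std::vector<double> sampleLongitudinalAccelerations(
  double max_lon_acc, double min_lon_acc, int sampling_num /* default 3 */)
{
  const double resolution = (max_lon_acc - min_lon_acc) / sampling_num;
  std::vector<double> sampled;
  for (int k = 0; k <= sampling_num; ++k) {
    sampled.push_back(max_lon_acc - k * resolution);
  }
  return sampled;  // e.g. {1.0, 0.33, -0.33, -1.0} for max=1.0, min=-1.0, num=3
}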

Note that when the current_velocity is lower than minimum_lane_changing_velocity, the vehicle needs to accelerate to minimum_lane_changing_velocity. Therefore, the longitudinal acceleration becomes a positive value (the vehicle accelerates rather than decelerates).

The chart illustrates the conditions under which longitudinal acceleration values are sampled.

The following describes the process by which longitudinal accelerations are sampled.

The following figure illustrates the case where longitudinal_acceleration_sampling_num = 4. Assuming that maximum_deceleration = 1.0, then a0 == 0.0 == no deceleration, a1 == 0.25, a2 == 0.5, a3 == 0.75, and a4 == 1.0 == maximum_deceleration. a0 is the expected lane change trajectory should the ego vehicle not decelerate, and a1's path is the expected lane change trajectory should the ego vehicle decelerate at 0.25 m/s^2.

Which path is chosen depends on the validity and collision checks.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#multiple-candidate-path-samples-lateral-acceleration","title":"Multiple candidate path samples (lateral acceleration)","text":"

In addition to sampling longitudinal acceleration, we also sample lane change paths by adjusting the value of lateral acceleration. Since lateral acceleration influences the duration of a lane change, a lower lateral acceleration value results in a longer lane change path, while a higher lateral acceleration value leads to a shorter lane change path. This allows the lane change module to generate a shorter lane change path by increasing the lateral acceleration when there is limited space for the lane change.

The maximum and minimum lateral accelerations are defined in the lane change parameter file as a map. The range of lateral acceleration is determined for each velocity by linearly interpolating the values in the map. Let's assume we have the following map

Ego Velocity Minimum lateral acceleration Maximum lateral acceleration 0.0 0.2 0.3 2.0 0.2 0.4 4.0 0.3 0.4 6.0 0.3 0.5

In this case, when the current velocity of the ego vehicle is 3.0, the minimum and maximum lateral accelerations are 0.25 and 0.4 respectively. These values are obtained by linearly interpolating the second and third rows of the map, which provide the minimum and maximum lateral acceleration values.
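A minimal sketch of this interpolation (hypothetical function name; each map row is {velocity, min_lat_acc, max_lat_acc}):

#include <array>
#include <cstddef>
#include <utility>
#include <vector>

// Returns {min_lat_acc, max_lat_acc} for the given ego velocity by linear
// interpolation over the map rows (illustrative only).
std::pair<double, double> lookupLateralAccRange(
  const std::vector<std::array<double, 3>> & map, double ego_velocity)
{
  if (ego_velocity <= map.front()[0]) {
    return {map.front()[1], map.front()[2]};
  }
  if (ego_velocity >= map.back()[0]) {
    return {map.back()[1], map.back()[2]};
  }
  for (std::size_t i = 1; i < map.size(); ++i) {
    if (ego_velocity <= map[i][0]) {
      const double r = (ego_velocity - map[i - 1][0]) / (map[i][0] - map[i - 1][0]);
      const double min_acc = map[i - 1][1] + r * (map[i][1] - map[i - 1][1]);
      const double max_acc = map[i - 1][2] + r * (map[i][2] - map[i - 1][2]);
      return {min_acc, max_acc};
    }
  }
  return {map.back()[1], map.back()[2]};
}

With the map above and an ego velocity of 3.0, this returns {0.25, 0.4}, matching the worked example.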

Within this range, we sample the lateral acceleration for the ego vehicle. Similar to the method used for sampling longitudinal acceleration, the resolution of lateral acceleration (lateral_acceleration_resolution) is determined by the following:

lateral_acceleration_resolution = (maximum_lateral_acceleration - minimum_lateral_acceleration) / lateral_acceleration_sampling_num\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#candidate-paths-validity-check","title":"Candidate Path's validity check","text":"

A candidate path is considered valid if it meets the following criteria:

  1. The distance from the ego vehicle's current position to the end of the current lanes is sufficient to perform a single lane change.
  2. The distance from the ego vehicle's current position to the goal along the current lanes is adequate to complete multiple lane changes.
  3. The distance from the ego vehicle's current position to the end of the target lanes is adequate for completing multiple lane changes.
  4. The distance from the ego vehicle's current position to the next regulatory element is adequate to perform a single lane change.
  5. The lane change can be completed after passing a parked vehicle.
  6. The lane change is deemed safe to execute.

The following flow chart illustrates the validity check.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#delay-lane-change-check","title":"Delay Lane Change Check","text":"

In certain situations, when there are stopped vehicles along the target lane ahead of the ego vehicle, it is desirable to perform the lane change maneuver past the stopped vehicles so as to avoid getting stuck. To do so, all static objects ahead of the ego along the target lane are checked in order from closest to furthest; if any object satisfies the following conditions, the lane change is delayed and the candidate path is rejected.

  1. The distance from object to terminal end is sufficient to perform lane change
  2. The distance to object is less than the lane changing length
  3. The distance from object to next object is sufficient to perform lane change

If the parameter check_only_parked_vehicle is set to true, the check will only consider objects which are determined as parked.

The following flow chart illustrates the delay lane change check.

The following figures demonstrate different situations under which the delay lane change will or will not be triggered:

  1. Delay lane change will be triggered as there is sufficient distance ahead.
  2. Delay lane change will NOT be triggered as there is insufficient distance ahead.
  3. Delay lane change will be triggered by the first NPC as there is sufficient distance ahead.
  4. Delay lane change will be triggered by the second NPC as there is sufficient distance ahead.
  5. Delay lane change will NOT be triggered as there is insufficient distance ahead.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#candidate-paths-safety-check","title":"Candidate Path's Safety check","text":"

See safety check utils explanation

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#objects-selection-and-classification","title":"Objects selection and classification","text":"

First, we divide the target objects into obstacles in the target lane, obstacles in the current lane, and obstacles in other lanes. The target lane indicates the lane that the ego vehicle will reach after the lane change, and the current lane means the lane that the ego vehicle follows before the lane change. Other lanes are lanes that belong to neither the target nor the current lane. The following picture describes objects on each lane. Note that users can exclude objects on the current and other lanes from the safety check via the flags check_objects_on_current_lanes and check_objects_on_other_lanes.

Furthermore, to change lanes behind a vehicle waiting at a traffic light, we skip the safety check for stopped vehicles near the traffic light. The explanation of parked car detection is written in the documentation for the avoidance module.

The detection area for the target lane can be expanded beyond its original boundaries to enable detection of objects that are outside the target lane's limits.

Without Lane Expansion With Lane Expansion"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#object-filtering","title":"Object filtering","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#collision-check-in-prepare-phase","title":"Collision check in prepare phase","text":"

The ego vehicle may need to secure ample inter-vehicle distance ahead of the target vehicle before attempting a lane change. The flag enable_collision_check_at_prepare_phase can be enabled to gain this behavior. The following image illustrates the differences between the false and true cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#if-the-lane-is-blocked-and-multiple-lane-changes","title":"If the lane is blocked and multiple lane changes","text":"

When driving on the public road with other vehicles, there exist scenarios where lane changes cannot be executed. Suppose the candidate path is evaluated as unsafe, for example, due to incoming vehicles in the adjacent lane. In that case, the ego vehicle can't change lanes, and it is impossible to reach the goal. Therefore, the ego vehicle must stop earlier at a certain distance and wait for the adjacent lane to be evaluated as safe. The minimum stopping distance can be computed from shift length and minimum lane changing velocity.

lane_changing_time = f(shift_length, lat_acceleration, lat_jerk)\nminimum_lane_change_distance = minimum_prepare_length + minimum_lane_changing_velocity * lane_changing_time + lane_change_finish_judge_buffer\n

The following figure illustrates when the lane is blocked in multiple lane changes cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#stopping-behavior","title":"Stopping behavior","text":"

The stopping behavior of the ego vehicle is determined based on various factors, such as the number of lane changes required, the presence of obstacles, and the position of blocking objects in relation to the lane change plan. The objective is to choose a suitable stopping point that allows for a safe and effective lane change while adapting to different traffic scenarios.

The following flowchart and subsections explain the conditions for deciding where to insert a stop point when an obstacle is ahead.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#ego-vehicles-stopping-position-when-an-object-exists-ahead","title":"Ego vehicle's stopping position when an object exists ahead","text":"

When the ego vehicle encounters an obstacle ahead, it stops while maintaining a safe distance to prepare for a possible lane change. The exact stopping position depends on factors like whether the target lane is clear or if the lane change needs to be delayed. The following explains how different stopping scenarios are handled:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-the-near-the-end-of-the-lane-change","title":"When the near the end of the lane change","text":"

Whether the target lane has obstacles or is clear, the ego vehicle stops while keeping a safe distance from the obstacle ahead, ensuring there is enough room for the lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-the-ego-vehicle-is-not-near-the-end-of-the-lane-change","title":"When the ego vehicle is not near the end of the lane change","text":"

The ego vehicle stops while maintaining a safe distance from the obstacle ahead, ensuring there is enough space for a lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#ego-vehicles-stopping-position-when-an-object-exists-in-the-lane-changing-section","title":"Ego vehicle's stopping position when an object exists in the lane changing section","text":"

If there are objects within the lane change section of the target lane, the ego vehicle stops closer to the obstacle ahead, without maintaining the usual distance for a lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-near-the-end-of-the-lane-change","title":"When near the end of the lane change","text":"

Regardless of whether there are obstacles in the target lane, the ego vehicle stops while keeping a safe distance from the obstacle ahead, allowing for the lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-not-near-the-end-of-the-lane-change","title":"When not near the end of the lane change","text":"

If there are no obstacles in the lane change section of the target lane, the ego vehicle stops while keeping a safe distance from the obstacle ahead to accommodate the lane change.

If there are obstacles within the lane change section of the target lane, the ego vehicle stops closer to the obstacle ahead, without keeping the usual distance needed for a lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-the-target-lane-is-far-away","title":"When the target lane is far away","text":"

If the target lane for the lane change is far away and not next to the current lane, the ego vehicle stops closer to the obstacle ahead, as maintaining the usual distance for a lane change is not necessary.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-when-stuck","title":"Lane Change When Stuck","text":"

The ego vehicle is considered stuck if it is stopped and meets any of the following conditions:

In this case, the safety check for lane change is relaxed compared to normal times. Please refer to the 'stuck' section under 'Collision checks during lane change' for more details. The function described in the previous section, which stops the vehicle while keeping a margin from the forward obstacle, is used to achieve this feature.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-regulations","title":"Lane change regulations","text":"

Lane changes can be regulated on crosswalks, intersections, and traffic lights by setting regulation.crosswalk, regulation.intersection, or regulation.traffic_light to true; the lane change module is then disabled near them. However, if the ego vehicle gets stuck, lane change is enabled in crosswalks and intersections to resolve the stuck state. The ego vehicle is regarded as stuck if it stops for more than stuck_detection.stop_time seconds, and it is regarded as stopping if its velocity is smaller than stuck_detection.velocity.
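The following is a minimal sketch of this stuck detection, assuming hypothetical type and member names around the stuck_detection.stop_time and stuck_detection.velocity parameters:

#include <cmath>

// Illustrative stuck detector; not the module's actual implementation.
struct StuckDetector
{
  double stop_time_threshold;  // stuck_detection.stop_time [s]
  double velocity_threshold;   // stuck_detection.velocity [m/s]
  double stopped_duration{0.0};

  // Call periodically with the elapsed time since the last update.
  bool update(double ego_velocity, double dt)
  {
    if (std::abs(ego_velocity) < velocity_threshold) {
      stopped_duration += dt;  // ego is regarded as stopping
    } else {
      stopped_duration = 0.0;
    }
    return stopped_duration > stop_time_threshold;  // regarded as stuck
  }
};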

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-completion-checks","title":"Lane change completion checks","text":"

To determine if the ego vehicle has successfully changed lanes, one of two criteria must be met: either the longitudinal or the lateral criteria.

For the longitudinal criteria, the ego vehicle must pass the lane-changing end pose and be within the finish_judge_buffer distance from it. The module then checks if the ego vehicle is in the target lane. If true, the module returns success. This check ensures that the planner manager updates the root lanelet correctly based on the ego vehicle's current pose. Without this check, if the ego vehicle is changing lanes while avoiding an obstacle and its current pose is in the original lane, the planner manager might set the root lanelet as the original lane. This would force the ego vehicle to perform the lane change again. With the target lane check, the ego vehicle is confirmed to be in the target lane, and the planner manager can correctly update the root lanelets.

If the longitudinal criteria are not met, the module evaluates the lateral criteria. For the lateral criteria, the ego vehicle must be within finish_judge_lateral_threshold distance from the target lane's centerline, and the angle deviation must be within finish_judge_lateral_angle_deviation degrees. The angle deviation check ensures there is no sudden steering. If the angle deviation is set too high, the ego vehicle's orientation could deviate significantly from the centerline, causing the trajectory follower to aggressively correct the steering to return to the centerline. Keeping the angle deviation value as small as possible avoids this issue.
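The two criteria can be summarized by the following minimal sketch (hypothetical signature; the inputs are assumed to be computed elsewhere in the module):

#include <cmath>

// Returns true when the lane change is judged as completed, per the
// longitudinal-then-lateral criteria described above (illustrative only).
bool isLaneChangeCompleted(
  double signed_dist_past_lc_end,        // > 0 once the end pose is passed [m]
  bool ego_is_in_target_lane,
  double dist_from_target_centerline,    // [m]
  double yaw_deviation_from_centerline,  // [rad]
  double finish_judge_buffer, double finish_judge_lateral_threshold,
  double finish_judge_lateral_angle_deviation)
{
  // Longitudinal criteria: passed the end pose, within the buffer, and
  // confirmed to be inside the target lane.
  if (
    signed_dist_past_lc_end >= 0.0 && signed_dist_past_lc_end <= finish_judge_buffer &&
    ego_is_in_target_lane) {
    return true;
  }
  // Lateral criteria: close to the target centerline with small yaw deviation.
  return std::abs(dist_from_target_centerline) <= finish_judge_lateral_threshold &&
         std::abs(yaw_deviation_from_centerline) <= finish_judge_lateral_angle_deviation;
}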

The process of determining lane change completion is shown in the following diagram.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#terminal-lane-change-path","title":"Terminal Lane Change Path","text":"

Depending on the space configuration around the ego vehicle, it is possible that a valid LC path cannot be generated. If that happens, the ego will get stuck at terminal_start and will not be able to proceed. Therefore, we introduced the terminal LC path feature: when the ego gets near the terminal point (the distance to terminal_start is less than the maximum lane change length), a terminal lane-changing path is computed, starting from the terminal start point on the current lane and connecting to the target lane. The terminal path only needs to be computed once per execution of the LC module. If no valid candidate path is found in the path generation process, the terminal path is used as a fallback candidate path; the safety of the terminal path is not ensured, so it can only be force-approved. The following images illustrate the expected behavior without and with the terminal path feature, respectively:

Additionally, if the terminal path feature is enabled and the path is computed, the stop point placement can be configured to be at the edge of the current lane instead of at the terminal_start position, as indicated by the dashed red line in the image above.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#generating-path-using-frenet-planner","title":"Generating Path Using Frenet Planner","text":"

Warning

Generating the path using the Frenet planner applies only when the ego is near the terminal start.

If the ego vehicle is far from the terminal, the lane change module defaults to using the path shifter. This ensures that the lane change is completed while the target lane remains a neighbor of the current lane. However, this approach may result in high curvature paths near the terminal, potentially causing long vehicles to deviate from the lane.

To address this, the lane change module provides an option to choose between the path shifter and the Frenet planner. The Frenet planner allows for some flexibility in the lane change endpoint, extending the lane changing end point slightly beyond the current lane's neighbors.

The following table provides comparisons between the planners

With Path Shifter With Frenet Planner

Note

The planner can be enabled or disabled using the frenet.enable flag.

Note

Since only a segment of the target lane is used as input to generate the lane change path, the end pose of the lane change segment may not smoothly connect to the target lane centerline. To address this, increase the value of frenet.th_curvature_smoothing to improve the smoothness.

Note

The yaw difference threshold (frenet.th_yaw_diff) limits the maximum curvature difference between the end of the prepare segment and the lane change segment. This threshold might prevent the generation of a lane change path when the lane curvature is high. In such cases, you can increase the frenet.th_yaw_diff value. However, note that if the prepare path was initially shifted by other modules, the resultant steering may not be continuous.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#aborting-a-previously-approved-lane-change","title":"Aborting a Previously Approved Lane Change","text":"

Once the lane change path is approved, there are several situations where we may need to abort the maneuver. The abort process is triggered when any of the following conditions is met

  1. The ego vehicle is near a traffic light, crosswalk, or intersection, and it is possible to complete the lane change after the ego vehicle passes these areas.
  2. The target object list is updated, requiring the lane change to be delayed.
  3. The lane change is forcefully canceled via RTC.
  4. The path has become unsafe.

Furthermore, if the path has become unsafe, there are three possible outcomes for the maneuver:

  1. CANCEL: The approved lane change path is canceled, and the ego vehicle resumes its previous maneuver.
  2. ABORT: The lane change module generates a return path to bring the ego vehicle back to its current lane.
  3. CRUISE or STOP: If aborting is not feasible, the ego vehicle continues with the lane change. Another module should decide whether the ego vehicle should cruise or stop in this scenario.

CANCEL can be enabled by setting the cancel.enable_on_prepare_phase flag to true, and ABORT can be enabled by setting the cancel.enable_on_lane_changing_phase flag to true.

Warning

Enabling CANCEL is a prerequisite for enabling ABORT.

Warning

When CANCEL is disabled, all maneuvers will default to either CRUISE or STOP.

The chart shows the high level flow of the lane change abort process.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#preventing-oscillating-paths-when-unsafe","title":"Preventing Oscillating Paths When Unsafe","text":"

Lane change paths can oscillate when conditions switch between safe and unsafe. To address this, a hysteresis count check is added before executing an abort maneuver. When the path is unsafe, the unsafe_hysteresis_count_ increases. If it exceeds the unsafe_hysteresis_threshold, an abort condition check is triggered. This logic stabilizes the path approval process and prevents abrupt changes caused by temporary unsafe conditions.
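A minimal sketch of this hysteresis logic is shown below (the counter name follows the description above; the surrounding structure is hypothetical):

// Illustrative hysteresis check; not the module's actual code.
struct UnsafeHysteresis
{
  int unsafe_hysteresis_count_{0};
  int unsafe_hysteresis_threshold_;  // configurable threshold

  // Returns true when the abort condition check should be triggered.
  bool update(bool path_is_safe)
  {
    if (path_is_safe) {
      unsafe_hysteresis_count_ = 0;  // reset on any safe evaluation
      return false;
    }
    ++unsafe_hysteresis_count_;
    return unsafe_hysteresis_count_ > unsafe_hysteresis_threshold_;
  }
};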

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#evaluating-ego-vehicles-position-to-prevent-abrupt-maneuvers","title":"Evaluating Ego Vehicle's Position to Prevent Abrupt Maneuvers","text":"

To avoid abrupt maneuvers during CANCEL or ABORT, the lane change module ensures the ego vehicle can safely return to the original lane. This is done through geometric checks that verify whether the ego vehicle remains within the lane boundaries.

The edges of the ego vehicle\u2019s footprint are compared against the boundary of the current lane to determine if they exceed the overhang tolerance, cancel.overhang_tolerance. If the distance from any edge of the footprint to the boundary exceeds this threshold, the vehicle is considered to be diverging.

The footprints checked against the lane boundary include:

  1. Current Footprint: Based on the ego vehicle's current position.
  2. Future Footprint: Based on the ego vehicle's estimated position after traveling a distance, calculated as \\(d_{est} = v_{ego} \\cdot \\Delta_{t}\\), where

    as depicted in the following diagram

Note

The ego vehicle is considered capable of safely returning to the current lane only if BOTH the current and future footprint checks are true.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#checking-approved-path-safety","title":"Checking Approved Path Safety","text":"

The lane change module samples accelerations along the path and recalculates velocity to perform safety checks. The motivation for this feature is explained in the Limitation section.

The computation of sampled accelerations is as follows:

Let

\\[ \\text{resolution} = \\frac{a_{\\text{min}} - a_{\\text{LC}}}{N} \\]

The sequence of sampled accelerations is then given as

\\[ \\text{acc} = a_{\\text{LC}} + k \\cdot \\text{resolution}, \\quad k = [0, N] \\]

where

If none of the sampled accelerations pass the safety check, the lane change path will be canceled, subject to the hysteresis check.
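A direct transcription of the two formulas above into a minimal sketch (hypothetical function name; a_LC is the acceleration used when the path was generated, a_min the minimum allowed acceleration, and N the number of samples):

#include <vector>

// Generates acc_k = a_LC + k * resolution for k = 0..N, where
// resolution = (a_min - a_LC) / N, so the samples span [a_LC, a_min].
std::vector<double> sampleAccelerationsForSafetyCheck(double a_lc, double a_min, int n)
{
  const double resolution = (a_min - a_lc) / n;
  std::vector<double> acc;
  for (int k = 0; k <= n; ++k) {
    acc.push_back(a_lc + k * resolution);
  }
  return acc;
}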

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#cancel","title":"Cancel","text":"

When a lane change is canceled, the approved path is reset. After the reset, the ego vehicle returns to following the original reference path (the last approved path before the lane change started), as illustrated in the following image.

The following parameters can be configured to tune the behavior of the cancel process:

  1. Safety constraints for canceling the lane change.
  2. Safety constraints for parked vehicles.

Note

To ensure feasible behavior, all safety constraint values must be equal to or less than their corresponding parameters in the execution settings.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#abort","title":"Abort","text":"

During the prepare phase, the ego vehicle follows the previously approved path. However, once the ego vehicle begins the lane change, its heading starts to diverge from this path. Resetting to the previously approved path in this situation would cause abrupt steering, as the controller would attempt to rapidly realign the vehicle with the reference trajectory.

Instead, the lane change module generates an abort path. This return path is specifically designed to guide the ego vehicle back to the current lane, avoiding any sudden maneuvers. The following image provides an illustration of the abort process.

The abort path is generated by shifting the approved lane change path using the path shifter. This ensures continuity in lateral velocity and prevents abrupt changes in the vehicle's movement. The abort start shift and abort end shift are computed as follows:

  1. Start Shift: \\(d_{start}^{abort} = v_{ego} \\cdot \\Delta_{t}\\)
  2. End Shift: \\(d_{end}^{abort} = v_{ego} \\cdot ( \\Delta_{t} + t_{end} )\\)

as depicted in the following diagram
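In code form, the two shift distances reduce to the small sketch below, assuming \(\Delta_{t}\) corresponds to cancel.delta_time and \(t_{end}\) to cancel.duration (the struct and function names are illustrative):

```cpp
// Sketch of the abort shift computation; variable names follow the formulas above.
struct AbortShift { double start; double end; };

AbortShift compute_abort_shift(double v_ego, double delta_t, double t_end)
{
  AbortShift shift;
  shift.start = v_ego * delta_t;          // d_start^abort
  shift.end = v_ego * (delta_t + t_end);  // d_end^abort
  return shift;
}
// With the defaults delta_t = 3.0 s and t_end = 3.0 s, an ego vehicle at
// 5 m/s starts the return shift 15 m ahead and completes it 30 m ahead.
```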

Note

When executing the abort process, comfort is not a primary concern. However, due to safety considerations, only limited real-world testing has been conducted to tune or validate the lateral jerk limit, so the maximum lateral jerk is currently set to an arbitrary value. To avoid generating a path with excessive lateral jerk, this value can be configured via cancel.max_lateral_jerk.

Note

The lane change module returns ModuleStatus::FAILURE once the abort is completed.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#stopcruise","title":"Stop/Cruise","text":"

Once canceling or aborting the lane change is no longer an option, the ego vehicle will proceed with the lane change. This can happen in the following situations:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#parameters","title":"Parameters","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#essential-lane-change-parameters","title":"Essential lane change parameters","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value time_limit [ms] double Time limit for lane change candidate path generation 50.0 backward_lane_length [m] double The backward length to check for incoming objects in the lane change target lane. 200.0 backward_length_buffer_for_end_of_lane [m] double The end-of-lane buffer to ensure the ego vehicle has enough distance to start the lane change 3.0 backward_length_buffer_for_blocking_object [m] double The end-of-lane buffer to ensure the ego vehicle has enough distance to start the lane change when there is an object in front 3.0 backward_length_from_intersection [m] double Distance threshold from the last intersection to invalidate or cancel the lane change path 5.0 enable_stopped_vehicle_buffer [-] bool If true, keeps enough distance from a stopped object ahead in the current lane to perform the lane change when possible true trajectory.max_prepare_duration [s] double The maximum preparation time for the ego vehicle to be ready to perform the lane change. 4.0 trajectory.min_prepare_duration [s] double The minimum preparation time for the ego vehicle to be ready to perform the lane change. 2.0 trajectory.lateral_jerk [m/s3] double Lateral jerk value for lane change path generation 0.5 trajectory.minimum_lane_changing_velocity [m/s] double Minimum speed during the lane changing process. 2.78 trajectory.lon_acc_sampling_num [-] int Number of possible lane-changing trajectories influenced by longitudinal acceleration 3 trajectory.lat_acc_sampling_num [-] int Number of possible lane-changing trajectories influenced by lateral acceleration 3 trajectory.max_longitudinal_acc [m/s2] double Maximum longitudinal acceleration for lane change 1.0 trajectory.min_longitudinal_acc [m/s2] double Maximum longitudinal deceleration for lane change -1.0 trajectory.lane_changing_decel_factor [-] double Longitudinal deceleration factor during the lane changing phase 0.5 trajectory.th_prepare_curvature [-] double If the maximum curvature of the prepare segment exceeds the threshold, the prepare segment is invalid. 0.03 min_length_for_turn_signal_activation [m] double The turn signal will be activated if the ego vehicle comes within this distance of the minimum lane change length 10.0 lateral_acceleration.velocity [m/s] double Reference velocity for lateral acceleration calculation (lookup table) [0.0, 4.0, 10.0] lateral_acceleration.min_values [m/s2] double Min lateral acceleration values corresponding to velocity (lookup table) [0.4, 0.4, 0.4] lateral_acceleration.max_values [m/s2] double Max lateral acceleration values corresponding to velocity (lookup table) [0.65, 0.65, 0.65]"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#parameter-to-judge-if-lane-change-is-completed","title":"Parameter to judge if lane change is completed","text":"

The following parameters are used to judge lane change completion.

Name Unit Type Description Default value lane_change_finish_judge_buffer [m] double The longitudinal distance starting from the lane change end pose. 2.0 finish_judge_lateral_threshold [m] double The lateral distance from the target lanes' centerline. Used in conjunction with finish_judge_lateral_angle_deviation 0.1 finish_judge_lateral_angle_deviation [deg] double Ego angle deviation with reference to the target lanes' centerline. Used in conjunction with finish_judge_lateral_threshold 2.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-regulations_1","title":"Lane change regulations","text":"Name Unit Type Description Default value regulation.crosswalk [-] boolean Allow lane change between crosswalks true regulation.intersection [-] boolean Allow lane change between intersections true regulation.traffic_light [-] boolean Allow lane change to be performed between traffic lights true"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#ego-vehicle-stuck-detection","title":"Ego vehicle stuck detection","text":"Name Unit Type Description Default value stuck_detection.velocity [m/s] double Velocity threshold for ego vehicle stuck detection 0.1 stuck_detection.stop_time [s] double Stop time threshold for ego vehicle stuck detection 3.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#delay-lane-change","title":"Delay Lane Change","text":"Name Unit Type Description Default value delay_lane_change.enable [-] bool Flag to enable/disable the lane change delay feature true delay_lane_change.check_only_parked_vehicle [-] bool Flag to limit the delay feature to only parked vehicles false delay_lane_change.min_road_shoulder_width [m] double Width considered as a road shoulder if the lane doesn't have a road shoulder when checking for parked vehicles 0.5 delay_lane_change.th_parked_vehicle_shift_ratio [-] double Stopped vehicles beyond this distance ratio from the centerline will be considered parked 0.6"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#terminal-lane-change-path_1","title":"Terminal Lane Change Path","text":"

The following parameters are used to configure the terminal lane change path feature.

Name Unit Type Description Default value terminal_path.enable [-] bool Flag to enable/disable the terminal path feature true terminal_path.disable_near_goal [-] bool Flag to disable the terminal path feature if the ego vehicle is near the goal true terminal_path.stop_at_boundary [-] bool If true, the ego vehicle will stop at the current lane boundary instead of the middle of the lane false"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#generating-lane-changing-path-using-frenet-planner","title":"Generating Lane Changing Path using Frenet Planner","text":"

Warning

Only applicable when the ego vehicle is near the terminal start

Name Unit Type Description Default value frenet.enable [-] bool Flag to enable/disable the frenet planner when the ego vehicle is near the terminal start. true frenet.th_yaw_diff [deg] double If the yaw difference between the prepare segment's end and the lane changing segment's start exceeds the threshold, the lane changing segment is invalid. 10.0 frenet.th_curvature_smoothing [-] double Filters and appends target path points with curvature below the threshold to the candidate path. 0.1"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#collision-checks","title":"Collision checks","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#target-objects","title":"Target Objects","text":"Name Unit Type Description Default value target_object.car [-] boolean Include car objects for safety check true target_object.truck [-] boolean Include truck objects for safety check true target_object.bus [-] boolean Include bus objects for safety check true target_object.trailer [-] boolean Include trailer objects for safety check true target_object.unknown [-] boolean Include unknown objects for safety check true target_object.bicycle [-] boolean Include bicycle objects for safety check true target_object.motorcycle [-] boolean Include motorcycle objects for safety check true target_object.pedestrian [-] boolean Include pedestrian objects for safety check true"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#common","title":"common","text":"Name Unit Type Description Default value safety_check.lane_expansion.left_offset [m] double Expand the left boundary of the detection area, allowing objects previously outside on the left to be detected and registered as targets. 0.0 safety_check.lane_expansion.right_offset [m] double Expand the right boundary of the detection area, allowing objects previously outside on the right to be detected and registered as targets. 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#additional-parameters","title":"Additional parameters","text":"Name Unit Type Description Default value collision_check.enable_for_prepare_phase.general_lanes [-] boolean Perform the collision check starting from the prepare phase for situations not explicitly covered by other settings (e.g., intersections). If false, the collision check is only evaluated for the lane changing phase. false collision_check.enable_for_prepare_phase.intersection [-] boolean Perform the collision check starting from the prepare phase when the ego vehicle is in an intersection. If false, the collision check is only evaluated for the lane changing phase. true collision_check.enable_for_prepare_phase.turns [-] boolean Perform the collision check starting from the prepare phase when the ego vehicle is in a lanelet with turn direction tags. If false, the collision check is only evaluated for the lane changing phase. true collision_check.check_current_lanes [-] boolean If true, the lane change module always checks objects in the current lanes for collision assessment. If false, it only checks objects in the current lanes when the ego vehicle is stuck. false collision_check.check_other_lanes [-] boolean If true, the lane change module includes objects in other lanes when performing collision assessment. false collision_check.use_all_predicted_paths [-] boolean If false, use only the predicted path that has the maximum confidence. true collision_check.prediction_time_resolution [s] double Time resolution for the object's path interpolation and collision check. 0.5 collision_check.yaw_diff_threshold [rad] double Maximum yaw difference between the predicted ego pose and the predicted object pose when executing RSS-based collision checking 3.1416 collision_check.th_incoming_object_yaw [rad] double Maximum yaw difference between the current ego pose and the current object pose. Objects with a yaw difference exceeding this value are excluded from the safety check. 2.3562"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-during-lane-change-path-is-computed","title":"safety constraints during lane change path is computed","text":"Name Unit Type Description Default value safety_check.execution.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.execution.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 safety_check.execution.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the front vehicle's sudden braking until the driver steps on the brake. 2.0 safety_check.execution.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 1.0 safety_check.execution.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 2.0 safety_check.execution.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 3.0 safety_check.execution.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.8 safety_check.execution.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-specifically-for-stopped-or-parked-vehicles","title":"safety constraints specifically for stopped or parked vehicles","text":"Name Unit Type Description Default value safety_check.parked.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.parked.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -2.0 safety_check.parked.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the front vehicle's sudden braking until the driver steps on the brake. 1.0 safety_check.parked.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 0.8 safety_check.parked.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 1.0 safety_check.parked.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 3.0 safety_check.parked.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.8 safety_check.parked.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-to-cancel-lane-change-path","title":"safety constraints to cancel lane change path","text":"Name Unit Type Description Default value safety_check.cancel.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.cancel.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -2.0 safety_check.cancel.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the front vehicle's sudden braking until the driver steps on the brake. 1.5 safety_check.cancel.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 0.8 safety_check.cancel.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 1.0 safety_check.cancel.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 2.5 safety_check.cancel.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.6 safety_check.cancel.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-used-during-lane-change-path-is-computed-when-ego-is-stuck","title":"safety constraints used during lane change path is computed when ego is stuck","text":"Name Unit Type Description Default value safety_check.stuck.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.stuck.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 safety_check.stuck.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the front vehicle's sudden braking until the driver steps on the brake. 2.0 safety_check.stuck.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 1.0 safety_check.stuck.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 2.0 safety_check.stuck.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 3.0 safety_check.stuck.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.8 safety_check.stuck.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle

(*1) The value must be negative.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#abort-lane-change","title":"Abort lane change","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value cancel.enable_on_prepare_phase [-] boolean Enable cancel lane change. true cancel.enable_on_lane_changing_phase [-] boolean Enable abort lane change. false cancel.delta_time [s] double The time taken to start steering to return to the center line. 3.0 cancel.duration [s] double The time taken to complete returning to the center line. 3.0 cancel.max_lateral_jerk [m/s3] double The maximum lateral jerk for the abort path 1000.0 cancel.overhang_tolerance [m] double Lane change cancel is prohibited if the vehicle head exceeds the lane boundary by more than this tolerance distance 0.0 cancel.unsafe_hysteresis_threshold [-] int Threshold that helps prevent frequent switching between safe and unsafe decisions 10 cancel.deceleration_sampling_num [-] int Number of deceleration patterns sampled when checking whether the lane change can safely be canceled 5"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#debug","title":"Debug","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value publish_debug_marker [-] boolean Flag to publish debug marker false"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#debug-marker-visualization","title":"Debug Marker & Visualization","text":"

To enable the debug marker, execute (no restart is needed)

ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner lane_change.publish_debug_marker true\n

or simply set publish_debug_marker to true in lane_change.param.yaml for a permanent effect (restart is needed).

Then add the marker

/planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/lane_change_left\n

in rviz2.

Available information

  1. Ego-to-object relations, plus safety check information.
  2. Interpolated ego vehicle poses up to the latest safety check position.
  3. Whether an object is safe or not, shown by the color of the polygon (green = safe, red = unsafe).
  4. Valid candidate paths.
  5. Positions where the lane change starts and ends.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#limitation","title":"Limitation","text":"
  1. When a lane change is canceled, the lane change module returns ModuleStatus::FAILURE. As the module is removed from the approved module stack (see Failure modules), a new instance of the lane change module is initiated. Due to this, any information stored prior to the reset is lost. For example, the lane_change_prepare_duration in the TransientData is reset to its maximum value.
  2. The lane change module has no knowledge of any velocity modifications introduced to the path after it is approved. This is because other modules may add deceleration points after subscribing to the behavior path planner output, and the final velocity is managed by the velocity smoother. Since this limitation affects CANCEL, the lane change module mitigates it by sampling accelerations along the approved lane change path. These sampled accelerations are used during safety checks to estimate the velocity that might occur if the ego vehicle decelerates.
  3. Ideally, the abort path should account for whether its execution would affect trailing vehicles in the current lane. However, the lane change module does not evaluate such interactions or assess whether the abort path is safe. As a result, the abort path is not guaranteed to be safe. To minimize the risk of unsafe situations, the abort maneuver is only permitted if the ego vehicle has not yet diverged from the current lane.
  4. Due to limited resources, the abort path logic is not fully optimized. The generated path may overshoot, causing the return trajectory to slightly shift toward the opposite lane. This can be dangerous, especially if the opposite lane has traffic moving in the opposite direction. Furthermore, the logic does not account for different vehicle types, which can lead to varying effects. For instance, the behavior might differ significantly between a bus and a small passenger car.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/","title":"Behavior Path Planner","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#behavior-path-planner","title":"Behavior Path Planner","text":"

The Behavior Path Planner's main objective is to significantly enhance the safety of autonomous vehicles by minimizing the risk of accidents. It improves driving efficiency through time conservation and underpins reliability with its rule-based approach. Additionally, it allows users to integrate their own custom behavior modules or use it with different types of vehicles, such as cars, buses, and delivery robots, as well as in various environments, from busy urban streets to open highways.

The module begins by thoroughly analyzing the ego vehicle's current situation, including its position, speed, and surrounding environment. This analysis leads to essential driving decisions about lane changes or stopping and subsequently generates a path that is both safe and efficient. It considers road geometry, traffic rules, and dynamic conditions while also incorporating obstacle avoidance to respond to static and dynamic obstacles such as other vehicles, pedestrians, or unexpected roadblocks, ensuring safe navigation.

Moreover, the planner responds to the behavior of other traffic participants, predicting their actions and accordingly adjusting the vehicle's path. This ensures not only the safety of the autonomous vehicle but also contributes to smooth traffic flow. Its adherence to traffic laws, including speed limits and compliance with traffic signals, further guarantees lawful and predictable driving behavior. The planner is also designed to minimize sudden or abrupt maneuvers, aiming for a comfortable and natural driving experience.

Note

The Planning Component Design documentation outlines the foundational philosophy guiding the design and future development of the Behavior Path Planner module. We strongly encourage readers to consult this document to understand the rationale behind its current configuration and the direction of its ongoing development.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#purpose-use-cases","title":"Purpose / Use Cases","text":"

Essentially, the module has three primary responsibilities:

  1. Creating a path based on the traffic situation.
  2. Generating drivable area, i.e. the area within which the vehicle can maneuver.
  3. Generating turn signal commands to be relayed to the vehicle interface.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#features","title":"Features","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#supported-scene-modules","title":"Supported Scene Modules","text":"

Behavior Path Planner has the following scene modules

Name Description Details Lane Following This module generates a reference path from the lanelet centerline. LINK Static Obstacle Avoidance This module generates an avoidance path when there are objects that should be avoided. LINK Dynamic Obstacle Avoidance WIP LINK Avoidance By Lane Change This module generates a lane change path when there are objects that should be avoided. LINK Lane Change This module is executed when a lane change is necessary and the collision check with other vehicles is cleared. LINK External Lane Change WIP LINK Goal Planner This module is performed when the ego vehicle is in a driving lane and the goal is in the shoulder lane. The ego vehicle will stop at the goal. LINK Start Planner This module is performed when the ego vehicle is stationary and the footprint of the ego vehicle is included in the shoulder lane. This module ends when the ego vehicle merges into the road. LINK Side Shift This module shifts the path to the left or right based on external instructions, intended for remote control applications. LINK

Note

Click on the following images to view videos of their execution

Note

Users can refer to Planning component design for some additional behavior.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#how-to-add-or-implement-new-module","title":"How to add or implement new module","text":"

All scene modules are implemented by inheriting from the base class defined in scene_module_interface.hpp, as sketched below.
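For orientation only, a module skeleton might look like the following. The include path, method names, and signatures are assumptions based on typical revisions of the interface; always check scene_module_interface.hpp in your checkout, since the virtual interface evolves between releases.

```cpp
// Skeleton only: verify every name below against scene_module_interface.hpp.
#include "behavior_path_planner/scene_module/scene_module_interface.hpp"  // assumed include path

class MyCustomModule : public behavior_path_planner::SceneModuleInterface
{
public:
  using SceneModuleInterface::SceneModuleInterface;

  // Whether this module wants to modify the path in the current situation.
  bool isExecutionRequested() const override;

  // Whether the required data has arrived and the module can run.
  bool isExecutionReady() const override;

  // Path planning once the module has been approved.
  BehaviorModuleOutput plan() override;

  // Candidate output while the module is waiting for RTC approval.
  BehaviorModuleOutput planWaitingApproval() override;
};
```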

Warning

The remainder of this subsection is a work in progress (WIP).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#planner-manager","title":"Planner Manager","text":"

The Planner Manager's responsibilities include:

  1. Activating the relevant scene module in response to the specific situation faced by the autonomous vehicle. For example, when a parked vehicle blocks the ego vehicle's driving lane, the manager would engage the avoidance module.
  2. Managing the execution order when multiple modules are running simultaneously. For instance, if both the lane-changing and avoidance modules are operational, the manager decides which should take precedence.
  3. Merging paths from multiple modules when they are activated simultaneously and each generates its own path, thereby creating a single functional path.

Note

To check the scene module's transition \u2013 i.e., registered, approved and candidate modules \u2013 set verbose: true in the Behavior Path Planner configuration file.

Note

For more in-depth information, refer to the Manager design document.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#input","title":"Input","text":"Name Required? Type Description ~/input/odometry \u25cb nav_msgs::msg::Odometry For ego velocity ~/input/accel \u25cb geometry_msgs::msg::AccelWithCovarianceStamped For ego acceleration ~/input/objects \u25cb autoware_perception_msgs::msg::PredictedObjects Dynamic objects from the perception module ~/input/occupancy_grid_map \u25cb nav_msgs::msg::OccupancyGrid Occupancy grid map from the perception module. This is used for only the Goal Planner module ~/input/traffic_signals \u25cb autoware_perception_msgs::msg::TrafficLightGroupArray Traffic signal information from the perception module ~/input/vector_map \u25cb autoware_map_msgs::msg::LaneletMapBin Vector map information ~/input/route \u25cb autoware_planning_msgs::msg::LaneletRoute Current route from start to goal ~/input/scenario \u25cb tier4_planning_msgs::msg::Scenario Launches Behavior Path Planner if current scenario == Scenario:LaneDriving ~/input/lateral_offset \u25b3 tier4_planning_msgs::msg::LateralOffset Lateral offset to trigger side shift ~/system/operation_mode/state \u25cb autoware_adapi_v1_msgs::msg::OperationModeState Allows the planning module to know if vehicle is in autonomous mode or if it can be controlledref "},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#output","title":"Output","text":"Name Type Description QoS Durability ~/output/path tier4_planning_msgs::msg::PathWithLaneId The path generated by modules volatile ~/output/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand Turn indicators command volatile ~/output/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand Hazard lights command volatile ~/output/modified_goal autoware_planning_msgs::msg::PoseWithUuidStamped Output modified goal commands transient_local ~/output/reroute_availability tier4_planning_msgs::msg::RerouteAvailability The path the module is about to take. To be executed as soon as external approval is obtained volatile"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#debug","title":"Debug","text":"Name Type Description QoS Durability ~/debug/avoidance_debug_message_array tier4_planning_msgs::msg::AvoidanceDebugMsgArray Debug message for avoidance. Notifies users of reasons avoidance path cannot be generated volatile ~/debug/lane_change_debug_message_array tier4_planning_msgs::msg::LaneChangeDebugMsgArray Debug message for lane change. Notifies users of unsafe conditions during lane-changing process volatile ~/debug/maximum_drivable_area visualization_msgs::msg::MarkerArray Shows maximum static drivable area volatile ~/debug/turn_signal_info visualization_msgs::msg::MarkerArray TBA volatile ~/debug/bound visualization_msgs::msg::MarkerArray Debug for static drivable area volatile ~/planning/path_candidate/* autoware_planning_msgs::msg::Path The path before approval volatile ~/planning/path_reference/* autoware_planning_msgs::msg::Path Reference path generated by each module volatile

Note

For specific information about which topics are being subscribed to and published, refer to behavior_path_planner.xml.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#how-to-enable-or-disable-modules","title":"How to Enable or Disable Modules","text":"

Enabling and disabling the modules in the Behavior Path Planner is primarily managed through two key files: default_preset.yaml and behavior_path_planner.launch.xml.

The default_preset.yaml file acts as a configuration file for enabling or disabling specific modules within the planner. It contains a series of arguments that represent the Behavior Path Planner's modules or features. For example:
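A representative entry might look like the following sketch. The exact file structure should be checked against the actual default_preset.yaml; only the argument name launch_static_obstacle_avoidance_module is taken from the launch-file example below.

```yaml
# Sketch of a default_preset.yaml excerpt; structure is illustrative.
launch:
  - arg:
      name: launch_static_obstacle_avoidance_module
      default: "true" # set to "false" to disable the module
```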

Note

Click here to view the default_preset.yaml.

The behavior_path_planner.launch.xml file references the settings defined in default_preset.yaml to apply the configurations when the Behavior Path Planner's node is running. For instance, the parameter static_obstacle_avoidance.enable_module in the following segment corresponds to launch_static_obstacle_avoidance_module from default_preset.yaml:

<param name=\"static_obstacle_avoidance.enable_module\" value=\"$(var launch_static_obstacle_avoidance_module)\"/>\n

Therefore, to enable or disable a module, simply set the corresponding argument in default_preset.yaml to true or false. These changes will be applied upon the next launch of Autoware.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#generating-path","title":"Generating Path","text":"

A sophisticated methodology is used for path generation, particularly focusing on maneuvers like lane changes and avoidance. At the core of this design is the smooth lateral shifting of the reference path, achieved through a constant-jerk profile. This approach ensures a consistent rate of change in acceleration, facilitating smooth transitions and minimizing abrupt changes in lateral dynamics, crucial for passenger comfort and safety.

The design involves complex mathematical formulations for calculating the lateral shift of the vehicle's path over time. These calculations include determining lateral displacement, velocity, and acceleration, while considering the vehicle's lateral acceleration and velocity limits. This is essential for ensuring that the vehicle's movements remain safe and manageable.
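As a hedged illustration of what these formulations look like, consider the initial segment of a shift performed at a constant lateral jerk \(j\) (cf. trajectory.lateral_jerk). The standard kinematic relations give the lateral acceleration, velocity, and displacement as

\[ a(t) = j t, \quad v(t) = \frac{1}{2} j t^{2}, \quad l(t) = \frac{1}{6} j t^{3} \]

Subsequent segments flip the sign of the jerk so that the lateral velocity returns to zero once the target shift length is reached; the full piecewise derivation, including the acceleration and velocity limits, is in the Path Generation Design document referenced below.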

The ShiftLine struct (as seen here) is utilized to represent points along the path where the lateral shift starts and ends. It includes details like the start and end points in absolute coordinates, the relative shift lengths at these points compared to the reference path, and the associated indexes on the reference path. This struct is integral to managing the path shifts, as it allows the path planner to dynamically adjust the trajectory based on the vehicle's current position and planned maneuver.

Furthermore, the design and its implementation incorporate various equations and mathematical models to calculate essential parameters for the path shift. These include the total distance of the lateral shift, the maximum allowable lateral acceleration and jerk, and the total time required for the shift. Practical considerations are also noted, such as simplifying assumptions in the absence of a specific time interval for most lane change and avoidance cases.

The shifted path generation logic enables the Behavior Path Planner to dynamically generate safe and efficient paths, precisely controlling the vehicle\u2019s lateral movements to ensure the smooth execution of lane changes and avoidance maneuvers. This careful planning and execution adhere to the vehicle's dynamic capabilities and safety constraints, maximizing efficiency and safety in autonomous vehicle navigation.

Note

If you're a math lover, refer to Path Generation Design for the nitty-gritty.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#collision-assessment-safety-check","title":"Collision Assessment / Safety Check","text":"

The purpose of the collision assessment function in the Behavior Path Planner is to evaluate the potential for collisions with target objects across all modules. It is utilized in two scenarios:

  1. During candidate path generation, to ensure that the generated candidate path is collision-free.
  2. When the path is approved by the manager, and the ego vehicle is executing the current module. If the current situation is deemed unsafe, depending on each module's requirements, the planner will either cancel the execution or opt to execute another module.

The safety check process involves several steps. Initially, it obtains the pose of the target object at a specific time, typically through interpolation of the predicted path. It then checks for any overlap between the ego vehicle and the target object at this time. If an overlap is detected, the path is deemed unsafe. The function also identifies which vehicle is in front by using the arc length along the given path. The function operates under the assumption that accurate data on the position, velocity, and shape of both the ego vehicle (the autonomous vehicle) and any target objects are available. It also relies on the yaw angle of each point in the predicted paths of these objects, which is expected to point towards the next path point.

A critical part of the safety check is the RSS (Responsibility-Sensitive Safety) inspired distance calculation. This algorithm considers factors such as reaction time, safety time margin, and the velocities and decelerations of both vehicles. Extended object polygons are created for both the ego and target vehicles. Notably, the rear object's polygon is extended by the RSS distance longitudinally and by a lateral margin. The function finally checks for overlap between this extended rear object polygon and the front object polygon. Any overlap indicates a potentially unsafe situation.
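A minimal sketch of such an RSS-inspired distance is given below. The function name and exact expression are assumptions for illustration; the parameter roles mirror the safety_check.* settings listed earlier (reaction time, safety time margin, expected decelerations).

```cpp
#include <algorithm>
#include <cmath>

// RSS-inspired longitudinal distance: how far behind the front vehicle the
// rear vehicle must stay so that it can still stop if the front vehicle brakes hard.
double rss_distance(
  double v_front, double v_rear,       // current velocities [m/s]
  double a_front, double a_rear,       // expected decelerations, negative [m/s^2]
  double t_reaction, double t_margin)  // rear_vehicle_reaction_time / _safety_time_margin [s]
{
  // Distance the rear vehicle covers before and while braking, minus the
  // distance the front vehicle still covers during its own hard stop.
  const double d_rear = v_rear * (t_reaction + t_margin) +
                        (v_rear * v_rear) / (2.0 * std::abs(a_rear));
  const double d_front = (v_front * v_front) / (2.0 * std::abs(a_front));
  return std::max(0.0, d_rear - d_front);
}
// Example with the execution defaults (a = -1.0 m/s^2, t_reaction = 2.0 s,
// t_margin = 1.0 s) and both vehicles at 10 m/s: the distance is 30 m.
```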

However, the module does have a limitation concerning the yaw angle of each point in the predicted paths of target objects, which may not always accurately point to the next point, leading to potential inaccuracies in some edge cases.

Note

For further reading on the collision assessment method, please refer to Safety check utils

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#generating-drivable-area","title":"Generating Drivable Area","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#static-drivable-area-logic","title":"Static Drivable Area logic","text":"

The drivable area is used to determine the area in which the ego vehicle can travel. The primary goal of static drivable area expansion is to ensure safe travel by generating an area that encompasses only the necessary spaces for the vehicle's current behavior, while excluding non-essential areas. For example, while the avoidance module is running, the drivable area includes the additional space needed for maneuvers around obstacles, and it limits the behavior by not extending the avoidance path outside of lanelet areas.

Static drivable area expansion operates under assumptions about the correct arrangement of lanes and the coverage of both the front and rear of the vehicle within the left and right boundaries. Key parameters for drivable area generation include extra footprint offsets for the ego vehicle, the handling of dynamic objects, maximum expansion distance, and specific methods for expansion. Additionally, since each module generates its own drivable area, before passing it as the input to generate the next running module's drivable area, or before generating a unified drivable area, the system sorts drivable lanes based on the vehicle's passage order. This ensures the correct definition of the lanes used in drivable area generation.

Note

Further details can be found in Drivable Area Design.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#dynamic-drivable-area-logic","title":"Dynamic Drivable Area Logic","text":"

Large vehicles require much more space, which sometimes causes them to veer out of their current lane. A typical example being a bus making a turn at a corner. In such cases, relying on a static drivable area is insufficient, since the static method depends on lane information provided by high-definition maps. To overcome the limitations of the static approach, the dynamic drivable area expansion algorithm adjusts the navigable space for an autonomous vehicle in real-time. It conserves computational power by reusing previously calculated path data, updating only when there is a significant change in the vehicle's position. The system evaluates the minimum lane width necessary to accommodate the vehicle's turning radius and other dynamic factors. It then calculates the optimal expansion of the drivable area's boundaries to ensure there is adequate space for safe maneuvering, taking into account the vehicle's path curvature. The rate at which these boundaries can expand or contract is moderated to maintain stability in the vehicle's navigation. The algorithm aims to maximize the drivable space while avoiding fixed obstacles and adhering to legal driving limits. Finally, it applies these boundary adjustments and smooths out the path curvature calculations to ensure a safe and legally compliant navigable path is maintained throughout the vehicle's operation.

Note

The feature can be enabled in the drivable_area_expansion.param.yaml.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#generating-turn-signal","title":"Generating Turn Signal","text":"

The Behavior Path Planner module uses autoware_vehicle_msgs::msg::TurnIndicatorsCommand to output turn signal commands (see TurnIndicatorsCommand.idl). The system evaluates the driving context and determines when to activate turn signals based on its maneuver planning, such as turning, lane changing, or obstacle avoidance.

Within this framework, the system differentiates between desired and required blinker activations. Desired activations are those recommended by traffic laws for typical driving scenarios, such as signaling before a lane change or turn. Required activations are those that are deemed mandatory for safety reasons, like signaling an abrupt lane change to avoid an obstacle.

The TurnIndicatorsCommand message structure has a command field that can take one of several constants: NO_COMMAND indicates that no signal is necessary, DISABLE deactivates the signals, ENABLE_LEFT signals a left turn, and ENABLE_RIGHT signals a right turn. The Behavior Path Planner sends these commands at the appropriate times, based on its rule-based system that considers both the desired and required scenarios for blinker activation.
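As a small usage sketch (not the planner's internal code), a node could assemble a left-turn command like this; the fields and constants follow TurnIndicatorsCommand.idl, while the function name is illustrative.

```cpp
#include <rclcpp/rclcpp.hpp>
#include <autoware_vehicle_msgs/msg/turn_indicators_command.hpp>

using autoware_vehicle_msgs::msg::TurnIndicatorsCommand;

// Build a left-turn signal command; publishing and timing are up to the caller.
TurnIndicatorsCommand make_left_signal(const rclcpp::Time & now)
{
  TurnIndicatorsCommand cmd;
  cmd.stamp = now;  // time the command was issued
  // command is one of NO_COMMAND / DISABLE / ENABLE_LEFT / ENABLE_RIGHT
  cmd.command = TurnIndicatorsCommand::ENABLE_LEFT;
  return cmd;
}
```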

Note

For more in-depth information, refer to Turn Signal Design document.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#rerouting","title":"Rerouting","text":"

Warning

The rerouting feature is under development. Further information will be included at a later date.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#parameters-and-configuration","title":"Parameters and Configuration","text":"

The configuration files are organized in a hierarchical directory structure for ease of navigation and management. Each subdirectory contains specific configuration files relevant to its module. The root directory holds general configuration files that apply to the overall behavior of the planner. The following is an overview of the directory structure with the respective configuration files.

behavior_path_planner\n\u251c\u2500\u2500 behavior_path_planner.param.yaml\n\u251c\u2500\u2500 drivable_area_expansion.param.yaml\n\u251c\u2500\u2500 scene_module_manager.param.yaml\n\u251c\u2500\u2500 static_obstacle_avoidance\n\u2502   \u2514\u2500\u2500 static_obstacle_avoidance.param.yaml\n\u251c\u2500\u2500 avoidance_by_lc\n\u2502   \u2514\u2500\u2500 avoidance_by_lc.param.yaml\n\u251c\u2500\u2500 dynamic_obstacle_avoidance\n\u2502   \u2514\u2500\u2500 dynamic_obstacle_avoidance.param.yaml\n\u251c\u2500\u2500 goal_planner\n\u2502   \u2514\u2500\u2500 goal_planner.param.yaml\n\u251c\u2500\u2500 lane_change\n\u2502   \u2514\u2500\u2500 lane_change.param.yaml\n\u251c\u2500\u2500 side_shift\n\u2502   \u2514\u2500\u2500 side_shift.param.yaml\n\u2514\u2500\u2500 start_planner\n    \u2514\u2500\u2500 start_planner.param.yaml\n

Similarly, the common directory contains configuration files that are used across various modules, providing shared parameters and settings essential for the functioning of the Behavior Path Planner:

common\n\u251c\u2500\u2500 common.param.yaml\n\u251c\u2500\u2500 costmap_generator.param.yaml\n\u2514\u2500\u2500 nearest_search.param.yaml\n

The preset directory contains the configurations for managing the operational state of various modules. It includes the default_preset.yaml file, which specifically caters to enabling and disabling modules within the system.

preset\n\u2514\u2500\u2500 default_preset.yaml\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#limitations-future-work","title":"Limitations & Future Work","text":"
  1. The Goal Planner module cannot be executed simultaneously with other modules.
  2. The module is not designed as a plugin. Integrating a custom module is not straightforward. Users have to modify part of the Behavior Path Planner's main code.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_interface_design/","title":"Interface design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_interface_design/#interface-design","title":"Interface design","text":"

Warning

Under Construction

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/","title":"Limitations","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitations","title":"Limitations","text":"

This document describes the limitations currently present in the behavior_path_planner module.

The following items, among others, fall within the scope of these limitations:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-multiple-connected-opposite-lanes-require-linestring-with-shared-id","title":"Limitation: Multiple connected opposite lanes require Linestring with shared ID","text":"

To fully utilize the Lanelet2 API, the design of the vector map (.osm) needs to follow all the criteria described in the Lanelet2 documentation. Specifically, in the case of 2 or more lanes, the Linestrings that divide the current lane from the opposite/adjacent lane need to have a matching Linestring ID. Assume the following ideal case.

In the image, Linestring ID51 is shared by Lanelet A and Lanelet B. Hence, we can use the available left, adjacentLeft, right, adjacentRight, and findUsages methods within the Lanelet2 API to directly query the direction and opposite lane availability.

const auto right_lane = routing_graph_ptr_->right(lanelet);\nconst auto adjacent_right_lane = routing_graph_ptr_->adjacentRight(lanelet);\nconst auto opposite_right_lane = lanelet_map_ptr_->laneletLayer.findUsages(lanelet.rightBound().invert());\n

The following images show situations where these APIs do not work directly. This means that we cannot use them straight away, and several assumptions and logical instructions are needed to make them work.

In this example (multiple linestring issues), Lanelet C contains Linestring ID61 and ID62, while Lanelet D contains Linestring ID63 and ID64. Although Linestring ID62 and ID64 have identical point IDs and seem visually connected, the API will treat these Linestrings as though they are separate. When it searches for any Lanelet that is connected via Linestring ID62, it will return NULL, since ID62 only connects to Lanelet C and not to any other Lanelet.

Although, in this case, it is possible to forcefully search for lanelet availability by checking which lanelets contain the points, using the getLaneletFromPoint method, the implementation requires complex rules for it to work. Take the following images as an example.

Assume Object X is in Lanelet F. We can forcefully search Lanelet E via Point 7, and it will work if Point 7 is utilized by only 2 lanelets. However, the complexity increases when we want to start searching for the direction of the opposite lane. We can infer the direction of the lanelet by using mathematical operations (the dot product of vectors V_ID72 (Point 6 minus Point 9) and V_ID74 (Point 7 minus Point 8)). But notice that we did not use Point 7 in V_ID72. This is because searching for it requires an iteration, adding non-beneficial computation.

Suppose the points are used by more than 2 lanelets. In that case, we have to find the differences for all lanelets, and the result might be undefined. The reason is that the differences between the coordinates do not reflect the actual shape of the lanelet. The following image demonstrates this point.

There are many other possible solutions to try. However, further attempts to solve this might cause issues in the future, especially for maintaining or scaling up the software.

In conclusion, multiple-Linestring scenarios will not be supported. Covering these scenarios might give the user an \"everything is possible\" impression. This is dangerous, since any attempt to create a non-standardized vector map is not compliant with safety regulations.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-avoidance-at-corners-and-intersections","title":"Limitation: Avoidance at Corners and Intersections","text":"

Currently, the implementation doesn't cover avoidance at corners and intersections. The reason is similar to the one described here. However, this case can still be supported in the future (assuming the vector map is defined correctly).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-chattering-shifts","title":"Limitation: Chattering shifts","text":"

There is a possibility that the shifted path chatters as a result of various factors, for example, the bounding box shape or position from the perception input. Sometimes it is difficult for the perception to get complete information about the object's size. As the object size is updated, the object length will also be updated. This might cause the shift points to be re-calculated, resulting in chattering shift points.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#manager-design","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#purpose-role","title":"Purpose / Role","text":"

The manager launches and executes scene modules in behavior_path_planner depending on the use case, and has been developed to achieve the following features:

Movie

Support status:

Name Simple exclusive execution Advanced simultaneous execution Avoidance Avoidance By Lane Change Lane Change External Lane Change Goal Planner (without goal modification) Goal Planner (with goal modification) Pull Out Side Shift

Click here for supported scene modules.

Warning

It is still under development and some functions may be unstable.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#overview","title":"Overview","text":"

The manager is the core part of the behavior_path_planner implementation. It outputs a path based on the latest data.

The manager has sub-managers for each scene module, and its main task is

Additionally, the manager generates the root reference path, and if no other module requests execution, this path is used as the planning result of behavior_path_planner.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#slot","title":"Slot","text":"

The manager owns several containers of sub-managers, called slots, each of which holds and runs several sub-managers and sends its output to the next slot. Given the initial reference path, each slot processes the input path, and the output path is processed by the next slot. The final slot's output is utilized as the output of the manager. Each slot passes the following information

struct SlotOutput\n{\n  BehaviorModuleOutput valid_output;\n\n  // if a candidate module is running, valid_output contains the planning by the candidate module. In\n  // that case, downstream slots will just run approved modules and do not try to launch new\n  // modules\n  bool is_upstream_candidate_exclusive{false};\n\n  // if this slot failed, downstream slots need to refresh approved/candidate modules and just\n  // forward valid_output of this slot output\n  bool is_upstream_failed_approved{false};\n\n  // if the approved module in this slot returned to isWaitingApproval, downstream slots need to\n  // refresh candidate once\n  bool is_upstream_waiting_approved{false};\n};\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#sub-managers","title":"Sub-managers","text":"

The sub-manager's main task is

sub-managers

A sub-manager is registered on the manager with the following function.

/**\n * @brief register managers.\n * @param manager pointer.\n */\nvoid registerSceneModuleManager(const SceneModuleManagerPtr & manager_ptr)\n{\n  RCLCPP_INFO(logger_, \"register %s module\", manager_ptr->getModuleName().c_str());\n  manager_ptrs_.push_back(manager_ptr);\n  processing_time_.emplace(manager_ptr->getModuleName(), 0.0);\n}\n

Code is here

Each sub-manager has the following parameters, which the manager needs in order to manage the launched modules; these parameters can be set for each module.

struct ModuleConfigParameters\n{\n  bool enable_module{false};\n  bool enable_rtc{false};\n  bool enable_simultaneous_execution_as_approved_module{false};\n  bool enable_simultaneous_execution_as_candidate_module{false};\n  uint8_t priority{0};\n};\n

Code is here

Name Type Description enable_module bool If true, the sub-manager is registered on the manager. enable_rtc bool If true, the scene modules must be approved by the RTC (request to cooperate) function. If false, the module can be run without approval from RTC. enable_simultaneous_execution_as_candidate_module bool If true, the manager allows its scene modules to run with other scene modules as candidate modules. enable_simultaneous_execution_as_approved_module bool If true, the manager allows its scene modules to run with other scene modules as approved modules. priority uint8_t The manager decides execution priority based on this parameter; the smaller the number, the higher the priority."},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-modules","title":"Scene modules","text":"

Scene modules receive the necessary data and RTC commands, and output candidate path(s), a reference path, and the RTC cooperate status. When multiple modules run in series, the output of the previous module is received as input, and this information is used to generate a new modified path, as shown in the following figure. When one module runs alone, it receives, as the previous module's output, a reference path generated from the centerline of the lane in which the ego vehicle is currently driving.

scene module I/O Type Description IN behavior_path_planner::BehaviorModuleOutput Previous module output; contains data necessary for path planning. IN behavior_path_planner::PlannerData Contains data necessary for path planning. IN tier4_planning_msgs::srv::CooperateCommands Contains approval data for the scene module's path modification. (details) OUT behavior_path_planner::BehaviorModuleOutput Contains the modified path, turn signal information, etc. OUT tier4_planning_msgs::msg::CooperateStatus Contains the RTC cooperate status. (details) OUT autoware_planning_msgs::msg::Path Candidate path output by a module that has not received approval for the path change; when it is approved, the path the ego follows is switched to this path. (just for visualization) OUT autoware_planning_msgs::msg::Path Reference path generated from the centerline of the lane the ego is going to follow. (just for visualization) OUT visualization_msgs::msg::MarkerArray Virtual walls, debug info, etc.

Scene modules running on the manager are stored in the candidate modules stack or the approved modules stack, depending on whether their path modification has been approved.

Stack Approval condition Description candidate modules Not approved The candidate modules whose modified paths have not been approved by RTC are stored in the vector candidate_module_ptrs_ in the manager. The candidate modules stack is updated in the following order. 1. The manager selects only those modules that can be executed based on the configuration of the sub-manager whose scene module requests execution. 2. Determines the execution priority. 3. Executes them as candidate modules. All of these modules receive the decided (approved) path from the approved modules stack and RUN in PARALLEL. approved modules Already approved When the path modification is approved via RTC commands, the manager moves the candidate module to the approved modules stack. These modules are stored in approved_module_ptrs_. In this stack, all scene modules RUN in SERIES."},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#process-flow","title":"Process flow","text":"

There are 6 steps in one process:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step1","title":"Step1","text":"

First, the manager sets the latest planner data and runs all approved modules to get the output path. At this time, the manager checks the module statuses and removes expired modules from the approved modules stack.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step2","title":"Step2","text":"

The approved modules' output and the necessary data are input to all registered modules, and each module judges the necessity of path modification based on them. The manager then checks which modules make an execution request.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step3","title":"Step3","text":"

Check whether any request modules exist.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step4","title":"Step4","text":"

The manager decides which modules to execute as candidate modules from among the modules that requested path modification.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step5","title":"Step5","text":"

The manager decides the priority order of execution among the candidate modules and runs all of them. Each module outputs a reference path and its RTC cooperate status.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step6","title":"Step6","text":"

The manager moves the approved module from the candidate modules stack to the approved modules stack.

Within a single planning cycle, these steps are repeated until the following conditions are satisfied:

  while (rclcpp::ok()) {\n/**\n     * STEP1: get approved modules' output\n     */\nconst auto approved_modules_output = runApprovedModules(data);\n\n/**\n     * STEP2: check modules that need to be launched\n     */\nconst auto request_modules = getRequestModules(approved_modules_output);\n\n/**\n     * STEP3: if there is no module that needs to be launched, return the approved modules' output\n     */\nif (request_modules.empty()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n     * STEP4: if there are modules that should be launched, execute them\n     */\nconst auto [highest_priority_module, candidate_modules_output] =\nrunRequestModules(request_modules, data, approved_modules_output);\nif (!highest_priority_module) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n     * STEP5: if the candidate module's modification is NOT approved yet, return the result.\n     * NOTE: the result is the output of the candidate module, but the output path doesn't contain the path\n     * shape modification that needs approval. On the other hand, it could include velocity profile\n     * modification.\n     */\nif (highest_priority_module->isWaitingApproval()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn candidate_modules_output;\n}\n\n/**\n     * STEP6: if the candidate module is approved, push the module into approved_module_ptrs_\n     */\naddApprovedModule(highest_priority_module);\nclearCandidateModules();\n}\n

Code is here

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#priority-of-execution-request","title":"Priority of execution request","text":"

The manager compares the priority parameters among sub-managers to determine the order of execution based on the config. Therefore, the priority between sub-modules does NOT change at runtime.

  /**\n   * @brief swap the modules order based on its priority.\n   * @param modules.\n   * @details for now, the priority is decided in the config file and doesn't change at runtime.\n   */\nvoid sortByPriority(std::vector<SceneModulePtr> & modules) const\n{\n// TODO(someone) enhance this priority decision method.\nstd::sort(modules.begin(), modules.end(), [this](auto a, auto b) {\nreturn getManager(a)->getPriority() < getManager(b)->getPriority();\n});\n}\n

Code is here

In the future, however, we are considering having the priorities change dynamically depending on the situation in order to achieve more complex use cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-request-modules-to-run","title":"How to decide which request modules to run?","text":"

In this manager, multiple scene modules may request path modification at the same time. In that case, the modules to be executed as candidate modules are determined in the following order.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step1_1","title":"Step1","text":"

The modules that make a request are pushed back to request_modules.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step2_1","title":"Step2","text":"

Check the approved modules stack, and remove non-executable modules from request_modules based on the following conditions.

Executable or not:

Condition A Condition B Condition C Executable as candidate modules? YES - YES YES YES - NO YES NO YES YES YES NO YES NO NO NO NO YES NO NO NO NO NO

If a module that doesn't support simultaneous execution exists in the approved modules stack (i.e., Condition B is NOT satisfied), no more modules can be added to the stack, and therefore none of the modules can be executed as candidates.

For example, if an approved module's setting of enable_simultaneous_execution_as_approved_module is ENABLE, then only modules whose setting is also ENABLE proceed to the next step.
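
Read as pseudocode, this filtering step looks roughly like the following sketch. ModuleInfo and filterByApprovedStack are simplified, hypothetical stand-ins for illustration only; the real implementation operates on SceneModulePtr and the sub-manager configuration.

#include <algorithm>\n#include <iterator>\n#include <string>\n#include <vector>\n\n// Simplified stand-in for a module and its sub-manager configuration (hypothetical).\nstruct ModuleInfo\n{\n  std::string name;\n  bool simultaneous_as_approved;  // enable_simultaneous_execution_as_approved_module\n};\n\n// Keep only the request modules that may run together with the approved stack.\nstd::vector<ModuleInfo> filterByApprovedStack(\n  const std::vector<ModuleInfo> & request_modules,\n  const std::vector<ModuleInfo> & approved_modules)\n{\n  // If the approved modules stack is empty, every request module proceeds.\n  if (approved_modules.empty()) {\n    return request_modules;\n  }\n\n  // If any approved module forbids simultaneous execution, nothing can be added.\n  const bool stack_allows = std::all_of(\n    approved_modules.begin(), approved_modules.end(),\n    [](const ModuleInfo & m) { return m.simultaneous_as_approved; });\n  if (!stack_allows) {\n    return {};\n  }\n\n  // Otherwise, only request modules that themselves allow simultaneous execution proceed.\n  std::vector<ModuleInfo> executable;\n  std::copy_if(\n    request_modules.begin(), request_modules.end(), std::back_inserter(executable),\n    [](const ModuleInfo & m) { return m.simultaneous_as_approved; });\n  return executable;\n}\n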

Other examples:

Process Description If the approved modules stack is empty, then all request modules proceed to the next step, regardless of the setting of enable_simultaneous_execution_as_approved_module. If an approved module's setting of enable_simultaneous_execution_as_approved_module is DISABLE, then all request modules are discarded."},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step3_1","title":"Step3","text":"

Sort request_modules by priority.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step4_1","title":"Step4","text":"

Check and pick up executable modules as candidates in order of priority based on the following conditions.

Executable or not:

Condition A Condition B Condition C Executable as candidate modules? YES - YES YES YES - NO YES NO YES YES YES NO YES NO NO NO NO YES NO NO NO NO NO

For example, if the highest priority module's setting of enable_simultaneous_execution_as_candidate_module is DISABLE, then all modules from the second priority onward are discarded.
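
This priority cut-off can be sketched as follows. CandidateInfo is a hypothetical, simplified descriptor, and the input is assumed to be already sorted by priority (see sortByPriority above).

#include <algorithm>\n#include <vector>\n\n// Hypothetical, simplified candidate descriptor.\nstruct CandidateInfo\n{\n  int priority;                    // smaller value = higher priority\n  bool simultaneous_as_candidate;  // enable_simultaneous_execution_as_candidate_module\n};\n\n// Pick executable candidates in priority order from requests sorted by priority.\nstd::vector<CandidateInfo> pickCandidates(const std::vector<CandidateInfo> & sorted_requests)\n{\n  std::vector<CandidateInfo> candidates;\n  for (const auto & m : sorted_requests) {\n    if (candidates.empty()) {\n      candidates.push_back(m);  // the highest priority request is always picked\n      continue;\n    }\n    // A lower priority module is picked only if all already picked modules and\n    // the module itself allow simultaneous execution as candidate modules.\n    const bool picked_allow = std::all_of(\n      candidates.begin(), candidates.end(),\n      [](const CandidateInfo & c) { return c.simultaneous_as_candidate; });\n    if (picked_allow && m.simultaneous_as_candidate) {\n      candidates.push_back(m);\n    } else {\n      break;  // modules after this priority are discarded\n    }\n  }\n  return candidates;\n}\n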

Other examples:

Process Description If a module with a higher priority exists, lower priority modules whose setting of enable_simultaneous_execution_as_candidate_module is DISABLE are discarded. If all modules' setting of enable_simultaneous_execution_as_candidate_module is ENABLE, then all modules proceed to the next step."},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step5_1","title":"Step5","text":"

Run all candidate modules.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-modules-output-to-use","title":"How to decide which module's output to use?","text":"

Sometimes, multiple candidate modules are running simultaneously.

In this case, the manager selects the candidate module whose output path is used as the behavior_path_planner output, based on the approval condition and priority according to the following rules.

Module A's approval condition Module A's priority Module B's approval condition Module B's priority Final priority Approved 1 Approved 99 Module A > Module B Approved 1 Not approved 99 Module A > Module B Not approved 1 Approved 99 Module B > Module A Not approved 1 Not approved 99 Module A > Module B

Note

The smaller the number is, the higher the priority is.

module priority

Additionally, the manager moves the highest priority module to the approved modules stack if it is already approved.
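
The selection rule in the table above can be condensed into a small comparator, sketched below. RunningModule is a hypothetical, simplified stand-in for a running candidate module.

#include <vector>\n\n// Hypothetical, simplified view of a running candidate module.\nstruct RunningModule\n{\n  bool approved;  // whether the RTC approval has been given\n  int priority;   // smaller value = higher priority\n};\n\n// Select the module whose output is used as the behavior_path_planner output.\nconst RunningModule * selectOutputModule(const std::vector<RunningModule> & modules)\n{\n  const RunningModule * best = nullptr;\n  for (const auto & m : modules) {\n    if (best == nullptr) {\n      best = &m;\n    } else if (m.approved != best->approved) {\n      if (m.approved) {\n        best = &m;  // an approved module outranks a waiting-approval module\n      }\n    } else if (m.priority < best->priority) {\n      best = &m;  // same approval state: the smaller number wins\n    }\n  }\n  return best;\n}\n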

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-module-unregister-process","title":"Scene module unregister process","text":"

The manager removes expired modules from the approved modules stack based on each module's status.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#waiting-approval-modules","title":"Waiting approval modules","text":"

If one module requests multiple path changes, the module may return to the waiting-approval state. In this case, the manager moves the module back to the candidate modules stack. Any modules that were pushed to the approved modules stack later than the waiting-approval module are also removed from the approved modules stack.

This is because module C plans its output path using the output of module B as input; if module B is removed from the approved modules stack, the input of module C changes, and the output path of module C may also change greatly, making the output path unstable.

As a result, module A's output is used as the approved modules stack output.

If this case happens in a slot, is_upstream_waiting_approved is set to true.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#failure-modules","title":"Failure modules","text":"

If a module returns ModuleStatus::FAILURE, the manager removes the failed module. Additionally, all modules after the failed module are removed, even if they did not return ModuleStatus::FAILURE. These modules are not added back to the candidate modules stack and will instead run again from the beginning. Once these modules are removed, the output of the module prior to the failed module will be used as the planner's output.

As shown in the example below, modules B, A, and C are running. When module A returns ModuleStatus::FAILURE, both module A and C are removed from the approved modules stack. Module B's output is then used as the final output of the planner.

If this case happens in a slot, is_upstream_failed_approved is set to true.
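
The removal policy can be sketched as below, using a plain vector of module names as a hypothetical stand-in for approved_module_ptrs_. Applied to the example above (stack = {B, A, C}) with module A failing, the stack shrinks to {B}, and module B's output becomes the planner output.

#include <algorithm>\n#include <string>\n#include <vector>\n\n// Remove the failed module and every module added after it from the approved stack.\nvoid removeFailedAndFollowers(\n  std::vector<std::string> & approved_stack, const std::string & failed)\n{\n  const auto it = std::find(approved_stack.begin(), approved_stack.end(), failed);\n  if (it != approved_stack.end()) {\n    // Erase the failed module together with all later (downstream) modules.\n    approved_stack.erase(it, approved_stack.end());\n  }\n}\n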

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#succeeded-modules","title":"Succeeded modules","text":"

The succeeded modules return the status ModuleStatus::SUCCESS. The manager removes those modules based on a Last In, First Out (LIFO) policy. In other words, if a module that was added to the approved modules stack later is still running (is in ModuleStatus::RUNNING), the manager doesn't remove the succeeded module. The reason is the same as for the removal of waiting-approval modules: to prevent sudden changes in the running module's output.

As an exception, if the Lane Change module returns the status ModuleStatus::SUCCESS, the manager doesn't remove any modules until all modules are in the status ModuleStatus::SUCCESS. This is because when the manager removes the Lane Change (normal LC, external LC, avoidance by LC) module as a succeeded module, it updates the information of the lane Ego is currently driving in, so the root reference path (= module A's input path) would change significantly at that moment.

When the manager removes succeeded modules, the last added module's output is used as the approved modules stack output.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#slot-output-propagation","title":"Slot output propagation","text":"

As the initial value, the following SlotOutput is passed to the first slot.

  SlotOutput result_output = SlotOutput{\ngetReferencePath(data),\nfalse,\nfalse,\nfalse,\n};\n

If a slot's output is flagged is_upstream_failed_approved, all the subsequent slots are refreshed and have all of their approved_modules and candidate_modules cleared. The valid_output is just transferred to the end without any modification.

If a slot's output is flagged is_upstream_waiting_approved, all the subsequent slots clear their candidate_modules once and apply their approved_modules to obtain the slot output.

If a slot's output is flagged is_upstream_candidate_exclusive, a module that is not simultaneously executable as a candidate is running in that slot. All the subsequent slots then just apply their approved_modules without trying to launch new candidate_modules.
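
Putting the three flags together, the propagation over slots can be sketched as below. Slot and its methods are hypothetical placeholders with stub bodies; the real SubPlannerManager interface is richer.

#include <vector>\n\nstruct BehaviorModuleOutput\n{\n};\n\nstruct SlotOutput\n{\n  BehaviorModuleOutput valid_output;\n  bool is_upstream_candidate_exclusive{false};\n  bool is_upstream_failed_approved{false};\n  bool is_upstream_waiting_approved{false};\n};\n\n// Hypothetical slot interface with placeholder bodies, for illustration only.\nstruct Slot\n{\n  void clearApprovedAndCandidates() {}                              // full refresh\n  void clearCandidatesOnce() {}                                     // refresh candidates\n  SlotOutput runApprovedOnly(const SlotOutput & in) { return in; }  // placeholder\n  SlotOutput runNormally(const SlotOutput & in) { return in; }      // placeholder\n};\n\nSlotOutput propagate(std::vector<Slot> & slots, SlotOutput upstream)\n{\n  for (auto & slot : slots) {\n    if (upstream.is_upstream_failed_approved) {\n      // Refresh the slot and just forward valid_output unchanged.\n      slot.clearApprovedAndCandidates();\n    } else if (upstream.is_upstream_waiting_approved) {\n      // Clear candidates once, then apply the approved modules.\n      slot.clearCandidatesOnce();\n      upstream = slot.runApprovedOnly(upstream);\n    } else if (upstream.is_upstream_candidate_exclusive) {\n      // Apply approved modules without launching new candidates.\n      upstream = slot.runApprovedOnly(upstream);\n    } else {\n      upstream = slot.runNormally(upstream);\n    }\n  }\n  return upstream;\n}\n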

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#reference-path-generation","title":"Reference path generation","text":"

The reference path is generated from the centerline of the lanelet sequence obtained from the current route lanelet; it is not only used as the input to the first module added to the approved modules stack, but also used as the output of behavior_path_planner when none of the modules are running.

root reference path generation

The current route lanelet keeps track of the route lanelet currently followed by the planner. It is initialized as the closest lanelet within the route. It is then updated as ego travels along the route such that (1) it follows the previous current route lanelet and (2) it is the closest lanelet within the route.

The current route lanelet can be reset to the closest lanelet within the route, ignoring whether it follows the previous current route lanelet.
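
A minimal sketch of this update/reset rule, assuming hypothetical helper stubs for the closest route lanelet and the route connectivity (the real logic lives in the route handler):

#include <optional>\n\nusing LaneletId = long;  // simplified stand-in for a lanelet handle\n\nLaneletId closestRouteLanelet() { return 0; }        // stub: closest lanelet within the route\nbool follows(LaneletId, LaneletId) { return true; }  // stub: connectivity along the route\n\n// Keep following the previous current route lanelet when possible; otherwise\n// (or when a reset is requested) fall back to the closest route lanelet.\nLaneletId updateCurrentRouteLanelet(std::optional<LaneletId> previous, bool reset)\n{\n  const LaneletId closest = closestRouteLanelet();\n  if (reset || !previous.has_value()) {\n    return closest;\n  }\n  return follows(*previous, closest) ? closest : *previous;\n}\n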

The manager needs to know the ego behavior and then generate a root reference path from the lanes that Ego should follow.

For example, during autonomous driving, even if Ego moves into the next lane in order to avoid a parked vehicle, the target lanes that Ego should follow will NOT change because Ego will return to the original lane after the avoidance maneuver. Therefore, the manager does NOT reset the current route lanelet, even if the avoidance maneuver is finished.

On the other hand, if the lane change is successful, the manager resets the current route lanelet because the lane that Ego should follow changes.

In addition, while manually driving (i.e., either the OperationModeState is different from AUTONOMOUS or the Autoware control is not engaged), the manager resets the current route lanelet at each iteration because the ego vehicle may move to an adjacent lane regardless of the decision of the autonomous driving system. The only exception is when a module is already approved, allowing testing the module's behavior while manually driving.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#drivable-area-generation","title":"Drivable area generation","text":"

Warning

Under Construction

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#turn-signal-management","title":"Turn signal management","text":"

Warning

Under Construction

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/","title":"Drivable Area design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-design","title":"Drivable Area design","text":"

The Drivable Area represents the area where the ego vehicle can travel.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#purpose-role","title":"Purpose / Role","text":"

In order to define the area where the ego vehicle can travel safely, we generate the drivable area in the behavior path planner module. Our drivable area is represented by two line strings: the left_bound line and the right_bound line. Both left_bound and right_bound are created from the left and right boundaries of lanelets. Note that left_bound and right_bound are generated by the generateDrivableArea function.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#assumption","title":"Assumption","text":"

Our drivable area generation relies on several assumptions.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#limitations","title":"Limitations","text":"

Currently, when clipping the left bound or right bound, the function can clip the bound more than necessary, and the generated path might be conservative.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#parameters-for-drivable-area-generation","title":"Parameters for drivable area generation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion","title":"Static expansion","text":"Name Unit Type Description Default value drivable_area_right_bound_offset [m] double right offset length to expand drivable area 5.0 drivable_area_left_bound_offset [m] double left offset length to expand drivable area 5.0 drivable_area_types_to_skip [-] string linestring types (as defined in the lanelet map) that will not be expanded road_border"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion","title":"Dynamic expansion","text":"Name Unit Type Description Default value enabled [-] boolean if true, dynamically expand the drivable area based on the path curvature true print_runtime [-] boolean if true, runtime is logged by the node true max_expansion_distance [m] double maximum distance by which the original drivable area can be expanded (no limit if set to 0) 0.0 smoothing.curvature_average_window [-] int window size used for smoothing the curvatures using a moving window average 3 smoothing.max_bound_rate [m/m] double maximum rate of change of the bound lateral distance over its arc length 1.0 smoothing.arc_length_range [m] double arc length range where an expansion distance is initially applied 2.0 ego.extra_wheel_base [m] double extra ego wheelbase 0.0 ego.extra_front_overhang [m] double extra ego overhang 0.5 ego.extra_width [m] double extra ego width 1.0 dynamic_objects.avoid [-] boolean if true, the drivable area is not expanded in the predicted path of dynamic objects true dynamic_objects.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.5 path_preprocessing.max_arc_length [m] double maximum arc length along the path where the ego footprint is projected (0.0 means no limit) 100.0 path_preprocessing.resample_interval [m] double fixed interval between resampled path points (0.0 means path points are directly used) 2.0 path_preprocessing.reuse_max_deviation [m] double if the path changes by more than this value, the curvatures are recalculated. Otherwise they are reused 0.5 avoid_linestring.types [-] string array linestring types in the lanelet maps that will not be crossed when expanding the drivable area [\"road_border\", \"curbstone\"] avoid_linestring.distance [m] double distance to keep between the drivable area and the linestrings to avoid 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This section gives details of the generation of the drivable area (left_bound and right_bound).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-lanes-generation","title":"Drivable Lanes Generation","text":"

Before generating drivable areas, drivable lanes need to be sorted. Drivable lanes are selected in each module (Lane Follow, Avoidance, Lane Change, Goal Planner, Pull Out, etc.), so more details about the selection of drivable lanes can be found in each module's document. We use the following structure to define the drivable lanes.

struct DrivableLanes\n{\nlanelet::ConstLanelet right_lanelet; // rightmost lane\nlanelet::ConstLanelet left_lanelet; // leftmost lane\nlanelet::ConstLanelets middle_lanelets; // middle lanes\n};\n

The image of the sorted drivable lanes is depicted in the following picture.

Note that the order of the drivable lanes becomes

drivable_lanes = {DrivableLanes1, DrivableLanes2, DrivableLanes3, DrivableLanes4, DrivableLanes5}\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-generation","title":"Drivable Area Generation","text":"

In this section, a drivable area is created using the drivable lanes arranged in the order in which vehicles pass by. We create left_bound from the left boundary of the leftmost lanelet and right_bound from the right boundary of the rightmost lanelet. The image of the created drivable area is shown as the following blue lines. Note that the drivable area is defined in the Path and PathWithLaneId messages as

std::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n

and each point of right bound and left bound has a position in the absolute coordinate system.
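
A rough sketch of this concatenation follows, using minimal stand-ins for geometry_msgs::msg::Point and the lanelet boundaries; the actual generateDrivableArea also handles overlaps and duplicated points.

#include <vector>\n\nstruct Point\n{\n  double x, y, z;  // stand-in for geometry_msgs::msg::Point\n};\n\nstruct Lanelet\n{\n  std::vector<Point> left_boundary;\n  std::vector<Point> right_boundary;\n};\n\nstruct DrivableLanes\n{\n  Lanelet right_lanelet;  // rightmost lane\n  Lanelet left_lanelet;   // leftmost lane\n};\n\n// Concatenate the left boundary of each leftmost lanelet and the right boundary\n// of each rightmost lanelet, following the order in which vehicles pass by.\nvoid generateBounds(\n  const std::vector<DrivableLanes> & drivable_lanes,\n  std::vector<Point> & left_bound, std::vector<Point> & right_bound)\n{\n  for (const auto & lanes : drivable_lanes) {\n    const auto & lb = lanes.left_lanelet.left_boundary;\n    const auto & rb = lanes.right_lanelet.right_boundary;\n    left_bound.insert(left_bound.end(), lb.begin(), lb.end());\n    right_bound.insert(right_bound.end(), rb.begin(), rb.end());\n  }\n}\n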

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-expansion","title":"Drivable Area Expansion","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion_1","title":"Static Expansion","text":"

Each module can statically expand the left and right bounds of the target lanes by the values defined in the parameters. This enables large vehicles to pass narrow curves. The image of this process can be described as

Note that we only expand right bound of the rightmost lane and left bound of the leftmost lane.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion_1","title":"Dynamic Expansion","text":"

The drivable area can also be expanded dynamically based on a minimum width calculated from the path curvature and the ego vehicle's properties. If static expansion is also enabled, the dynamic expansion will be done after the static expansion such that both expansions are applied.

Without dynamic expansion With dynamic expansion

Next we detail the algorithm used to expand the drivable area bounds.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#1-calculate-and-smooth-the-path-curvature","title":"1 Calculate and smooth the path curvature","text":"

To avoid sudden changes of the dynamically expanded drivable area, we first try to reuse as much of the previous path and its calculated curvatures as possible. Previous path points and curvatures are reused up to the first previous path point that deviates from the new path by more than the reuse_max_deviation parameter. At this stage, the path is also resampled according to the resample_interval and cropped according to the max_arc_length. With the resulting preprocessed path points and previous curvatures, curvatures of the new path points are calculated using the 3 points method and smoothed using a moving window average with window size curvature_average_window.
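
For instance, the moving window average for the default curvature_average_window = 3 can be sketched as:

#include <algorithm>\n#include <cstddef>\n#include <vector>\n\n// Smooth point-wise curvatures with a moving window average; window is the\n// number of samples (e.g., the default curvature_average_window = 3).\nstd::vector<double> smoothCurvatures(\n  const std::vector<double> & curvatures, std::size_t window)\n{\n  const std::size_t half = window / 2;\n  std::vector<double> smoothed(curvatures.size(), 0.0);\n  for (std::size_t i = 0; i < curvatures.size(); ++i) {\n    const std::size_t from = (i >= half) ? i - half : 0;               // clamp at path start\n    const std::size_t to = std::min(curvatures.size() - 1, i + half);  // clamp at path end\n    double sum = 0.0;\n    for (std::size_t j = from; j <= to; ++j) {\n      sum += curvatures[j];\n    }\n    smoothed[i] = sum / static_cast<double>(to - from + 1);\n  }\n  return smoothed;\n}\n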

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#2-for-each-path-point-calculate-the-closest-bound-segment-and-the-minimum-drivable-area-width","title":"2 For each path point, calculate the closest bound segment and the minimum drivable area width","text":"

Each path point is projected on the original left and right drivable area bounds to calculate its corresponding bound index, its original distance from the bounds, and the projected point. Additionally, for each path point, the minimum drivable area width is calculated using the following equation, where \\(W\\) is the minimum drivable area width, \\(a\\) is the front overhang of ego, \\(l\\) is the wheelbase of ego, \\(w\\) is the width of ego, and \\(k\\) is the path curvature. This equation was derived from the work of Lim, H., Kim, C., and Jo, A., \"Model Predictive Control-Based Lateral Control of Autonomous Large-Size Bus on Road with Large Curvature,\" SAE Technical Paper 2021-01-0099, 2021.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#3-calculate-maximum-expansion-distances-of-each-bound-point-based-on-dynamic-objects-and-linestring-of-the-vector-map-optional","title":"3 Calculate maximum expansion distances of each bound point based on dynamic objects and linestring of the vector map (optional)","text":"

For each drivable area bound point, we calculate its maximum expansion distance as its distance to the closest \"obstacle\" (either a map linestring with a type in avoid_linestring.types, or a dynamic object footprint if dynamic_objects.avoid is set to true). If max_expansion_distance is not 0.0, it is used here if it is smaller than the distance to the closest obstacle.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#4-calculate-by-how-much-each-bound-point-should-be-pushed-away-from-the-path","title":"4 Calculate by how much each bound point should be pushed away from the path","text":"

For each bound point, a shift distance is calculated such that the resulting width between corresponding left and right bound points is as close as possible to the minimum width calculated in step 2, while the individual shift distance stays below the previously calculated maximum expansion distance.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#5-shift-bound-points-by-the-values-calculated-in-step-4-and-remove-all-loops-in-the-resulting-bound","title":"5 Shift bound points by the values calculated in step 4 and remove all loops in the resulting bound","text":"

Finally, each bound point is shifted away from the path by the distance calculated in step 4. Once all points have been shifted, loops are removed from the bound and we obtain our final expanded drivable area.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#visualizing-maximum-drivable-area-debug","title":"Visualizing maximum drivable area (Debug)","text":"

Sometimes, the developers might get a different result between two maps that may look identical during visual inspection.

For example, in the same area, one map allows avoidance while another does not. This might be related to maximum drivable area issues caused by a non-compliant vector map design.

To debug the issue, the maximum drivable area boundary can be visualized.

The maximum drivable area can be visualized by adding the marker from /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/maximum_drivable_area

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#expansion-with-hatched-road-markings-area","title":"Expansion with hatched road markings area","text":"

If the hatched road markings area is defined in the lanelet map, the area can be used as a drivable area. Since the area is expressed as a polygon format of Lanelet2, several steps are required for correct expansion.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/","title":"Path Generation design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#path-generation-design","title":"Path Generation design","text":"

This document explains how the path is generated for lane change and avoidance, etc. The implementation can be found in path_shifter.hpp.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#overview","title":"Overview","text":"

The base idea of the path generation in lane change and avoidance is to smoothly shift the reference path, such as the center line, in the lateral direction. This is achieved by using a constant-jerk profile as in the figure below. More details on how it is used can be found in README. It is assumed that the reference path is smooth enough for this algorithm.

The figure below explains how the application of a constant lateral jerk \\(l^{'''}(s)\\) can be used to induce lateral shifting. In order to comply with the limits on lateral acceleration and velocity, zero-jerk time is employed in the figure ( \\(T_a\\) and \\(T_v\\) ). In each interval where constant jerk is applied, the shift position \\(l(s)\\) can be characterized by a third-degree polynomial. Therefore the shift length from the reference path can then be calculated by combining spline curves.

Note that, since \\(T_v\\) is rarely nonzero in almost all cases of lane change and avoidance, \\(T_v\\) is not considered in the current implementation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#mathematical-derivation","title":"Mathematical Derivation","text":"

With initial longitudinal velocity \\(v_0^{\\rm lon}\\) and longitudinal acceleration \\(a^{\\rm lon}\\), longitudinal position \\(s(t)\\) and longitudinal velocity at each time \\(v^{\\rm lon}(t)\\) can be derived as:

\\[ \\begin{align} s_1&= v^{\\rm lon}_0 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_1&= v^{\\rm lon}_0 + a^{\\rm lon} T_j \\\\ s_2&= v^{\\rm lon}_1 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_2&= v^{\\rm lon}_1 + a^{\\rm lon} T_a \\\\ s_3&= v^{\\rm lon}_2 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_3&= v^{\\rm lon}_2 + a^{\\rm lon} T_j \\\\ s_4&= v^{\\rm lon}_3 T_v + \\frac{1}{2} a^{\\rm lon} T_v^2 \\\\ v_4&= v^{\\rm lon}_3 + a^{\\rm lon} T_v \\\\ s_5&= v^{\\rm lon}_4 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_5&= v^{\\rm lon}_4 + a^{\\rm lon} T_j \\\\ s_6&= v^{\\rm lon}_5 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_6&= v^{\\rm lon}_5 + a^{\\rm lon} T_a \\\\ s_7&= v^{\\rm lon}_6 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_7&= v^{\\rm lon}_6 + a^{\\rm lon} T_j \\end{align} \\]

By applying simple integral operations, the following analytical equations can be derived to describe the shift distance \\(l(t)\\) at each time under lateral jerk, lateral acceleration, and velocity constraints.

\\[ \\begin{align} l_1&= \\frac{1}{6}jT_j^3\\\\[10pt] l_2&= \\frac{1}{6}j T_j^3 + \\frac{1}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_3&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_4&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_5&= \\frac{11}{6} j T_j^3 + \\frac{5}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v \\\\[10pt] l_6&= \\frac{11}{6} j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_7&= 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v \\end{align} \\]

These equations are used to determine the shape of a path. Additionally, by applying further mathematical operations to these basic equations, the expressions of the following subsections can be derived.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-maximum-acceleration-from-transition-time-and-final-shift-length","title":"Calculation of Maximum Acceleration from transition time and final shift length","text":"

In the case where there are no limitations on lateral velocity and lateral acceleration, the maximum lateral acceleration during the shifting can be calculated as follows. The constant-jerk time is given by \\(T_j = T_{\\rm total}/4\\) because of its symmetric property. Since \\(T_a=T_v=0\\), the final shift length \\(L=l_7=2jT_j^3\\) can be determined using the above equation. The maximum lateral acceleration is then given by \\(a_{\\rm max} =jT_j\\). This results in the following expression for the maximum lateral acceleration:

\\[ \\begin{align} a_{\\rm max}^{\\rm lat} = \\frac{8L}{T_{\\rm total}^2} \\end{align} \\]"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-ta-tj-and-jerk-from-acceleration-limit","title":"Calculation of Ta, Tj and jerk from acceleration limit","text":"

In the case where there are no limitations on lateral velocity, the constant-jerk and acceleration times, as well as the required jerk can be calculated from the acceleration limit, total time, and final shift length as follows. Since \\(T_v=0\\), the final shift length is given by:

\\[ \\begin{align} L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j \\end{align} \\]

Additionally, the velocity profile reveals the following relations:

\\[ \\begin{align} a_{\\rm lim}^{\\rm lat} &= j T_j\\\\ T_{\\rm total} &= 4T_j + 2T_a \\end{align} \\]

By solving these three equations, the following can be obtained:

\\[ \\begin{align} T_j&=\\frac{T_{\\rm total}}{2} - \\frac{2L}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}}\\\\[10pt] T_a&=\\frac{4L}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}} - \\frac{T_{\\rm total}}{2}\\\\[10pt] jerk&=\\frac{2(a_{\\rm lim}^{\\rm lat})^2 T_{\\rm total}}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}^2-4L} \\end{align} \\]

where \\(T_j\\) is the constant-jerk time, \\(T_a\\) is the constant acceleration time, \\(j\\) is the required jerk, \\(a_{\\rm lim}^{\\rm lat}\\) is the lateral acceleration limit, and \\(L\\) is the final shift length.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-required-time-from-jerk-and-acceleration-constraint","title":"Calculation of Required Time from Jerk and Acceleration Constraint","text":"

In the case where there are no limitations on lateral velocity, the total time required for shifting can be calculated from the lateral jerk and lateral acceleration limits and the final shift length as follows. By solving the two equations given above:

\\[ L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j,\\quad a_{\\rm lim}^{\\rm lat} = j T_j \\]

we obtain the following expressions:

\\[ \\begin{align} T_j &= \\frac{a_{\\rm lim}^{\\rm lat}}{j}\\\\[10pt] T_a &= \\frac{1}{2}\\sqrt{\\left(\\frac{a_{\\rm lim}^{\\rm lat}}{j}\\right)^2 + \\frac{4L}{a_{\\rm lim}^{\\rm lat}}} - \\frac{3a_{\\rm lim}^{\\rm lat}}{2j} \\end{align} \\]

The total time required for shifting can then be calculated as \\(T_{\\rm total}=4T_j+2T_a\\).
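
As a quick sanity check with assumed values \\(j = 1.0 \\,{\\rm m/s^3}\\), \\(a_{\\rm lim}^{\\rm lat} = 1.0 \\,{\\rm m/s^2}\\), and \\(L = 10 \\,{\\rm m}\\):

\\[ T_j = 1.0\\,{\\rm s},\\quad T_a = \\frac{1}{2}\\sqrt{1^2 + 40} - \\frac{3}{2} \\approx 1.70\\,{\\rm s},\\quad T_{\\rm total} = 4T_j + 2T_a \\approx 7.4\\,{\\rm s} \\]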

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/","title":"Safety Check Utils","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#safety-check-utils","title":"Safety Check Utils","text":"

The safety check function checks whether the given path will collide with a given target object.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#purpose-role","title":"Purpose / Role","text":"

In the behavior path planner, certain modules (e.g., lane change) need to perform collision checks to ensure the safe navigation of the ego vehicle. These utility functions assist the user in conducting safety checks with other road participants.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#assumptions","title":"Assumptions","text":"

The safety check module is based on the following assumptions:

  1. Users must provide the position, velocity, and shape of both the ego and target objects to the utility functions.
  2. The yaw angle of each point in the predicted path of both the ego and target objects should point to the next point in the path.
  3. The safety check module uses RSS distance to determine the safety of a potential collision with other objects.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#limitations","title":"Limitations","text":"

Currently, the yaw angle of each point of the predicted paths of a target object does not point to the next point. Therefore, the safety check function might return incorrect results for some edge cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#inner-working-algorithm","title":"Inner working / Algorithm","text":"

The flow of the safety check algorithm is described in the following explanations.

Here we explain each step of the algorithm flow.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#1-get-pose-of-the-target-object-at-a-given-time","title":"1. Get pose of the target object at a given time","text":"

For the first step, we obtain the pose of the target object at a given time. This can be done by interpolating the predicted path of the object.
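
A minimal sketch of this interpolation, assuming a predicted path sampled at a fixed time step; the actual implementation interpolates the object's PredictedPath message.

#include <cmath>\n#include <cstddef>\n#include <optional>\n#include <vector>\n\nstruct Pose2d\n{\n  double x, y, yaw;\n};\n\nconstexpr double kTwoPi = 6.283185307179586;\n\n// Linearly interpolate the pose at time t [s] on a path sampled every time_step [s].\nstd::optional<Pose2d> interpolatePose(\n  const std::vector<Pose2d> & path, double time_step, double t)\n{\n  if (path.size() < 2 || t < 0.0 || time_step <= 0.0) {\n    return std::nullopt;\n  }\n  const double index = t / time_step;\n  const std::size_t i = static_cast<std::size_t>(index);\n  if (i + 1 >= path.size()) {\n    return std::nullopt;  // beyond the predicted horizon\n  }\n  const double r = index - static_cast<double>(i);  // interpolation ratio in [0, 1)\n  const Pose2d & a = path[i];\n  const Pose2d & b = path[i + 1];\n  // Interpolate position linearly and yaw along the shortest angular difference.\n  const double dyaw = std::remainder(b.yaw - a.yaw, kTwoPi);\n  return Pose2d{a.x + r * (b.x - a.x), a.y + r * (b.y - a.y), a.yaw + r * dyaw};\n}\n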

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#2-check-overlap","title":"2. Check overlap","text":"

With the interpolated pose obtained in step 1, we check whether the object and the ego vehicle overlap at the given time. If they overlap, the given path is unsafe.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#3-get-front-object","title":"3. Get front object","text":"

After the overlap check, the function performs the safety check over a broader range. In this step, it judges whether the ego or the target object is in front of the other. We use the arc length of the front point of each object along the given path to judge which one is in front. In the following example, the target object (red rectangle) is running in front of the ego vehicle (black rectangle).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#4-calculate-rss-distance","title":"4. Calculate RSS distance","text":"

After we find which vehicle is running ahead of the other vehicle, we start to compute the RSS distance. With the reaction time \\(t_{reaction}\\) and safety time margin \\(t_{margin}\\), RSS distance can be described as:

\\[ rss_{dist} = v_{rear} (t_{reaction} + t_{margin}) + \\frac{v_{rear}^2}{2|a_{rear, decel}|} - \\frac{v_{front}^2}{2|a_{front, decel}|} \\]

where \\(v_{front}\\) and \\(v_{rear}\\) are the front and rear vehicle velocities respectively, and \\(a_{front, decel}\\) and \\(a_{rear, decel}\\) are the front and rear vehicle decelerations. Note that the RSS distance is normally used for objects traveling in the same direction; if the yaw difference between a given ego pose and an object pose is larger than a user-defined yaw difference threshold, the RSS collision check is skipped for that specific pair of poses.
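
For example, with assumed values \\(v_{rear} = 10 \\,{\\rm m/s}\\), \\(v_{front} = 8 \\,{\\rm m/s}\\), \\(t_{reaction} = 1.0 \\,{\\rm s}\\), \\(t_{margin} = 0.5 \\,{\\rm s}\\), and \\(|a_{rear, decel}| = |a_{front, decel}| = 3 \\,{\\rm m/s^2}\\):

\\[ rss_{dist} = 10 \\times 1.5 + \\frac{10^2}{2 \\times 3} - \\frac{8^2}{2 \\times 3} = 15 + 16.7 - 10.7 = 21\\,{\\rm m} \\]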

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#5-create-extended-ego-and-target-object-polygons","title":"5. Create extended ego and target object polygons","text":"

In this step, we compute extended ego and target object polygons. The extended polygons can be described as:

As the picture shows, we expand the rear object's polygon: the longitudinal side is extended by the RSS distance, and the lateral side is extended by the lateral margin.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#6-check-overlap","title":"6. Check overlap","text":"

Similar to the previous step, we check the overlap of the extended rear object polygon and the front object polygon. If they overlap, we regard the situation as unsafe.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/","title":"Turn Signal design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#turn-signal-design","title":"Turn Signal design","text":"

Turn Signal decider determines necessary blinkers.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#purpose-role","title":"Purpose / Role","text":"

This module is responsible for activating the necessary blinker during driving. It uses a rule-based algorithm to determine the blinkers, and the details of this algorithm are described in the following sections. Note that this algorithm is strictly based on the Japanese Road Traffic Law.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#assumptions","title":"Assumptions","text":"

Autoware has the following order of priorities for turn signals.

  1. Activate turn signal to safely navigate ego vehicle and protect other road participants
  2. Follow traffic laws
  3. Follow human driving practices
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#limitations","title":"Limitations","text":"

Currently, this algorithm can sometimes produce unnatural (though not incorrect) blinkers in complicated situations. This is because it tries to follow the road traffic law and cannot resolve blinker conflicts cleanly in such environments.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#parameters-for-turn-signal-decider","title":"Parameters for turn signal decider","text":"Name Unit Type Description Default value turn_signal_intersection_search_distance [m] double constant search distance to decide activation of blinkers at intersections 30 turn_signal_intersection_angle_threshold_degree deg double angle threshold to determined the end point of intersection required section 15 turn_signal_minimum_search_distance [m] double minimum search distance for avoidance and lane change 10 turn_signal_search_time [s] double search time for to decide activation of blinkers 3.0 turn_signal_shift_length_threshold [m] double shift length threshold to decide activation of blinkers 0.3 turn_signal_remaining_shift_length_threshold [m] double When the ego's current shift length minus its end shift length is less than this threshold, the turn signal will be turned off. 0.1 turn_signal_on_swerving [-] bool flag to activate blinkers when swerving true

Note that the default values for turn_signal_intersection_search_distance and turn_signal_search_time strictly follow the Japanese Road Traffic Law. If these default values are not suitable for your country, you should change them in the configuration files.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm assumes that each blinker has two sections: a desired section and a required section. These two sections are depicted in the following diagram.

These two sections have the following meanings.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-desired-section","title":"- Desired Section","text":"
- This section is defined by road traffic laws. It cannot be longer or shorter than the designated length defined by the law.\n- In this section, you do not have to activate the designated blinkers if it is dangerous to do so.\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-required-section","title":"- Required Section","text":"
- In this section, ego vehicle must activate designated blinkers. However, if there are blinker conflicts, it must solve them based on the algorithm we mention later in this document.\n- Required section cannot be longer than desired section.\n

When turning on the blinker, the decider decides whether or not to turn on the specified blinker based on the distance from the front of the ego vehicle to the start point of each section. Conversely, when turning off the blinker, it calculates the distance from the base link of the ego vehicle to the end point of each section and decides whether or not to turn it off based on that.
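
A minimal sketch of this on/off decision for a single section, with hypothetical distance inputs and sign conventions assumed for illustration:

// Hypothetical distances for one blinker section.\nstruct SectionDistances\n{\n  double front_to_start;  // from the front of ego to the section start point [m]\n  double base_to_end;     // from the base link of ego to the section end point [m]\n};\n\n// The blinker is on while ego is between the section start and end points.\nbool shouldActivateBlinker(const SectionDistances & d)\n{\n  const bool reached_start = d.front_to_start <= 0.0;  // the start point was passed\n  const bool passed_end = d.base_to_end < 0.0;         // the end point was passed\n  return reached_start && !passed_end;\n}\n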

For left turn, right turn, avoidance, lane change, goal planner and pull out, we define these two sections, which are elaborated in the following part.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#1-left-and-right-turn","title":"1. Left and Right turn","text":"

The turn signal decider checks whether each lanelet on the map has turn_direction information. If a lanelet has this information, the decider activates the necessary blinker based on it.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#2-avoidance","title":"2. Avoidance","text":"

Avoidance can be separated into two sections: the first section and the second section. The first section is from the start point of the path shift to the end of the path shift. The second section is from the end of the shift to the end of the avoidance maneuver. Note that the avoidance module will not activate the turn signal when its shift length is below turn_signal_shift_length_threshold.

First section

Second section

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#3-lane-change","title":"3. Lane Change","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#4-pull-out","title":"4. Pull out","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#5-goal-planner","title":"5. Goal Planner","text":"

When handling several blinkers, priority is given to the blinker that comes first. However, this rule sometimes activates unnatural blinkers, so the turn signal decider uses the following five rules to decide the necessary turn signal.

Based on these five rules, turn signal decider can solve blinker conflicts. The following pictures show some examples of this kind of conflicts.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-several-right-and-left-turns-on-short-sections","title":"- Several right and left turns on short sections","text":"

In this scenario, the ego vehicle has to pass several turns that are close to each other. Since this pattern can be solved by the pattern1 rule, the overall result is depicted in the following picture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-1","title":"- Avoidance with left turn (1)","text":"

In this scene, ego vehicle has to deal with the obstacle that is on its original path as well as make a left turn. The overall result can be varied by the position of the obstacle, but the image of the result is described in the following picture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-2","title":"- Avoidance with left turn (2)","text":"

Same as the previous scenario, the ego vehicle has to avoid the obstacle as well as make a left turn. However, in this scene, the obstacle is parked after the intersection. Similar to the previous one, the overall result can vary with the position of the obstacle, but the image of the result is described in the following picture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-lane-change-and-left-turn","title":"- Lane change and left turn","text":"

In this scenario, the ego vehicle has to perform a lane change before making a left turn. In the following example, the ego vehicle does not activate the left turn signal until it reaches the end point of the lane change path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/","title":"Behavior Path Sampling Based Planner","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#behavior-path-sampling-based-planner","title":"Behavior Path Sampling Based Planner","text":"

WARNING: This module is experimental and has not been properly tested on a real vehicle; use it only in simulations.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#purpose","title":"Purpose","text":"

This package implements a node that uses sampling based planning to generate a drivable trajectory for the behavior path planner. It is heavily based on the sampling_based_planner module.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#features","title":"Features","text":"

This package is able to:

Note that the velocity is just taken over from the input path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#algorithm","title":"Algorithm","text":"

Sampling based planning is decomposed into 3 successive steps (a minimal sketch of the resulting loop follows the list):

  1. Sampling: candidate trajectories are generated.
  2. Pruning: invalid candidates are discarded.
  3. Selection: the best remaining valid candidate is selected.
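
A minimal sketch of this three-step loop, with stub functions standing in for the module's actual sampling, constraint checks, and cost evaluation (all names here are hypothetical):

#include <algorithm>\n#include <optional>\n#include <vector>\n\nstruct Trajectory\n{\n};\n\n// Stubs standing in for the module's actual logic (hypothetical).\nstd::vector<Trajectory> sampleCandidates() { return {}; }           // step 1: frenet polynomials\nbool satisfiesHardConstraints(const Trajectory &) { return true; }  // step 2: validity checks\ndouble softConstraintsCost(const Trajectory &) { return 0.0; }      // step 3: weighted cost\n\nstd::optional<Trajectory> plan()\n{\n  // 1. Sampling: generate candidate trajectories.\n  std::vector<Trajectory> candidates = sampleCandidates();\n\n  // 2. Pruning: discard candidates that violate any hard constraint.\n  candidates.erase(\n    std::remove_if(\n      candidates.begin(), candidates.end(),\n      [](const Trajectory & t) { return !satisfiesHardConstraints(t); }),\n    candidates.end());\n  if (candidates.empty()) {\n    return std::nullopt;  // no valid trajectory found\n  }\n\n  // 3. Selection: return the valid candidate with the lowest soft constraint cost.\n  return *std::min_element(\n    candidates.begin(), candidates.end(), [](const Trajectory & a, const Trajectory & b) {\n      return softConstraintsCost(a) < softConstraintsCost(b);\n    });\n}\n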
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#sampling","title":"Sampling","text":"

Candidate trajectories are generated based on the current ego state and some target state. One sampling algorithm is currently implemented: polynomials in the Frenet frame.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#pruning","title":"Pruning","text":"

The validity of each candidate trajectory is checked using a set of hard constraints.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#selection","title":"Selection","text":"

Among the valid candidate trajectories, the best one is determined using a set of soft constraints (i.e., objective functions).

Each soft constraint is associated with a weight to allow tuning of the preferences.
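
The weighted selection can be expressed as a simple sum; the structure and field names here are illustrative only.

#include <vector>\n\n// One evaluated soft constraint: its cost for a candidate and its tuning weight.\nstruct SoftConstraintResult\n{\n  double cost;    // e.g., average lateral deviation, curvature, remaining length\n  double weight;  // larger weight = stronger preference against this cost\n};\n\n// Total objective used to rank valid candidates (smaller is better).\ndouble totalCost(const std::vector<SoftConstraintResult> & results)\n{\n  double total = 0.0;\n  for (const auto & r : results) {\n    total += r.weight * r.cost;\n  }\n  return total;\n}\n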

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#limitations","title":"Limitations","text":"

The quality of the candidates generated with polynomials in the Frenet frame greatly depends on the reference path. If the reference path is not smooth, the resulting candidates will probably be undrivable.

Failure to find a valid trajectory currently results in a suddenly stopping trajectory.

The module has trouble generating paths that converge rapidly to the goal lanelet. Basically, after overcoming all obstacles, the module should prioritize paths that rapidly make the ego vehicle converge back to its goal lane (i.e., paths with low average and final lateral deviation). However, this does not function properly at the moment.

Detection of proper merging can be rough: sometimes, when the module detects that the ego has converged on the goal lanelet and that there are no more obstacles, the planner transitions to the goal planner, but the transition is not very smooth and could cause discomfort for the user.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#future-works","title":"Future works","text":"

Some possible improvements for this module include:

- Implementing a dynamic weight tuning algorithm: dynamically change weights depending on the scenario (i.e., to further prioritize paths with low curvature and low average lateral deviation after all obstacles have been avoided).

- Implementing multi-objective optimization to improve computing time and possibly enable more dynamic soft constraint weight tuning. Related publication.

- Implementing Bezier curves as another method to obtain samples; see the sampling_based_planner module.

- Exploring the possibility of replacing several other behavior path modules with the sampling based behavior path module.

- Performing real-life tests of this module once it has matured and some of its limitations have been solved.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#other-possibilities","title":"Other possibilities","text":"

The module is currently aimed at creating paths for static obstacle avoidance. However, the nature of the sampling planner allows this module to be expanded or repurposed for other tasks such as lane changes, dynamic avoidance, and general goal reaching. With good dynamic/scenario-dependent weight tuning, it is possible to use the sampling planning approach as a replacement for the other behavior path modules, assuming good candidate pruning and good soft constraint weight tuning.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/","title":"Side Shift design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#side-shift-design","title":"Side Shift design","text":"

(For remote control) Shifts the path to the left or right according to an external instruction.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#overview-of-the-side-shift-module-process","title":"Overview of the Side Shift Module Process","text":"
  1. Receive the required lateral offset input.
  2. Update the requested_lateral_offset_ under the following conditions: a. verify that the time since the last update has elapsed; b. ensure the required lateral offset value is different from the previous one.
  3. Insert the shift points into the path if the side shift module's status is not in the SHIFTING status.

Please be aware that requested_lateral_offset_ is continuously updated with the latest values and is not queued.
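
A minimal sketch of the update conditions in step 2 above, assuming hypothetical names (the actual member variables and thresholds belong to the module and are not shown here):

#include <cmath>

// Sketch only: returns true when requested_lateral_offset_ should be updated.
bool should_update_offset(
  const double now_sec, const double last_update_sec, const double min_update_interval_sec,
  const double required_offset, const double previous_offset)
{
  // a. the time since the last update has elapsed
  const bool interval_elapsed = (now_sec - last_update_sec) > min_update_interval_sec;
  // b. the required lateral offset differs from the previous one
  const bool value_changed = std::abs(required_offset - previous_offset) > 1.0e-4;
  return interval_elapsed && value_changed;
}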

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#statuses-of-the-side-shift","title":"Statuses of the Side Shift","text":"

The side shift has three distinct statuses. Note that during the SHIFTING status, the path cannot be updated:

  1. BEFORE_SHIFT: Preparing for shift.
  2. SHIFTING: Currently in the process of shifting.
  3. AFTER_SHIFT: Shift completed.

side shift status"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/","title":"Start Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#start-planner-design","title":"Start Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#purpose-role","title":"Purpose / Role","text":"

This module generates and plans a path for safely merging from the shoulder lane or the side of a road lane into the center of the road lane.

Specifically, it includes the following features:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-cases","title":"Use Cases","text":"

The following gives a typical example of how path generation is executed. The examples show path generation starting from the shoulder lane, but paths starting from the side of a road lane can also be generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-1-shift-pull-out","title":"Use Case 1: Shift pull out","text":"

In the shoulder lane, when there are no parked vehicles ahead and the shoulder lane is sufficiently long, a forward shift pull out path (a clothoid curve with constant jerk) is generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-2-geometric-pull-out","title":"Use Case 2: Geometric pull out","text":"

In the shoulder lane, when there are objects in front and the lane is not sufficiently long behind, a geometric pull out path is generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-3-backward-and-shift-pull-out","title":"Use Case 3: Backward and shift pull out","text":"

In the shoulder lane, when there are parked vehicles ahead and the lane is sufficiently long behind, a path that involves reversing before generating a forward shift pull out path is created.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-4-backward-and-geometric-pull-out","title":"Use Case 4: Backward and geometric pull out","text":"

In the shoulder lane, when there is an object ahead and not enough space to reverse sufficiently, a path that involves reversing before making a geometric pull out is generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-5-freespace-pull-out","title":"Use Case 5: Freespace pull out","text":"

If the map is annotated with the information that a free space path can be generated, a path based on the free space algorithm is generated in situations where both shift and geometric pull out paths are impossible to create.

As a note, the patterns for generating these paths are based on default parameters, but as will be explained in the following sections, it is possible to control aspects such as making paths that involve reversing more likely to be generated, or making geometric paths more likely to be generated, by changing the path generation policy or adjusting the margin around static objects.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#limitations","title":"Limitations","text":"

Here are some notable limitations:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#startend-conditions","title":"Start/End Conditions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#start-conditions","title":"Start Conditions","text":"

The StartPlannerModule is designed to initiate its execution based on specific criteria evaluated by the isExecutionRequested function. The module will not start under the following conditions:

  1. Start pose on the middle of the road: The module will not initiate if the start pose of the vehicle is determined to be in the middle of the road. This ensures the planner starts from a roadside position.

  2. Vehicle is far from start position: If the vehicle is far from the start position, the module will not execute. This prevents redundant execution when a new goal is given.

  3. Vehicle reached goal: The module will not start if the vehicle has already reached its goal position, avoiding unnecessary execution when the destination is attained.

  4. Vehicle in motion: If the vehicle is still moving, the module will defer starting. This ensures that planning occurs from a stable, stationary state for safety.

  5. Goal behind in same route segment: The module will not initiate if the goal position is behind the ego vehicle within the same route segment. This condition is checked to avoid complications with planning routes that require the vehicle to move backward on its current path, which is currently not supported.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#end-conditions","title":"End Conditions","text":"

The StartPlannerModule terminates when specific conditions are met, depending on the type of planner being used. The canTransitSuccessState function determines whether the module should transition to the success state based on the following criteria:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#when-the-freespace-planner-is-active","title":"When the Freespace Planner is active","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#when-any-other-type-of-planner-is-active","title":"When any other type of planner is active","text":"

The transition to the success state is determined by the following conditions:

The flowchart below illustrates the decision-making process in the canTransitSuccessState function:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#concept-of-safety-assurance","title":"Concept of safety assurance","text":"

The approach to collision safety is divided into two main components: generating paths that consider static information, and detecting collisions with dynamic obstacles to ensure the safety of the generated paths.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#1-generating-path-with-static-information","title":"1. Generating path with static information","text":"

Here is an explanation of the start pose search steps, using collision_check_margins set to [2.0, 1.0, 0.5, 0.1] as an example (see the sketch after this list). The process is as follows:

  1. Generating start pose candidates

  2. Starting search at maximum margin

  3. Repeating search according to threshold levels

  4. Continuing the search

  5. Generating a stop path
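
A minimal sketch of this margin-descending search, assuming a hypothetical find_pull_out_path helper that stands in for the actual path generation and static collision check:

#include <optional>
#include <vector>

struct PullOutPath
{
};

// Assumed helper: generates a path that keeps the given margin to static
// obstacles, or returns std::nullopt if no such path exists. Not defined here.
std::optional<PullOutPath> find_pull_out_path(double margin);

std::optional<PullOutPath> search_with_margins(const std::vector<double> & collision_check_margins)
{
  // e.g. {2.0, 1.0, 0.5, 0.1}: start at the maximum margin, then repeat the
  // search with progressively tighter clearances.
  for (const double margin : collision_check_margins) {
    if (auto path = find_pull_out_path(margin)) {
      return path;
    }
  }
  return std::nullopt;  // no valid path at any margin; a stop path is generated instead
}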

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#search-priority","title":"search priority","text":"

If a safe path with sufficient clearance for static obstacles cannot be generated forward, a backward search from the vehicle's current position is conducted to locate a suitable start point for a pull out path generation.

During this backward search, different policies can be applied based on search_priority parameters:

Selecting efficient_path focuses on creating a shift pull out path, regardless of how far back the vehicle needs to move. Opting for short_back_distance aims to find a location with the least possible backward movement.

PriorityOrder is defined as a vector of pairs, where each element consists of a size_t index representing a start pose candidate index, and the planner type. The PriorityOrder vector is processed sequentially from the beginning, meaning that the pairs listed at the top of the vector are given priority in the selection process for pull out path generation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#efficient_path","title":"efficient_path","text":"

When search_priority is set to efficient_path and the preference is for prioritizing shift_pull_out, the PriorityOrder array is populated in such a way that shift_pull_out is grouped together for all start pose candidates before moving on to the next planner type. This prioritization is reflected in the order of the array, with shift_pull_out being listed before geometric_pull_out.

Index Planner Type 0 shift_pull_out 1 shift_pull_out ... ... N shift_pull_out 0 geometric_pull_out 1 geometric_pull_out ... ... N geometric_pull_out

This approach prioritizes trying all candidates with shift_pull_out before proceeding to geometric_pull_out, which may be efficient in situations where shift_pull_out is likely to be appropriate.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#short_back_distance","title":"short_back_distance","text":"

For search_priority set to short_back_distance, the array alternates between planner types for each start pose candidate, which can minimize the distance the vehicle needs to move backward if the earlier candidates are successful.

Index Planner Type 0 shift_pull_out 0 geometric_pull_out 1 shift_pull_out 1 geometric_pull_out ... ... N shift_pull_out N geometric_pull_out

This ordering is beneficial when the priority is to minimize the backward distance traveled, giving an equal chance for each planner to succeed at the closest possible starting position.
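
The two orderings can be sketched as follows, assuming hypothetical PlannerType and make_order names (the module's actual PriorityOrder type is described above):

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

using PlannerType = std::string;  // simplified stand-in for the real planner type
using PriorityOrder = std::vector<std::pair<std::size_t, PlannerType>>;

PriorityOrder make_order(const std::size_t num_candidates, const std::string & search_priority)
{
  const std::vector<PlannerType> planners = {"shift_pull_out", "geometric_pull_out"};
  PriorityOrder order;
  if (search_priority == "efficient_path") {
    // Try every start pose candidate with one planner before the next planner.
    for (const auto & planner : planners) {
      for (std::size_t i = 0; i < num_candidates; ++i) {
        order.emplace_back(i, planner);
      }
    }
  } else {  // "short_back_distance"
    // Alternate planners for each candidate, nearest candidate first.
    for (std::size_t i = 0; i < num_candidates; ++i) {
      for (const auto & planner : planners) {
        order.emplace_back(i, planner);
      }
    }
  }
  return order;
}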

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#2-collision-detection-with-dynamic-obstacles","title":"2. Collision detection with dynamic obstacles","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#example-of-safety-check-performed-range-for-shift-pull-out","title":"example of safety check performed range for shift pull out","text":"

The following gives an example of the safety check range for shift pull out. The safety check is performed from the start of the shift to the end of the shift, and if the vehicle footprint does not leave enough space for a rear vehicle to drive through, the safety check against dynamic objects is disabled.

As a note, no safety check is performed during backward movements. Safety verification commences at the point where the backward motion ceases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#rtc-interface","title":"RTC interface","text":"

The system operates distinctly under manual and auto mode, especially concerning behavior before departure and interaction with dynamic obstacles. Below are the specific behaviors for each mode:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#when-approval-is-required","title":"When approval is required?","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#forward-driving","title":"Forward driving","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#backward-driving-forward-driving","title":"Backward driving + forward driving","text":"

This differentiation ensures that the vehicle operates safely by requiring human intervention in manual mode (enable_rtc is true) at critical junctures and by incorporating automatic safety checks in both modes to prevent unsafe operations in the presence of dynamic obstacles.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#general-parameters-for-start_planner","title":"General parameters for start_planner","text":"Name Unit Type Description Default value th_arrived_distance [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time [s] double time threshold for arrival of path termination 1.0 th_distance_to_middle_of_the_road [m] double distance threshold to determine if the vehicle is on the middle of the road 0.1 collision_check_margins [m] double Obstacle collision check margins list [2.0, 1.0, 0.5, 0.1] shift_collision_check_distance_from_end [m] double collision check distance from end shift end pose -10.0 geometric_collision_check_distance_from_end [m] double collision check distance from end geometric end pose 0.0 collision_check_margin_from_front_object [m] double collision check margin from front object 5.0 skip_rear_vehicle_check - bool flag to skip rear vehicle check (rear vehicle check is performed to skip safety check and proceed with departure when the ego vehicle is obstructing the progress of a rear vehicle) false extra_width_margin_for_rear_obstacle [m] double extra width that is added to the perceived rear obstacle's width when deciding if the rear obstacle can overtake the ego vehicle while it is merging to a lane from the shoulder lane 0.5 object_types_to_check_for_path_generation.check_car - bool flag to check car for path generation true object_types_to_check_for_path_generation.check_truck - bool flag to check truck for path generation true object_types_to_check_for_path_generation.check_bus - bool flag to check bus for path generation true object_types_to_check_for_path_generation.check_bicycle - bool flag to check bicycle for path generation true object_types_to_check_for_path_generation.check_motorcycle - bool flag to check motorcycle for path generation true object_types_to_check_for_path_generation.check_pedestrian - bool flag to check pedestrian for path generation true object_types_to_check_for_path_generation.check_unknown - bool flag to check unknown for path generation true center_line_path_interval [m] double reference center line path point interval 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#ego-vehicles-velocity-planning","title":"Ego vehicle's velocity planning","text":"

WIP

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#safety-check-in-free-space-area","title":"Safety check in free space area","text":"

WIP

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-safety-check","title":"Parameters for safety check","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#stop-condition-parameters","title":"Stop Condition Parameters","text":"

Parameters under stop_condition define the criteria for stopping conditions.

Name Unit Type Description Default value maximum_deceleration_for_stop [m/s^2] double Maximum deceleration allowed for a stop 1.0 maximum_jerk_for_stop [m/s^3] double Maximum jerk allowed for a stop 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#ego-predicted-path-parameters","title":"Ego Predicted Path Parameters","text":"

Parameters under path_safety_check.ego_predicted_path specify the ego vehicle's predicted path characteristics.

Name Unit Type Description Default value min_velocity [m/s] double Minimum velocity of the ego vehicle's predicted path 0.0 acceleration [m/s^2] double Acceleration for the ego vehicle's predicted path 1.0 max_velocity [m/s] double Maximum velocity of the ego vehicle's predicted path 1.0 time_horizon_for_front_object [s] double Time horizon for predicting front objects 10.0 time_horizon_for_rear_object [s] double Time horizon for predicting rear objects 10.0 time_resolution [s] double Time resolution for the ego vehicle's predicted path 0.5 delay_until_departure [s] double Delay until the ego vehicle departs 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#target-object-filtering-parameters","title":"Target Object Filtering Parameters","text":"

Parameters under target_filtering are related to filtering target objects for safety check.

Name Unit Type Description Default value safety_check_time_horizon [s] double Time horizon for predicted paths of the ego and dynamic objects 5.0 safety_check_time_resolution [s] double Time resolution for predicted paths of the ego and dynamic objects 1.0 object_check_forward_distance [m] double Forward distance for object detection 10.0 object_check_backward_distance [m] double Backward distance for object detection 100.0 ignore_object_velocity_threshold [m/s] double Velocity threshold below which objects are ignored 1.0 object_types_to_check.check_car - bool Flag to check cars true object_types_to_check.check_truck - bool Flag to check trucks true object_types_to_check.check_bus - bool Flag to check buses true object_types_to_check.check_trailer - bool Flag to check trailers true object_types_to_check.check_bicycle - bool Flag to check bicycles true object_types_to_check.check_motorcycle - bool Flag to check motorcycles true object_types_to_check.check_pedestrian - bool Flag to check pedestrians true object_types_to_check.check_unknown - bool Flag to check unknown object types false object_lane_configuration.check_current_lane - bool Flag to check the current lane true object_lane_configuration.check_right_side_lane - bool Flag to check the right side lane true object_lane_configuration.check_left_side_lane - bool Flag to check the left side lane true object_lane_configuration.check_shoulder_lane - bool Flag to check the shoulder lane true object_lane_configuration.check_other_lane - bool Flag to check other lanes false include_opposite_lane - bool Flag to include the opposite lane in check false invert_opposite_lane - bool Flag to invert the opposite lane check false check_all_predicted_path - bool Flag to check all predicted paths true use_all_predicted_path - bool Flag to use all predicted paths true use_predicted_path_outside_lanelet - bool Flag to use predicted paths outside of lanelets false"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#safety-check-parameters","title":"Safety Check Parameters","text":"

Parameters under safety_check_params define the configuration for safety check.

Name Unit Type Description Default value enable_safety_check - bool Flag to enable safety check true check_all_predicted_path - bool Flag to check all predicted paths true publish_debug_marker - bool Flag to publish debug markers false rss_params.rear_vehicle_reaction_time [s] double Reaction time for rear vehicles 2.0 rss_params.rear_vehicle_safety_time_margin [s] double Safety time margin for rear vehicles 1.0 rss_params.lateral_distance_max_threshold [m] double Maximum lateral distance threshold 2.0 rss_params.longitudinal_distance_min_threshold [m] double Minimum longitudinal distance threshold 3.0 rss_params.longitudinal_velocity_delta_time [s] double Delta time for longitudinal velocity 0.8 hysteresis_factor_expand_rate - double Rate to expand/shrink the hysteresis factor 1.0 collision_check_yaw_diff_threshold - double Maximum yaw difference between ego and object when executing rss-based collision checking 1.578"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#path-generation","title":"Path Generation","text":"

There are two path generation methods.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#shift-pull-out","title":"shift pull out","text":"

This is the most basic method of starting path planning and is used on road lanes and shoulder lanes when there is no particular obstruction.

The pull out distance is calculated from the speed, lateral deviation, and lateral jerk. The lateral jerk is searched for between the predetermined minimum and maximum values, and the value that generates a safe path is selected.
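
Based on the path shifter library function calcLongitudinalDistFromJerk quoted later in this document, the pull out distance L for a lateral shift l, lateral jerk j and velocity v can be written as:

\[ L = 4 v \left( \frac{l}{2j} \right)^{1/3} \]

If the calculated distance is shorter than minimum_shift_pull_out_distance, that parameter value is used for path generation instead (see the parameter table below).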

shift pull out video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-shift-pull-out","title":"parameters for shift pull out","text":"Name Unit Type Description Default value enable_shift_pull_out [-] bool flag whether to enable shift pull out true check_shift_path_lane_departure [-] bool flag whether to check if shift path footprints are out of lane true allow_check_shift_path_lane_departure_override [-] bool flag to override/cancel lane departure check if the ego vehicle's starting pose is already out of lane false shift_pull_out_velocity [m/s] double velocity of shift pull out 2.0 pull_out_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.1 minimum_shift_pull_out_distance [m] double minimum shift pull out distance. if calculated pull out distance is shorter than this, use this for path generation. 0.0 maximum_curvature [1/m] double maximum curvature. Calculate the required pull out distance from this maximum curvature, assuming the reference path is considered a straight line and shifted by two approximate arcs. This does not compensate for curvature in a shifted path or curve. 0.07 end_pose_curvature_threshold [1/m] double The curvature threshold which is used for calculating the shift pull out distance. The shift end pose is shifted forward so that curvature on shift end pose is less than this value. This is to prevent the generated path from having a large curvature when the end pose is on a curve. If a shift end pose with a curvature below the threshold is not found, the shift pull out distance is used as the distance to the point with the lowest curvature among the points beyond a certain distance. 0.01"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#geometric-pull-out","title":"geometric pull out","text":"

Two arc paths with discontinuous curvature are generated. The ego vehicle stops once in the middle of the path to steer on the spot. See also [1] for details of the algorithm.

geometric pull out video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-geometric-pull-out","title":"parameters for geometric pull out","text":"Name Unit Type Description Default value enable_geometric_pull_out [-] bool flag whether to enable geometric pull out true divide_pull_out_path [-] bool flag whether to divide arc paths. The path is assumed to be divided because the curvature is not continuous. But it requires a stop during the departure. false geometric_pull_out_velocity [m/s] double velocity of geometric pull out 1.0 lane_departure_margin [m] double margin of deviation to lane right 0.2 lane_departure_check_expansion_margin [m] double margin to expand the ego vehicle footprint when doing lane departure checks 0.0 pull_out_max_steer_angle [rad] double maximum steer angle for path generation 0.26"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#backward-pull-out-start-point-search","title":"backward pull out start point search","text":"

If a safe path cannot be generated from the current position, search backward for a pull out start point at regular intervals (default: 2.0 m).

pull out after backward driving video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-backward-pull-out-start-point-search","title":"parameters for backward pull out start point search","text":"Name Unit Type Description Default value enable_back [-] bool flag whether to search backward for start_point true search_priority [-] string In the case of efficient_path, use efficient paths even if the back distance is longer. In case of short_back_distance, use a path with as short a back distance efficient_path max_back_distance [m] double maximum back distance 30.0 backward_search_resolution [m] double distance interval for searching backward pull out start point 2.0 backward_path_update_duration [s] double time interval for searching backward pull out start point. this prevents chattering between back driving and pull_out 3.0 ignore_distance_from_lane_end [m] double If distance from shift start pose to end of shoulder lane is less than this value, this start pose candidate is ignored 15.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#freespace-pull-out","title":"freespace pull out","text":"

If the vehicle gets stuck when pulling out along lanes, execute freespace pull out. To run this feature, you need to add parking_lot to the map, set activate_by_scenario of costmap_generator to false, and set enable_freespace_planner to true.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#unimplemented-parts-limitations-for-freespace-pull-out","title":"Unimplemented parts / limitations for freespace pull out","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-freespace-parking","title":"Parameters freespace parking","text":"Name Unit Type Description Default value enable_freespace_planner [-] bool this flag activates a free space pullout that is executed when a vehicle is stuck due to obstacles in the lanes where the ego is located true end_pose_search_start_distance [m] double distance from ego to the start point of the search for the end point in the freespace_pull_out driving lane 20.0 end_pose_search_end_distance [m] double distance from ego to the end point of the search for the end point in the freespace_pull_out driving lane 30.0 end_pose_search_interval [m] bool interval to search for the end point in the freespace_pull_out driving lane 2.0

See freespace_planner for other parameters.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/","title":"Avoidance module for static objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#avoidance-module-for-static-objects","title":"Avoidance module for static objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#purposerole","title":"Purpose/Role","text":"

This is a rule-based avoidance module, which runs based on perception output data, HDMap, current path and route. This module is designed to create avoidance paths for static (=stopped) objects in simple situations. Currently, this module doesn't support dynamic (=moving) objects.

This module has an RTC interface, and the user can select operation mode from MANUAL/AUTO depending on the performance of the vehicle's sensors. If the user selects MANUAL mode, this module outputs a candidate avoidance path and awaits operator approval. In the case where the sensor/perception performance is insufficient and false positives occur, we recommend MANUAL mode to prevent unnecessary avoidance maneuvers.

If the user selects AUTO mode, this module modifies the current following path without operator approval. If the sensor/perception performance is sufficient, use AUTO mode.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitations","title":"Limitations","text":"

This module allows developers to design vehicle behavior in avoidance planning using specific rules. Due to the nature of rule-based planning, the algorithm cannot guarantee collision avoidance in complex cases. This is a trade-off between "being intuitive and easy to design" and "being hard to tune but able to handle many cases". This module adopts the former policy, and therefore its output should be checked more strictly in a later stage. In the .iv reference implementation, there is another avoidance module in the motion planning module that uses optimization to handle avoidance in complex cases. (Note that the motion planner needs to be adjusted so that the behavior result does not change much in simple cases; this is a typical challenge for the behavior-motion hierarchical architecture.)

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#why-is-avoidance-in-behavior-module","title":"Why is avoidance in behavior module?","text":"

This module executes avoidance over lanes, and the decision requires the lane structure information to take care of traffic rules (e.g. it needs to send an indicator signal when the vehicle crosses a lane). The difference between the motion and behavior modules in the planning stack is whether the planner takes traffic rules into account, which is why this avoidance module exists in the behavior module.

If you would like to know the overview rather than the detail, please skip the next section and refer to FAQ.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#inner-workingsalgorithms","title":"Inner workings/Algorithms","text":"

This module mainly has two parts: target filtering and path generation. First, all objects are filtered by several conditions. In this step, the module checks avoidance feasibility and necessity. After that, the module generates an avoidance path outline, which we call a shift line, based on the filtered objects. The shift lines are set into the path shifter, which is a library for path generation, to create a smooth shift path. Additionally, this module has a feature to check non-target objects so that the ego can avoid target objects safely. This feature receives a generated avoidance path and surrounding objects, and judges the current situation. Lastly, this module updates the current ego behavior.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#target-object-filtering","title":"Target object filtering","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#overview","title":"Overview","text":"

The module uses the following conditions to filter avoidance target objects.

Check condition Target class Details If conditions are not met Is an avoidance target class object? All The user can select avoidance target classes from the config file. Never avoid it. Is a stopped object? All Objects that keep a higher speed than th_moving_speed for a longer time than th_moving_time are judged as moving. Never avoid it. Is within detection area? All The module creates a detection area to filter target objects roughly based on the lateral margin in the config file. (see here) Never avoid it. Isn't there enough lateral distance between the object and path? All - Never avoid it. Is near the centerline of ego lane? All - It depends on other conditions. Is there a crosswalk near the object? Pedestrian, Bicycle The module doesn't avoid Pedestrians and Bicycles near a crosswalk because the ego should stop in front of it if they're crossing the road. (see here) Never avoid it. Is the distance between the object and traffic light along the path longer than the threshold? Car, Truck, Bus, Trailer The module uses this condition when there is ambiguity about whether the vehicle is parked. It depends on other conditions. Is the distance between the object and crosswalk along the path longer than the threshold? Car, Truck, Bus, Trailer Same as above. It depends on other conditions. Is the stopping time longer than the threshold? Car, Truck, Bus, Trailer Same as above. It depends on other conditions. Is within intersection? Car, Truck, Bus, Trailer The module assumes that there isn't any parked vehicle within an intersection. It depends on other conditions. Is on ego lane? Car, Truck, Bus, Trailer - It depends on other conditions. Is a parked vehicle? Car, Truck, Bus, Trailer The module judges whether the vehicle is a parked vehicle based on its lateral offset. (see here) It depends on other conditions. Is merging into ego lane from other lane? Car, Truck, Bus, Trailer The module judges the vehicle behavior based on its yaw angle and offset direction. (see here) It depends on other conditions. Is merging into other lane from ego lane? Car, Truck, Bus, Trailer Same as above. It depends on other conditions."},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#common-conditions","title":"Common conditions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#detection-area","title":"Detection area","text":"

The module generates detection area for target filtering based on the following parameters:

      # avoidance is performed for the object type with true\ntarget_object:\n...\nlateral_margin:\nsoft_margin: 0.3                            # [m]\nhard_margin: 0.2                            # [m]\nhard_margin_for_parked_vehicle: 0.7         # [m]\n...\n# For target object filtering\ntarget_filtering:\n...\n# detection area generation parameters\ndetection_area:\nstatic: false                                 # [-]\nmin_forward_distance: 50.0                    # [m]\nmax_forward_distance: 150.0                   # [m]\nbackward_distance: 10.0                       # [m]\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#width-of-detection-area","title":"Width of detection area","text":"
  1. Get the largest lateral margin of all classes (Car, Truck, ...). The margin is the sum of soft_margin and hard_margin_for_parked_vehicle.
  2. The detection area width is the sum of ego vehicle width and the largest lateral margin.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#longitudinal-distance-of-detection-area","title":"Longitudinal distance of detection area","text":"

If the parameter detection_area.static is set to true, the module creates detection area whose longitudinal distance is max_forward_distance.

If the parameter detection_area.static is set to false, the module creates a detection area so that the ego can avoid objects with minimum lateral jerk value. Thus, the longitudinal distance depends on maximum lateral shift length, lateral jerk constraints and current ego speed. Additionally, it has to consider the distance used for the preparation phase.

...\nconst auto max_shift_length = std::max(\nstd::abs(parameters_->max_right_shift_length), std::abs(parameters_->max_left_shift_length));\nconst auto dynamic_distance =\nPathShifter::calcLongitudinalDistFromJerk(max_shift_length, getLateralMinJerkLimit(), speed);\n\nreturn std::clamp(\n1.5 * dynamic_distance + getNominalPrepareDistance(),\nparameters_->object_check_min_forward_distance,\nparameters_->object_check_max_forward_distance);\n

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-non-vehicle-type-objects","title":"Conditions for non-vehicle type objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#for-crosswalk-users","title":"For crosswalk users","text":"

If Pedestrians and Bicycles are closer to a crosswalk than the threshold of 2.0 m (hard-coded for now), the module judges that they're crossing the road and never avoids them.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-vehicle-type-objects","title":"Conditions for vehicle type objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#judge-vehicle-behavior","title":"Judge vehicle behavior","text":"

The module classifies vehicles into the following three behaviors based on yaw angle and offset direction.

# params for filtering objects that are in intersection\nintersection:\nyaw_deviation: 0.349 # [rad] (default 20.0deg)\n
Behavior Details Figure NONE If the object's relative yaw angle to lane is less than threshold yaw_deviation, it is classified into NONE. MERGING See following flowchart. DEVIATING See following flowchart.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#judge-if-its-a-parked-vehicle","title":"Judge if it's a parked vehicle","text":"

Not only the length from the centerline but also the length from the road shoulder is calculated and used for the filtering process. In this logic, the module calculates the ratio of the actual shift length L_a to the shiftable shift length L_d as follows. If the result is larger than the threshold th_shiftable_ratio, the module judges that the vehicle is a parked vehicle.

\\[ L_{d} = \\frac{W_{lane} - W_{obj}}{2}, \\\\ ratio = \\frac{L_{a}}{L_{d}} \\]

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#target-object-filtering_1","title":"Target object filtering","text":"Situation Details Ego behavior Vehicle is within intersection area defined in HDMap. The module ignores vehicles following a lane or merging into ego lane. Never avoid it. Vehicle is on ego lane. There are adjacent lanes for both sides. Never avoid it. Vehicle is merging into other lane from ego lane. Most of its footprint is on ego lane. Never avoid it. Vehicle is merging into ego lane from other lane. Most of its footprint is on ego lane. Never avoid it. Vehicle does not appear to be parked and is stopped in front of a crosswalk or traffic light. Never avoid it. Vehicle stops on ego lane while pulling over to the side of the road. Avoid it immediately. Vehicle stops on adjacent lane. Avoid it immediately. Vehicle stops on ego lane without pulling over to the side of the road. Set the parameter avoidance_for_ambiguous_vehicle.enable to true, the module avoids ambiguous vehicle. Vehicle is merging into ego lane from other lane. Set the parameter avoidance_for_ambiguous_vehicle.enable to true, the module avoids ambiguous vehicle. Vehicle is merging into other lane from ego lane. Set the parameter avoidance_for_ambiguous_vehicle.enable to true, the module avoids ambiguous vehicle."},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#flowchart","title":"Flowchart","text":"

There are three main filtering functions isSatisfiedWithCommonCondition(), isSatisfiedWithVehicleCondition() and isSatisfiedWithNonVehicleCondition(). The filtering process is executed according to the following flowchart. Additionally, the module checks avoidance necessity in isNoNeedAvoidanceBehavior() based on the object pose, ego path and lateral margin in the config file.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#common-conditions_1","title":"Common conditions","text":"

First, the function isSatisfiedWithCommonCondition() checks conditions used for all object classes.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-vehicle","title":"Conditions for vehicle","text":"

Target class:

As a next step, the object is filtered by a condition specialized for its class.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-non-vehicle-objects","title":"Conditions for non-vehicle objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#when-target-object-has-gone","title":"When target object has gone","text":"

The user can select the ego behavior for when the target object has gone.

cancel:\nenable: true # [-]\n

If the above parameter is true, this module reverts the avoidance path when the following conditions are met.

If the parameter is false, this module keeps running even after the target object has gone.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#path-generation","title":"Path generation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-to-prevent-shift-line-chattering-that-is-caused-by-perception-noise","title":"How to prevent shift line chattering that is caused by perception noise","text":"

Since the object recognition result contains noise related to position, orientation and polygon shape, if the module uses the raw object recognition results in path generation, the output path will be directly affected by the noise. Therefore, in order to reduce the influence of the noise, this module generates a polygon for each target object, and the output path is generated based on that.

The envelope polygon is a rectangular box whose size depends on the object's polygon and the buffer parameter envelope_buffer_margin. Additionally, it is always parallel to the reference path. When the module finds a target object for the first time, it initializes the polygon.

        car:\n...\nenvelope_buffer_margin: 0.5                   # [m] FOR DEVELOPER\n

The module creates a one-shot envelope polygon by using the latest object pose and raw polygon in every planning cycle. On the other hand, the module uses the envelope polygon information created in the last planning cycle in order to update the envelope polygon with consideration of the pose covariance.

If the pose covariance is smaller than the threshold, the envelope polygon would be updated according to the following logic. If the one-shot envelope polygon is not within the previous envelope polygon, the module creates a new envelope polygon. Otherwise, it keeps the previous envelope polygon.

When the pose covariance is larger than the threshold, it is compared with the maximum pose covariance of each object. If the value is smaller, the one-shot envelope polygon is used directly as the envelope polygon. Otherwise, it keeps the previous envelope polygon.

By doing this process, the envelope polygon size and pose will converge even if perception output includes noise in object pose or shape.
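
A minimal sketch of this update logic, assuming hypothetical Polygon and is_within names (the actual geometry types and covariance handling live in the module):

struct Polygon
{
};

// Assumed helper: returns true if the inner polygon lies within the outer one.
bool is_within(const Polygon & inner, const Polygon & outer);

Polygon update_envelope(
  const Polygon & one_shot, const Polygon & previous, const double pose_covariance,
  const double covariance_threshold, const double max_pose_covariance)
{
  if (pose_covariance < covariance_threshold) {
    // Reliable pose: keep the previous polygon unless the one-shot polygon
    // escapes it, in which case a new envelope polygon is created.
    return is_within(one_shot, previous) ? previous : one_shot;
  }
  // Unreliable pose: use the one-shot polygon directly only while the
  // covariance stays below the object's maximum pose covariance.
  return (pose_covariance < max_pose_covariance) ? one_shot : previous;
}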

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#relationship-between-envelope-polygon-and-avoidance-path","title":"Relationship between envelope polygon and avoidance path","text":"

The avoidance path has two shift sections, whose start or end point position depends on the envelope polygon. The end point of the avoidance shift section and start point of the return shift section are fixed based on the envelope polygon and the other side edges are dynamically changed based on ego speed, shift length, lateral jerk constraints, etc.

The lateral positions of the two points are decided so that there is enough space (=lateral margin) between the ego body and the most overhanging point of the envelope polygon edge points. The user can adjust the lateral margin with the following parameters.

        car:\n...\nlateral_margin:\nsoft_margin: 0.3                            # [m]\nhard_margin: 0.2                            # [m]\nhard_margin_for_parked_vehicle: 0.7         # [m]\n

The longitudinal positions depend on the envelope polygon, the ego vehicle specification, and the following parameters. The longitudinal distance between the avoidance shift section end point and the envelope polygon (=front longitudinal buffer) is the sum of front_overhang defined in vehicle_info.param.yaml and longitudinal_margin if the parameter consider_front_overhang is true. If consider_front_overhang is false, only longitudinal_margin is considered. Similarly, the distance between the return shift section start point and the envelope polygon (=rear longitudinal buffer) is the sum of rear_overhang and longitudinal_margin.

      target_object:\ncar:\n...\nlongitudinal_margin: 0.0                      # [m]\n\n...\navoidance:\n...\nlongitudinal:\n...\nconsider_front_overhang: true                 # [-]\nconsider_rear_overhang: true                  # [-]\n
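
In summary, when consider_front_overhang and consider_rear_overhang are both true, the two buffers are:

\[ buffer_{front} = front\_overhang + longitudinal\_margin, \\ buffer_{rear} = rear\_overhang + longitudinal\_margin \]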

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#lateral-margin","title":"Lateral margin","text":"

As mentioned above, the user can adjust the lateral margin by changing the following two types of parameters. The soft_margin is a soft constraint parameter for the lateral margin. The hard_margin and hard_margin_for_parked_vehicle are hard constraint parameters.

        car:\n...\nlateral_margin:\nsoft_margin: 0.3                            # [m]\nhard_margin: 0.2                            # [m]\nhard_margin_for_parked_vehicle: 0.7         # [m]\n

Basically, this module tries to generate an avoidance path in order to keep lateral distance, which is the sum of soft_margin and hard_margin/hard_margin_for_parked_vehicle, from the avoidance target object.

But if there isn't enough space to keep soft_margin distance, this module shortens soft constraint lateral margin. The parameter soft_margin is a maximum value of soft constraint, and actual soft margin can be a value between 0.0 and soft_margin. On the other hand, this module definitely keeps hard_margin or hard_margin_for_parked_vehicle depending on the situation. Thus, the minimum value of total lateral margin is hard_margin/hard_margin_for_parked_vehicle, and the maximum value is the sum of hard_margin/hard_margin_for_parked_vehicle and soft_margin.
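
For example, with the values above (soft_margin: 0.3 m, hard_margin: 0.2 m), the total lateral margin for a non-parked vehicle lies in the range:

\[ 0.2 \, \mathrm{m} \le margin_{total} \le 0.5 \, \mathrm{m} \]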

The following figure shows the situation where this module shortens the lateral soft constraint in order not to drive in the opposite lane when the user sets the parameter use_lane_type to same_direction_lane.

This module avoids not only parked vehicles but also non-parked vehicles that stop temporarily for some reason (e.g., waiting for a traffic light to change from red to green). Additionally, this module has two types of hard margin parameters, hard_margin and hard_margin_for_parked_vehicle, and judges whether each vehicle is a parked vehicle because it takes into consideration the risk of vehicle doors opening suddenly and people getting out of parked vehicles.

Users should set hard_margin_for_parked_vehicle larger than hard_margin to prevent collisions with doors or people who suddenly exit a vehicle.

This module has only one parameter, soft_margin, for the soft lateral margin constraint.

As the hard margin parameters define the distance the user definitely wants to maintain, they are used in the logic to check whether the ego can pass the side of the target object without executing an avoidance maneuver as well.

If the lateral distance is less than hard_margin/hard_margin_for_parked_vehicle when assuming that the ego follows the current lane without an avoidance maneuver, this module judges that the ego cannot pass the side of the object safely and must avoid it. In this case, this module inserts a stop point until the avoidance maneuver is allowed to execute so that the ego can avoid the object after approval. (For example, the ego keeps stopping in front of such an object until the operator approves the avoidance maneuver if the module is in MANUAL mode.)

On the other hand, if the lateral distance is larger than hard_margin/hard_margin_for_parked_vehicle, this module doesn't insert a stop point even when it is waiting for approval because it thinks it is possible to pass the side of the object safely.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#when-there-is-not-enough-space","title":"When there is not enough space","text":"

This module inserts a stop point only when the ego can potentially avoid the object. So, if it is not able to keep a distance of more than hard_margin/hard_margin_for_parked_vehicle, this module does nothing. The following figure shows the situation where this module is not able to keep enough lateral distance when the user sets the parameter use_lane_type to same_direction_lane.

Info

In this situation, the obstacle stop feature in obstacle_cruise_planner is responsible for ego vehicle safety.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#shift-length-calculation","title":"Shift length calculation","text":"

The lateral shift length is the sum of overhang_distance, the lateral margin, whose value is set in the config file, and half of the ego vehicle width defined in vehicle_info.param.yaml. On the other hand, the module limits the shift length depending on the space available for an avoidance maneuver and the parameters soft_drivable_bound_margin and hard_drivable_bound_margin. Basically, the shift length is limited so that the ego doesn't get closer than soft_drivable_bound_margin to the drivable boundary, but the module allows the threshold to be relaxed from soft_drivable_bound_margin to hard_drivable_bound_margin when the road is narrow.

Usable lanes for the avoidance module can be selected using the config file.

      ...\n# drivable lane setting. This module is able to use not only current lane but also right/left lane\n# if the current lane(=lanelet::Lanelet) and the right/left lane share the boundary(=lanelet::Linestring) in HDMap.\n# \"current_lane\"           : use only current lane. This module doesn't use adjacent lane to avoid object.\n# \"same_direction_lane\"    : this module uses same direction lane to avoid object if needed.\n# \"opposite_direction_lane\": this module uses both same direction and opposite direction lanes.\nuse_lane_type: \"opposite_direction_lane\"\n

When the user sets the parameter use_lane_type to opposite_direction_lane, it is possible to use the opposite lane.

When the user sets the parameter use_lane_type to same_direction_lane, the module doesn't create a path that overlaps the opposite lane.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#shift-line-generation","title":"Shift line generation","text":"

As mentioned above, the end point of the avoidance shift path and the start point of the return shift path, which are FIXED points, are calculated from the envelope polygon. As a next step, the module adjusts the other side points depending on the shift length, current ego speed, and the lateral jerk constraint parameters defined in the config file.

Since the two points are always on the centerline of the ego lane, the module only calculates the longitudinal distance between the shift start and end points based on the following function. This function is defined in the path shifter library. See this page as well.

double PathShifter::calcLongitudinalDistFromJerk(\nconst double lateral, const double jerk, const double velocity)\n{\nconst double j = std::abs(jerk);\nconst double l = std::abs(lateral);\nconst double v = std::abs(velocity);\nif (j < 1.0e-8) {\nreturn 1.0e10;\n}\nreturn 4.0 * std::pow(0.5 * l / j, 1.0 / 3.0) * v;\n}\n
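
As a usage example, for a lateral shift of 1.0 m, a lateral jerk of 0.5 m/s^3 and a velocity of 5.0 m/s, the function returns 4.0 * (0.5 * 1.0 / 0.5)^(1/3) * 5.0 = 20.0 m as the longitudinal distance required for the shift.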

We call the line that connects the shift start and end points the shift_line; the avoidance path is generated from it with spline completion.

The start point of the avoidance has another longitudinal constraint. In order to keep the blinker on for a few seconds before starting the avoidance maneuver, the avoidance start point must be farther from the ego position than a speed-dependent distance (we call this distance prepare_length).

longitudinal:\nmin_prepare_time: 1.0 # [s]\nmax_prepare_time: 2.0 # [s]\nmin_prepare_distance: 1.0 # [m]\n

The prepare_length is calculated as the product of ego speed and max_prepare_time. (When the ego speed is zero, min_prepare_distance is used.)
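
A minimal sketch of this computation, assuming a hypothetical calc_prepare_length name; taking the maximum also covers the zero-speed case described above:

#include <algorithm>

double calc_prepare_length(
  const double ego_speed, const double max_prepare_time, const double min_prepare_distance)
{
  // prepare_length = ego speed x max_prepare_time, with min_prepare_distance
  // as the lower bound (used when the ego speed is zero).
  return std::max(ego_speed * max_prepare_time, min_prepare_distance);
}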

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#planning-at-red-traffic-light","title":"Planning at RED traffic light","text":"

This module takes traffic light information into account so that the ego can behave properly. Sometimes the ego straddles the lane boundary, but we want to prevent the ego from stopping in front of a red traffic signal in such a situation, because the ego would block adjacent lanes, which is inconvenient for other vehicles.

So, this module controls shift length and shift start/end point in order to prevent the above situation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#control-shift-length","title":"Control shift length","text":"

First, if the ego hasn't initiated an avoidance maneuver yet, this module limits the maximum shift length and uses ONLY the current lane during a red traffic signal. This prevents the ego from blocking other vehicles even if this module executes the avoidance maneuver and the ego is caught by a red traffic signal.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#control-avoidance-shift-start-point","title":"Control avoidance shift start point","text":"

Additionally, if the target object is farther than the stop line of the traffic light, this module sets the avoidance shift start point on the stop line in order to prevent the ego from stopping at a red traffic signal in the middle of an avoidance maneuver.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#control-return-shift-end-point","title":"Control return shift end point","text":"

If the ego has already initiated an avoidance maneuver, this module tries to set the return shift end point on the stop line.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#safety-check","title":"Safety check","text":"

This feature can be enabled by setting the following parameter to true.

      safety_check:\n...\nenable: true                                    # [-]\n

This module pays attention not only to avoidance target objects but also non-target objects that are near the avoidance path, and if the avoidance path is unsafe due to surrounding objects, it reverts the avoidance maneuver and yields the lane to them.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#yield-maneuver","title":"Yield maneuver","text":"

Additionally, this module basically inserts a stop point in front of an avoidance target during yielding maneuvers in order to keep enough distance to avoid the target when it is safe to do so. If the shift side lane is congested, the ego stops at a point and waits.

This feature can be enabled by setting the following parameter to true.

yield:\nenable: true # [-]\n

But if the lateral margin is larger than hard_margin (or hard_margin_for_parked_vehicle), this module doesn't insert a stop point because the ego can pass the side of the object safely without an avoidance maneuver.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#safety-check-target-lane","title":"Safety check target lane","text":"

The user can select the safety check area with the following parameters. Basically, we recommend the following configuration, which checks only the shift side lane. If users want to confirm safety more strictly, please set check_current_lane and/or check_other_side_lane to true.

      safety_check:\n...\ncheck_current_lane: false                       # [-]\ncheck_shift_side_lane: true                     # [-]\ncheck_other_side_lane: false                    # [-]\n

In the avoidance module, the function path_safety_checker::isCentroidWithinLanelet is used for filtering objects by lane.

bool isCentroidWithinLanelet(const PredictedObject & object, const lanelet::ConstLanelet & lanelet)\n{\nconst auto & object_pos = object.kinematics.initial_pose_with_covariance.pose.position;\nlanelet::BasicPoint2d object_centroid(object_pos.x, object_pos.y);\nreturn boost::geometry::within(object_centroid, lanelet.polygon2d().basicPolygon());\n}\n

Info

If check_current_lane and/or check_other_side_lane are set to true, the possibility of false positives and unnecessary yield maneuvers increases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#safety-check-algorithm","title":"Safety check algorithm","text":"

This module uses the common safety check logic implemented in the path_safety_checker library. See this page.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitation-1","title":"Limitation-1","text":"

The current behavior when the module judges the situation to be unsafe is quite conservative, because aggressive maneuvers (e.g. increasing speed to widen the gap to a rear vehicle) are difficult to achieve with the current planning architecture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitation-2","title":"Limitation-2","text":"

The yield maneuver is executed ONLY when the vehicle has NOT yet initiated an avoidance maneuver. (This module checks objects in the opposite lane, but it would be necessary to detect very distant objects to judge whether the situation is unsafe before the avoidance maneuver.) If a vehicle approaching the ego is detected during an avoidance maneuver, this module doesn't revert the path or insert a stop point. For now, there is no feature in this module to deal with this situation, so either a new module is needed to adjust the path against moving objects, or an operator must override.

Info

This module has a threshold parameter th_avoid_execution for the shift length, and judges that the vehicle is initiating avoidance if the vehicle's current shift exceeds this value.
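
In essence, the check reduces to the following sketch (assuming the current shift length is available to the module):

#include <cmath>\n\n// Sketch: the ego is judged to be initiating avoidance once the absolute\n// value of its current lateral shift exceeds th_avoid_execution.\nbool isInitiatingAvoidance(const double current_shift_length, const double th_avoid_execution)\n{\nreturn std::abs(current_shift_length) > th_avoid_execution;\n}\n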

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#other-features","title":"Other features","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#compensation-for-detection-lost","title":"Compensation for detection lost","text":"

In order to prevent chattering of recognition results, once an obstacle is targeted, it is kept as a target for a while even if it disappears. This is effective when recognition is unstable. However, since it results in over-detection (increases the number of false positives), the parameters must be adjusted according to the recognition accuracy (if object_last_seen_threshold = 0.0, the recognition result is 100% trusted).
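
A minimal sketch of this compensation logic, assuming a per-object timestamp of the last detection (the names are illustrative, not the module's actual API):

// Sketch: an object stays registered while the elapsed time since its last\n// detection is within object_last_seen_threshold. With a threshold of 0.0\n// the recognition result is fully trusted and the object is dropped at once.\nstruct RegisteredObject\n{\ndouble last_seen_time;  // [s] timestamp of the latest detection\n};\n\nbool shouldKeepObject(const RegisteredObject & object, const double now, const double object_last_seen_threshold)\n{\nreturn (now - object.last_seen_time) <= object_last_seen_threshold;\n}\n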

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#drivable-area-expansion","title":"Drivable area expansion","text":"

This module supports drivable area expansion for the following polygons defined in the HD map.

Set the corresponding flags to true when you want to allow those areas to be used in avoidance maneuvers.

# drivable lane setting. This module is able to use not only current lane but also right/left lane\n# if the current lane(=lanelet::Lanelet) and the right/left lane share the boundary(=lanelet::Linestring) in HDMap.\n# \"current_lane\"           : use only current lane. This module doesn't use adjacent lane to avoid object.\n# \"same_direction_lane\"    : this module uses the same direction lane to avoid object if needed.\n# \"opposite_direction_lane\": this module uses both same direction and opposite direction lanes.\nuse_lane_type: \"opposite_direction_lane\"\n# drivable area setting\nuse_intersection_areas: true\nuse_hatched_road_markings: true\nuse_freespace_areas: true\n
[Figures: avoidance with use_lane_type: same_direction_lane and with use_lane_type: opposite_direction_lane] intersection area: defined in the Lanelet2 map (see here). hatched road markings: defined in the Lanelet2 map (see here). freespace area: defined in the Lanelet2 map (unstable)."},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#future-extensionsunimplemented-parts","title":"Future extensions/Unimplemented parts","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#debug","title":"Debug","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#show-rclcpp_debug-on-console","title":"Show RCLCPP_DEBUG on console","text":"

All debug messages are logged under the logger namespace used in the command below.

Users can see debug information with the following command.

ros2 service call /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/config_logger logging_demo/srv/ConfigLogger \"{logger_name: 'planning.scenario_planning.lane_driving.behavior_planning.behavior_path_planner.static_obstacle_avoidance', level: DEBUG}\"\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#visualize-debug-markers","title":"Visualize debug markers","text":"

Users can enable publishing of debug markers with the following parameters.

debug:\nenable_other_objects_marker: false\nenable_other_objects_info: false\nenable_detection_area_marker: false\nenable_drivable_bound_marker: false\nenable_safety_check_marker: false\nenable_shift_line_marker: false\nenable_lane_marker: false\nenable_misc_marker: false\n

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#echoing-debug-message-to-find-out-why-the-objects-were-ignored","title":"Echoing debug message to find out why the objects were ignored","text":"

If, for some reason, no shift point is generated for your object, you can check the failure reason via ros2 topic echo.

To print the debug message, run the following command.

ros2 topic echo /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance_debug_message_array\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#frequently-asked-questions","title":"Frequently asked questions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#target-objects","title":"Target objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#does-it-avoid-static-objects-and-dynamic-objects","title":"Does it avoid static objects and dynamic objects?","text":"

This module avoids static (stopped) objects and does not support avoidance of dynamic (moving) objects. Dynamic objects are handled by the dynamic obstacle avoidance module.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-type-class-of-object-would-it-avoid","title":"What type (class) of object would it avoid?","text":"

It avoids cars, trucks, buses, trailers, bicycles, motorcycles, pedestrians, and unknown objects by default. Details are in the Target object filtering section. The above objects are divided into vehicle type objects and non-vehicle type objects; the target object filtering differs between the two.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-does-it-judge-if-it-is-a-target-object-or-not","title":"How does it judge if it is a target object or not?","text":"

The conditions for vehicle type objects and non-vehicle type objects are different. However, the main idea is that static objects on road shoulders within the planned path are avoided. Below are some examples in which an avoidance path is generated for vehicle type objects.

For more details refer to vehicle type object and non-vehicle object.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-is-an-ambiguous-target","title":"What is an ambiguous target?","text":"

An ambiguous target refers to an object that may not be clearly identifiable as an avoidance target due to the limitations of current Autoware (e.g. a parked vehicle in the center of a lane). This module avoids clearly defined static objects automatically, whereas ambiguous targets require operator intervention.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-can-i-visualize-the-target-object","title":"How can I visualize the target object?","text":"

Target objects can be visualized using RViz, where the module's outputs, such as detected obstacles and planned avoidance paths, are displayed. For further information refer to the debug marker section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-can-i-check-the-lateral-distance-to-an-obstacle","title":"How can I check the lateral distance to an obstacle?","text":"

Currently, there isn't any topic that outputs the relative position between the ego vehicle and the target object. Visual confirmation in RViz is the only option for now.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#does-it-avoid-multiple-objects-at-once","title":"Does it avoid multiple objects at once?","text":"

Yes, the module is capable of avoiding multiple static objects simultaneously. It generates multiple shift lines and calculates an avoidance path that navigates around each object. Details are explained in the How to decide path shape section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#area-used-when-avoiding","title":"Area used when avoiding","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#which-lanes-are-used-to-avoid-objects","title":"Which lanes are used to avoid objects?","text":"

This module is able to use not only the current lane but also adjacent and opposite lanes. Usable lanes can be selected in the configuration file, as noted in the shift length calculation section. It is assumed that there are no parked vehicles in the center lane in situations where there are lanes on the left and right.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#would-it-avoid-objects-inside-intersections","title":"Would it avoid objects inside intersections?","text":"

Basically, the module assumes that there are no parked vehicles within intersections. Vehicles that follow the lane or merge into the ego lane are non-target objects. Vehicles waiting to make a right/left turn within an intersection can be avoided by expanding the drivable area in the configuration file, as noted in the drivable area expansion section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#does-it-generate-avoidance-paths-for-any-road-type","title":"Does it generate avoidance paths for any road type?","text":"

Drivable area can be expanded in the configuration file, as noted in the drivable area expansion section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#path-generation_1","title":"Path generation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-is-the-avoidance-path-generated","title":"How is the avoidance path generated?","text":"

The avoidance path is generated by modifying the current reference path to navigate around detected static objects. This is done using a rule-based shift line approach that ensures the vehicle remains within safe boundaries and follows the road while avoiding obstacles. Details are explained in the appendix.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#which-way-right-or-left-is-the-avoidance-path-generated","title":"Which way (right or left) is the avoidance path generated?","text":"

The avoidance direction depends on the target object's center of gravity. If the target object is on the left side of the ego lane, the avoidance path is generated on the right side. Currently, avoiding left-shifted obstacles from the left side is not supported (and likewise for right-shifted objects).
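
Conceptually, the direction decision reduces to the sign of the object's lateral offset from the path centerline, as in this sketch (a hypothetical helper; the actual implementation also considers margins and drivable boundaries):

enum class AvoidDirection { Left, Right };\n\n// Sketch: an object shifted to the left of the path is avoided on the right,\n// and vice versa. A positive lateral offset means left of the path here.\nAvoidDirection decideAvoidDirection(const double object_lateral_offset)\n{\nreturn object_lateral_offset > 0.0 ? AvoidDirection::Right : AvoidDirection::Left;\n}\n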

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#why-is-an-envelope-polygon-used-for-the-target-object","title":"Why is an envelope polygon used for the target object?","text":"

It is employed to reduce the influence of perception/tracking noise for each target object. The envelope polygon is a rectangle whose size depends on the object's polygon and a buffer parameter, and it is always parallel to the reference path. The envelope polygon is created from the latest one-shot envelope polygon and the previous envelope polygon. Details are explained in the How to prevent shift line chattering that is caused by perception noise section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-happens-if-the-module-cannot-find-a-safe-avoidance-path","title":"What happens if the module cannot find a safe avoidance path?","text":"

If the module cannot find a safe avoidance path, the vehicle may stop or continue along its current path without performing an avoidance maneuver. If there is a target object and there is enough space to avoid it, the ego vehicle stops at a position where an avoidance path could be generated; this is called the yield maneuver. On the other hand, when there is not enough space, this module has nothing to do and the obstacle cruise planner takes charge of the object.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#there-seems-to-be-an-avoidance-path-but-the-vehicle-stops-what-is-happening","title":"There seems to be an avoidance path, but the vehicle stops. What is happening?","text":"

This situation occurs when the module is operating in AUTO mode and the target object is ambiguous, or when operating in MANUAL mode. The generated avoidance path is presented as a candidate and requires operator approval before execution. If the operator does not approve the path, the ego vehicle stops where it is still possible to generate an avoidance path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#operation","title":"Operation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-are-the-benefits-of-using-manual-mode-over-auto-mode","title":"What are the benefits of using MANUAL mode over AUTO mode?","text":"

MANUAL mode allows the operator to have direct control over the approval of avoidance paths, which is particularly useful in situations where sensor data may be unreliable or ambiguous. This mode helps prevent unnecessary or incorrect avoidance maneuvers by requiring human validation before execution. It is recommended for environments where false positives are likely or when the sensor/perception system's performance is limited.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#can-this-module-be-customized-for-specific-vehicle-types-or-environments","title":"Can this module be customized for specific vehicle types or environments?","text":"

The module can be customized by adjusting the rules and parameters that define how it identifies and avoids obstacles. The avoidance maneuver itself does not change for specific vehicle types.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#appendix-shift-line-generation-pipeline","title":"Appendix: Shift line generation pipeline","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#flow-chart-of-the-process","title":"Flow chart of the process","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-to-decide-the-path-shape","title":"How to decide the path shape","text":"

Generate shift points for obstacles with a given lateral jerk. These points are integrated to generate an avoidance path. The detailed process flow for each case of obstacle placement is described below. The actual implementation is not separated per case; the function corresponding to the multiple obstacle case (both directions) is always running.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#one-obstacle-case","title":"One obstacle case","text":"

The lateral shift distance to the obstacle is calculated, and then the shift point is generated from the ego vehicle speed and the given lateral jerk as shown in the figure below. A smooth avoidance path is then calculated based on the shift point.

Additionally, the following processes are executed in special cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#lateral-jerk-relaxation-conditions","title":"Lateral jerk relaxation conditions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#minimum-velocity-relaxation-conditions","title":"Minimum velocity relaxation conditions","text":"

One problem is that the actual speed during avoidance cannot be known in advance. This is especially critical when the ego vehicle speed is 0. To solve this, the module provides a parameter for the minimum avoidance speed, which is used in the lateral jerk calculation when the vehicle speed is low.
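
As a rough model, assuming a symmetric constant-jerk lateral profile (for which the shift length satisfies L = j * T^3 / 32) and a constant longitudinal speed, the required longitudinal distance can be sketched as follows; the ego speed is clamped from below by the minimum avoidance speed. This is an illustrative approximation, not the exact implementation.

#include <algorithm>\n#include <cmath>\n\n// Sketch: longitudinal distance needed to shift laterally by shift_length [m]\n// with lateral jerk limit jerk [m/s^3]. For a symmetric constant-jerk profile\n// the shift time T satisfies L = j * T^3 / 32, so T = cbrt(32 * L / j).\ndouble calcAvoidanceDistance(const double shift_length, const double jerk, const double ego_speed, const double min_avoidance_speed)\n{\nconst double v = std::max(ego_speed, min_avoidance_speed);\nconst double shift_time = std::cbrt(32.0 * std::abs(shift_length) / jerk);\nreturn v * shift_time;\n}\n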

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#multiple-obstacle-case-one-direction","title":"Multiple obstacle case (one direction)","text":"

Generate shift points for multiple obstacles. All of them are merged to generate new shift points along the reference path. The new points are filtered (e.g. remove small-impact shift points), and the avoidance path is computed for the filtered shift points.

Merge process of raw shift points: check the shift length at each path point. If shift points overlap, the maximum shift value is selected for the same direction.

For the details of the shift point filtering, see filtering for shift points.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#multiple-obstacle-case-both-direction","title":"Multiple obstacle case (both direction)","text":"

Generate shift points for multiple obstacles. All of them are merged to generate new shift points. If there are areas where the desired shifts conflict in different directions, the sum of the maximum shift amounts of these areas is used as the final shift amount. The rest of the process is the same as in the case of one direction.
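
The merge rule for both directions can be sketched as follows (illustrative code operating on per-path-point shift profiles; positive values are left shifts, negative values are right shifts):

#include <algorithm>\n#include <cstddef>\n#include <vector>\n\n// Sketch: per path point, take the maximum left shift and the maximum right\n// shift over all obstacles; where they conflict, the signed sum of the two\n// maxima becomes the final shift amount.\nstd::vector<double> mergeShiftProfiles(const std::vector<std::vector<double>> & profiles, const std::size_t num_points)\n{\nstd::vector<double> merged(num_points, 0.0);\nfor (std::size_t i = 0; i < num_points; ++i) {\ndouble max_left = 0.0;\ndouble max_right = 0.0;\nfor (const auto & profile : profiles) {\nmax_left = std::max(max_left, profile[i]);\nmax_right = std::min(max_right, profile[i]);\n}\nmerged[i] = max_left + max_right;\n}\nreturn merged;\n}\n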

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#filtering-for-shift-points","title":"Filtering for shift points","text":"

The shift points are modified by a filtering process in order to get the expected shape of the avoidance path. It contains the following filters.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#appendix-all-parameters","title":"Appendix: All parameters","text":"

Location of the avoidance specific parameter configuration file: src/autoware/launcher/planning_launch/config/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/static_obstacle_avoidance.param.yaml.

Name Type Description Default Range resample_interval_for_planning float Path resample interval for the avoidance planning path. 0.3 N/A resample_interval_for_output float Path resample interval for the output path. Too short an interval increases the computational cost for later modules. 4.0 N/A path_generation_method string Path generation method. shift_line_base ['shift_line_base', 'optimization_base', 'both'] use_lane_type string Drivable lane configuration. opposite_direction_lane ['current_lane', 'same_direction_lane', 'opposite_direction_lane'] use_hatched_road_markings boolean Extend the drivable area to hatched road marking areas. true N/A use_intersection_areas boolean Extend the drivable area to intersection areas. true N/A use_freespace_areas boolean Extend the drivable area to freespace areas. true N/A
car.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 car.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A car.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. 0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. 0.7 N/A car.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A car.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A car.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
truck.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 truck.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A truck.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. 0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. 0.7 N/A truck.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A truck.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A truck.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
bus.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 bus.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A bus.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. 0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. 0.7 N/A bus.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A bus.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A bus.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
trailer.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 trailer.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A trailer.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. 0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. 0.7 N/A trailer.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A trailer.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A trailer.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
unknown.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 unknown.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A unknown.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. -0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. -0.2 N/A unknown.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.1 N/A unknown.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A unknown.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
motorcycle.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 motorcycle.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A motorcycle.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. 0.5 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. 0.5 N/A motorcycle.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A motorcycle.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A motorcycle.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
bicycle.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 bicycle.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A bicycle.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. 0.3 N/A bicycle.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A bicycle.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A bicycle.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
pedestrian.th_moving_speed float Objects with speed greater than this will be judged as moving. 1.0 \u22650.0 pedestrian.th_moving_time float Objects that keep moving for a longer duration than this will be excluded from the avoidance targets. 1.0 N/A pedestrian.longitudinal_margin float Creates an additional longitudinal gap that prevents the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between the ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between the ego and avoidance targets. 0.5 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between the ego and avoidance targets. 0.5 N/A pedestrian.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A pedestrian.max_expand_ratio float This value is applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A pedestrian.th_error_eclipse_long_radius float This value is used to judge whether the ellipse error is too large. 0.6 N/A
target_object.lower_distance_for_polygon_expansion float If the distance between the ego and the object is less than this, the expand ratio will be zero. 30.0 N/A target_object.upper_distance_for_polygon_expansion float If the distance between the ego and the object is larger than this, the expand ratio will be max_expand_ratio. 100.0 N/A target_type.car boolean Enable avoidance maneuver for CAR. true N/A target_type.truck boolean Enable avoidance maneuver for TRUCK. true N/A target_type.bus boolean Enable avoidance maneuver for BUS. true N/A target_type.trailer boolean Enable avoidance maneuver for TRAILER. true N/A target_type.unknown boolean Enable avoidance maneuver for UNKNOWN. true N/A target_type.bicycle boolean Enable avoidance maneuver for BICYCLE. true N/A target_type.motorcycle boolean Enable avoidance maneuver for MOTORCYCLE. true N/A target_type.pedestrian boolean Enable avoidance maneuver for PEDESTRIAN. true N/A
target_filtering.object_check_goal_distance float If the distance between the object and the goal position is less than this parameter, the module does not return to the center line. 20.0 N/A target_filtering.object_check_return_pose_distance float If the distance between the object and the return position is less than this parameter, the module does not return to the center line. 20.0 N/A target_filtering.max_compensation_time float For the compensation of detection lost. The object is registered once it is observed as an avoidance target. When the detection is lost, a timer starts, and the object is unregistered when the time exceeds this limit. 2.0 N/A detection_area.static boolean If true, the detection area longitudinal range is calculated based on the current ego speed. false N/A detection_area.min_forward_distance float Minimum forward distance to search for the avoidance target. 50.0 N/A detection_area.max_forward_distance float Maximum forward distance to search for the avoidance target. 150.0 N/A detection_area.backward_distance float Backward distance to search for the avoidance target. 10.0 N/A merging_vehicle.th_overhang_distance float Distance threshold to ignore a vehicle merging into or deviating from the ego driving lane. The distance represents how much the object polygon overlaps the ego lane, and it is calculated from the polygon overhang point and the lane centerline. If the distance is more than this parameter, the module never avoids the object. (Basically, the ego stops behind it.) 0.5 N/A parked_vehicle.th_offset_from_centerline float Vehicles around the center line within this distance will be excluded from the avoidance targets. 1.0 N/A parked_vehicle.th_shiftable_ratio float Vehicles around the center line within this distance will be excluded from the avoidance targets. 0.8 \u22650.0\u22641.0 parked_vehicle.min_road_shoulder_width float Width considered as a road shoulder if the lane does not have a road shoulder. 0.5 N/A avoidance_for_ambiguous_vehicle.policy string Ego behavior policy for ambiguous vehicles. auto ['auto', 'manual', 'ignore'] avoidance_for_ambiguous_vehicle.closest_distance_to_wait_and_see float Start the avoidance maneuver after the distance to the ambiguous vehicle is less than this parameter. 10.0 N/A condition.th_stopped_time float Never avoid an object whose stopped time is less than this parameter. 3.0 N/A condition.th_moving_distance float Never avoid an object that moves more than this parameter. 1.0 N/A traffic_light.front_distance float If the distance between the traffic light and the vehicle is less than this parameter, this module will ignore it. 100.0 N/A crosswalk.front_distance float If the front distance between the crosswalk and the vehicle is less than this parameter, this module will ignore it. 30.0 N/A crosswalk.behind_distance float If the back distance between the crosswalk and the vehicle is less than this parameter, this module will ignore it. 30.0 N/A wait_and_see.target_behaviors array This module doesn't avoid vehicles with these behaviors until the ego gets closer than the threshold. ['MERGING', 'DEVIATING'] N/A wait_and_see.th_closest_distance float Threshold to check whether the ego gets close enough to the ambiguous vehicle. 10.0 N/A intersection.yaw_deviation float Yaw deviation threshold used to judge whether the object is a merging or deviating vehicle. 0.349 N/A condition.th_stopped_time float This module delays the avoidance maneuver to observe vehicle behavior in freespace. 5.0 N/A
target_type.car boolean Enable safety_check for CAR. true N/A target_type.truck boolean Enable safety_check for TRUCK. true N/A target_type.bus boolean Enable safety_check for BUS. true N/A target_type.trailer boolean Enable safety_check for TRAILER. true N/A target_type.unknown boolean Enable safety_check for UNKNOWN. false N/A target_type.bicycle boolean Enable safety_check for BICYCLE. true N/A target_type.motorcycle boolean Enable safety_check for MOTORCYCLE. true N/A target_type.pedestrian boolean Enable safety_check for PEDESTRIAN. true N/A safety_check.enable boolean Enable the safety check feature. true N/A safety_check.check_current_lane boolean Check objects in the current driving lane. true N/A safety_check.check_shift_side_lane boolean Check objects in the shift side lane. true N/A safety_check.check_other_side_lane boolean Check objects in the other side lane. true N/A safety_check.check_unavoidable_object boolean Check collision between the ego and unavoidable objects. true N/A safety_check.check_other_object boolean Check collision between the ego and non-avoidance-target objects. true N/A safety_check.check_all_predicted_path boolean Check all predicted paths of the safety check target objects. true N/A safety_check.safety_check_backward_distance float Backward distance to search for dynamic objects. 100.0 N/A safety_check.hysteresis_factor_expand_rate float Hysteresis factor that is used for chattering prevention. 2.0 N/A safety_check.hysteresis_factor_safe_count integer Hysteresis count that is used for chattering prevention. 10 N/A safety_check.collision_check_yaw_diff_threshold float Maximum yaw difference between the ego and the object when performing the collision check. 3.1416 N/A safety_check.min_velocity float Minimum velocity of the ego vehicle's predicted path. 1.38 N/A safety_check.max_velocity float Maximum velocity of the ego vehicle's predicted path. 50.0 N/A safety_check.time_resolution float Time resolution for the ego vehicle's predicted path. 0.5 N/A safety_check.time_horizon_for_front_object float Time horizon for predicting front objects. 3.0 N/A safety_check.time_horizon_for_rear_object float Time horizon for predicting rear objects. 10.0 N/A safety_check.delay_until_departure float Delay until the ego vehicle departs. 1.0 N/A safety_check.extended_polygon_policy string See https://github.com/autowarefoundation/autoware.universe/pull/6336. along_path ['rectangle', 'along_path'] safety_check.expected_front_deceleration float The front object's maximum deceleration when the front vehicle performs sudden braking. -1.0 N/A safety_check.expected_rear_deceleration float The rear object's maximum deceleration when the rear vehicle performs sudden braking. -1.0 N/A safety_check.rear_vehicle_reaction_time float The reaction time of the rear vehicle's driver, from the driver noticing the sudden braking of the front vehicle until the driver steps on the brake. 2.0 N/A safety_check.rear_vehicle_safety_time_margin float The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 1.0 N/A safety_check.lateral_distance_max_threshold float The lateral distance threshold that is used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 2.0 N/A safety_check.longitudinal_distance_min_threshold float The longitudinal distance threshold that is used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 3.0 N/A safety_check.longitudinal_velocity_delta_time float The time multiplier that is used to compute the actual gap between vehicles at each predicted point (not the RSS distance). 0.0 N/A
lateral.th_avoid_execution float The lateral distance deviation threshold between the current path and the suggested avoidance point needed to execute avoidance. 0.09 N/A lateral.th_small_shift_length float Shift lines whose lateral offset is less than this will be applied together with other ones. 0.101 N/A lateral.soft_drivable_bound_margin float Keep this distance to the drivable bound. 0.3 N/A lateral.hard_drivable_bound_margin float Keep this distance to the drivable bound. 0.3 N/A lateral.max_right_shift_length float Maximum shift length in the right direction. 5.0 N/A lateral.max_left_shift_length float Maximum shift length in the left direction. 5.0 N/A lateral.max_deviation_from_lane float Used in the validation phase to check whether the avoidance path is in the drivable area. 0.2 N/A lateral.ratio_for_return_shift_approval float This parameter allows waiting for approval of the return shift until the occlusion behind the avoidance target is clear. 0.5 \u22650.0\u22641.0 longitudinal.min_prepare_time float The avoidance shift starts from a point at least this time x ego speed ahead. 1.0 N/A longitudinal.max_prepare_time float The avoidance shift starts from a point this time x ego speed ahead if possible. 2.0 N/A longitudinal.min_prepare_distance float Minimum prepare distance. 1.0 N/A longitudinal.min_slow_down_speed float Minimum slow-down speed for the avoidance prepare section. 1.38 N/A longitudinal.buf_slow_down_speed float Buffer for controller tracking error. Basically, the vehicle cannot always follow the velocity profile precisely. Therefore, the module inserts a speed lower than the target speed that satisfies the conditions to avoid the object within the accel/jerk constraints, so that the avoidance path can always be output even if the current speed is slightly higher than the target speed. 0.56 N/A longitudinal.nominal_avoidance_speed float Nominal avoidance speed. 8.33 N/A longitudinal.consider_front_overhang boolean Flag to consider the vehicle front overhang in the shift line generation logic. true N/A longitudinal.consider_rear_overhang boolean Flag to consider the vehicle rear overhang in the shift line generation logic. true N/A goal.enable boolean Insert a stop point in order to return to the original lane before reaching the goal. true N/A goal.buffer float Buffer distance to return to the original lane before reaching the goal. 3.0 N/A traffic_light.enable boolean Insert a stop point in order to return to the original lane before reaching a traffic light. true N/A traffic_light.buffer float Buffer distance to return to the original lane before reaching a traffic light. 3.0 N/A stop.max_distance float Maximum stop distance in situations where the avoidance maneuver is not approved or during the yield maneuver. 20.0 N/A stop.stop_buffer float Buffer distance in situations where the avoidance maneuver is not approved or during the yield maneuver. 1.0 N/A yield.enable boolean Flag to enable the yield maneuver. true N/A yield.enable_during_shifting boolean Flag to enable the yield maneuver during shifting. false N/A cancel.enable boolean Flag to enable the cancel maneuver. true N/A force.duration_time float Force deactivation duration time. 2.0 N/A
policy.make_approval_request string Policy for the RTC request. Select per_shift_line or per_avoidance_maneuver. per_shift_line: request approval for each shift line. per_avoidance_maneuver: request approval for the avoidance maneuver (avoid + return). per_shift_line ['per_shift_line', 'per_avoidance_maneuver'] policy.deceleration string Policy for the vehicle slow-down behavior. Select best_effort or reliable. best_effort: slow-down deceleration and jerk are limited by the constraints, but there is a possibility that the vehicle can't stop in front of the object. reliable: insert a stop or slow-down point ignoring the decel/jerk constraints; this increases the chance to avoid, but uncomfortable deceleration may happen. best_effort ['reliable', 'best_effort'] policy.lateral_margin string Policy for the avoidance lateral margin. Select best_effort or reliable. best_effort: output the avoidance path with a shortened lateral margin when there is not enough longitudinal margin to avoid. reliable: the module outputs the avoidance path with a safe (RTC cooperate) state only when the vehicle can avoid with the expected lateral margin. best_effort ['reliable', 'best_effort'] policy.use_shorten_margin_immediately boolean If true, the module doesn't wait for deceleration and outputs the avoidance path with the best-effort margin. true N/A lateral.velocity array Velocity array used to decide the current lateral accel/jerk limit. [1.0, 1.38, 11.1] N/A lateral.max_accel_values array The avoidance path gets sharp up to this accel limit when there is not enough distance from the ego. [0.5, 0.5, 0.5] N/A lateral.min_jerk_values array The avoidance path is generated with this jerk when there is enough distance from the ego. [0.2, 0.2, 0.2] N/A lateral.max_jerk_values array The avoidance path gets sharp up to this jerk limit when there is not enough distance from the ego. [1.0, 1.0, 1.0] N/A longitudinal.nominal_deceleration float Nominal deceleration limit. -1.0 N/A longitudinal.nominal_jerk float Nominal jerk limit. 0.5 N/A longitudinal.max_deceleration float Maximum deceleration limit. -1.5 N/A longitudinal.max_jerk float Maximum jerk limit. 1.0 N/A longitudinal.max_acceleration float Maximum acceleration during avoidance. 0.5 N/A longitudinal.min_velocity_to_limit_max_acceleration float If the ego speed is faster than this parameter, the module applies the acceleration limit max_acceleration. 2.78 N/A trim.quantize_size float Lateral shift length quantize size. 0.1 N/A trim.th_similar_grad_1 float Lateral shift length threshold to merge shift lines with similar gradients. 0.1 N/A trim.th_similar_grad_2 float Lateral shift length threshold to merge shift lines with similar gradients. 0.2 N/A trim.th_similar_grad_3 float Lateral shift length threshold to merge shift lines with similar gradients. 0.5 N/A debug.enable_other_objects_marker boolean Publish the other objects marker. false N/A debug.enable_other_objects_info boolean Publish detailed information about other objects. false N/A debug.enable_detection_area_marker boolean Publish the detection area. false N/A debug.enable_drivable_bound_marker boolean Publish the drivable area boundary. false N/A debug.enable_safety_check_marker boolean Publish safety check information. false N/A debug.enable_shift_line_marker boolean Publish shift line information. false N/A debug.enable_lane_marker boolean Publish lane information. false N/A debug.enable_misc_marker boolean Publish misc markers. false N/A"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#blind-spot","title":"Blind Spot","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#role","title":"Role","text":"

The blind spot module checks for possible collisions with bicycles and pedestrians running on the ego's left/right side while turning left/right before junctions.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#activation-timing","title":"Activation Timing","text":"

This function is activated when the lane id of the target path has an intersection label (i.e. the turn_direction attribute is left or right).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This module sets a stop line, a pass judge line, a detection area, and a conflict area based on map information and the ego position.

Stop/Go state: when both conditions are met for any object, the module transitions to the \"stop\" state and inserts zero velocity to stop the vehicle.

In order to avoid a rapid stop, the \u201cstop\u201d judgment is not executed after the pass judge line is passed.

Once a \"stop\" is judged, it will not transit to the \"go\" state until the \"go\" judgment continues for a certain period in order to prevent chattering of the state (e.g. 2 seconds).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_line_margin double [m] a margin that the vehicle tries to stop before stop_line backward_length double [m] distance from closest path point to the edge of beginning point. ignore_width_from_center_line double [m] ignore threshold that vehicle behind is collide with ego vehicle or not max_future_movement_time double [s] maximum time for considering future movement of object adjacent_extend_width double [m] if adjacent lane e.g. bicycle only lane exists, blind_spot area is expanded by this length"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#crosswalk","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#role","title":"Role","text":"

This module judges whether the ego should stop in front of the crosswalk in order to provide safe passage for crosswalk users, such as pedestrians and bicycles, based on the objects' behavior and the surrounding traffic.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#features","title":"Features","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#yield-the-way-to-the-pedestrians","title":"Yield the Way to the Pedestrians","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#target-object","title":"Target Object","text":"

The crosswalk module handles objects of the types defined by the following parameters in the object_filtering.target_object namespace.

Parameter Unit Type Description unknown [-] bool whether to consider and stop for UNKNOWN objects pedestrian [-] bool whether to consider and stop for PEDESTRIAN objects bicycle [-] bool whether to consider and stop for BICYCLE objects motorcycle [-] bool whether to consider and stop for MOTORCYCLE objects

In order to handle crosswalk users crossing in the neighborhood of, but outside, the crosswalk, the crosswalk module creates an attention area around the crosswalk, shown as the yellow polygon in the figure. If the object's predicted path collides with the attention area, the object will be targeted for yielding.

The neighborhood is defined by the following parameter in the object_filtering.target_object namespace.

Parameter Unit Type Description crosswalk_attention_range [m] double the detection area is defined as -X meters before the crosswalk to +X meters behind the crosswalk"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#stop-position","title":"Stop Position","text":"

First of all, a distance of at least stop_distance_from_object [m] is always kept between the ego and the target object for safety.

When the stop line exists in the lanelet map, the stop position is calculated based on the line. When the stop line does NOT exist in the lanelet map, the stop position is calculated by keeping stop_distance_from_crosswalk [m] between the ego and the crosswalk.

As an exceptional case, if a pedestrian (or bicycle) is crossing a wide crosswalk, such as those seen at scramble intersections, and the pedestrian's position is more than far_object_threshold meters away from the stop line, the actual stop position is determined by stop_distance_from_object and the pedestrian's position, not by the stop line.

In the stop_position namespace, the following parameters are defined.

Parameter Type Description stop_position_threshold [m] double If the ego vehicle has stopped nearer to the stop line than this value, this module assumes it has achieved yielding. stop_distance_from_crosswalk [m] double places the stop line away from the crosswalk for Lanelet2 maps with no explicit stop lines far_object_threshold [m] double If objects cross more than X meters behind the stop line, the stop position is determined according to the object position (stop_distance_from_object meters before the object), for the case where the crosswalk width is very wide stop_distance_from_object [m] double the vehicle decelerates to be able to stop in front of the object with a margin"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#yield-decision","title":"Yield decision","text":"

The module makes a decision to yield only when the pedestrian traffic light is GREEN or UNKNOWN. The decision is based on the following variables, along with the calculation of the collision point.

We classify ego behavior at crosswalks into three categories according to the relative relationship between TTC and TTV [1].

The boundary of A and B is interpolated from ego_pass_later_margin_x and ego_pass_later_margin_y. In the case of the upper figure, ego_pass_later_margin_x is {0, 1, 2} and ego_pass_later_margin_y is {1, 4, 6}. In the same way, the boundary of B and C is calculated from ego_pass_first_margin_x and ego_pass_first_margin_y. In the case of the upper figure, ego_pass_first_margin_x is {3, 5} and ego_pass_first_margin_y is {0, 1}.
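
The boundary interpolation can be sketched as plain piecewise-linear interpolation over the (x, y) key points, clamped at both ends (an illustrative helper; for instance, the ego may pass first when TTC + margin < TTV, matching the parameter table below):

#include <cstddef>\n#include <vector>\n\n// Sketch: piecewise-linear interpolation over key points, clamped at the ends.\ndouble interpolateMargin(const std::vector<double> & xs, const std::vector<double> & ys, const double x)\n{\nif (x <= xs.front()) return ys.front();\nif (x >= xs.back()) return ys.back();\nfor (std::size_t i = 1; i < xs.size(); ++i) {\nif (x <= xs[i]) {\nconst double r = (x - xs[i - 1]) / (xs[i] - xs[i - 1]);\nreturn ys[i - 1] + r * (ys[i] - ys[i - 1]);\n}\n}\nreturn ys.back();\n}\n\n// Example decision: the ego may pass first when TTC + margin < TTV.\nbool egoPassFirst(const double ttc, const double ttv, const double margin)\n{\nreturn ttc + margin < ttv;\n}\n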

If a red signal is shown for the corresponding crosswalk, the ego does not yield to pedestrians.

In the pass_judge namespace, the following parameters are defined.

Parameter Type Description ego_pass_first_margin_x [[s]] double time to collision margin vector for the ego pass first situation (the module judges that the ego doesn't have to stop under the TTC + MARGIN < TTV condition) ego_pass_first_margin_y [[s]] double time to vehicle margin vector for the ego pass first situation (the module judges that the ego doesn't have to stop under the TTC + MARGIN < TTV condition) ego_pass_first_additional_margin [s] double additional time margin for the ego pass first situation to suppress chattering ego_pass_later_margin_x [[s]] double time to vehicle margin vector for the object pass first situation (the module judges that the ego doesn't have to stop under the TTV + MARGIN < TTC condition) ego_pass_later_margin_y [[s]] double time to collision margin vector for the object pass first situation (the module judges that the ego doesn't have to stop under the TTV + MARGIN < TTC condition) ego_pass_later_additional_margin [s] double additional time margin for the object pass first situation to suppress chattering"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#smooth-yield-decision","title":"Smooth Yield Decision","text":"

If the object is stopped near the crosswalk but has no intention of walking, a situation can arise in which the ego continues to yield the right-of-way to the object. To prevent such a deadlock situation, the ego will cancel yielding depending on the situation.

For an object that is stopped around the crosswalk but has no intention to walk (*1), after the ego has kept stopping to yield for a specific time (*2), the ego cancels the yield and starts driving.

*1: The timeout is calculated by interpolating the distance between the object and the crosswalk using distance_set_for_no_intention_to_walk and timeout_set_for_no_intention_to_walk.

In the pass_judge namespace, the following parameters are defined.

Parameter Type Description distance_set_for_no_intention_to_walk [[m]] double key set used to calculate the timeout for no intention to walk with interpolation timeout_set_for_no_intention_to_walk [[s]] double value set used to calculate the timeout for no intention to walk with interpolation

*2: In the pass_judge namespace, the following parameters are defined.

Parameter Type Description timeout_ego_stop_for_yield [s] double If the ego maintains the stop for this amount of time, then the ego proceeds, assuming it has stopped for long enough."},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#new-object-handling","title":"New Object Handling","text":"

Due to limited perception performance, where a tree or pole may be recognized as a pedestrian, or due to tracking failures in crowds or occlusion, a new pedestrian (= a pedestrian with a new ID) may suddenly appear unexpectedly, even if the surrounding environment does not change. If this happens while the ego is going to pass the crosswalk, the ego will stop suddenly.

To deal with this issue, the option disable_yield_for_new_stopped_object is provided. If it is set to true, yield decisions around crosswalks with a traffic light will ignore newly stopped objects.

In the pass_judge namespace, the following parameters are defined.

Parameter Type Description disable_yield_for_new_stopped_object [-] bool If set to true, newly stopped objects are ignored around crosswalks with a traffic light"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#stuck-prevention-on-the-crosswalk","title":"Stuck Prevention on the Crosswalk","text":"

This feature prevents the ego from stopping on the crosswalk. When there is a low-speed or stopped vehicle ahead of the crosswalk and there is not enough space between the crosswalk and the vehicle, the crosswalk module plans to stop before the crosswalk, even if there are no pedestrians or bicycles.

The stop point is inserted only when the deceleration constraints min_acc, min_jerk, and max_jerk are met. If the ego cannot stop before the crosswalk within these constraints, the stop position moves forward.
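
A minimal sketch of the space check (an illustrative helper; the parameter names follow the table below):

// Sketch: stop before the crosswalk when the vehicle ahead is slow enough to\n// be considered stuck and the gap beyond the crosswalk cannot accommodate the\n// ego plus the required clearance.\nbool shouldStopBeforeCrosswalk(const double gap_beyond_crosswalk, const double ego_length, const double required_clearance, const double target_vehicle_velocity, const double stuck_vehicle_velocity)\n{\nconst bool vehicle_is_stuck = target_vehicle_velocity < stuck_vehicle_velocity;\nconst bool not_enough_space = gap_beyond_crosswalk < ego_length + required_clearance;\nreturn vehicle_is_stuck && not_enough_space;\n}\n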

In the stuck_vehicle namespace, the following parameters are defined.

Parameter Unit Type Description stuck_vehicle_velocity [m/s] double maximum velocity threshold whether the target vehicle is stopped or not max_stuck_vehicle_lateral_offset [m] double maximum lateral offset of the target vehicle position required_clearance [m] double clearance to be secured between the ego and the ahead vehicle min_acc [m/ss] double minimum acceleration to stop min_jerk [m/sss] double minimum jerk to stop max_jerk [m/sss] double maximum jerk to stop"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#safety-slow-down-behavior","title":"Safety Slow Down Behavior","text":"

In the current Autoware implementation, if no target object is detected around a crosswalk, the ego vehicle will not slow down for the crosswalk. However, it may be desirable to slow down in situations where, for example, there are blind spots. Such situations can be handled by setting some tags on the related crosswalk, as instructed in the lanelet2_format_extension.md document.

Parameter Type Description slow_velocity [m/s] double target vehicle velocity when the module receives a slow-down command from FOA max_slow_down_jerk [m/sss] double minimum jerk deceleration for a safe brake max_slow_down_accel [m/ss] double minimum accel deceleration for a safe brake no_relax_velocity [m/s] double if the current velocity is less than X m/s, the ego always stops at the stop position (does not relax the deceleration constraints)"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#occlusion","title":"Occlusion","text":"

This feature makes ego slow down for a crosswalk that is occluded.

Occlusion of the crosswalk is determined using the occupancy grid. An occlusion is a square of occluded cells (i.e., cells whose values are between free_space_max and occupied_min) with side length min_size. If an occlusion is found within range of the crosswalk, the velocity limit at the crosswalk is set to slow_down_velocity (or higher, so as not to break the limits set by max_slow_down_jerk and max_slow_down_accel). The range is calculated from the intersection between the ego path and the crosswalk, and is equal to the time taken by the ego to reach the crosswalk multiplied by occluded_object_velocity. This range is meant to be large when the ego is far from the crosswalk and small when the ego is close.
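
The range computation can be sketched as follows (an illustrative helper, assuming a constant ego velocity and a known distance to the crosswalk along the path):

#include <algorithm>\n\n// Sketch: the occlusion search range grows with the time the ego needs to\n// reach the crosswalk, scaled by the assumed occluded object velocity.\ndouble calcOcclusionSearchRange(const double dist_to_crosswalk, const double ego_velocity, const double occluded_object_velocity)\n{\nconst double time_to_reach = dist_to_crosswalk / std::max(ego_velocity, 0.1);  // avoid division by zero\nreturn time_to_reach * occluded_object_velocity;\n}\n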

To avoid flickering decisions, a time buffer can be used: the decision to add (or remove) the slow down is only taken after an occlusion has been detected (or not detected) for a consecutive time defined by the time_buffer parameter.
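The hysteresis behavior of time_buffer can be sketched as follows (a minimal illustration, not the actual implementation; class and variable names are hypothetical):

class OcclusionSlowdownDecision:
    def __init__(self, time_buffer: float):
        self.time_buffer = time_buffer
        self.slowdown_active = False
        self.pending_since = None  # time when the observed state first disagreed

    def update(self, occluded: bool, now: float) -> bool:
        if occluded == self.slowdown_active:
            self.pending_since = None  # no pending change
        elif self.pending_since is None:
            self.pending_since = now  # start counting the consecutive time
        elif now - self.pending_since >= self.time_buffer:
            self.slowdown_active = occluded  # stable long enough: switch decision
            self.pending_since = None
        return self.slowdown_active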

To ignore occlusions when the crosswalk has a traffic light, ignore_with_traffic_light should be set to true.

To ignore temporary occlusions caused by moving objects, ignore_behind_predicted_objects should be set to true. By default, occlusions behind an object with velocity higher than ignore_velocity_thresholds.default are ignored. This velocity threshold can be specified depending on the object type by specifying the object class label and velocity threshold in the parameter lists ignore_velocity_thresholds.custom_labels and ignore_velocity_thresholds.custom_thresholds. To inflate the masking behind objects, their footprint can be made bigger using extra_predicted_objects_size.
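The per-class threshold lookup can be illustrated as follows (a sketch assuming the two parameter lists are index-aligned, as described above; function names are hypothetical):

def occlusion_behind_object_is_ignored(obj_label: str, obj_velocity: float,
                                       default_threshold: float,
                                       custom_labels: list, custom_thresholds: list) -> bool:
    # custom_labels[i] maps to custom_thresholds[i]
    thresholds = dict(zip(custom_labels, custom_thresholds))
    threshold = thresholds.get(obj_label, default_threshold)
    # occlusions are only ignored behind objects with a higher or equal velocity
    return obj_velocity >= threshold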

| Parameter | Unit | Type | Description |
| --------- | ---- | ---- | ----------- |
| enable | [-] | bool | if true, ego will slow down around crosswalks that are occluded |
| occluded_object_velocity | [m/s] | double | assumed velocity of objects that may come out of the occluded space |
| slow_down_velocity | [m/s] | double | slow down velocity |
| time_buffer | [s] | double | consecutive time with/without an occlusion to add/remove the slowdown |
| min_size | [m] | double | minimum size of an occlusion (square side size) |
| free_space_max | [-] | double | maximum value of a free space cell in the occupancy grid |
| occupied_min | [-] | double | minimum value of an occupied cell in the occupancy grid |
| ignore_with_traffic_light | [-] | bool | if true, occlusions at crosswalks with traffic lights are ignored |
| ignore_behind_predicted_objects | [-] | bool | if true, occlusions behind predicted objects are ignored |
| ignore_velocity_thresholds.default | [m/s] | double | occlusions are only ignored behind objects with a higher or equal velocity |
| ignore_velocity_thresholds.custom_labels | [-] | string list | labels for which to define a non-default velocity threshold (see autoware_perception_msgs::msg::ObjectClassification for all the labels) |
| ignore_velocity_thresholds.custom_thresholds | [-] | double list | velocities of the custom labels |
| extra_predicted_objects_size | [m] | double | extra size added to the objects for masking the occlusions |
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#others","title":"Others","text":"

In the common namespace, the following parameters are defined.

| Parameter | Unit | Type | Description |
| --------- | ---- | ---- | ----------- |
| show_processing_time | [-] | bool | whether to show processing time |
| traffic_light_state_timeout | [s] | double | timeout threshold for the traffic light signal |
| enable_rtc | [-] | bool | if true, the scene modules must be approved by the RTC (request to cooperate) function; if false, the module can run without RTC approval |
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#known-issues","title":"Known Issues","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#debugging","title":"Debugging","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#visualization-of-debug-markers","title":"Visualization of debug markers","text":"

/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/crosswalk shows the following markers.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#visualization-of-time-to-collision","title":"Visualization of Time-To-Collision","text":"
ros2 run autoware_behavior_velocity_crosswalk_module time_to_collision_plotter.py\n

enables you to visualize the time to collision between ego and each pedestrian, as shown in the following figure. The label of each plot is <crosswalk module id>-<pedestrian uuid>.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#trouble-shooting","title":"Trouble Shooting","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#behavior","title":"Behavior","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#parameter-tuning","title":"Parameter Tuning","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#referencesexternal-links","title":"References/External links","text":"

[1] M. Sato, S. Hayasaka, M. Shimizu, and T. Murano, "Modeling of Driver Risk-Avoidance Behavior toward Crossing Pedestrians" (in Japanese), Transactions of the Society of Automotive Engineers of Japan, 2013, Vol. 44, No. 3, pp. 931-936.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#detection-area","title":"Detection Area","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#role","title":"Role","text":"

If a point cloud is detected in a detection area defined on the map, stop planning is executed at the predetermined point.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a detection area on the target lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description use_dead_line bool [-] weather to use dead line or not use_pass_judge_line bool [-] weather to use pass judge line or not state_clear_time double [s] when the vehicle is stopping for certain time without incoming obstacle, move to STOPPED state stop_margin double [m] a margin that the vehicle tries to stop before stop_line dead_line_margin double [m] ignore threshold that vehicle behind is collide with ego vehicle or not hold_stop_margin_distance double [m] parameter for restart prevention (See Algorithm section) distance_to_judge_over_stop_line double [m] parameter for judging that the stop line has been crossed suppress_pass_judge_when_stopping bool [m] parameter for suppressing pass judge when stopping"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#inner-workings-algorithm","title":"Inner-workings / Algorithm","text":"
  1. Gets the detection area and stop line from the map information and confirms whether there is a point cloud in the detection area
  2. Inserts a stop point l [m] in front of the stop line
  3. Inserts a pass judge point at a point where the vehicle can stop with the maximum deceleration
  4. Sets the velocity to zero behind the stop line when the ego vehicle is in front of the pass judge point
  5. If the ego vehicle has already passed the pass judge point, it does not stop and passes through
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g., 0.5 m) to stop again once it starts moving, due to poor vehicle control performance, it will overrun the stopping position that should be strictly observed when it starts moving in order to approach a nearby stop point (e.g., 0.3 m away).

This module has the parameter hold_stop_margin_distance to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters of the module's stop point (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped at the module's stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.
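The hold logic can be sketched as follows (a minimal illustration of the stated condition; names are hypothetical, not the actual implementation):

def plan_stop_position(is_vehicle_stopped: bool, front_to_stop_line: float,
                       hold_stop_margin_distance: float,
                       current_stop_position: float, module_stop_position: float) -> float:
    if is_vehicle_stopped and front_to_stop_line < hold_stop_margin_distance:
        # already stopped close enough: hold the current position, do not restart
        return current_stop_position
    return module_stop_position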

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/","title":"Intersection","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersection","title":"Intersection","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#role","title":"Role","text":"

The intersection module is responsible for safely passing urban intersections by:

  1. checking collisions with upcoming vehicles
  2. recognizing the occluded area in the intersection
  3. reacting to each color/shape of associated traffic lights

This module is designed to be agnostic to left-hand/right-hand traffic rules and to work for crossroads, T-shaped junctions, etc. Roundabouts are not formally supported by this module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#activation-condition","title":"Activation condition","text":"

This module is activated when the path contains lanes with the turn_direction tag. More precisely, if the lane_ids of the path contain the ids of such lanes, a corresponding intersection module instance is activated for each of those lanes.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#requirementslimitations","title":"Requirements/Limitations","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#attention-area","title":"Attention area","text":"

The attention area in the intersection is defined as the set of lanes that conflict with the ego path plus their preceding lanes up to common.attention_area_length meters. By default the RightOfWay tag is not set, so the attention area covers all the conflicting lanes and their preceding lanes, as shown in the first row. The RightOfWay tag is used to rule out the lanes over which the ego lane has priority, given the traffic light relation and turn_direction priority. In the second row, purple lanes are set as the yield_lane of the ego_lane in the RightOfWay tag.

intersection_area, which is supposed to be defined on the HDMap, is an area covering the entire intersection.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#in-phaseanti-phase-signal-group","title":"In-phase/Anti-phase signal group","text":"

The terms \"in-phase signal group\" and \"anti-phase signal group\" are introduced to distinguish the lanes by the timing of traffic light regulation as shown in below figure.

The set of intersection lanes whose color is in sync with lane L1 is called the in-phase signal group of L1, and the set of remaining lanes is called the anti-phase signal group.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#how-towhy-set-rightofway-tag","title":"How-to/Why set RightOfWay tag","text":"

Ideally, the RightOfWay tag would be unnecessary if ego had perfect knowledge of all traffic signal information, because:

That allows ego to generate the attention area dynamically using real-time traffic signal information. However, this ideal condition rarely holds unless the traffic signal information is provided through the infrastructure. There may also be very complicated/bad intersection maps where multiple lanes overlap in a complex manner.

To help the intersection module consider only a limited set of lanes, the RightOfWay tag needs to be set properly.

The following table shows an example of how to set yield_lanes for each lane in an intersection with and without traffic lights. Since it is not apparent how to uniquely determine the signal phase group for a set of intersection lanes in a geometric/topological manner, yield_lane needs to be set manually. Straight lanes with traffic lights are exceptionally handled to detect no lanes, because they commonly have priority over all the other lanes, so no RightOfWay setting is required.

| turn direction of right_of_way | yield_lane (with traffic light) | yield_lane (without traffic light) |
| ------------------------------ | ------------------------------- | ---------------------------------- |
| straight | no need to set yield_lane (this case is special) | left/right conflicting lanes of in-phase group |
| left (left-hand traffic) | all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group | right conflicting lanes of in-phase group |
| right (left-hand traffic) | all conflicting lanes of the anti-phase group | no yield_lane |
| left (right-hand traffic) | all conflicting lanes of the anti-phase group | no yield_lane |
| right (right-hand traffic) | all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group | left conflicting lanes of in-phase group |

This setting gives the following attention_area configurations.

For a complex/bad intersection map like the one illustrated below, an additional RightOfWay setting may be necessary.

The bad points are:

  1. The ego lane overlaps with an adjacent lane of the in-phase group. In this case you need to set this lane as yield_lane additionally, because otherwise the attention area is generated for its preceding lanes as well, which may cause unwanted stops.
  2. The ego lane overlaps with an unrelated lane. In this case the lane is right-turn only and there is no chance of collision in theory. But you need to set this lane as yield_lane additionally, for the same reason as (1).
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#possible-stop-lines","title":"Possible stop lines","text":"

The following figure illustrates the important positions used in the intersection module. Note that each solid line represents the ego front line position, and the corresponding dot represents the actual inserted stop point position for the vehicle frame, namely the center of the rear-wheel axis.

To precisely calculate stop positions, the path is interpolated at a fixed interval of common.path_interpolation_ds.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#target-objects","title":"Target objects","text":"

For stuck vehicle detection and collision detection, this module checks car, bus, truck, trailer, motorcycle, and bicycle type objects.

Objects that satisfy all of the following conditions are considered as target objects (possible collision objects):

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#overview-of-decision-process","title":"Overview of decision process","text":"

There are several behaviors depending on the scene.

| behavior | scene | action |
| -------- | ----- | ------ |
| Safe | Ego detects neither occlusion nor collision | Ego passes the intersection |
| StuckStop | The exit of the intersection is blocked by a traffic jam | Ego stops before the intersection or the boundary of the attention area |
| YieldStuck | Another vehicle stops to yield to ego | Ego stops before the intersection or the boundary of the attention area |
| NonOccludedCollisionStop | Ego detects no occlusion but detects a collision | Ego stops at default_stopline |
| FirstWaitBeforeOcclusion | Ego detected occlusion when entering the intersection | Ego stops at default_stopline at first |
| PeekingTowardOcclusion | Ego detected occlusion but no collision within the FOV (after FirstWaitBeforeOcclusion) | Ego approaches the boundary of the attention area slowly |
| OccludedCollisionStop | Ego detected both occlusion and collision (after FirstWaitBeforeOcclusion) | Ego stops immediately |
| FullyPrioritized | Ego is fully prioritized by the RED/Arrow signal | Ego only cares about vehicles still running inside the intersection; occlusion is ignored |
| OverPassJudgeLine | Ego is already inside the attention area and/or cannot stop before the boundary of the attention area | Ego no longer detects collision/occlusion and passes the intersection |

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"

If there is any object on the planned path inside the intersection or on the exit lane of the intersection (up to stuck_vehicle.stuck_vehicle_detect_dist) whose velocity is less than the threshold stuck_vehicle.stuck_vehicle_velocity_threshold, the object is regarded as a stuck vehicle. If stuck vehicles exist, this module inserts a stopline a certain distance (=default_stopline_margin) before the region that overlaps with other lanes. The stuck vehicle detection area is generated based on the planned path, so the stuck vehicle stopline is not inserted if the upstream module generated an avoidance path.
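The detection criterion can be sketched as follows (a simplified illustration of the described rule; names are hypothetical):

def is_stuck_vehicle(on_ego_path: bool, dist_from_intersection_exit: float,
                     obj_velocity: float,
                     stuck_vehicle_detect_dist: float,
                     stuck_vehicle_velocity_threshold: float) -> bool:
    # the object must be on the path, inside the intersection or within
    # stuck_vehicle_detect_dist of its exit, and nearly stopped
    in_range = on_ego_path and dist_from_intersection_exit <= stuck_vehicle_detect_dist
    return in_range and abs(obj_velocity) < stuck_vehicle_velocity_threshold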

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#yield-stuck-vehicle-detection","title":"Yield stuck vehicle detection","text":"

If there is any stopped object on an attention lanelet between the intersection point with the ego path and a position yield_stuck.distance_threshold before that point, the object is regarded as yielding to the ego vehicle. In this case ego is given the right-of-way by the yielding object, but this module still inserts a stopline to prevent entry into the intersection. This scene happens when the object is yielding to ego or when the object is waiting before a crosswalk around the exit of the intersection.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#collision-detection","title":"Collision detection","text":"

The following process is performed for the target objects to determine whether ego can pass the intersection safely. If it is judged that ego cannot pass the intersection with enough margin, this module inserts a stopline on the path.

  1. predict the time \(t\) when the object intersects with the ego path for the first time from the predicted path time steps. Only predicted paths whose confidence is greater than collision_detection.min_predicted_path_confidence are used.
  2. detect collision between the object's predicted path and ego's predicted path in the following process (see the sketch after this list)
    1. calculate the collision interval of [\\(t\\) - collision_detection.collision_start_margin_time, \\(t\\) + collision_detection.collision_end_margin_time]
    2. calculate the passing area of ego during the collision interval from the array of (time, distance) obtained by smoothed velocity profile
    3. check if the ego passing area and the object predicted path interval collide
  3. if collision is detected, the module inserts a stopline
  4. if ego is over the pass_judge_line, collision checking is skipped to avoid sudden braking and/or unnecessary stop in the middle of the intersection
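Steps 2.1-2.3 reduce to an interval-overlap test between the object's occupancy of the conflict area and ego's passing interval derived from the smoothed (time, distance) profile. A minimal sketch (names are hypothetical, not the actual implementation):

def intervals_overlap(a_start: float, a_end: float, b_start: float, b_end: float) -> bool:
    return a_start <= b_end and b_start <= a_end

def has_collision(t_object_cross: float, collision_start_margin_time: float,
                  collision_end_margin_time: float,
                  ego_enter_time: float, ego_exit_time: float) -> bool:
    # the object is assumed to occupy the conflict area during
    # [t - start_margin, t + end_margin]; ego occupies it during
    # [ego_enter_time, ego_exit_time] from the smoothed velocity profile
    return intervals_overlap(t_object_cross - collision_start_margin_time,
                             t_object_cross + collision_end_margin_time,
                             ego_enter_time, ego_exit_time)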

The parameters collision_detection.collision_start_margin_time and collision_detection.collision_end_margin_time can be interpreted as follows:

If collision is detected, the state transits to \"STOP\" immediately. On the other hand, the state does not transit to \"GO\" unless safe judgement continues for a certain period collision_detection.collision_detection_hold_time to prevent the chattering of decisions.

Currently, the intersection module uses the motion_velocity_smoother feature to precisely calculate the ego velocity profile along the intersection lane under longitudinal/lateral constraints. If the flag collision_detection.velocity_profile.use_upstream is true, the target velocity profile of the original path is used. Otherwise the target velocity is set to collision_detection.velocity_profile.default_velocity. In the trajectory smoothing process, the target velocity at/before the ego trajectory point is set to the ego's current velocity. The smoothed trajectory is then converted to an array of (time, distance) which indicates the arrival time at each trajectory point on the path from the current ego position. You can visualize this array by adding the lane id to debug.ttc and running

ros2 run behavior_velocity_intersection_module ttc.py --lane_id <lane_id>\n

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#about-use_upstream_velocity-flag","title":"about use_upstream_velocity flag","text":"

There are some use cases where ego should check collision before entering the intersection, considering the temporary stops made by the walkway/crosswalk modules around the exit of the intersection, because their stop positions can be inside the intersection and could bother upcoming vehicles. By setting the flag collision_detection.velocity_profile.use_upstream to true and running the walkway/crosswalk modules prior to this module, the ego velocity profile is calculated considering their velocities and stop positions.

As illustrated in the figure below, if an upstream module inserted a stopline, the ego position profile will remain there for an infinite time, which leads to the judgement that ego cannot exit the intersection during the interval [\(t\) - collision_detection.collision_start_margin_time, \(t\) + collision_detection.collision_end_margin_time]. In this way this feature considers possible collisions for an infinite time if stoplines exist ahead of the ego position (practically, the prediction horizon is limited, so the collision check horizon is bounded).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-detection","title":"Occlusion detection","text":"

If the flag occlusion.enable is true, this module checks whether there is a sufficient field of view (FOV) on the attention area up to occlusion.occlusion_attention_area_length. If the FOV is not clear enough, ego first makes a brief stop at default_stopline for occlusion.temporal_stop_time_before_peeking, and then slowly creeps toward occlusion_peeking_stopline. If occlusion.creep_during_peeking.enable is true, occlusion.creep_during_peeking.creep_velocity is inserted up to occlusion_peeking_stopline. Otherwise only the stop line is inserted.

During the creeping, if a collision is detected this module inserts a stop line in front of ego immediately, and if the FOV becomes sufficiently clear the intersection_occlusion wall disappears. If the occlusion is cleared and no collision is detected, ego passes the intersection.

The occlusion is detected as the common area of the occlusion attention area (which is partially the same as the normal attention area) and the unknown cells of the occupancy grid map. The occupancy grid map is denoised using morphology with a window size of occlusion.denoise_kernel. The occlusion attention area lanes are discretized into line strings, which are used to generate a grid in which each cell represents the distance from the ego path along the lane, as shown below.

If the nearest occlusion cell value is below the threshold occlusion.occlusion_required_clearance_distance, it means that the FOV of ego is not clear. The occlusion is expected to clear as the vehicle approaches the occlusion peeking stop line.
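The clearance test can be sketched as follows (a minimal illustration; names are hypothetical):

def fov_is_clear(occlusion_cell_distances: list, required_clearance_distance: float) -> bool:
    # distances of occluded cells from the ego path along the lane;
    # the FOV is clear only if the nearest one is beyond the threshold
    if not occlusion_cell_distances:
        return True
    return min(occlusion_cell_distances) >= required_clearance_distance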

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-source-estimation-at-intersection-with-traffic-light","title":"Occlusion source estimation at intersection with traffic light","text":"

At intersections with traffic lights, the whereabouts of the occlusion are estimated by checking whether there are any objects between ego and the nearest occlusion cell. While the occlusion is estimated to be caused by some object (DYNAMICALLY occluded), the intersection_wall appears at all times. If no objects are found between ego and the nearest occlusion cell (STATICALLY occluded), after ego has stopped for the duration of occlusion.static_occlusion_with_traffic_light_timeout plus occlusion.occlusion_detection_hold_time, the occlusion is intentionally ignored to avoid getting stuck.

The remaining time is visualized on the intersection_occlusion virtual wall.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-handling-at-intersection-without-traffic-light","title":"Occlusion handling at intersection without traffic light","text":"

At intersections without traffic lights, if an occlusion is detected, ego makes a brief stop at default_stopline and first_attention_stopline respectively. After stopping at first_attention_stopline, this module inserts occlusion.absence_traffic_light.creep_velocity between ego and occlusion_wo_tl_pass_judge_line while the occlusion is not cleared. If a collision is detected, ego immediately stops. Once the occlusion is cleared or ego has passed occlusion_wo_tl_pass_judge_line, this module no longer detects collision and occlusion because the ego footprint is already inside the intersection.

While ego is creeping, a yellow intersection_wall appears in front of ego.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#traffic-signal-specific-behavior","title":"Traffic signal specific behavior","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#collision-detection_1","title":"Collision detection","text":"

TTC parameter varies depending on the traffic light color/shape as follows.

| traffic light color | ttc (start) | ttc (end) |
| ------------------- | ----------- | --------- |
| GREEN | collision_detection.not_prioritized.collision_start_margin | collision_detection.not_prioritized.collision_end_margin |
| AMBER | collision_detection.partially_prioritized.collision_start_end_margin | collision_detection.partially_prioritized.collision_start_end_margin |
| RED / Arrow | collision_detection.fully_prioritized.collision_start_end_margin | collision_detection.fully_prioritized.collision_start_end_margin |
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#yield-on-green","title":"yield on GREEN","text":"

If the traffic light color has changed to GREEN, ego has approached the entry of the intersection lane within the distance collision_detection.yield_on_green_traffic_light.distance_to_assigned_lanelet_start, and there is any object whose distance to its stopline is less than collision_detection.yield_on_green_traffic_light.object_dist_to_stopline, this module commands a stop for the duration of collision_detection.yield_on_green_traffic_light.duration at default_stopline.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#skip-on-amber","title":"skip on AMBER","text":"

If the traffic light color is AMBER but the object is expected to stop before its stopline under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration, collision checking is skipped.
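The expected stop position follows from constant-deceleration kinematics; a minimal sketch of the rule (names are hypothetical, not the actual implementation):

def object_stops_before_stopline(obj_velocity: float, dist_to_stopline: float,
                                 object_expected_deceleration: float) -> bool:
    # distance needed to stop under the assumed deceleration: v^2 / (2a)
    expected_stop_distance = obj_velocity ** 2 / (2.0 * object_expected_deceleration)
    return expected_stop_distance <= dist_to_stopline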

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#skip-on-red","title":"skip on RED","text":"

If the traffic light color is RED or an Arrow signal is turned on, the attention lanes that do not conflict with the ego lane are not used for detection. Even if the object overshoots its stopline by a certain amount, it is ignored if its expected stop position under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration is more than the distance collision_detection.ignore_on_red_traffic_light.object_margin_to_path from the collision point.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-detection_1","title":"Occlusion detection","text":"

When the traffic light color/shape is RED/Arrow, occlusion detection is skipped.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#pass-judge-line","title":"Pass Judge Line","text":"

Generally it is not tolerable for vehicles with lower traffic priority to stop in the middle of the unprotected area of an intersection; they need to stop at the stop line beforehand if there is any risk of collision. This introduces the following requirements:

  1. The vehicle must start braking before the boundary of the unprotected area at least by the braking distance if it is supposed to stop
  2. The vehicle must recognize upcoming vehicles and check safety beforehand with enough braking distance margin if it is supposed to go
    1. And the SAFE decision must be absolutely certain and remain to be valid for the future horizon so that the safety condition will be always satisfied while ego is driving inside the unprotected area.
  3. (TODO): Since it is almost impossible to make a perfectly safe decision beforehand given the limited detection range/velocity tracking performance, the intersection module should plan a risk-evasive acceleration velocity profile AND/OR relax the lateral acceleration limit while ego is driving inside the unprotected area, if the safety decision is \"betrayed\" later for the following reasons:
    1. The situation turned out to be dangerous later, mainly because velocity tracking was underestimated or the object accelerated beyond TTC margin
    2. The situation turned dangerous later, mainly because the object is suddenly detected out of nowhere

The position located before the boundary of the unprotected area by the braking distance, which is obtained by

\\[ \\dfrac{v_{\\mathrm{ego}}^{2}}{2a_{\\mathrm{max}}} + v_{\\mathrm{ego}} * t_{\\mathrm{delay}} \\]

is called the pass_judge_line, and the safety decision must be made before ego passes this position, because ego cannot stop beyond it.
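For intuition, the formula can be evaluated numerically (an illustrative computation with made-up values, not parameters taken from the configuration):

def pass_judge_braking_distance(v_ego: float, a_max: float, t_delay: float) -> float:
    return v_ego ** 2 / (2.0 * a_max) + v_ego * t_delay

# e.g. v_ego = 8.0 m/s, a_max = 2.0 m/s^2, t_delay = 0.5 s
# -> 8.0**2 / (2 * 2.0) + 8.0 * 0.5 = 16.0 + 4.0 = 20.0 m before the unprotected area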

The 1st_pass_judge_line is located before the first upcoming lane, and at intersections with multiple upcoming lanes, the 2nd_pass_judge_line is defined as the position located before the centerline of the first attention lane by the braking distance. The 1st/2nd_pass_judge_line are illustrated in the following figure.

The intersection module commands GO if none of the following conditions is satisfied, because ego is expected to stop, or to continue its stop decision, if:

  1. ego is before default_stopline && common.enable_pass_judge_before_default_stopline is false OR
    1. reason: default_stopline is defined on the map and should be respected
  2. ego is before 1st_pass_judge_line OR
    1. reason: it has enough braking distance margin
  3. ego judged UNSAFE previously
    1. reason: ego is now trying to stop and should continue stop decision if collision is detected in later calculation
  4. (ego is between 1st and 2nd pass_judge_line and the most probable collision is expected to happen in the 2nd attention lane)

For the 3rd condition, it is possible that ego stops with some overshoot into the unprotected area while trying to stop for collision detection, because ego should keep its stop decision while the UNSAFE decision holds, even if it passed the 1st_pass_judge_line during deceleration.

For the 4th condition, at intersections with a 2nd attention lane, even if ego is over the 1st_pass_judge_line, the intersection module still commands a stop if the most probable collision is expected to happen in the 2nd attention lane.

Also, if occlusion.enable is true, the position of the 1st_pass_judge_line changes to occlusion_peeking_stopline if ego passed the original 1st_pass_judge_line position while peeking. Otherwise ego could inadvertently judge that it passed the 1st_pass_judge_line during peeking and then abort peeking.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#data-structure","title":"Data Structure","text":"

Each data structure is defined in util_type.hpp.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersectionlanelets","title":"IntersectionLanelets","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersectionstoplines","title":"IntersectionStopLines","text":"

Each stop line is generated from interpolated path points to obtain a precise position.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#targetobject","title":"TargetObject","text":"

TargetObject holds the object, the lane it belongs to, and the corresponding stopline information.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#module-parameters","title":"Module Parameters","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#common","title":"common","text":"Parameter Type Description .attention_area_length double [m] range for object detection .attention_area_margin double [m] margin for expanding attention area width .attention_area_angle_threshold double [rad] threshold of angle difference between the detected object and lane .use_intersection_area bool [-] flag to use intersection_area for collision detection .default_stopline_margin double [m] margin before_stop_line .stopline_overshoot_margin double [m] margin for the overshoot from stopline .max_accel double [m/ss] max acceleration for stop .max_jerk double [m/sss] max jerk for stop .delay_response_time double [s] action delay before stop .enable_pass_judge_before_default_stopline bool [-] flag not to stop before default_stopline even if ego is over pass_judge_line"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#stuck_vehicleyield_stuck","title":"stuck_vehicle/yield_stuck","text":"Parameter Type Description stuck_vehicle.turn_direction - [-] turn_direction specifier for stuck vehicle detection stuck_vehicle.stuck_vehicle_detect_dist double [m] length toward from the exit of intersection for stuck vehicle detection stuck_vehicle.stuck_vehicle_velocity_threshold double [m/s] velocity threshold for stuck vehicle detection yield_stuck.distance_threshold double [m/s] distance threshold of yield stuck vehicle from ego path along the lane"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#collision_detection","title":"collision_detection","text":"Parameter Type Description .consider_wrong_direction_vehicle bool [-] flag to detect objects in the wrong direction .collision_detection_hold_time double [s] hold time of collision detection .min_predicted_path_confidence double [-] minimum confidence value of predicted path to use for collision detection .keep_detection_velocity_threshold double [s] ego velocity threshold for continuing collision detection before pass judge line .velocity_profile.use_upstream bool [-] flag to use velocity profile planned by upstream modules .velocity_profile.minimum_upstream_velocity double [m/s] minimum velocity of upstream velocity profile to avoid zero division .velocity_profile.default_velocity double [m/s] constant velocity profile when use_upstream is false .velocity_profile.minimum_default_velocity double [m/s] minimum velocity of default velocity profile to avoid zero division .yield_on_green_traffic_light - [-] description .ignore_amber_traffic_light - [-] description .ignore_on_red_traffic_light - [-] description"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion","title":"occlusion","text":"Parameter Type Description .enable bool [-] flag to calculate occlusion detection .occlusion_attention_area_length double [m] the length of attention are for occlusion detection .free_space_max int [-] maximum value of occupancy grid cell to treat at occluded .occupied_min int [-] minimum value of occupancy grid cell to treat at occluded .denoise_kernel double [m] morphology window size for preprocessing raw occupancy grid .attention_lane_crop_curvature_threshold double [m] curvature threshold for trimming curved part of the lane 
.attention_lane_crop_curvature_ds double [m] discretization interval of centerline for lane curvature calculation .creep_during_peeking.enable bool [-] flag to insert creep_velocity while peeking to intersection occlusion stopline .creep_during_peeking.creep_velocity double [m/s] the command velocity while peeking to intersection occlusion stopline .peeking_offset double [m] the offset of the front of the vehicle into the attention area for peeking to occlusion .occlusion_required_clearance_distance double [m] threshold for the distance to nearest occlusion cell from ego path .possible_object_bbox [double] [m] minimum bounding box size for checking if occlusion polygon is small enough .ignore_parked_vehicle_speed_threshold double [m/s] velocity threshold for checking parked vehicle .occlusion_detection_hold_time double [s] hold time of occlusion detection .temporal_stop_time_before_peeking double [s] temporal stop duration at default_stopline before starting peeking .temporal_stop_before_attention_area bool [-] flag to temporarily stop at first_attention_stopline before peeking into attention_area .creep_velocity_without_traffic_light double [m/s] creep velocity to occlusion_wo_tl_pass_judge_line .static_occlusion_with_traffic_light_timeout double [s] the timeout duration for ignoring static occlusion at intersection with traffic light"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#trouble-shooting","title":"Trouble shooting","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersection-module-stops-against-unrelated-vehicles","title":"Intersection module stops against unrelated vehicles","text":"

In this case, first visualize the /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection topic and check the attention_area polygon. The intersection module performs collision checking for vehicles running on this polygon, so if it extends to unintended lanes, the RightOfWay tag needs to be set.

By lowering common.attention_area_length you can check which lanes conflict with the intersection lane. Then set some of the conflicting lanes as yield_lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#the-stop-line-of-intersection-is-chattering","title":"The stop line of intersection is chattering","text":"

The parameter collision_detection.collision_detection_hold_time suppresses chattering by keeping the UNSAFE decision for this duration until a SAFE decision is finally made. The role of this parameter is to account for unstable detection/tracking of objects. Increasing this value suppresses the chattering, but it can excessively prolong the stopping duration.

If the chattering arises from the acceleration/deceleration of target vehicles, increase collision_detection.collision_start_margin_time and/or collision_detection.collision_end_margin_time.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#the-stop-line-is-released-too-fastslow","title":"The stop line is released too fast/slow","text":"

If the intersection wall appears too fast, or ego tends to stop too conservatively for upcoming vehicles, lower the parameter collision_detection.collision_start_margin_time. If it lasts too long after the target vehicle has passed, lower the parameter collision_detection.collision_end_margin_time.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#ego-suddenly-stops-at-intersection-with-traffic-light","title":"Ego suddenly stops at intersection with traffic light","text":"

If the traffic light color changes from AMBER/RED to UNKNOWN, the intersection module works in the GREEN color mode, so collision and occlusion are likely to be detected again.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-is-detected-overly","title":"Occlusion is detected overly","text":"

You can check which areas are detected as occlusion by visualizing /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection/occlusion_polygons.

If you do not want to detect occlusions far from ego (or want to ignore them), or want to lower the computational cost of occlusion detection, occlusion.occlusion_attention_area_length should be set to a lower value.

If you want to treat occlusions near ego more cautiously, set occlusion.occlusion_required_clearance_distance to a larger value. Then ego will approach the occlusion_peeking_stopline more closely to ensure a clearer FOV.

occlusion.possible_object_bbox is used to check whether the detected occlusion area is small enough that no vehicle larger than this size can exist inside it. By decreasing this size, ego will ignore small occluded areas.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occupancy-grid-map-tuning","title":"occupancy grid map tuning","text":"

Refer to the document of autoware_probabilistic_occupancy_grid_map for details. If occlusion tends to be detected at apparently free space, increase occlusion.free_space_max to ignore them.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#in-simple_planning_simulator","title":"in simple_planning_simulator","text":"

The intersection_occlusion feature is not recommended for use in planning_simulator because the laserscan_based_occupancy_grid_map generates unnatural UNKNOWN cells in a 2D manner:

Also, many users do not set traffic light information frequently, although it is critical for intersection_occlusion (in real traffic environments too).

For these reasons, occlusion.enable is false by default.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#on-real-vehicle-in-end-to-end-simulator","title":"on real vehicle / in end-to-end simulator","text":"

On real vehicle or in end-to-end simulator like AWSIM the following pointcloud_based_occupancy_grid_map configuration is highly recommended:

scan_origin_frame: \"velodyne_top\"\n\ngrid_map_type: \"OccupancyGridMapProjectiveBlindSpot\"\nOccupancyGridMapProjectiveBlindSpot:\nprojection_dz_threshold: 0.01 # [m] for avoiding null division\nobstacle_separation_threshold: 1.0 # [m] fill the interval between obstacles with unknown for this length\n

You should set the top lidar link as the scan_origin_frame; in the example it is velodyne_top. The OccupancyGridMapProjectiveBlindSpot method estimates the FOV by running projective ray-tracing from scan_origin to each obstacle or to the ground, and fills the cells in the \"shadow\" of the object as UNKNOWN.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#flowchart","title":"Flowchart","text":"

WIP

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#merge-from-private","title":"Merge From Private","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#role_1","title":"Role","text":"

When the ego vehicle enters a public road from a private road (e.g., a parking lot), it needs to face the public road and stop before entering it to make sure it is safe.

This module is activated when there is an intersection at the private area from which the vehicle enters the public road. The stop line is generated both when the goal is in the intersection lane and when the path goes beyond the intersection lane. The basic behavior is the same as the intersection module, but the ego vehicle must stop once at the stop line.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when the following conditions are met:

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#module-parameters_1","title":"Module Parameters","text":"Parameter Type Description merge_from_private_road/stop_duration_sec double [m] time margin to change state"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#known-issue","title":"Known Issue","text":"

If ego goes over the stop line by a certain distance, it will not transition out of the STOP state.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#test-maps","title":"Test Maps","text":"

The intersections lanelet map consists of a variety of intersections, including:

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#no-drivable-lane","title":"No Drivable Lane","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#role","title":"Role","text":"

This module plans the velocity of the related part of the path when the path refers to a no-drivable lane.

A no-drivable lane is one or more lanelets that are outside the operational design domain (ODD), i.e., the vehicle must not drive autonomously in these lanelets. A lanelet can be no drivable (out of ODD) for many reasons: technical limitations of the software and/or hardware, business requirements, safety considerations, etc., or a combination of these.

Some examples of No Drivable Lanes

A lanelet is marked as no drivable by adding the tag <tag k=\"no_drivable_lane\" v=\"yes\"/> under the relevant lanelet in the map file.

The goal of this module is to stop the vehicle before it enters the no-drivable lane (with a configurable stop margin), or to keep the vehicle stationary if autonomous mode started inside a no-drivable lane, and then ask the human driver to take responsibility for the driving task (Takeover Request / Request to Intervene).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#activation-timing","title":"Activation Timing","text":"

This function is activated when a lane id of the target path has a no-drivable-lane label (i.e., the no_drivable_lane attribute is yes).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double [m] margin for ego vehicle to stop before speed_bump print_debug_info bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#future-work","title":"Future Work","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#no-stopping-area","title":"No Stopping Area","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#role","title":"Role","text":"

This module plans the velocity so as to avoid stopping in a no_stopping_area.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#limitation","title":"Limitation","text":"

This module allows developers to design the vehicle velocity in the no_stopping_area module using specific rules. Once the ego vehicle passes the pass-through point, it does not insert a stop velocity and does not change its decision from GO. Also, this module only considers dynamic objects in order to avoid unnecessary stops.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#modelparameter","title":"ModelParameter","text":"Parameter Type Description state_clear_time double [s] time to clear stop state stuck_vehicle_vel_thr double [m/s] vehicles below this velocity are considered as stuck vehicle. stop_margin double [m] margin to stop line at no stopping area dead_line_margin double [m] if ego pass this position GO stop_line_margin double [m] margin to auto-gen stop line at no stopping area detection_area_length double [m] length of searching polygon stuck_vehicle_front_margin double [m] obstacle stop max distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#occlusion-spot","title":"Occlusion Spot","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#role","title":"Role","text":"

This module plans a safe velocity to slow down before reaching a collision point where a hidden object may dart out from an occlusion spot that the driver cannot see clearly because of obstacles.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_occlusion_spot is set to true. Creating a map tag for pedestrian-priority zones is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#limitation-and-todos","title":"Limitation and TODOs","text":"

This module is a prototype implementation for handling occlusion spots. To mitigate excessive deceleration caused by perception false positives, the detection method is selectable. This point has not been discussed in detail and needs to be improved.

TODOs are written in each Inner-workings / Algorithms (see the description below).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#logics-working","title":"Logics Working","text":"

There are several types of occlusions, such as \"occlusions generated by parked vehicles\" and \"occlusions caused by obstructions\". In situations such as driving on a road with obstacles, where people frequently dart out, all possible occlusion spots must be taken into account. This module considers all occlusion spots computed from the occupancy grid, but it is not reasonable to account for every occlusion spot, for example people jumping out from behind a guardrail or behind a cruising vehicle. Therefore, the detection area is currently limited, using predicted object information.

Note that this decision logic is still under development and needs to be improved.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#detectionarea-polygon","title":"DetectionArea Polygon","text":"

This module considers the TTV (the time for a hidden pedestrian to reach the collision point) computed from the pedestrian velocity and the lateral distance to the occlusion spot. The TTC is calculated from the ego velocity, acceleration, and longitudinal distance to the collision point using the motion velocity smoother. For fast computation, this module only considers occlusion spots whose TTV is less than the TTC, and only considers the area within \"max lateral distance\".
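The pruning described above can be sketched as follows (a simplified illustration assuming straight-line motion of the hidden pedestrian; names are hypothetical):

def keep_occlusion_spot(lateral_distance: float, pedestrian_vel: float,
                        ttc_ego: float, max_lateral_distance: float) -> bool:
    if lateral_distance > max_lateral_distance:
        return False  # outside the considered detection area
    # TTV: time for a hidden pedestrian to reach the collision point
    ttv = lateral_distance / max(pedestrian_vel, 1e-3)
    return ttv < ttc_ego  # only spots the pedestrian could reach in time matter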

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#occlusion-spot-occupancy-grid-base","title":"Occlusion Spot Occupancy Grid Base","text":"

This module considers every occlusion spot around the ego path computed from the occupancy grid. Due to the computational cost, the occupancy grid does not have a high resolution, which makes it noisy, so this module adds object information to the occupancy grid map.

TODO: consider the height of the obstacle point cloud when generating the occupancy grid.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#collision-free-judgement","title":"Collision Free Judgement","text":"

An obstacle that can run out from an occlusion must have free space between the occlusion spot and the intersection with the ego path.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#partition-lanelet","title":"Partition Lanelet","text":"

By using the lanelet information of the \"guard_rail\", \"fence\", and \"wall\" tags, it is possible to remove unwanted occlusion spots.

By using static object information, it is possible to make the occupancy grid more accurate.

Creating an occupancy grid dedicated to planning is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#possible-collision","title":"Possible Collision","text":"

An obstacle that could run out from the occlusion is blocked by a moving vehicle.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#about-safe-motion","title":"About safe motion","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#the-concept-of-safe-velocity-and-margin","title":"The Concept of Safe Velocity and Margin","text":"

The safe slowdown velocity is calculated from the parameters of the ego emergency braking system and the time to collision. This calculation is included, but dynamically changing the velocity is not recommended for the planner.

This module defines a safe margin to geometrically account for the ego stopping distance and the collision path point. While ego is cruising from the safe margin to the collision path point, the ego vehicle keeps the occlusion spot safe velocity.

Note: this logic assumes high-precision vehicle speed tracking, and the margin for the deceleration point might not be the best solution; a manual driver override is considered if a pedestrian actually runs out from the occlusion spot.

TODO: consider one of the best choices

  1. stop in front of occlusion spot
  2. insert 1km/h velocity in front of occlusion spot
  3. slowdown this way
  4. etc.
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#maximum-slowdown-velocity","title":"Maximum Slowdown Velocity","text":"

The maximum slowdown velocity is calculated from the ego current velocity and acceleration, with the maximum slowdown jerk and maximum slowdown acceleration, in order not to slow down too much.
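This bound can be illustrated by forward integration under the jerk and acceleration limits (a rough sketch assuming a simple point-mass model; the names and the integration scheme are illustrative, not the actual implementation):

def max_slowdown_velocity(v0: float, a0: float, max_slow_down_jerk: float,
                          max_slow_down_accel: float, horizon: float,
                          dt: float = 0.05) -> float:
    # max_slow_down_jerk and max_slow_down_accel are positive magnitudes;
    # integrate forward with jerk-limited braking: the result is the lowest
    # velocity reachable at `horizon` seconds without violating the limits
    v, a, t = v0, a0, 0.0
    while t < horizon and v > 0.0:
        a = max(a - max_slow_down_jerk * dt, -abs(max_slow_down_accel))
        v = max(v + a * dt, 0.0)
        t += dt
    return v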

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description pedestrian_vel double [m/s] maximum velocity assumed pedestrian coming out from occlusion point. pedestrian_radius double [m] assumed pedestrian radius which fits in occlusion spot. Parameter Type Description use_object_info bool [-] whether to reflect object info to occupancy grid map or not. use_partition_lanelet bool [-] whether to use partition lanelet map data. Parameter /debug Type Description is_show_occlusion bool [-] whether to show occlusion point markers. is_show_cv_window bool [-] whether to show open_cv debug window. is_show_processing_time bool [-] whether to show processing time. Parameter /threshold Type Description detection_area_length double [m] the length of path to consider occlusion spot stuck_vehicle_vel double [m/s] velocity below this value is assumed to stop lateral_distance double [m] maximum lateral distance to consider hidden collision Parameter /motion Type Description safety_ratio double [-] safety ratio for jerk and acceleration max_slow_down_jerk double [m/s^3] jerk for safe brake max_slow_down_accel double [m/s^2] deceleration for safe brake non_effective_jerk double [m/s^3] weak jerk for velocity planning. non_effective_acceleration double [m/s^2] weak deceleration for velocity planning. min_allowed_velocity double [m/s] minimum velocity allowed safe_margin double [m] maximum error to stop with emergency braking system. Parameter /detection_area Type Description min_occlusion_spot_size double [m] the length of path to consider occlusion spot slice_length double [m] the distance of divided detection area max_lateral_distance double [m] buffer around the ego path used to build the detection_area area. Parameter /grid Type Description free_space_max double [-] maximum value of a free space cell in the occupancy grid occupied_min double [-] buffer around the ego path used to build the detection_area area."},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#rough-overview-of-the-whole-process","title":"Rough overview of the whole process","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#detail-process-for-predicted-objectnot-updated","title":"Detail process for predicted object(not updated)","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#detail-process-for-occupancy-grid-base","title":"Detail process for Occupancy grid base","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#behavior-velocity-planner","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#overview","title":"Overview","text":"

behavior_velocity_planner is a planner that adjusts velocity based on traffic rules. It loads modules as plugins. Please refer to the links listed below for details on each module.

When each module plans velocity, it does so with respect to the base_link (center of the rear-wheel axle) pose. For example, in order to stop at a stop line with the vehicle's front on the stop line, it calculates the base_link position from the distance between base_link and the vehicle front, and modifies the path velocity from that base_link position.
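
For instance, the stop position on the path can be derived as in the following sketch (illustrative function name, not the actual API):

// Sketch: arc length at which base_link must stop so that the vehicle front,\n// not base_link, ends up exactly on the stop line.\ndouble calcStopArcLength(const double stop_line_arc_length, const double base_link_to_front)\n{\nreturn stop_line_arc_length - base_link_to_front;\n}\n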

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/path_with_lane_id tier4_planning_msgs::msg::PathWithLaneId path with lane_id ~input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~input/vehicle_odometry nav_msgs::msg::Odometry vehicle velocity ~input/dynamic_objects autoware_perception_msgs::msg::PredictedObjects dynamic objects ~input/no_ground_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/compare_map_filtered_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud filtered by compare map. Note that this is used only when the detection method of run out module is Points. ~input/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray traffic light states"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/path autoware_planning_msgs::msg::Path path to be followed ~output/stop_reasons tier4_planning_msgs::msg::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_modules vector<string> module names to launch forward_path_length double forward path length backward_path_length double backward path length max_accel double (to be a global parameter) max acceleration of the vehicle system_delay double (to be a global parameter) delay time until output control command delay_response_time double (to be a global parameter) delay time of the vehicle's response to control commands"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#traffic-light-handling-in-simreal","title":"Traffic Light Handling in sim/real","text":"

The handling of traffic light information varies depending on the usage. In the table below, the traffic signal topic element for the corresponding lane is denoted as info, and if info is not available, it is denoted as null.

module \\ case info is null info is not null intersection_occlusion(is_simulation = *) GO(occlusion is ignored) intersection_occlusion uses the latest non-UNKNOWN observation in the queue up to present. traffic_light(sim, is_simulation = true) GO traffic_light uses the perceived traffic light information at present directly. traffic_light(real, is_simulation = false) STOP crosswalk with Traffic Light(is_simulation = *) default map_based_prediction(is_simulation = *) default If a pedestrian traffic light is"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner_common/","title":"Behavior Velocity Planner Common","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner_common/#behavior-velocity-planner-common","title":"Behavior Velocity Planner Common","text":"

This package provides a behavior velocity interface without RTC, and common functions as a library, which are used in the behavior_velocity_planner node and modules.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_rtc_interface/","title":"Behavior Velocity RTC Interface","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_rtc_interface/#behavior-velocity-rtc-interface","title":"Behavior Velocity RTC Interface","text":"

This package provides a behavior velocity interface with RTC, which is used in the behavior_velocity_planner node and modules.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#run-out","title":"Run Out","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#role","title":"Role","text":"

run_out is a module that decelerates and stops the ego vehicle for dynamic obstacles such as pedestrians, bicycles, and motorcycles.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_run_out is set to true.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#flow-chart","title":"Flow chart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#preprocess-path","title":"Preprocess path","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#calculate-the-expected-target-velocity-for-ego-vehicle","title":"Calculate the expected target velocity for ego vehicle","text":"

Calculate the expected target velocity along the ego vehicle path in order to calculate the time to collision with obstacles more precisely. The expected target velocity is calculated with the Autoware velocity smoother module, using the current velocity, the current acceleration, and the velocity limits given by the map and the external API.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#extend-the-path","title":"Extend the path","text":"

The path is extended by the base_link-to-front length so that obstacles beyond the goal are also considered.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#trim-path-from-ego-position","title":"Trim path from ego position","text":"

The path is trimmed from the ego position up to a certain distance to reduce calculation time. The trimmed distance is specified by the detection_distance parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#preprocess-obstacles","title":"Preprocess obstacles","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#create-data-of-abstracted-dynamic-obstacle","title":"Create data of abstracted dynamic obstacle","text":"

This module can handle multiple types of obstacles by creating an abstracted dynamic obstacle data layer. Currently there are three detection methods (Object, ObjectWithoutPath, Points) used to create the abstracted obstacle data.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#abstracted-dynamic-obstacle","title":"Abstracted dynamic obstacle","text":"

The abstracted obstacle data contains the following information.

Name Type Description pose geometry_msgs::msg::Pose pose of the obstacle classifications std::vector<autoware_perception_msgs::msg::ObjectClassification> classifications with probability shape autoware_perception_msgs::msg::Shape shape of the obstacle predicted_paths std::vector<DynamicObstacle::PredictedPath> predicted paths with confidence. This data has no time step because the minimum and maximum velocities are used instead. min_velocity_mps float minimum velocity of the obstacle, specified by the dynamic_obstacle.min_vel_kmph parameter max_velocity_mps float maximum velocity of the obstacle, specified by the dynamic_obstacle.max_vel_kmph parameter

Set the maximum/minimum velocity of the object as parameters, adding enough margin to the expected velocity. These parameters are used to create polygons for collision detection.
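
The table above roughly corresponds to a structure like the following (a sketch mirroring the table, not the exact definition in the code base):

struct DynamicObstacle\n{\ngeometry_msgs::msg::Pose pose; // pose of the obstacle\nstd::vector<autoware_perception_msgs::msg::ObjectClassification> classifications; // with probability\nautoware_perception_msgs::msg::Shape shape; // shape of the obstacle\nstd::vector<PredictedPath> predicted_paths; // predicted paths with confidence, no time step\nfloat min_velocity_mps; // from dynamic_obstacle.min_vel_kmph\nfloat max_velocity_mps; // from dynamic_obstacle.max_vel_kmph\n};\n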

Future work: Determine the maximum/minimum velocity from the estimated velocity with covariance of the object

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#3-types-of-detection-method","title":"3 types of detection method","text":"

There are three detection methods to meet different safety and availability requirements. Their characteristics are shown in the table below. The Object method has high availability (fewer false positives) because it detects only objects whose predicted paths are on the lane. However, it is sometimes unsafe because perception may fail to detect obstacles or generate incorrect predicted paths. On the other hand, the Points method has high safety (fewer false negatives) because it uses a pointcloud as input. Since points have no predicted path, a path moving in the direction normal to the ego vehicle's path is assumed as the predicted path of the abstracted dynamic obstacle data. However, without proper tuning of the point filter, many points may be detected, resulting in very low availability. The ObjectWithoutPath method has characteristics intermediate between Object and Points.

Method Description Object use an object with its predicted path for collision detection. ObjectWithoutPath use an object but not its predicted path for collision detection. The path is replaced assuming that the object jumps out into the lane at the specified velocity. Points use filtered points for collision detection. The path is created assuming that the points jump out into the lane. Points are regarded as small circular-shaped obstacles.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-outside-of-partition","title":"Exclude obstacles outside of partition","text":"

This module can exclude obstacles located outside of partitions such as guardrails, fences, and walls. A lanelet map that contains partition information is required to use this feature. With this feature, unnecessary deceleration caused by obstacles that are unlikely to jump out into the lane can be reduced. You can choose whether to use this feature with the use_partition_lanelet parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-by-label","title":"Exclude obstacles by label","text":"

This module only acts on target obstacles defined by the target_obstacle_types parameter. If an obstacle of a type not specified by said parameter is detected, it will be ignored by this module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-that-are-already-on-the-ego-vehicles-path","title":"Exclude obstacles that are already on the ego vehicle's path","text":"

In cases where an obstacle is already on the ego vehicle's path, it can no longer \"cut in\" to the ego's path (which is the situation this module tries to handle), so it might be useful to exclude obstacles that are already within the vehicle's footprint path. By setting the parameter exclude_obstacles_already_in_path to true, this module will exclude obstacles that are considered to have been on the ego vehicle's path for more than keep_obstacle_on_path_time_threshold. The module considers the ego vehicle's closest path point to each obstacle's current position and determines the lateral distance between the obstacle and the right and left sides of the ego vehicle. If the obstacle is located within the left and right extremes of the vehicle for that pose, it is considered to be inside the ego vehicle's footprint path and is excluded. The virtual width of the vehicle used to detect whether an obstacle is within the path can be adjusted with the ego_footprint_extra_margin parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-that-cross-the-ego-vehicles-cut-line","title":"Exclude obstacles that cross the ego vehicle's \"cut line\"","text":"

This module can exclude obstacles that have predicted paths that will cross the back side of the ego vehicle. It excludes obstacles if their predicted path crosses the ego's \"cut line\". The \"cut line\" is a virtual line segment that is perpendicular to the ego vehicle and that passes through the ego's base link.

You can choose whether to use this feature by setting the parameter use_ego_cut_line to true or false. The width of the line can be tuned with the parameter ego_cut_line_length.
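
Conceptually, the exclusion test reduces to a 2D segment intersection between each edge of a predicted path and the cut line segment, e.g. (an illustrative sketch, ignoring collinear edge cases):

struct Point2d { double x, y; };\n\n// signed area of the triangle (o, a, b)\ndouble cross(const Point2d & o, const Point2d & a, const Point2d & b)\n{\nreturn (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);\n}\n\n// true if segments pq and rs properly intersect\nbool segmentsIntersect(const Point2d & p, const Point2d & q, const Point2d & r, const Point2d & s)\n{\nconst double d1 = cross(p, q, r);\nconst double d2 = cross(p, q, s);\nconst double d3 = cross(r, s, p);\nconst double d4 = cross(r, s, q);\nreturn (d1 * d2 < 0.0) && (d3 * d4 < 0.0);\n}\n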

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#collision-detection","title":"Collision detection","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#detect-collision-with-dynamic-obstacles","title":"Detect collision with dynamic obstacles","text":"

Along the ego vehicle path, the points where collision detection is performed are determined at intervals of detection_span.

The travel time to each point is calculated from the expected target velocity.

For each point, collision detection is performed using the footprint polygon of the ego vehicle and the polygon of the predicted location of the obstacle. The predicted location of the obstacle is described as a rectangle or polygon whose range is calculated from the minimum velocity, the maximum velocity, and the ego vehicle's travel time to the point. If the input type of the dynamic obstacle is Points, the obstacle shape is defined as a small cylinder.

Since collision detection is calculated between two polygons, multiple collision points are generally detected. We therefore select, as the collision point, the point that is on the same side as the obstacle and closest to the ego vehicle.
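
The longitudinal range an obstacle may cover while the ego vehicle travels to a check point (used to build the predicted location polygon above) can be sketched as follows (hypothetical helper; the actual polygon construction is more involved):

#include <utility>\n\n// Sketch: travel range of the obstacle while ego reaches the check point.\nstd::pair<double, double> calcObstacleTravelRange(const double min_velocity_mps, const double max_velocity_mps, const double ego_travel_time_s)\n{\nreturn {min_velocity_mps * ego_travel_time_s, max_velocity_mps * ego_travel_time_s};\n}\n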

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#insert-velocity","title":"Insert velocity","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#insert-velocity-to-decelerate-for-obstacles","title":"Insert velocity to decelerate for obstacles","text":"

If a collision is detected, a stop point is inserted at the distance of (base link to front + stop margin) before the selected collision point. The base link to front means the distance between the base_link (center of the rear-wheel axle) and the front of the vehicle. The stop margin is determined by the stop_margin parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#insert-velocity-to-approach-the-obstacles","title":"Insert velocity to approach the obstacles","text":"

If the Points or ObjectWithoutPath method is selected, the ego vehicle sometimes keeps stopping in front of the obstacle. To avoid this problem, this module has an option to approach the obstacle at a slow velocity after stopping. If the approaching.enable parameter is set to true, the ego vehicle approaches the obstacle after it has stopped for state.stop_time_thresh seconds. The maximum approaching velocity can be specified with the approaching.limit_vel_kmph parameter. The decision to approach the obstacle is determined by a simple state transition, as shown in the following image.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#limit-velocity-with-specified-jerk-and-acc-limit","title":"Limit velocity with specified jerk and acc limit","text":"

The maximum slowdown velocity is calculated in order not to slow down excessively. See the Occlusion Spot document for more details. You can choose whether to use this feature with the slow_down_limit.enable parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description detection_method string [-] candidate: Object, ObjectWithoutPath, Points target_obstacle_types vector of string [-] specifies which obstacle types will be considered by the module, if the obstacles classification type is not written here, it will be ignored. candidate: [\"PEDESTRIAN\", \"BICYCLE\",\"MOTORCYCLE\"] use_partition_lanelet bool [-] whether to use partition lanelet map data specify_decel_jerk bool [-] whether to specify jerk when ego decelerates stop_margin double [m] the vehicle decelerates to be able to stop with this margin passing_margin double [m] the vehicle begins to accelerate if the vehicle's front in predicted position is ahead of the obstacle + this margin deceleration_jerk double [m/s^3] ego decelerates with this jerk when stopping for obstacles detection_distance double [m] ahead distance from ego to detect the obstacles detection_span double [m] calculate collision with this span to reduce calculation time min_vel_ego_kmph double [km/h] min velocity to calculate time to collision Parameter /detection_area Type Description margin_ahead double [m] ahead margin for detection area polygon margin_behind double [m] behind margin for detection area polygon Parameter /dynamic_obstacle Type Description use_mandatory_area double [-] whether to use mandatory detection area assume_fixed_velocity.enable double [-] If enabled, the obstacle's velocity is assumed to be within the minimum and maximum velocity values specified below assume_fixed_velocity.min_vel_kmph double [km/h] minimum velocity for dynamic obstacles assume_fixed_velocity.max_vel_kmph double [km/h] maximum velocity for dynamic obstacles diameter double [m] diameter of obstacles. used for creating dynamic obstacles from points height double [m] height of obstacles. used for creating dynamic obstacles from points max_prediction_time double [sec] create predicted path until this time time_step double [sec] time step for each path step. used for creating dynamic obstacles from points or objects without path points_interval double [m] divide obstacle points into groups with this interval, and detect only lateral nearest point. used only for Points method Parameter /approaching Type Description enable bool [-] whether to enable approaching after stopping margin double [m] distance on how close ego approaches the obstacle limit_vel_kmph double [km/h] limit velocity for approaching after stopping Parameter /state Type Description stop_thresh double [m/s] threshold to decide if ego is stopping stop_time_thresh double [sec] threshold for stopping time to transit to approaching state disable_approach_dist double [m] end the approaching state if distance to the obstacle is longer than this value keep_approach_duration double [sec] keep approach state for this duration to avoid chattering of state transition Parameter /slow_down_limit Type Description enable bool [-] whether to enable to limit velocity with max jerk and acc max_jerk double [m/s^3] minimum jerk deceleration for safe brake. max_acc double [m/s^2] minimum accel deceleration for safe brake. 
Parameter /ignore_momentary_detection Type Description enable bool [-] whether to ignore momentary detection time_threshold double [sec] ignores detections that persist for less than this duration"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#speed-bump","title":"Speed Bump","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#role","title":"Role","text":"

This module plans the velocity of the related part of the path when a speed bump regulatory element refers to it.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#activation-timing","title":"Activation Timing","text":"

The manager launches the speed bump scene module when a speed bump regulatory element refers to the reference path.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description slow_start_margin double [m] margin for ego vehicle to slow down before speed_bump slow_end_margin double [m] margin for ego vehicle to accelerate after speed_bump print_debug_info bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#speed-calculation","title":"Speed Calculation","text":" Parameter Type Description min_height double [m] minimum height assumption of the speed bump max_height double [m] maximum height assumption of the speed bump min_speed double [m/s] minimum speed assumption of slow down speed max_speed double [m/s] maximum speed assumption of slow down speed"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Note: if the slow_down_speed tag is used in the speed bump annotation, the speed calculated with respect to the speed bump height is ignored. In that case, the specified slow_down_speed value in [kph] is used.
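
When the tag is not present, the slow-down speed is derived from the bump height. Assuming a linear interpolation between (min_height, max_speed) and (max_height, min_speed), as suggested by the Speed Calculation table above, a sketch could look like this (illustrative, not the exact implementation):

#include <algorithm>\n\ndouble calcSlowDownSpeed(const double bump_height, const double min_height, const double max_height, const double min_speed, const double max_speed)\n{\n// 0 at min_height, 1 at max_height\nconst double ratio = std::clamp((bump_height - min_height) / (max_height - min_height), 0.0, 1.0);\n// higher bumps map to lower speeds\nreturn max_speed + ratio * (min_speed - max_speed);\n}\n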

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#future-work","title":"Future Work","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/","title":"Stop Line","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#stop-line","title":"Stop Line","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#role","title":"Role","text":"

This module plans the vehicle's velocity to ensure it stops just before stop lines and can resume movement after stopping.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a stop line in a target lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double Margin that the vehicle tries to stop in before stop_line stop_duration_sec double [s] Time parameter for the ego vehicle to stop before stop line hold_stop_margin_distance double [m] Parameter for restart prevention (See Algorithm section). Also, when the ego vehicle is within this distance from a stop line, the ego state becomes STOPPED from APPROACHING use_initialization_stop_state bool Flag to determine whether to return to the approaching state when the vehicle moves away from a stop line."},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#flowchart","title":"Flowchart","text":"

This algorithm is based on segments. A segment consists of two node points. This is useful for removing boundary conditions, because if segment(i) exists, we can assume that node(i) and node(i+1) also exist.

First, this algorithm finds the collision between the reference path and the stop line, which yields the collision segment and the collision point.

Next, based on the collision point, it finds the offset segment by iterating backward over points up to a specific offset length. The offset length is stop_margin (parameter) + base_link to front (to align the vehicle head pose with the stop line). This yields the offset segment and the offset from the segment start.

After that, the offset point is calculated from the offset segment and the offset. This becomes stop_pose.
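
A sketch of this backward walk (hypothetical names; clamping at the start of the path is omitted):

#include <cmath>\n#include <cstddef>\n#include <vector>\n\nstruct Node { double x, y; };\nstruct OffsetResult { size_t segment_index; double offset_from_start; };\n\n// Walk backward from the collision point by offset_length = stop_margin + base_link_to_front.\nOffsetResult findOffsetSegment(const std::vector<Node> & nodes, size_t seg, const double dist_from_seg_start_to_collision, const double offset_length)\n{\ndouble remaining = offset_length;\ndouble available = dist_from_seg_start_to_collision; // distance usable inside the collision segment\nwhile (remaining > available && seg > 0) {\nremaining -= available;\n--seg;\navailable = std::hypot(nodes[seg + 1].x - nodes[seg].x, nodes[seg + 1].y - nodes[seg].y);\n}\nreturn {seg, available - remaining}; // offset measured from the segment start\n}\n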

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#restart-prevention","title":"Restart Prevention","text":"

If the vehicle requires X meters (e.g. 0.5 meters) to stop once it starts moving due to poor vehicle control performance, it may overshoot the stopping position, which must be strictly observed. This happens when the vehicle begins to move in order to approach a nearby stop point (e.g. 0.3 meters away).

This module includes the parameter hold_stop_margin_distance to prevent redundant restarts in such a situation. If the vehicle is stopped within hold_stop_margin_distance meters of the stop point (_front_to_stop_line < hold_stop_margin_distance), the module determines that the vehicle has already stopped at the stop point and will maintain the current stopping position, even if the vehicle has come to a stop due to other factors.

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#template","title":"Template","text":"

A template for behavior velocity modules based on the autoware_behavior_velocity_speed_bump_module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#autoware-behavior-velocity-module-template","title":"Autoware Behavior Velocity Module Template","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#scene","title":"Scene","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemodule-class","title":"TemplateModule Class","text":"

The TemplateModule class serves as a foundation for creating a scene module within the Autoware behavior velocity planner. It defines the core methods and functionality needed for the module's behavior. You should replace the placeholder code with actual implementations tailored to your specific behavior velocity module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#constructor","title":"Constructor","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#modifypathvelocity-method","title":"modifyPathVelocity Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#createdebugmarkerarray-method","title":"createDebugMarkerArray Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#createvirtualwalls-method","title":"createVirtualWalls Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#manager","title":"Manager","text":"

The management of your modules is defined in manager.hpp and manager.cpp, and is handled by two classes:

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemodulemanager-class","title":"TemplateModuleManager Class","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#constructor-templatemodulemanager","title":"Constructor TemplateModuleManager","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#getmodulename-method","title":"getModuleName() Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#launchnewmodules-method","title":"launchNewModules() Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#getmoduleexpiredfunction-method","title":"getModuleExpiredFunction() Method","text":"

Please note that the specific functionality of the methods launchNewModules() and getModuleExpiredFunction() would depend on the details of your behavior velocity modules and how they are intended to be managed within the Autoware system. You would need to implement these methods according to your module's requirements.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemoduleplugin-class","title":"TemplateModulePlugin Class","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemoduleplugin-class_1","title":"TemplateModulePlugin Class","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#example-usage","title":"Example Usage","text":"

In the following example, we take each point of the path and multiply its velocity by 2, essentially doubling the speed. Note that the velocity smoother will further modify the path speed after all the behavior velocity modules are executed.

bool TemplateModule::modifyPathVelocity(\nPathWithLaneId * path, [[maybe_unused]] StopReason * stop_reason)\n{\n// double the longitudinal velocity at every path point\nfor (auto & p : path->points) {\np.point.longitudinal_velocity_mps *= 2.0;\n}\n\nreturn false;\n}\n
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#traffic-light","title":"Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#role","title":"Role","text":"

This module judges whether the vehicle can enter an intersection based on the traffic light status, and plans a stop velocity if necessary. It is designed for rule-based velocity decisions whose behavior is easy for developers to design, and it generates a proper velocity for the traffic light scene.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#limitations","title":"Limitations","text":"

This module allows developers to design the STOP/GO behavior at traffic lights using specific rules. Due to the nature of rule-based planning, the algorithm greatly depends on the accuracy of object detection and traffic light perception. Also, this module only handles STOP/GO decisions at traffic light scenes, so rushing or quick decisions according to traffic conditions are future work.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a traffic light in the ego lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#algorithm","title":"Algorithm","text":"
  1. Obtains the traffic light mapped to the route and the stop line corresponding to that traffic light from the map information.

  2. Uses the traffic light recognition result with the highest reliability, and generates a stop point if its color is not green or a corresponding arrow signal.

  3. When the vehicle's current velocity is

  4. When it is judged that the vehicle cannot stop before the stop line, Autoware chooses one of the following behaviors

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#dilemma-zone","title":"Dilemma Zone","text":" "},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double [m] margin before stop point tl_state_timeout double [s] time out for detected traffic light result. stop_time_hysteresis double [s] time threshold to decide stop planning for chattering prevention yellow_lamp_period double [s] time for yellow lamp enable_pass_judge bool [-] whether to use pass judge"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#virtual-traffic-light","title":"Virtual Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#role","title":"Role","text":"

Autonomous vehicles have to cooperate with infrastructure such as:

The following items are example cases:

  1. Traffic control by traffic lights with V2X support

  2. Intersection coordination of multiple vehicles by FMS.

It is possible to implement each function individually; however, the use cases can be generalized with the following three elements.

  1. start: Start a cooperation procedure after the vehicle enters a certain zone.
  2. stop: Stop at a defined stop line according to the status received from infrastructures.
  3. end: Finalize the cooperation procedure after the vehicle reaches the exit zone. This should be done within the range of stable communication.

This module sends and receives status to and from the infrastructure, and plans the velocity according to the cooperation result.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#system-configuration-diagram","title":"System Configuration Diagram","text":"

Planner and each infrastructure communicate with each other using common abstracted messages.

FMS: Intersection coordination when multiple vehicles are in operation and the relevant lane is occupied

Support different communication methods for different infrastructures

Have different meta-information for each geographic location

FMS: Fleet Management System

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description max_delay_sec double [s] maximum allowed delay for command near_line_distance double [m] threshold distance to stop line to check ego stop. dead_line_margin double [m] threshold distance that this module continue to insert stop line. hold_stop_margin_distance double [m] parameter for restart prevention (See following section) check_timeout_after_stop_line bool [-] check timeout to stop when linkage is disconnected"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#restart-prevention","title":"Restart prevention","text":"

If the vehicle requires X meters (e.g. 0.5 meters) to stop once it starts moving due to poor vehicle control performance, it may overshoot the stopping position, which must be strictly observed. This happens when the vehicle begins to move in order to approach a nearby stop point (e.g. 0.3 meters away).

This module includes the parameter hold_stop_margin_distance to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters of the module's stop point (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped at the stop point and plans to keep the current stopping position, even if the vehicle has stopped due to other factors.

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#map-format","title":"Map Format","text":" \\[ \\begin{align} l_{\\mathrm{min}} = -\\frac{v_0^2}{2 a_{\\mathrm{min}}} \\end{align} \\]"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_walkway_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_walkway_module/#walkway","title":"Walkway","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_walkway_module/#role","title":"Role","text":"

This module decides to stop before the ego vehicle crosses a walkway, including a crosswalk, to enter or exit a private area.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/","title":"Index","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#dynamic-obstacle-stop","title":"Dynamic Obstacle Stop","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#role","title":"Role","text":"

dynamic_obstacle_stop is a module that stops the ego vehicle from entering the immediate path of a dynamic object.

The immediate path of an object is the area that the object would traverse during a given time horizon, assuming constant velocity and heading.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#activation-timing","title":"Activation Timing","text":"

This module is activated if the launch parameter launch_dynamic_obstacle_stop_module is set to true in the motion planning launch file.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The module inserts a stop point where the ego trajectory collides with the immediate path of an object. The overall module flow can be summarized in the following 4 steps.

  1. Filter dynamic objects.
  2. Calculate immediate path rectangles of the dynamic objects.
  3. Find earliest collision where ego collides with an immediate path rectangle.
  4. Insert stop point before the collision.

In addition to these 4 steps, 2 mechanisms are in place to make the stop point of this module more stable: a hysteresis and a decision duration buffer.

The hysteresis parameter is used when a stop point was already inserted in the previous iteration; it increases the range in which dynamic objects are considered close enough to the ego trajectory to be used by the module.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#filter-dynamic-objects","title":"Filter dynamic objects","text":"

An object is considered by the module only if it meets all of the following conditions:

An object is considered unavoidable if it is heading towards the ego vehicle such that even if ego stops, a collision would still occur (assuming the object keeps driving in a straight line).

For the last condition, the object is considered close enough if its lateral distance from the ego trajectory is less than the threshold parameter minimum_object_distance_from_ego_trajectory plus half the width of ego and of the object (including the extra_object_width parameter). In addition, the value of the hysteresis parameter is added to the minimum distance if a stop point was inserted in the previous iteration.
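
Put as a condition (a sketch; the exact handling of extra_object_width relative to the object half-width is an assumption):

#include <cmath>\n\nbool isCloseEnough(const double lateral_distance, const double ego_width, const double object_width, const double extra_object_width, const double minimum_object_distance_from_ego_trajectory, const bool stop_inserted_previously, const double hysteresis)\n{\ndouble threshold = minimum_object_distance_from_ego_trajectory + 0.5 * ego_width + 0.5 * object_width + extra_object_width;\nif (stop_inserted_previously) {\nthreshold += hysteresis;\n}\nreturn std::abs(lateral_distance) < threshold;\n}\n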

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#calculate-immediate-path-rectangles","title":"Calculate immediate path rectangles","text":"

For each considered object, a rectangle is created representing its immediate path. The rectangle has the width of the object plus the extra_object_width parameter and its length is the current speed of the object multiplied by the time_horizon.
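
As a sketch (illustrative types; the real implementation builds a polygon):

#include <cmath>\n\nstruct ImmediatePathRect { double center_x, center_y, yaw, length, width; };\n\nImmediatePathRect makeImmediatePath(const double x, const double y, const double yaw, const double object_width, const double object_speed, const double extra_object_width, const double time_horizon)\n{\nconst double length = object_speed * time_horizon;\n// the rectangle extends forward from the object along its heading\nreturn {x + 0.5 * length * std::cos(yaw), y + 0.5 * length * std::sin(yaw), yaw, length, object_width + extra_object_width};\n}\n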

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#find-earliest-collision","title":"Find earliest collision","text":"

We build the ego trajectory footprints as the set of ego footprint polygons projected on each trajectory point. We then calculate the intersections between these ego trajectory footprints and the previously calculated immediate path rectangles. An intersection is ignored if the object is not driving toward ego, i.e., the absolute angle between the object and the trajectory point is larger than \\(\\frac{3 \\pi}{4}\\).

The collision point with the lowest arc length when projected on the ego trajectory will be used to calculate the final stop point.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#insert-stop-point","title":"Insert stop point","text":"

Before inserting a stop point, we calculate the range of trajectory arc lengths where it can be inserted. The minimum is calculated to satisfy the acceleration and jerk constraints of the vehicle. If a stop point was inserted in the previous iteration of the module, its arc length is used as the maximum. Finally, the stop point arc length is calculated to be the arc length of the previously found collision point minus the stop_distance_buffer and the ego vehicle longitudinal offset, clamped between the minimum and maximum values.
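
In pseudocode form, the clamping described above could look like this (an illustrative sketch):

#include <algorithm>\n\ndouble calcFinalStopArcLength(const double collision_arc_length, const double stop_distance_buffer, const double ego_longitudinal_offset, const double min_arc_length, const double max_arc_length)\n{\nconst double raw = collision_arc_length - stop_distance_buffer - ego_longitudinal_offset;\nreturn std::clamp(raw, min_arc_length, max_arc_length);\n}\n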

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#duration-buffer","title":"Duration buffer","text":"

To prevent chatter caused by noisy perception, two duration parameters are used: add_stop_duration_buffer, the duration for which a collision must be continuously detected before a stop decision is added, and remove_stop_duration_buffer, the duration between no collision being detected and the stop decision being removed.

Timers and collision points are tracked for each dynamic object independently.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description extra_object_width double [m] extra width around detected objects minimum_object_velocity double [m/s] objects with a velocity bellow this value are ignored stop_distance_buffer double [m] extra distance to add between the stop point and the collision point time_horizon double [s] time horizon used for collision checks hysteresis double [m] once a collision has been detected, this hysteresis is used on the collision detection add_stop_duration_buffer double [s] duration where a collision must be continuously detected before a stop decision is added remove_stop_duration_buffer double [s] duration between no collision being detected and the stop decision being remove minimum_object_distance_from_ego_trajectory double [m] minimum distance between the footprints of ego and an object to consider for collision ignore_unavoidable_collisions bool [-] if true, ignore collisions that cannot be avoided by stopping (assuming the obstacle continues going straight)"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/","title":"Obstacle Velocity Limiter","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacle-velocity-limiter","title":"Obstacle Velocity Limiter","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#purpose","title":"Purpose","text":"

This node limits the velocity when driving in the direction of an obstacle. For example, it allows reducing the velocity when driving close to a guard rail in a curve.

Without this node With this node"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Using the parameter min_ttc (minimum time to collision), the node sets velocity limits such that no collision with an obstacle would occur, even without new control inputs for a duration of min_ttc.

To achieve this, the motion of the ego vehicle is simulated forward in time at each point of the trajectory to create a corresponding footprint. If the footprint collides with some obstacle, the velocity at the trajectory point is reduced such that the new simulated footprint no longer collides.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#simulated-motion-footprint-and-collision-distance","title":"Simulated Motion, Footprint, and Collision Distance","text":"

The motion of the ego vehicle is simulated at each trajectory point using the heading, velocity, and steering defined at the point. Footprints are then constructed from these simulations and checked for collision. If a collision is found, the distance from the trajectory point is used to calculate the adjusted velocity that would produce a collision-free footprint. The parameter simulation.distance_method allows switching between an exact distance calculation and a less expensive approximation using a simple Euclidean distance.

Two models can be selected with parameter simulation.model for simulating the motion of the vehicle: a simple particle model and a more complicated bicycle model.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#particle-model","title":"Particle Model","text":"

The particle model uses the constant heading and velocity of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to a straight line and the footprint to a rectangle.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#footprint","title":"Footprint","text":"

The rectangle footprint is built from 2 lines parallel to the simulated forward motion and at a distance of half the vehicle width.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#distance","title":"Distance","text":"

When a collision point is found within the footprint, the distance is calculated as described in the following figure.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#bicycle-model","title":"Bicycle Model","text":"

The bicycle model uses the constant heading, velocity, and steering of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to an arc around the circle of curvature associated with the steering. Uncertainty in the steering can be introduced with the simulation.steering_offset parameter which will generate a range of motion from a left-most to a right-most steering. This results in 3 curved lines starting from the same trajectory point. A parameter simulation.nb_points is used to adjust the precision of these lines, with a minimum of 2 resulting in straight lines and higher values increasing the precision of the curves.

By default, the steering values contained in the trajectory message are used. The parameter trajectory_preprocessing.calculate_steering_angles allows recalculating these values when set to true.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#footprint_1","title":"Footprint","text":"

The footprint of the bicycle model is created from lines parallel to the left and right simulated motion at a distance of half the vehicle width. In addition, the two points on the left and right of the end point of the central simulated motion are used to complete the polygon.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#distance_1","title":"Distance","text":"

The distance to a collision point is calculated by finding the curvature circle passing through the trajectory point and the collision point.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacle-detection","title":"Obstacle Detection","text":"

Obstacles are represented as points or linestrings (i.e., sequences of points) around the obstacles and are constructed from an occupancy grid, a pointcloud, or the lanelet map. The lanelet map is always checked for obstacles, but the other source is switched using the parameter obstacles.dynamic_source.

To efficiently find obstacles intersecting with a footprint, they are stored in an R-tree. Two trees are used: one for the obstacle points, and one for the obstacle linestrings (which are decomposed into segments to simplify the R-tree).

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacle-masks","title":"Obstacle masks","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#dynamic-obstacles","title":"Dynamic obstacles","text":"

Moving obstacles such as other cars should not be considered by this module. These obstacles are detected by the perception modules and represented as polygons. Obstacles inside these polygons are ignored.

Only dynamic obstacles with a velocity above parameter obstacles.dynamic_obstacles_min_vel are removed.

To deal with delays and precision errors, the polygons can be enlarged with parameter obstacles.dynamic_obstacles_buffer.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacles-outside-of-the-safety-envelope","title":"Obstacles outside of the safety envelope","text":"

Obstacles that are not inside any forward simulated footprint are ignored if parameter obstacles.filter_envelope is set to true. The safety envelope polygon is built from all the footprints and used as a positive mask on the occupancy grid or pointcloud.

This option can reduce the total number of obstacles, which reduces the cost of collision detection. However, the cost of masking the envelope is usually too high to be worthwhile.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacles-on-the-ego-path","title":"Obstacles on the ego path","text":"

If parameter obstacles.ignore_obstacles_on_path is set to true, a polygon mask is built from the trajectory and the vehicle dimension. Any obstacle in this polygon is ignored.

The size of the polygon can be increased using parameter obstacles.ignore_extra_distance which is added to the vehicle lateral offset.

This option is a bit expensive and should only be used in case of noisy dynamic obstacles where obstacles are wrongly detected on the ego path, causing unwanted velocity limits.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#lanelet-map","title":"Lanelet Map","text":"

Information about static obstacles can be stored in the Lanelet map using the value of the type tag of linestrings. If a linestring has a type with one of the values from the parameter obstacles.static_map_tags, it will be used as an obstacle.

Obstacles from the lanelet map are not impacted by the masks.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#occupancy-grid","title":"Occupancy Grid","text":"

Masking is performed by iterating through the cells inside each polygon mask using the autoware::grid_map_utils::PolygonIterator function. A threshold is then applied to only keep cells with an occupancy value above the parameter obstacles.occupancy_grid_threshold. Finally, the occupancy grid is converted to an image and obstacle linestrings are extracted using the OpenCV function findContours.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#pointcloud","title":"Pointcloud","text":"

Masking is performed using the pcl::CropHull function. Points from the pointcloud are then directly used as obstacles.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#velocity-adjustment","title":"Velocity Adjustment","text":"

If a collision is found, the velocity at the trajectory point is adjusted such that the resulting footprint would no longer collide with an obstacle: \\(velocity = \\frac{dist\\_to\\_collision}{min\\_ttc}\\)

To prevent sudden deceleration of the ego vehicle, the parameter max_deceleration limits the deceleration relative to the current ego velocity. For a trajectory point occurring at a duration t in the future (calculated from the original velocity profile), the adjusted velocity cannot be set lower than \\(v_{current} - t * max\\_deceleration\\).

Furthermore, a parameter min_adjusted_velocity provides a lower bound on the modified velocity.
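
Combining these bounds, the adjusted velocity can be sketched as follows (an illustrative sketch; v_original denotes the original profile velocity at the point):

#include <algorithm>\n\ndouble adjustVelocity(const double v_original, const double dist_to_collision, const double min_ttc, const double v_current, const double t, const double max_deceleration, const double min_adjusted_velocity)\n{\nconst double v_ttc = dist_to_collision / min_ttc; // collision-free bound\nconst double v_decel = v_current - t * max_deceleration; // deceleration lower bound\nreturn std::min(v_original, std::max({v_ttc, v_decel, min_adjusted_velocity}));\n}\n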

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#trajectory-preprocessing","title":"Trajectory preprocessing","text":"

The node only modifies part of the input trajectory, starting from the current ego position. Parameter trajectory_preprocessing.start_distance is used to adjust how far ahead of the ego position the velocities will start being modified. Parameters trajectory_preprocessing.max_length and trajectory_preprocessing.max_duration are used to control how much of the trajectory will see its velocity adjusted.

To reduce computation cost at the cost of precision, the trajectory can be downsampled using the parameter trajectory_preprocessing.downsample_factor. For example, a value of 1 means all trajectory points will be evaluated, while a value of 10 means only 1/10th of the points will be evaluated.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#parameters","title":"Parameters","text":"Name Type Description min_ttc float [s] required minimum time with no collision at each point of the trajectory assuming constant heading and velocity. distance_buffer float [m] required distance buffer with the obstacles. min_adjusted_velocity float [m/s] minimum adjusted velocity this node can set. max_deceleration float [m/s\u00b2] maximum deceleration an adjusted velocity can cause. trajectory_preprocessing.start_distance float [m] controls from which part of the trajectory (relative to the current ego pose) the velocity is adjusted. trajectory_preprocessing.max_length float [m] controls the maximum length (starting from the start_distance) where the velocity is adjusted. trajectory_preprocessing.max_distance float [s] controls the maximum duration (measured from the start_distance) where the velocity is adjusted. trajectory_preprocessing.downsample_factor int trajectory downsampling factor to allow tradeoff between precision and performance. trajectory_preprocessing.calculate_steering_angle bool if true, the steering angles of the trajectory message are not used but are recalculated. simulation.model string model to use for forward simulation. Either \"particle\" or \"bicycle\". simulation.distance_method string method to use for calculating distance to collision. Either \"exact\" or \"approximation\". simulation.steering_offset float offset around the steering used by the bicycle model. simulation.nb_points int number of points used to simulate motion with the bicycle model. obstacles.dynamic_source string source of dynamic obstacle used for collision checking. Can be \"occupancy_grid\", \"point_cloud\", or \"static_only\" (no dynamic obstacle). obstacles.occupancy_grid_threshold int value in the occupancy grid above which a cell is considered an obstacle. obstacles.dynamic_obstacles_buffer float buffer around dynamic obstacles used when masking an obstacle in order to prevent noise. obstacles.dynamic_obstacles_min_vel float velocity above which to mask a dynamic obstacle. obstacles.static_map_tags string list linestring of the lanelet map with this tags are used as obstacles. obstacles.filter_envelope bool wether to use the safety envelope to filter the dynamic obstacles source."},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The velocity profile produced by this node is not meant to be a realistic velocity profile and can contain sudden jumps of velocity with no regard for acceleration and jerk. This velocity profile is meant to be used as an upper bound on the actual velocity of the vehicle.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":"

The critical case for this node is when an obstacle is falsely detected very close to the trajectory such that the corresponding velocity suddenly becomes very low. This can cause a sudden brake and two mechanisms can be used to mitigate these errors.

The parameter min_adjusted_velocity allows setting a minimum for the adjusted velocity, preventing the node from slowing down the vehicle too much. The parameter max_deceleration allows setting a maximum deceleration (relative to the current ego velocity) that the adjusted velocity can incur.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/","title":"Out of Lane","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#out-of-lane","title":"Out of Lane","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#role","title":"Role","text":"

There are cases where the ego vehicle footprint goes out of the driving lane, for example when taking a narrow turn with a large vehicle. The out_of_lane module adds deceleration and stop points to the ego trajectory in order to prevent collisions from occurring in these out of lane cases.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#activation","title":"Activation","text":"

This module is activated if the launch parameter launch_out_of_lane_module is set to true.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This module calculates whether out of lane collisions occur and inserts stop points before the collisions if necessary.

The algorithm assumes that the input ego trajectory contains accurate time_from_start values in order to calculate accurate times to collision with the predicted objects.

Next, we explain the inner-workings of the module in more detail.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#1-ego-trajectory-footprints","title":"1. Ego trajectory footprints","text":"

In this first step, the ego footprint is projected at each trajectory point and its size is modified based on the ego.extra_..._offset parameters.

The length of the trajectory used for generating the footprints is limited by the max_arc_length parameter.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#2-other-lanelets","title":"2. Other lanelets","text":"

In the second step, we calculate the lanelets where collisions should be avoided. We consider all lanelets around the ego vehicle that are not crossed by the trajectory linestring (sequence of trajectory points) or their preceding lanelets.

In the debug visualization, these other lanelets are shown as blue polygons.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#3-out-of-lane-areas","title":"3. Out of lane areas","text":"

Next, for each trajectory point, we create the corresponding out of lane areas by intersecting the other lanelets (from step 2) with the trajectory point footprint (from step 1). Each area is associated with the lanelets overlapped by the area and with the corresponding ego trajectory point.

In the debug visualization, the out of lane area polygon is connected to the corresponding trajectory point by a line.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#4-predicted-objects-filtering","title":"4. Predicted objects filtering","text":"

We filter objects and their predicted paths with the following conditions:

(Comparison figures: filtered predicted paths with cut_predicted_paths_beyond_red_lights = false vs. cut_predicted_paths_beyond_red_lights = true.)

In the debug visualization, the filtered predicted paths are shown in green and the stop lines of red traffic lights are shown in red.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#5-time-to-collisions","title":"5. Time to collisions","text":"

For each out of lane area, we calculate the times when a dynamic object will overlap the area based on its filtered predicted paths.

If the parameter mode is set to threshold and the calculated time is less than the threshold.time_threshold parameter, we decide to avoid the out of lane area.

If the parameter mode is set to ttc, we calculate the time to collision by comparing the predicted time of the object with the time_from_start field contained in the trajectory point. If the time to collision is below the ttc.threshold parameter value, we decide to avoid the out of lane area.
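As a rough illustration of the ttc mode, the following sketch (with hypothetical names) compares the predicted time of the object on the overlap with the trajectory point's time_from_start:

#include <cmath>

// Decide whether an out of lane area must be avoided in ttc mode.
bool should_avoid_in_ttc_mode(
  double object_time_on_overlap,  // [s] predicted time at which the object reaches the area
  double ego_time_from_start,     // [s] time_from_start of the corresponding trajectory point
  double ttc_threshold)           // [s] parameter ttc.threshold
{
  const double time_to_collision = std::abs(object_time_on_overlap - ego_time_from_start);
  return time_to_collision < ttc_threshold;
}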

In the debug visualization, the ttc (in seconds) is displayed on top of its corresponding trajectory point. The color of the text is red if the collision should be avoided and green otherwise.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#6-calculate-the-stop-or-slowdown-point","title":"6. Calculate the stop or slowdown point","text":"

First, the minimum stopping distance of the ego vehicle is calculated based on the jerk and deceleration constraints set by the velocity smoother parameters.

We then search for the furthest pose along the trajectory where the ego footprint stays inside the ego lane (calculated in step 2), and constrain the search to be between the minimum stopping distance and the first trajectory point with a collision to avoid (as determined in the previous step). The search is done by moving backward along the trajectory with a distance step set by the action.precision parameter.

We first do this search for a footprint expanded with the ego.extra_..._offset, action.longitudinal_distance_buffer and action.lateral_distance_buffer parameters. If no valid pose is found, we search again considering only the extra offsets and ignoring the distance buffers. If still no valid pose is found, we use the base ego footprint without any offset. If no pose is found at all, we fall back to using the pose just before the detected collision, regardless of whether it is out of lane.

Whether to slow down or stop is determined by the distance between the ego vehicle and the trajectory point to avoid. If this distance is below the slowdown.distance_threshold parameter, a velocity of slowdown.velocity will be used. If the distance is below the stop.distance_threshold parameter, a velocity of 0 m/s will be used.
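A minimal sketch of this decision, assuming hypothetical names and the parameter names from the table below:

#include <optional>

// Returns the velocity to insert: 0.0 for a stop, the slowdown velocity for
// a slow down, or nothing if the point to avoid is still far away.
std::optional<double> decide_stop_or_slowdown(
  double distance_to_point,           // [m] distance between ego and the pose to avoid
  double stop_distance_threshold,     // [m] parameter stop.distance_threshold
  double slowdown_distance_threshold, // [m] parameter slowdown.distance_threshold
  double slowdown_velocity)           // [m/s] parameter slowdown.velocity
{
  if (distance_to_point < stop_distance_threshold) return 0.0;
  if (distance_to_point < slowdown_distance_threshold) return slowdown_velocity;
  return std::nullopt;
}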

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#about-stability-of-the-stopslowdown-pose","title":"About stability of the stop/slowdown pose","text":"

As the input trajectory can change significantly between iterations, it is expected that the decisions of this module will also change. To make the decisions more stable, a stop or slowdown pose is used for a minimum duration set by the action.min_duration parameter. If during that time a new pose closer to the ego vehicle is generated, it replaces the previous one. Otherwise, the stop or slowdown pose is only discarded after no out of lane collision has been detected for the set duration.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description mode string [-] mode used to consider a dynamic object. Candidates: threshold, intervals, ttc skip_if_already_overlapping bool [-] if true, do not run this module when ego already overlaps another lane max_arc_length double [m] maximum trajectory arc length that is checked for out_of_lane collisions Parameter /threshold Type Description time_threshold double [s] consider objects that will reach an overlap within this time Parameter /ttc Type Description threshold double [s] consider objects with an estimated time to collision bellow this value while ego is on the overlap Parameter /objects Type Description minimum_velocity double [m/s] ignore objects with a velocity lower than this value predicted_path_min_confidence double [-] minimum confidence required for a predicted path to be considered cut_predicted_paths_beyond_red_lights bool [-] if true, predicted paths are cut beyond the stop line of red traffic lights ignore_behind_ego bool [-] if true, objects behind the ego vehicle are ignored Parameter /action Type Description precision double [m] precision when inserting a stop pose in the trajectory longitudinal_distance_buffer double [m] safety distance buffer to keep in front of the ego vehicle lateral_distance_buffer double [m] safety distance buffer to keep on the side of the ego vehicle min_duration double [s] minimum duration needed before a decision can be canceled slowdown.distance_threshold double [m] insert a slow down when closer than this distance from an overlap slowdown.velocity double [m] slow down velocity stop.distance_threshold double [m] insert a stop when closer than this distance from an overlap Parameter /ego Type Description extra_front_offset double [m] extra front distance to add to the ego footprint extra_rear_offset double [m] extra rear distance to add to the ego footprint extra_left_offset double [m] extra left distance to add to the ego footprint extra_right_offset double [m] extra right distance to add to the ego footprint"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/","title":"Motion Velocity Planner","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#motion-velocity-planner","title":"Motion Velocity Planner","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#overview","title":"Overview","text":"

motion_velocity_planner is a planner that adjusts the trajectory velocity based on the obstacles around the vehicle. It loads modules as plugins. Please refer to the links listed below for details on each module.

Each module calculates stop and slow down points to be inserted in the ego trajectory. These points are assumed to correspond to the base_link frame of the ego vehicle as it follows the trajectory. This means that to stop before a wall, a stop point is inserted in the trajectory at a distance ahead of the wall equal to the vehicle front offset (wheelbase + front overhang; see the vehicle dimensions).
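For example, the arc length at which a stop point is inserted can be derived as in the following sketch (names are illustrative):

// Place the stop point so that the front of the vehicle, not base_link,
// stops at the obstacle.
double stop_point_arc_length(
  double obstacle_arc_length,  // [m] arc length of the wall along the trajectory
  double wheelbase,            // [m] vehicle wheelbase
  double front_overhang)       // [m] vehicle front overhang
{
  const double front_offset = wheelbase + front_overhang;
  return obstacle_arc_length - front_offset;
}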

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#input-topics","title":"Input topics","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::msg::Trajectory input trajectory ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/vehicle_odometry nav_msgs::msg::Odometry vehicle position and velocity ~/input/accel geometry_msgs::msg::AccelWithCovarianceStamped vehicle acceleration ~/input/dynamic_objects autoware_perception_msgs::msg::PredictedObjects dynamic objects ~/input/no_ground_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray traffic light states ~/input/occupancy_grid nav_msgs::msg::OccupancyGrid occupancy grid"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#output-topics","title":"Output topics","text":"Name Type Description ~/output/trajectory autoware_planning_msgs::msg::Trajectory Ego trajectory with updated velocity profile ~/output/planning_factors/<MODULE_NAME> tier4_planning_msgs::msg::PlanningFactorsArray factors causing change in the ego velocity profile"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#services","title":"Services","text":"Name Type Description ~/service/load_plugin autoware_motion_velocity_planner_node::srv::LoadPlugin To request loading a plugin ~/service/unload_plugin autoware_motion_velocity_planner_node::srv::UnloadPlugin To request unloaded a plugin"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_modules vector\\<string> module names to launch

In addition, the following parameters should be provided to the node:

"},{"location":"planning/sampling_based_planner/autoware_bezier_sampler/","title":"B\u00e9zier sampler","text":""},{"location":"planning/sampling_based_planner/autoware_bezier_sampler/#bezier-sampler","title":"B\u00e9zier sampler","text":"

Implementation of B\u00e9zier curves and their generation following the sampling strategy from https://ieeexplore.ieee.org/document/8932495

"},{"location":"planning/sampling_based_planner/autoware_frenet_planner/","title":"Frenet planner","text":""},{"location":"planning/sampling_based_planner/autoware_frenet_planner/#frenet-planner","title":"Frenet planner","text":"

Trajectory generation in Frenet frame.

"},{"location":"planning/sampling_based_planner/autoware_frenet_planner/#description","title":"Description","text":"

Original paper

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#path-sampler","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#purpose","title":"Purpose","text":"

This package implements a node that uses sampling-based planning to generate a drivable trajectory.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#feature","title":"Feature","text":"

This package is able to:

Note that the velocity is just taken over from the input path.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#input","title":"input","text":"Name Type Description ~/input/path autoware_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry nav_msgs/msg/Odometry Current state of the ego vehicle ~/input/objects autoware_perception_msgs/msg/PredictedObjects objects to avoid"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#output","title":"output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/msg/Trajectory generated trajectory that is feasible to drive and collision-free"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#algorithm","title":"Algorithm","text":"

Sampling-based planning is decomposed into 3 successive steps (a sketch of this pipeline follows the list):

  1. Sampling: candidate trajectories are generated.
  2. Pruning: invalid candidates are discarded.
  3. Selection: the best remaining valid candidate is selected.
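The sketch below illustrates this pipeline under simplifying assumptions; the types and function pointers are placeholders, not the package's real API.

#include <limits>
#include <vector>

struct Candidate { /* sampled trajectory data */ };

// Prune candidates with hard constraints, then select the one minimizing the
// weighted soft-constraint cost. Returns a default-constructed candidate if
// none is valid; a real implementation would handle that case explicitly.
Candidate select_best_candidate(
  const std::vector<Candidate> & candidates,  // 1. sampled candidates
  bool (*satisfies_hard_constraints)(const Candidate &),
  double (*soft_constraint_cost)(const Candidate &))
{
  Candidate best{};
  double best_cost = std::numeric_limits<double>::infinity();
  for (const auto & candidate : candidates) {
    if (!satisfies_hard_constraints(candidate)) continue;  // 2. pruning
    const double cost = soft_constraint_cost(candidate);   // weighted sum of soft constraints
    if (cost < best_cost) { best_cost = cost; best = candidate; }  // 3. selection
  }
  return best;
}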
"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#sampling","title":"Sampling","text":"

Candidate trajectories are generated based on the current ego state and some target state. Two sampling algorithms are currently implemented: sampling with B\u00e9zier curves or with polynomials in the Frenet frame.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#pruning","title":"Pruning","text":"

The validity of each candidate trajectory is checked using a set of hard constraints.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#selection","title":"Selection","text":"

Among the valid candidate trajectories, the best one is determined using a set of soft constraints (i.e., objective functions).

Each soft constraint is associated with a weight to allow tuning of the preferences.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#limitations","title":"Limitations","text":"

The quality of the candidates generated with polynomials in the Frenet frame greatly depends on the reference path. If the reference path is not smooth, the resulting candidates will probably be undrivable.

Failure to find a valid trajectory currently results in a trajectory that stops suddenly.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#comparison-with-the-autoware_path_optimizer","title":"Comparison with the autoware_path_optimizer","text":"

The autoware_path_optimizer uses an optimization-based approach, finding the optimal solution of a mathematical problem if it exists. When no solution can be found, it is often hard to identify the issue due to the intermediate mathematical representation of the problem.

In comparison, the sampling-based approach cannot guarantee an optimal solution but is much more straightforward, making it easier to debug and tune.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#how-to-tune-parameters","title":"How to Tune Parameters","text":"

The sampling-based planner mostly offers a trade-off between the consistent quality of the trajectory and the computation time. Guaranteeing that a good trajectory is found requires generating many candidates, which linearly increases the computation time.

TODO

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#computation-time","title":"Computation time","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#robustness","title":"Robustness","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#other-options","title":"Other options","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#how-to-debug","title":"How To Debug","text":"

TODO

"},{"location":"planning/sampling_based_planner/autoware_sampler_common/","title":"Sampler Common","text":""},{"location":"planning/sampling_based_planner/autoware_sampler_common/#sampler-common","title":"Sampler Common","text":"

Common functions for sampling-based planners. This includes classes for representing paths and trajectories, hard and soft constraints, conversion between Cartesian and Frenet frames, etc.

"},{"location":"sensing/autoware_cuda_utils/","title":"autoware_cuda_utils","text":""},{"location":"sensing/autoware_cuda_utils/#autoware_cuda_utils","title":"autoware_cuda_utils","text":""},{"location":"sensing/autoware_cuda_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions related to CUDA.

"},{"location":"sensing/autoware_gnss_poser/","title":"gnss_poser","text":""},{"location":"sensing/autoware_gnss_poser/#gnss_poser","title":"gnss_poser","text":""},{"location":"sensing/autoware_gnss_poser/#purpose","title":"Purpose","text":"

The gnss_poser is a node that subscribes to GNSS sensing messages and calculates the vehicle pose with covariance.

This node subscribes to NavSatFix to publish the pose of base_link. The data in NavSatFix represents the antenna's position. Therefore, it performs a coordinate transformation using the tf from base_link to the antenna's position. The frame_id of the antenna's position refers to NavSatFix's header.frame_id. (Note that header.frame_id in NavSatFix indicates the antenna's frame_id, not the Earth or reference ellipsoid. See also NavSatFix definition.)

If the transformation from base_link to the antenna cannot be obtained, it outputs the pose of the antenna position without performing coordinate transformation.
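The transformation itself can be sketched as a simple pose composition (illustrative only, using Eigen; the node's actual implementation may differ):

#include <Eigen/Geometry>

// Compose the GNSS-derived antenna pose (in the map frame) with the static
// transform from the antenna frame to base_link.
Eigen::Isometry3d compute_base_link_pose(
  const Eigen::Isometry3d & map_to_antenna,   // antenna pose from NavSatFix
  const Eigen::Isometry3d & antenna_to_base)  // from TF; identity if unavailable
{
  return map_to_antenna * antenna_to_base;  // base_link pose in the map frame
}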

"},{"location":"sensing/autoware_gnss_poser/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_gnss_poser/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_gnss_poser/#input","title":"Input","text":"Name Type Description /map/map_projector_info autoware_map_msgs::msg::MapProjectorInfo map projection info ~/input/fix sensor_msgs::msg::NavSatFix gnss status message ~/input/autoware_orientation autoware_sensing_msgs::msg::GnssInsOrientationStamped orientation click here for more details"},{"location":"sensing/autoware_gnss_poser/#output","title":"Output","text":"Name Type Description ~/output/pose geometry_msgs::msg::PoseStamped vehicle pose calculated from gnss sensing data ~/output/gnss_pose_cov geometry_msgs::msg::PoseWithCovarianceStamped vehicle pose with covariance calculated from gnss sensing data ~/output/gnss_fixed tier4_debug_msgs::msg::BoolStamped gnss fix status"},{"location":"sensing/autoware_gnss_poser/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_gnss_poser/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range base_frame string frame id for base_frame base_link N/A gnss_base_frame string frame id for gnss_base_frame gnss_base_link N/A map_frame string frame id for map_frame map N/A use_gnss_ins_orientation boolean use Gnss-Ins orientation true N/A gnss_pose_pub_method integer 0: Instant Value 1: Average Value 2: Median Value. If 0 is chosen buffer_epoch parameter loses affect. 0 \u22650\u22642 buff_epoch integer Buffer epoch 1 \u22650"},{"location":"sensing/autoware_gnss_poser/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_gnss_poser/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_gnss_poser/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_gnss_poser/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_gnss_poser/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_image_diagnostics/","title":"image_diagnostics","text":""},{"location":"sensing/autoware_image_diagnostics/#image_diagnostics","title":"image_diagnostics","text":""},{"location":"sensing/autoware_image_diagnostics/#purpose","title":"Purpose","text":"

The image_diagnostics is a node that checks the status of the input raw image.

"},{"location":"sensing/autoware_image_diagnostics/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below shows the flowchart of the image diagnostics node. Each image is divided into small blocks for block state assessment.

The state of each small image block is assessed as shown in the figure below.

After the states of all image blocks are evaluated, the whole image status is summarized as below.

"},{"location":"sensing/autoware_image_diagnostics/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_image_diagnostics/#input","title":"Input","text":"Name Type Description input/raw_image sensor_msgs::msg::Image raw image"},{"location":"sensing/autoware_image_diagnostics/#output","title":"Output","text":"Name Type Description image_diag/debug/gray_image sensor_msgs::msg::Image gray image image_diag/debug/dft_image sensor_msgs::msg::Image discrete Fourier transformation image image_diag/debug/diag_block_image sensor_msgs::msg::Image each block state colorization image_diag/image_state_diag autoware_internal_debug_msgs::msg::Int32Stamped image diagnostics status value /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics"},{"location":"sensing/autoware_image_diagnostics/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_image_diagnostics/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":" "},{"location":"sensing/autoware_image_transport_decompressor/","title":"image_transport_decompressor","text":""},{"location":"sensing/autoware_image_transport_decompressor/#image_transport_decompressor","title":"image_transport_decompressor","text":""},{"location":"sensing/autoware_image_transport_decompressor/#purpose","title":"Purpose","text":"

The image_transport_decompressor is a node that decompresses images.

"},{"location":"sensing/autoware_image_transport_decompressor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_image_transport_decompressor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_image_transport_decompressor/#input","title":"Input","text":"Name Type Description ~/input/compressed_image sensor_msgs::msg::CompressedImage compressed image"},{"location":"sensing/autoware_image_transport_decompressor/#output","title":"Output","text":"Name Type Description ~/output/raw_image sensor_msgs::msg::Image decompressed image"},{"location":"sensing/autoware_image_transport_decompressor/#parameters","title":"Parameters","text":"Name Type Description Default Range encoding string The image encoding to use for the decompressed image default N/A"},{"location":"sensing/autoware_image_transport_decompressor/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_imu_corrector/","title":"autoware_imu_corrector","text":""},{"location":"sensing/autoware_imu_corrector/#autoware_imu_corrector","title":"autoware_imu_corrector","text":""},{"location":"sensing/autoware_imu_corrector/#imu_corrector","title":"imu_corrector","text":"

imu_corrector_node is a node that corrects IMU data.

  1. Correct yaw rate offset \\(b\\) by reading the parameter.
  2. Correct yaw rate standard deviation \\(\\sigma\\) by reading the parameter.

Mathematically, we assume the following equation:

\\[ \\tilde{\\omega}(t) = \\omega(t) + b(t) + n(t) \\]

where \\(\\tilde{\\omega}\\) denotes observed angular velocity, \\(\\omega\\) denotes true angular velocity, \\(b\\) denotes an offset, and \\(n\\) denotes a gaussian noise. We also assume that \\(n\\sim\\mathcal{N}(0, \\sigma^2)\\).
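Under this model, a hedged sketch of the applied correction: given the configured offset parameter, the corrected angular velocity is

\\[ \\hat{\\omega}(t) = \\tilde{\\omega}(t) - b \\]

where \\(\\hat{\\omega}\\) denotes the corrected angular velocity; the remaining error is the noise \\(n(t)\\), whose standard deviation \\(\\sigma\\) is set from the corresponding stddev parameter.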

"},{"location":"sensing/autoware_imu_corrector/#input","title":"Input","text":"Name Type Description ~input sensor_msgs::msg::Imu raw imu data"},{"location":"sensing/autoware_imu_corrector/#output","title":"Output","text":"Name Type Description ~output sensor_msgs::msg::Imu corrected imu data"},{"location":"sensing/autoware_imu_corrector/#parameters","title":"Parameters","text":"Name Type Description angular_velocity_offset_x double roll rate offset in imu_link [rad/s] angular_velocity_offset_y double pitch rate offset imu_link [rad/s] angular_velocity_offset_z double yaw rate offset imu_link [rad/s] angular_velocity_stddev_xx double roll rate standard deviation imu_link [rad/s] angular_velocity_stddev_yy double pitch rate standard deviation imu_link [rad/s] angular_velocity_stddev_zz double yaw rate standard deviation imu_link [rad/s] acceleration_stddev double acceleration standard deviation imu_link [m/s^2]"},{"location":"sensing/autoware_imu_corrector/#gyro_bias_estimator","title":"gyro_bias_estimator","text":"

gyro_bias_estimator is a node that validates the bias of the gyroscope. It subscribes to the sensor_msgs::msg::Imu topic and validates whether the bias of the gyroscope is within the specified range.

Note that the node calculates the bias from the gyroscope data by averaging the data only when the vehicle is stopped.
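A minimal sketch of this averaging scheme, assuming a hypothetical stop check (the real node also uses the input pose):

#include <cmath>
#include <cstddef>

class GyroBiasAverager
{
public:
  // Accumulate a gyro sample only while the vehicle is considered stopped.
  void update(double gyro_yaw_rate, double ego_speed)
  {
    if (std::abs(ego_speed) > stop_speed_threshold_) return;
    sum_ += gyro_yaw_rate;
    ++count_;
  }
  bool has_estimate() const { return count_ > 0; }
  double bias() const { return sum_ / static_cast<double>(count_); }

private:
  double stop_speed_threshold_{1e-3};  // [m/s] hypothetical threshold
  double sum_{0.0};
  std::size_t count_{0};
};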

"},{"location":"sensing/autoware_imu_corrector/#input_1","title":"Input","text":"Name Type Description ~/input/imu_raw sensor_msgs::msg::Imu raw imu data ~/input/pose geometry_msgs::msg::PoseWithCovarianceStamped ndt pose

Note that the input pose is assumed to be accurate enough. For example when using NDT, we assume that the NDT is appropriately converged.

Currently, it is possible to use methods other than NDT as a pose_source for Autoware, but less accurate methods are not suitable for IMU bias estimation.

In the future, with careful implementation for pose errors, the IMU bias estimated by NDT could potentially be used not only for validation but also for online calibration.

"},{"location":"sensing/autoware_imu_corrector/#output_1","title":"Output","text":"Name Type Description ~/output/gyro_bias geometry_msgs::msg::Vector3Stamped bias of the gyroscope [rad/s]"},{"location":"sensing/autoware_imu_corrector/#parameters_1","title":"Parameters","text":"

Note that this node also uses angular_velocity_offset_x, angular_velocity_offset_y, angular_velocity_offset_z parameters from imu_corrector.param.yaml.

Name Type Description gyro_bias_threshold double threshold of the bias of the gyroscope [rad/s] timer_callback_interval_sec double interval of the timer callback function [sec] diagnostics_updater_interval_sec double period of the diagnostics updater [sec] straight_motion_ang_vel_upper_limit double upper limit of yaw angular velocity, beyond which motion is not considered straight [rad/s]"},{"location":"sensing/autoware_pcl_extensions/","title":"autoware_pcl_extensions","text":""},{"location":"sensing/autoware_pcl_extensions/#autoware_pcl_extensions","title":"autoware_pcl_extensions","text":""},{"location":"sensing/autoware_pcl_extensions/#purpose","title":"Purpose","text":"

The autoware_pcl_extensions is a pcl extension library. The voxel grid filter in this package works with a different algorithm than the original one.

"},{"location":"sensing/autoware_pcl_extensions/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pcl_extensions/#original-algorithm-1","title":"Original Algorithm [1]","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with their centroid
"},{"location":"sensing/autoware_pcl_extensions/#extended-algorithm","title":"Extended Algorithm","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with the closest point to their centroid (see the sketch below)
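A sketch of the extended step 3 (illustrative; assumes a non-empty voxel):

#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Return the actual point in the voxel that is closest to the voxel centroid.
Point closest_point_to_centroid(const std::vector<Point> & voxel_points)
{
  Point centroid{0.0, 0.0, 0.0};
  for (const auto & p : voxel_points) {
    centroid.x += p.x; centroid.y += p.y; centroid.z += p.z;
  }
  const double n = static_cast<double>(voxel_points.size());
  centroid.x /= n; centroid.y /= n; centroid.z /= n;

  const Point * best = &voxel_points.front();
  double best_squared_distance = std::numeric_limits<double>::infinity();
  for (const auto & p : voxel_points) {
    const double dx = p.x - centroid.x, dy = p.y - centroid.y, dz = p.z - centroid.z;
    const double squared_distance = dx * dx + dy * dy + dz * dz;
    if (squared_distance < best_squared_distance) {
      best_squared_distance = squared_distance;
      best = &p;
    }
  }
  return *best;
}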
"},{"location":"sensing/autoware_pcl_extensions/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pcl_extensions/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pcl_extensions/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pcl_extensions/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pcl_extensions/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pcl_extensions/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://pointclouds.org/documentation/tutorials/voxel_grid.html

"},{"location":"sensing/autoware_pcl_extensions/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/","title":"autoware_pointcloud_preprocessor","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#autoware_pointcloud_preprocessor","title":"autoware_pointcloud_preprocessor","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#purpose","title":"Purpose","text":"

The autoware_pointcloud_preprocessor is a package that includes the following filters:

"},{"location":"sensing/autoware_pointcloud_preprocessor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Detailed descriptions of each filter's algorithm are given in the following links.

Filter Name Description Detail concatenate_data subscribe to multiple pointclouds and concatenate them into one pointcloud link crop_box_filter remove points within a given box link distortion_corrector compensate for pointcloud distortion caused by the ego vehicle's movement during one scan link downsample_filter downsample the input pointcloud link outlier_filter remove points caused by hardware problems, raindrops, and small insects as noise link passthrough_filter remove points outside of a range in a given field (e.g. x, y, z, intensity) link pointcloud_accumulator accumulate pointclouds for a given amount of time link vector_map_filter remove points outside of the lane using the vector map link vector_map_inside_area_filter remove points inside vector map areas of the type given by parameter link"},{"location":"sensing/autoware_pointcloud_preprocessor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/autoware_pointcloud_preprocessor/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use approximate sync option"},{"location":"sensing/autoware_pointcloud_preprocessor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

autoware::pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"sensing/autoware_pointcloud_preprocessor/#measuring-the-performance","title":"Measuring the performance","text":"

In Autoware, point cloud data from each LiDAR sensor undergoes preprocessing in the sensing pipeline before being input into the perception pipeline. The preprocessing stages are illustrated in the diagram below:

Each stage in the pipeline incurs a processing delay. Mostly, we've used ros2 topic delay /topic_name to measure the time between the message header and the current time. This approach works well for small-sized messages. However, when dealing with large point cloud messages, this method introduces an additional delay. This is primarily because accessing these large point cloud messages externally impacts the pipeline's performance.

Our sensing/perception nodes are designed to run within composable node containers, leveraging intra-process communication. External subscriptions to these messages (such as using ros2 topic delay or rviz2) impose extra delays and can even slow down the pipeline itself. Therefore, such measurements will not be accurate.

To mitigate this issue, we've adopted a method where each node in the pipeline reports its pipeline latency time. This approach ensures the integrity of intra-process communication and provides a more accurate measure of delays in the pipeline.
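A hedged sketch of such in-node latency reporting (the topic, message type, and helper below are illustrative, not the actual pipeline interface):

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <std_msgs/msg/float64.hpp>

// Inside the node, compute the latency from the sensor stamp to now and
// publish it, so no external subscription to the pointcloud is needed.
void report_pipeline_latency(
  rclcpp::Node & node,
  const rclcpp::Publisher<std_msgs::msg::Float64>::SharedPtr & latency_pub,
  const sensor_msgs::msg::PointCloud2 & cloud)
{
  std_msgs::msg::Float64 latency_ms;
  latency_ms.data = (node.now() - rclcpp::Time(cloud.header.stamp)).seconds() * 1000.0;
  latency_pub->publish(latency_ms);
}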

"},{"location":"sensing/autoware_pointcloud_preprocessor/#benchmarking-the-pipeline","title":"Benchmarking The Pipeline","text":"

The nodes within the pipeline report the pipeline latency time, indicating the duration from the sensor driver's pointcloud output to the node's output. This data is crucial for assessing the pipeline's health and efficiency.

When running Autoware, you can monitor the pipeline latency times for each node in the pipeline by subscribing to the following ROS 2 topics:

These topics provide the pipeline latency times, giving insights into the delays at various stages of the pipeline from the sensor output of LidarX to each subsequent node.

"},{"location":"sensing/autoware_pointcloud_preprocessor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"sensing/autoware_pointcloud_preprocessor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/","title":"blockage_diag","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#blockage_diag","title":"blockage_diag","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#purpose","title":"Purpose","text":"

To ensure the performance of LiDAR and safety for autonomous driving, a diagnostics feature for abnormal conditions is needed. LiDAR blockage is an abnormal condition of LiDAR in which unwanted objects stick to and block the light pulses and return signals. This node's purpose is to detect the existence of blockage on the LiDAR and its size and location.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#inner-workings-algorithmsblockage-detection","title":"Inner-workings / Algorithms(Blockage detection)","text":"

This node uses the no-return region and its location to decide whether there is a blockage.

The logic is shown below.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#inner-workings-algorithmsdust-detection","title":"Inner-workings /Algorithms(Dust detection)","text":"

For dust detection, morphological processing is implemented. If the LiDAR's rays cannot be acquired due to dust in the area where the point cloud is expected to return from the ground, black pixels appear as noise in the depth image. The noise area is found by applying erosion and dilation to these black pixels.
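A minimal sketch of this step with OpenCV (the kernel size corresponds to the dust_kernel_size parameter; the OpenCV calls are real, the wrapper function is illustrative):

#include <opencv2/imgproc.hpp>

// Erode to remove isolated noise pixels, then dilate to restore the size of
// the remaining dust regions in the no-return mask.
cv::Mat extract_dust_mask(const cv::Mat & no_return_mask, int dust_kernel_size)
{
  const cv::Mat kernel = cv::getStructuringElement(
    cv::MORPH_RECT, cv::Size(2 * dust_kernel_size + 1, 2 * dust_kernel_size + 1));
  cv::Mat dust_mask;
  cv::erode(no_return_mask, dust_mask, kernel);
  cv::dilate(dust_mask, dust_mask, kernel);
  return dust_mask;
}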

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#input","title":"Input","text":"Name Type Description ~/input/pointcloud_raw_ex sensor_msgs::msg::PointCloud2 The raw point cloud data is used to detect the no-return region"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#output","title":"Output","text":"Name Type Description ~/output/blockage_diag/debug/blockage_mask_image sensor_msgs::msg::Image The mask image of detected blockage ~/output/blockage_diag/debug/ground_blockage_ratio autoware_internal_debug_msgs::msg::Float32Stamped The area ratio of blockage region in ground region ~/output/blockage_diag/debug/sky_blockage_ratio autoware_internal_debug_msgs::msg::Float32Stamped The area ratio of blockage region in sky region ~/output/blockage_diag/debug/lidar_depth_map sensor_msgs::msg::Image The depth map image of input point cloud ~/output/blockage_diag/debug/single_frame_dust_mask sensor_msgs::msg::Image The mask image of detected dusty area in latest single frame ~/output/blockage_diag/debug/multi_frame_dust_mask sensor_msgs::msg::Image The mask image of continuous detected dusty area ~/output/blockage_diag/debug/blockage_dust_merged_image sensor_msgs::msg::Image The merged image of blockage detection(red) and multi frame dusty area detection(yellow) results ~/output/blockage_diag/debug/ground_dust_ratio autoware_internal_debug_msgs::msg::Float32Stamped The ratio of dusty area divided by area where ray usually returns from the ground."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#parameters","title":"Parameters","text":"Name Type Description Default Range blockage_ratio_threshold float The threshold of blockage area ratio. If the blockage value exceeds this threshold, the diagnostic state will be set to ERROR. 0.1 \u22650 blockage_count_threshold float The threshold of number continuous blockage frames 50 \u22650 blockage_buffering_frames integer The number of buffering about blockage detection [range:1-200] 2 \u22651\u2264200 blockage_buffering_interval integer The interval of buffering about blockage detection 1 \u22650 enable_dust_diag boolean enable dust diagnostic false N/A publish_debug_image boolean publish debug image false N/A dust_ratio_threshold float The threshold of dusty area ratio 0.2 \u22650 dust_count_threshold integer The threshold of number continuous frames include dusty area 10 \u22650 dust_kernel_size integer The kernel size of morphology processing in dusty area detection 2 \u22650 dust_buffering_frames integer The number of buffering about dusty area detection [range:1-200] 10 \u22651\u2264200 dust_buffering_interval integer The interval of buffering about dusty area detection 1 \u22650 max_distance_range float Maximum view range for the LiDAR 200.0 \u22650 horizontal_resolution float The horizontal resolution of depth map image [deg/pixel] 0.4 \u22650 blockage_kernel integer The kernel size of morphology processing the detected blockage area 10 \u22650 angle_range array The effective range of LiDAR [0.0, 360.0] N/A vertical_bins integer The LiDAR channel 40 \u22650 is_channel_order_top2down boolean If the lidar channels are indexed from top to down true N/A horizontal_ring_id integer The id of horizontal ring of the LiDAR 18 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Only the Hesai Pandar40P and Hesai PandarQT were tested. For a new LiDAR, it is necessary to manually check the order of channel ids in the vertical distribution and modify the code.
  2. Regarding dusty area detection, false positives occur when there are water puddles on the road surface due to rain, etc. Also, the area of rays going to the sky is currently undetectable.
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/","title":"concatenate_data","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#concatenate_data","title":"concatenate_data","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#purpose","title":"Purpose","text":"

Many self-driving cars combine multiple LiDARs to expand the sensing range. Therefore, a function to combine multiple point clouds is required.

To combine multiple sensor data with similar timestamps, message_filters is often used in ROS-based systems, but this requires the assumption that all inputs can be received. Since safety must be strongly considered in autonomous driving, the point cloud concatenation node must be designed so that even if one sensor fails, the remaining sensor information can still be output.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below represents the reception time of each sensor's data and how the data are combined in each case.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#input","title":"Input","text":"Name Type Description ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped The vehicle odometry is used to interpolate the timestamp of each sensor data"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::Pointcloud2 concatenated point clouds"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#parameters","title":"Parameters","text":"Name Type Default Value Description input/points vector of string [] input topic names that type must be sensor_msgs::msg::Pointcloud2 input_frame string \"\" input frame id output_frame string \"\" output frame id has_static_tf_only bool false flag to listen TF only once max_queue_size int 5 max queue size of input/output topics"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timeout_sec double 0.1 tolerance of time to publish next pointcloud [s]When this time limit is exceeded, the filter concatenates and publishes pointcloud, even if not all the point clouds are subscribed. input_offset vector of double [] This parameter can control waiting time for each input sensor pointcloud [s]. You must to set the same length of offsets with input pointclouds numbers. For its tuning, please see actual usage page. publish_synchronized_pointcloud bool false If true, publish the time synchronized pointclouds. All input pointclouds are transformed and then re-published as message named <original_msg_name>_synchronized. input_twist_topic_type std::string twist Topic type for twist. Currently support twist or odom."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#actual-usage","title":"Actual Usage","text":"

For the example of actual usage of this node, please refer to the preprocessor.launch.py file.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#how-to-tuning-timeout_sec-and-input_offset","title":"How to tuning timeout_sec and input_offset","text":"

The values in timeout_sec and input_offset are used in the timer_callback to control concatenation timings.
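As a worked example with hypothetical values: with timeout_sec = 0.1 and an input_offset of 0.02 for a given topic, that topic is waited on for up to 0.1 - 0.02 = 0.08 s, while a topic with an offset of 0.05 is waited on for up to 0.05 s (see the tuning rules in the table below).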

Name Description How to tune timeout_sec timeout in seconds for the default timer To avoid mis-concatenation, at least this value must be shorter than the sampling time. input_offset timeout extension when a pointcloud arrives in the buffer. The amount of waiting time will be timeout_sec - input_offset. So, you will need to set a larger value for the last-arriving pointcloud and a smaller one for earlier-arriving ones."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#node-separation-options-for-future","title":"Node separation options for future","text":"

Since the pointcloud concatenation has two processes, \"time synchronization\" and \"pointcloud concatenation\", it is possible to separate them.

In the future, the nodes will be completely separated in order to achieve a loosely coupled architecture, but currently both options can be selected for backward compatibility (see this PR).

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

It is necessary to assume that the vehicle odometry value exists, that the sensor data and odometry timestamps are correct, and that the TF from base_link to sensor_frame is also correct.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/","title":"crop_box_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#crop_box_filter","title":"crop_box_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#purpose","title":"Purpose","text":"

The crop_box_filter is a node that removes points within a given box region. This filter is used to remove points that hit the vehicle itself.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

pcl::CropBox is used, which filters all points inside a given box.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range min_x float minimum x-coordinate value for crop range in meters -1.0 N/A min_y float minimum y-coordinate value for crop range in meters -1.0 N/A min_z float minimum z-coordinate value for crop range in meters -1.0 N/A max_x float maximum x-coordinate value for crop range in meters 1.0 N/A max_y float maximum y-coordinate value for crop range in meters 1.0 N/A max_z float maximum z-coordinate value for crop range in meters 1.0 N/A negative boolean if true, remove points within the box from the pointcloud; otherwise, remove points outside the box. false N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/","title":"distortion_corrector","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#distortion_corrector","title":"distortion_corrector","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#purpose","title":"Purpose","text":"

The distortion_corrector is a node that compensates for pointcloud distortion caused by the ego-vehicle's movement during one scan.

Since the LiDAR sensor scans by rotating an internal laser, the resulting point cloud will be distorted if the ego-vehicle moves during a single scan (as shown by the figure below). The node corrects this by interpolating sensor data using the odometry of the ego-vehicle.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The node uses twist information (linear and angular velocity) from the ~/input/twist topic to correct each point in the point cloud. If the user sets use_imu to true, the node will replace the twist's angular velocity with the angular velocity from the IMU.

The node supports two different modes of distortion correction: 2D distortion correction and 3D distortion correction. The main difference is that the 2D distortion corrector only utilizes the x-axis of linear velocity and the z-axis of angular velocity to correct the point positions. On the other hand, the 3D distortion corrector utilizes all linear and angular velocity components to correct the point positions.

Please note that the processing time difference between the two distortion methods is significant; the 3D corrector takes 50% more time than the 2D corrector. Therefore, it is recommended that in general cases, users should set use_3d_distortion_correction to false. However, in scenarios such as a vehicle going over speed bumps, using the 3D corrector can be beneficial.
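As an illustration of the 2D variant, the sketch below moves a single point back to the scan's reference time using only the x-axis linear velocity and z-axis angular velocity; it is a simplified constant-twist approximation, not the node's actual code.

#include <cmath>

struct Point2d { double x, y; };

// Undistort one point measured dt seconds after the reference time, given the
// ego twist (vx: x-axis linear velocity, wz: z-axis angular velocity).
Point2d correct_point_2d(const Point2d & p, double dt, double vx, double wz)
{
  const double yaw = wz * dt;                       // ego heading change during dt
  const double dx = vx * dt * std::cos(0.5 * yaw);  // ego displacement (midpoint rule)
  const double dy = vx * dt * std::sin(0.5 * yaw);
  // Express the point, measured in the sensor frame at time dt, in the frame
  // the sensor had at the reference time.
  const double c = std::cos(yaw), s = std::sin(yaw);
  return {c * p.x - s * p.y + dx, s * p.x + c * p.y + dy};
}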

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 Topic of the distorted pointcloud. ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped Topic of the twist information. ~/input/imu sensor_msgs::msg::Imu Topic of the IMU data."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#output","title":"Output","text":"Name Type Description ~/output/pointcloud sensor_msgs::msg::PointCloud2 Topic of the undistorted pointcloud"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range base_frame string The undistortion algorithm is based on a base frame, which must be the same as the twist frame. base_link N/A use_imu boolean Use IMU angular velocity, otherwise, use twist angular velocity. true N/A use_3d_distortion_correction boolean Use 3d distortion correction algorithm, otherwise, use 2d distortion correction algorithm. false N/A update_azimuth_and_distance boolean Flag to update the azimuth and distance values of each point after undistortion. If set to false, the azimuth and distance values will remain unchanged after undistortion, resulting in a mismatch with the updated x, y, z coordinates. false N/A has_static_tf_only boolean Flag to indicate if only static TF is used. false N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#launch","title":"Launch","text":"
ros2 launch autoware_pointcloud_preprocessor distortion_corrector.launch.xml\n
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":" Velodyne azimuth coordinate Hesai azimuth coordinate"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#referencesexternal-links","title":"References/External links","text":"

https://docs.opencv.org/3.4/db/de0/group__core__utils.html#ga7b356498dd314380a0c386b059852270

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/","title":"downsample_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#downsample_filter","title":"downsample_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#purpose","title":"Purpose","text":"

The downsample_filter is a node that reduces the number of points.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter","title":"Approximate Downsample Filter","text":"

pcl::VoxelGridNearestCentroid is used. The algorithm is described in autoware_pcl_extensions

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter","title":"Random Downsample Filter","text":"

pcl::RandomSample is used, in which points are sampled with uniform probability.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter","title":"Voxel Grid Downsample Filter","text":"

pcl::VoxelGrid is used, in which the points in each voxel are approximated by their centroid.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#pickup-based-voxel-grid-downsample-filter","title":"Pickup Based Voxel Grid Downsample Filter","text":"

This algorithm samples a single actual point existing within the voxel, not the centroid. The computation cost is low compared to the centroid-based voxel grid filter.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

These implementations inherit the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#note-parameters","title":"Note Parameters","text":"

These implementations inherit the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#core-parameters","title":"Core Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter_1","title":"Approximate Downsample Filter","text":"Name Type Description Default Range voxel_size_x float voxel size along the x-axis [m] 0.3 \u22650.0 voxel_size_y float voxel size along the y-axis [m] 0.3 \u22650.0 voxel_size_z float voxel size along the z-axis [m] 0.1 \u22650.0"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter_1","title":"Random Downsample Filter","text":"Name Type Description Default Range sample_num integer number of indices to be sampled 1500 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter_1","title":"Voxel Grid Downsample Filter","text":"Name Type Description Default Range voxel_size_x float the voxel size along x-axis [m] 0.3 \u22650 voxel_size_y float the voxel size along y-axis [m] 0.3 \u22650 voxel_size_z float the voxel size along z-axis [m] 0.1 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#pickup-based-voxel-grid-downsample-filter_1","title":"Pickup Based Voxel Grid Downsample Filter","text":"Name Type Description Default Range voxel_size_x float voxel size along the x-axis [m] 1 \u22650 voxel_size_y float voxel size along the y-axis [m] 1 \u22650 voxel_size_z float voxel size along the z-axis [m] 1 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This implementation uses the robin_hood.h hashing library by martinus, available under the MIT License at martinus/robin-hood-hashing on GitHub. Special thanks to martinus for this contribution.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/","title":"dual_return_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#dual_return_outlier_filter","title":"dual_return_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as fog and rain and publish visibility as a diagnostic topic.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node can remove rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. The dual_return_outlier_filter is so named because it removes noise using data that contains two types of return values separated by the attenuation factor, as shown in the figure below.

Therefore, in order to use this node, the sensor driver must publish custom data including return_type. Please refer to the PointXYZIRCAEDT data structure.

Another feature of this node is that it publishes visibility as a diagnostic topic. With this function, the sensing module can notify, for example in heavy rain, that its processing performance has reached its limit, which helps ensure the safety of the vehicle.

In some complicated road scenes, normal objects such as plants, leaves, and plastic netting also reflect light in two stages, so the visibility value drops even in fine weather. To deal with this, optional region of interest (ROI) settings are provided.

  1. Fixed_xyz_ROI mode: Visibility estimation based on the weak points within a fixed cuboid region surrounding the ego vehicle, defined by x, y, z in the base_link frame.
  2. Fixed_azimuth_ROI mode: Visibility estimation based on the weak points within a fixed region surrounding the ego vehicle, defined by azimuth and distance in the LiDAR frame.

When either of the two fixed ROI modes is selected, the region of weak points shrinks and the sensitivity of the visibility estimate decreases, so a trade-off between weak_first_local_noise_threshold and visibility_threshold is needed.

The figure below describes how the node works.

The picture below shows the ROI options.
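
For illustration, a minimal sketch of the Fixed_xyz_ROI membership test is shown below, using the x/y/z limit parameters from the core parameters table (names only; the actual node also handles the Fixed_azimuth_ROI variant and the required TF lookups).

// Sketch: count a weak-return point toward the visibility estimate only when\n// it lies inside the configured cuboid ROI (coordinates in the base_link frame).\nstruct XyzRoi\n{\n  double x_min, x_max, y_min, y_max, z_min, z_max;\n\n  bool contains(double x, double y, double z) const\n  {\n    return x_min <= x && x <= x_max && y_min <= y && y <= y_max && z_min <= z && z <= z_max;\n  }\n};\n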

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#output","title":"Output","text":"Name Type Description /dual_return_outlier_filter/frequency_image sensor_msgs::msg::Image The histogram image that represent visibility /dual_return_outlier_filter/visibility autoware_internal_debug_msgs::msg::Float32Stamped A representation of visibility with a value from 0 to 1 /dual_return_outlier_filter/pointcloud_noise sensor_msgs::msg::Pointcloud2 The pointcloud removed as noise"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range x_max float Maximum of x in meters for Fixed_xyz_ROI mode 18.0 N/A x_min float Minimum of x in meters for Fixed_xyz_ROI mode -12.0 N/A y_max float Maximum of y in meters for Fixed_xyz_ROI mode 2.0 N/A y_min float Minimum of y in meters for Fixed_xyz_ROI mode -2.0 N/A z_max float Maximum of z in meters for Fixed_xyz_ROI mode 10.0 N/A z_min float Minimum of z in meters for Fixed_xyz_ROI mode 0.0 N/A min_azimuth_deg float The left limit of azimuth in degrees for Fixed_azimuth_ROI mode 135.0 \u22650\u2264360 max_azimuth_deg float The right limit of azimuth in degrees for Fixed_azimuth_ROI mode 225.0 \u22650\u2264360 max_distance float The limit distance in meters for Fixed_azimuth_ROI mode 12.0 \u22650.0 vertical_bins integer The number of vertical bins for the visibility histogram 128 \u22651 max_azimuth_diff float The azimuth difference threshold in degrees for ring_outlier_filter 50.0 \u22650.0 weak_first_distance_ratio float The maximum allowed ratio between consecutive weak point distances 1.004 \u22650.0 general_distance_ratio float The maximum allowed ratio between consecutive normal point distances 1.03 \u22650.0 weak_first_local_noise_threshold integer If the number of outliers among weak points is less than the weak_first_local_noise_threshold in the (max_azimuth - min_azimuth) / horizontal_bins interval, all points within the interval will not be filtered out. 2 \u22650 roi_mode string roi mode Fixed_xyz_ROI ['Fixed_xyz_ROI', 'No_ROI', 'Fixed_azimuth_ROI'] visibility_error_threshold float When the proportion of white pixels in the binary histogram falls below this parameter the diagnostic status becomes ERR 0.5 \u22650.0\u22641.0 visibility_warn_threshold float When the proportion of white pixels in the binary histogram falls below this parameter the diagnostic status becomes WARN 0.7 \u22650.0\u22641.0"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Not recommended for use, as it is still under development. Input data must be of the PointXYZIRCAEDT type and include return_type.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/","title":"outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#outlier_filter","title":"outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#purpose","title":"Purpose","text":"

The outlier_filter is a package for filtering outlier points.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Filter Name Description Detail radius search 2d outlier filter A method of removing point cloud noise based on the number of points existing within a certain radius link ring outlier filter A method of operating scan in chronological order and removing noise based on the rate of change in the distance between points link voxel grid outlier filter A method of removing point cloud noise based on the number of points existing within a voxel link dual return outlier filter (under development) A method of removing rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. link"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/","title":"passthrough_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#passthrough_filter","title":"passthrough_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#purpose","title":"Purpose","text":"

The passthrough_filter is a node that removes points outside a given range in a given field (e.g. x, y, z, intensity, ring, etc.).
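
For reference, the equivalent stand-alone operation with PCL's pcl::PassThrough looks roughly as follows (a sketch; the node itself operates on the field given by filter_field_name, illustrated here with \"z\").

#include <pcl/filters/passthrough.h>\n#include <pcl/point_types.h>\n\n// Sketch: keep only points whose selected field value lies inside [min, max].\npcl::PointCloud<pcl::PointXYZ>::Ptr passthrough(\n  const pcl::PointCloud<pcl::PointXYZ>::Ptr & cloud, float min, float max, bool negative)\n{\n  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);\n  pcl::PassThrough<pcl::PointXYZ> pass;\n  pass.setInputCloud(cloud);\n  pass.setFilterFieldName(\"z\");    // filter_field_name\n  pass.setFilterLimits(min, max);    // filter_limit_min, filter_limit_max\n  pass.setNegative(negative);        // filter_limit_negative\n  pass.filter(*filtered);\n  return filtered;\n}\n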

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range filter_limit_min integer minimum allowed field value 0 \u22650 filter_limit_max integer maximum allowed field value 127 \u22650 filter_field_name string filtering field name channel N/A keep_organized boolean flag to keep indices structure false N/A filter_limit_negative boolean flag to return whether the data is inside limit or not false N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/","title":"pointcloud_accumulator","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#pointcloud_accumulator","title":"pointcloud_accumulator","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#purpose","title":"Purpose","text":"

The pointcloud_accumulator is a node that accumulates pointclouds for a given amount of time.
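
Conceptually, the node keeps a time-bounded buffer of clouds and concatenates it on every cycle; a minimal sketch under that assumption (not the actual implementation) is shown below.

#include <deque>\n#include <utility>\n#include <vector>\n\nstruct Cloud { std::vector<float> data; };  // stand-in for sensor_msgs::msg::PointCloud2\n\nstd::deque<std::pair<double, Cloud>> buffer;  // (stamp [s], cloud), bounded by pointcloud_buffer_size\n\n// Sketch: keep clouds newer than accumulation_time_sec, then merge the buffer.\nvoid accumulate(double stamp_sec, const Cloud & cloud, double accumulation_time_sec)\n{\n  buffer.emplace_back(stamp_sec, cloud);\n  while (!buffer.empty() && stamp_sec - buffer.front().first > accumulation_time_sec) {\n    buffer.pop_front();  // drop clouds that fell out of the accumulation window\n  }\n  // ...concatenate all clouds in the buffer and publish the result...\n}\n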

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range accumulation_time_sec float accumulation period [s] 2 \u22650 pointcloud_buffer_size integer buffer size 50 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#radius_search_2d_outlier_filter","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

RadiusOutlierRemoval filter which removes all indices in its input cloud that don\u2019t have at least some number of neighbors within a certain range.

The description above is quoted from [1]. pcl::search::KdTree [2] is used to implement this package.
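
A minimal sketch of this neighbor-count test with pcl::search::KdTree is shown below (the actual node performs the search on a 2D projection of the cloud, which is omitted here).

#include <pcl/point_types.h>\n#include <pcl/search/kdtree.h>\n\n#include <vector>\n\n// Sketch: a point is an outlier when fewer than min_neighbors points\n// lie within search_radius of it.\nbool is_outlier(\n  pcl::search::KdTree<pcl::PointXYZ> & kd_tree, const pcl::PointXYZ & point,\n  double search_radius, int min_neighbors)\n{\n  std::vector<int> indices;\n  std::vector<float> sqr_distances;\n  const int found = kd_tree.radiusSearch(point, search_radius, indices, sqr_distances);\n  return found < min_neighbors;\n}\n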

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits autoware::pointcloud_preprocessor::Filter class, please refer README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits autoware::pointcloud_preprocessor::Filter class, please refer README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range min_neighbors integer If points in the circle centered on reference point is less than min_neighbors, a reference point is judged as outlier 5 \u22650 search_radius float Searching number of points included in search_radius 0.2 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since the method counts the number of points contained in a cylinder whose axis is aligned with the direction of gravity, it is a prerequisite that the ground has already been removed.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#referencesexternal-links","title":"References/External links","text":"

[1] https://pcl.readthedocs.io/projects/tutorials/en/latest/remove_outliers.html

[2] https://pcl.readthedocs.io/projects/tutorials/en/latest/kdtree_search.html#kdtree-search

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/","title":"ring_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#ring_outlier_filter","title":"ring_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A method that processes the scan in chronological order and removes noise based on the rate of change of the distance between points.

Another feature of this node is that it calculates a visibility score based on the outlier pointcloud and publishes the score as a topic.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#visibility-score-calculation-algorithm","title":"visibility score calculation algorithm","text":"

The pointcloud is divided into vertical bins (rings) and horizontal bins (azimuth divisions). The algorithm starts by splitting the input point cloud into separate rings based on the ring value of each point. Then, for each ring, it iterates through the points and calculates the frequency of points within each horizontal bin. The frequency is determined by incrementing a counter for the corresponding bin based on the point's azimuth value. The frequency values are stored in a frequency image matrix, where each cell represents a specific ring and azimuth bin. After calculating the frequency image, the algorithm applies a noise threshold to create a binary image. Points with frequency values above the noise threshold are considered valid, while points below the threshold are considered noise. Finally, the algorithm calculates the visibility score by counting the number of non-zero pixels in the frequency image and dividing it by the total number of pixels (vertical bins multiplied by horizontal bins).
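
A condensed sketch of that computation is shown below (illustrative names; the actual implementation also applies the azimuth and distance limits and publishes the frequency image for debugging).

#include <utility>\n#include <vector>\n\n// Sketch: bin points per (ring, azimuth), threshold the histogram, and take the\n// ratio of valid bins as the visibility score.\ndouble compute_visibility(\n  const std::vector<std::pair<int, int>> & ring_azimuth_bins,  // one (ring, azimuth bin) per point\n  int vertical_bins, int horizontal_bins, int noise_threshold)\n{\n  std::vector<std::vector<int>> frequency(vertical_bins, std::vector<int>(horizontal_bins, 0));\n  for (const auto & [ring, azimuth_bin] : ring_azimuth_bins) {\n    frequency[ring][azimuth_bin]++;\n  }\n  int valid = 0;\n  for (const auto & row : frequency) {\n    for (const int count : row) {\n      if (count > noise_threshold) {\n        valid++;  // above the noise threshold -> a valid (non-noise) bin\n      }\n    }\n  }\n  return static_cast<double>(valid) / (vertical_bins * horizontal_bins);\n}\n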

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range distance_ratio float distance_ratio 1.03 \u22650.0 object_length_threshold float object_length_threshold 0.1 \u22650.0 num_points_threshold integer num_points_threshold 4 \u22650 max_rings_num integer max_rings_num 128 \u22651 max_points_num_per_ring integer Set this value large enough such that HFoV / resolution < max_points_num_per_ring 4000 \u22650 publish_outlier_pointcloud boolean Flag to publish outlier pointcloud and visibility score. Due to performance concerns, please set to false during experiments. false N/A min_azimuth_deg float The left limit of azimuth for visibility score calculation 0.0 \u22650.0 max_azimuth_deg float The right limit of azimuth for visibility score calculation 360.0 \u22650.0\u2264360.0 max_distance float The limit distance for visibility score calculation 12.0 \u22650.0 vertical_bins integer The number of vertical bin for visibility histogram 128 \u22651 horizontal_bins integer The number of horizontal bin for visibility histogram 36 \u22651 noise_threshold integer The threshold value for distinguishing noise from valid points in the frequency image 2 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This node requires that the points of the input point cloud are in chronological order and that individual points follow the memory layout specified by PointXYZIRCAEDT.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/","title":"vector_map_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#vector_map_filter","title":"vector_map_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#purpose","title":"Purpose","text":"

The vector_map_filter is a node that removes points outside the lanes by using a vector map.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range voxel_size_x float voxel size along x-axis [m] 0.04 \u22650 voxel_size_y float voxel size along y-axis [m] 0.04 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#vector_map_inside_area_filter","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#purpose","title":"Purpose","text":"

The vector_map_inside_area_filter is a node that removes points inside vector map areas whose type is given by a parameter.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class, so please also see the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 input points ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map used for filtering points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range polygon_type string polygon type to be filtered no_obstacle_segmentation_area N/A use_z_filter boolean use z value for filtering false N/A z_threshold float z threshold for filtering 0.0 N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#voxel_grid_outlier_filter","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This filter removes point cloud noise based on the number of points existing within a voxel. The radius_search_2d_outlier_filter is more accurate, but this method has the advantage of a lower calculation cost.
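
A minimal sketch of the voxel-count test, assuming a simple spatial hash (the actual node builds on pcl::VoxelGrid):

#include <cmath>\n#include <cstdint>\n#include <unordered_map>\n#include <vector>\n\nstruct Point { float x, y, z; };\n\n// Sketch: keep a point only when its voxel holds at least voxel_points_threshold points.\nstd::vector<Point> voxel_count_filter(\n  const std::vector<Point> & points, float size_x, float size_y, float size_z,\n  int voxel_points_threshold)\n{\n  auto key = [&](const Point & p) {\n    const int64_t ix = static_cast<int64_t>(std::floor(p.x / size_x));\n    const int64_t iy = static_cast<int64_t>(std::floor(p.y / size_y));\n    const int64_t iz = static_cast<int64_t>(std::floor(p.z / size_z));\n    return (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791);  // simple spatial hash\n  };\n  std::unordered_map<int64_t, int> counts;\n  for (const auto & p : points) {\n    counts[key(p)]++;\n  }\n  std::vector<Point> kept;\n  for (const auto & p : points) {\n    if (counts[key(p)] >= voxel_points_threshold) {\n      kept.push_back(p);\n    }\n  }\n  return kept;\n}\n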

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range voxel_size_x float the voxel size along x-axis [m] 0.3 \u22650 voxel_size_y float the voxel size along y-axis [m] 0.3 \u22650 voxel_size_z float the voxel size along z-axis [m] 0.1 \u22650 voxel_points_threshold integer the minimum number of points in each voxel 2 \u22651"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2_node","title":"radar_scan_to_pointcloud2_node","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan"},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#output-topics","title":"Output topics","text":"Name Type Description output/amplitude_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is amplitude. output/doppler_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is doppler velocity."},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#parameters","title":"Parameters","text":"Name Type Description publish_amplitude_pointcloud bool Whether publish radar pointcloud whose intensity is amplitude. Default is true. publish_doppler_pointcloud bool Whether publish radar pointcloud whose intensity is doppler velocity. Default is false."},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_scan_to_pointcloud2 radar_scan_to_pointcloud2.launch.xml\n
"},{"location":"sensing/autoware_radar_static_pointcloud_filter/","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/autoware_radar_static_pointcloud_filter/#radar_static_pointcloud_filter","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/autoware_radar_static_pointcloud_filter/#radar_static_pointcloud_filter_node","title":"radar_static_pointcloud_filter_node","text":"

Extract the static/dynamic radar pointcloud by using doppler velocity and ego motion. The calculation cost is O(n), where n is the number of radar points.
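
The underlying test can be sketched as follows: for a stationary target, the measured doppler velocity should match the projection of the ego velocity onto the ray toward the target. The sign convention and the 2-sigma gate below are assumptions for illustration, not the exact implementation.

#include <cmath>\n\n// Sketch: classify a radar return as static when its doppler velocity matches\n// the value predicted from ego motion, within a tolerance derived from\n// doppler_velocity_sd (assumed convention: negative doppler = approaching).\nbool is_static(\n  double doppler_velocity, double azimuth, double ego_longitudinal_velocity,\n  double doppler_velocity_sd)\n{\n  const double predicted = -ego_longitudinal_velocity * std::cos(azimuth);\n  return std::abs(doppler_velocity - predicted) < 2.0 * doppler_velocity_sd;  // ~2-sigma gate\n}\n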

"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan input/odometry nav_msgs::msg::Odometry Ego vehicle odometry topic"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#output-topics","title":"Output topics","text":"Name Type Description output/static_radar_scan radar_msgs::msg::RadarScan static radar pointcloud output/dynamic_radar_scan radar_msgs::msg::RadarScan dynamic radar pointcloud"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#parameters","title":"Parameters","text":"Name Type Description doppler_velocity_sd double Standard deviation for radar doppler velocity. [m/s]"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_static_pointcloud_filter radar_static_pointcloud_filter.launch.xml\n
"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#algorithm","title":"Algorithm","text":""},{"location":"sensing/autoware_radar_threshold_filter/","title":"radar_threshold_filter","text":""},{"location":"sensing/autoware_radar_threshold_filter/#radar_threshold_filter","title":"radar_threshold_filter","text":""},{"location":"sensing/autoware_radar_threshold_filter/#radar_threshold_filter_node","title":"radar_threshold_filter_node","text":"

Remove noise from radar returns by thresholding.

The calculation cost is O(n), where n is the number of radar returns.
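
A minimal sketch of the gating logic described by the parameter table below (the z value is reconstructed here from range and elevation, which is an assumption; the real node may differ in details):

#include <cmath>\n#include <radar_msgs/msg/radar_return.hpp>\n\n// Sketch: a return passes when every enabled filter brackets its value.\nstruct Thresholds\n{\n  bool is_amplitude_filter; double amplitude_min, amplitude_max;\n  bool is_range_filter; double range_min, range_max;\n  bool is_azimuth_filter; double azimuth_min, azimuth_max;\n  bool is_z_filter; double z_min, z_max;\n};\n\nbool passes(const radar_msgs::msg::RadarReturn & r, const Thresholds & t)\n{\n  if (t.is_amplitude_filter && !(t.amplitude_min < r.amplitude && r.amplitude < t.amplitude_max)) {\n    return false;\n  }\n  if (t.is_range_filter && !(t.range_min < r.range && r.range < t.range_max)) {\n    return false;\n  }\n  if (t.is_azimuth_filter && !(t.azimuth_min < r.azimuth && r.azimuth < t.azimuth_max)) {\n    return false;\n  }\n  if (t.is_z_filter) {\n    const double z = r.range * std::sin(r.elevation);  // assumed z reconstruction\n    if (!(t.z_min < z && z < t.z_max)) {\n      return false;\n    }\n  }\n  return true;\n}\n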

"},{"location":"sensing/autoware_radar_threshold_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs/msg/RadarScan.msg Radar pointcloud data"},{"location":"sensing/autoware_radar_threshold_filter/#output-topics","title":"Output topics","text":"Name Type Description output/radar radar_msgs/msg/RadarScan.msg Filtered radar pointcloud"},{"location":"sensing/autoware_radar_threshold_filter/#parameters","title":"Parameters","text":" Name Type Description is_amplitude_filter bool if this parameter is true, apply amplitude filter (publish amplitude_min < amplitude < amplitude_max) amplitude_min double [dBm^2] amplitude_max double [dBm^2] is_range_filter bool if this parameter is true, apply range filter (publish range_min < range < range_max) range_min double [m] range_max double [m] is_azimuth_filter bool if this parameter is true, apply angle filter (publish azimuth_min < range < azimuth_max) azimuth_min double [rad] azimuth_max double [rad] is_z_filter bool if this parameter is true, apply z position filter (publish z_min < z < z_max) z_min double [m] z_max double [m]"},{"location":"sensing/autoware_radar_threshold_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_threshold_filter radar_threshold_filter.launch.xml\n
"},{"location":"sensing/autoware_radar_tracks_noise_filter/","title":"autoware_radar_tracks_noise_filter","text":""},{"location":"sensing/autoware_radar_tracks_noise_filter/#autoware_radar_tracks_noise_filter","title":"autoware_radar_tracks_noise_filter","text":"

This package contains a radar object filter module for radar_msgs/msg/RadarTrack. It can filter noise objects out of RadarTracks.

"},{"location":"sensing/autoware_radar_tracks_noise_filter/#algorithm","title":"Algorithm","text":"

The core algorithm of this package is the RadarTrackCrossingNoiseFilterNode::isNoise() function. See the function and its parameters for details.

Radar can detect x-axis velocity as doppler velocity, but cannot detect y-axis velocity. Some radars can estimate y-axis velocity inside the device, but the estimate sometimes lacks precision. In the y-axis threshold filter, if the y-axis velocity of a RadarTrack is greater than velocity_y_threshold, the track is treated as a noise object.
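
A condensed sketch of that test (the actual isNoise() implementation may include additional checks):

#include <cmath>\n#include <radar_msgs/msg/radar_track.hpp>\n\n// Sketch: a track whose lateral (y-axis) velocity exceeds the threshold is treated as noise.\nbool is_noise(const radar_msgs::msg::RadarTrack & track, double velocity_y_threshold)\n{\n  return std::abs(track.velocity.y) > velocity_y_threshold;\n}\n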

"},{"location":"sensing/autoware_radar_tracks_noise_filter/#input","title":"Input","text":"Name Type Description ~/input/tracks radar_msgs/msg/RadarTracks.msg 3D detected tracks."},{"location":"sensing/autoware_radar_tracks_noise_filter/#output","title":"Output","text":"Name Type Description ~/output/noise_tracks radar_msgs/msg/RadarTracks.msg Noise objects ~/output/filtered_tracks radar_msgs/msg/RadarTracks.msg Filtered objects"},{"location":"sensing/autoware_radar_tracks_noise_filter/#parameters","title":"Parameters","text":"Name Type Description Default value velocity_y_threshold double Y-axis velocity threshold [m/s]. If y-axis velocity of RadarTrack is more than velocity_y_threshold, it treats as noise objects. 7.0"},{"location":"sensing/autoware_vehicle_velocity_converter/","title":"autoware_vehicle_velocity_converter","text":""},{"location":"sensing/autoware_vehicle_velocity_converter/#autoware_vehicle_velocity_converter","title":"autoware_vehicle_velocity_converter","text":""},{"location":"sensing/autoware_vehicle_velocity_converter/#purpose","title":"Purpose","text":"

This package converts an autoware_vehicle_msgs::msg::VehicleReport message to a geometry_msgs::msg::TwistWithCovarianceStamped message for the gyro odometer node.
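
A minimal sketch of the conversion, assuming the report carries a longitudinal velocity field and using the parameter names from the table below:

#include <string>\n\n#include <geometry_msgs/msg/twist_with_covariance_stamped.hpp>\n\n// Sketch: copy the scaled longitudinal velocity into a stamped twist and fill\n// the diagonal covariance entries from the configured standard deviations.\ngeometry_msgs::msg::TwistWithCovarianceStamped to_twist(\n  double longitudinal_velocity, const builtin_interfaces::msg::Time & stamp,\n  const std::string & frame_id, double speed_scale_factor, double velocity_stddev_xx,\n  double angular_velocity_stddev_zz)\n{\n  geometry_msgs::msg::TwistWithCovarianceStamped twist;\n  twist.header.stamp = stamp;\n  twist.header.frame_id = frame_id;\n  twist.twist.twist.linear.x = longitudinal_velocity * speed_scale_factor;\n  twist.twist.covariance[0] = velocity_stddev_xx * velocity_stddev_xx;  // var(vx)\n  twist.twist.covariance[35] = angular_velocity_stddev_zz * angular_velocity_stddev_zz;  // var(yaw rate)\n  return twist;\n}\n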

"},{"location":"sensing/autoware_vehicle_velocity_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_vehicle_velocity_converter/#input","title":"Input","text":"Name Type Description velocity_status autoware_vehicle_msgs::msg::VehicleReport vehicle velocity"},{"location":"sensing/autoware_vehicle_velocity_converter/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance converted from VehicleReport"},{"location":"sensing/autoware_vehicle_velocity_converter/#parameters","title":"Parameters","text":"Name Type Description speed_scale_factor double speed scale factor (ideal value is 1.0) frame_id string frame id for output message velocity_stddev_xx double standard deviation for vx angular_velocity_stddev_zz double standard deviation for yaw rate"},{"location":"sensing/livox/autoware_livox_tag_filter/","title":"livox_tag_filter","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#livox_tag_filter","title":"livox_tag_filter","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#purpose","title":"Purpose","text":"

The livox_tag_filter is a node that removes noise from the pointcloud by using the following tags:

"},{"location":"sensing/livox/autoware_livox_tag_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/livox/autoware_livox_tag_filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/livox/autoware_livox_tag_filter/#parameters","title":"Parameters","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#node-parameters","title":"Node Parameters","text":"Name Type Description ignore_tags vector ignored tags (See the following table)"},{"location":"sensing/livox/autoware_livox_tag_filter/#tag-parameters","title":"Tag Parameters","text":"Bit Description Options 0~1 Point property based on spatial position 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Low confidence level of the noise 2~3 Point property based on intensity 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Reserved 4~5 Return number 00: return 0 01: return 1 10: return 2 11: return 3 6~7 Reserved

You can download a more detailed description of the Livox tags from external link [1].
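
A sketch of how the tag byte decomposes according to the bit layout above (the helper names are illustrative, not from the package):

#include <cstdint>\n\n// Sketch: extract the two-bit fields of a Livox tag byte.\ninline uint8_t spatial_noise_level(uint8_t tag) { return (tag >> 0) & 0x03; }    // bits 0-1\ninline uint8_t intensity_noise_level(uint8_t tag) { return (tag >> 2) & 0x03; }  // bits 2-3\ninline uint8_t return_number(uint8_t tag) { return (tag >> 4) & 0x03; }          // bits 4-5\n// Bits 6-7 are reserved; points whose tag matches an entry of ignore_tags are dropped.\n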

"},{"location":"sensing/livox/autoware_livox_tag_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://www.livoxtech.com/downloads

"},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"simulator/autoware_carla_interface/","title":"autoware_carla_interface","text":""},{"location":"simulator/autoware_carla_interface/#autoware_carla_interface","title":"autoware_carla_interface","text":""},{"location":"simulator/autoware_carla_interface/#ros-2autowareuniverse-bridge-for-carla-simulator","title":"ROS 2/Autoware.universe bridge for CARLA simulator","text":"

Thanks to https://github.com/gezp for ROS 2 Humble support for CARLA communication. This ROS package enables communication between Autoware and CARLA for autonomous driving simulation.

"},{"location":"simulator/autoware_carla_interface/#supported-environment","title":"Supported Environment","text":"ubuntu ros carla autoware 22.04 humble 0.9.15 Main"},{"location":"simulator/autoware_carla_interface/#setup","title":"Setup","text":""},{"location":"simulator/autoware_carla_interface/#install","title":"Install","text":"
  1. Download the maps (y-axis inverted version) to an arbitrary location
  2. Rename the files and create the map folder (example: Town01) inside autoware_map. (point_cloud/Town01.pcd -> autoware_map/Town01/pointcloud_map.pcd, vector_maps/lanelet2/Town01.osm -> autoware_map/Town01/lanelet2_map.osm)
  3. Create map_projector_info.yaml in the folder and add projector_type: Local on the first line.
"},{"location":"simulator/autoware_carla_interface/#build","title":"Build","text":"
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
"},{"location":"simulator/autoware_carla_interface/#run","title":"Run","text":"
  1. Run CARLA, change the map, and spawn objects if needed

    cd CARLA\n./CarlaUE4.sh -prefernvidia -quality-level=Low -RenderOffScreen\n
  2. Run the ROS nodes

    ros2 launch autoware_launch e2e_simulator.launch.xml map_path:=$HOME/autoware_map/Town01 vehicle_model:=sample_vehicle sensor_model:=awsim_sensor_kit simulator_type:=carla carla_map:=Town01\n
  3. Set initial pose (Init by GNSS)

  4. Set goal position
  5. Wait for planning
  6. Engage
"},{"location":"simulator/autoware_carla_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The InitializeInterface class is key to setting up both the CARLA world and the ego vehicle. It fetches configuration parameters through the autoware_carla_interface.launch.xml.

The main simulation loop runs within the carla_ros2_interface class. This loop ticks the simulation time inside the CARLA simulator in steps of fixed_delta_seconds, during which data is received and published as ROS 2 messages at the frequencies defined in self.sensor_frequencies.

Ego vehicle commands from Autoware are processed through the autoware_raw_vehicle_cmd_converter, which calibrates these commands for CARLA. The calibrated commands are then fed directly into CARLA control via CarlaDataProvider.

"},{"location":"simulator/autoware_carla_interface/#configurable-parameters-for-world-loading","title":"Configurable Parameters for World Loading","text":"

All the key parameters can be configured in autoware_carla_interface.launch.xml.

Name Type Default Value Description host string \"localhost\" Hostname for the CARLA server port int \"2000\" Port number for the CARLA server timeout int 20 Timeout for the CARLA client ego_vehicle_role_name string \"ego_vehicle\" Role name for the ego vehicle vehicle_type string \"vehicle.toyota.prius\" Blueprint ID of the vehicle to spawn. The Blueprint ID of vehicles can be found in CARLA Blueprint ID spawn_point string None Coordinates for spawning the ego vehicle (None is random). Format = [x, y, z, roll, pitch, yaw] carla_map string \"Town01\" Name of the map to load in CARLA sync_mode bool True Boolean flag to set synchronous mode in CARLA fixed_delta_seconds double 0.05 Time step for the simulation (related to client FPS) objects_definition_file string \"$(find-pkg-share autoware_carla_interface)/objects.json\" Sensor parameters file that are used for spawning sensor in CARLA use_traffic_manager bool True Boolean flag to set traffic manager in CARLA max_real_delta_seconds double 0.05 Parameter to limit the simulation speed below fixed_delta_seconds config_file string \"$(find-pkg-share autoware_carla_interface)/raw_vehicle_cmd_converter.param.yaml\" Control mapping file to be used in autoware_raw_vehicle_cmd_converter. Current control are calibrated based on vehicle.toyota.prius Blueprints ID in CARLA. Changing the vehicle type may need a recalibration."},{"location":"simulator/autoware_carla_interface/#configurable-parameters-for-sensors","title":"Configurable Parameters for Sensors","text":"

The parameters below can be configured in carla_ros.py.

Name Type Default Value Description self.sensor_frequencies dict {\"top\": 11, \"left\": 11, \"right\": 11, \"camera\": 11, \"imu\": 50, \"status\": 50, \"pose\": 2} (line 67) Calculates the time interval since the last publication and checks if this interval meets the minimum required to not exceed the desired frequency. It will only affect ROS publishing frequency not CARLA sensor tick. "},{"location":"simulator/autoware_carla_interface/#world-loading","title":"World Loading","text":"

The carla_ros.py script sets up the CARLA world:

  1. Client Connection:

    client = carla.Client(self.local_host, self.port)\nclient.set_timeout(self.timeout)\n
  2. Load the Map:

    The map is loaded into the CARLA world according to the carla_map parameter.

    client.load_world(self.map_name)\nself.world = client.get_world()\n
  3. Spawn Ego Vehicle:

    The vehicle is spawned according to the vehicle_type, spawn_point, and agent_role_name parameters.

    spawn_point = carla.Transform()\npoint_items = self.spawn_point.split(\",\")\nif len(point_items) == 6:\n   spawn_point.location.x = float(point_items[0])\n   spawn_point.location.y = float(point_items[1])\n   spawn_point.location.z = float(point_items[2]) + 2\n   spawn_point.rotation.roll = float(point_items[3])\n   spawn_point.rotation.pitch = float(point_items[4])\n   spawn_point.rotation.yaw = float(point_items[5])\nCarlaDataProvider.request_new_actor(self.vehicle_type, spawn_point, self.agent_role_name)\n
"},{"location":"simulator/autoware_carla_interface/#traffic-light-recognition","title":"Traffic Light Recognition","text":"

The maps provided by the Carla Simulator (Carla Lanelet2 Maps) currently lack proper traffic light components for Autoware and have different latitude and longitude coordinates compared to the pointcloud map. To enable traffic light recognition, follow the steps below to modify the maps.

"},{"location":"simulator/autoware_carla_interface/#tips","title":"Tips","text":""},{"location":"simulator/autoware_carla_interface/#known-issues-and-future-works","title":"Known Issues and Future Works","text":""},{"location":"simulator/dummy_perception_publisher/","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#dummy_perception_publisher","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#purpose","title":"Purpose","text":"

This node publishes the results of dummy detection using the perception message types.

"},{"location":"simulator/dummy_perception_publisher/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/dummy_perception_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/dummy_perception_publisher/#input","title":"Input","text":"Name Type Description /tf tf2_msgs/TFMessage TF (self-pose) input/object tier4_simulation_msgs::msg::DummyObject dummy detection objects"},{"location":"simulator/dummy_perception_publisher/#output","title":"Output","text":"Name Type Description output/dynamic_object tier4_perception_msgs::msg::DetectedObjectsWithFeature dummy detection objects output/points_raw sensor_msgs::msg::PointCloud2 point cloud of objects output/debug/ground_truth_objects autoware_perception_msgs::msg::TrackedObjects ground truth objects"},{"location":"simulator/dummy_perception_publisher/#parameters","title":"Parameters","text":"Name Type Default Value Explanation visible_range double 100.0 sensor visible range [m] detection_successful_rate double 0.8 sensor detection rate. (min) 0.0 - 1.0(max) enable_ray_tracing bool true if True, use ray tracking use_object_recognition bool true if True, publish objects topic use_base_link_z bool true if True, node uses z coordinate of ego base_link publish_ground_truth bool false if True, publish ground truth objects use_fixed_random_seed bool false if True, use fixed random seed random_seed int 0 random seed"},{"location":"simulator/dummy_perception_publisher/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/fault_injection/","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#fault_injection","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#purpose","title":"Purpose","text":"

This package is used to convert pseudo system faults from PSim to Diagnostics and notify Autoware. The component diagram is as follows:

"},{"location":"simulator/fault_injection/#test","title":"Test","text":"
source install/setup.bash\ncd fault_injection\nlaunch_test test/test_fault_injection_node.test.py\n
"},{"location":"simulator/fault_injection/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/fault_injection/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/fault_injection/#input","title":"Input","text":"Name Type Description ~/input/simulation_events tier4_simulation_msgs::msg::SimulationEvents simulation events"},{"location":"simulator/fault_injection/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs::msg::DiagnosticArray Dummy diagnostics"},{"location":"simulator/fault_injection/#parameters","title":"Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/learning_based_vehicle_model/","title":"Learned Model","text":""},{"location":"simulator/learning_based_vehicle_model/#learned-model","title":"Learned Model","text":"

This is the design document for the Python learned model used in the simple_planning_simulator package.

"},{"location":"simulator/learning_based_vehicle_model/#purpose-use-cases","title":"Purpose / Use cases","text":"

This library creates an interface between models in Python and PSIM (C++). It is used to quickly deploy learned Python models in PSIM without the need for a complex C++ implementation.

"},{"location":"simulator/learning_based_vehicle_model/#design","title":"Design","text":"

The idea behind this package is that the model we want to use for simulation consists of multiple sub-models (e.g., steering model, drive model, vehicle kinematics, etc.). These sub-models are implemented in Python and can be trainable. Each sub-model has string names for all of its inputs/outputs, which are used to create model interconnections automatically (see image below). This allows us to easily switch sub-models for better customization of the simulator.

"},{"location":"simulator/learning_based_vehicle_model/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To use this package, python3 and pybind11 need to be installed. The only assumption on the Python sub-models is their interface.

class PythonSubmodelInterface:\n\n    def forward(self, action, state):  # Required\n        \"\"\"\n        Calculate forward pass through the model and return next_state.\n        \"\"\"\n        return list()\n\n    def get_state_names(self):  # Required\n        \"\"\"\n        Return list of string names of the model states (outputs).\n        \"\"\"\n        return list()\n\n    def get_action_names(self):  # Required\n        \"\"\"\n        Return list of string names of the model actions (inputs).\n        \"\"\"\n        return list()\n\n    def reset(self):  # Required\n        \"\"\"\n        Reset model. This function is called after load_params().\n        \"\"\"\n        pass\n\n    def load_params(self, path):  # Required\n        \"\"\"\n        Load parameters of the model.\n        Inputs:\n            - path: Path to a parameter file to load by the model.\n        \"\"\"\n        pass\n\n    def dtSet(self, dt):  # Required\n        \"\"\"\n        Set dt of the model.\n        Inputs:\n            - dt: time step\n        \"\"\"\n        pass\n
"},{"location":"simulator/learning_based_vehicle_model/#api","title":"API","text":"

To successfully create a vehicle model, an InterconnectedModel class needs to be set up correctly.

"},{"location":"simulator/learning_based_vehicle_model/#interconnectedmodel-class","title":"InterconnectedModel class","text":""},{"location":"simulator/learning_based_vehicle_model/#constructor","title":"Constructor","text":"

The constructor takes no arguments.

"},{"location":"simulator/learning_based_vehicle_model/#void-addsubmodelstdtuplestdstring-stdstring-stdstring-model_descriptor","title":"void addSubmodel(std::tuple<std::string, std::string, std::string> model_descriptor)","text":"

Add a new sub-model to the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#void-generateconnectionsstdvectorchar-in_names-stdvectorchar-out_names","title":"void generateConnections(std::vector<char *> in_names, std::vector<char*> out_names)","text":"

Generate connections between sub-models and inputs/outputs of the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#void-initstatestdvectordouble-new_state","title":"void initState(std::vector<double> new_state)","text":"

Set the initial state of the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#stdvectordouble-updatepymodelstdvectordouble-psim_input","title":"std::vector<double> updatePyModel(std::vector<double> psim_input)","text":"

Calculate the next state of the model by calculating the next state of all of the sub-models.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#dtsetdouble-dt","title":"dtSet(double dt)","text":"

Set the time step of the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#example","title":"Example","text":"

Firstly we need to set up the model.

InterconnectedModel vehicle;\n\n// Example of model descriptors\nstd::tuple<char*, char*, char*> model_descriptor_1 = {\n  (char*)\"path_to_python_module_with_model_class_1\",\n  (char*)nullptr,  // If no param file is needed you can pass 'nullptr'\n  (char*)\"ModelClass1\"\n};\n\nstd::tuple<char*, char*, char*> model_descriptor_2 = {\n  (char*)\"path_to_python_module_with_model_class_2\",\n  (char*)\"/path_to/param_file\",\n  (char*)\"ModelClass2\"  // Name of the python class. Needs to use the interface from 'Assumptions'\n};\n\n// Create sub-models based on descriptors\nvehicle.addSubmodel(model_descriptor_1);\nvehicle.addSubmodel(model_descriptor_2);\n\n// Define STATE and INPUT names of the system\nstd::vector<char*> state_names = {(char*)\"STATE_NAME_1\", (char*)\"STATE_NAME_2\"};\nstd::vector<char*> input_names = {(char*)\"INPUT_NAME_1\", (char*)\"INPUT_NAME_2\"};\n\n// Automatically connect sub-systems with model input\nvehicle.generateConnections(input_names, state_names);\n\n// Set the time step of the model\nvehicle.dtSet(dt);\n

After the model is correctly set up, we can use it in the following way.

// Example of a model input\nstd::vector<double> vehicle_input = {0.0, 1.0};  // INPUT_NAME_1, INPUT_NAME_2\n\n// Example of a model state\nstd::vector<double> current_state = {0.2, 0.5};  // STATE_NAME_1, STATE_NAME_2\n\n// Set the model state\nvehicle.initState(current_state);\n\n// Calculate the next state of the model\nstd::vector<double> next_state = vehicle.updatePyModel(vehicle_input);\n
"},{"location":"simulator/learning_based_vehicle_model/#references-external-links","title":"References / External links","text":""},{"location":"simulator/learning_based_vehicle_model/#related-issues","title":"Related issues","text":""},{"location":"simulator/simple_planning_simulator/","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#simple_planning_simulator","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node simulates the vehicle motion for a vehicle command in 2D using a simple vehicle model.

"},{"location":"simulator/simple_planning_simulator/#design","title":"Design","text":"

The purpose of this simulator is the integration testing of planning and control modules. It does not simulate sensing or perception; it is implemented in pure C++ and works without a GPU.

"},{"location":"simulator/simple_planning_simulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"simulator/simple_planning_simulator/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"simulator/simple_planning_simulator/#input","title":"input","text":""},{"location":"simulator/simple_planning_simulator/#output","title":"output","text":""},{"location":"simulator/simple_planning_simulator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/simple_planning_simulator/#common-parameters","title":"Common Parameters","text":"Name Type Description Default value simulated_frame_id string set to the child_frame_id in output tf \"base_link\" origin_frame_id string set to the frame_id in output tf \"odom\" initialize_source string If \"ORIGIN\", the initial pose is set at (0,0,0). If \"INITIAL_POSE_TOPIC\", node will wait until the input/initialpose topic is published. \"INITIAL_POSE_TOPIC\" add_measurement_noise bool If true, the Gaussian noise is added to the simulated results. true pos_noise_stddev double Standard deviation for position noise 0.01 rpy_noise_stddev double Standard deviation for Euler angle noise 0.0001 vel_noise_stddev double Standard deviation for longitudinal velocity noise 0.0 angvel_noise_stddev double Standard deviation for angular velocity noise 0.0 steer_noise_stddev double Standard deviation for steering angle noise 0.0001"},{"location":"simulator/simple_planning_simulator/#vehicle-model-parameters","title":"Vehicle Model Parameters","text":""},{"location":"simulator/simple_planning_simulator/#vehicle_model_type-options","title":"vehicle_model_type options","text":"

The following models receive ACTUATION_CMD generated from raw_vehicle_cmd_converter. Therefore, when these models are selected, the raw_vehicle_cmd_converter is also launched.

The IDEAL model moves ideally as commanded, while the DELAY model moves based on a first-order model with a time delay. STEER means the model receives a steering command. VEL means the model receives a target velocity command, while ACC means the model receives a target acceleration command. The GEARED suffix means that the motion considers the gear command: the vehicle moves in only one direction, following the gear.

The table below shows which models correspond to what parameters. The model names are written in abbreviated form (e.g. IDEAL_STEER_VEL = I_ST_V).

Name Type Description I_ST_V I_ST_A I_ST_A_G D_ST_V D_ST_A D_ST_A_G D_ST_A_G_WO_FG D_ST_M_ACC_G L_S_V A_C Default value unit acc_time_delay double dead time for the acceleration input x x x x o o o o x x 0.1 [s] steer_time_delay double dead time for the steering input x x x o o o o o x o 0.24 [s] vel_time_delay double dead time for the velocity input x x x o x x x x x x 0.25 [s] acc_time_constant double time constant of the 1st-order acceleration dynamics x x x x o o o o x x 0.1 [s] steer_time_constant double time constant of the 1st-order steering dynamics x x x o o o o o x o 0.27 [s] steer_dead_band double dead band for steering angle x x x o o o o x x o 0.0 [rad] vel_time_constant double time constant of the 1st-order velocity dynamics x x x o x x x x x x 0.5 [s] vel_lim double limit of velocity x x x o o o o o x x 50.0 [m/s] vel_rate_lim double limit of acceleration x x x o o o o o x x 7.0 [m/ss] steer_lim double limit of steering angle x x x o o o o o x o 1.0 [rad] steer_rate_lim double limit of steering angle change rate x x x o o o o o x o 5.0 [rad/s] steer_bias double bias for steering angle x x x o o o o o x o 0.0 [rad] debug_acc_scaling_factor double scaling factor for accel command x x x x o o o x x x 1.0 [-] debug_steer_scaling_factor double scaling factor for steer command x x x x o o o x x x 1.0 [-] acceleration_map_path string path to csv file for acceleration map which converts velocity and ideal acceleration to actual acceleration x x x x x x x o x x - [-] model_module_paths string path to a python module where the model is implemented x x x x x x x x o x - [-] model_param_paths string path to the file where model parameters are stored (can be empty string if no parameter file is required) x x x x x x x x o x - [-] model_class_names string name of the class that implements the model x x x x x x x x o x - [-]

Note: Parameters model_module_paths, model_param_paths, and model_class_names need to have the same length.

The acceleration_map is used only for DELAY_STEER_MAP_ACC_GEARED. It maps the acceleration command (vertical axis) and the current velocity (horizontal axis) to the converted acceleration command that is actually used in the simulator's motion calculation; values in between are linearly interpolated. A lookup sketch follows the example CSV below.

Example of acceleration_map.csv

default,  0.00,  1.39,  2.78,  4.17,  5.56,  6.94,  8.33,  9.72, 11.11, 12.50, 13.89, 15.28, 16.67\n-4.0,    -4.40, -4.36, -4.38, -4.12, -4.20, -3.94, -3.98, -3.80, -3.77, -3.76, -3.59, -3.50, -3.40\n-3.5,    -4.00, -3.91, -3.85, -3.64, -3.68, -3.55, -3.42, -3.24, -3.25, -3.00, -3.04, -2.93, -2.80\n-3.0,    -3.40, -3.37, -3.33, -3.00, -3.00, -2.90, -2.88, -2.65, -2.43, -2.44, -2.43, -2.39, -2.30\n-2.5,    -2.80, -2.72, -2.72, -2.62, -2.41, -2.43, -2.26, -2.18, -2.11, -2.03, -1.96, -1.91, -1.85\n-2.0,    -2.30, -2.24, -2.12, -2.02, -1.92, -1.81, -1.67, -1.58, -1.51, -1.49, -1.40, -1.35, -1.30\n-1.5,    -1.70, -1.61, -1.47, -1.46, -1.40, -1.37, -1.29, -1.24, -1.10, -0.99, -0.83, -0.80, -0.78\n-1.0,    -1.30, -1.28, -1.10, -1.09, -1.04, -1.02, -0.98, -0.89, -0.82, -0.61, -0.52, -0.54, -0.56\n-0.8,    -0.96, -0.90, -0.82, -0.74, -0.70, -0.65, -0.63, -0.59, -0.55, -0.44, -0.39, -0.39, -0.35\n-0.6,    -0.77, -0.71, -0.67, -0.65, -0.58, -0.52, -0.51, -0.50, -0.40, -0.33, -0.30, -0.31, -0.30\n-0.4,    -0.45, -0.40, -0.45, -0.44, -0.38, -0.35, -0.31, -0.30, -0.26, -0.30, -0.29, -0.31, -0.25\n-0.2,    -0.24, -0.24, -0.25, -0.22, -0.23, -0.25, -0.27, -0.29, -0.24, -0.22, -0.17, -0.18, -0.12\n 0.0,     0.00,  0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08, -0.08, -0.10, -0.10, -0.10\n 0.2,     0.16,  0.12,  0.02,  0.02,  0.00,  0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08\n 0.4,     0.38,  0.30,  0.22,  0.25,  0.24,  0.23,  0.20,  0.16,  0.16,  0.14,  0.10,  0.05,  0.05\n 0.6,     0.52,  0.52,  0.51,  0.49,  0.43,  0.40,  0.35,  0.33,  0.33,  0.33,  0.32,  0.34,  0.34\n 0.8,     0.82,  0.81,  0.78,  0.68,  0.63,  0.56,  0.53,  0.48,  0.43,  0.41,  0.37,  0.38,  0.40\n 1.0,     1.00,  1.08,  1.01,  0.88,  0.76,  0.69,  0.66,  0.58,  0.54,  0.49,  0.45,  0.40,  0.40\n 1.5,     1.52,  1.50,  1.38,  1.26,  1.14,  1.03,  0.91,  0.82,  0.67,  0.61,  0.51,  0.41,  0.41\n 2.0,     1.80,  1.80,  1.64,  1.43,  1.25,  1.11,  0.96,  0.81,  0.70,  0.59,  0.51,  0.42,  0.42\n
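
As a rough sketch of how such a map can be queried (an illustration with a hard-coded excerpt of the CSV above, not the simulator's actual C++ code):

import numpy as np\nfrom scipy.interpolate import RegularGridInterpolator\n\n# Excerpt of the acceleration map above: rows are acceleration commands [m/s^2],\n# columns are current velocities [m/s], cells are converted accelerations [m/s^2].\naccs = np.array([-1.0, 0.0, 1.0])\nvels = np.array([0.00, 1.39, 2.78])\ntable = np.array([[-1.30, -1.28, -1.10],\n                  [0.00, 0.00, -0.05],\n                  [1.00, 1.08, 1.01]])\n\ninterp = RegularGridInterpolator((accs, vels), table)\n\ndef converted_acceleration(acc_cmd, vel):\n    # Clamp to the map range, then interpolate linearly between cells.\n    a = np.clip(acc_cmd, accs[0], accs[-1])\n    v = np.clip(vel, vels[0], vels[-1])\n    return float(interp([a, v])[0])\n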

"},{"location":"simulator/simple_planning_simulator/#actuation_cmd-model","title":"ACTUATION_CMD model","text":"

The simple_planning_simulator usually operates by receiving Control commands, but when the ACTUATION_CMD* model is selected, it receives Actuation commands instead of Control commands. This model can simulate the motion using the vehicle command that is actually sent to the real vehicle. Therefore, when this model is selected, the raw_vehicle_cmd_converter is also launched. Please refer to the actuation_cmd_sim.md for more details.

Note: The steering/velocity/acceleration dynamics are modeled as a first-order system with a dead time in the delay models. The time constant is defined as the time it takes for the step response to rise to 63% of its final value. The dead time is the delay in the response to a control input.
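
As a minimal sketch of such a first-order-with-dead-time response (illustrative only; the simulator's actual implementation is in C++, and the parameter values here are just the steering defaults from the table above):

from collections import deque\n\nDT = 0.01             # simulation time step [s]\nTIME_CONSTANT = 0.27  # e.g. steer_time_constant [s]\nDEAD_TIME = 0.24      # e.g. steer_time_delay [s]\n\n# A buffer of past inputs realizes the dead time.\ninput_buffer = deque([0.0] * int(round(DEAD_TIME / DT)))\n\ndef step(state, new_input):\n    \"\"\"One Euler step of a first-order lag driven by the delayed input.\"\"\"\n    input_buffer.append(new_input)\n    delayed = input_buffer.popleft()\n    return state + DT * (delayed - state) / TIME_CONSTANT\n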

"},{"location":"simulator/simple_planning_simulator/#example-of-learned_steer_vel-model","title":"Example of LEARNED_STEER_VEL model","text":"

We created a few basic models to showcase how LEARNED_STEER_VEL works.

  1. Install a library that contains basic Python models. (branch: v0.1_autoware)

  2. In the file src/vehicle/sample_vehicle_launch/sample_vehicle_description/config/simulator_model.param.yaml, set vehicle_model_type to LEARNED_STEER_VEL. In the same file, set the following parameters. These models are for testing and do not require any parameter file.

model_module_paths: [\n  \"control_analysis_pipeline.autoware_models.vehicle.kinematic\",\n  \"control_analysis_pipeline.autoware_models.steering.steer_example\",\n  \"control_analysis_pipeline.autoware_models.drive.drive_example\",\n]\nmodel_param_paths: [\"\", \"\", \"\"]\nmodel_class_names: [\"KinematicModel\", \"SteerExample\", \"DriveExample\"]\n
"},{"location":"simulator/simple_planning_simulator/#default-tf-configuration","title":"Default TF configuration","text":"

Since the vehicle outputs the odom->base_link tf, this simulator outputs the tf with the same frame_id configuration. In simple_planning_simulator.launch.py, the node that outputs the map->odom tf, which is usually estimated by the localization module (e.g. NDT), is launched as well. Since the tf output by this simulator module is an ideal value, odom->map will always be 0.

"},{"location":"simulator/simple_planning_simulator/#caveat-pitch-calculation","title":"(Caveat) Pitch calculation","text":"

Ego vehicle pitch angle is calculated in the following manner.

NOTE: driving against the line direction (as depicted in the image's bottom row) is not supported and is only shown for illustration purposes.

"},{"location":"simulator/simple_planning_simulator/#error-detection-and-handling","title":"Error detection and handling","text":"

The only input validation performed is a check for a valid vehicle model type.

"},{"location":"simulator/simple_planning_simulator/#security-considerations","title":"Security considerations","text":""},{"location":"simulator/simple_planning_simulator/#references-external-links","title":"References / External links","text":"

This was originally developed in Autoware.AI. See the link below.

https://github.com/Autoware-AI/simulation/tree/master/wf_simulator

"},{"location":"simulator/simple_planning_simulator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/","title":"ACTUATION_CMD model","text":""},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd-model","title":"ACTUATION_CMD model","text":"

The simple_planning_simulator usually operates by receiving Control commands, but when the ACTUATION_CMD* model is selected, it receives Actuation commands instead of Control commands. This model can simulate the motion using the vehicle command that is actually sent to the real vehicle. Therefore, when this model is selected, the raw_vehicle_cmd_converter is also launched.

"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd","title":"ACTUATION_CMD","text":"

This model receives the accel/brake commands and converts them using the map to calculate the motion of the model. The steer command is used as is. Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: false\n
"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd_steer_map","title":"ACTUATION_CMD_STEER_MAP","text":"

This model inherits from ACTUATION_CMD and receives an arbitrary steering value as the actuation command. The value is converted to the steering tire rate to calculate the motion of the model. An example of such an arbitrary value is the EPS (Electric Power Steering) voltage.

Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: true\n
"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd_vgr","title":"ACTUATION_CMD_VGR","text":"

This model inherits from ACTUATION_CMD; the steering wheel angle is sent as the actuation command. The value is converted to the steering tire angle through a variable gear ratio (VGR) to calculate the motion of the model. A conversion sketch follows the configuration below.

Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: true\n
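
As a rough sketch of the conversion (assuming, for illustration only, a variable gear ratio of the form a + b * v^2 - c * |steering wheel angle|, with the vgr_coef parameters listed later on this page; the actual conversion is implemented in raw_vehicle_cmd_converter):

def steering_wheel_to_tire_angle(steer_wheel_angle, velocity, vgr_a, vgr_b, vgr_c):\n    \"\"\"Convert a steering wheel angle [rad] into a steering tire angle [rad]\n    through an assumed variable gear ratio (illustrative sketch only).\"\"\"\n    vgr = vgr_a + vgr_b * velocity**2 - vgr_c * abs(steer_wheel_angle)\n    return steer_wheel_angle / vgr\n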

"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd_mechanical","title":"ACTUATION_CMD_MECHANICAL","text":"

This model inherits from ACTUATION_CMD_VGR and adds mechanical dynamics, together with a controller for them, to simulate the mechanical structure and software of the real vehicle.

Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: true\n

The mechanical structure of the vehicle is as follows.

The vehicle-side software is assumed to include limiters, PID controllers, power steering, etc. for the input. The conversion in the power steering is approximated by a polynomial. The Steering Dynamics is a model that represents the motion of the tire angle when a steering torque is input. It is represented by the following formula.

\\begin{align}\n\\dot{\\theta} &= \\omega \\\\\n\\dot{\\omega} &= \\frac{1}{I} (T_{\\text{input}} - D \\omega - K \\theta - \\text{sign}(\\omega) F_{\\text{friction}} ) \\\\\n\\end{align}\n

In this case, θ is the steering tire angle, ω is its angular velocity, I is the moment of inertia, D is the damping coefficient, K is the stiffness coefficient, T_input is the input steering torque, and F_friction is the friction torque.

Also, these dynamics have a dead zone. When the steering rotation direction is opposite to the steering torque input direction and the steering torque input is less than the dead zone threshold, the model enters the dead zone. Once it has entered the dead zone, it is judged to be in the dead zone until a steering input above the dead zone threshold is applied. While in the dead zone, the steering tire angle does not move.
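
Putting the formula and the dead zone together, a simplified Euler-integration sketch of these dynamics could look as follows (all parameter values are placeholders; the real model is implemented in C++ and system-identified as described below):

import math\n\n# Placeholder parameters: inertia, damping, stiffness, friction, dead zone threshold.\nI, D, K = 1.0, 0.1, 0.5\nF_FRICTION = 0.05\nDEAD_ZONE_THRESHOLD = 0.1\nDT = 0.001\n\ndef step(theta, omega, torque_input, in_dead_zone):\n    \"\"\"One Euler step of the steering dynamics with a dead zone.\"\"\"\n    # Enter the dead zone when the rotation opposes a sub-threshold input torque;\n    # leave it only when the input torque exceeds the threshold.\n    if omega * torque_input < 0 and abs(torque_input) < DEAD_ZONE_THRESHOLD:\n        in_dead_zone = True\n    elif abs(torque_input) >= DEAD_ZONE_THRESHOLD:\n        in_dead_zone = False\n    if in_dead_zone:\n        return theta, 0.0, True  # the steering tire angle does not move\n    friction = math.copysign(F_FRICTION, omega) if omega != 0.0 else 0.0\n    domega = (torque_input - D * omega - K * theta - friction) / I\n    return theta + DT * omega, omega + DT * domega, False\n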

Please refer to the following file for the parameter values that have been system-identified using the actual vehicle's driving data. In the plot, the blue line is the control input, the green line is the actual vehicle's tire angle output, and the red line is the simulator's tire angle output. mechanical_sample_param

This model achieves a smaller sum of errors against the observed values of the actual vehicle than a tuned first-order lag model. For details, please refer to #9252.

The parameters used in the ACTUATION_CMD are as follows.

Name Type Description unit accel_time_delay double dead time for the acceleration input [s] accel_time_constant double time constant of the 1st-order acceleration dynamics [s] brake_time_delay double dead time for the brake input [s] brake_time_constant double time constant of the 1st-order brake dynamics [s] convert_accel_cmd bool If true, it is assumed that the command is received converted to an accel actuation value, and it is converted back to acceleration value inside the simulator. [-] convert_brake_cmd bool If true, it is assumed that the command is received converted to a brake actuation value, and it is converted back to acceleration value inside the simulator. [-] vgr_coef_a double the value of the coefficient a of the variable gear ratio [-] vgr_coef_b double the value of the coefficient b of the variable gear ratio [-] vgr_coef_c double the value of the coefficient c of the variable gear ratio [-] enable_pub_steer bool whether to publish the steering tire angle. if false, it is expected to be converted and published from actuation_status in other nodes (e.g. raw_vehicle_cmd_converter) [-]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/","title":"tier4_dummy_object_rviz_plugin","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#tier4_dummy_object_rviz_plugin","title":"tier4_dummy_object_rviz_plugin","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin is used to generate dummy pedestrians, cars, and obstacles in the planning simulator.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#overview","title":"Overview","text":"

The CarInitialPoseTool publishes a topic for generating a dummy car. The PedestrianInitialPoseTool publishes a topic for generating a dummy pedestrian. The UnknownInitialPoseTool publishes a topic for generating a dummy obstacle. The DeleteAllObjectsTool deletes the dummy cars, pedestrians, and obstacles displayed by the above three tools.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#output","title":"Output","text":"Name Type Description /simulation/dummy_perception_publisher/object_info tier4_simulation_msgs::msg::DummyObject The topic on which to publish dummy object info"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#carpose","title":"CarPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 4.0 X standard deviation for initial pose [m] width_ float 1.8 Y standard deviation for initial pose [m] height_ float 2.0 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#buspose","title":"BusPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 10.5 X standard deviation for initial pose [m] width_ float 2.5 Y standard deviation for initial pose [m] height_ float 3.5 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#pedestrianpose","title":"PedestrianPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#unknownpose","title":"UnknownPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#deleteallobjects","title":"DeleteAllObjects","text":"Name Type Default 
Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Using a planning simulator

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select + on the tool tab.
  2. Select one of the tools provided by tier4_dummy_object_rviz_plugin and press OK.
  3. Select the new item in the tool tab (2D Dummy Car in the example) and click on it in rviz.
"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#interactive-manipulation","title":"Interactive manipulation","text":"

You can interactively manipulate the object.

  1. Select \"Tool Properties\" in rviz.
  2. Select the corresponding object tab in the Tool Properties.
  3. Turn the \"Interactive\" checkbox on.
  4. Select the item in the tool tab that you haven't chosen yet.
  5. Key commands are as follows.
action key command ADD Shift + Click Right Button MOVE Hold down Right Button + Drag and Drop DELETE Alt + Click Right Button"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"simulator/vehicle_door_simulator/","title":"vehicle_door_simulator","text":""},{"location":"simulator/vehicle_door_simulator/#vehicle_door_simulator","title":"vehicle_door_simulator","text":"

This package is for testing operations on vehicle devices such as doors.

"},{"location":"system/autoware_component_monitor/","title":"autoware_component_monitor","text":""},{"location":"system/autoware_component_monitor/#autoware_component_monitor","title":"autoware_component_monitor","text":"

The autoware_component_monitor package allows monitoring the system usage of component containers. The composable node inside the package is attached to a component container and publishes the CPU and memory usage of the container.

"},{"location":"system/autoware_component_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/autoware_component_monitor/#input","title":"Input","text":"

None.

"},{"location":"system/autoware_component_monitor/#output","title":"Output","text":"Name Type Description ~/component_system_usage autoware_internal_msgs::msg::ResourceUsageReport CPU, Memory usage etc."},{"location":"system/autoware_component_monitor/#parameters","title":"Parameters","text":""},{"location":"system/autoware_component_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range publish_rate float Publish rate in Hz 5 N/A"},{"location":"system/autoware_component_monitor/#how-to-use","title":"How to use","text":"

Add it as a composable node in your launch file:

<launch>\n<group>\n<push-ros-namespace namespace=\"your_namespace\"/>\n...\n\n    <load_composable_node target=\"$(var container_name)\">\n<composable_node pkg=\"autoware_component_monitor\"\nplugin=\"autoware::component_monitor::ComponentMonitor\"\nname=\"component_monitor\">\n<param from=\"$(find-pkg-share autoware_component_monitor)/config/component_monitor.param.yaml\"/>\n</composable_node>\n</load_composable_node>\n\n...\n  </group>\n</launch>\n
"},{"location":"system/autoware_component_monitor/#quick-testing","title":"Quick testing","text":"

You can test the package by running the following command:

ros2 component load <container_name> autoware_component_monitor autoware::component_monitor::ComponentMonitor -p publish_rate:=10.0 --node-namespace <namespace>\n\n# Example usage\nros2 component load /pointcloud_container autoware_component_monitor autoware::component_monitor::ComponentMonitor -p publish_rate:=10.0 --node-namespace /pointcloud_container\n
"},{"location":"system/autoware_component_monitor/#how-it-works","title":"How it works","text":"

The package uses the top command under the hood. The command top -b -n 1 -E k -p PID is run at 10 Hz to get the system usage of the process.

Here is a sample output:

top - 13:57:26 up  3:14,  1 user,  load average: 1,09, 1,10, 1,04\nTasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie\n%Cpu(s):  0,0 us,  0,8 sy,  0,0 ni, 99,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st\nKiB Mem : 65532208 total, 35117428 free, 17669824 used, 12744956 buff/cache\nKiB Swap: 39062524 total, 39062524 free,        0 used. 45520816 avail Mem\n\n    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND\n   3352 meb       20   0 2905940   1,2g  39292 S   0,0   2,0  23:24.01 awesome\n

We get the 5th and 8th fields from the last line, which are RES and %CPU, respectively.
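
For illustration, this kind of sampling and parsing could be sketched as follows (the actual node is implemented in C++; the locale handling is an assumption based on the comma-formatted sample above):

import subprocess\n\ndef sample_usage(pid):\n    \"\"\"Run top once for the given PID and return (RES, %CPU).\"\"\"\n    out = subprocess.run(\n        [\"top\", \"-b\", \"-n\", \"1\", \"-E\", \"k\", \"-p\", str(pid)],\n        capture_output=True, text=True, check=True,\n    ).stdout\n    fields = out.strip().splitlines()[-1].split()\n    res, cpu = fields[5], fields[8]  # 0-indexed 5th and 8th fields: RES, %CPU\n    # Some locales print a comma as the decimal separator (as in the sample above).\n    return res, float(cpu.replace(\",\", \".\"))\n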

"},{"location":"system/autoware_default_adapi/","title":"autoware_default_adapi","text":""},{"location":"system/autoware_default_adapi/#autoware_default_adapi","title":"autoware_default_adapi","text":""},{"location":"system/autoware_default_adapi/#notes","title":"Notes","text":"

Components that relay services must be executed by the Multi-Threaded Executor.

"},{"location":"system/autoware_default_adapi/#features","title":"Features","text":"

This package is a default implementation of the AD API.

"},{"location":"system/autoware_default_adapi/#web-server-script","title":"Web server script","text":"

This is a sample that calls the API using HTTP.

"},{"location":"system/autoware_default_adapi/#guide-message-script","title":"Guide message script","text":"

This is a debug script to check the conditions for transition to autonomous mode.

$ ros2 run autoware_default_adapi guide.py\n\nThe vehicle pose is not estimated. Please set an initial pose or check GNSS.\nThe route is not set. Please set a goal pose.\nThe topic rate error is detected. Please check [control,planning] components.\nThe vehicle is ready. Please change the operation mode to autonomous.\nThe vehicle is driving autonomously.\nThe vehicle has reached the goal of the route. Please reset a route.\n
"},{"location":"system/autoware_default_adapi/document/autoware-state/","title":"Autoware state compatibility","text":""},{"location":"system/autoware_default_adapi/document/autoware-state/#autoware-state-compatibility","title":"Autoware state compatibility","text":""},{"location":"system/autoware_default_adapi/document/autoware-state/#overview","title":"Overview","text":"

Since /autoware/state was so widely used, this package creates it from the states of the AD API for backward compatibility. The diagnostic checks that ad_service_state_monitor used to perform have been replaced by component_state_monitor. The service /autoware/shutdown, which changes the autoware state to finalizing, is also supported for compatibility.

"},{"location":"system/autoware_default_adapi/document/autoware-state/#conversion","title":"Conversion","text":"

This is the correspondence between AD API states and autoware states. The launch state is the data that this node holds internally.

"},{"location":"system/autoware_default_adapi/document/fail-safe/","title":"Fail-safe API","text":""},{"location":"system/autoware_default_adapi/document/fail-safe/#fail-safe-api","title":"Fail-safe API","text":""},{"location":"system/autoware_default_adapi/document/fail-safe/#overview","title":"Overview","text":"

The fail-safe API simply relays the MRM state. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/interface/","title":"Interface API","text":""},{"location":"system/autoware_default_adapi/document/interface/#interface-api","title":"Interface API","text":""},{"location":"system/autoware_default_adapi/document/interface/#overview","title":"Overview","text":"

The interface API simply returns a version number. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/localization/","title":"Localization API","text":""},{"location":"system/autoware_default_adapi/document/localization/#localization-api","title":"Localization API","text":""},{"location":"system/autoware_default_adapi/document/localization/#overview","title":"Overview","text":"

Unify the localization initialization method to the service. The topic /initialpose from rviz is now only subscribed to by the adapter node and converted to an API call. This API call is forwarded to the pose initializer node so that it can centralize the state of pose initialization. For other nodes that require the initial pose, the pose initializer node publishes it as /initialpose3d. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/motion/","title":"Motion API","text":""},{"location":"system/autoware_default_adapi/document/motion/#motion-api","title":"Motion API","text":""},{"location":"system/autoware_default_adapi/document/motion/#overview","title":"Overview","text":"

Provides a hook for when the vehicle starts. It is typically used for announcements that call attention to the surroundings. A pause function is added to the vehicle_cmd_gate, and the API controls it based on whether the vehicle is stopped and whether a start has been requested. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/motion/#states","title":"States","text":"

The implementation has more detailed state transitions to manage pause state synchronization. The correspondence with the AD API state is as follows.

"},{"location":"system/autoware_default_adapi/document/operation-mode/","title":"Operation mode API","text":""},{"location":"system/autoware_default_adapi/document/operation-mode/#operation-mode-api","title":"Operation mode API","text":""},{"location":"system/autoware_default_adapi/document/operation-mode/#overview","title":"Overview","text":"

Introduce the operation mode. It handles autoware engage, gate_mode, external_cmd_selector, and control_mode abstractly. When the mode is changed, it enters an in-transition state, and if the transition completion condition for that mode is not satisfied, it is returned to the previous mode. Also, currently, the condition for mode change is only WaitingForEngage in /autoware/state, and the engage state is shared between modes. After introducing the operation mode, each mode will have a transition available flag. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/operation-mode/#states","title":"States","text":"

The operation mode has the following state transitions. Disabling autoware control, or changing the operation mode while autoware control is disabled, takes effect immediately. Otherwise, enabling autoware control, or changing the operation mode while autoware control is enabled, moves the state to a transition state. If the mode change completion condition is not satisfied within the timeout in the transition state, it returns to the previous mode.

"},{"location":"system/autoware_default_adapi/document/operation-mode/#compatibility","title":"Compatibility","text":"

Ideally, vehicle_cmd_gate and external_cmd_selector should be merged so that the operation mode can be handled directly. However, currently the operation mode transition manager performs the following conversions to match the implementation. The transition manager monitors each topic in the previous interface and synchronizes the operation mode when it changes. When the operation mode is changed with the new interface, the transition manager disables synchronization and changes the operation mode using the previous interface.

"},{"location":"system/autoware_default_adapi/document/routing/","title":"Routing API","text":""},{"location":"system/autoware_default_adapi/document/routing/#routing-api","title":"Routing API","text":""},{"location":"system/autoware_default_adapi/document/routing/#overview","title":"Overview","text":"

Unify the route setting method to the service. This API supports two waypoint formats: poses and lanelet segments. The goal and checkpoint topics from rviz are only subscribed to by the adapter node and converted to API calls. These API calls are forwarded to the mission planner node so that it can centralize the state of routing. For other nodes that require the route, the mission planner node publishes it as /planning/mission_planning/route. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_processing_time_checker/","title":"Processing Time Checker","text":""},{"location":"system/autoware_processing_time_checker/#macro-rendering-error","title":"Macro Rendering Error","text":"

File: system/autoware_processing_time_checker/README.md

KeyError: 'default'

"},{"location":"system/bluetooth_monitor/","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#bluetooth_monitor","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#description","title":"Description","text":"

This node monitors the Bluetooth connection to a wireless device by using L2ping. L2ping generates a PING echo command on the Bluetooth L2CAP layer, and it is able to receive and check the echo response from the wireless device.

"},{"location":"system/bluetooth_monitor/#block-diagram","title":"Block diagram","text":"

L2ping is only allowed for the root user by default, so this package provides the following approach to minimize security risks as much as possible:

"},{"location":"system/bluetooth_monitor/#output","title":"Output","text":""},{"location":"system/bluetooth_monitor/#bluetooth_monitor-bluetooth_connection","title":"bluetooth_monitor: bluetooth_connection","text":"

[summary]

level message OK OK WARN RTT warning ERROR Lost Function error

[values]

key value (example) Device [0-9]: Status OK / RTT warning / Verify error / Lost / Ping rejected / Function error Device [0-9]: Name Wireless Controller Device [0-9]: Manufacturer MediaTek, Inc. Device [0-9]: Address AA:BB:CC:DD:EE:FF Device [0-9]: RTT 0.00ms key (example) value (example) Device [0-9]: connect No such file or directory"},{"location":"system/bluetooth_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range addresses array Bluetooth addresses of the device to monitor ['4C:B9:9B:6E:7F:9A'] N/A port integer Port number to connect to L2ping service on the host 7640 N/A timeout integer Time in seconds to wait for a response from the device 5 N/A rtt_warn float Time in seconds to warn if the round trip time is greater than this value 0.1 N/A "},{"location":"system/bluetooth_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Assign the capability to l2ping_service, since L2ping requires the cap_net_raw+eip capability.

    sudo setcap 'cap_net_raw+eip' ./build/bluetooth_monitor/l2ping_service\n
  2. Run l2ping_service and bluetooth_monitor.

    ./build/bluetooth_monitor/l2ping_service\nros2 launch bluetooth_monitor bluetooth_monitor.launch.xml\n
"},{"location":"system/bluetooth_monitor/#known-limitations-and-issues","title":"Known limitations and issues","text":"

None.

"},{"location":"system/component_state_monitor/","title":"component_state_monitor","text":""},{"location":"system/component_state_monitor/#component_state_monitor","title":"component_state_monitor","text":"

The component state monitor checks the state of each component using the topic state monitor. This is an implementation for backward compatibility with the AD service state monitor. It will be replaced in the future by a diagnostics tree.

"},{"location":"system/default_ad_api_helpers/ad_api_adaptors/","title":"ad_api_adaptors","text":""},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#macro-rendering-error","title":"Macro Rendering Error","text":"

File: system/default_ad_api_helpers/ad_api_adaptors/README.md

FileNotFoundError: [Errno 2] No such file or directory: '/system/default_ad_api_helpers/ad_api_adaptors/schema/ad_api_adaptors.schema.json'

"},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer_1","title":"automatic_pose_initializer","text":"

This node calls the localization initialize API when the localization initialization state is uninitialized. Since the API uses the GNSS pose when no pose is specified, initialization using GNSS can be performed automatically.

Interface Local Name Global Name Description Subscription - /api/localization/initialization_state The localization initialization state API. Client - /api/localization/initialize The localization initialize API."},{"location":"system/diagnostic_graph_aggregator/","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#diagnostic_graph_aggregator","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#overview","title":"Overview","text":"

The diagnostic graph aggregator node subscribes to the diagnostic array and publishes an aggregated diagnostic graph. As shown in the diagram below, this node introduces extra diagnostic statuses for intermediate functional units.

"},{"location":"system/diagnostic_graph_aggregator/#diagnostic-graph-structures","title":"Diagnostic graph structures","text":"

The diagnostic graph is actually a set of fault tree analyses (FTA), one for each operation mode of Autoware. Since the status of the same node may be referenced by multiple nodes, the overall structure is a directed acyclic graph (DAG). Each node in the diagnostic graph represents the diagnostic status of a specific functional unit, including the input diagnostics. So we define this as a \"unit\", and call a unit corresponding to an input diagnostic a \"diag unit\" and the others \"node units\".

Every unit has an error level that is the same as DiagnosticStatus, a unit type, and optionally a unit path. In addition, every diag unit has a message, a hardware_id, and values that are the same as DiagnosticStatus. The unit type represents how the unit status is calculated, such as AND or OR. The unit path is any unique string that represents the functionality of the unit.

NOTE: This feature is currently under development. The diagnostic graph also supports \"link\" because there are cases where connections between units have an additional status. For example, it is natural that many functional units will have an error status until initialization is complete.

"},{"location":"system/diagnostic_graph_aggregator/#operation-mode-availability","title":"Operation mode availability","text":"

For MRM, this node publishes the status of the top-level functional units in a dedicated message. Therefore, the diagnostic graph must contain functional units with the following names. This feature breaks the generality of the graph and may be changed to a plugin or another node in the future.

"},{"location":"system/diagnostic_graph_aggregator/#interfaces","title":"Interfaces","text":"Interface Type Interface Name Data Type Description subscription /diagnostics diagnostic_msgs/msg/DiagnosticArray Diagnostics input. publisher /diagnostics_graph/unknowns diagnostic_msgs/msg/DiagnosticArray Diagnostics not included in graph. publisher /diagnostics_graph/struct tier4_system_msgs/msg/DiagGraphStruct Diagnostic graph (static part). publisher /diagnostics_graph/status tier4_system_msgs/msg/DiagGraphStatus Diagnostic graph (dynamic part). publisher /system/operation_mode/availability tier4_system_msgs/msg/OperationModeAvailability Operation mode availability."},{"location":"system/diagnostic_graph_aggregator/#parameters","title":"Parameters","text":"Parameter Name Data Type Description graph_file string Path of the config file. rate double Rate of aggregation and topic publication. input_qos_depth uint QoS depth of input array topic. graph_qos_depth uint QoS depth of output graph topic. use_operation_mode_availability bool Use operation mode availability publisher."},{"location":"system/diagnostic_graph_aggregator/#examples","title":"Examples","text":"

This is an example of a diagnostic graph configuration. The configuration can be split into multiple files.

ros2 launch diagnostic_graph_aggregator example-main.launch.xml\n

You can reuse the graph by making partial edits. For example, disable hardware checks for simulation.

ros2 launch diagnostic_graph_aggregator example-edit.launch.xml\n
"},{"location":"system/diagnostic_graph_aggregator/#debug-tools","title":"Debug tools","text":""},{"location":"system/diagnostic_graph_aggregator/#graph-file-format","title":"Graph file format","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/","title":"Edit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/#edit","title":"Edit","text":"

The edit is a base object that edits the existing diagnostic graph. Any derived object can be used where an edit object is required.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/#format","title":"Format","text":"Name Type Required Description type string yes The string indicating the type of derived object."},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/#derived-objects","title":"Derived objects","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/graph/","title":"Graph","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/graph/#graph","title":"Graph","text":"

The graph object is the top level structure that makes up the configuration file.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/graph/#format","title":"Format","text":"Name Type Required Description files list no List of path objects for importing subgraphs. units list no List of unit objects that make up the graph. edits list no List of edit objects to partially edit the graph."},{"location":"system/diagnostic_graph_aggregator/doc/format/path/","title":"Path","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#path","title":"Path","text":"

The path object specifies the file path of the subgraph to be imported. The structure of the subgraph file should be a graph object.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#format","title":"Format","text":"Name Type Required Description path string yes The file path of the subgraph."},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#substitutions","title":"Substitutions","text":"

File paths can contain substitutions like those in ROS 2 launch. The supported substitutions are as follows.

Substitution Description $(dirname) The path of this file directory. $(find-pkg-share <package>) The path of the package."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/","title":"Unit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#unit","title":"Unit","text":"

The unit is a base object that makes up the diagnostic graph. Any derived object can be used where a unit object is required.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#format","title":"Format","text":"Name Type Required Description type string yes The string indicating the type of derived object. path string no Any string to reference from other units."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#derived-objects","title":"Derived objects","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/remove/","title":"Remove","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/remove/#remove","title":"Remove","text":"

The remove object is an edit that removes other units.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/remove/#format","title":"Format","text":"Name Type Required Description type string yes Specify remove when using this object. path string yes The path of the unit to remove."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/","title":"And","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/#and","title":"And","text":"

The and object is a unit that is evaluated as the maximum error level of the input units. Note that the error level stale is treated as error (see the sketch below).
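
For illustration, the evaluation rule can be sketched like this (the or unit, described on its own page, uses the minimum instead; the level constants follow diagnostic_msgs/msg/DiagnosticStatus):

# Error levels as in diagnostic_msgs/msg/DiagnosticStatus.\nOK, WARN, ERROR, STALE = 0, 1, 2, 3\n\ndef and_level(input_levels):\n    \"\"\"Maximum error level of the inputs, with stale treated as error.\"\"\"\n    return max(min(level, ERROR) for level in input_levels)\n\ndef or_level(input_levels):\n    \"\"\"Minimum error level of the inputs, with stale treated as error.\"\"\"\n    return min(min(level, ERROR) for level in input_levels)\n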

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/#format","title":"Format","text":"Name Type Required Description type string yes Specify and or short-circuit-and when using this object. list list[unit] yes List of input unit objects."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/#short-circuit-evaluation","title":"Short-circuit evaluation","text":"

Warning

The short-circuit-and is a work in progress (WIP).

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/","title":"Constant","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/#constant","title":"Constant","text":"

The constant object is a unit with a fixed error level.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/#format","title":"Format","text":"Name Type Required Description type string yes Specify error level when using this object."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/#error-levels","title":"Error levels","text":"

The supported error levels are as follows.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/diag/","title":"Diag","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/diag/#diag","title":"Diag","text":"

The diag object is a unit that refers to a specific status within the source diagnostics.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/diag/#format","title":"Format","text":"Name Type Required Description type string yes Specify diag when using this object. diag string yes The name of the diagnostic status."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/link/","title":"Link","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/link/#link","title":"Link","text":"

The link object is a unit that refers to another unit.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/link/#format","title":"Format","text":"Name Type Required Description type string yes Specify link when using this object. link string yes The path of the unit to reference."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/or/","title":"Or","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/or/#or","title":"Or","text":"

The or object is a unit that is evaluated as the minimum error level of the input units. Note that the error level stale is treated as error.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/or/#format","title":"Format","text":"Name Type Required Description type string yes Specify or when using this object. list list[unit] yes List of input unit objects."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/","title":"Constant","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/#constant","title":"Constant","text":"

Warning

This object is under development. It may be removed in the future.

The remapping object is a unit that converts error levels.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/#format","title":"Format","text":"Name Type Required Description type string yes Specify remapping type when using this object. item unit yes Input unit object."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/#remapping-types","title":"Remapping types","text":"

The supported remapping types are as follows.

"},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/","title":"Tree tool","text":""},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/#tree-tool","title":"Tree tool","text":"

This tool displays the graph structure of the configuration file in tree format.

"},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/#usage","title":"Usage","text":"
ros2 run diagnostic_graph_aggregator tree <graph-config-path>\n
"},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/#examples","title":"Examples","text":"
ros2 run diagnostic_graph_aggregator tree '$(find-pkg-share diagnostic_graph_aggregator)/example/graph/main.yaml'\n
===== Top-level trees ============================\n- /autoware/modes/autonomous (and)\n    - /functions/pose_estimation (and)\n    - /functions/obstacle_detection (or)\n- /autoware/modes/local (and)\n    - /external/joystick_command (diag)\n- /autoware/modes/remote (and)\n    - /external/remote_command (diag)\n- /autoware/modes/comfortable_stop (and)\n    - /functions/obstacle_detection (or)\n- /autoware/modes/pull_over (and)\n    - /functions/pose_estimation (and)\n    - /functions/obstacle_detection (or)\n===== Subtrees ===================================\n- /functions/pose_estimation (and)\n    - /sensing/lidars/top (diag)\n- /functions/obstacle_detection (or)\n    - /sensing/lidars/front (diag)\n    - /sensing/radars/front (diag)\n===== Isolated units =============================\n- /autoware/modes/stop (ok)\n- /autoware/modes/emergency_stop (ok)\n
"},{"location":"system/diagnostic_graph_utils/","title":"diagnostic_graph_utils","text":""},{"location":"system/diagnostic_graph_utils/#diagnostic_graph_utils","title":"diagnostic_graph_utils","text":"

This package is a utility for the diagnostic graph published by diagnostic_graph_aggregator.

"},{"location":"system/diagnostic_graph_utils/#ros-node","title":"ROS node","text":""},{"location":"system/diagnostic_graph_utils/#c-library","title":"C++ library","text":""},{"location":"system/diagnostic_graph_utils/doc/node/converter/","title":"Converter tool","text":""},{"location":"system/diagnostic_graph_utils/doc/node/converter/#converter-tool","title":"Converter tool","text":"

This tool converts /diagnostics_graph to /diagnostics_array so it can be read by tools such as rqt_runtime_monitor.

"},{"location":"system/diagnostic_graph_utils/doc/node/converter/#usage","title":"Usage","text":"
ros2 run diagnostic_graph_utils converter_node\n
"},{"location":"system/diagnostic_graph_utils/doc/node/converter/#examples","title":"Examples","text":"

Terminal 1:

ros2 launch diagnostic_graph_aggregator example-main.launch.xml\n

Terminal 2:

ros2 run diagnostic_graph_utils converter_node\n

Terminal 3:

ros2 run rqt_runtime_monitor rqt_runtime_monitor --ros-args -r diagnostics:=diagnostics_array\n

"},{"location":"system/diagnostic_graph_utils/doc/node/dump/","title":"Dump tool","text":""},{"location":"system/diagnostic_graph_utils/doc/node/dump/#dump-tool","title":"Dump tool","text":"

This tool displays /diagnostics_graph in table format.

"},{"location":"system/diagnostic_graph_utils/doc/node/dump/#usage","title":"Usage","text":"
ros2 run diagnostic_graph_utils dump_node\n
"},{"location":"system/diagnostic_graph_utils/doc/node/dump/#examples","title":"Examples","text":"

Terminal 1:

ros2 launch diagnostic_graph_aggregator example-main.launch.xml\n

Terminal 2:

ros2 run diagnostic_graph_utils dump_node\n

Output:

|----|-------|----------------------------------|------|----------|\n| No | Level | Path                             | Type | Children |\n|----|-------|----------------------------------|------|----------|\n|  1 | OK    | /autoware/modes/stop             | ok   |          |\n|  2 | ERROR | /autoware/modes/autonomous       | and  | 8 9      |\n|  3 | OK    | /autoware/modes/local            | and  | 13       |\n|  4 | OK    | /autoware/modes/remote           | and  | 14       |\n|  5 | OK    | /autoware/modes/emergency_stop   | ok   |          |\n|  6 | OK    | /autoware/modes/comfortable_stop | and  | 9        |\n|  7 | ERROR | /autoware/modes/pull_over        | and  | 8 9      |\n|  8 | ERROR | /functions/pose_estimation       | and  | 10       |\n|  9 | OK    | /functions/obstacle_detection    | or   | 11 12    |\n| 10 | ERROR | /sensing/lidars/top              | diag |          |\n| 11 | OK    | /sensing/lidars/front            | diag |          |\n| 12 | OK    | /sensing/radars/front            | diag |          |\n| 13 | OK    | /external/joystick_command       | diag |          |\n| 14 | OK    | /external/remote_command         | diag |          |\n
"},{"location":"system/dummy_diag_publisher/","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#dummy_diag_publisher","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#purpose","title":"Purpose","text":"

This package outputs dummy diagnostic data for debugging and development.

"},{"location":"system/dummy_diag_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_diag_publisher/#outputs","title":"Outputs","text":"Name Type Description /diagnostics diagnostic_msgs::msgs::DiagnosticArray Diagnostics outputs"},{"location":"system/dummy_diag_publisher/#parameters","title":"Parameters","text":""},{"location":"system/dummy_diag_publisher/#node-parameters","title":"Node Parameters","text":"

The parameter DIAGNOSTIC_NAME must be a name that exists in the parameter YAML file. If the parameter status is given from the command line, the parameter is_active is automatically set to true.

Name Type Default Value Explanation Reconfigurable update_rate int 10 Timer callback frequency [Hz] false DIAGNOSTIC_NAME.is_active bool true Force update or not true DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher true"},{"location":"system/dummy_diag_publisher/#yaml-format-for-dummy_diag_publisher","title":"YAML format for dummy_diag_publisher","text":"

If a value is not specified, the default value will be used.
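
A minimal sketch of this YAML, using the hypothetical diagnostic name velodyne_connection (the same name as in the reconfigure example below):

required_diags:\n  velodyne_connection:\n    is_active: true\n    status: \"OK\"\n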

Key Type Default Value Explanation required_diags.DIAGNOSTIC_NAME.is_active bool true Force update or not required_diags.DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher"},{"location":"system/dummy_diag_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/dummy_diag_publisher/#usage","title":"Usage","text":""},{"location":"system/dummy_diag_publisher/#launch","title":"launch","text":"
ros2 launch dummy_diag_publisher dummy_diag_publisher.launch.xml\n
"},{"location":"system/dummy_diag_publisher/#reconfigure","title":"reconfigure","text":"
ros2 param set /dummy_diag_publisher velodyne_connection.status \"Warn\"\nros2 param set /dummy_diag_publisher velodyne_connection.is_active true\n
"},{"location":"system/dummy_infrastructure/","title":"dummy_infrastructure","text":""},{"location":"system/dummy_infrastructure/#dummy_infrastructure","title":"dummy_infrastructure","text":"

This is a debug node for infrastructure communication.

"},{"location":"system/dummy_infrastructure/#usage","title":"Usage","text":"
ros2 launch dummy_infrastructure dummy_infrastructure.launch.xml\nros2 run rqt_reconfigure rqt_reconfigure\n
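
Parameters can also be set from the command line instead of rqt_reconfigure; the node name /dummy_infrastructure below is an assumption based on the package name and may differ depending on the launch configuration:

ros2 param set /dummy_infrastructure approval true\nros2 param set /dummy_infrastructure is_finalized false\n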
"},{"location":"system/dummy_infrastructure/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_infrastructure/#inputs","title":"Inputs","text":"Name Type Description ~/input/command_array tier4_v2x_msgs::msg::InfrastructureCommandArray Infrastructure command"},{"location":"system/dummy_infrastructure/#outputs","title":"Outputs","text":"Name Type Description ~/output/state_array tier4_v2x_msgs::msg::VirtualTrafficLightStateArray Virtual traffic light array"},{"location":"system/dummy_infrastructure/#parameters","title":"Parameters","text":""},{"location":"system/dummy_infrastructure/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period [Hz] use_first_command bool true Consider instrument id or not use_command_state bool false Consider command state or not instrument_id string `` Used as command id approval bool false set approval filed to ros param is_finalized bool false Stop at stop_line if finalization isn't completed"},{"location":"system/dummy_infrastructure/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/duplicated_node_checker/","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#duplicated-node-checker","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#purpose","title":"Purpose","text":"

This node monitors the ROS 2 environment and detects duplicated node names. The result is published as diagnostics.

"},{"location":"system/duplicated_node_checker/#standalone-startup","title":"Standalone Startup","text":"
ros2 launch duplicated_node_checker duplicated_node_checker.launch.xml\n
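
To see a detection in practice, one possible check (demo_nodes_cpp is only an example package, not part of this tool) is to start two nodes remapped to the same name and watch /diagnostics:

ros2 run demo_nodes_cpp talker --ros-args -r __node:=same_name   # terminal 1\nros2 run demo_nodes_cpp talker --ros-args -r __node:=same_name   # terminal 2\nros2 topic echo /diagnostics                                     # terminal 3\n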
"},{"location":"system/duplicated_node_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The duplication status types and the corresponding diagnostic status are as follows.

Duplication status Diagnostic status Description OK OK No duplication is detected Duplicated Detected ERROR Duplication is detected"},{"location":"system/duplicated_node_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/duplicated_node_checker/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/duplicated_node_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float The scanning and update frequency of the checker. 10 >2"},{"location":"system/duplicated_node_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_comfortable_stop_operator/","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#mrm_comfortable_stop_operator","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#purpose","title":"Purpose","text":"

MRM comfortable stop operator is a node that generates comfortable stop commands according to the comfortable stop MRM order.
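
For manual testing, the MRM order can be sent as a service call. This is a sketch that assumes the OperateMrm request holds a single boolean operate field and that the service is remapped to the system namespace as listed in the mrm_handler tables below; the same pattern applies to the emergency stop operator:

ros2 service call /system/mrm/comfortable_stop/operate tier4_system_msgs/srv/OperateMrm \"{operate: true}\"\n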

"},{"location":"system/mrm_comfortable_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_comfortable_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_comfortable_stop_operator/#input","title":"Input","text":"Name Type Description ~/input/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order"},{"location":"system/mrm_comfortable_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/velocity_limit tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/velocity_limit/clear tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command"},{"location":"system/mrm_comfortable_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_comfortable_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 10 Timer callback frequency [Hz]"},{"location":"system/mrm_comfortable_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation min_acceleration double -1.0 Minimum acceleration for comfortable stop [m/s^2] max_jerk double 0.3 Maximum jerk for comfortable stop [m/s^3] min_jerk double -0.3 Minimum jerk for comfortable stop [m/s^3]"},{"location":"system/mrm_comfortable_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_emergency_stop_operator/","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#mrm_emergency_stop_operator","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#purpose","title":"Purpose","text":"

MRM emergency stop operator is a node that generates emergency stop commands according to the emergency stop MRM order.

"},{"location":"system/mrm_emergency_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_emergency_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_emergency_stop_operator/#input","title":"Input","text":"Name Type Description ~/input/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order ~/input/control/control_cmd autoware_control_msgs::msg::Control Control command output from the last node of the control component. Used for the initial value of the emergency stop command."},{"location":"system/mrm_emergency_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/mrm/emergency_stop/control_cmd autoware_control_msgs::msg::Control Emergency stop command"},{"location":"system/mrm_emergency_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_emergency_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 30 Timer callback frequency [Hz]"},{"location":"system/mrm_emergency_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation target_acceleration double -2.5 Target acceleration for emergency stop [m/s^2] target_jerk double -1.5 Target jerk for emergency stop [m/s^3]"},{"location":"system/mrm_emergency_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_handler/","title":"mrm_handler","text":""},{"location":"system/mrm_handler/#mrm_handler","title":"mrm_handler","text":""},{"location":"system/mrm_handler/#purpose","title":"Purpose","text":"

MRM Handler is a node that selects a proper MRM based on the system failure state contained in OperationModeAvailability.
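
To check which MRM behavior the handler has selected at runtime, the published state can be inspected directly:

ros2 topic echo /system/fail_safe/mrm_state\n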

"},{"location":"system/mrm_handler/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_handler/#state-transitions","title":"State Transitions","text":""},{"location":"system/mrm_handler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_handler/#input","title":"Input","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry Used to decide whether vehicle is stopped or not /system/operation_mode/availability tier4_system_msgs::msg::OperationModeAvailability Used to select proper MRM from system available mrm behavior contained in operationModeAvailability /vehicle/status/control_mode autoware_vehicle_msgs::msg::ControlModeReport Used to check vehicle mode: autonomous or manual /system/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM emergency stop operation is available /system/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM comfortable stop operation is available /system/mrm/pull_over_manager/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM pull over operation is available /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState Used to check whether the current operation mode is AUTO or STOP."},{"location":"system/mrm_handler/#output","title":"Output","text":"Name Type Description /system/emergency/gear_cmd autoware_vehicle_msgs::msg::GearCommand Required to execute proper MRM (send gear cmd) /system/emergency/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand Required to execute proper MRM (send turn signal cmd) /system/fail_safe/mrm_state autoware_adapi_v1_msgs::msg::MrmState Inform MRM execution state and selected MRM behavior /system/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM emergency stop /system/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM comfortable stop /system/mrm/pull_over_manager/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM pull over"},{"location":"system/mrm_handler/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate integer Timer callback period. 10 N/A timeout_operation_mode_availability float If the input operation_mode_availability topic cannot be received for more than timeout_operation_mode_availability, vehicle will make an emergency stop. 0.5 N/A timeout_call_mrm_behavior float If a service is requested to the mrm operator and the response is not returned by timeout_call_mrm_behavior, the timeout occurs. 0.01 N/A timeout_cancel_mrm_behavior float If a service is requested to the mrm operator and the response is not returned by timeout_cancel_mrm_behavior, the timeout occurs. 0.01 N/A use_emergency_holding boolean If this parameter is true, the handler does not recover to the NORMAL state. False N/A timeout_emergency_recovery float If the duration of the EMERGENCY state is longer than timeout_emergency_recovery, it does not recover to the NORMAL state. 5.0 N/A use_parking_after_stopped boolean If this parameter is true, it will publish PARKING shift command. false N/A use_pull_over boolean If this parameter is true, operate pull over when latent faults occur. false N/A use_comfortable_stop boolean If this parameter is true, operate comfortable stop when latent faults occur. 
false N/A turning_hazard_on.emergency boolean If this parameter is true, hazard lamps will be turned on during emergency state. true N/A"},{"location":"system/mrm_handler/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/system_diagnostic_monitor/","title":"system_diagnostic_monitor","text":""},{"location":"system/system_diagnostic_monitor/#system_diagnostic_monitor","title":"system_diagnostic_monitor","text":"

This package contains default diagnostic graph configurations and scripts for system integration.

"},{"location":"system/system_diagnostic_monitor/#configs","title":"Configs","text":"Name Description autoware-main.yaml Diagnostic graphs for basic monitoring of Autoware. autoware-psim.yaml Diagnostic graph with some units disabled for the simulator."},{"location":"system/system_diagnostic_monitor/#scripts","title":"Scripts","text":"Name Description component_state_diagnostics Node that converts component states to diagnostics."},{"location":"system/system_monitor/","title":"System Monitor for Autoware","text":""},{"location":"system/system_monitor/#system-monitor-for-autoware","title":"System Monitor for Autoware","text":"

Further improvement of system monitor functionality for Autoware.

"},{"location":"system/system_monitor/#description","title":"Description","text":"

This package provides the following nodes for system monitoring:

"},{"location":"system/system_monitor/#supported-architecture","title":"Supported architecture","text":""},{"location":"system/system_monitor/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/#how-to-use","title":"How to use","text":"

Use colcon build and launch in the same way as other packages.

colcon build\nsource install/setup.bash\nros2 launch system_monitor system_monitor.launch.xml\n

The CPU and GPU monitoring methods differ depending on the platform. CMake automatically chooses the sources to build according to the build environment. For example, if you build this package on an Intel platform, the CPU monitor and GPU monitor for Intel platforms are built.

"},{"location":"system/system_monitor/#ros-topics-published-by-system-monitor","title":"ROS topics published by system monitor","text":"

Every topic is published at 1-minute intervals.

[Usage] \u2713: Supported, -: Not supported

Node Message Intel arm64(tegra) arm64(raspi) Notes CPU Monitor CPU Temperature \u2713 \u2713 \u2713 CPU Usage \u2713 \u2713 \u2713 CPU Load Average \u2713 \u2713 \u2713 CPU Thermal Throttling \u2713 - \u2713 CPU Frequency \u2713 \u2713 \u2713 Notification of frequency only, normally error not generated. HDD Monitor HDD Temperature \u2713 \u2713 \u2713 HDD PowerOnHours \u2713 \u2713 \u2713 HDD TotalDataWritten \u2713 \u2713 \u2713 HDD RecoveredError \u2713 \u2713 \u2713 HDD Usage \u2713 \u2713 \u2713 HDD ReadDataRate \u2713 \u2713 \u2713 HDD WriteDataRate \u2713 \u2713 \u2713 HDD ReadIOPS \u2713 \u2713 \u2713 HDD WriteIOPS \u2713 \u2713 \u2713 HDD Connection \u2713 \u2713 \u2713 Memory Monitor Memory Usage \u2713 \u2713 \u2713 Net Monitor Network Connection \u2713 \u2713 \u2713 Network Usage \u2713 \u2713 \u2713 Notification of usage only, normally error not generated. Network CRC Error \u2713 \u2713 \u2713 Warning occurs when the number of CRC errors in the period reaches the threshold value. The number of CRC errors that occur is the same as the value that can be confirmed with the ip command. IP Packet Reassembles Failed \u2713 \u2713 \u2713 UDP Buf Errors \u2713 \u2713 \u2713 NTP Monitor NTP Offset \u2713 \u2713 \u2713 Process Monitor Tasks Summary \u2713 \u2713 \u2713 High-load Proc[0-9] \u2713 \u2713 \u2713 High-mem Proc[0-9] \u2713 \u2713 \u2713 GPU Monitor GPU Temperature \u2713 \u2713 - GPU Usage \u2713 \u2713 - GPU Memory Usage \u2713 - - GPU Thermal Throttling \u2713 - - GPU Frequency \u2713 \u2713 - For Intel platform, monitor whether current GPU clock is supported by the GPU. Voltage Monitor CMOS Battery Status \u2713 - - Battery Health for RTC and BIOS -"},{"location":"system/system_monitor/#ros-parameters","title":"ROS parameters","text":"

See ROS parameters.

"},{"location":"system/system_monitor/#notes","title":"Notes","text":""},{"location":"system/system_monitor/#cpu-monitor-for-intel-platform","title":"CPU monitor for intel platform","text":"

Thermal throttling events can be monitored by reading the contents of an MSR (Model Specific Register). Since access to MSRs is allowed only for root by default, this package takes the following approach to minimize security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Create a user to run 'msr_reader'.

    sudo adduser <username>\n
  2. Load the kernel module 'msr' into your target system. The path '/dev/cpu/CPUNUM/msr' will appear.

    sudo modprobe msr\n
  3. Allow the user to access the MSR with read-only privilege using an Access Control List (ACL).

    sudo setfacl -m u:<username>:r /dev/cpu/*/msr\n
  4. Assign capability to 'msr_reader' since msr kernel module requires rawio capability.

    sudo setcap cap_sys_rawio=ep install/system_monitor/lib/system_monitor/msr_reader\n
  5. Run 'msr_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/msr_reader\n
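
As an optional sanity check (not part of the package itself), you can confirm that msr_reader is listening on its default port:

ss -ltn | grep 7634\n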
"},{"location":"system/system_monitor/#see-also","title":"See also","text":"

msr_reader

"},{"location":"system/system_monitor/#hdd-monitor","title":"HDD Monitor","text":"

Generally, S.M.A.R.T. information is used to monitor HDD temperature and lifetime, and access to a disk device node is normally allowed only for the root user or the disk group. As with the CPU monitor, this package provides an approach that minimizes security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting_1","title":"Instructions before starting","text":"
  1. Create a user to run 'hdd_reader'.

    sudo adduser <username>\n
  2. Add user to the disk group.

    sudo usermod -a -G disk <username>\n
  3. Assign capabilities to 'hdd_reader', since the SCSI kernel module requires the rawio capability to send the ATA PASS-THROUGH (12) command and the NVMe kernel module requires the admin capability to send Admin Commands.

    sudo setcap 'cap_sys_rawio=ep cap_sys_admin=ep' install/system_monitor/lib/system_monitor/hdd_reader\n
  4. Run 'hdd_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/hdd_reader\n
"},{"location":"system/system_monitor/#see-also_1","title":"See also","text":"

hdd_reader

"},{"location":"system/system_monitor/#gpu-monitor-for-intel-platform","title":"GPU Monitor for intel platform","text":"

Currently, the GPU monitor for Intel platforms supports only NVIDIA GPUs whose information can be accessed through the NVML API.

You also need to install the CUDA libraries. For CUDA 10.0 installation instructions, see the NVIDIA CUDA Installation Guide for Linux.
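
As a quick way to confirm that the NVIDIA driver and NVML are working (nvidia-smi ships with the driver and uses NVML internally):

nvidia-smi --query-gpu=name,temperature.gpu,utilization.gpu --format=csv\n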

"},{"location":"system/system_monitor/#voltage-monitor-for-cmos-battery","title":"Voltage monitor for CMOS Battery","text":"

Some platforms have built-in batteries for the RTC and CMOS. This node determines the battery status from the result of executing cat /proc/driver/rtc. If lm-sensors is installed, its results can be used instead. However, the output of sensors varies depending on the chipset, so it is necessary to set a string that identifies the corresponding voltage, as well as the voltage thresholds for warning and error, for example, a warning when the voltage is less than 2.9 V and an error when it is less than 2.7 V. The output of sensors on the nct6106 chipset is as follows, where \"in7:\" is the voltage of the CMOS battery.

$ sensors\npch_cannonlake-virtual-0\nAdapter: Virtual device\ntemp1:        +42.0\u00b0C\n\nnct6106-isa-0a10\nAdapter: ISA adapter\nin0:           728.00 mV (min =  +0.00 V, max =  +1.74 V)\nin1:             1.01 V  (min =  +0.00 V, max =  +2.04 V)\nin2:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin3:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin4:             1.07 V  (min =  +0.00 V, max =  +2.04 V)\nin5:             1.05 V  (min =  +0.00 V, max =  +2.04 V)\nin6:             1.67 V  (min =  +0.00 V, max =  +2.04 V)\nin7:             3.06 V  (min =  +0.00 V, max =  +4.08 V)\nin8:             2.10 V  (min =  +0.00 V, max =  +4.08 V)\nfan1:          2789 RPM  (min =    0 RPM)\nfan2:             0 RPM  (min =    0 RPM)\n

The corresponding settings in voltage_monitor.param.yaml are as follows.

/**:\nros__parameters:\ncmos_battery_warn: 2.90\ncmos_battery_error: 2.70\ncmos_battery_label: \"in7:\"\n

The above values of 2.7 V and 2.9 V are examples; the appropriate thresholds may vary depending on the motherboard and chipset. However, if the voltage of the lithium battery drops below 2.7 V, replacing it is recommended. In the above example, the messages output to the /diagnostics topic are as follows. If the voltage is less than 2.9 V:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Low Battery\n

If the voltage is less than 2.7 V:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Battery Died\n

Otherwise:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: OK\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: OK\n

If the CMOS battery voltage drops below voltage_warn or voltage_error, a warning is issued. If the battery runs out, the RTC stops working when the power is turned off; however, the vehicle can still run, so this is not treated as an error. The vehicle stops when an error occurs, but there is no need to stop immediately in this case. The severity can be determined from the \"Low Battery\" or \"Battery Died\" value.

"},{"location":"system/system_monitor/#uml-diagrams","title":"UML diagrams","text":"

See Class diagrams. See Sequence diagrams.

"},{"location":"system/system_monitor/docs/class_diagrams/","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#class-diagrams","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/hdd_reader/","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#hdd_reader","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#name","title":"Name","text":"

hdd_reader - Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD

"},{"location":"system/system_monitor/docs/hdd_reader/#synopsis","title":"Synopsis","text":"

hdd_reader [OPTION]

"},{"location":"system/system_monitor/docs/hdd_reader/#description","title":"Description","text":"

Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD. This runs as a daemon process and listens to a TCP/IP port (7635 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/hdd_reader/#notes","title":"Notes","text":"

The 'hdd_reader' accesses only the minimal data needed to obtain the model number, serial number, HDD temperature, and HDD lifetime. This is an approach to limit its functionality; however, the functionality can be expanded in the future for further improvements and considerations.

"},{"location":"system/system_monitor/docs/hdd_reader/#ata","title":"[ATA]","text":"Purpose Name Length Model number, Serial number IDENTIFY DEVICE data 256 words(512 bytes) HDD temperature, life of HDD SMART READ DATA 256 words(512 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#nvme","title":"[NVMe]","text":"Purpose Name Length Model number, Serial number Identify Controller data structure 4096 bytes HDD temperature, life of HDD SMART / Health Information 36 Dword(144 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#operation-confirmed-drives","title":"Operation confirmed drives","text":""},{"location":"system/system_monitor/docs/msr_reader/","title":"msr_reader","text":""},{"location":"system/system_monitor/docs/msr_reader/#msr_reader","title":"msr_reader","text":""},{"location":"system/system_monitor/docs/msr_reader/#name","title":"Name","text":"

msr_reader - Read MSR register for monitoring thermal throttling event

"},{"location":"system/system_monitor/docs/msr_reader/#synopsis","title":"Synopsis","text":"

msr_reader [OPTION]

"},{"location":"system/system_monitor/docs/msr_reader/#description","title":"Description","text":"

Read MSR register for monitoring thermal throttling event. This runs as a daemon process and listens to a TCP/IP port (7634 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/msr_reader/#notes","title":"Notes","text":"

The 'msr_reader' accesses only the minimal data needed to detect thermal throttling events. This is an approach to limit its functionality; however, the functionality can be expanded in the future for further improvements and considerations.

Register Address Name Length 1B1H IA32_PACKAGE_THERM_STATUS 64bit

For details please see the documents below.

"},{"location":"system/system_monitor/docs/msr_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/docs/ros_parameters/","title":"ROS parameters","text":""},{"location":"system/system_monitor/docs/ros_parameters/#ros-parameters","title":"ROS parameters","text":""},{"location":"system/system_monitor/docs/ros_parameters/#cpu-monitor","title":"CPU Monitor","text":"

cpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when CPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when CPU temperature reaches a specified value or higher. usage_warn float %(1e-2) 0.90 Generates warning when CPU usage reaches a specified value or higher and last for usage_warn_count counts. usage_error float %(1e-2) 1.00 Generates error when CPU usage reaches a specified value or higher and last for usage_error_count counts. usage_warn_count int n/a 2 Generates warning when CPU usage reaches usage_warn value or higher and last for a specified counts. usage_error_count int n/a 2 Generates error when CPU usage reaches usage_error value or higher and last for a specified counts. load1_warn float %(1e-2) 0.90 Generates warning when load average 1min reaches a specified value or higher. load5_warn float %(1e-2) 0.80 Generates warning when load average 5min reaches a specified value or higher. msr_reader_port int n/a 7634 Port number to connect to msr_reader."},{"location":"system/system_monitor/docs/ros_parameters/#hdd-monitor","title":"HDD Monitor","text":"

hdd_monitor:

\u00a0\u00a0disks:

Name Type Unit Default Notes name string n/a none The disk name to monitor temperature. (e.g. /dev/sda) temp_attribute_id int n/a 0xC2 S.M.A.R.T attribute ID of temperature. temp_warn float DegC 55.0 Generates warning when HDD temperature reaches a specified value or higher. temp_error float DegC 70.0 Generates error when HDD temperature reaches a specified value or higher. power_on_hours_attribute_id int n/a 0x09 S.M.A.R.T attribute ID of power-on hours. power_on_hours_warn int Hour 3000000 Generates warning when HDD power-on hours reaches a specified value or higher. total_data_written_attribute_id int n/a 0xF1 S.M.A.R.T attribute ID of total data written. total_data_written_warn int depends on device 4915200 Generates warning when HDD total data written reaches a specified value or higher. total_data_written_safety_factor int %(1e-2) 0.05 Safety factor of HDD total data written. recovered_error_attribute_id int n/a 0xC3 S.M.A.R.T attribute ID of recovered error. recovered_error_warn int n/a 1 Generates warning when HDD recovered error reaches a specified value or higher. read_data_rate_warn float MB/s 360.0 Generates warning when HDD read data rate reaches a specified value or higher. write_data_rate_warn float MB/s 103.5 Generates warning when HDD write data rate reaches a specified value or higher. read_iops_warn float IOPS 63360.0 Generates warning when HDD read IOPS reaches a specified value or higher. write_iops_warn float IOPS 24120.0 Generates warning when HDD write IOPS reaches a specified value or higher.

hdd_monitor:

Name Type Unit Default Notes hdd_reader_port int n/a 7635 Port number to connect to hdd_reader. usage_warn float %(1e-2) 0.95 Generates warning when disk usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when disk usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#memory-monitor","title":"Memory Monitor","text":"

mem_monitor:

Name Type Unit Default Notes usage_warn float %(1e-2) 0.95 Generates warning when physical memory usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when physical memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#net-monitor","title":"Net Monitor","text":"

net_monitor:

Name Type Unit Default Notes devices list[string] n/a none The name of network interface to monitor. (e.g. eth0, * for all network interfaces) monitor_program string n/a greengrass program name to be monitored by nethogs name. crc_error_check_duration int sec 1 CRC error check duration. crc_error_count_threshold int n/a 1 Generates warning when count of CRC errors during CRC error check duration reaches a specified value or higher. reassembles_failed_check_duration int sec 1 IP packet reassembles failed check duration. reassembles_failed_check_count int n/a 1 Generates warning when count of IP packet reassembles failed during IP packet reassembles failed check duration reaches a specified value or higher. udp_buf_errors_check_duration int sec 1 UDP buf errors check duration. udp_buf_errors_check_count int n/a 1 Generates warning when count of UDP buf errors during udp_buf_errors_check_duration reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#ntp-monitor","title":"NTP Monitor","text":"

ntp_monitor:

Name Type Unit Default Notes server string n/a ntp.ubuntu.com The name of NTP server to synchronize date and time. (e.g. ntp.nict.jp for Japan) offset_warn float sec 0.1 Generates warning when NTP offset reaches a specified value or higher. (default is 100ms) offset_error float sec 5.0 Generates warning when NTP offset reaches a specified value or higher. (default is 5sec)"},{"location":"system/system_monitor/docs/ros_parameters/#process-monitor","title":"Process Monitor","text":"

process_monitor:

Name Type Unit Default Notes num_of_procs int n/a 5 The number of processes to generate High-load Proc[0-9] and High-mem Proc[0-9]."},{"location":"system/system_monitor/docs/ros_parameters/#gpu-monitor","title":"GPU Monitor","text":"

gpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when GPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when GPU temperature reaches a specified value or higher. gpu_usage_warn float %(1e-2) 0.90 Generates warning when GPU usage reaches a specified value or higher. gpu_usage_error float %(1e-2) 1.00 Generates error when GPU usage reaches a specified value or higher. memory_usage_warn float %(1e-2) 0.90 Generates warning when GPU memory usage reaches a specified value or higher. memory_usage_error float %(1e-2) 1.00 Generates error when GPU memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#voltage-monitor","title":"Voltage Monitor","text":"

voltage_monitor:

Name Type Unit Default Notes cmos_battery_warn float volt 2.9 Generates warning when voltage of CMOS Battery is lower. cmos_battery_error float volt 2.7 Generates error when voltage of CMOS Battery is lower. cmos_battery_label string n/a \"\" voltage string in sensors command outputs. if empty no voltage will be checked."},{"location":"system/system_monitor/docs/seq_diagrams/","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#sequence-diagrams","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#ros-topics-cpu-monitor","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-temperature","title":"CPU Temperature","text":"

/diagnostics/cpu_monitor: CPU Temperature

[summary]

level message OK OK

[values]

key (example) value (example) Package id 0, Core [0-9], thermal_zone[0-9] 50.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-usage","title":"CPU Usage","text":"

/diagnostics/cpu_monitor: CPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) CPU [all,0-9]: status OK / high load / very high load CPU [all,0-9]: usr 2.00% CPU [all,0-9]: nice 0.00% CPU [all,0-9]: sys 1.00% CPU [all,0-9]: idle 97.00%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-load-average","title":"CPU Load Average","text":"

/diagnostics/cpu_monitor: CPU Load Average

[summary]

level message OK OK WARN high load

[values]

key value (example) 1min 14.50% 5min 14.55% 15min 9.67%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-thermal-throttling","title":"CPU Thermal Throttling","text":"

Intel and raspi platform only. Tegra platform not supported.

/diagnostics/cpu_monitor: CPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values for intel platform]

key value (example) CPU [0-9]: Pkg Thermal Status OK / throttling

[values for raspi platform]

key value (example) status All clear / Currently throttled / Soft temperature limit active"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-frequency","title":"CPU Frequency","text":"

/diagnostics/cpu_monitor: CPU Frequency

[summary]

level message OK OK

[values]

key value (example) CPU [0-9]: clock 2879MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/","title":"ROS topics: GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_gpu_monitor/#ros-topics-gpu-monitor","title":"ROS topics: GPU Monitor","text":"

Intel and tegra platform only. Raspi platform not supported.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-temperature","title":"GPU Temperature","text":"

/diagnostics/gpu_monitor: GPU Temperature

[summary]

level message OK OK WARN warm ERROR hot

[values]

key (example) value (example) GeForce GTX 1650, thermal_zone[0-9] 46.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-usage","title":"GPU Usage","text":"

/diagnostics/gpu_monitor: GPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650, gpu.[0-9] GPU [0-9]: usage 19.0%

*key: gpu.[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-memory-usage","title":"GPU Memory Usage","text":"

Intel platform only. There is no separate GPU memory on Tegra; the CPU and GPU share the same memory.

/diagnostics/gpu_monitor: GPU Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: usage 13.0% GPU [0-9]: total 3G GPU [0-9]: used 1G GPU [0-9]: free 2G"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-thermal-throttling","title":"GPU Thermal Throttling","text":"

Intel platform only. Tegra platform not supported.

/diagnostics/gpu_monitor: GPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values]

key value (example) GPU [0-9]: status OK / throttling GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz GPU [0-9]: reasons GpuIdle / SwThermalSlowdown etc."},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-frequency","title":"GPU Frequency","text":"

/diagnostics/gpu_monitor: GPU Frequency

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#intel-platform","title":"Intel platform","text":"

[summary]

level message OK OK WARN unsupported clock

[values]

key value (example) GPU [0-9]: status OK / unsupported clock GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#tegra-platform","title":"Tegra platform","text":"

[summary]

level message OK OK

[values]

key (example) value (example) GPU 17000000.gv11b: clock 318 MHz"},{"location":"system/system_monitor/docs/topics_hdd_monitor/","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#ros-topics-hdd-monitor","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-temperature","title":"HDD Temperature","text":"

/diagnostics/hdd_monitor: HDD Temperature

[summary]

level message OK OK WARN hot ERROR critical hot

[values]

key value (example) HDD [0-9]: status OK / hot / critical hot HDD [0-9]: name /dev/nvme0 HDD [0-9]: model SAMSUNG MZVLB1T0HBLR-000L7 HDD [0-9]: serial S4EMNF0M820682 HDD [0-9]: temperature 37.0 DegC not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-poweronhours","title":"HDD PowerOnHours","text":"

/diagnostics/hdd_monitor: HDD PowerOnHours

[summary]

level message OK OK WARN lifetime limit

[values]

key value (example) HDD [0-9]: status OK / lifetime limit HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: power on hours 4834 Hours not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-totaldatawritten","title":"HDD TotalDataWritten","text":"

/diagnostics/hdd_monitor: HDD TotalDataWritten

[summary]

level message OK OK WARN warranty period

[values]

key value (example) HDD [0-9]: status OK / warranty period HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: total data written 146295330 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-recoverederror","title":"HDD RecoveredError","text":"

/diagnostics/hdd_monitor: HDD RecoveredError

[summary]

level message OK OK WARN high soft error rate

[values]

key value (example) HDD [0-9]: status OK / high soft error rate HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: recovered error 0 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-usage","title":"HDD Usage","text":"

/diagnostics/hdd_monitor: HDD Usage

[summary]

level message OK OK WARN low disk space ERROR very low disk space

[values]

key value (example) HDD [0-9]: status OK / low disk space / very low disk space HDD [0-9]: filesystem /dev/nvme0n1p4 HDD [0-9]: size 264G HDD [0-9]: used 172G HDD [0-9]: avail 749G HDD [0-9]: use 69% HDD [0-9]: mounted on /"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readdatarate","title":"HDD ReadDataRate","text":"

/diagnostics/hdd_monitor: HDD ReadDataRate

[summary]

level message OK OK WARN high data rate of read

[values]

key value (example) HDD [0-9]: status OK / high data rate of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of read 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writedatarate","title":"HDD WriteDataRate","text":"

/diagnostics/hdd_monitor: HDD WriteDataRate

[summary]

level message OK OK WARN high data rate of write

[values]

key value (example) HDD [0-9]: status OK / high data rate of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of write 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readiops","title":"HDD ReadIOPS","text":"

/diagnostics/hdd_monitor: HDD ReadIOPS

[summary]

level message OK OK WARN high IOPS of read

[values]

key value (example) HDD [0-9]: status OK / high IOPS of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of read 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writeiops","title":"HDD WriteIOPS","text":"

/diagnostics/hdd_monitor: HDD WriteIOPS

[summary]

level message OK OK WARN high IOPS of write

[values]

key value (example) HDD [0-9]: status OK / high IOPS of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of write 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-connection","title":"HDD Connection","text":"

/diagnostics/hdd_monitor: HDD Connection

[summary]

level message OK OK WARN not connected

[values]

key value (example) HDD [0-9]: status OK / not connected HDD [0-9]: name /dev/nvme0 HDD [0-9]: mount point /"},{"location":"system/system_monitor/docs/topics_mem_monitor/","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#ros-topics-memory-monitor","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#memory-usage","title":"Memory Usage","text":"

/diagnostics/mem_monitor: Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) Mem: usage 29.72% Mem: total 31.2G Mem: used 6.0G Mem: free 20.7G Mem: shared 2.9G Mem: buff/cache 4.5G Mem: available 21.9G Swap: total 2.0G Swap: used 218M Swap: free 1.8G Total: total 33.2G Total: used 6.2G Total: free 22.5G Total: used+ 9.1G"},{"location":"system/system_monitor/docs/topics_net_monitor/","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#ros-topics-net-monitor","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#network-connection","title":"Network Connection","text":"

/diagnostics/net_monitor: Network Connection

[summary]

level message OK OK WARN no such device

[values]

key value (example) Network [0-9]: status OK / no such device Network [0-9]: name wlp82s0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-usage","title":"Network Usage","text":"

/diagnostics/net_monitor: Network Usage

[summary]

level message OK OK

[values]

key value (example) Network [0-9]: status OK Network [0-9]: interface name wlp82s0 Network [0-9]: rx_usage 0.00% Network [0-9]: tx_usage 0.00% Network [0-9]: rx_traffic 0.00 MB/s Network [0-9]: tx_traffic 0.00 MB/s Network [0-9]: capacity 400.0 MB/s Network [0-9]: mtu 1500 Network [0-9]: rx_bytes 58455228 Network [0-9]: rx_errors 0 Network [0-9]: tx_bytes 11069136 Network [0-9]: tx_errors 0 Network [0-9]: collisions 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-traffic","title":"Network Traffic","text":"

/diagnostics/net_monitor: Network Traffic

[summary]

level message OK OK

[values when specified program is detected]

key value (example) nethogs [0-9]: program /lambda/greengrassSystemComponents/1384/999 nethogs [0-9]: sent (KB/Sec) 1.13574 nethogs [0-9]: received (KB/Sec) 0.261914

[values when error is occurring]

key value (example) error execve failed: No such file or directory"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-crc-error","title":"Network CRC Error","text":"

/diagnostics/net_monitor: Network CRC Error

[summary]

level message OK OK WARN CRC error

[values]

key value (example) Network [0-9]: interface name wlp82s0 Network [0-9]: total rx_crc_errors 0 Network [0-9]: rx_crc_errors per unit time 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#ip-packet-reassembles-failed","title":"IP Packet Reassembles Failed","text":"

/diagnostics/net_monitor: IP Packet Reassembles Failed

[summary]

level message OK OK WARN reassembles failed

[values]

key value (example) total packet reassembles failed 0 packet reassembles failed per unit time 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#udp-buf-errors","title":"UDP Buf Errors","text":"

/diagnostics/net_monitor: UDP Buf Errors

[summary]

level message OK OK WARN UDP buf errors

[values]

key value (example) total UDP rcv buf errors 0 UDP rcv buf errors per unit time 0 total UDP snd buf errors 0 UDP snd buf errors per unit time 0"},{"location":"system/system_monitor/docs/topics_ntp_monitor/","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ros-topics-ntp-monitor","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ntp-offset","title":"NTP Offset","text":"

/diagnostics/ntp_monitor: NTP Offset

[summary]

level message OK OK WARN high ERROR too high

[values]

key value (example) NTP Offset -0.013181 sec NTP Delay 0.053880 sec"},{"location":"system/system_monitor/docs/topics_process_monitor/","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#ros-topics-process-monitor","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#tasks-summary","title":"Tasks Summary","text":"

/diagnostics/process_monitor: Tasks Summary

[summary]

level message OK OK

[values]

key value (example) total 409 running 2 sleeping 321 stopped 0 zombie 0"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-load-proc0-9","title":"High-load Proc[0-9]","text":"

/diagnostics/process_monitor: High-load Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /usr/lib/firefox/firefox %CPU 37.5 %MEM 2.1 PID 14062 USER autoware PR 20 NI 0 VIRT 3461152 RES 669052 SHR 481208 S S TIME+ 23:57.49"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-mem-proc0-9","title":"High-mem Proc[0-9]","text":"

/diagnostics/process_monitor: High-mem Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /snap/multipass/1784/usr/bin/qemu-system-x86_64 %CPU 0 %MEM 2.5 PID 1565 USER root PR 20 NI 0 VIRT 3722320 RES 812432 SHR 20340 S S TIME+ 0:22.84"},{"location":"system/system_monitor/docs/topics_voltage_monitor/","title":"ROS topics: Voltage Monitor","text":""},{"location":"system/system_monitor/docs/topics_voltage_monitor/#ros-topics-voltage-monitor","title":"ROS topics: Voltage Monitor","text":"

\"CMOS Battery Status\" and \"CMOS battery voltage\" are exclusive. Only one or the other is generated. Which one is generated depends on the value of cmos_battery_label.

"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-status","title":"CMOS Battery Status","text":"

/diagnostics/voltage_monitor: CMOS Battery Status

[summary]

level message OK OK WARN Battery Dead

[values]

key (example) value (example) CMOS battery status OK / Battery Dead


"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-voltage","title":"CMOS Battery Voltage","text":"

/diagnostics/voltage_monitor: CMOS battery voltage

[summary]

level message OK OK WARN Low Battery WARN Battery Died

[values]

key value (example) CMOS battery voltage 3.06"},{"location":"system/system_monitor/docs/traffic_reader/","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#traffic_reader","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#name","title":"Name","text":"

traffic_reader - monitoring network traffic by process

"},{"location":"system/system_monitor/docs/traffic_reader/#synopsis","title":"Synopsis","text":"

traffic_reader [OPTION]

"},{"location":"system/system_monitor/docs/traffic_reader/#description","title":"Description","text":"

Monitoring network traffic by process. This runs as a daemon process and listens to a TCP/IP port (7636 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/traffic_reader/#notes","title":"Notes","text":"

The 'traffic_reader' requires the nethogs command.

"},{"location":"system/system_monitor/docs/traffic_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/topic_state_monitor/","title":"topic_state_monitor","text":""},{"location":"system/topic_state_monitor/#topic_state_monitor","title":"topic_state_monitor","text":""},{"location":"system/topic_state_monitor/#purpose","title":"Purpose","text":"

This node monitors an input topic for abnormalities such as timeouts and low frequency. The resulting topic status is published as diagnostics.
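
A standalone example is sketched below; the executable name topic_state_monitor and the parameter values are assumptions for illustration, since the node is normally started from a launch file with the parameters listed below:

ros2 run topic_state_monitor topic_state_monitor --ros-args -p topic:=/localization/kinematic_state -p topic_type:=nav_msgs/msg/Odometry -p diag_name:=kinematic_state_monitor\n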

"},{"location":"system/topic_state_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The types of topic status and the corresponding diagnostic status are as follows.

Topic status Diagnostic status Description OK OK The topic has no abnormalities NotReceived ERROR The topic has not been received yet WarnRate WARN The frequency of the topic is dropped ErrorRate ERROR The frequency of the topic is significantly dropped Timeout ERROR The topic subscription is stopped for a certain time"},{"location":"system/topic_state_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/topic_state_monitor/#input","title":"Input","text":"Name Type Description any name any type Subscribe target topic to monitor"},{"location":"system/topic_state_monitor/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/topic_state_monitor/#parameters","title":"Parameters","text":""},{"location":"system/topic_state_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description topic string - Name of target topic topic_type string - Type of target topic (used if the topic is not transform) frame_id string - Frame ID of transform parent (used if the topic is transform) child_frame_id string - Frame ID of transform child (used if the topic is transform) transient_local bool false QoS policy of topic subscription (Transient Local/Volatile) best_effort bool false QoS policy of topic subscription (Best Effort/Reliable) diag_name string - Name used for the diagnostics to publish update_rate double 10.0 Timer callback period [Hz]"},{"location":"system/topic_state_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description warn_rate double 0.5 If the topic rate is lower than this value, the topic status becomes WarnRate error_rate double 0.1 If the topic rate is lower than this value, the topic status becomes ErrorRate timeout double 1.0 If the topic subscription is stopped for more than this time [s], the topic status becomes Timeout window_size int 10 Window size of target topic for calculating frequency"},{"location":"system/topic_state_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/velodyne_monitor/","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#velodyne_monitor","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#purpose","title":"Purpose","text":"

This node monitors the status of Velodyne LiDARs. The status is published as diagnostics. Take care not to use these diagnostics to determine LiDAR errors; please read Assumptions / Known limits for the detailed reason.

"},{"location":"system/velodyne_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The status of Velodyne LiDAR can be retrieved from http://[ip_address]/cgi/{info, settings, status, diag}.json.
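
For example, the status can be fetched directly with curl (192.168.1.201 is the default ip_address from the parameters below):

curl -s http://192.168.1.201/cgi/diag.json\ncurl -s http://192.168.1.201/cgi/status.json\n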

The types of abnormal status and the corresponding diagnostic status are as follows.

Abnormal status Diagnostic status No abnormality OK Top board temperature is too cold ERROR Top board temperature is cold WARN Top board temperature is too hot ERROR Top board temperature is hot WARN Bottom board temperature is too cold ERROR Bottom board temperature is cold WARN Bottom board temperature is too hot ERROR Bottom board temperature is hot WARN Rpm(Rotations per minute) of the motor is too low ERROR Rpm(Rotations per minute) of the motor is low WARN Connection error (cannot get Velodyne LiDAR status) ERROR"},{"location":"system/velodyne_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/velodyne_monitor/#input","title":"Input","text":"

None

"},{"location":"system/velodyne_monitor/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/velodyne_monitor/#parameters","title":"Parameters","text":""},{"location":"system/velodyne_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description timeout double 0.5 Timeout for HTTP request to get Velodyne LiDAR status [s]"},{"location":"system/velodyne_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description ip_address string \"192.168.1.201\" IP address of target Velodyne LiDAR temp_cold_warn double -5.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes WARN [\u00b0C] temp_cold_error double -10.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes ERROR [\u00b0C] temp_hot_warn double 75.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes WARN [\u00b0C] temp_hot_error double 80.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes ERROR [\u00b0C] rpm_ratio_warn double 0.80 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes WARN rpm_ratio_error double 0.70 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes ERROR"},{"location":"system/velodyne_monitor/#config-files","title":"Config files","text":"

Config files for several Velodyne models are provided. The temp_*** parameters are set with reference to the operational temperature in each datasheet. The temp_hot_*** values of each model are set up to 20 \u00b0C above the upper operational temperature. VLP-16.param.yaml is used as the default argument because the VLP-16 has the lowest-spec temperature range.

Model Name Config name Operational Temperature [\u2103] VLP-16 VLP-16.param.yaml -10 to 60 VLP-32C VLP-32C.param.yaml -20 to 60 VLS-128 VLS-128.param.yaml -20 to 60 Velarray M1600 Velarray_M1600.param.yaml -40 to 85 HDL-32E HDL-32E.param.yaml -10 to 60"},{"location":"system/velodyne_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This node uses the http_client and requests the results via the GET method. It takes a few seconds to get the results, and a timeout exception is generated if the GET request does not succeed. This occurs frequently, and the diagnostics aggregator then outputs STALE. Therefore, we recommend not using these results to decide whether the LiDAR has an error, and only monitoring them to confirm the LiDAR status.

"},{"location":"tools/reaction_analyzer/","title":"Reaction Analyzer","text":""},{"location":"tools/reaction_analyzer/#reaction-analyzer","title":"Reaction Analyzer","text":""},{"location":"tools/reaction_analyzer/#description","title":"Description","text":"

The main purpose of the reaction analyzer package is to measure the reaction times of various nodes within a ROS-based autonomous driving simulation environment by subscribing to pre-determined topics. This tool is particularly useful for evaluating the performance of perception, planning, and control pipelines in response to dynamic changes in the environment, such as sudden obstacles. To measure both control outputs and perception outputs, the node is divided into two running modes (running_mode): planning_control and perception_planning.

"},{"location":"tools/reaction_analyzer/#planning-control-mode","title":"Planning Control Mode","text":"

In this mode, the reaction analyzer creates a dummy publisher for the PredictedObjects and PointCloud2 topics. At the beginning of the test, it publishes the initial position of the ego vehicle and the goal position to set up the test environment. Then, it spawns a sudden obstacle in front of the ego vehicle. After the obstacle is spawned, it starts to search for reaction messages from the planning and control nodes on the pre-determined topics. Once reactions are detected on all topics, it calculates the reaction times of the nodes and their statistics by comparing the reacted_times of each node with spawn_cmd_time, and it creates a CSV file to store the results.
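
Conceptually, the per-node reaction time is simply the difference between each reacted_time and spawn_cmd_time; a minimal illustrative sketch:

# Illustrative: per-node reaction time from reacted_times and spawn_cmd_time\ndef reaction_times(reacted_times, spawn_cmd_time):\n    return {node: t - spawn_cmd_time for node, t in reacted_times.items()}\n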

"},{"location":"tools/reaction_analyzer/#perception-planning-mode","title":"Perception Planning Mode","text":"

In this mode, the reaction analyzer reads rosbag files recorded from AWSIM and creates a topic publisher for each topic inside the rosbag to replay it. It reads two rosbag files: path_bag_without_object and path_bag_with_object. Firstly, it replays path_bag_without_object to set the initial position of the ego vehicle and the goal position. After spawn_time_after_init seconds, it replays path_bag_with_object to spawn a sudden obstacle in front of the ego vehicle. After the obstacle is spawned, it starts to search for reaction messages from the perception and planning nodes on the pre-determined topics. Once reactions are detected on all topics, it calculates the reaction times of the nodes and their statistics by comparing the reacted_times of each node with spawn_cmd_time, and it creates a CSV file to store the results.

"},{"location":"tools/reaction_analyzer/#point-cloud-publisher-type","title":"Point Cloud Publisher Type","text":"

To better analyze the Perception & Sensing pipeline, the reaction analyzer can publish the point cloud messages in three different ways: async_header_sync_publish, sync_header_sync_publish, or async_publish. (T is the period of the lidar's output.)

"},{"location":"tools/reaction_analyzer/#usage","title":"Usage","text":"

The common parameters you need to define for both running modes are output_file_path, test_iteration, and the reaction_chain list. output_file_path is the path where the results and statistics will be stored. test_iteration defines how many tests will be performed. The reaction_chain list is the list of pre-defined topics whose reaction times you want to measure.

IMPORTANT: Ensure the reaction_chain list is correctly defined:

"},{"location":"tools/reaction_analyzer/#prepared-test-environment","title":"Prepared Test Environment","text":""},{"location":"tools/reaction_analyzer/#planning-control-mode_1","title":"Planning Control Mode","text":"
ros2 launch reaction_analyzer reaction_analyzer.launch.xml running_mode:=planning_control vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit map_path:=[MAP_PATH]\n

After the command, the simple_planning_simulator and the reaction_analyzer are launched, and the test starts automatically. After the test is completed, the results are stored in the output_file_path you defined.

"},{"location":"tools/reaction_analyzer/#perception-planning-mode_1","title":"Perception Planning Mode","text":"
ros2 launch reaction_analyzer reaction_analyzer.launch.xml running_mode:=perception_planning vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=[MAP_PATH]\n

After the command, the e2e_simulator and the reaction_analyzer are launched, and the test starts automatically. After the test is completed, the results are stored in the output_file_path you defined.

"},{"location":"tools/reaction_analyzer/#prepared-test-environment_1","title":"Prepared Test Environment","text":"

Scene without object:

Scene with object:

"},{"location":"tools/reaction_analyzer/#custom-test-environment","title":"Custom Test Environment","text":"

If you want to run the reaction analyzer with your custom test environment, you need to redefine some of the parameters: initialization_pose, entity_params, goal_pose, and the topic_publisher parameters (for perception_planning mode).

ros2 launch autoware_launch e2e_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=[MAP_PATH]\n

After localizing the EGO and dummy vehicle, write the positions of these entities in the map frame into reaction_analyzer.param.yaml. To achieve this:

Note: initialization_pose is only valid for planning_control mode.

ros2 bag record --all\n

NOTE: You should record the rosbags in the same environment and with the same EGO vehicle position. You do not need to run Autoware while recording.

"},{"location":"tools/reaction_analyzer/#results","title":"Results","text":"

The results are stored in CSV format and written to the output_file_path you defined. The file traces each pipeline of Autoware using the header timestamps of the messages, and it reports the Node Latency, Pipeline Latency, and Total Latency for each node.

"},{"location":"tools/reaction_analyzer/#parameters","title":"Parameters","text":"Name Type Description timer_period double [s] Period for the main processing timer. test_iteration int Number of iterations for the test. output_file_path string Directory path where test results and statistics will be stored. spawn_time_after_init double [s] Time delay after initialization before spawning objects. Only valid perception_planning mode. spawn_distance_threshold double [m] Distance threshold for spawning objects. Only valid planning_control mode. poses.initialization_pose struct Initial pose of the vehicle, containing x, y, z, roll, pitch, and yaw fields. Only valid planning_control mode. poses.entity_params struct Parameters for entities (e.g., obstacles), containing x, y, z, roll, pitch, yaw, x_dimension, y_dimension, and z_dimension. poses.goal_pose struct Goal pose of the vehicle, containing x, y, z, roll, pitch, and yaw fields. topic_publisher.path_bag_without_object string Path to the ROS bag file without objects. Only valid perception_planning mode. topic_publisher.path_bag_with_object string Path to the ROS bag file with objects. Only valid perception_planning mode. topic_publisher.spawned_pointcloud_sampling_distance double [m] Sampling distance for point clouds of spawned objects. Only valid planning_control mode. topic_publisher.dummy_perception_publisher_period double [s] Publishing period for the dummy perception data. Only valid planning_control mode. topic_publisher.pointcloud_publisher.pointcloud_publisher_type string Defines how the PointCloud2 messages are going to be published. Modes explained above. topic_publisher.pointcloud_publisher.pointcloud_publisher_period double [s] Publishing period of the PointCloud2 messages. topic_publisher.pointcloud_publisher.publish_only_pointcloud_with_object bool Default false. Publish only the point cloud messages with the object. reaction_params.first_brake_params.debug_control_commands bool Debug publish flag. reaction_params.first_brake_params.control_cmd_buffer_time_interval double [s] Time interval for buffering control commands. reaction_params.first_brake_params.min_number_descending_order_control_cmd int Minimum number of control commands in descending order for triggering brake. reaction_params.first_brake_params.min_jerk_for_brake_cmd double [m/s\u00b3] Minimum jerk value for issuing a brake command. reaction_params.search_zero_vel_params.max_looking_distance double [m] Maximum looking distance for zero velocity on trajectory reaction_params.search_entity_params.search_radius double [m] Searching radius for spawned entity. Distance between ego pose and entity pose. reaction_chain struct List of the nodes with their topics and topic's message types."},{"location":"tools/reaction_analyzer/#limitations","title":"Limitations","text":" "},{"location":"tools/reaction_analyzer/#future-improvements","title":"Future improvements","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/","title":"accel_brake_map_calibrator","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#accel_brake_map_calibrator","title":"accel_brake_map_calibrator","text":"

The role of this node is to automatically calibrate accel_map.csv / brake_map.csv used in the autoware_raw_vehicle_cmd_converter node.

The base map, which is the Lexus map by default, is updated iteratively with the loaded driving data.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#how-to-calibrate","title":"How to calibrate","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the autoware_accel_brake_map_calibrator with the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).

ros2 launch autoware_accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true\n

Or if you want to use rosbag files, run the following commands.

ros2 launch autoware_accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true use_sim_time:=true\nros2 bag play <rosbag_file> --clock\n

If the parameter progress_file_output is set to true during calibration, a log file is output to [directory of autoware_accel_brake_map_calibrator]/config/. You can also see the accel and brake maps in [directory of autoware_accel_brake_map_calibrator]/config/accel_map.csv and [directory of autoware_accel_brake_map_calibrator]/config/brake_map.csv after calibration.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#calibration-plugin","title":"Calibration plugin","text":"

The rviz:=true option displays the RViz with a calibration plugin as below.

The current status (velocity and pedal) is shown in the plugin. The color of the current cell varies between green and red depending on whether the current data is valid or invalid. Data that does not satisfy the following conditions is considered invalid and will not be used for estimation, since aggressive data (e.g. when the pedal is moving fast) causes poor calibration accuracy.

The detailed parameters are described in the parameter section.

Note: You don't need to worry about whether the current state is red or green during calibration. Just keep getting data until all the cells turn red.

The value of each cell in the map is gray at first, and it changes from blue to red as the number of valid data points in the cell accumulates. It is preferable to continue the calibration until each cell of the map becomes close to red. In particular, the performance near a stop depends strongly on the 0 ~ 6 m/s velocity range and the +0.2 ~ -0.4 pedal value range, so it is desirable to focus on those areas.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#diagnostics","title":"Diagnostics","text":"

The accel_brake_map_calibrator publishes a diagnostics message depending on the calibration status. Diagnostic type WARN indicates that the current accel/brake map is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the accel/brake map.

Status Diagnostics Type Diagnostics message Description No calibration required OK \"OK\" Calibration Required WARN \"Accel/brake map Calibration is required.\" The accuracy of current accel/brake map may be low.

This diagnostics status can also be checked on the following ROS topic.

ros2 topic echo /accel_brake_map_calibrator/output/update_suggest\n

When the diagnostics type is WARN, true is published on this topic, and updating the accel/brake map is suggested.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#evaluation-of-the-accel-brake-map-accuracy","title":"Evaluation of the accel / brake map accuracy","text":"

The accuracy of the map is evaluated by the Root Mean Squared Error (RMSE) between the observed and predicted accelerations.
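
For reference, this is the standard RMSE definition over the \(N\) collected samples, where \(a_{obs,i}\) and \(a_{pred,i}\) are the observed and predicted accelerations of the \(i\)-th sample:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( a_{obs,i} - a_{pred,i} \right)^2} \]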

TERMS:

You can check additional error information with the following topics.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#how-to-visualize-calibration-data","title":"How to visualize calibration data","text":"

The process of calibration can be visualized as below. Since these visualization scripts need the log output of the calibration, the pedal_accel_graph_output parameter must be set to true while the calibration is running.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#visualize-plot-of-relation-between-acceleration-and-pedal","title":"Visualize plot of relation between acceleration and pedal","text":"

The following command shows a plot of the data used in the calibration. In each velocity-range plot, you can see the distribution of the relationship between pedal and acceleration, as well as the raw data points colored according to their pitch angles.

ros2 run autoware_accel_brake_map_calibrator view_plot.py\n

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#visualize-statistics-about-accelerationvelocitypedal-data","title":"Visualize statistics about acceleration/velocity/pedal data","text":"

The following command shows the statistics of the calibration for all data in each map cell:

ros2 run autoware_accel_brake_map_calibrator view_statistics.py\n

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#how-to-save-the-calibrated-accel-brake-map-anytime-you-want","title":"How to save the calibrated accel / brake map anytime you want","text":"

You can save the accel and brake maps at any time with the following command.

ros2 service call /accel_brake_map_calibrator/update_map_dir tier4_vehicle_msgs/srv/UpdateAccelBrakeMap \"path: '<accel/brake map directory>'\"\n

You can also save the accel and brake maps in the default directory where Autoware reads accel_map.csv/brake_map.csv using the RViz plugin (AccelBrakeMapCalibratorButtonPanel) as follows.

  1. Click Panels tab, and select AccelBrakeMapCalibratorButtonPanel.

  2. Select the panel, and the button will appear at the bottom of RViz.

  3. Press the button, and the accel / brake map will be saved. (The button cannot be pressed in certain situations, such as when the calibrator node is not running.)

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#parameters","title":"Parameters","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#system-parameters","title":"System Parameters","text":"Name Type Description Default value update_method string you can select map calibration method. \"update_offset_each_cell\" calculates offsets for each grid cells on the map. \"update_offset_total\" calculates the total offset of the map. \"update_offset_each_cell\" get_pitch_method string \"tf\": get pitch from tf, \"none\": unable to perform pitch validation and pitch compensation \"tf\" pedal_accel_graph_output bool if true, it will output a log of the pedal accel graph. true progress_file_output bool if true, it will output a log and csv file of the update process. false default_map_dir str directory of default map [directory of autoware_raw_vehicle_cmd_converter]/data/default/ calibrated_map_dir str directory of calibrated map [directory of autoware_accel_brake_map_calibrator]/config/ update_hz double hz for update 10.0"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#algorithm-parameters","title":"Algorithm Parameters","text":"Name Type Description Default value initial_covariance double Covariance of initial acceleration map (larger covariance makes the update speed faster) 0.05 velocity_min_threshold double Speeds smaller than this are not used for updating. 0.1 velocity_diff_threshold double When the velocity data is more than this threshold away from the grid reference speed (center value), the associated data is not used for updating. 0.556 max_steer_threshold double If the steer angle is greater than this value, the associated data is not used for updating. 0.2 max_pitch_threshold double If the pitch angle is greater than this value, the associated data is not used for updating. 0.02 max_jerk_threshold double If the ego jerk calculated from ego acceleration is greater than this value, the associated data is not used for updating. 0.7 pedal_velocity_thresh double If the pedal moving speed is greater than this value, the associated data is not used for updating. 0.15 pedal_diff_threshold double If the current pedal value is more then this threshold away from the previous value, the associated data is not used for updating. 0.03 max_accel double Maximum value of acceleration calculated from velocity source. 5.0 min_accel double Minimum value of acceleration calculated from velocity source. -5.0 pedal_to_accel_delay double The delay time between actuation_cmd to acceleration, considered in the update logic. 0.3 update_suggest_thresh double threshold of RMSE ratio that update suggest flag becomes true. ( RMSE ratio: [RMSE of new map] / [RMSE of original map] ) 0.7 max_data_count int For visualization. When the data num of each grid gets this value, the grid color gets red. 100 accel_brake_value_source string Whether to use actuation_status or actuation_command as accel/brake sources. value status"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#test-utility-scripts","title":"Test utility scripts","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#constant-accelbrake-command-test","title":"Constant accel/brake command test","text":"

These scripts are useful for testing the accel/brake map calibration. They generate an ActuationCmd with a constant accel/brake value given interactively by a user through the CLI.

The accel/brake_tester.py scripts receive a target accel/brake command from the CLI and send the target value to actuation_cmd_publisher.py, which generates the ActuationCmd. You can run these scripts with the following commands in different terminals; the result will look like the screenshot below.

ros2 run autoware_accel_brake_map_calibrator accel_tester.py\nros2 run autoware_accel_brake_map_calibrator brake_tester.py\nros2 run autoware_accel_brake_map_calibrator actuation_cmd_publisher.py\n

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#calibration-method","title":"Calibration Method","text":"

Two algorithms are selectable for the acceleration map update: update_offset_four_cell_around and update_offset_each_cell. Please see the link for details.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#data-preprocessing","title":"Data Preprocessing","text":"

Before calibration, missing or unusable data (e.g., excessively large steering angles) must first be eliminated. The following parameters are used to determine which data to remove.
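
As an illustration, the thresholding can be pictured as a predicate like the following sketch (not the node's code; defaults are taken from the parameter table below):

# Illustrative data filter using the preprocessing thresholds listed below\ndef is_valid_sample(velocity, steer, pitch, jerk, pedal_speed, velocity_min_threshold=0.1, max_steer_threshold=0.2, max_pitch_threshold=0.02, max_jerk_threshold=0.7, pedal_velocity_thresh=0.15):\n    return (velocity >= velocity_min_threshold\n            and abs(steer) <= max_steer_threshold\n            and abs(pitch) <= max_pitch_threshold\n            and abs(jerk) <= max_jerk_threshold\n            and abs(pedal_speed) <= pedal_velocity_thresh)\n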

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#parameters_1","title":"Parameters","text":"Name Description Default Value velocity_min_threshold Exclude minimal velocity 0.1 max_steer_threshold Exclude large steering angle 0.2 max_pitch_threshold Exclude large pitch angle 0.02 max_jerk_threshold Exclude large jerk 0.7 pedal_velocity_thresh Exclude large pedaling speed 0.15"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#update_offset_each_cell","title":"update_offset_each_cell","text":"

Updates by the Recursive Least Squares (RLS) method using data close enough to each grid point.

Advantage: Only data close enough to each grid point is used for calibration, allowing accurate updates at each point.

Disadvantage: Calibration is time-consuming because a large amount of data is excluded.
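
A minimal sketch of the scalar RLS update for a single cell, following the update formula below with \(x = 1\) as in the variables table (illustrative only, not the node's code):

# Scalar RLS update for one cell: theta = map_offset, p = covariance,\n# y = measured_acc, x = phi (= 1), lam = forgetting factor (1.0 means no forgetting)\ndef rls_update(theta, p, y, x=1.0, lam=1.0):\n    denom = lam + p * x * x\n    theta = theta + (p * x / denom) * (y - theta * x)\n    p = p / denom\n    return theta, p\n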

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#parameters_2","title":"Parameters","text":"

Data selection is determined by the following thresholds.

Name Default Value velocity_diff_threshold 0.556 pedal_diff_threshold 0.03"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#update-formula","title":"Update formula","text":"\\[ \\begin{align} \\theta[n]=& \\theta[n-1]+\\frac{p[n-1]x^{(n)}}{\\lambda+p[n-1]{(x^{(n)})}^2}(y^{(n)}-\\theta[n-1]x^{(n)})\\\\ p[n]=&\\frac{p[n-1]}{\\lambda+p[n-1]{(x^{(n)})}^2} \\end{align} \\]"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#variables","title":"Variables","text":"Variable name Symbol covariance \\(p[n-1]\\) map_offset \\(\\theta[n]\\) forgettingfactor \\(\\lambda\\) phi \\(x(=1)\\) measured_acc \\(y\\)"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#update_offset_four_cell_around-1","title":"update_offset_four_cell_around [1]","text":"

Updates the offsets by RLS on the four grid points around newly obtained data. By considering linear interpolation, the update takes appropriate weights into account, so there is no need to remove data by thresholding.

Advantage: No data is wasted because updates are performed on the four grid points around the data with appropriate weighting. Disadvantage: Accuracy may be degraded by extreme bias in the data. For example, if data \(z(k)\) is biased near \(Z_{RR}\) in Fig. 2, updating is performed at the four surrounding points (\(Z_{RR}\), \(Z_{RL}\), \(Z_{LR}\), and \(Z_{LL}\)), but accuracy at \(Z_{LL}\) is not expected.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#implementation","title":"Implementation","text":"

See eqs. (7)-(10) in [1] for the update formula. In addition, eqs. (17) and (18) from [1] are used for anti-windup.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#references","title":"References","text":"

[1] Gabrielle Lochrie, Michael Doljevic, Mario Nona, Yongsoon Yoon, Anti-Windup Recursive Least Squares Method for Adaptive Lookup Tables with Application to Automotive Powertrain Control Systems, IFAC-PapersOnLine, Volume 54, Issue 20, 2021, Pages 840-845

"},{"location":"vehicle/autoware_external_cmd_converter/","title":"external_cmd_converter","text":""},{"location":"vehicle/autoware_external_cmd_converter/#external_cmd_converter","title":"external_cmd_converter","text":"

external_cmd_converter is a node that converts a desired mechanical input (throttle/brake) into acceleration and velocity commands by using the accel/brake map.

"},{"location":"vehicle/autoware_external_cmd_converter/#algorithm","title":"Algorithm","text":""},{"location":"vehicle/autoware_external_cmd_converter/#how-to-calculate-reference-acceleration-and-velocity","title":"How to calculate reference acceleration and velocity","text":"

A reference acceleration and velocity are derived from the throttle and brake values of external control commands.
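
Combining the two formulas below, the conversion can be sketched as follows (illustrative names only; accel_brake_map stands for the 2-D lookup \(Acc\)):

# Illustrative sketch of the conversion described by the formulas below\ndef convert_external_cmd(throttle_d, brake_d, v_current, sign_gear, k_v_ref, accel_brake_map):\n    pedal_d = throttle_d - brake_d\n    acc_ref = accel_brake_map(pedal_d, v_current)  # 2-D lookup Acc(pedal, velocity)\n    v_ref = v_current + k_v_ref * sign_gear * acc_ref\n    return acc_ref, v_ref\n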

"},{"location":"vehicle/autoware_external_cmd_converter/#reference-acceleration","title":"Reference Acceleration","text":"

A reference acceleration is calculated from the accel_brake_map based on the desired pedal value and the current velocity:

\\[ pedal_d = throttle_d - brake_d, \\] \\[ acc_{ref} = Acc(pedal_d, v_{x,current}). \\] Parameter Description \\(throttle_d\\) throttle value of external control command (~/in/external_control_cmd.control.throttle) \\(brake_d\\) brake value of external control command (~/in/external_control_cmd.control.brake) \\(v_{x,current}\\) current longitudinal velocity (~/in/odometry.twist.twist.linear.x) Acc accel_brake_map"},{"location":"vehicle/autoware_external_cmd_converter/#reference-velocity","title":"Reference Velocity","text":"

A reference velocity is calculated based on a current velocity and a reference acceleration:

\\[ v_{ref} = v_{x,current} + k_{v_{ref}} \\cdot \\text{sign}_{gear} \\cdot acc_{ref}. \\] Parameter Description \\(acc_{ref}\\) reference acceleration \\(k_{v_{ref}}\\) reference velocity gain \\(\\text{sign}_{gear}\\) gear command (~/in/shift_cmd) (Drive/Low: 1, Reverse: -1, Other: 0)"},{"location":"vehicle/autoware_external_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/in/external_control_cmd tier4_external_api_msgs::msg::ControlCommand target throttle/brake/steering_angle/steering_angle_velocity is necessary to calculate desired control command. ~/input/shift_cmd\" autoware_vehicle_msgs::GearCommand current command of gear. ~/input/emergency_stop tier4_external_api_msgs::msg::Heartbeat emergency heart beat for external command. ~/input/current_gate_mode tier4_control_msgs::msg::GateMode topic for gate mode. ~/input/odometry navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/autoware_external_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/out/control_cmd autoware_control_msgs::msg::Control ackermann control command converted from selected external command"},{"location":"vehicle/autoware_external_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description ref_vel_gain_ double reference velocity gain timer_rate double timer's update rate wait_for_first_topic double if time out check is done after receiving first topic control_command_timeout double time out check for control command emergency_stop_timeout double time out check for emergency stop command"},{"location":"vehicle/autoware_external_cmd_converter/#limitation","title":"Limitation","text":"

TBD.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/","title":"autoware_raw_vehicle_cmd_converter","text":""},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#autoware_raw_vehicle_cmd_converter","title":"autoware_raw_vehicle_cmd_converter","text":""},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#overview","title":"Overview","text":"

The raw_vehicle_command_converter is a crucial node in vehicle automation systems, responsible for translating desired steering and acceleration inputs into specific vehicle control commands. This process is achieved through a combination of a lookup table and an optional feedback control system.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#lookup-table","title":"Lookup Table","text":"

The core of the converter's functionality lies in its use of a CSV-formatted lookup table. This table encapsulates the relationship between the throttle/brake pedal (depending on your vehicle control interface) and the corresponding vehicle acceleration across various speeds. The converter utilizes this data to accurately translate target accelerations into appropriate throttle/brake values.
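
Conceptually, this is an inverse lookup: given a target acceleration and the current velocity, find the pedal value whose mapped acceleration matches. A rough sketch, assuming the acceleration column is monotonically increasing in the pedal value (not the actual implementation):

import numpy as np\n\n# acc_table[i, j]: acceleration at pedal_grid[i] and vel_grid[j], as read from the CSV\ndef accel_to_pedal(target_acc, velocity, pedal_grid, vel_grid, acc_table):\n    j = int(np.argmin(np.abs(vel_grid - velocity)))  # nearest velocity column\n    accs = acc_table[:, j]  # accelerations along the pedal axis\n    # assumes accs increases with pedal value; a full implementation interpolates in 2-D\n    return float(np.interp(target_acc, accs, pedal_grid))\n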

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#creation-of-reference-data","title":"Creation of Reference Data","text":"

Reference data for the lookup table is generated through the following steps:

  1. Data Collection: On a flat road, a constant value command (e.g., throttle/brake pedal) is applied to accelerate or decelerate the vehicle.
  2. Recording Data: During this phase, both the IMU acceleration and vehicle velocity data are recorded.
  3. CSV File Generation: A CSV file is created, detailing the relationship between command values, vehicle speed, and resulting acceleration.

Once the acceleration map is crafted, it should be loaded when the RawVehicleCmdConverter node is launched, with the file path defined in the launch file.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#auto-calibration-tool","title":"Auto-Calibration Tool","text":"

For ease of calibration and adjustments to the lookup table, an auto-calibration tool is available. More information and instructions for this tool can be found here.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#variable-gear-ratio-vgr","title":"Variable Gear Ratio (VGR)","text":"

This is a gear ratio for converting tire angle to steering angle. Generally, to improve operability, the gear ratio becomes dynamically larger as the speed increases or the steering angle becomes smaller. For a certain vehicle, data was acquired and the gear ratio was approximated by the following formula.

\\[ a + b \\times v^2 - c \\times \\lvert \\delta \\rvert \\]

For that vehicle, the coefficients were as follows.

vgr_coef_a: 15.713\nvgr_coef_b: 0.053\nvgr_coef_c: 0.042\n

When convert_steer_cmd_method: "vgr" is selected, the node receives the control command from the controller as the desired tire angle and calculates the desired steering angle to output. Also, when convert_actuation_to_steering_status: true, this node receives the actuation_status topic, calculates the steer tire angle from the steer_wheel_angle, and publishes it.
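
A sketch of the conversion, using the example coefficients above and assuming \(\delta\) is the current steering wheel angle (illustrative only):

# Illustrative VGR conversion using the example coefficients above\ndef variable_gear_ratio(v, steer_wheel_angle, a=15.713, b=0.053, c=0.042):\n    return a + b * v ** 2 - c * abs(steer_wheel_angle)\n\ndef desired_steering_angle(desired_tire_angle, v, steer_wheel_angle):\n    return desired_tire_angle * variable_gear_ratio(v, steer_wheel_angle)\n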

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#vehicle-adaptor","title":"Vehicle Adaptor","text":"

Under development: a feature that compensates control commands according to the dynamic characteristics of the vehicle. This feature works when use_vehicle_adaptor: true is set and requires control_horizon to be enabled, so you need to set enable_control_cmd_horizon_pub: true in the trajectory_follower node.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/input/control_cmd autoware_control_msgs::msg::Control target velocity/acceleration/steering_angle/steering_angle_velocity is necessary to calculate actuation command. ~/input/steering\" autoware_vehicle_msgs::msg::SteeringReport subscribe only when convert_actuation_to_steering_status: false. current status of steering used for steering feed back control ~/input/odometry navigation_msgs::Odometry twist topic in odometry is used. ~/input/actuation_status tier4_vehicle_msgs::msg::ActuationStatus actuation status is assumed to receive the same type of status as sent to the vehicle side. For example, if throttle/brake pedal/steer_wheel_angle is sent, the same type of status is received. In the case of steer_wheel_angle, it is used to calculate steer_tire_angle and VGR in this node.

Input topics when vehicle_adaptor is enabled

Name Type Description ~/input/accel geometry_msgs::msg::AccelWithCovarianceStamped acceleration status ~/input/operation_mode_state autoware_adapi_v1_msgs::msg::OperationModeState operation mode status ~/input/control_horizon autoware_control_msgs::msg::ControlHorizon control horizon command"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/output/actuation_cmd tier4_vehicle_msgs::msg::ActuationCommandStamped actuation command for vehicle to apply mechanical input ~/output/steering_status autoware_vehicle_msgs::msg::SteeringReport publish only when convert_actuation_to_steering_status: true. steer tire angle is calculated from steer wheel angle and published."},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#parameters","title":"Parameters","text":"Name Type Description Default Range csv_path_accel_map string path for acceleration map csv file $(find-pkg-share autoware_raw_vehicle_cmd_converter)/data/default/accel_map.csv N/A csv_path_brake_map string path for brake map csv file $(find-pkg-share autoware_raw_vehicle_cmd_converter)/data/default/brake_map.csv N/A csv_path_steer_map string path for steer map csv file $(find-pkg-share autoware_raw_vehicle_cmd_converter)/data/default/steer_map.csv N/A convert_accel_cmd boolean use accel or not true N/A convert_brake_cmd boolean use brake or not true N/A convert_steer_cmd boolean use steer or not true N/A use_steer_ff boolean steering steer controller using steer feed forward or not true N/A use_steer_fb boolean steering steer controller using steer feed back or not true N/A is_debugging boolean debugging mode or not false N/A max_throttle float maximum value of throttle 0.4 \u22650.0 max_brake float maximum value of brake 0.8 \u22650.0 max_steer float maximum value of steer 10.0 N/A min_steer float minimum value of steer -10.0 N/A steer_pid.kp float proportional coefficient value in PID control 150.0 N/A steer_pid.ki float integral coefficient value in PID control 15.0 >0.0 steer_pid.kd float derivative coefficient value in PID control 0.0 N/A steer_pid.max float maximum value of PID 8.0 N/A steer_pid.min float minimum value of PID -8.0 N/A steer_pid.max_p float maximum value of Proportional in PID 8.0 N/A steer_pid.min_p float minimum value of Proportional in PID -8.0 N/A steer_pid.max_i float maximum value of Integral in PID 8.0 N/A steer_pid.min_i float minimum value of Integral in PID -8.0 N/A steer_pid.max_d float maximum value of Derivative in PID 0.0 N/A steer_pid.min_d float minimum value of Derivative in PID 0.0 N/A steer_pid.invalid_integration_decay float invalid integration decay value in PID control 0.97 >0.0 convert_steer_cmd_method string method for converting steer command vgr ['vgr', 'steer_map'] vgr_coef_a float coefficient a for variable gear ratio 15.713 N/A vgr_coef_b float coefficient b for variable gear ratio 0.053 N/A vgr_coef_c float coefficient c for variable gear ratio 0.042 N/A convert_actuation_to_steering_status boolean convert actuation to steering status or not. Whether to subscribe to actuation_status and calculate and publish steering_status. For example, receive the steering wheel angle and calculate the steering tire angle based on the gear ratio. If false, the vehicle interface must publish steering_status. true N/A use_vehicle_adaptor boolean flag to enable feature that compensates control commands according to vehicle dynamics. false N/A"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#limitation","title":"Limitation","text":"

The current feedback implementation is applied only to steering control.

"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/","title":"steer_offset_estimator","text":""},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#steer_offset_estimator","title":"steer_offset_estimator","text":""},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#purpose","title":"Purpose","text":"

The role of this node is to automatically calibrate steer_offset used in the vehicle_interface node.

The base steer offset value, which is 0 by default, is updated iteratively with the loaded driving data. This module is supposed to be used in the straight driving situation shown below.

"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Estimates the steering offset sequentially from a kinematic model and state observations. It calculates the yaw rate error and then estimates the steering error recursively by the least squares method; for more details, see the updateSteeringOffset() function.
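
As a rough illustration under a kinematic bicycle model with small steering angles (the actual logic lives in updateSteeringOffset(); names here are illustrative):

import math\n\n# Treat the gap between the steering angle implied by the measured yaw rate and\n# the reported steering as an offset observation; call only above valid_min_velocity\ndef update_steering_offset(offset, covariance, yaw_rate, velocity, steer, wheelbase, forgetting_factor=0.999):\n    steer_expected = math.atan(wheelbase * yaw_rate / velocity)  # bicycle model\n    observation = steer_expected - steer\n    denom = forgetting_factor + covariance\n    offset = offset + (covariance / denom) * (observation - offset)\n    covariance = covariance / denom\n    return offset, covariance\n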

"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#input","title":"Input","text":"Name Type Description ~/input/twist geometry_msgs::msg::TwistStamped vehicle twist ~/input/steer autoware_vehicle_msgs::msg::SteeringReport steering"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#output","title":"Output","text":"Name Type Description ~/output/steering_offset tier4_debug_msgs::msg::Float32Stamped steering offset ~/output/steering_offset_covariance tier4_debug_msgs::msg::Float32Stamped covariance of steering offset"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the steer_offset_estimator with the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).

ros2 launch steer_offset_estimator steer_offset_estimator.launch.xml\n

Or if you want to use rosbag files, run the following commands.

ros2 param set /use_sim_time true\nros2 bag play <rosbag_file> --clock\n
"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#parameters","title":"Parameters","text":"Name Type Description Default Range initial_covariance float steer offset is larger than tolerance 1000 N/A steer_update_hz float update hz of steer data 10 \u22650.0 forgetting_factor float weight of using previous value 0.999 \u22650.0 valid_min_velocity float velocity below this value is not used 5 \u22650.0 valid_max_steer float steer above this value is not used 0.05 N/A warn_steer_offset_deg float Warn if offset is above this value. ex. if absolute estimated offset is larger than 2.5[deg] => warning 2.5 N/A"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#diagnostics","title":"Diagnostics","text":"

The steer_offset_estimator publishes a diagnostics message depending on the calibration status. Diagnostic type WARN indicates that the current steer_offset is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the steer_offset.

Status Diagnostics Type Diagnostics message No calibration required OK \"Preparation\" Calibration Required WARN \"Steer offset is larger than tolerance\"

This diagnostics status can also be checked on the following ROS topic.

ros2 topic echo /vehicle/status/steering_offset\n
"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/","title":"autoware_mission_details_overlay_rviz_plugin","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#autoware_mission_details_overlay_rviz_plugin","title":"autoware_mission_details_overlay_rviz_plugin","text":"

This RViz plugin displays the remaining distance and time for the current mission.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#input","title":"Input","text":"Name Type Description /planning/mission_remaining_distance_time autoware_planning_msgs::msg::MissionRemainingDistanceTime The topic is for mission remaining distance and time"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#overlay-parameters","title":"Overlay Parameters","text":"Name Type Default Value Description Width int 170 Width of the overlay [px] Height int 100 Height of the overlay [px] Right int 10 Margin from the right border [px] Top int 10 Margin from the top border [px]

The mission details display is aligned with the top right corner of the screen.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#usage","title":"Usage","text":"

Similar to autoware_overlay_rviz_plugin

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#credits","title":"Credits","text":"

Based on the jsk_visualization package.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#icons","title":"Icons","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/","title":"autoware_overlay_rviz_plugin","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#autoware_overlay_rviz_plugin","title":"autoware_overlay_rviz_plugin","text":"

Plugin for displaying 2D overlays over the RViz2 3D scene.

Based on the jsk_visualization package, under the 3-Clause BSD license.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal, steering status and gears.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#input","title":"Input","text":"Name Type Description /vehicle/status/velocity_status autoware_vehicle_msgs::msg::VelocityReport The topic is vehicle velocity /vehicle/status/turn_indicators_status autoware_vehicle_msgs::msg::TurnIndicatorsReport The topic is status of turn signal /vehicle/status/hazard_status autoware_vehicle_msgs::msg::HazardReport The topic is status of hazard /vehicle/status/steering_status autoware_vehicle_msgs::msg::SteeringReport The topic is status of steering /vehicle/status/gear_status autoware_vehicle_msgs::msg::GearReport The topic is status of gear /planning/scenario_planning/current_max_velocity tier4_planning_msgs::msg::VelocityLimit The topic is velocity limit /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The topic is status of traffic light"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#signaldisplay","title":"SignalDisplay","text":"Name Type Default Value Description property_width_ int 128 Width of the plotter window [px] property_height_ int 128 Height of the plotter window [px] property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_signal_color_ QColor QColor(25, 255, 240) Turn Signal color"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz2 and click the Add button under the Displays panel.

  2. Under By display type tab, select autoware_overlay_rviz_plugin/SignalDisplay and press OK.

  3. Enter the names of the topics if necessary.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_string_stamped_overlay_rviz_plugin/","title":"autoware_string_stamped_rviz_plugin","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_string_stamped_overlay_rviz_plugin/#autoware_string_stamped_rviz_plugin","title":"autoware_string_stamped_rviz_plugin","text":"

Plugin for displaying 2D overlays over the RViz2 3D scene.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_string_stamped_overlay_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the ROS message whose topic type is autoware_internal_debug_msgs::msg::StringStamped in rviz.

"},{"location":"visualization/autoware_perception_rviz_plugin/","title":"autoware_perception_rviz_plugin","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#autoware_perception_rviz_plugin","title":"autoware_perception_rviz_plugin","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#purpose","title":"Purpose","text":"

This is an RViz plugin for visualizing the results from the perception module. This package is based on the implementation of the RViz plugin developed by Autoware.Auto.

See Autoware.Auto design documentation for the original design philosophy. [1]

"},{"location":"visualization/autoware_perception_rviz_plugin/#input-types-visualization-results","title":"Input Types / Visualization Results","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#detectedobjects","title":"DetectedObjects","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#input-types","title":"Input Types","text":"Name Type Description autoware_perception_msgs::msg::DetectedObjects detection result array"},{"location":"visualization/autoware_perception_rviz_plugin/#visualization-result","title":"Visualization Result","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#trackedobjects","title":"TrackedObjects","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#input-types_1","title":"Input Types","text":"Name Type Description autoware_perception_msgs::msg::TrackedObjects tracking result array"},{"location":"visualization/autoware_perception_rviz_plugin/#visualization-result_1","title":"Visualization Result","text":"

Overwrite tracking results with detection results.

"},{"location":"visualization/autoware_perception_rviz_plugin/#predictedobjects","title":"PredictedObjects","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#input-types_2","title":"Input Types","text":"Name Type Description autoware_perception_msgs::msg::PredictedObjects prediction result array"},{"location":"visualization/autoware_perception_rviz_plugin/#visualization-result_2","title":"Visualization Result","text":"

Overwrite prediction results with tracking results.

"},{"location":"visualization/autoware_perception_rviz_plugin/#referencesexternal-links","title":"References/External links","text":"

[1] https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/tools/visualization/autoware_rviz_plugins

"},{"location":"visualization/autoware_perception_rviz_plugin/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"visualization/bag_time_manager_rviz_plugin/","title":"bag_time_manager_rviz_plugin","text":""},{"location":"visualization/bag_time_manager_rviz_plugin/#bag_time_manager_rviz_plugin","title":"bag_time_manager_rviz_plugin","text":""},{"location":"visualization/bag_time_manager_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin allows publishing and controlling the ROS bag time.

"},{"location":"visualization/bag_time_manager_rviz_plugin/#output","title":"Output","text":"

TBD.

"},{"location":"visualization/bag_time_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select BagTimeManagerPanel and press OK.

  3. Confirm that bag_time_manager_rviz_plugin/BagTimeManagerPanel has been added.

"},{"location":"visualization/tier4_adapi_rviz_plugin/","title":"tier4_adapi_rviz_plugin","text":""},{"location":"visualization/tier4_adapi_rviz_plugin/#tier4_adapi_rviz_plugin","title":"tier4_adapi_rviz_plugin","text":""},{"location":"visualization/tier4_adapi_rviz_plugin/#routepanel","title":"RoutePanel","text":"

To use the panel, set the topic name of the 2D Goal Pose Tool to /rviz/routing/pose. By default, when the tool publishes a pose, the panel immediately sets a route with that pose as the goal. The allow_goal_modification option can be enabled or disabled with the check box.

Push the mode button in the waypoint section to enter waypoint mode. In this mode, published poses are added to the waypoints. Press the apply button to set the route using the saved waypoints (the last one is the goal). Reset the saved waypoints with the reset button.

"},{"location":"visualization/tier4_adapi_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"visualization/tier4_adapi_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"visualization/tier4_adapi_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/","title":"tier4_camera_view_rviz_plugin","text":""},{"location":"visualization/tier4_camera_view_rviz_plugin/#tier4_camera_view_rviz_plugin","title":"tier4_camera_view_rviz_plugin","text":""},{"location":"visualization/tier4_camera_view_rviz_plugin/#thirdpersonview-tool","title":"ThirdPersonView Tool","text":"

Add the tier4_camera_view_rviz_plugin/ThirdPersonViewTool tool to RViz. Push the button, and the camera will focus on the vehicle and set the target frame to base_link. Shortcut key: 'o'.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#birdeyeview-tool","title":"BirdEyeView Tool","text":"

Add the tier4_camera_view_rviz_plugin/BirdEyeViewTool tool to RViz. Push the button, and the camera will switch to the BEV view; the target frame is consistent with the latest frame. Shortcut key: 'r'.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"visualization/tier4_datetime_rviz_plugin/","title":"tier4_datetime_rviz_plugin","text":""},{"location":"visualization/tier4_datetime_rviz_plugin/#tier4_datetime_rviz_plugin","title":"tier4_datetime_rviz_plugin","text":""},{"location":"visualization/tier4_datetime_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the ROS Time and Wall Time in rviz.

"},{"location":"visualization/tier4_datetime_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/tier4_datetime_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select tier4_datetime_rviz_plugin/AutowareDateTimePanel and press OK.
"},{"location":"visualization/tier4_localization_rviz_plugin/","title":"tier4_localization_rviz_plugin","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#tier4_localization_rviz_plugin","title":"tier4_localization_rviz_plugin","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin can display the localization history obtained by ekf_localizer, ndt_scan_matching, and GNSS. If the uncertainty of the estimated pose is given, it can also be displayed.

"},{"location":"visualization/tier4_localization_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#input","title":"Input","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#pose-history","title":"Pose History","text":"Name Type Description input/pose geometry_msgs::msg::PoseStamped In input/pose, put the result of localization calculated by ekf_localizer, ndt_scan_matching, or GNSS"},{"location":"visualization/tier4_localization_rviz_plugin/#pose-with-covariance-history","title":"Pose With Covariance History","text":"Name Type Description input/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped In input/pose_with_covariance, put the result of localization calculated by ekf_localizer, ndt_scan_matching, or GNSS"},{"location":"visualization/tier4_localization_rviz_plugin/#parameters","title":"Parameters","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#pose-history_1","title":"Pose History","text":"Name Type Default Value Description property_buffer_size_ int 100 Buffer size of topic property_line_view_ bool true Use Line property or not property_line_width_ float 0.1 Width of Line property [m] property_line_alpha_ float 1.0 Alpha of Line property property_line_color_ QColor Qt::white Color of Line property"},{"location":"visualization/tier4_localization_rviz_plugin/#pose-with-covariance-history_1","title":"Pose With Covariance History","text":"Name Type Default Value Description property_buffer_size_ int 100 Buffer size of topic property_path_view_ bool true Use path property or not property_shape_type_ string Line Line or Arrow property_line_width_ float 0.1 Width of Line property [m] property_line_alpha_ float 1.0 Alpha of Line property property_line_color_ QColor Qt::white Color of Line property property_arrow_shaft_length float 0.3 Shaft length of Arrow property property_arrow_shaft_diameter float 0.15 Shaft diameter of Arrow property property_arrow_head_length float 0.2 Head length of Arrow property property_arrow_head_diameter float 0.3 Head diameter of Arrow property property_arrow_alpha_ float 1.0 Alpha of Arrow property property_arrow_color_ QColor Qt::white Color of Arrow property property_sphere_scale_ float 1.0 Scale of Sphere property property_sphere_alpha_ float 0.5 Alpha of Sphere property property_sphere_color_ QColor (204, 51, 204) Color of Sphere property"},{"location":"visualization/tier4_localization_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/tier4_localization_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select tier4_localization_rviz_plugin/PoseHistory or PoseWithCovarianceHistory. Next, press OK.
  3. Enter the name of the topic where you want to view the trajectory and the covariance.
  4. You can view the trajectory and the covariance.
"},{"location":"visualization/tier4_planning_rviz_plugin/","title":"tier4_planning_rviz_plugin","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#tier4_planning_rviz_plugin","title":"tier4_planning_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are licensed under the BSD license.

"},{"location":"visualization/tier4_planning_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the path, trajectory, and maximum speed.

"},{"location":"visualization/tier4_planning_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#input","title":"Input","text":"Name Type Description /input/path autoware_planning_msgs::msg::Path The topic on which to subscribe path /input/trajectory autoware_planning_msgs::msg::Trajectory The topic on which to subscribe trajectory /planning/scenario_planning/current_max_velocity tier4_planning_msgs/msg/VelocityLimit The topic on which to publish max velocity"},{"location":"visualization/tier4_planning_rviz_plugin/#output","title":"Output","text":"Name Type Description /planning/mission_planning/checkpoint geometry_msgs/msg/PoseStamped The topic on which to publish checkpoint"},{"location":"visualization/tier4_planning_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#missioncheckpoint","title":"MissionCheckpoint","text":"Name Type Default Value Description pose_topic_property_ string mission_checkpoint The topic on which to publish checkpoint std_dev_x_ float 0.5 X standard deviation for checkpoint pose [m] std_dev_y_ float 0.5 Y standard deviation for checkpoint pose [m] std_dev_theta_ float M_PI / 12.0 Theta standard deviation for checkpoint pose [rad] position_z_ float 0.0 Z position for checkpoint pose [m]"},{"location":"visualization/tier4_planning_rviz_plugin/#path","title":"Path","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_view_ bool false Use Constant Width or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"visualization/tier4_planning_rviz_plugin/#drivablearea","title":"DrivableArea","text":"Name Type Default Value Description color_scheme_property_ int 0 Color scheme of DrivableArea property alpha_property_ float 0.2 Alpha of DrivableArea property draw_under_property_ bool false Draw as background or not"},{"location":"visualization/tier4_planning_rviz_plugin/#pathfootprint","title":"PathFootprint","text":"Name Type Default Value Description property_path_footprint_view_ bool true Use Path Footprint property or not property_path_footprint_alpha_ float 1.0 Alpha of Path Footprint property property_path_footprint_color_ QColor Qt::black Color of Path Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m]"},{"location":"visualization/tier4_planning_rviz_plugin/#trajectory","title":"Trajectory","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ 
QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_velocity_text_view_ bool false View text Velocity property_velocity_text_scale_ float 0.3 Scale of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"visualization/tier4_planning_rviz_plugin/#trajectoryfootprint","title":"TrajectoryFootprint","text":"Name Type Default Value Description property_trajectory_footprint_view_ bool true Use Trajectory Footprint property or not property_trajectory_footprint_alpha_ float 1.0 Alpha of Trajectory Footprint property property_trajectory_footprint_color_ QColor QColor(230, 230, 50) Color of Trajectory Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m] property_trajectory_point_view_ bool false Use Trajectory Point property or not property_trajectory_point_alpha_ float 1.0 Alpha of Trajectory Point property property_trajectory_point_color_ QColor QColor(0, 60, 255) Color of Trajectory Point property property_trajectory_point_radius_ float 0.1 Radius of Trajectory Point property"},{"location":"visualization/tier4_planning_rviz_plugin/#maxvelocity","title":"MaxVelocity","text":"Name Type Default Value Description property_topic_name_ string /planning/scenario_planning/current_max_velocity The topic on which to subscribe max velocity property_text_color_ QColor QColor(255, 255, 255) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 96 Length of the plotter window [px] property_value_scale_ float 1.0 / 4.0 Value scale"},{"location":"visualization/tier4_planning_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_planning_rviz_plugin displays and press OK.
  3. Enter the name of the topic where you want to view the path or trajectory.
"},{"location":"visualization/tier4_planning_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"visualization/tier4_planning_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"visualization/tier4_planning_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"visualization/tier4_state_rviz_plugin/","title":"tier4_state_rviz_plugin","text":""},{"location":"visualization/tier4_state_rviz_plugin/#tier4_state_rviz_plugin","title":"tier4_state_rviz_plugin","text":""},{"location":"visualization/tier4_state_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the current status of Autoware. It can also engage the vehicle from the panel.

"},{"location":"visualization/tier4_state_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_state_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState The topic represents the state of operation mode /api/routing/state autoware_adapi_v1_msgs::msg::RouteState The topic represents the state of route /api/localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState The topic represents the state of localization initialization /api/motion/state autoware_adapi_v1_msgs::msg::MotionState The topic represents the state of motion /api/autoware/get/emergency tier4_external_api_msgs::msg::Emergency The topic represents the state of external emergency /vehicle/status/gear_status autoware_vehicle_msgs::msg::GearReport The topic represents the state of gear"},{"location":"visualization/tier4_state_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to autonomous /api/operation_mode/change_to_stop autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to stop /api/operation_mode/change_to_local autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to local /api/operation_mode/change_to_remote autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to remote /api/operation_mode/enable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to enable vehicle control by Autoware /api/operation_mode/disable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to disable vehicle control by Autoware /api/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute The service to clear route state /api/motion/accept_start autoware_adapi_v1_msgs::srv::AcceptStart The service to accept the vehicle to start /api/autoware/set/emergency tier4_external_api_msgs::srv::SetEmergency The service to set external emergency /planning/scenario_planning/max_velocity_default tier4_planning_msgs::msg::VelocityLimit The topic to set maximum speed of the vehicle"},{"location":"visualization/tier4_state_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_state_rviz_plugin/AutowareStatePanel and press OK.

  3. If the auto button is activated, you can engage by clicking it.

"},{"location":"visualization/tier4_system_rviz_plugin/","title":"tier4_system_rviz_plugin","text":""},{"location":"visualization/tier4_system_rviz_plugin/#tier4_system_rviz_plugin","title":"tier4_system_rviz_plugin","text":""},{"location":"visualization/tier4_system_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the hazard information from Autoware and outputs notices when emergencies arise from initial localization and route setting.

"},{"location":"visualization/tier4_system_rviz_plugin/#input","title":"Input","text":"Name Type Description /system/emergency/hazard_status autoware_system_msgs::msg::HazardStatusStamped The topic represents the emergency information from Autoware"},{"location":"visualization/tier4_traffic_light_rviz_plugin/","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"visualization/tier4_traffic_light_rviz_plugin/#tier4_traffic_light_rviz_plugin","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"visualization/tier4_traffic_light_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin panel publishes dummy traffic light signals.

"},{"location":"visualization/tier4_traffic_light_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_traffic_light_rviz_plugin/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray Publish traffic light signals"},{"location":"visualization/tier4_traffic_light_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select TrafficLightPublishPanel and press OK.
  3. Set Traffic Light ID & Traffic Light Status and press SET button.
  4. Traffic light signals are published while the PUBLISH button is pressed.
"},{"location":"visualization/tier4_vehicle_rviz_plugin/","title":"tier4_vehicle_rviz_plugin","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#tier4_vehicle_rviz_plugin","title":"tier4_vehicle_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are licensed under the BSD license.

"},{"location":"visualization/tier4_vehicle_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal, steering status and acceleration.

"},{"location":"visualization/tier4_vehicle_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#input","title":"Input","text":"Name Type Description /vehicle/status/velocity_status autoware_vehicle_msgs::msg::VelocityReport The topic is vehicle twist /control/turn_signal_cmd autoware_vehicle_msgs::msg::TurnIndicatorsReport The topic is status of turn signal /vehicle/status/steering_status autoware_vehicle_msgs::msg::SteeringReport The topic is status of steering /localization/acceleration geometry_msgs::msg::AccelWithCovarianceStamped The topic is the acceleration"},{"location":"visualization/tier4_vehicle_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#consolemeter","title":"ConsoleMeter","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale"},{"location":"visualization/tier4_vehicle_rviz_plugin/#steeringangle","title":"SteeringAngle","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale property_handle_angle_scale_ float 3.0 Scale is steering angle to handle angle"},{"location":"visualization/tier4_vehicle_rviz_plugin/#turnsignal","title":"TurnSignal","text":"Name Type Default Value Description property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_width_ int 256 Left of the plotter window [px] property_height_ int 256 Width of the plotter window [px]"},{"location":"visualization/tier4_vehicle_rviz_plugin/#velocityhistory","title":"VelocityHistory","text":"Name Type Default Value Description property_velocity_timeout_ float 10.0 Timeout of velocity [s] property_velocity_alpha_ float 1.0 Alpha of velocity property_velocity_scale_ float 0.3 Scale of velocity property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of velocity history property_vel_max_ float 3.0 Color Border Vel Max [m/s]"},{"location":"visualization/tier4_vehicle_rviz_plugin/#accelerationmeter","title":"AccelerationMeter","text":"Name Type Default Value Description property_normal_text_color_ QColor QColor(25, 255, 240) Normal text color property_emergency_text_color_ QColor QColor(255, 80, 80) Emergency acceleration color property_left_ int 896 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1 / 6.667 Value text scale property_emergency_threshold_max_ float 1.0 Max acceleration threshold for emergency [m/s^2] property_emergency_threshold_min_ float -2.5 Min acceleration threshold for emergency 
[m/s^2]"},{"location":"visualization/tier4_vehicle_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/tier4_vehicle_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_vehicle_rviz_plugin displays and press OK.
  3. Enter the name of the topic where you want to view the status.
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Autoware Universe","text":""},{"location":"#autoware-universe","title":"Autoware Universe","text":""},{"location":"#welcome-to-autoware-universe","title":"Welcome to Autoware Universe","text":"

Autoware Universe serves as a foundational pillar within the Autoware ecosystem, playing a critical role in enhancing the core functionalities of autonomous driving technologies. This repository is a pivotal element of the Autoware Core/Universe concept, managing a wide array of packages that significantly extend the capabilities of autonomous vehicles.

"},{"location":"#getting-started","title":"Getting Started","text":"

To dive into the vast world of Autoware and understand how Autoware Universe fits into the bigger picture, we recommend starting with the Autoware Documentation. This resource provides a thorough overview of the Autoware ecosystem, guiding you through its components, functionalities, and how to get started with development.

"},{"location":"#explore-autoware-universe-documentation","title":"Explore Autoware Universe documentation","text":"

For those looking to explore the specifics of Autoware Universe components, the Autoware Universe Documentation, deployed with MkDocs, offers detailed insights.

"},{"location":"#code-coverage-metrics","title":"Code Coverage Metrics","text":"

The table below shows the coverage rate of the entire Autoware Universe and of each sub-component.

"},{"location":"#entire-project-coverage","title":"Entire Project Coverage","text":""},{"location":"#component-wise-coverage","title":"Component-wise Coverage","text":"

You can check more details by clicking the badge and navigating to the Codecov website.

Component Coverage Common Control Evaluator Launch TBD Localization Map Perception Planning Sensing Simulator System Vehicle"},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#contributor-covenant-code-of-conduct","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to a positive environment for our community include:

Examples of unacceptable behavior include:

"},{"location":"CODE_OF_CONDUCT/#enforcement-responsibilities","title":"Enforcement Responsibilities","text":"

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at conduct@autoware.org. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

"},{"location":"CODE_OF_CONDUCT/#enforcement-guidelines","title":"Enforcement Guidelines","text":"

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

"},{"location":"CODE_OF_CONDUCT/#1-correction","title":"1. Correction","text":"

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

"},{"location":"CODE_OF_CONDUCT/#2-warning","title":"2. Warning","text":"

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

"},{"location":"CODE_OF_CONDUCT/#3-temporary-ban","title":"3. Temporary Ban","text":"

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

"},{"location":"CODE_OF_CONDUCT/#4-permanent-ban","title":"4. Permanent Ban","text":"

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

"},{"location":"CONTRIBUTING/","title":"Contributing","text":""},{"location":"CONTRIBUTING/#contributing","title":"Contributing","text":"

See https://autowarefoundation.github.io/autoware-documentation/main/contributing/.

"},{"location":"DISCLAIMER/","title":"DISCLAIMER","text":"

DISCLAIMER

\u201cAutoware\u201d will be provided by The Autoware Foundation under the Apache License 2.0. This \u201cDISCLAIMER\u201d will be applied to all users of Autoware (a \u201cUser\u201d or \u201cUsers\u201d) with the Apache License 2.0 and Users shall hereby approve and acknowledge all the contents specified in this disclaimer below and will be deemed to consent to this disclaimer without any objection upon utilizing or downloading Autoware.

Disclaimer and Waiver of Warranties

  1. AUTOWARE FOUNDATION MAKES NO REPRESENTATION OR WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH RESPECT TO PROVIDING AUTOWARE (the \u201cService\u201d) including but not limited to any representation or warranty (i) of fitness or suitability for a particular purpose contemplated by the Users, (ii) of the expected functions, commercial value, accuracy, or usefulness of the Service, (iii) that the use by the Users of the Service complies with the laws and regulations applicable to the Users or any internal rules established by industrial organizations, (iv) that the Service will be free of interruption or defects, (v) of the non-infringement of any third party's right and (vi) the accuracy of the content of the Services and the software itself.

  2. The Autoware Foundation shall not be liable for any damage incurred by the User that is attributable to the Autoware Foundation for any reason whatsoever. UNDER NO CIRCUMSTANCES SHALL THE AUTOWARE FOUNDATION BE LIABLE FOR INCIDENTAL, INDIRECT, SPECIAL OR FUTURE DAMAGES OR LOSS OF PROFITS.

  3. A User shall be entirely responsible for the content posted by the User and its use of any content of the Service or the Website. If the User is held responsible in a civil action such as a claim for damages or even in a criminal case, the Autoware Foundation and member companies, governments and academic & non-profit organizations and their directors, officers, employees and agents (collectively, the \u201cIndemnified Parties\u201d) shall be completely discharged from any rights or assertions the User may have against the Indemnified Parties, or from any legal action, litigation or similar procedures.

Indemnity

A User shall indemnify and hold the Indemnified Parties harmless from any of their damages, losses, liabilities, costs or expenses (including attorneys' fees or criminal compensation), or any claims or demands made against the Indemnified Parties by any third party, due to or arising out of, or in connection with utilizing Autoware (including the representations and warranties), the violation of applicable Product Liability Law of each country (including criminal case) or violation of any applicable laws by the Users, or the content posted by the User or its use of any content of the Service or the Website.

"},{"location":"common/","title":"Common","text":""},{"location":"common/#common","title":"Common","text":""},{"location":"common/#getting-started","title":"Getting Started","text":"

The Autoware.Universe Common folder consists of common and testing libraries that are used by other Autoware components, as well as useful plugins for visualization in RVIZ2.

Note

In addition to the ones listed in this folder, users can also have a look at some of the add-ons in the autoware_tools/common documentation page.

"},{"location":"common/#highlights","title":"Highlights","text":"

Some of the commonly used libraries are:

  1. autoware_universe_utils
  2. autoware_motion_utils
"},{"location":"common/autoware_adapi_specs/","title":"autoware_adapi_specs","text":""},{"location":"common/autoware_adapi_specs/#autoware_adapi_specs","title":"autoware_adapi_specs","text":"

This package is a specification of Autoware AD API.

"},{"location":"common/autoware_auto_common/design/comparisons/","title":"Comparisons","text":""},{"location":"common/autoware_auto_common/design/comparisons/#comparisons","title":"Comparisons","text":"

The float_comparisons.hpp library is a simple set of functions for performing approximate numerical comparisons. There are separate functions for performing comparisons using absolute bounds and relative bounds. Absolute comparison checks are prefixed with abs_ and relative checks are prefixed with rel_.

The bool_comparisons.hpp library additionally contains an XOR operator.

The intent of the library is to improve the readability of code and reduce the likelihood of typographical errors when using numerical and boolean comparisons.

"},{"location":"common/autoware_auto_common/design/comparisons/#target-use-cases","title":"Target use cases","text":"

The approximate comparisons are intended to be used to check whether two numbers lie within some absolute or relative interval. The exclusive_or function will test whether two values cast to different boolean values.

"},{"location":"common/autoware_auto_common/design/comparisons/#assumptions","title":"Assumptions","text":""},{"location":"common/autoware_auto_common/design/comparisons/#example-usage","title":"Example Usage","text":"
#include \"autoware_auto_common/common/bool_comparisons.hpp\"\n#include \"autoware_auto_common/common/float_comparisons.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::qp_interface::helper_functions::comparisons;\n\nstatic constexpr auto epsilon = 0.2;\nstatic constexpr auto relative_epsilon = 0.01;\n\nstd::cout << exclusive_or(true, false) << \"\\n\";\n// Prints: true\n\nstd::cout << rel_eq(1.0, 1.1, relative_epsilon)) << \"\\n\";\n// Prints: false\n\nstd::cout << approx_eq(10000.0, 10010.0, epsilon, relative_epsilon)) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_eq(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_ne(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_eq_zero(0.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_lt(4.0, 4.25, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_lte(1.0, 1.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gt(1.25, 1.0, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gte(0.75, 1.0, epsilon) << \"\\n\";\n// Prints: false\n
"},{"location":"common/autoware_component_interface_specs/","title":"autoware_component_interface_specs","text":""},{"location":"common/autoware_component_interface_specs/#autoware_component_interface_specs","title":"autoware_component_interface_specs","text":"

This package is a specification of component interfaces.

"},{"location":"common/autoware_component_interface_tools/","title":"autoware_component_interface_tools","text":""},{"location":"common/autoware_component_interface_tools/#autoware_component_interface_tools","title":"autoware_component_interface_tools","text":"

This package provides the following tools for the component interface.

"},{"location":"common/autoware_component_interface_tools/#service_log_checker","title":"service_log_checker","text":"

Monitors the service log of component_interface_utils and displays a message if the response status is an error.

"},{"location":"common/autoware_component_interface_utils/","title":"autoware_component_interface_utils","text":""},{"location":"common/autoware_component_interface_utils/#autoware_component_interface_utils","title":"autoware_component_interface_utils","text":""},{"location":"common/autoware_component_interface_utils/#features","title":"Features","text":"

This is a utility package that provides the following features:

"},{"location":"common/autoware_component_interface_utils/#design","title":"Design","text":"

This package provides wrappers for the interface classes of rclcpp. The wrappers limit the usage of the original classes to enforce the processing recommended by the component interface. They do not inherit the rclcpp classes; instead, they forward or wrap only the member functions that are allowed to be used.

"},{"location":"common/autoware_component_interface_utils/#instantiation-of-the-wrapper-class","title":"Instantiation of the wrapper class","text":"

The wrapper class requires interface information in this format.

struct SampleService\n{\nusing Service = sample_msgs::srv::ServiceType;\nstatic constexpr char name[] = \"/sample/service\";\n};\n\nstruct SampleMessage\n{\nusing Message = sample_msgs::msg::MessageType;\nstatic constexpr char name[] = \"/sample/message\";\nstatic constexpr size_t depth = 1;\nstatic constexpr auto reliability = RMW_QOS_POLICY_RELIABILITY_RELIABLE;\nstatic constexpr auto durability = RMW_QOS_POLICY_DURABILITY_TRANSIENT_LOCAL;\n};\n

Create the wrapper using the above definition as follows.

// header file\nautoware::component_interface_utils::Service<SampleService>::SharedPtr srv_;\nautoware::component_interface_utils::Client<SampleService>::SharedPtr cli_;\nautoware::component_interface_utils::Publisher<SampleMessage>::SharedPtr pub_;\nautoware::component_interface_utils::Subscription<SampleMessage>::SharedPtr sub_;\n\n// source file\nconst auto node = autoware::component_interface_utils::NodeAdaptor(this);\nnode.init_srv(srv_, callback);\nnode.init_cli(cli_);\nnode.init_pub(pub_);\nnode.init_sub(sub_, callback);\n
"},{"location":"common/autoware_component_interface_utils/#logging-for-service-and-client","title":"Logging for service and client","text":"

If the wrapper class is used, logging is automatically enabled. The log level is RCLCPP_INFO.

"},{"location":"common/autoware_component_interface_utils/#service-exception-for-response","title":"Service exception for response","text":"

If the wrapper class is used and the service response has a status field, a thrown ServiceException is automatically caught and set to the status. This is useful when returning an error from a function called from the service callback.

void service_callback(Request req, Response res)\n{\nfunction();\nres->status.success = true;\n}\n\nvoid function()\n{\nthrow ServiceException(ERROR_CODE, \"message\");\n}\n

If the wrapper class is not used or the service response has no status, manually catch the ServiceException as follows.

void service_callback(Request req, Response res)\n{\ntry {\nfunction();\nres->status.success = true;\n} catch (const ServiceException & error) {\nres->status = error.status();\n}\n}\n
"},{"location":"common/autoware_component_interface_utils/#relays-for-topic-and-service","title":"Relays for topic and service","text":"

There are utilities for relaying services and messages of the same type.

const auto node = autoware::component_interface_utils::NodeAdaptor(this);\nservice_callback_group_ = create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);\nnode.relay_message(pub_, sub_);\nnode.relay_service(cli_, srv_, service_callback_group_);  // group is for avoiding deadlocks\n
"},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/","title":"Fake Test Node","text":""},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#fake-test-node","title":"Fake Test Node","text":""},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#what-this-package-provides","title":"What this package provides","text":"

When writing an integration test for a node in C++ using GTest, there is quite some boilerplate code that needs to be written to set up a fake node that publishes expected messages on an expected topic and subscribes to messages on some other topic. This is usually implemented as a custom GTest fixture.

This package contains a library that introduces two utility classes that can be used in place of custom fixtures described above to write integration tests for a node:

These fixtures take care of initializing and re-initializing rclcpp as well as of checking that all subscribers and publishers have a match, thus reducing the amount of boilerplate code that the user needs to write.

"},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#how-to-use-this-library","title":"How to use this library","text":"

After including the relevant header, the user can create a typedef for a custom fixture name and use the provided classes directly as fixtures in TEST_F and TEST_P tests.

"},{"location":"common/autoware_fake_test_node/design/fake_test_node-design/#example-usage","title":"Example usage","text":"

Let's say there is a node NodeUnderTest that requires testing. It just subscribes to std_msgs::msg::Int32 messages and publishes a std_msgs::msg::Bool to indicate that the input is positive. To test such a node the following code can be used utilizing the autoware::fake_test_node::FakeTestNode:

using FakeNodeFixture = autoware::fake_test_node::FakeTestNode;\n\n/// @test Test that we can use a non-parametrized test.\nTEST_F(FakeNodeFixture, Test) {\nInt32 msg{};\nmsg.data = 15;\nconst auto node = std::make_shared<NodeUnderTest>();\n\nBool::SharedPtr last_received_msg{};\nauto fake_odom_publisher = create_publisher<Int32>(\"/input_topic\");\nauto result_odom_subscription = create_subscription<Bool>(\"/output_topic\", *node,\n[&last_received_msg](const Bool::SharedPtr msg) {last_received_msg = msg;});\n\nconst auto dt{std::chrono::milliseconds{100LL}};\nconst auto max_wait_time{std::chrono::seconds{10LL}};\nauto time_passed{std::chrono::milliseconds{0LL}};\nwhile (!last_received_msg) {\nfake_odom_publisher->publish(msg);\nrclcpp::spin_some(node);\nrclcpp::spin_some(get_fake_node());\nstd::this_thread::sleep_for(dt);\ntime_passed += dt;\nif (time_passed > max_wait_time) {\nFAIL() << \"Did not receive a message soon enough.\";\n}\n}\nEXPECT_TRUE(last_received_msg->data);\nSUCCEED();\n}\n

Here only the TEST_F example is shown but a TEST_P usage is very similar with a little bit more boilerplate to set up all the parameter values, see test_fake_test_node.cpp for an example usage.

"},{"location":"common/autoware_global_parameter_loader/Readme/","title":"Autoware Global Parameter Loader","text":""},{"location":"common/autoware_global_parameter_loader/Readme/#autoware-global-parameter-loader","title":"Autoware Global Parameter Loader","text":"

This package sets common ROS parameters for each node.

"},{"location":"common/autoware_global_parameter_loader/Readme/#usage","title":"Usage","text":"

Add the following lines to the launch file of the node in which you want to get global parameters.

<!-- Global parameters -->\n<include file=\"$(find-pkg-share autoware_global_parameter_loader)/launch/global_params.launch.py\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n</include>\n

The vehicle model parameter is read from config/vehicle_info.param.yaml in the vehicle_model_description package.

"},{"location":"common/autoware_global_parameter_loader/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently only vehicle_info is loaded by this launcher.

"},{"location":"common/autoware_glog_component/","title":"glog_component","text":""},{"location":"common/autoware_glog_component/#glog_component","title":"glog_component","text":"

This package provides the glog (Google logging library) feature as a ROS 2 component library, which is used to dynamically load the glog feature into a container.

See the glog GitHub repository for the details of its features.

"},{"location":"common/autoware_glog_component/#example","title":"Example","text":"

When you load the glog_component in a container, the launch file can look like the following:

glog_component = ComposableNode(\n    package=\"autoware_glog_component\",\n    plugin=\"autoware::glog_component::GlogComponent\",\n    name=\"glog_component\",\n)\n\ncontainer = ComposableNodeContainer(\n    name=\"my_container\",\n    namespace=\"\",\n    package=\"rclcpp_components\",\n    executable=LaunchConfiguration(\"container_executable\"),\n    composable_node_descriptions=[\n        component1,\n        component2,\n        glog_component,\n    ],\n)\n
"},{"location":"common/autoware_goal_distance_calculator/Readme/","title":"autoware_goal_distance_calculator","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#autoware_goal_distance_calculator","title":"autoware_goal_distance_calculator","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes deviation of self-pose from goal pose.

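As an illustration of the published quantities, the lateral and longitudinal deviations can be understood as the self-pose error expressed in the goal-pose frame; the following is a sketch under that assumption, where \\((x_s, y_s, \\theta_s)\\) is the self pose and \\((x_g, y_g, \\theta_g)\\) is the goal pose.

\\[ \\begin{align} e_{\\mathrm{longitudinal}} & = \\cos(\\theta_g)(x_s - x_g) + \\sin(\\theta_g)(y_s - y_g) \\\\ e_{\\mathrm{lateral}} & = -\\sin(\\theta_g)(x_s - x_g) + \\cos(\\theta_g)(y_s - y_g) \\\\ e_{\\mathrm{yaw}} & = \\theta_s - \\theta_g \\end{align} \\]
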
"},{"location":"common/autoware_goal_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/mission_planning/route autoware_planning_msgs::msg::Route Used to get goal pose /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/autoware_goal_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description deviation/lateral tier4_debug_msgs::msg::Float64Stamped publish lateral deviation of self-pose from goal pose[m] deviation/longitudinal tier4_debug_msgs::msg::Float64Stamped publish longitudinal deviation of self-pose from goal pose[m] deviation/yaw tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[rad] deviation/yaw_deg tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[deg]"},{"location":"common/autoware_goal_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/autoware_goal_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period. [Hz]"},{"location":"common/autoware_goal_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation oneshot bool true publish deviations just once or repeatedly"},{"location":"common/autoware_goal_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_grid_map_utils/","title":"Grid Map Utils","text":""},{"location":"common/autoware_grid_map_utils/#grid-map-utils","title":"Grid Map Utils","text":""},{"location":"common/autoware_grid_map_utils/#overview","title":"Overview","text":"

This package contains a re-implementation of the grid_map::PolygonIterator used to iterate over all cells of a grid map contained inside some polygon.

"},{"location":"common/autoware_grid_map_utils/#algorithm","title":"Algorithm","text":"

This implementation uses the scan line algorithm, a common algorithm used to draw polygons on a rasterized image. The main idea of the algorithm, adapted to a grid map, is as follows:

More details on the scan line algorithm can be found in the References.

"},{"location":"common/autoware_grid_map_utils/#api","title":"API","text":"

The autoware::grid_map_utils::PolygonIterator follows the same API as the original grid_map::PolygonIterator.

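As a minimal usage sketch, the iterator can be used exactly like the original; the header path and the layer name below are assumptions for illustration.

#include <autoware/grid_map_utils/polygon_iterator.hpp>  // assumed header path\n\n#include <grid_map_core/grid_map_core.hpp>\n\n// Sum the values of all cells whose centers lie inside the polygon.\ndouble sumCellsInsidePolygon(const grid_map::GridMap & map, const grid_map::Polygon & polygon)\n{\ndouble sum = 0.0;\nfor (autoware::grid_map_utils::PolygonIterator iterator(map, polygon); !iterator.isPastEnd(); ++iterator) {\nsum += map.at(\"layer\", *iterator);  // \"layer\" is an assumed layer name\n}\nreturn sum;\n}\n
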
"},{"location":"common/autoware_grid_map_utils/#assumptions","title":"Assumptions","text":"

The behavior of the autoware::grid_map_utils::PolygonIterator is only guaranteed to match the grid_map::PolygonIterator if edges of the polygon do not exactly cross any cell center. In such a case, whether the crossed cell is considered inside or outside of the polygon can vary due to floating-point precision errors.

"},{"location":"common/autoware_grid_map_utils/#performances","title":"Performances","text":"

Benchmarking code is implemented in test/benchmarking.cpp and is also used to validate that the autoware::grid_map_utils::PolygonIterator behaves exactly like the grid_map::PolygonIterator.

The following figure shows a comparison of the runtime between the implementation of this package (autoware_grid_map_utils) and the original implementation (grid_map). The time measured includes the construction of the iterator and the iteration over all indexes and is shown using a logarithmic scale. Results were obtained varying the side size of a square grid map with 100 <= n <= 1000 (size=n means a grid of n x n cells), random polygons with a number of vertices 3 <= m <= 100 and with each parameter (n,m) repeated 10 times.

"},{"location":"common/autoware_grid_map_utils/#future-improvements","title":"Future improvements","text":"

There exist variations of the scan line algorithm for multiple polygons. These could be implemented if we want to iterate over the cells contained in at least one of multiple polygons.

The current implementation imitates the behavior of the original grid_map::PolygonIterator, where a cell is selected if its center position is inside the polygon. This behavior could be changed, for example, to return all cells overlapped by the polygon.

"},{"location":"common/autoware_grid_map_utils/#references","title":"References","text":""},{"location":"common/autoware_interpolation/","title":"Interpolation package","text":""},{"location":"common/autoware_interpolation/#interpolation-package","title":"Interpolation package","text":"

This package supplies linear and spline interpolation functions.

"},{"location":"common/autoware_interpolation/#linear-interpolation","title":"Linear Interpolation","text":"

lerp(src_val, dst_val, ratio) (for scalar interpolation) interpolates src_val and dst_val with ratio. This will be replaced with std::lerp(src_val, dst_val, ratio) in C++20.

lerp(base_keys, base_values, query_keys) (for vector interpolation) applies linear interpolation to each pair of consecutive points whose x values are base_keys and whose y values are base_values. Then it calculates interpolated values on the y-axis for query_keys on the x-axis.

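For illustration, a minimal sketch of both forms; the autoware::interpolation namespace is assumed here from the package name.

#include <vector>\n\n// Scalar interpolation: halfway between 0.0 and 10.0 gives 5.0.\nconst double val = autoware::interpolation::lerp(0.0, 10.0, 0.5);\n\n// Vector interpolation: query values on the piecewise linear curve through (base_keys, base_values).\nconst std::vector<double> base_keys{0.0, 1.0, 2.0};\nconst std::vector<double> base_values{0.0, 2.0, 4.0};\nconst std::vector<double> query_keys{0.5, 1.5};\nconst auto query_values = autoware::interpolation::lerp(base_keys, base_values, query_keys);  // {1.0, 3.0}\n
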
"},{"location":"common/autoware_interpolation/#spline-interpolation","title":"Spline Interpolation","text":"

spline(base_keys, base_values, query_keys) (for vector interpolation) applies spline interpolation to each pair of consecutive points whose x values are base_keys and whose y values are base_values. Then it calculates interpolated values on the y-axis for query_keys on the x-axis.

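A corresponding sketch for the spline variant, under the same namespace assumption as above.

#include <vector>\n\nconst std::vector<double> base_keys{0.0, 1.0, 2.0, 3.0};\nconst std::vector<double> base_values{0.0, 1.0, 0.0, 1.0};\nconst std::vector<double> query_keys{0.5, 1.5, 2.5};\n\n// Smoothly interpolated values at the query keys.\nconst auto query_values = autoware::interpolation::spline(base_keys, base_values, query_keys);\n
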
"},{"location":"common/autoware_interpolation/#evaluation-of-calculation-cost","title":"Evaluation of calculation cost","text":"

We evaluated the calculation cost of spline interpolation for 100 points and adopted the fastest method, the tridiagonal matrix algorithm. The other methods existed in the spline_interpolation package, which has been removed from Autoware.

Method Calculation time Tridiagonal Matrix Algorithm 0.007 [ms] Preconditioned Conjugate Gradient 0.024 [ms] Successive Over-Relaxation 0.074 [ms]"},{"location":"common/autoware_interpolation/#spline-interpolation-algorithm","title":"Spline Interpolation Algorithm","text":"

Assuming that the size of base_keys (\\(x_i\\)) and base_values (\\(y_i\\)) are \\(N + 1\\), we aim to calculate spline interpolation with the following equation to interpolate between \\(y_i\\) and \\(y_{i+1}\\).

\\[ Y_i(x) = a_i (x - x_i)^3 + b_i (x - x_i)^2 + c_i (x - x_i) + d_i \\ \\ \\ (i = 0, \\dots, N-1) \\]

Constraints on spline interpolation are as follows. The number of constraints is \\(4N\\), which is equal to the number of variables of spline interpolation.

\\[ \\begin{align} Y_i (x_i) & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y_i (x_{i+1}) & = y_{i+1} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y'_i (x_{i+1}) & = Y'_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_i (x_{i+1}) & = Y''_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_0 (x_0) & = 0 \\\\ Y''_{N-1} (x_N) & = 0 \\end{align} \\]

According to this article, spline interpolation is formulated as the following linear equation.

\\[ \\begin{align} \\begin{pmatrix} 2(h_0 + h_1) & h_1 \\\\ h_1 & 2 (h_1 + h_2) & h_2 & & O \\\\ & & & \\ddots \\\\ O & & & & h_{N-2} & 2 (h_{N-2} + h_{N-1}) \\end{pmatrix} \\begin{pmatrix} v_1 \\\\ v_2 \\\\ v_3 \\\\ \\vdots \\\\ v_{N-1} \\end{pmatrix}= \\begin{pmatrix} w_1 \\\\ w_2 \\\\ w_3 \\\\ \\vdots \\\\ w_{N-1} \\end{pmatrix} \\end{align} \\]

where

\\[ \\begin{align} h_i & = x_{i+1} - x_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ w_i & = 6 \\left(\\frac{y_{i+1} - y_i}{h_i} - \\frac{y_i - y_{i-1}}{h_{i-1}}\\right) \\ \\ \\ (i = 1, \\dots, N-1) \\end{align} \\]

The coefficient matrix of this linear equation is a tridiagonal matrix. Therefore, it can be solved with the tridiagonal matrix algorithm, which can solve linear equations without gradient descent methods.

Solving this linear equation with the tridiagonal matrix algorithm, we can calculate the coefficients of spline interpolation as follows.

\\[ \\begin{align} a_i & = \\frac{v_{i+1} - v_i}{6 (x_{i+1} - x_i)} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ b_i & = \\frac{v_i}{2} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ c_i & = \\frac{y_{i+1} - y_i}{x_{i+1} - x_i} - \\frac{1}{6}(x_{i+1} - x_i)(2 v_i + v_{i+1}) \\ \\ \\ (i = 0, \\dots, N-1) \\\\ d_i & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\end{align} \\]"},{"location":"common/autoware_interpolation/#tridiagonal-matrix-algorithm","title":"Tridiagonal Matrix Algorithm","text":"

We solve the tridiagonal linear equation according to this article, where the variables of the linear equation are expressed as follows in the implementation.

\\[ \\begin{align} \\begin{pmatrix} b_0 & c_0 & & \\\\ a_0 & b_1 & c_1 & O \\\\ & & \\ddots \\\\ O & & a_{N-2} & b_{N-1} \\end{pmatrix} x = \\begin{pmatrix} d_0 \\\\ d_1 \\\\ d_2 \\\\ \\vdots \\\\ d_{N-1} \\end{pmatrix} \\end{align} \\]"},{"location":"common/autoware_motion_utils/","title":"Motion Utils package","text":""},{"location":"common/autoware_motion_utils/#motion-utils-package","title":"Motion Utils package","text":""},{"location":"common/autoware_motion_utils/#definition-of-terms","title":"Definition of terms","text":""},{"location":"common/autoware_motion_utils/#segment","title":"Segment","text":"

Segment in Autoware is the line segment between two successive points as follows.

The nearest segment index and the nearest point index to a certain position are not always the same. Therefore, we prepare two different utility functions: one to calculate the nearest index for points and one for segments.

"},{"location":"common/autoware_motion_utils/#nearest-index-search","title":"Nearest index search","text":"

In this section, the nearest index search and the nearest segment index search are explained.

We have the same set of functions for the nearest index search and the nearest segment index search. Taking the nearest index search as an example, there are two types of functions.

The first function finds the nearest index with distance and yaw thresholds.

template <class T>\nsize_t findFirstNearestIndexWithSoftConstraints(\nconst T & points, const geometry_msgs::msg::Pose & pose,\nconst double dist_threshold = std::numeric_limits<double>::max(),\nconst double yaw_threshold = std::numeric_limits<double>::max());\n

This function finds the first local solution within thresholds. The reason to find the first local one is to deal with some edge cases explained in the next subsection.

The threshold arguments have default values, so you can decide which thresholds to pass to the function; the three patterns are listed below, followed by a short sketch.

  1. When both the distance and yaw thresholds are given.
  2. When only the distance threshold is given.
  3. When no thresholds are given.

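For illustration, the three patterns map to calls like the following; the variable names are hypothetical.

// 1. Both the distance and yaw thresholds are given.\nconst size_t idx1 = findFirstNearestIndexWithSoftConstraints(points, pose, dist_threshold, yaw_threshold);\n\n// 2. Only the distance threshold is given (the yaw threshold keeps its default).\nconst size_t idx2 = findFirstNearestIndexWithSoftConstraints(points, pose, dist_threshold);\n\n// 3. No thresholds are given (both keep their defaults).\nconst size_t idx3 = findFirstNearestIndexWithSoftConstraints(points, pose);\n
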
The second function finds the nearest index in the lane whose id is lane_id.

size_t findNearestIndexFromLaneId(\nconst tier4_planning_msgs::msg::PathWithLaneId & path,\nconst geometry_msgs::msg::Point & pos, const int64_t lane_id);\n
"},{"location":"common/autoware_motion_utils/#application-to-various-object","title":"Application to various object","text":"

Many node packages often calculate the nearest index of objects. We will explain the recommended method to calculate it.

"},{"location":"common/autoware_motion_utils/#nearest-index-for-the-ego","title":"Nearest index for the ego","text":"

Assuming that the path length before the ego is short enough, we expect to find the correct nearest index in the following edge cases by findFirstNearestIndexWithSoftConstraints with both distance and yaw thresholds. Blue circles describe the distance threshold from the base link position, and the two blue lines describe the yaw threshold against the base link orientation. Among the points in these cases, the correct nearest point, shown in red, can be found.

Therefore, the implementation is as follows.

const size_t ego_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t ego_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\n
"},{"location":"common/autoware_motion_utils/#nearest-index-for-dynamic-objects","title":"Nearest index for dynamic objects","text":"

For the ego nearest index, the orientation is considered in addition to the position since the ego is supposed to follow the points. However, for dynamic objects (e.g., a predicted object), the orientation may sometimes differ from the points order, e.g., a dynamic object driving backward while the ego is driving forward.

Therefore, the yaw threshold should not be considered for the dynamic object. The implementation is as follows.

const size_t dynamic_obj_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\nconst size_t dynamic_obj_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\n
"},{"location":"common/autoware_motion_utils/#nearest-index-for-traffic-objects","title":"Nearest index for traffic objects","text":"

In lanelet maps, traffic objects belong to a specific lane. With this lane's id, the correct nearest index can be found.

The implementation is as follows.

// first extract `lane_id` which the traffic object belongs to.\nconst size_t traffic_obj_nearest_idx = findNearestIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\nconst size_t traffic_obj_nearest_seg_idx = findNearestSegmentIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\n
"},{"location":"common/autoware_motion_utils/#for-developers","title":"For developers","text":"

Some of the template functions in trajectory.hpp are mostly used for specific types (autoware_planning_msgs::msg::PathPoint, tier4_planning_msgs::msg::PathPointWithLaneId, autoware_planning_msgs::msg::TrajectoryPoint), so they are exported as extern template functions to speed up compilation time.

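As background, the extern template mechanism works as in the following generic illustration; the names here are hypothetical and are not the actual declarations in trajectory.hpp.

#include <cmath>\n#include <cstddef>\n#include <vector>\n\nstruct Point\n{\ndouble x;\ndouble y;\n};\n\n// In the header: the template is visible to every including translation unit,\n// but the common instantiation is declared extern so it is not re-instantiated there.\ntemplate <class T>\ndouble calcLength(const std::vector<T> & points)\n{\ndouble length = 0.0;\nfor (std::size_t i = 1; i < points.size(); ++i) {\nlength += std::hypot(points[i].x - points[i - 1].x, points[i].y - points[i - 1].y);\n}\nreturn length;\n}\n\nextern template double calcLength<Point>(const std::vector<Point> & points);\n\n// In exactly one source file: the explicit instantiation is compiled once.\ntemplate double calcLength<Point>(const std::vector<Point> & points);\n
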
The autoware_motion_utils.hpp header file was removed because source files that directly or indirectly included it took a long time to preprocess.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/","title":"vehicle utils","text":""},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#vehicle-utils","title":"vehicle utils","text":"

Vehicle utils provides a convenient library used to check vehicle status.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#feature","title":"Feature","text":"

The library contains the following classes.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#vehicle_stop_checker","title":"vehicle_stop_checker","text":"

This class checks whether the vehicle is stopped based on the localization result.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#parameters","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s]"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#member-functions","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#example-usage","title":"Example Usage","text":"

Necessary includes:

#include <autoware/universe_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_stop_checker_ = std::make_unique<VehicleStopChecker>(this);\n}\n\nstd::unique_ptr<VehicleStopChecker> vehicle_stop_checker_;\n\nbool sampleFunc();\n\n...\n}\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_stop_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_stop_checker_->isVehicleStopped(3.0);\n\n...\n}\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#vehicle_arrival_checker","title":"vehicle_arrival_checker","text":"

This class checks whether the vehicle has arrived at a stop point based on the localization and planning results.

"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#subscribed-topics_1","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry /planning/scenario_planning/trajectory autoware_planning_msgs::msg::Trajectory trajectory"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#parameters_1","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s] th_arrived_distance_m double 1.0 threshold distance to check if vehicle has arrived at target point [m]"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#member-functions_1","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
bool isVehicleStoppedAtStopPoint(const double stop_duration)\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#example-usage_1","title":"Example Usage","text":"

Necessary includes:

#include <autoware/universe_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_arrival_checker_ = std::make_unique<VehicleArrivalChecker>(this);\n}\n\nstd::unique_ptr<VehicleArrivalChecker> vehicle_arrival_checker_;\n\nbool sampleFunc();\n\n...\n};\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_arrival_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_arrival_checker_->isVehicleStopped(3.0);\n\n...\n\nconst auto result_3 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint();\n\n...\n\nconst auto result_4 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint(3.0);\n\n...\n}\n
"},{"location":"common/autoware_motion_utils/docs/vehicle/vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

vehicle_stop_checker and vehicle_arrival_checker cannot check whether the vehicle has been stopped for longer than velocity_buffer_time_sec seconds.

"},{"location":"common/autoware_object_recognition_utils/","title":"autoware_object_recognition_utils","text":""},{"location":"common/autoware_object_recognition_utils/#autoware_object_recognition_utils","title":"autoware_object_recognition_utils","text":""},{"location":"common/autoware_object_recognition_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions that are useful across the object recognition module. It may include functions for converting between different data and message types, and for performing common operations on them.

"},{"location":"common/autoware_path_distance_calculator/Readme/","title":"autoware_path_distance_calculator","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#autoware_path_distance_calculator","title":"autoware_path_distance_calculator","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes the distance from the path point closest to the self-position to the end point of the path. Note that this distance is the arc length along the path, not the Euclidean distance between the two points.

"},{"location":"common/autoware_path_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/scenario_planning/lane_driving/behavior_planning/path autoware_planning_msgs::msg::Path Reference path /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/autoware_path_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description ~/distance tier4_debug_msgs::msg::Float64Stamped Publish a distance from the closest path point from the self-position to the end point of the path[m]"},{"location":"common/autoware_path_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/autoware_path_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"common/autoware_path_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"common/autoware_path_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_polar_grid/Readme/","title":"Polar Grid","text":""},{"location":"common/autoware_polar_grid/Readme/#polar-grid","title":"Polar Grid","text":""},{"location":"common/autoware_polar_grid/Readme/#purpose","title":"Purpose","text":"

This plugin displays a polar grid around the ego vehicle in RViz.

"},{"location":"common/autoware_polar_grid/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation Max Range float 200.0f max range for polar grid. [m] Wave Velocity float 100.0f wave ring velocity. [m/s] Delta Range float 10.0f wave ring distance for polar grid. [m]"},{"location":"common/autoware_polar_grid/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_pyplot/","title":"autoware_pyplot","text":""},{"location":"common/autoware_pyplot/#autoware_pyplot","title":"autoware_pyplot","text":"

This package provides a C++ interface to the well-known matplotlib library using a pybind11 backend.

"},{"location":"common/autoware_pyplot/#usage","title":"usage","text":"

In your main function, set up the Python context and import matplotlib:

#include <autoware/pyplot/pyplot.hpp>\n#include <pybind11/embed.h>\n\n// in main...\npy::scoped_interpreter guard{};\nauto plt = autoware::pyplot::import();\n

Then you can use the major functionalities of matplotlib in almost the same way as in native Python code.

{\nplt.plot(Args(std::vector<int>({1, 3, 2, 4})), Kwargs(\"color\"_a = \"blue\", \"linewidth\"_a = 1.0));\nplt.xlabel(Args(\"x-title\"));\nplt.ylabel(Args(\"y-title\"));\nplt.title(Args(\"title\"));\nplt.xlim(Args(0, 5));\nplt.ylim(Args(0, 5));\nplt.grid(Args(true));\nplt.savefig(Args(\"test_single_plot.png\"));\n}\n\n{\nauto [fig, axes] = plt.subplots(1, 2);\nauto & ax1 = axes[0];\nauto & ax2 = axes[1];\n\nax1.set_aspect(Args(\"equal\"));\nax2.set_aspect(Args(\"equal\"));\n}\n
"},{"location":"common/autoware_signal_processing/","title":"Signal Processing Methods","text":""},{"location":"common/autoware_signal_processing/#signal-processing-methods","title":"Signal Processing Methods","text":"

In this package, we present signal-processing-related methods for Autoware applications. The following functionalities are available in the current version.

The low-pass filter currently supports only 1-D low-pass filtering.

"},{"location":"common/autoware_signal_processing/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/","title":"ButterworthFilter","text":""},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#butterworth-low-pass-filter-design-tool-class","title":"Butterworth Low-pass Filter Design Tool Class","text":"

This Butterworth low-pass filter design tool can be used to design a Butterworth filter in continuous and discrete time from the given specifications of the filter performance. The filter is implemented as a class; the default constructor creates the object without any arguments.

The filter can be prepared in three ways. If the filter specifications are known, such as the pass-band and stop-band frequencies (Wp and Ws) together with the pass-band and stop-band ripple magnitudes (Ap and As), one can call the filter's buttord method with these arguments to obtain the recommended filter order (N) and cut-off frequency (Wc_rad_sec [rad/s]).

Figure 1. Butterworth Low-pass filter specification from [1].

An example call is demonstrated below:

ButterworthFilter bf;\n\nWp = 2.0; // pass-band frequency [rad/sec]\nWs = 3.0; // stop-band frequency [rad/sec]\nAp = 6.0; // pass-band ripple mag or loss [dB]\nAs = 20.0; // stop band ripple attenuation [dB]\n\n// Computing filter coefficients from the specs\nbf.Buttord(Wp, Ws, Ap, As);\n\n// Get the computed order and Cut-Off frequency\nsOrderCutOff NWc = bf.getOrderCutOff();\n\ncout << \" The computed order is ;\" << NWc.N << endl;\ncout << \" The computed cut-off frequency is ;\" << NWc.Wc_rad_sec << endl;\n

The filter order and cut-off frequency can be obtained in a struct using the bf.getOrderCutOff() method. These specs can be printed on the screen by calling the PrintFilterSpecs() method. If the user would like to define the order and cut-off frequency manually, the setter methods for these variables can be called to set the filter order (N) and the desired cut-off frequency (Wc_rad_sec [rad/sec]) for the filter.

"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#obtaining-filter-transfer-functions","title":"Obtaining Filter Transfer Functions","text":"

The discrete transfer function of the filter requires the roots and gain of the continuous-time transfer function. Therefore, computeContinuousTimeTF() must be called first to create the continuous-time transfer function of the filter:

bf.computeContinuousTimeTF();\n

The computed continuous-time transfer function roots can be printed on the screen using the methods:

bf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n

The resulting screen output for a 5th order filter is demonstrated below.

 Roots of Continuous Time Filter Transfer Function Denominator are :\n-0.585518 + j 1.80204\n-1.53291 + j 1.11372\n-1.89478 + j 2.32043e-16\n-1.53291 + j -1.11372\n-0.585518 + j -1.80204\n\n\nThe Continuous-Time Transfer Function of the Filter is ;\n\n                                   24.422\n-------------------------------------------------------------------------------\n1.000 *s[5] + 6.132 *s[4] + 18.798 *s[3] + 35.619 *s[2] + 41.711 *s[1] + 24.422\n
"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#discrete-time-transfer-function-difference-equations","title":"Discrete Time Transfer Function (Difference Equations)","text":"

The digital filter equivalent of the continuous-time definitions is produced by using the bilinear transformation. When creating the discrete-time function of the ButterworthFilter object, its numerator (Bn) and denominator (An) coefficients are stored in a vector of filter order size N.

The discrete transfer function method is called using:

bf.computeDiscreteTimeTF();\nbf.PrintDiscreteTimeTF();\n

The results are printed on the screen as follows. The Discrete-Time Transfer Function of the Filter is:

0.191 *z[-5] + 0.956 *z[-4] + 1.913 *z[-3] + 1.913 *z[-2] + 0.956 *z[-1] + 0.191\n--------------------------------------------------------------------------------\n1.000 *z[-5] + 1.885 *z[-4] + 1.888 *z[-3] + 1.014 *z[-2] + 0.298 *z[-1] + 0.037\n

and the associated difference coefficients An and Bn can be obtained within a struct:

sDifferenceAnBn AnBn = bf.getAnBn();\n

The difference coefficients appear in the filtering equation in the form:

An * Y_filtered = Bn * Y_unfiltered\n

To filter a signal given in vector form, these difference coefficients can be applied directly, as sketched below.

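The following is a rough sketch of that filtering step, not the library's own API; the applyDifferenceEquation helper and the assumed coefficient ordering (z^0 term first, with an[0] as the normalizing coefficient) are illustrative only.

#include <cstddef>\n#include <vector>\n\n// Sketch: apply An * Y_filtered = Bn * Y_unfiltered as a direct-form IIR\n// difference equation. bn/an are assumed ordered from the z^0 term down to\n// the z^-N term, with an[0] as the normalizing coefficient.\nstd::vector<double> applyDifferenceEquation(\nconst std::vector<double> & bn, const std::vector<double> & an,\nconst std::vector<double> & x)\n{\nstd::vector<double> y(x.size(), 0.0);\nfor (std::size_t n = 0; n < x.size(); ++n) {\ndouble acc = 0.0;\nfor (std::size_t k = 0; k < bn.size(); ++k) {\nif (n >= k) acc += bn[k] * x[n - k];  // feed-forward terms\n}\nfor (std::size_t k = 1; k < an.size(); ++k) {\nif (n >= k) acc -= an[k] * y[n - k];  // feedback terms\n}\ny[n] = acc / an[0];\n}\nreturn y;\n}\n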
"},{"location":"common/autoware_signal_processing/documentation/ButterworthFilter/#calling-filter-by-a-specified-cut-off-and-sampling-frequencies-in-hz","title":"Calling Filter by a specified cut-off and sampling frequencies [in Hz]","text":"

The Butterworth filter can also be created by defining the desired order (N), a cut-off frequency (fc in [Hz]), and a sampling frequency (fs in [Hz]). In this method, the cut-off frequency is pre-warped with respect to the sampling frequency [1, 2] to match the continuous and digital filter frequencies.

The filter is prepared by the following calling options:

 // 3rd METHOD defining a sampling frequency together with the cut-off fc, fs\n bf.setOrder(2);\n bf.setCutOffFrequency(10, 100);\n

At this step, we define a boolean variable that determines whether to use the pre-warping option.

// Compute Continuous Time TF\nbool use_sampling_frequency = true;\nbf.computeContinuousTimeTF(use_sampling_frequency);\nbf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n\n// Compute Discrete Time TF\nbf.computeDiscreteTimeTF(use_sampling_frequency);\nbf.PrintDiscreteTimeTF();\n

References:

  1. Manolakis, Dimitris G., and Vinay K. Ingle. Applied digital signal processing: theory and practice. Cambridge University Press, 2011.

  2. https://en.wikibooks.org/wiki/Digital_Signal_Processing/Bilinear_Transform

"},{"location":"common/autoware_test_utils/","title":"Test Utils","text":""},{"location":"common/autoware_test_utils/#test-utils","title":"Test Utils","text":""},{"location":"common/autoware_test_utils/#background","title":"Background","text":"

Several of Autoware's components and modules have already adopted unit testing, so a common library to ease the process of writing unit tests is necessary.

"},{"location":"common/autoware_test_utils/#purpose","title":"Purpose","text":"

The objective of test_utils is to develop a unit testing library for Autoware components. This library will include:

"},{"location":"common/autoware_test_utils/#available-maps","title":"Available Maps","text":"

The following maps are available here

"},{"location":"common/autoware_test_utils/#common","title":"Common","text":"

The common map contains multiple types of usable inputs, including shoulder lanes, intersections, and some regulatory elements. The common map is named lanelet2_map.osm in the folder.

"},{"location":"common/autoware_test_utils/#2-km-straight","title":"2 km Straight","text":"

The 2 km straight lanelet map consists of two lanes that run in the same direction. The map is named 2km_test.osm.

The following illustrates the design of the map.

"},{"location":"common/autoware_test_utils/#road_shoulders","title":"road_shoulders","text":"

The road_shoulders lanelet map consists of a variety of pick-up/drop-off site maps with road_shoulder tags, including:

You can easily launch planning_simulator by

ros2 launch autoware_test_utils psim_road_shoulder.launch.xml vehicle_model:=<> sensor_model:=<> use_sim_time:=true\n
"},{"location":"common/autoware_test_utils/#intersection","title":"intersection","text":"

The intersections lanelet map consists of a variety of intersections, including:

You can easily launch planning_simulator by

ros2 launch autoware_test_utils psim_intersection.launch.xml vehicle_model:=<> sensor_model:=<> use_sim_time:=true\n
"},{"location":"common/autoware_test_utils/#example-use-cases","title":"Example use cases","text":""},{"location":"common/autoware_test_utils/#autoware-planning-test-manager","title":"Autoware Planning Test Manager","text":"

The goal of the Autoware Planning Test Manager is to test planning module nodes. The PlanningInterfaceTestManager class (source code) creates wrapper functions based on the test_utils functions.

"},{"location":"common/autoware_test_utils/#generate-test-data-for-unit-testing","title":"Generate test data for unit testing","text":"

As presented in this PR description, the user can save a snapshot of the scene to a yaml file while running Planning Simulation on the test map.

ros2 launch autoware_test_utils psim_road_shoulder.launch.xml\nros2 launch autoware_test_utils psim_intersection.launch.xml\n

It uses the autoware sample_vehicle_description and sample_sensor_kit by default, and autoware_test_utils/config/test_vehicle_info.param.yaml is exactly the same as that of sample_vehicle_description. The vehicle_model/sensor_model arguments can be specified if needed.

ros2 service call /autoware_test_utils/topic_snapshot_saver std_srvs/srv/Empty \\{\\}\n

The list and field names of the topics to be saved are specified in config/sample_topic_snapshot.yaml.

# setting\nfields:\n- name: self_odometry # this is the field name for this topic\ntype: Odometry # the abbreviated type name of this topic\ntopic: /localization/kinematic_state # the name of this topic\n\n# output\nself_odometry:\n- header: ...\n...\n

Each field can be parsed to a ROS message type using the functions defined in autoware_test_utils/mock_data_parser.hpp.

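As an illustrative sketch, the saved snapshot can be read back with plain yaml-cpp before handing the individual entries to those parse functions; the file name topic_snapshot.yaml and the field layout follow the sample above and are placeholders.

#include <yaml-cpp/yaml.h>\n\n#include <iostream>\n#include <string>\n\nint main()\n{\n// Load a snapshot saved by the topic_snapshot_saver service (path is a placeholder).\nconst YAML::Node snapshot = YAML::LoadFile(\"topic_snapshot.yaml\");\n\n// Each field name (e.g. \"self_odometry\") maps to a list of recorded messages.\nfor (const auto & msg : snapshot[\"self_odometry\"]) {\n// Individual entries can then be converted to ROS messages with the parse\n// functions in autoware_test_utils/mock_data_parser.hpp.\nstd::cout << msg[\"header\"][\"frame_id\"].as<std::string>() << std::endl;\n}\nreturn 0;\n}\n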
"},{"location":"common/autoware_testing/design/autoware_testing-design/","title":"autoware_testing","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#autoware_testing","title":"autoware_testing","text":"

This is the design document for the autoware_testing package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

The package aims to provide a unified way of adding standard testing functionality to a package, currently supporting:

"},{"location":"common/autoware_testing/design/autoware_testing-design/#design","title":"Design","text":"

Uses ros_testing (which is an extension of launch_testing) and provides some parametrized, reusable standard tests to run.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Parametrization is limited to the package name, executable name, parameters filename, and executable arguments. The test namespace is set as 'test'. The parameters file for the package is expected to be in the param directory inside the package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

To add a smoke test to your package tests, add a test dependency on autoware_testing to package.xml:

<test_depend>autoware_testing</test_depend>\n

and add the following two lines to CMakeLists.txt in the if(BUILD_TESTING) section:

find_package(autoware_testing REQUIRED)\nadd_smoke_test(<package_name> <executable_name> [PARAM_FILENAME <param_filename>] [EXECUTABLE_ARGUMENTS <arguments>])\n

Where

<package_name> - [required] tested node package name.

<executable_name> - [required] tested node executable name.

<param_filename> - [optional] param filename. The default value is test.param.yaml. Required mostly in situations where there are multiple smoke tests in a package and each requires a different parameter set.

<arguments> - [optional] arguments passed to the executable. By default no arguments are passed.

which adds a <executable_name>_smoke_test test to the suite.

Example test result:

build/<package_name>/test_results/<package_name>/<executable_name>_smoke_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
"},{"location":"common/autoware_testing/design/autoware_testing-design/#references-external-links","title":"References / External links","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#related-issues","title":"Related issues","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/","title":"Traffic Light Recognition Marker Publisher","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#traffic-light-recognition-marker-publisher","title":"Traffic Light Recognition Marker Publisher","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#purpose","title":"Purpose","text":"

This node publishes a marker array for visualizing traffic signal recognition results in RViz.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#input","title":"Input","text":"Name Type Description /map/vector_map autoware_map_msgs::msg::LaneletMapBin Vector map for getting traffic signal information /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The result of traffic signal recognition"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals_marker visualization_msgs::msg::MarkerArray Publish a marker array for visualization of traffic signal recognition results"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#parameters","title":"Parameters","text":"

None.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"common/autoware_traffic_light_recognition_marker_publisher/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/autoware_traffic_light_utils/","title":"autoware_traffic_light_utils","text":""},{"location":"common/autoware_traffic_light_utils/#autoware_traffic_light_utils","title":"autoware_traffic_light_utils","text":""},{"location":"common/autoware_traffic_light_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions that are useful across the traffic light recognition module. It may include functions for handling ROI types, converting between different data and message types, and other common operations related to them.

"},{"location":"common/autoware_trajectory/","title":"Autoware Trajectory","text":""},{"location":"common/autoware_trajectory/#autoware-trajectory","title":"Autoware Trajectory","text":"

This package provides classes to manage/manipulate Trajectory.

"},{"location":"common/autoware_trajectory/#example-usage","title":"Example Usage","text":"

This section describes example usage of Trajectory<autoware_planning_msgs::msg::PathPoint>.

"},{"location":"common/autoware_universe_utils/","title":"autoware_universe_utils","text":""},{"location":"common/autoware_universe_utils/#autoware_universe_utils","title":"autoware_universe_utils","text":""},{"location":"common/autoware_universe_utils/#purpose","title":"Purpose","text":"

This package contains many common functions used by other packages, so please refer to them as needed.

"},{"location":"common/autoware_universe_utils/#for-developers","title":"For developers","text":"

The autoware_universe_utils.hpp header file was removed because source files that directly or indirectly included it took a long time to preprocess.

"},{"location":"common/autoware_universe_utils/#autowareuniverse_utils","title":"autoware::universe_utils","text":""},{"location":"common/autoware_universe_utils/#systems","title":"systems","text":""},{"location":"common/autoware_universe_utils/#autowareuniverse_utilstimekeeper","title":"autoware::universe_utils::TimeKeeper","text":""},{"location":"common/autoware_universe_utils/#constructor","title":"Constructor","text":"
template <typename... Reporters>\nexplicit TimeKeeper(Reporters... reporters);\n
"},{"location":"common/autoware_universe_utils/#methods","title":"Methods","text":" "},{"location":"common/autoware_universe_utils/#note","title":"Note","text":" "},{"location":"common/autoware_universe_utils/#example","title":"Example","text":"
#include <rclcpp/rclcpp.hpp>\n\n#include <std_msgs/msg/string.hpp>\n\n#include <chrono>\n#include <iostream>\n#include <memory>\n#include <thread>\n\nclass ExampleNode : public rclcpp::Node\n{\npublic:\nExampleNode() : Node(\"time_keeper_example\")\n{\npublisher_ =\ncreate_publisher<autoware::universe_utils::ProcessingTimeDetail>(\"processing_time\", 1);\n\ntime_keeper_ = std::make_shared<autoware::universe_utils::TimeKeeper>(publisher_, &std::cerr);\n// You can also add a reporter later by add_reporter.\n// time_keeper_->add_reporter(publisher_);\n// time_keeper_->add_reporter(&std::cerr);\n\ntimer_ =\ncreate_wall_timer(std::chrono::seconds(1), std::bind(&ExampleNode::func_a, this));\n}\n\nprivate:\nstd::shared_ptr<autoware::universe_utils::TimeKeeper> time_keeper_;\nrclcpp::Publisher<autoware::universe_utils::ProcessingTimeDetail>::SharedPtr publisher_;\nrclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_str_;\nrclcpp::TimerBase::SharedPtr timer_;\n\nvoid func_a()\n{\n// Start constructing ProcessingTimeTree (because func_a is the root function)\nautoware::universe_utils::ScopedTimeTrack st(\"func_a\", *time_keeper_);\nstd::this_thread::sleep_for(std::chrono::milliseconds(1));\ntime_keeper_->comment(\"This is a comment for func_a\");\nfunc_b();\n// End constructing ProcessingTimeTree. After this, the tree will be reported (publishing\n// message and outputting to std::cerr)\n}\n\nvoid func_b()\n{\nautoware::universe_utils::ScopedTimeTrack st(\"func_b\", *time_keeper_);\nstd::this_thread::sleep_for(std::chrono::milliseconds(2));\ntime_keeper_->comment(\"This is a comment for func_b\");\nfunc_c();\n}\n\nvoid func_c()\n{\nautoware::universe_utils::ScopedTimeTrack st(\"func_c\", *time_keeper_);\nstd::this_thread::sleep_for(std::chrono::milliseconds(3));\ntime_keeper_->comment(\"This is a comment for func_c\");\n}\n};\n\nint main(int argc, char ** argv)\n{\nrclcpp::init(argc, argv);\nauto node = std::make_shared<ExampleNode>();\nrclcpp::spin(node);\nrclcpp::shutdown();\nreturn 0;\n}\n
"},{"location":"common/autoware_universe_utils/#autowareuniverse_utilsscopedtimetrack","title":"autoware::universe_utils::ScopedTimeTrack","text":""},{"location":"common/autoware_universe_utils/#description","title":"Description","text":"

Class for automatically tracking the processing time of a function within a scope.

"},{"location":"common/autoware_universe_utils/#constructor_1","title":"Constructor","text":"
ScopedTimeTrack(const std::string & func_name, TimeKeeper & time_keeper);\n
"},{"location":"common/autoware_universe_utils/#destructor","title":"Destructor","text":"
~ScopedTimeTrack();\n
"},{"location":"common/autoware_universe_utils/third_party_licenses/opencv-license/","title":"Opencv license","text":"
                             Apache License\n                       Version 2.0, January 2004\n                    http://www.apache.org/licenses/\n

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

  1. Definitions.

    \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

    \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

    \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

    \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.

    \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

    \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

    \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

    \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

    \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"

    \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

  2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

  3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

  4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

    (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

    (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

    (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

    (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

    You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

  5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

  6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

  7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

  8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

  9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

  To apply the Apache License to your work, attach the following\n  boilerplate notice, with the fields enclosed by brackets \"[]\"\n  replaced with your own identifying information. (Don't include\n  the brackets!)  The text should be enclosed in the appropriate\n  comment syntax for the file format. We also recommend that a\n  file or class name and description of purpose be included on the\n  same \"printed page\" as the copyright notice for easier\n  identification within third-party archives.\n

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0\n

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"common/autoware_vehicle_info_utils/Readme/","title":"Vehicle Info Util","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#vehicle-info-util","title":"Vehicle Info Util","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#purpose","title":"Purpose","text":"

This package is used to get vehicle info parameters.

"},{"location":"common/autoware_vehicle_info_utils/Readme/#description","title":"Description","text":"

Here, you can check the vehicle dimensions in more detail. The current format supports only the Ackermann model. This file defines the model assumed in Autoware path planning, control, etc., and does not represent the exact physical model. If a model other than the Ackermann model is used, it is assumed that a vehicle interface will be designed to adapt the control output to the model.

"},{"location":"common/autoware_vehicle_info_utils/Readme/#versioning-policy","title":"Versioning Policy","text":"

We have implemented a versioning system for the vehicle_info.param.yaml file to ensure clarity and consistency in file format across different versions of Autoware and its external applications. Please see discussion for the details.

"},{"location":"common/autoware_vehicle_info_utils/Readme/#how-to-operate","title":"How to Operate","text":"
/**:\nros__parameters:\n# version: 0.1.0 # Uncomment and update this line for future format changes.\nwheel_radius: 0.383\n...\n
"},{"location":"common/autoware_vehicle_info_utils/Readme/#why-versioning","title":"Why Versioning?","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#scripts","title":"Scripts","text":""},{"location":"common/autoware_vehicle_info_utils/Readme/#minimum-turning-radius","title":"Minimum turning radius","text":"
$ ros2 run autoware_vehicle_info_utils min_turning_radius_calculator.py\nyaml path is /home/autoware/pilot-auto/install/autoware_vehicle_info_utils/share/autoware_vehicle_info_utils/config/vehicle_info.param.yaml\nMinimum turning radius is 3.253042620027102 [m] for rear, 4.253220695862465 [m] for front.\n

You can designate the yaml file with the -y option as follows.

ros2 run autoware_vehicle_info_utils min_turning_radius_calculator.py -y <path-to-yaml>\n
"},{"location":"common/autoware_vehicle_info_utils/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_api_utils/","title":"tier4_api_utils","text":""},{"location":"common/tier4_api_utils/#tier4_api_utils","title":"tier4_api_utils","text":"

This is an old implementation of a class that logs when calling a service. Please use component_interface_utils instead.

"},{"location":"control/autoware_autonomous_emergency_braking/","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autoware_autonomous_emergency_braking/#autonomous-emergency-braking-aeb","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autoware_autonomous_emergency_braking/#purpose-role","title":"Purpose / Role","text":"

autonomous_emergency_braking is a module that prevents collisions with obstacles on the ego vehicle's predicted path, which is created either by a control module or from sensor values estimated from the control module.

"},{"location":"control/autoware_autonomous_emergency_braking/#assumptions","title":"Assumptions","text":"

This module has the following assumptions.

"},{"location":"control/autoware_autonomous_emergency_braking/#imu-path-generation-steering-angle-vs-imus-angular-velocity","title":"IMU path generation: steering angle vs IMU's angular velocity","text":"

Currently, the IMU-based path is generated using the angular velocity obtained by the IMU itself. It has been suggested that the steering angle could be used instead of the angular velocity.

The pros and cons of both approaches are:

IMU angular velocity:

Steering angle:

For the moment, there are no plans to use the steering angle in the path creation process of the AEB module.

"},{"location":"control/autoware_autonomous_emergency_braking/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

AEB has the following steps before it outputs the emergency stop signal.

  1. Activate AEB if necessary.

  2. Generate a predicted path of the ego vehicle.

  3. Get target obstacles from the input point cloud and/or predicted object data.

  4. Estimate the closest obstacle speed.

  5. Collision check with target obstacles.

  6. Send emergency stop signals to /diagnostics.

We give more details of each section below.

"},{"location":"control/autoware_autonomous_emergency_braking/#1-activate-aeb-if-necessary","title":"1. Activate AEB if necessary","text":"

We do not activate the AEB module if the following conditions are satisfied.

"},{"location":"control/autoware_autonomous_emergency_braking/#2-generate-a-predicted-path-of-the-ego-vehicle","title":"2. Generate a predicted path of the ego vehicle","text":""},{"location":"control/autoware_autonomous_emergency_braking/#21-overview-of-imu-path-generation","title":"2.1 Overview of IMU Path Generation","text":"

AEB generates a predicted footprint path based on the current velocity and angular velocity obtained from attached sensors. Note that if use_imu_path is false, this step is skipped. This predicted path is generated as:

\\[ x_{k+1} = x_k + v cos(\\theta_k) dt \\\\ y_{k+1} = y_k + v sin(\\theta_k) dt \\\\ \\theta_{k+1} = \\theta_k + \\omega dt \\]

where \\(v\\) and \\(\\omega\\) are current longitudinal velocity and angular velocity respectively. \\(dt\\) is time interval that users can define in advance with the imu_prediction_time_interval parameter. The IMU path is generated considering a time horizon, defined by the imu_prediction_time_horizon parameter.

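A minimal sketch of this forward integration is shown below; the Pose2d struct and the generateImuPath helper are illustrative, not the module's actual code.

#include <cmath>\n#include <vector>\n\n// Illustrative 2D pose; x, y in meters, theta in radians.\nstruct Pose2d\n{\ndouble x;\ndouble y;\ndouble theta;\n};\n\n// Sketch: integrate the ego state with constant v and omega over the\n// imu_prediction_time_horizon, sampled every imu_prediction_time_interval (dt).\nstd::vector<Pose2d> generateImuPath(\nconst double v, const double omega, const double dt, const double horizon)\n{\nstd::vector<Pose2d> path{{0.0, 0.0, 0.0}};  // start at the current ego pose\nfor (double t = 0.0; t < horizon; t += dt) {\nconst Pose2d & p = path.back();\npath.push_back(\n{p.x + v * std::cos(p.theta) * dt, p.y + v * std::sin(p.theta) * dt, p.theta + omega * dt});\n}\nreturn path;\n}\n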
"},{"location":"control/autoware_autonomous_emergency_braking/#22-constraints-and-countermeasures-in-imu-path-generation","title":"2.2 Constraints and Countermeasures in IMU Path Generation","text":"

Since the IMU path generation only uses the ego vehicle's current angular velocity, disregarding the steering planned by the MPC, the shape of the IMU path tends to get distorted quite easily and protrude out of the ego vehicle's current lane, possibly causing unwanted emergency stops. There are two countermeasures for this issue:

  1. Control using the max_generated_imu_path_length parameter

  2. Control based on lateral deviation

"},{"location":"control/autoware_autonomous_emergency_braking/#23-advantages-and-limitations-of-lateral-deviation-control","title":"2.3 Advantages and Limitations of Lateral Deviation Control","text":"

The advantage of setting a lateral deviation limit with the limit_imu_path_lat_dev parameter is that the imu_prediction_time_horizon and the max_generated_imu_path_length can be increased without worrying about the IMU predicted path deforming beyond a certain threshold. The downside is that the IMU path will be cut short when the ego has a high angular velocity; in such cases, the AEB module would mostly rely on the MPC path to prevent or mitigate collisions.

If it is assumed the ego vehicle will mostly travel along the centerline of its lanelets, it can be useful to set the lateral deviation threshold parameter imu_path_lat_dev_threshold to be equal to or smaller than half the average lanelet width. That way, the chance of the IMU predicted path leaving the current ego lanelet is smaller, and it is possible to increase the imu_prediction_time_horizon to prevent frontal collisions when the ego is mostly traveling in a straight line.

The lateral deviation is measured using the ego vehicle's current position as a reference, and it measures the distance from the furthermost vertex of the predicted ego footprint to the predicted path. The following image illustrates how the lateral deviation of a given ego pose is measured.

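One possible way to compute such a deviation for a single footprint vertex, assuming it is measured against the heading of a reference pose on the path, is sketched below (illustrative only).

#include <cmath>\n\nstruct Point2d\n{\ndouble x;\ndouble y;\n};\n\n// Sketch: signed lateral deviation of a footprint vertex from the line defined\n// by a reference position and yaw (displacement rotated into the ego frame;\n// the y component is the lateral deviation).\ndouble lateralDeviation(const Point2d & ref_pos, const double ref_yaw, const Point2d & vertex)\n{\nconst double dx = vertex.x - ref_pos.x;\nconst double dy = vertex.y - ref_pos.y;\nreturn -std::sin(ref_yaw) * dx + std::cos(ref_yaw) * dy;\n}\n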
"},{"location":"control/autoware_autonomous_emergency_braking/#24-imu-path-generation-algorithm","title":"2.4 IMU Path Generation Algorithm","text":""},{"location":"control/autoware_autonomous_emergency_braking/#241-selection-of-lateral-deviation-check-points","title":"2.4.1 Selection of Lateral Deviation Check Points","text":"

Select vehicle vertices for lateral deviation checks based on the following conditions:

"},{"location":"control/autoware_autonomous_emergency_braking/#242-path-generation-process","title":"2.4.2 Path Generation Process","text":"

Execute the following steps at each time step:

  1. State Update

  2. Vehicle Footprint Generation

  3. Lateral Deviation Calculation

  4. Evaluation of Termination Conditions

"},{"location":"control/autoware_autonomous_emergency_braking/#243-termination-conditions","title":"2.4.3 Termination Conditions","text":"

Path generation terminates when any of the following conditions are met:

  1. Basic Termination Conditions (both must be satisfied)

  2. Path Length Termination Condition

  3. Lateral Deviation Termination Condition (when limit_imu_path_lat_dev = true)

"},{"location":"control/autoware_autonomous_emergency_braking/#mpc-path-generation","title":"MPC path generation","text":"

If the use_predicted_trajectory parameter is set to true, the AEB module will directly use the predicted path from the MPC as a base to generate a footprint path. It will copy the ego poses generated by the MPC up to a given time horizon. The mpc_prediction_time_horizon parameter dictates how far ahead in the future the MPC path predicts the ego vehicle's movement. Both the IMU footprint path and the MPC footprint path can be used at the same time.

"},{"location":"control/autoware_autonomous_emergency_braking/#3-get-target-obstacles","title":"3. Get target obstacles","text":"

After generating the ego footprint path(s), the target obstacles are identified. There are two methods to find target obstacles: using the input point cloud, or using the predicted object information coming from perception modules.

"},{"location":"control/autoware_autonomous_emergency_braking/#pointcloud-obstacle-filtering","title":"Pointcloud obstacle filtering","text":"

The AEB module can filter the input pointcloud to find target obstacles with which the ego vehicle might collide. This method can be enabled by setting the use_pointcloud_data parameter to true. The pointcloud obstacle filtering has three major steps: rough filtering, noise filtering with clustering, and rigorous filtering.

"},{"location":"control/autoware_autonomous_emergency_braking/#rough-filtering","title":"Rough filtering","text":"

In the rough filtering step, target obstacles are selected with a simple filter: a search area is created up to a certain distance (by default, half of the ego vehicle width plus the path_footprint_extra_margin parameter plus the expand_width parameter) away from the predicted path of the ego vehicle, and points that are not within it are ignored. The rough filtering step is illustrated below.

"},{"location":"control/autoware_autonomous_emergency_braking/#noise-filtering-with-clustering-and-convex-hulls","title":"Noise filtering with clustering and convex hulls","text":"

To prevent the AEB from considering noisy points, Euclidean clustering is performed on the filtered point cloud. Points that are not close enough to other points to form a cluster are discarded. Furthermore, each point in a cluster is compared against the cluster_minimum_height parameter; if no point inside a cluster has a height/z value greater than cluster_minimum_height, the whole cluster of points is discarded. The parameters cluster_tolerance, minimum_cluster_size and maximum_cluster_size can be used to tune the clustering and the size of objects to be ignored. For more information about the clustering method used by the AEB module, please check the official documentation on Euclidean clustering of the PCL library: https://pcl.readthedocs.io/projects/tutorials/en/master/cluster_extraction.html.

Furthermore, a 2D convex hull is created around each detected cluster; the vertices of each hull represent the most extreme/outside points of the cluster. These vertices are then checked in the next step.

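A condensed sketch of the clustering and hull steps using the PCL APIs referenced above; the hard-coded values stand in for the cluster_tolerance, minimum_cluster_size, maximum_cluster_size and cluster_minimum_height parameters.

#include <pcl/point_cloud.h>\n#include <pcl/point_types.h>\n#include <pcl/search/kdtree.h>\n#include <pcl/segmentation/extract_clusters.h>\n#include <pcl/surface/convex_hull.h>\n\n#include <algorithm>\n#include <limits>\n#include <vector>\n\n// Sketch: cluster the rough-filtered cloud, drop clusters that are too low,\n// and keep the convex hull vertices of each remaining cluster.\nvoid clusterAndHull(const pcl::PointCloud<pcl::PointXYZ>::Ptr & cloud)\n{\npcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);\ntree->setInputCloud(cloud);\n\nstd::vector<pcl::PointIndices> cluster_indices;\npcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;\nec.setClusterTolerance(0.15);  // cluster_tolerance\nec.setMinClusterSize(10);      // minimum_cluster_size\nec.setMaxClusterSize(10000);   // maximum_cluster_size\nec.setSearchMethod(tree);\nec.setInputCloud(cloud);\nec.extract(cluster_indices);\n\nfor (const auto & indices : cluster_indices) {\npcl::PointCloud<pcl::PointXYZ>::Ptr cluster(new pcl::PointCloud<pcl::PointXYZ>);\ndouble max_z = std::numeric_limits<double>::lowest();\nfor (const auto idx : indices.indices) {\ncluster->push_back((*cloud)[idx]);\nmax_z = std::max(max_z, static_cast<double>((*cloud)[idx].z));\n}\nif (max_z < 0.1) continue;  // cluster_minimum_height check\n\n// The hull vertices approximate the outer boundary of the cluster.\npcl::PointCloud<pcl::PointXYZ> hull_points;\npcl::ConvexHull<pcl::PointXYZ> hull;\nhull.setDimension(2);\nhull.setInputCloud(cluster);\nhull.reconstruct(hull_points);\n}\n}\n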
"},{"location":"control/autoware_autonomous_emergency_braking/#rigorous-filtering","title":"Rigorous filtering","text":"

After noise filtering, the module performs a geometric collision check to determine whether the filtered obstacles/hull vertices actually have a possibility of colliding with the ego vehicle. In this check, the ego vehicle is represented as a rectangle, and the point cloud obstacles are represented as points. Only the vertices with a possibility of collision are labeled as target obstacles.

"},{"location":"control/autoware_autonomous_emergency_braking/#obstacle-labeling","title":"Obstacle labeling","text":"

After rigorous filtering, the remaining obstacles are labeled. An obstacle is given a \"target\" label for collision checking only if it falls within the ego vehicle's defined footprint (made using the ego vehicle's width and the expand_width parameter). For an emergency stop to occur, at least one obstacle needs to be labeled as a target.

"},{"location":"control/autoware_autonomous_emergency_braking/#using-predicted-objects-to-get-target-obstacles","title":"Using predicted objects to get target obstacles","text":"

If the use_predicted_object_data parameter is set to true, the AEB can use predicted object data coming from the perception modules to get target obstacle points. This is done by obtaining the 2D intersection points between the ego's predicted footprint path (made using the ego vehicle's width and the expand_width parameter) and each predicted object's enveloping polygon or bounding box. If there is no intersection, all points are discarded.

"},{"location":"control/autoware_autonomous_emergency_braking/#finding-the-closest-target-obstacle","title":"Finding the closest target obstacle","text":"

After identifying all possible obstacles using pointcloud data and/or predicted object data, the AEB module selects the closest point to the ego vehicle as the candidate for collision checking. The \"closest object\" is defined as an obstacle within the ego vehicle's footprint, determined by its width and the expand_width parameter, that is closest to the ego vehicle along the longitudinal axis, using the IMU or MPC path as a reference. Target obstacles are prioritized over those outside the ego path, even if the latter are longitudinally closer. This prioritization ensures that the collision check focuses on objects that pose the highest risk based on the vehicle's trajectory.

If no target obstacles are found, the AEB module considers other nearby obstacles outside the path. In such cases, it skips the collision check but records the position of the closest obstacle to calculate its speed (Step #4). Note that obstacles obtained with predicted object data are all target obstacles, since they are within the ego footprint path, and it is not necessary to calculate their speed (it is already calculated by the perception module). Such obstacles are excluded from Step #4.

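A sketch of this prioritized selection is shown below; the ObstaclePoint struct and its fields are illustrative stand-ins for the module's internal data.

#include <optional>\n#include <vector>\n\nstruct ObstaclePoint\n{\ndouble longitudinal_dist;  // arc length along the IMU/MPC footprint path\nbool is_target;            // true if the point lies inside the expanded ego footprint path\n};\n\n// Sketch: prefer the nearest \"target\" point; otherwise fall back to the nearest\n// non-target point, which is only kept for speed tracking (no collision check).\nstd::optional<ObstaclePoint> selectClosestObstacle(const std::vector<ObstaclePoint> & points)\n{\nstd::optional<ObstaclePoint> best;\nfor (const auto & p : points) {\nif (!best) {\nbest = p;\n} else if (p.is_target != best->is_target) {\nif (p.is_target) best = p;  // target points always take priority\n} else if (p.longitudinal_dist < best->longitudinal_dist) {\nbest = p;\n}\n}\nreturn best;\n}\n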
"},{"location":"control/autoware_autonomous_emergency_braking/#4-obstacle-velocity-estimation","title":"4. Obstacle velocity estimation","text":"

To begin calculating the target point's velocity, the point must enter the speed calculation area, which is defined by the speed_calculation_expansion_margin parameter plus the ego vehicle's width and the expand_width parameter. Depending on the operational environment, this margin can reduce unnecessary autonomous emergency braking caused by velocity miscalculations during the initial calculation steps.

Once the position of the closest obstacle/point is determined, the AEB module uses the history of previously detected objects to estimate the closest object's relative speed using the following equations:

\\[ d_{t} = t_{1} - t_{0} \\] \\[ d_{x} = norm(o_{x} - prev_{x}) \\] \\[ v_{norm} = d_{x} / d_{t} \\]

Where \\(t_{1}\\) and \\(t_{0}\\) are the timestamps of the point clouds used to detect the current closest object and the closest object of the previous point cloud frame, and \\(o_{x}\\) and \\(prev_{x}\\) are the positions of those objects, respectively.

Note that, when the closest obstacle/point comes from using predicted object data, \\(v_{norm}\\) is calculated by directly computing the norm of the predicted object's velocity in the x and y axes.

The velocity vector is then compared against the ego's predicted path to get the longitudinal velocity \\(v_{obj}\\):

\\[ v_{obj} = v_{norm} * Cos(yaw_{diff}) + v_{ego} \\]

where \\(yaw_{diff}\\) is the difference in yaw between the ego path and the displacement vector and \\(v_{ego}\\) is the ego's current speed, which accounts for the movement of points caused by the ego moving and not the object. All these equations are performed disregarding the z axis (in 2D).

Note that the object velocity is calculated against the ego's current movement direction. If the object moves in the opposite direction to the ego's movement, the object velocity will be negative, which will reduce the RSS distance in the next step.

The resulting estimated object speed is added to a queue of speeds with timestamps. The AEB then checks for expiration of past speed estimations and eliminates expired speed measurements from the queue; an estimation expires when the time elapsed since it was first added to the queue exceeds the previous_obstacle_keep_time parameter. Finally, the median speed of the queue is calculated. The median speed will be used to calculate the RSS distance used for collision checking.

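A sketch of the projection and median filtering described above; the helper names are illustrative.

#include <algorithm>\n#include <cmath>\n#include <cstddef>\n#include <vector>\n\n// Sketch: project the displacement-based speed onto the ego path direction,\n// following v_obj = v_norm * cos(yaw_diff) + v_ego from the text above.\ndouble longitudinalObjectVelocity(const double v_norm, const double yaw_diff, const double v_ego)\n{\nreturn v_norm * std::cos(yaw_diff) + v_ego;\n}\n\n// Sketch: median of the queued (non-expired) speed estimates.\n// Assumes the queue is non-empty; copies the vector so it can be sorted.\ndouble medianSpeed(std::vector<double> speeds)\n{\nstd::sort(speeds.begin(), speeds.end());\nconst std::size_t n = speeds.size();\nreturn (n % 2 == 1) ? speeds[n / 2] : 0.5 * (speeds[n / 2 - 1] + speeds[n / 2]);\n}\n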
"},{"location":"control/autoware_autonomous_emergency_braking/#5-collision-check-with-target-obstacles-using-rss-distance","title":"5. Collision check with target obstacles using RSS distance","text":"

In the fifth step, the AEB module checks for collision with the closest target obstacle using RSS distance. Only the closest target object is evaluated because RSS distance is used to determine collision risk. If the nearest target point is deemed safe, all other potential obstacles within the path are also assumed to be safe.

RSS distance is formulated as:

\\[ d = v_{ego}*t_{response} + v_{ego}^2/(2*a_{min}) -(sign(v_{obj})) * v_{obj}^2/(2*a_{obj_{min}}) + offset \\]

where \\(v_{ego}\\) and \\(v_{obj}\\) is current ego and obstacle velocity, \\(a_{min}\\) and \\(a_{obj_{min}}\\) is ego and object minimum acceleration (maximum deceleration), \\(t_{response}\\) is response time of the ego vehicle to start deceleration. Therefore the distance from the ego vehicle to the obstacle is smaller than this RSS distance \\(d\\), the ego vehicle send emergency stop signals.

Only obstacles classified as \"targets\" (as defined in Step #3) are considered for RSS distance calculations. Among these \"target\" obstacles, the one closest to the ego vehicle is used for the calculation. If no \"target\" obstacles are present\u2014meaning no obstacles fall within the ego vehicle's predicted path (determined by its width and an expanded margin)\u2014this step is skipped. Instead, the position of the closest obstacle is recorded for future speed calculations (Step #4). In this scenario, no emergency stop diagnostic message is generated. The process is illustrated in the accompanying diagram.

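A sketch of the RSS distance formula above; calcRssDistance is an illustrative helper, and absolute values are used since a_ego_min and a_obj_min are negative in the parameter table below.

#include <cmath>\n\n// Sketch of the RSS distance formula above. a_ego_min and a_obj_min are negative\n// (maximum decelerations), so their absolute values are used in the denominators.\ndouble calcRssDistance(\nconst double v_ego, const double v_obj, const double t_response,\nconst double a_ego_min, const double a_obj_min, const double offset)\n{\nconst double sign_obj = (v_obj > 0.0) ? 1.0 : ((v_obj < 0.0) ? -1.0 : 0.0);\nreturn v_ego * t_response + (v_ego * v_ego) / (2.0 * std::abs(a_ego_min)) -\nsign_obj * (v_obj * v_obj) / (2.0 * std::abs(a_obj_min)) + offset;\n}\n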
"},{"location":"control/autoware_autonomous_emergency_braking/#6-send-emergency-stop-signals-to-diagnostics","title":"6. Send emergency stop signals to /diagnostics","text":"

If the AEB detects a collision with point cloud obstacles in the previous step, it sends an emergency signal to /diagnostics in this step. Note that, in order to trigger an emergency stop, the diagnostic must be published at ERROR level. Moreover, the user should modify the settings file so that the emergency level is kept (latched); otherwise, Autoware does not hold the emergency state.
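
As a hedged illustration of what such a diagnostic task could look like with the ROS 2 diagnostic_updater package (simplified; the actual node has its own updater setup and message text):

```cpp
#include <diagnostic_msgs/msg/diagnostic_status.hpp>
#include <diagnostic_updater/diagnostic_updater.hpp>

// Sketch of a diagnostic task that reports ERROR when a collision is
// predicted, so that downstream MRM logic can latch an emergency stop.
void checkCollision(diagnostic_updater::DiagnosticStatusWrapper & stat, bool has_collision)
{
  using diagnostic_msgs::msg::DiagnosticStatus;
  if (has_collision) {
    stat.summary(DiagnosticStatus::ERROR, "collision predicted: emergency stop required");
  } else {
    stat.summary(DiagnosticStatus::OK, "no collision detected");
  }
}
```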

"},{"location":"control/autoware_autonomous_emergency_braking/#use-cases","title":"Use cases","text":""},{"location":"control/autoware_autonomous_emergency_braking/#front-vehicle-suddenly-brakes","title":"Front vehicle suddenly brakes","text":"

The AEB can activate when a vehicle in front suddenly brakes and a collision is detected by the AEB module. Provided the distance between the ego vehicle and the front vehicle is large enough and the ego's emergency deceleration value is strong enough, it is possible to avoid or soften collisions with vehicles in front that suddenly brake. NOTE: the acceleration used by the AEB to calculate rss_distance is NOT necessarily the acceleration used by the ego while performing an emergency brake. The acceleration used by the real vehicle can be tuned by changing the mrm_emergency stop jerk and acceleration values.

"},{"location":"control/autoware_autonomous_emergency_braking/#stop-for-objects-that-appear-suddenly","title":"Stop for objects that appear suddenly","text":"

When an object appears suddenly, the AEB can act as a fail-safe that stops the ego vehicle when other modules fail to detect the object in time. If sudden object cut-ins are expected, it might be useful for the AEB module to detect collisions with objects BEFORE they enter the actual ego vehicle path, by increasing the expand_width parameter.

"},{"location":"control/autoware_autonomous_emergency_braking/#preventing-collisions-with-rear-objects","title":"Preventing Collisions with rear objects","text":"

The AEB module can also prevent collisions when the ego vehicle is moving backwards.

"},{"location":"control/autoware_autonomous_emergency_braking/#preventing-collisions-in-case-of-wrong-odometry-imu-path-only","title":"Preventing collisions in case of wrong Odometry (IMU path only)","text":"

When the vehicle odometry information is faulty, it is possible that the MPC fails to predict a correct path for the ego vehicle. If the MPC predicted path is wrong, collision avoidance will not work as intended in the planning modules. However, the AEB's IMU path does not depend on the MPC and may be able to predict a collision when the other modules cannot. As an example, the accompanying figure shows a hypothetical case in which the MPC path is wrong and only the AEB's IMU path detects a collision.

"},{"location":"control/autoware_autonomous_emergency_braking/#parameters","title":"Parameters","text":"Name Unit Type Description Default value publish_debug_markers [-] bool flag to publish debug markers true publish_debug_pointcloud [-] bool flag to publish the point cloud used for debugging false use_predicted_trajectory [-] bool flag to use the predicted path from the control module true use_imu_path [-] bool flag to use the predicted path generated by sensor data true use_object_velocity_calculation [-] bool flag to use the object velocity calculation. If set to false, object velocity is set to 0 [m/s] true check_autoware_state [-] bool flag to enable or disable autoware state check. If set to false, the AEB module will run even when the ego vehicle is not in AUTONOMOUS state. true detection_range_min_height [m] double minimum hight of detection range used for avoiding the ghost brake by false positive point clouds 0.0 detection_range_max_height_margin [m] double margin for maximum hight of detection range used for avoiding the ghost brake by false positive point clouds. detection_range_max_height = vehicle_height + detection_range_max_height_margin 0.0 voxel_grid_x [m] double down sampling parameters of x-axis for voxel grid filter 0.05 voxel_grid_y [m] double down sampling parameters of y-axis for voxel grid filter 0.05 voxel_grid_z [m] double down sampling parameters of z-axis for voxel grid filter 100000.0 cluster tolerance [m] double maximum allowable distance between any two points to be considered part of the same cluster 0.15 cluster_minimum_height [m] double at least one point in a cluster must be higher than this value for the cluster to be included in the set of possible collision targets 0.1 minimum_cluster_size [-] int minimum required amount of points contained by a cluster for it to be considered as a possible target obstacle 10 maximum_cluster_size [-] int maximum amount of points contained by a cluster for it to be considered as a possible target obstacle 10000 min_generated_imu_path_length [m] double minimum distance for a predicted path generated by sensors 0.5 max_generated_imu_path_length [m] double maximum distance for a predicted path generated by sensors 10.0 expand_width [m] double expansion width of the ego vehicle for collision checking, path cropping and speed calculation 0.1 longitudinal_offset_margin [m] double longitudinal offset distance for collision check 2.0 t_response [s] double response time for the ego to detect the front vehicle starting deceleration 1.0 a_ego_min [m/ss] double maximum deceleration value of the ego vehicle -3.0 a_obj_min [m/ss] double maximum deceleration value of objects -3.0 imu_prediction_time_horizon [s] double time horizon of the predicted path generated by sensors 1.5 imu_prediction_time_interval [s] double time interval of the predicted path generated by sensors 0.1 mpc_prediction_time_horizon [s] double time horizon of the predicted path generated by mpc 1.5 mpc_prediction_time_interval [s] double time interval of the predicted path generated by mpc 0.1 aeb_hz [-] double frequency at which AEB operates per second 10 speed_calculation_expansion_margin [m] double expansion width of the ego vehicle footprint used when calculating the closest object's speed 0.7 path_footprint_extra_margin [m] double this parameters expands the ego footprint used to crop the AEB input pointcloud 1.0"},{"location":"control/autoware_autonomous_emergency_braking/#limitations","title":"Limitations","text":" 
"},{"location":"control/autoware_collision_detector/","title":"Collision Detector","text":""},{"location":"control/autoware_collision_detector/#collision-detector","title":"Collision Detector","text":""},{"location":"control/autoware_collision_detector/#purpose","title":"Purpose","text":"

This module subscribes to the required data (ego pose, obstacles, etc.) and publishes diagnostics if an object is within a specified distance.

"},{"location":"control/autoware_collision_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_collision_detector/#flow-chart","title":"Flow chart","text":""},{"location":"control/autoware_collision_detector/#algorithms","title":"Algorithms","text":""},{"location":"control/autoware_collision_detector/#check-data","title":"Check data","text":"

Check that the collision_detector is receiving the no-ground point cloud and the dynamic objects.

"},{"location":"control/autoware_collision_detector/#object-filtering","title":"Object Filtering","text":""},{"location":"control/autoware_collision_detector/#recognition-assumptions","title":"Recognition Assumptions","text":"
  1. If the classification changes but it's considered the same object, the uuid does not change.
  2. It's possible for the same uuid to be recognized after being lost for a few frames.
  3. Once an object is determined to be excluded, it continues to be excluded for a certain period of time.
"},{"location":"control/autoware_collision_detector/#filtering-process","title":"Filtering Process","text":"
  1. Initial Recognition and Exclusion:

  2. New Object Determination:

  3. Exclusion Mechanism:

"},{"location":"control/autoware_collision_detector/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest object. This function computes the minimum distance between the polygon of the ego vehicle and all points in the point cloud, as well as the polygons of the dynamic objects.
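
A minimal sketch of this computation using Boost.Geometry (commonly used for 2D geometry in Autoware); the types and function name are illustrative, not the node's actual code:

```cpp
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

#include <algorithm>
#include <limits>
#include <vector>

namespace bg = boost::geometry;
using Point2d = bg::model::d2::point_xy<double>;
using Polygon2d = bg::model::polygon<Point2d>;

// Illustrative: minimum distance from the ego polygon to all obstacle points
// and all dynamic object polygons (a distance of 0 means overlap).
double minimumDistanceToObstacles(
  const Polygon2d & ego_polygon,
  const std::vector<Point2d> & obstacle_points,
  const std::vector<Polygon2d> & object_polygons)
{
  double min_dist = std::numeric_limits<double>::max();
  for (const auto & p : obstacle_points) {
    min_dist = std::min(min_dist, bg::distance(ego_polygon, p));
  }
  for (const auto & poly : object_polygons) {
    min_dist = std::min(min_dist, bg::distance(ego_polygon, poly));
  }
  return min_dist;
}
```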

"},{"location":"control/autoware_collision_detector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_collision_detector/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_perception_msgs::msg::PredictedObjects Dynamic objects /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"control/autoware_collision_detector/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics"},{"location":"control/autoware_collision_detector/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true collision_distance double Distance threshold at which an object is considered a collision. [m] 0.15 nearby_filter_radius double Distance range for filtering objects. Objects within this radius are considered. [m] 5.0 keep_ignoring_time double Time to keep filtering objects that first appeared in the vicinity [sec] 10.0 nearby_object_type_filters object of bool values Specifies which object types to filter. Only objects with true value will be filtered. {unknown: true, others: false}"},{"location":"control/autoware_collision_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"control/autoware_control_validator/","title":"Control Validator","text":""},{"location":"control/autoware_control_validator/#control-validator","title":"Control Validator","text":"

The control_validator is a module that checks the validity of the output of the control component. The status of the validation can be viewed in the /diagnostics topic.

"},{"location":"control/autoware_control_validator/#supported-features","title":"Supported features","text":"

The following features are supported for validation, and their thresholds can be set by parameters. The features listed below do not always correspond to the latest implementation.

Description Arguments Diagnostic equation Inverse velocity: Measured velocity has a different sign from the target velocity. measured velocity \\(v\\), target velocity \\(\\hat{v}\\), and velocity parameter \\(c\\) \\(v \\hat{v} < 0, \\quad \\lvert v \\rvert > c\\) Overspeed: Measured speed exceeds target speed significantly. measured velocity \\(v\\), target velocity \\(\\hat{v}\\), ratio parameter \\(r\\), and offset parameter \\(c\\) \\(\\lvert v \\rvert > (1 + r) \\lvert \\hat{v} \\rvert + c\\)
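
The two diagnostic equations in the table can be sketched as simple predicates (illustrative names; the parameters correspond to those in the thresholds section below):

```cpp
#include <cmath>

// Sketch of the validation checks above (not the actual control_validator code).
bool isInverseVelocity(double v, double v_target, double c)
{
  // measured and target velocities have opposite signs, and |v| is non-negligible
  return v * v_target < 0.0 && std::fabs(v) > c;
}

bool isOverspeed(double v, double v_target, double r, double c)
{
  // measured speed significantly exceeds the target speed
  return std::fabs(v) > (1.0 + r) * std::fabs(v_target) + c;
}
```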

"},{"location":"control/autoware_control_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"control/autoware_control_validator/#inputs","title":"Inputs","text":"

The control_validator takes in the following inputs:

Name Type Description ~/input/kinematics nav_msgs/Odometry ego pose and twist ~/input/reference_trajectory autoware_planning_msgs/Trajectory reference trajectory output by the planning module, to be followed ~/input/predicted_trajectory autoware_planning_msgs/Trajectory predicted trajectory output by the control module"},{"location":"control/autoware_control_validator/#outputs","title":"Outputs","text":"

It outputs the following:

Name Type Description ~/output/validation_status control_validator/ControlValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"control/autoware_control_validator/#parameters","title":"Parameters","text":"

The following parameters can be set for the control_validator:

"},{"location":"control/autoware_control_validator/#system-parameters","title":"System parameters","text":"Name Type Description Default value publish_diag bool if true, diagnostics msg is published. true diag_error_count_threshold int the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true display_on_terminal bool show error msg on terminal true"},{"location":"control/autoware_control_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"control/autoware_control_validator/#thresholds","title":"Thresholds","text":"

The input trajectory is detected as invalid if any of the following thresholds is exceeded.

Name Type Description Default value thresholds.max_distance_deviation double invalid threshold of the max distance deviation between the predicted path and the reference trajectory [m] 1.0 thresholds.rolling_back_velocity double threshold velocity to validate the vehicle velocity [m/s] 0.5 thresholds.over_velocity_offset double threshold velocity offset to validate the vehicle velocity [m/s] 2.0 thresholds.over_velocity_ratio double threshold ratio to validate the vehicle velocity [*] 0.2"},{"location":"control/autoware_external_cmd_selector/","title":"autoware_external_cmd_selector","text":""},{"location":"control/autoware_external_cmd_selector/#autoware_external_cmd_selector","title":"autoware_external_cmd_selector","text":""},{"location":"control/autoware_external_cmd_selector/#purpose","title":"Purpose","text":"

autoware_external_cmd_selector is the package to publish external_control_cmd, gear_cmd, hazard_lights_cmd, heartbeat and turn_indicators_cmd, according to the current mode, which is remote or local.

The current mode is set via a service: remote means the vehicle is operated remotely, and local means the values calculated by Autoware are used.

"},{"location":"control/autoware_external_cmd_selector/#input-output","title":"Input / Output","text":""},{"location":"control/autoware_external_cmd_selector/#input-topics","title":"Input topics","text":"Name Type Description /api/external/set/command/local/control TBD Local. Calculated control value. /api/external/set/command/local/heartbeat TBD Local. Heartbeat. /api/external/set/command/local/shift TBD Local. Gear shift like drive, rear and etc. /api/external/set/command/local/turn_signal TBD Local. Turn signal like left turn, right turn and etc. /api/external/set/command/remote/control TBD Remote. Calculated control value. /api/external/set/command/remote/heartbeat TBD Remote. Heartbeat. /api/external/set/command/remote/shift TBD Remote. Gear shift like drive, rear and etc. /api/external/set/command/remote/turn_signal TBD Remote. Turn signal like left turn, right turn and etc."},{"location":"control/autoware_external_cmd_selector/#output-topics","title":"Output topics","text":"Name Type Description /control/external_cmd_selector/current_selector_mode TBD Current selected mode, remote or local. /diagnostics diagnostic_msgs::msg::DiagnosticArray Check if node is active or not. /external/selected/external_control_cmd TBD Pass through control command with current mode. /external/selected/gear_cmd autoware_vehicle_msgs::msg::GearCommand Pass through gear command with current mode. /external/selected/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand Pass through hazard light with current mode. /external/selected/heartbeat TBD Pass through heartbeat with current mode. /external/selected/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand Pass through turn indicator with current mode."},{"location":"control/autoware_external_cmd_selector/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float The rate in Hz at which the external command selector node updates. 10.0 N/A initial_selector_mode string The initial mode for command selection, either 'local' or 'remote'. local ['local', 'remote']"},{"location":"control/autoware_joy_controller/","title":"autoware_joy_controller","text":""},{"location":"control/autoware_joy_controller/#autoware_joy_controller","title":"autoware_joy_controller","text":""},{"location":"control/autoware_joy_controller/#role","title":"Role","text":"

autoware_joy_controller is the package to convert a joy msg to autoware commands (e.g. steering wheel, shift, turn signal, engage) for a vehicle.

"},{"location":"control/autoware_joy_controller/#usage","title":"Usage","text":""},{"location":"control/autoware_joy_controller/#ros-2-launch","title":"ROS 2 launch","text":"
# With default config (ds4)\nros2 launch autoware_joy_controller joy_controller.launch.xml\n\n# Default config but select from the existing parameter files\nros2 launch autoware_joy_controller joy_controller_param_selection.launch.xml joy_type:=ds4 # or g29, p65, xbox\n\n# Override the param file\nros2 launch autoware_joy_controller joy_controller.launch.xml config_file:=/path/to/your/param.yaml\n
"},{"location":"control/autoware_joy_controller/#input-output","title":"Input / Output","text":""},{"location":"control/autoware_joy_controller/#input-topics","title":"Input topics","text":"Name Type Description ~/input/joy sensor_msgs::msg::Joy joy controller command ~/input/odometry nav_msgs::msg::Odometry ego vehicle odometry to get twist"},{"location":"control/autoware_joy_controller/#output-topics","title":"Output topics","text":"Name Type Description ~/output/control_command autoware_control_msgs::msg::Control lateral and longitudinal control command ~/output/external_control_command tier4_external_api_msgs::msg::ControlCommandStamped lateral and longitudinal control command ~/output/shift tier4_external_api_msgs::msg::GearShiftStamped gear command ~/output/turn_signal tier4_external_api_msgs::msg::TurnSignalStamped turn signal command ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (Auto or External) ~/output/heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/output/vehicle_engage autoware_vehicle_msgs::msg::Engage vehicle engage"},{"location":"control/autoware_joy_controller/#parameters","title":"Parameters","text":"Parameter Type Description joy_type string joy controller type (default: DS4) update_rate double update rate to publish control commands accel_ratio double ratio to calculate acceleration (commanded acceleration is ratio * operation) brake_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation) steer_ratio double ratio to calculate deceleration (commanded steer is ratio * operation) steering_angle_velocity double steering angle velocity for operation accel_sensitivity double sensitivity to calculate acceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) brake_sensitivity double sensitivity to calculate deceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) raw_control bool skip input odometry if true velocity_gain double ratio to calculate velocity by acceleration max_forward_velocity double absolute max velocity to go forward max_backward_velocity double absolute max velocity to go backward backward_accel_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation)"},{"location":"control/autoware_joy_controller/#p65-joystick-key-map","title":"P65 Joystick Key Map","text":"Action Button Acceleration R2 Brake L2 Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal A Gate Mode B Emergency Stop Select Clear Emergency Stop Start Autoware Engage X Autoware Disengage Y Vehicle Engage PS Vehicle Disengage Right Trigger"},{"location":"control/autoware_joy_controller/#ds4-joystick-key-map","title":"DS4 Joystick Key Map","text":"Action Button Acceleration R2, \u00d7, or Right Stick Up Brake L2, \u25a1, or Right Stick Down Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal SHARE Gate Mode OPTIONS Emergency Stop PS Clear Emergency Stop PS Autoware Engage \u25cb Autoware Disengage \u25cb Vehicle Engage \u25b3 Vehicle Disengage \u25b3"},{"location":"control/autoware_joy_controller/#xbox-joystick-key-map","title":"XBOX Joystick Key Map","text":"Action Button Acceleration RT Brake LT Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive 
Cursor Left Shift Reverse Cursor Right Turn Signal Left LB Turn Signal Right RB Clear Turn Signal A Gate Mode B Emergency Stop View Clear Emergency Stop Menu Autoware Engage X Autoware Disengage Y Vehicle Engage Left Stick Button Vehicle Disengage Right Stick Button"},{"location":"control/autoware_lane_departure_checker/","title":"Lane Departure Checker","text":""},{"location":"control/autoware_lane_departure_checker/#lane-departure-checker","title":"Lane Departure Checker","text":"

The Lane Departure Checker checks whether the vehicle follows its trajectory. If it does not, it reports the status via diagnostic_updater.

"},{"location":"control/autoware_lane_departure_checker/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"control/autoware_lane_departure_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_lane_departure_checker/#how-to-extend-footprint-by-covariance","title":"How to extend footprint by covariance","text":"
  1. Calculate the standard deviation of the error ellipse (covariance) in the vehicle coordinate system.

    1. Transform the covariance into the vehicle coordinate system, i.e., calculate the covariance in vehicle coordinates.

    2. The longitudinal length to expand corresponds to the marginal distribution of \\(x_{vehicle}\\), which is given by \\(Cov_{vehicle}(0,0)\\). In the same way, the lateral length is given by \\(Cov_{vehicle}(1,1)\\). Wikipedia reference here.

  2. Expand the footprint based on the standard deviations multiplied by footprint_margin_scale, as sketched after this list.
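
A minimal sketch of these two steps with Eigen, under the assumption that the pose covariance is rotated into the vehicle frame by the vehicle yaw (names are illustrative, not the package's code):

```cpp
#include <Eigen/Dense>
#include <cmath>

struct FootprintMargin { double longitudinal; double lateral; };

// Sketch: rotate the x-y pose covariance into the vehicle frame, then use
// the standard deviations scaled by footprint_margin_scale as margins.
FootprintMargin calcFootprintMargin(
  const Eigen::Matrix2d & cov_map,  // x-y covariance in map coordinates
  double yaw,                       // vehicle yaw in map coordinates [rad]
  double footprint_margin_scale)
{
  const Eigen::Matrix2d rot = Eigen::Rotation2Dd(yaw).toRotationMatrix();
  // 1. Transform the covariance into the vehicle coordinate system.
  const Eigen::Matrix2d cov_vehicle = rot.transpose() * cov_map * rot;
  // 2. Cov_vehicle(0,0) and Cov_vehicle(1,1) are the marginal variances of
  //    x_vehicle and y_vehicle; expand by their standard deviations.
  return FootprintMargin{
    std::sqrt(cov_vehicle(0, 0)) * footprint_margin_scale,
    std::sqrt(cov_vehicle(1, 1)) * footprint_margin_scale};
}
```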

"},{"location":"control/autoware_lane_departure_checker/#interface","title":"Interface","text":""},{"location":"control/autoware_lane_departure_checker/#input","title":"Input","text":""},{"location":"control/autoware_lane_departure_checker/#output","title":"Output","text":""},{"location":"control/autoware_lane_departure_checker/#parameters","title":"Parameters","text":""},{"location":"control/autoware_lane_departure_checker/#node-parameters","title":"Node Parameters","text":""},{"location":"control/autoware_lane_departure_checker/#general-parameters","title":"General Parameters","text":"Name Type Description Default value will_out_of_lane_checker bool Enable checker whether ego vehicle footprint will depart from lane True out_of_lane_checker bool Enable checker whether ego vehicle footprint is out of lane True boundary_departure_checker bool Enable checker whether ego vehicle footprint wil depart from boundary specified by boundary_types_to_detect False update_rate double Frequency for publishing [Hz] 10.0 visualize_lanelet bool Flag for visualizing lanelet False"},{"location":"control/autoware_lane_departure_checker/#parameters-for-lane-departure","title":"Parameters For Lane Departure","text":"Name Type Description Default value include_right_lanes bool Flag for including right lanelet in borders False include_left_lanes bool Flag for including left lanelet in borders False include_opposite_lanes bool Flag for including opposite lanelet in borders False include_conflicting_lanes bool Flag for including conflicting lanelet in borders False"},{"location":"control/autoware_lane_departure_checker/#parameters-for-road-border-departure","title":"Parameters For Road Border Departure","text":"Name Type Description Default value boundary_types_to_detect std::vector\\<std::string> line_string types to detect with boundary_departure_checker [road_border]"},{"location":"control/autoware_lane_departure_checker/#core-parameters","title":"Core Parameters","text":"Name Type Description Default value footprint_margin_scale double Coefficient for expanding footprint margin. Multiplied by 1 standard deviation. 1.0 footprint_extra_margin double Coefficient for expanding footprint margin. When checking for lane departure 0.0 resample_interval double Minimum Euclidean distance between points when resample trajectory.[m] 0.3 max_deceleration double Maximum deceleration when calculating braking distance. 2.8 delay_time double Delay time which took to actuate brake when calculating braking distance. [second] 1.3 max_lateral_deviation double Maximum lateral deviation in vehicle coordinate. [m] 2.0 max_longitudinal_deviation double Maximum longitudinal deviation in vehicle coordinate. [m] 2.0 max_yaw_deviation_deg double Maximum ego yaw deviation from trajectory. [deg] 60.0"},{"location":"control/autoware_mpc_lateral_controller/","title":"MPC Lateral Controller","text":""},{"location":"control/autoware_mpc_lateral_controller/#mpc-lateral-controller","title":"MPC Lateral Controller","text":"

This is the design document for the lateral controller node in the autoware_trajectory_follower_node package.

"},{"location":"control/autoware_mpc_lateral_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node is used to generate lateral control commands (steering angle and steering rate) when following a path.

"},{"location":"control/autoware_mpc_lateral_controller/#design","title":"Design","text":"

The node uses an implementation of linear model predictive control (MPC) for accurate path tracking. The MPC uses a model of the vehicle to simulate the trajectory resulting from the control command. The optimization of the control command is formulated as a Quadratic Program (QP).

Different vehicle models are implemented:

For the optimization, a Quadratic Programming (QP) solver is used and two options are currently implemented:

"},{"location":"control/autoware_mpc_lateral_controller/#filtering","title":"Filtering","text":"

Filtering is required for good noise reduction. A Butterworth filter is employed for processing the yaw and lateral errors, which are used as inputs for the MPC, as well as for refining the output steering angle. Other filtering methods can be considered as long as the noise reduction performance is good enough. The moving average filter, for example, is not suited and can yield worse results than no filtering at all.

"},{"location":"control/autoware_mpc_lateral_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The tracking is not accurate if the first point of the reference trajectory is at or in front of the current ego pose.

"},{"location":"control/autoware_mpc_lateral_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_mpc_lateral_controller/#inputs","title":"Inputs","text":"

Set the following from the controller_node

"},{"location":"control/autoware_mpc_lateral_controller/#outputs","title":"Outputs","text":"

Return LateralOutput which contains the following to the controller node

Publish the following messages.

Name Type Description ~/output/predicted_trajectory autoware_planning_msgs::Trajectory Predicted trajectory calculated by MPC. The trajectory size will be empty when the controller is in an emergency such as too large deviation from the planning trajectory."},{"location":"control/autoware_mpc_lateral_controller/#mpc-class","title":"MPC class","text":"

The MPC class (defined in mpc.hpp) provides the interface with the MPC algorithm. Once a vehicle model, a QP solver, and the reference trajectory to follow have been set (using setVehicleModel(), setQPSolver(), setReferenceTrajectory()), a lateral control command can be calculated by providing the current steer, velocity, and pose to the function calculateMPC().
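
A hypothetical, simplified mirror of that interface is shown below purely to illustrate the call order; the real class in mpc.hpp takes Autoware message types and additional arguments:

```cpp
#include <memory>

// Hypothetical, simplified interface sketch (the real signatures differ).
struct VehicleModel;  // e.g. a kinematics bicycle model
struct QPSolver;      // e.g. an OSQP-based solver
struct Trajectory;    // reference trajectory to follow
struct Pose { double x, y, yaw; };

class MPC {
public:
  void setVehicleModel(std::shared_ptr<VehicleModel> model);
  void setQPSolver(std::shared_ptr<QPSolver> solver);
  void setReferenceTrajectory(const Trajectory & trajectory);
  // Called every control cycle with the current steer, velocity, and pose;
  // returns whether the optimization succeeded and fills the steering command.
  bool calculateMPC(double current_steer, double current_velocity,
                    const Pose & current_pose, double & steer_cmd);
};
```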

"},{"location":"control/autoware_mpc_lateral_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/lateral_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.

"},{"location":"control/autoware_mpc_lateral_controller/#system","title":"System","text":"Name Type Description Default value traj_resample_dist double distance of waypoints in resampling [m] 0.1 use_steer_prediction boolean flag for using steer prediction (do not use steer measurement) false use_delayed_initial_state boolean flag to use x0_delayed as initial state for predicted trajectory true"},{"location":"control/autoware_mpc_lateral_controller/#path-smoothing","title":"Path Smoothing","text":"Name Type Description Default value enable_path_smoothing boolean path smoothing flag. This should be true when uses path resampling to reduce resampling noise. false path_filter_moving_ave_num int number of data points moving average filter for path smoothing 25 curvature_smoothing_num_traj int index distance of points used in curvature calculation for trajectory: p(i-num), p(i), p(i+num). larger num makes less noisy values. 15 curvature_smoothing_num_ref_steer int index distance of points used in curvature calculation for reference steering command: p(i-num), p(i), p(i+num). larger num makes less noisy values. 15"},{"location":"control/autoware_mpc_lateral_controller/#trajectory-extending","title":"Trajectory Extending","text":"Name Type Description Default value extend_trajectory_for_end_yaw_control boolean trajectory extending flag for end yaw control true"},{"location":"control/autoware_mpc_lateral_controller/#mpc-optimization","title":"MPC Optimization","text":"Name Type Description Default value qp_solver_type string QP solver option. described below in detail. \"osqp\" mpc_prediction_horizon int total prediction step for MPC 50 mpc_prediction_dt double prediction period for one step [s] 0.1 mpc_weight_lat_error double weight for lateral error 1.0 mpc_weight_heading_error double weight for heading error 0.0 mpc_weight_heading_error_squared_vel double weight for heading error * velocity 0.3 mpc_weight_steering_input double weight for steering error (steer command - reference steer) 1.0 mpc_weight_steering_input_squared_vel double weight for steering error (steer command - reference steer) * velocity 0.25 mpc_weight_lat_jerk double weight for lateral jerk (steer(i) - steer(i-1)) * velocity 0.1 mpc_weight_steer_rate double weight for steering rate [rad/s] 0.0 mpc_weight_steer_acc double weight for derivatives of the steering rate [rad/ss] 0.000001 mpc_low_curvature_weight_lat_error double [used in a low curvature trajectory] weight for lateral error 0.1 mpc_low_curvature_weight_heading_error double [used in a low curvature trajectory] weight for heading error 0.0 mpc_low_curvature_weight_heading_error_squared_vel double [used in a low curvature trajectory] weight for heading error * velocity 0.3 mpc_low_curvature_weight_steering_input double [used in a low curvature trajectory] weight for steering error (steer command - reference steer) 1.0 mpc_low_curvature_weight_steering_input_squared_vel double [used in a low curvature trajectory] weight for steering error (steer command - reference steer) * velocity 0.25 mpc_low_curvature_weight_lat_jerk double [used in a low curvature trajectory] weight for lateral jerk (steer(i) - steer(i-1)) * velocity 0.0 mpc_low_curvature_weight_steer_rate double [used in a low curvature trajectory] weight for steering rate [rad/s] 0.0 mpc_low_curvature_weight_steer_acc double [used in a low curvature trajectory] weight for derivatives of the steering rate [rad/ss] 0.000001 mpc_low_curvature_thresh_curvature double threshold of curvature to use \"low_curvature\" 
parameter 0.0 mpc_weight_terminal_lat_error double terminal lateral error weight in matrix Q to improve mpc stability 1.0 mpc_weight_terminal_heading_error double terminal heading error weight in matrix Q to improve mpc stability 0.1 mpc_zero_ff_steer_deg double threshold that feed-forward angle becomes zero 0.5 mpc_acceleration_limit double limit on the vehicle's acceleration 2.0 mpc_velocity_time_constant double time constant used for velocity smoothing 0.3 mpc_min_prediction_length double minimum prediction length 5.0"},{"location":"control/autoware_mpc_lateral_controller/#vehicle-model","title":"Vehicle Model","text":"Name Type Description Default value vehicle_model_type string vehicle model type for mpc prediction \"kinematics\" input_delay double steering input delay time for delay compensation 0.24 vehicle_model_steer_tau double steering dynamics time constant (1d approximation) [s] 0.3 steer_rate_lim_dps_list_by_curvature [double] steering angle rate limit list depending on curvature [deg/s] [40.0, 50.0, 60.0] curvature_list_for_steer_rate_lim [double] curvature list for steering angle rate limit interpolation in ascending order [/m] [0.001, 0.002, 0.01] steer_rate_lim_dps_list_by_velocity [double] steering angle rate limit list depending on velocity [deg/s] [60.0, 50.0, 40.0] velocity_list_for_steer_rate_lim [double] velocity list for steering angle rate limit interpolation in ascending order [m/s] [10.0, 15.0, 20.0] acceleration_limit double acceleration limit for trajectory velocity modification [m/ss] 2.0 velocity_time_constant double velocity dynamics time constant for trajectory velocity modification [s] 0.3"},{"location":"control/autoware_mpc_lateral_controller/#lowpass-filter-for-noise-reduction","title":"Lowpass Filter for Noise Reduction","text":"Name Type Description Default value steering_lpf_cutoff_hz double cutoff frequency of lowpass filter for steering output command [hz] 3.0 error_deriv_lpf_cutoff_hz double cutoff frequency of lowpass filter for error derivative [Hz] 5.0"},{"location":"control/autoware_mpc_lateral_controller/#stop-state","title":"Stop State","text":"Name Type Description Default value stop_state_entry_ego_speed *1 double threshold value of the ego vehicle speed used to the stop state entry condition 0.001 stop_state_entry_target_speed *1 double threshold value of the target speed used to the stop state entry condition 0.001 converged_steer_rad double threshold value of the steer convergence 0.1 keep_steer_control_until_converged boolean keep steer control until steer is converged true new_traj_duration_time double threshold value of the time to be considered as new trajectory 1.0 new_traj_end_dist double threshold value of the distance between trajectory ends to be considered as new trajectory 0.3 mpc_converged_threshold_rps double threshold value to be sure output of the optimization is converged, it is used in stopped state 0.01

(*1) To prevent unnecessary steering movement, the steering command is fixed to the previous value in the stop state.

"},{"location":"control/autoware_mpc_lateral_controller/#steer-offset","title":"Steer Offset","text":"

Defined in the steering_offset namespace. This logic is designed to be as simple as possible, with a minimum number of design parameters.

Name Type Description Default value enable_auto_steering_offset_removal boolean Estimate the steering offset and apply compensation true update_vel_threshold double If the velocity is smaller than this value, the data is not used for the offset estimation 5.56 update_steer_threshold double If the steering angle is larger than this value, the data is not used for the offset estimation. 0.035 average_num int The average of this number of data is used as a steering offset. 1000 steering_offset_limit double The angle limit to be applied to the offset compensation. 0.02"},{"location":"control/autoware_mpc_lateral_controller/#for-dynamics-model-wip","title":"For dynamics model (WIP)","text":"Name Type Description Default value cg_to_front_m double distance from baselink to the front axle[m] 1.228 cg_to_rear_m double distance from baselink to the rear axle [m] 1.5618 mass_fl double mass applied to front left tire [kg] 600 mass_fr double mass applied to front right tire [kg] 600 mass_rl double mass applied to rear left tire [kg] 600 mass_rr double mass applied to rear right tire [kg] 600 cf double front cornering power [N/rad] 155494.663 cr double rear cornering power [N/rad] 155494.663"},{"location":"control/autoware_mpc_lateral_controller/#debug","title":"Debug","text":"Name Type Description Default value publish_debug_trajectories boolean publish predicted trajectory and resampled reference trajectory for debug purpose true"},{"location":"control/autoware_mpc_lateral_controller/#how-to-tune-mpc-parameters","title":"How to tune MPC parameters","text":""},{"location":"control/autoware_mpc_lateral_controller/#set-kinematics-information","title":"Set kinematics information","text":"

First, it's important to set the appropriate parameters for vehicle kinematics. This includes parameters like wheelbase, which represents the distance between the front and rear wheels, and max_steering_angle, which indicates the maximum tire steering angle. These parameters should be set in the vehicle_info.param.yaml.

"},{"location":"control/autoware_mpc_lateral_controller/#set-dynamics-information","title":"Set dynamics information","text":"

Next, you need to set the proper parameters for the dynamics model. These include the time constant steering_tau and time delay steering_delay for steering dynamics, and the maximum acceleration mpc_acceleration_limit and the time constant mpc_velocity_time_constant for velocity dynamics.

"},{"location":"control/autoware_mpc_lateral_controller/#confirmation-of-the-input-information","title":"Confirmation of the input information","text":"

It's also important to make sure the input information is accurate. Information such as the velocity of the center of the rear wheel [m/s] and the steering angle of the tire [rad] is required. Please note that there have been frequent reports of performance degradation due to errors in input information. For instance, there are cases where the velocity of the vehicle is offset due to an unexpected difference in tire radius, or the tire angle cannot be accurately measured due to a deviation in the steering gear ratio or midpoint. It is suggested to compare information from multiple sensors (e.g., integrated vehicle speed and GNSS position, steering angle and IMU angular velocity), and ensure the input information for MPC is appropriate.

"},{"location":"control/autoware_mpc_lateral_controller/#mpc-weight-tuning","title":"MPC weight tuning","text":"

Then, tune the weights of the MPC. One simple approach of tuning is to keep the weight for the lateral deviation (weight_lat_error) constant, and vary the input weight (weight_steering_input) while observing the trade-off between steering oscillation and control accuracy.

Here, weight_lat_error acts to suppress the lateral error in path following, while weight_steering_input works to adjust the steering angle to a standard value determined by the path's curvature. When weight_lat_error is large, the steering moves significantly to improve accuracy, which can cause oscillations. On the other hand, when weight_steering_input is large, the steering doesn't respond much to tracking errors, providing stable driving but potentially reducing tracking accuracy.

The steps are as follows:

  1. Set weight_lat_error = 0.1, weight_steering_input = 1.0 and other weights to 0.
  2. If the vehicle oscillates when driving, set weight_steering_input larger.
  3. If the tracking accuracy is low, set weight_steering_input smaller.

If you want to adjust the effect only in the high-speed range, you can use weight_steering_input_squared_vel. This parameter corresponds to the steering weight in the high-speed range.

"},{"location":"control/autoware_mpc_lateral_controller/#descriptions-for-weights","title":"Descriptions for weights","text":""},{"location":"control/autoware_mpc_lateral_controller/#other-tips-for-tuning","title":"Other tips for tuning","text":"

Here are some tips for adjusting other parameters:

"},{"location":"control/autoware_mpc_lateral_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/autoware_mpc_lateral_controller/#related-issues","title":"Related issues","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/","title":"MPC Algorithm","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#mpc-algorithm","title":"MPC Algorithm","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#introduction","title":"Introduction","text":"

Model Predictive Control (MPC) is a control method that solves an optimization problem during each control cycle to determine an optimal control sequence based on a given vehicle model. The calculated sequence of control inputs is used to control the system.

In simpler terms, an MPC controller calculates a series of control inputs that optimize the state and output trajectories to achieve the desired behavior. The key characteristics of an MPC control system can be summarized as follows:

  1. Prediction of Future Trajectories: MPC computes a control sequence by predicting the future state and output trajectories. The first control input is applied to the system, and this process repeats in a receding horizon manner at each control cycle.
  2. Handling of Constraints: MPC is capable of handling constraints on the state and input variables during the optimization phase. This ensures that the system operates within specified limits.
  3. Handling of Complex Dynamics: MPC algorithms can handle complex dynamics, whether they are linear or nonlinear in nature.

The choice between a linear or nonlinear model or constraint equation depends on the specific formulation of the MPC problem. If any nonlinear expressions are present in the motion equation or constraints, the optimization problem becomes nonlinear. In the following sections, we provide a step-by-step explanation of how linear and nonlinear optimization problems are solved within the MPC framework. Note that in this documentation, we utilize the linearization method to accommodate the nonlinear model.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#linear-mpc-formulation","title":"Linear MPC formulation","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#formulate-as-an-optimization-problem","title":"Formulate as an optimization problem","text":"

This section provides an explanation of MPC specifically for linear systems. In the following section, it also demonstrates the formulation of a vehicle path following problem as an application.

In the linear MPC formulation, all motion and constraint expressions are linear. For the path following problem, let's assume that the system's motion can be described by a set of equations, denoted as (1). The state evolution and measurements are presented in a discrete state space format, where matrices \\(A\\), \\(B\\), and \\(C\\) represent the state transition, control, and measurement matrices, respectively.

\\[ \\begin{gather} x_{k+1}=Ax_{k}+Bu_{k}+w_{k}, y_{k}=Cx_{k} \\tag{1} \\\\\\ x_{k}\\in R^{n},u_{k}\\in R^{m},w_{k}\\in R^{n}, y_{k}\\in R^{l}, A\\in R^{n\\times n}, B\\in R^{n\\times m}, C\\in R^{l \\times n} \\end{gather} \\]

Equation (1) represents the state-space equation, where \\(x_k\\) represents the internal states, \\(u_k\\) denotes the input, and \\(w_k\\) represents a known disturbance caused by linearization or problem structure. The measurements are indicated by the variable \\(y_k\\).

It's worth noting that another advantage of MPC is its ability to effectively handle the disturbance term \\(w\\). While it is referred to as a disturbance here, it can take various forms as long as it adheres to the equation's structure.

The state transition and measurement equations in (1) are iterative, moving from time \\(k\\) to time \\(k+1\\). By propagating the equation starting from an initial state and control pair \\((x_0, u_0)\\) along with a specified horizon of \\(N\\) steps, one can predict the trajectories of states and measurements.

For simplicity, let's assume the initial state is \\(x_0\\) with \\(k=0\\).

To begin, we can compute the state \\(x_1\\) at \\(k=1\\) using equation (1) by substituting the initial state into the equation. Since we are seeking a solution for the input sequence, we represent the inputs as decision variables in the symbolic expressions.

\\[ \\begin{align} x_{1} = Ax_{0} + Bu_{0} + w_{0} \\tag{2} \\end{align} \\]

Then, when \\(k=2\\), using also equation (2), we get

\\[ \\begin{align} x_{2} & = Ax_{1} + Bu_{1} + w_{1} \\\\\\ & = A(Ax_{0} + Bu_{0} + w_{0}) + Bu_{1} + w_{1} \\\\\\ & = A^{2}x_{0} + ABu_{0} + Aw_{0} + Bu_{1} + w_{1} \\\\\\ & = A^{2}x_{0} + \\begin{bmatrix}AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\end{bmatrix} + \\begin{bmatrix}A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\end{bmatrix} \\tag{3} \\end{align} \\]

When \\(k=3\\) , from equation (3)

\\[ \\begin{align} x_{3} & = Ax_{2} + Bu_{2} + w_{2} \\\\\\ & = A(A^{2}x_{0} + ABu_{0} + Bu_{1} + Aw_{0} + w_{1} ) + Bu_{2} + w_{2} \\\\\\ & = A^{3}x_{0} + A^{2}Bu_{0} + ABu_{1} + A^{2}w_{0} + Aw_{1} + Bu_{2} + w_{2} \\\\\\ & = A^{3}x_{0} + \\begin{bmatrix}A^{2}B & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\\\\\ u_{2} \\end{bmatrix} + \\begin{bmatrix} A^{2} & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\\\\\ w_{2} \\end{bmatrix} \\tag{4} \\end{align} \\]

If \\(k=n\\) , then

\\[ \\begin{align} x_{n} = A^{n}x_{0} + \\begin{bmatrix}A^{n-1}B & A^{n-2}B & \\dots & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\\\\\ \\vdots \\\\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} A^{n-1} & A^{n-2} & \\dots & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\\\\\ \\vdots \\\\\\ w_{n-1} \\end{bmatrix} \\tag{5} \\end{align} \\]

Putting all of them together with (2) to (5) yields the following matrix equation;

\\[ \\begin{align} \\begin{bmatrix}x_{1}\\\\\\ x_{2} \\\\\\ x_{3} \\\\\\ \\vdots \\\\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix}A^{1}\\\\\\ A^{2} \\\\\\ A^{3} \\\\\\ \\vdots \\\\\\ A^{n} \\end{bmatrix}x_{0} + \\begin{bmatrix}B & 0 & \\dots & & 0 \\\\\\ AB & B & 0 & \\dots & 0 \\\\\\ A^{2}B & AB & B & \\dots & 0 \\\\\\ \\vdots & \\vdots & & & 0 \\\\\\ A^{n-1}B & A^{n-2}B & \\dots & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\\\ u_{1} \\\\\\ u_{2} \\\\\\ \\vdots \\\\\\ u_{n-1} \\end{bmatrix} \\\\\\ + \\begin{bmatrix}I & 0 & \\dots & & 0 \\\\\\ A & I & 0 & \\dots & 0 \\\\\\ A^{2} & A & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & & & 0 \\\\\\ A^{n-1} & A^{n-2} & \\dots & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\\\ w_{1} \\\\\\ w_{2} \\\\\\ \\vdots \\\\\\ w_{n-1} \\end{bmatrix} \\tag{6} \\end{align} \\]

In this case, the measurements (outputs) become; \\(y_{k}=Cx_{k}\\), so

\\[ \\begin{align} \\begin{bmatrix}y_{1}\\\\\\ y_{2} \\\\\\ y_{3} \\\\\\ \\vdots \\\\\\ y_{n} \\end{bmatrix} = \\begin{bmatrix}C & 0 & \\dots & & 0 \\\\\\ 0 & C & 0 & \\dots & 0 \\\\\\ 0 & 0 & C & \\dots & 0 \\\\\\ \\vdots & & & \\ddots & 0 \\\\\\ 0 & \\dots & 0 & 0 & C \\end{bmatrix}\\begin{bmatrix}x_{1}\\\\\\ x_{2} \\\\\\ x_{3} \\\\\\ \\vdots \\\\\\ x_{n} \\end{bmatrix} \\tag{7} \\end{align} \\]

We can combine equations (6) and (7) into the following form:

\\[ \\begin{align} X = Fx_{0} + GU +SW, Y=HX \\tag{8} \\end{align} \\]

This form is similar to the original state-space equation (1), but it introduces new matrices: the state transition matrix \\(F\\), the control matrix \\(G\\), the disturbance matrix \\(S\\), and the measurement matrix \\(H\\). In these equations, \\(X\\) represents the predicted states, given by \\(\\begin{bmatrix}x_{1} & x_{2} & \\dots & x_{n} \\end{bmatrix}^{T}\\), and \\(W\\) the stacked disturbances.

Now that \\(F\\), \\(G\\), \\(S\\), \\(W\\), and \\(H\\) are known, we can express the output behavior \\(Y\\) for the next \\(n\\) steps as a function of the input \\(U\\). This allows us to calculate the control input \\(U\\) so that \\(Y(U)\\) follows the target trajectory \\(Y_{ref}\\).
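
For concreteness, here is a sketch of building the stacked matrices of equation (8) with Eigen for constant \\(A\\), \\(B\\), \\(C\\) (a dense, unoptimized illustration; the time-varying case later only changes which \\(A_{k}\\), \\(B_{k}\\) enter each block):

```cpp
#include <Eigen/Dense>

// Sketch: build F, G, S, H of equation (8) over an n-step horizon.
void buildPredictionMatrices(
  const Eigen::MatrixXd & A, const Eigen::MatrixXd & B, const Eigen::MatrixXd & C,
  int n, Eigen::MatrixXd & F, Eigen::MatrixXd & G, Eigen::MatrixXd & S,
  Eigen::MatrixXd & H)
{
  const int nx = static_cast<int>(A.rows());
  const int nu = static_cast<int>(B.cols());
  const int ny = static_cast<int>(C.rows());
  F = Eigen::MatrixXd::Zero(n * nx, nx);
  G = Eigen::MatrixXd::Zero(n * nx, n * nu);
  S = Eigen::MatrixXd::Zero(n * nx, n * nx);
  H = Eigen::MatrixXd::Zero(n * ny, n * nx);

  Eigen::MatrixXd A_pow = A;  // holds A^{i+1} for block row i
  for (int i = 0; i < n; ++i) {
    F.block(i * nx, 0, nx, nx) = A_pow;
    A_pow = A * A_pow;
    for (int j = 0; j <= i; ++j) {
      // Block (i, j): the coefficient of u_j (resp. w_j) in x_{i+1}
      // is A^{i-j} B (resp. A^{i-j}).
      Eigen::MatrixXd A_ij = Eigen::MatrixXd::Identity(nx, nx);
      for (int k = 0; k < i - j; ++k) A_ij = A * A_ij;
      G.block(i * nx, j * nu, nx, nu) = A_ij * B;
      S.block(i * nx, j * nx, nx, nx) = A_ij;
    }
    H.block(i * ny, i * nx, ny, nx) = C;  // block-diagonal measurement matrix
  }
}
```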

The next step is to define a cost function. The cost function generally uses the following quadratic form;

\\[ \\begin{align} J = (Y - Y_{ref})^{T}Q(Y - Y_{ref}) + (U - U_{ref})^{T}R(U - U_{ref}) \\tag{9} \\end{align} \\]

where \\(U_{ref}\\) is the target or steady-state input around which the system is linearized for \\(U\\).

This cost function is the same as that of the LQR controller. The first term of \\(J\\) penalizes the deviation from the reference trajectory. The second term penalizes the deviation from the reference (or steady-state) control trajectory. \\(Q\\) and \\(R\\) are the cost weight matrices; \\(Q\\) is positive semi-definite and \\(R\\) is positive definite.

Note: in some cases, \\(U_{ref}=0\\) is used, but this can mean the steering angle should be set to \\(0\\) even if the vehicle is turning a curve. Thus \\(U_{ref}\\) is used for the explanation here. This \\(U_{ref}\\) can be pre-calculated from the curvature of the target trajectory or the steady-state analyses.

As the resulting trajectory output is now \\(Y=Y(x_{0}, U)\\), the cost function depends only on U and the initial state conditions which yields the cost \\(J=J(x_{0}, U)\\). Let\u2019s find the \\(U\\) that minimizes this.

Substituting equation (8) into equation (9) and tidying up the equation for \\(U\\).

\\[ \\begin{align} J(U) &= (H(Fx_{0}+GU+SW)-Y_{ref})^{T}Q(H(Fx_{0}+GU+SW)-Y_{ref})+(U-U_{ref})^{T}R(U-U_{ref}) \\\\\\ & =U^{T}(G^{T}H^{T}QHG+R)U+2\\lbrace (H(Fx_{0}+SW)-Y_{ref})^{T}QHG-U_{ref}^{T}R \\rbrace U +(\\rm{constant}) \\tag{10} \\end{align} \\]

This equation is a quadratic form of \\(U\\) (i.e. \\(U^{T}AU+B^{T}U\\))

The coefficient matrix of the quadratic term of \\(U\\), \\(G^{T}H^{T}QHG+R\\), is positive definite because \\(Q\\) is positive semi-definite and \\(R\\) is positive definite. Therefore, the cost function is a convex quadratic function in \\(U\\), which can be solved efficiently by convex optimization.
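
In the unconstrained case, the minimizer even has a closed form, obtained by setting the gradient of (10) to zero; the sketch below illustrates this with Eigen (with input constraints, a QP solver such as OSQP is used instead, as described in the package documentation):

```cpp
#include <Eigen/Dense>

// Sketch of the unconstrained minimizer of the quadratic cost (10):
//   J(U) = U^T M U + 2 b^T U + const,  with  M = G^T H^T Q H G + R.
// Setting the gradient 2MU + 2b to zero gives U* = -M^{-1} b.
// Q is assumed symmetric; M is positive definite (Q PSD, R PD).
Eigen::VectorXd solveUnconstrainedMpc(
  const Eigen::MatrixXd & F, const Eigen::MatrixXd & G, const Eigen::MatrixXd & S,
  const Eigen::MatrixXd & H, const Eigen::MatrixXd & Q, const Eigen::MatrixXd & R,
  const Eigen::VectorXd & x0, const Eigen::VectorXd & W,
  const Eigen::VectorXd & Y_ref, const Eigen::VectorXd & U_ref)
{
  const Eigen::MatrixXd M = G.transpose() * H.transpose() * Q * H * G + R;
  const Eigen::VectorXd b =
    G.transpose() * H.transpose() * Q * (H * (F * x0 + S * W) - Y_ref) - R * U_ref;
  return M.llt().solve(-b);  // Cholesky solve, valid since M is positive definite
}
```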

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#apply-to-vehicle-path-following-problem-nonlinear-problem","title":"Apply to vehicle path-following problem (nonlinear problem)","text":"

Because the path-following problem with a kinematic vehicle model is nonlinear, we cannot directly use the linear MPC methods described in the preceding section. There are several ways to deal with a nonlinearity such as using the nonlinear optimization solver. Here, the linearization is applied to the nonlinear vehicle model along the reference trajectory, and consequently, the nonlinear model is converted into a linear time-varying model.

For a nonlinear kinematic vehicle model, the discrete-time update equations are as follows:

\\[ \\begin{align} x_{k+1} &= x_{k} + v\\cos\\theta_{k} \\text{d}t \\\\\\ y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t \\tag{11} \\\\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]

The vehicle reference is the center of the rear axle and all states are measured at this point. The states, parameters, and control variables are shown in the following table.

Symbol Represent \\(v\\) Vehicle speed measured at the center of rear axle \\(\\theta\\) Yaw (heading angle) in global coordinate system \\(\\delta\\) Vehicle steering angle \\(\\delta_{des}\\) Vehicle target steering angle \\(L\\) Vehicle wheelbase (distance between the rear and front axles) \\(\\tau\\) Time constant for the first order steering dynamics

We assume in this example that the MPC only generates the steering control, and the trajectory generator gives the vehicle speed along the trajectory.

The kinematic vehicle model discrete update equations contain trigonometric functions; sin and cos, and the vehicle coordinates \\(x\\), \\(y\\), and yaw angles are global coordinates. In path tracking applications, it is common to reformulate the model in error dynamics to convert the control into a regulator problem in which the targets become zero (zero error).

We make small-angle assumptions for the following derivations of the linear equations. Given the nonlinear dynamics and omitting the longitudinal coordinate \\(x\\), the resulting set of equations becomes;

\\[ \\begin{align} y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t - \\kappa_{r}v\\cos\\theta_{k}\\text{d}t \\tag{12} \\\\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]

Where \\(\\kappa_{r}\\left(s\\right)\\) is the curvature along the trajectory parametrized by the arc length.

There are three expressions in the update equations that are subject to linear approximation: the lateral deviation (or lateral coordinate) \\(y\\), the heading angle (or the heading angle error) \\(\\theta\\), and the steering \\(\\delta\\). We can make a small angle assumption on the heading angle \\(\\theta\\).

In the path tracking problem, the curvature of the trajectory \\(\\kappa_{r}\\) is known in advance. At lower speeds, the Ackermann formula approximates the reference steering angle \\(\\delta_{r}\\) (this value corresponds to the \\(U_{ref}\\) mentioned above). The Ackermann steering expression can be written as;

\\[ \\begin{align} \\delta_{r} = \\arctan\\left(L\\kappa_{r}\\right) \\end{align} \\]

When the vehicle is following a curved path, its steering angle \\(\\delta\\) should be close to the value \\(\\delta_{r}\\). Therefore, \\(\\delta\\) can be expressed as,

\\[ \\begin{align} \\delta = \\delta_{r} + \\Delta \\delta, \\Delta\\delta \\ll 1 \\end{align} \\]

Substituting this equation into equation (12), and approximating \\(\\Delta\\delta\\) to be small:

\\[ \\begin{align} \\tan\\delta &\\simeq \\tan\\delta_{r} + \\frac{\\text{d}\\tan\\delta}{\\text{d}\\delta} \\Biggm|_{\\delta=\\delta_{r}}\\Delta\\delta \\\\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\Delta\\delta \\\\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\left(\\delta-\\delta_{r}\\right) \\\\\\ &= \\tan \\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta \\end{align} \\]

Using this, \\(\\theta_{k+1}\\) can be expressed

\\[ \\begin{align} \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L}\\text{d}t - \\kappa_{r}v\\cos\\theta_{k}\\text{d}t \\\\\\ &\\simeq \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(\\tan\\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\\\ &= \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(L\\kappa_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\\\ &= \\theta_{k} + \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}}\\delta_{k} - \\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\end{align} \\]

Finally, the linearized time-varying model equation becomes;

\\[ \\begin{align} \\begin{bmatrix} y_{k+1} \\\\\\ \\theta_{k+1} \\\\\\ \\delta_{k+1} \\end{bmatrix} = \\begin{bmatrix} 1 & v\\text{d}t & 0 \\\\\\ 0 & 1 & \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\\\ 0 & 0 & 1 - \\tau^{-1}\\text{d}t \\end{bmatrix} \\begin{bmatrix} y_{k} \\\\\\ \\theta_{k} \\\\\\ \\delta_{k} \\end{bmatrix} + \\begin{bmatrix} 0 \\\\\\ 0 \\\\\\ \\tau^{-1}\\text{d}t \\end{bmatrix}\\delta_{des} + \\begin{bmatrix} 0 \\\\\\ -\\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\\\ 0 \\end{bmatrix} \\end{align} \\]

This equation has the same form as equation (1) of the linear MPC formulation, but the matrices \\(A\\) and \\(B\\) and the vector \\(w\\) change depending on the operating point along the reference trajectory. To make this explicit, the entire equation is written as follows

\\[ \\begin{align} x_{k+1} = A_{k}x_{k} + B_{k}u_{k}+w_{k} \\end{align} \\]

Comparing with equation (1), \\(A \\rightarrow A_{k}\\). This means that the \\(A\\) matrix is a linear approximation in the vicinity of the trajectory after \\(k\\) steps (i.e., \\(k \\cdot \\text{d}t\\) seconds), and it can be obtained if the trajectory is known in advance.
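
A sketch of computing \\(A_{k}\\), \\(B_{k}\\), and \\(w_{k}\\) for one step of this model with Eigen, taken directly from the linearized matrix above (illustrative, not the package's exact code):

```cpp
#include <Eigen/Dense>
#include <cmath>

// Sketch of the linearized time-varying model above for one step k
// (states: [lateral error y, heading error theta, steering delta]).
void calcLinearizedKinematicsModel(
  double v,        // vehicle speed along the trajectory [m/s]
  double L,        // wheelbase [m]
  double tau,      // steering time constant [s]
  double kappa_r,  // reference curvature at step k [1/m]
  double dt,       // discretization time [s]
  Eigen::Matrix3d & A, Eigen::Vector3d & B, Eigen::Vector3d & w)
{
  const double delta_r = std::atan(L * kappa_r);  // Ackermann reference steer
  const double cos2 = std::cos(delta_r) * std::cos(delta_r);

  A << 1.0, v * dt, 0.0,
       0.0, 1.0,    v * dt / (L * cos2),
       0.0, 0.0,    1.0 - dt / tau;
  B << 0.0, 0.0, dt / tau;                       // input: target steering delta_des
  w << 0.0, -v * delta_r * dt / (L * cos2), 0.0; // known linearization offset
}
```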

Using this equation, the update equation can be written down in the same way as (2) ~ (6):

\\[ \\begin{align} \\begin{bmatrix} x_{1} \\\\\\ x_{2} \\\\\\ x_{3} \\\\\\ \\vdots \\\\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix} A_{0} \\\\\\ A_{1}A_{0} \\\\\\ A_{2}A_{1}A_{0} \\\\\\ \\vdots \\\\\\ \\prod_{i=0}^{n-1} A_{i} \\end{bmatrix} x_{0} + \\begin{bmatrix} B_{0} & 0 & \\dots & & 0 \\\\\\ A_{1}B_{0} & B_{1} & 0 & \\dots & 0 \\\\\\ A_{2}A_{1}B_{0} & A_{2}B_{1} & B_{2} & \\dots & 0 \\\\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\\\ \\prod_{i=1}^{n-1} A_{i}B_{0} & \\prod_{i=2}^{n-1} A_{i}B_{1} & \\dots & A_{n-1}B_{n-2} & B_{n-1} \\end{bmatrix} \\begin{bmatrix} u_{0} \\\\\\ u_{1} \\\\\\ u_{2} \\\\\\ \\vdots \\\\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} I & 0 & \\dots & & 0 \\\\\\ A_{1} & I & 0 & \\dots & 0 \\\\\\ A_{2}A_{1} & A_{2} & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\\\ \\prod_{i=1}^{n-1} A_{i} & \\prod_{i=2}^{n-1} A_{i} & \\dots & A_{n-1} & I \\end{bmatrix} \\begin{bmatrix} w_{0} \\\\\\ w_{1} \\\\\\ w_{2} \\\\\\ \\vdots \\\\\\ w_{n-1} \\end{bmatrix} \\end{align} \\]

Since this has the same form as equation (6), convex optimization can be applied in the same way as for the model in the previous section.
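The stacked prediction matrices above can likewise be built recursively from the per-step matrices. A minimal NumPy sketch (build_prediction_matrices is our own illustrative helper, not an Autoware API):

import numpy as np\n\ndef build_prediction_matrices(As, Bs, ws):\n    # As, Bs, ws: lists of A_k, B_k, w_k for k = 0..n-1\n    n, nx, nu = len(As), As[0].shape[0], Bs[0].shape[1]\n    F = np.zeros((n * nx, nx))       # maps x_0 to [x_1 .. x_n]\n    G = np.zeros((n * nx, n * nu))   # maps the stacked inputs [u_0 .. u_{n-1}]\n    W = np.zeros((n * nx, 1))        # accumulated disturbance term\n    Phi = np.eye(nx)\n    for k in range(n):\n        if k > 0:\n            # propagate the rows of the previous step by one more A_k\n            G[k*nx:(k+1)*nx, :] = As[k] @ G[(k-1)*nx:k*nx, :]\n            W[k*nx:(k+1)*nx] = As[k] @ W[(k-1)*nx:k*nx]\n        G[k*nx:(k+1)*nx, k*nu:(k+1)*nu] = Bs[k]\n        W[k*nx:(k+1)*nx] += ws[k]\n        Phi = As[k] @ Phi\n        F[k*nx:(k+1)*nx, :] = Phi\n    return F, G, W\n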

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-functions-and-constraints","title":"The cost functions and constraints","text":"

In this section, we give the details on how to set up the cost function and constraint conditions.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-function","title":"The cost function","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-error-and-input","title":"Weight for error and input","text":"

MPC state and control weights appear in the cost function in a similar way as in LQR (9). In the vehicle path-following problem described above, if \\(C\\) is the identity matrix, the output is \\(y = x = \\left[e, \\theta, \\delta\\right]\\) (to avoid confusion with the y-directional deviation, \\(e\\) is used here for the lateral deviation).

As an example, let us define the weight matrix \\(Q_{1}\\) of the evaluation function for a system with \\(n=2\\) prediction steps as follows.

\\[ \\begin{align} Q_{1} = \\begin{bmatrix} q_{e} & 0 & 0 & 0 & 0& 0 \\\\\\ 0 & q_{\\theta} & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & q_{e} & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & q_{\\theta} & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\end{bmatrix} \\end{align} \\]

The first term in the cost function (9) with \\(n=2\\) is then as follows (\\(Y_{ref}\\) is set to \\(0\\)):

\\[ \\begin{align} q_{e}\\left(e_{0}^{2} + e_{1}^{2} \\right) + q_{\\theta}\\left(\\theta_{0}^{2} + \\theta_{1}^{2} \\right) \\end{align} \\]

This shows that \\(q_{e}\\) is the weight for the lateral error and \\(q_{\\theta}\\) for the angular error. In this example, \\(q_{e}\\) acts as the proportional (P) gain and \\(q_{\\theta}\\) as the derivative (D) gain for the lateral tracking error. The balance of these factors (including \\(R\\)) is determined through actual experiments.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-non-diagonal-term","title":"Weight for non-diagonal term","text":"

MPC can also handle non-diagonal terms in its weight matrices (as long as the resulting matrix is positive definite).

For instance, write \\(Q_{2}\\) as follows for the \\(n=2\\) system.

\\[ \\begin{align} Q_{2} = \\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & q_{d} & 0 & 0 & -q_{d} \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\\\ 0 & 0 & -q_{d} & 0 & 0 & q_{d} \\end{bmatrix} \\end{align} \\]

Expanding the first term of the evaluation function using \\(Q_{2}\\) gives

\\[ \\begin{align} q_{d}\\left(\\delta_{0}^{2} -2\\delta_{0}\\delta_{1} + \\delta_{1}^{2} \\right) = q_{d}\\left( \\delta_{0} - \\delta_{1}\\right)^{2} \\end{align} \\]

The term weighted by \\(q_{d}\\) penalizes the amount of change in \\(\\delta\\), which prevents the tires from being steered too quickly. By adding this term, the optimization balances tracking accuracy against the rate of change of the steering angle.

Since weight matrices can be added linearly, the final weight can be set as \\(Q = Q_{1} + Q_{2}\\).

Furthermore, since MPC optimizes over a time horizon, time-varying weights can also be considered in the optimization.
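As a small illustration, the combined weight for the \\(n=2\\) example above could be assembled as follows (a minimal NumPy sketch; the numeric values are placeholders to be tuned experimentally):

import numpy as np\n\nq_e, q_theta, q_d = 1.0, 0.5, 0.1  # example weights, to be tuned in experiments\n\n# Q1: diagonal weights on lateral error e and heading error theta for n = 2 steps\nQ1 = np.diag([q_e, q_theta, 0.0, q_e, q_theta, 0.0])\n\n# Q2: penalty on the steering change between steps, q_d * (delta_0 - delta_1)^2\nQ2 = np.zeros((6, 6))\nQ2[2, 2] = Q2[5, 5] = q_d\nQ2[2, 5] = Q2[5, 2] = -q_d\n\nQ = Q1 + Q2  # final weight matrix\n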

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#constraints","title":"Constraints","text":""},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#input-constraint","title":"Input constraint","text":"

The main advantage of MPC controllers is the capability to deal with state and input constraints explicitly. The constraints can be expressed as box constraints, such as "the tire angle must be within \u00b130 degrees", and can be put in the following form:

\\[ \\begin{align} u_{min} < u < u_{max} \\end{align} \\]

The constraints must be linear and convex in linear MPC applications.

"},{"location":"control/autoware_mpc_lateral_controller/model_predictive_control_algorithm/#constraints-on-the-derivative-of-the-input","title":"Constraints on the derivative of the input","text":"

We can also put constraints on the rate of change of the input. Denoting the derivative of the steering angle as \\(\\dot{u}\\), its box constraint is

\\[ \\begin{align} \\dot u_{min} < \\dot u < \\dot u_{max} \\end{align} \\]

Discretizing \\(\\dot{u}\\) as \\(\\left(u_{k} - u_{k-1}\\right)/\\text{d}t\\) and multiplying both sides by \\(\\text{d}t\\), the resulting constraint becomes linear and convex:

\\[ \\begin{align} \\dot u_{min}\\text{d}t < u_{k} - u_{k-1} < \\dot u_{max}\\text{d}t \\end{align} \\]

Along the prediction (or control) horizon, e.g., for \\(n=3\\):

\\[ \\begin{align} \\dot u_{min}\\text{d}t < u_{1} - u_{0} < \\dot u_{max}\\text{d}t \\\\\\ \\dot u_{min}\\text{d}t < u_{2} - u_{1} < \\dot u_{max}\\text{d}t \\end{align} \\]

and aligning the inequality signs so that all constraints become upper bounds,

\\[ \\begin{align} u_{1} - u_{0} &< \\dot u_{max}\\text{d}t \\\\\\ - u_{1} + u_{0} &< -\\dot u_{min}\\text{d}t \\\\\\ u_{2} - u_{1} &< \\dot u_{max}\\text{d}t \\\\\\ - u_{2} + u_{1} &< - \\dot u_{min}\\text{d}t \\end{align} \\]

We can obtain a matrix expression for the resulting constraint equation in the form of

\\[ \\begin{align} Ax \\leq b \\end{align} \\]

Thus, rearranging these inequalities to fit the form above, the constraints on \\(\\dot{u}\\) can be included at the level of a first-order approximation; a sketch of how such constraint matrices can be assembled is given below, followed by the explicit \\(n=3\\) form.
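The following is a minimal NumPy sketch that stacks these rate constraints into the \\(Ax \\leq b\\) form for an arbitrary horizon (rate_constraints is our own illustrative helper; its rows are grouped rather than interleaved, but the constraint set is identical to the \\(n=3\\) matrix below):

import numpy as np\n\ndef rate_constraints(n, du_min, du_max, dt):\n    # stack u_{k+1} - u_k <= du_max*dt and -(u_{k+1} - u_k) <= -du_min*dt\n    D = np.zeros((n - 1, n))\n    for k in range(n - 1):\n        D[k, k], D[k, k + 1] = -1.0, 1.0  # difference u_{k+1} - u_k\n    A = np.vstack([D, -D])\n    b = np.hstack([np.full(n - 1, du_max * dt), np.full(n - 1, -du_min * dt)])\n    return A, b\n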

\\[ \\begin{align} \\begin{bmatrix} -1 & 1 & 0 \\\\\\ 1 & -1 & 0 \\\\\\ 0 & -1 & 1 \\\\\\ 0 & 1 & -1 \\end{bmatrix}\\begin{bmatrix} u_{0} \\\\\\ u_{1} \\\\\\ u_{2} \\end{bmatrix} \\leq \\begin{bmatrix} \\dot u_{max}\\text{d}t \\\\\\ -\\dot u_{min}\\text{d}t \\\\\\ \\dot u_{max}\\text{d}t \\\\\\ -\\dot u_{min}\\text{d}t \\end{bmatrix} \\end{align} \\]"},{"location":"control/autoware_obstacle_collision_checker/","title":"obstacle_collision_checker","text":""},{"location":"control/autoware_obstacle_collision_checker/#obstacle_collision_checker","title":"obstacle_collision_checker","text":""},{"location":"control/autoware_obstacle_collision_checker/#purpose","title":"Purpose","text":"

obstacle_collision_checker is a module that checks for collisions between the predicted trajectory and obstacles, and publishes a diagnostic error if a collision is found.

"},{"location":"control/autoware_obstacle_collision_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_obstacle_collision_checker/#flow-chart","title":"Flow chart","text":""},{"location":"control/autoware_obstacle_collision_checker/#algorithms","title":"Algorithms","text":""},{"location":"control/autoware_obstacle_collision_checker/#check-data","title":"Check data","text":"

Check that obstacle_collision_checker receives the no-ground pointcloud (ground-removed obstacle pointcloud), the predicted trajectory, the reference trajectory, and the current velocity data.

"},{"location":"control/autoware_obstacle_collision_checker/#diagnostic-update","title":"Diagnostic update","text":"

If any collision is found on the predicted path, this module sets the diagnostic status to ERROR; otherwise it sets OK.

"},{"location":"control/autoware_obstacle_collision_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_obstacle_collision_checker/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::msg::Trajectory Reference trajectory ~/input/trajectory autoware_planning_msgs::msg::Trajectory Predicted trajectory /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"control/autoware_obstacle_collision_checker/#output","title":"Output","text":"Name Type Description ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"control/autoware_obstacle_collision_checker/#parameters","title":"Parameters","text":"Name Type Description Default value delay_time double Delay time of vehicle [s] 0.3 footprint_margin double Foot print margin [m] 0.0 max_deceleration double Max deceleration for ego vehicle to stop [m/s^2] 2.0 resample_interval double Interval for resampling trajectory [m] 0.3 search_radius double Search distance from trajectory to point cloud [m] 5.0"},{"location":"control/autoware_obstacle_collision_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform a proper collision check, it is necessary to obtain an accurately predicted trajectory and obstacle pointclouds without noise.

"},{"location":"control/autoware_operation_mode_transition_manager/","title":"autoware_operation_mode_transition_manager","text":""},{"location":"control/autoware_operation_mode_transition_manager/#autoware_operation_mode_transition_manager","title":"autoware_operation_mode_transition_manager","text":""},{"location":"control/autoware_operation_mode_transition_manager/#purpose-use-cases","title":"Purpose / Use cases","text":"

This module is responsible for managing the different modes of operation for the Autoware system. The possible modes are:

There is also an In Transition state that occurs during each mode transition. During this state, the transition to the new operator is not yet complete, and the previous operator is still responsible for controlling the system until the transition completes. Some actions may be restricted during the In Transition state, such as sudden braking or steering (this is enforced by the vehicle_cmd_gate).

"},{"location":"control/autoware_operation_mode_transition_manager/#features","title":"Features","text":" "},{"location":"control/autoware_operation_mode_transition_manager/#design","title":"Design","text":"

A rough design of the relationship between `autoware_operation_mode_transition_manager` and the other nodes is shown below.

A more detailed structure is below.

Here we see that autoware_operation_mode_transition_manager has multiple state transitions as follows

"},{"location":"control/autoware_operation_mode_transition_manager/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_operation_mode_transition_manager/#inputs","title":"Inputs","text":"

For the mode transition:

For the transition availability/completion check:

For the backward compatibility (to be removed):

"},{"location":"control/autoware_operation_mode_transition_manager/#outputs","title":"Outputs","text":" "},{"location":"control/autoware_operation_mode_transition_manager/#parameters","title":"Parameters","text":"Name Type Description Default Range transition_timeout float If the state transition is not completed within this time, it is considered a transition failure. 10.0 \u22650.0 frequency_hz float running hz 10.0 \u22650.0 enable_engage_on_driving boolean Set true if you want to engage the autonomous driving mode while the vehicle is driving. If set to false, it will deny Engage in any situation where the vehicle speed is not zero. Note that if you use this feature without adjusting the parameters, it may cause issues like sudden deceleration. Before using, please ensure the engage condition and the vehicle_cmd_gate transition filter are appropriately adjusted. false N/A check_engage_condition boolean If false, autonomous transition is always available. false N/A nearest_dist_deviation_threshold float distance threshold used to find nearest trajectory point [m] 3.0 \u22650.0 nearest_yaw_deviation_threshold float angle threshold used to find nearest trajectory point [rad] 1.57 \u2265-3.142 engage_acceptable_limits.allow_autonomous_in_stopped boolean If true, autonomous transition is available when the vehicle is stopped even if other checks fail. true N/A engage_acceptable_limits.dist_threshold float The distance between the trajectory and ego vehicle must be within this distance for Autonomous transition. 1.5 \u22650.0 engage_acceptable_limits.yaw_threshold float The yaw angle between trajectory and ego vehicle must be within this threshold for Autonomous transition. 0.524 \u2265-3.142 engage_acceptable_limits.speed_upper_threshold float The velocity deviation between control command and ego vehicle must be within this threshold for Autonomous transition. 10.0 N/A engage_acceptable_limits.speed_lower_threshold float The velocity deviation between control command and ego vehicle must be within this threshold for Autonomous transition. -10.0 N/A engage_acceptable_limits.acc_threshold float The control command acceleration must be less than this threshold for Autonomous transition. 1.5 \u22650.0 engage_acceptable_limits.lateral_acc_threshold float The control command lateral acceleration must be less than this threshold for Autonomous transition. 1.0 \u22650.0 engage_acceptable_limits.lateral_acc_diff_threshold float The lateral acceleration deviation between the control command must be less than this threshold for Autonomous transition. 0.5 \u22650.0 stable_check.duration float The stable condition must be satisfied for this duration to complete the transition. 0.1 \u22650.0 stable_check.dist_threshold float The distance between the trajectory and ego vehicle must be within this distance to complete Autonomous transition. 1.5 \u22650.0 stable_check.speed_upper_threshold float The velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. 2.0 N/A stable_check.speed_lower_threshold float The velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. -2.0 N/A stable_check.yaw_threshold float The yaw angle between trajectory and ego vehicle must be within this threshold to complete Autonomous transition. 
0.262 \u2265-3.142 Name Type Description Default value transition_timeout double If the state transition is not completed within this time, it is considered a transition failure. 10.0 frequency_hz double running hz 10.0 enable_engage_on_driving bool Set true if you want to engage the autonomous driving mode while the vehicle is driving. If set to false, it will deny Engage in any situation where the vehicle speed is not zero. Note that if you use this feature without adjusting the parameters, it may cause issues like sudden deceleration. Before using, please ensure the engage condition and the vehicle_cmd_gate transition filter are appropriately adjusted. false check_engage_condition bool If false, autonomous transition is always available. false nearest_dist_deviation_threshold double distance threshold used to find nearest trajectory point [m] 3.0 nearest_yaw_deviation_threshold double angle threshold used to find nearest trajectory point [rad] 1.57

For engage_acceptable_limits related parameters:

Name Type Description Default value allow_autonomous_in_stopped bool If true, autonomous transition is available when the vehicle is stopped even if other checks fail. true dist_threshold double the distance between the trajectory and ego vehicle must be within this distance for Autonomous transition. 1.5 yaw_threshold double the yaw angle between trajectory and ego vehicle must be within this threshold for Autonomous transition. 0.524 speed_upper_threshold double the velocity deviation between control command and ego vehicle must be within this threshold for Autonomous transition. 10.0 speed_lower_threshold double the velocity deviation between the control command and ego vehicle must be within this threshold for Autonomous transition. -10.0 acc_threshold double the control command acceleration must be less than this threshold for Autonomous transition. 1.5 lateral_acc_threshold double the control command lateral acceleration must be less than this threshold for Autonomous transition. 1.0 lateral_acc_diff_threshold double the lateral acceleration deviation between the control command must be less than this threshold for Autonomous transition. 0.5

For stable_check related parameters:

Name Type Description Default value duration double the stable condition must be satisfied for this duration to complete the transition. 0.1 dist_threshold double the distance between the trajectory and ego vehicle must be within this distance to complete Autonomous transition. 1.5 yaw_threshold double the yaw angle between trajectory and ego vehicle must be within this threshold to complete Autonomous transition. 0.262 speed_upper_threshold double the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. 2.0 speed_lower_threshold double the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. -2.0"},{"location":"control/autoware_operation_mode_transition_manager/#engage-check-behavior-on-each-parameter-setting","title":"Engage check behavior on each parameter setting","text":"

This matrix describes the scenarios in which the vehicle can be engaged based on the combinations of parameter settings:

enable_engage_on_driving check_engage_condition allow_autonomous_in_stopped Scenarios where engage is permitted x x x Only when the vehicle is stationary. x x o Only when the vehicle is stationary. x o x When the vehicle is stationary and all engage conditions are met. x o o Only when the vehicle is stationary. o x x At any time (Caution: Not recommended). o x o At any time (Caution: Not recommended). o o x When all engage conditions are met, regardless of vehicle status. o o o When all engage conditions are met or the vehicle is stationary."},{"location":"control/autoware_operation_mode_transition_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/autoware_pid_longitudinal_controller/","title":"PID Longitudinal Controller","text":""},{"location":"control/autoware_pid_longitudinal_controller/#pid-longitudinal-controller","title":"PID Longitudinal Controller","text":""},{"location":"control/autoware_pid_longitudinal_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

The longitudinal_controller computes the target acceleration to achieve the target velocity set at each point of the target trajectory using feed-forward/feedback control.

It also contains a slope force correction that takes into account road slope information, and a delay compensation function. It is assumed that the target acceleration calculated here will be properly realized by the vehicle interface.

Note that the use of this module is not mandatory for Autoware if the vehicle supports the \"target speed\" interface.

"},{"location":"control/autoware_pid_longitudinal_controller/#design-inner-workings-algorithms","title":"Design / Inner-workings / Algorithms","text":""},{"location":"control/autoware_pid_longitudinal_controller/#states","title":"States","text":"

This module has the four states shown below in order to handle special processing required in specific situations.

The state transition diagram is shown below.

"},{"location":"control/autoware_pid_longitudinal_controller/#logics","title":"Logics","text":""},{"location":"control/autoware_pid_longitudinal_controller/#control-block-diagram","title":"Control Block Diagram","text":""},{"location":"control/autoware_pid_longitudinal_controller/#feedforward-ff","title":"FeedForward (FF)","text":"

The reference acceleration set in the trajectory and slope compensation terms are output as a feedforward. Under ideal conditions with no modeling error, this FF term alone should be sufficient for velocity tracking.

Tracking errors caused by modeling or discretization errors are removed by the feedback control (currently a PID controller).

"},{"location":"control/autoware_pid_longitudinal_controller/#brake-keeping","title":"Brake keeping","text":"

From the viewpoint of ride comfort, stopping with 0 acceleration is important because it reduces the impact of braking. However, if the target acceleration when stopping is 0, the vehicle may cross over the stop line or accelerate a little in front of the stop line due to vehicle model error or gradient estimation error.

For reliable stopping, the target acceleration calculated by the FeedForward system is limited to a negative acceleration when stopping.

"},{"location":"control/autoware_pid_longitudinal_controller/#slope-compensation","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.
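As a sketch, with the slope angle \\(\\theta_{slope}\\) defined as positive for an uphill slope (see the note below) and \\(g\\) the gravitational acceleration, the compensated command takes the form (the symbol names here are illustrative, not taken from the implementation):

\\[ \\begin{align} a_{cmd} = a_{target} + g\\sin\\theta_{slope} \\end{align} \\]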

There are two sources of the slope information, which can be switched by a parameter.

We also offer the option to switch between these depending on driving conditions.

Note: This function works correctly only in a vehicle system that does not have acceleration feedback in the low-level control system.

This compensation adds gravity correction to the target acceleration, resulting in an output value that is no longer equal to the target acceleration that the autonomous driving system desires. Therefore, it conflicts with the role of the acceleration feedback in the low-level controller. For instance, if the vehicle is attempting to start with an acceleration of 1.0 m/s^2 and a gravity correction of -1.0 m/s^2 is applied, the output value will be 0. If this output value is mistakenly treated as the target acceleration, the vehicle will not start.

A suitable example of a vehicle system for the slope compensation function is one in which the output acceleration from the longitudinal_controller is converted into a target accel/brake pedal input without any feedback. In this case, the output acceleration is used only as a feedforward term to calculate the target pedal, and hence the issue mentioned above does not arise.

Note: The angle of the slope is defined as positive for an uphill slope, while the pitch angle of the ego pose is defined as negative when facing upward. The two definitions therefore have opposite signs.

"},{"location":"control/autoware_pid_longitudinal_controller/#pid-control","title":"PID control","text":"

For deviations that cannot be handled by FeedForward control, such as model errors, PID control is used to construct a feedback system.

This PID control calculates the target acceleration from the deviation between the current ego-velocity and the target velocity.

This PID logic has a maximum value for the output of each term. This is to prevent the following:

Note: by default, the integral term in the control system is not accumulated when the vehicle is stationary. This precautionary measure aims to prevent unintended accumulation of the integral term in scenarios where Autoware assumes the vehicle is engaged, but an external system has immobilized the vehicle to initiate startup procedures.

However, certain situations may arise, such as when the vehicle encounters a depression in the road surface during startup or if the slope compensation is inaccurately estimated (lower than necessary), leading to a failure to initiate motion. To address these scenarios, it is possible to activate error integration even when the vehicle is at rest by setting the enable_integration_at_low_speed parameter to true.

When enable_integration_at_low_speed is set to true, the PID controller will initiate integration of the acceleration error after a specified duration defined by the time_threshold_before_pid_integration parameter has elapsed without the vehicle surpassing a minimum velocity set by the current_vel_threshold_pid_integration parameter.

The presence of the time_threshold_before_pid_integration parameter is important for practical PID tuning. Integrating the error when the vehicle is stationary or at low speed can complicate PID tuning. This parameter effectively introduces a delay before the integral part becomes active, preventing it from kicking in immediately. This delay allows for more controlled and effective tuning of the PID controller.
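A minimal Python sketch of this gating logic, using the parameter names above (the function itself is our paraphrase, not the actual implementation):

def allow_integration(current_vel, stationary_time, enable_integration_at_low_speed, current_vel_threshold_pid_integration, time_threshold_before_pid_integration):\n    # decide whether the velocity error may be accumulated into the I-term\n    if abs(current_vel) > current_vel_threshold_pid_integration:\n        return True  # vehicle is moving: integrate normally\n    if enable_integration_at_low_speed:\n        # at low speed, integrate only after the configured delay has elapsed\n        return stationary_time > time_threshold_before_pid_integration\n    return False  # default: no accumulation while (almost) stationary\n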

At present, PID control is adopted in view of the trade-off between development/maintenance cost and performance. It may be replaced by a higher-performance controller (adaptive control or robust control) in future development.

"},{"location":"control/autoware_pid_longitudinal_controller/#time-delay-compensation","title":"Time delay compensation","text":"

At high speeds, the delay of actuator systems such as gas pedals and brakes has a significant impact on driving accuracy. Depending on the actuating principle of the vehicle, the mechanism that physically controls the gas pedal and brake typically has a delay of about a hundred milliseconds.

In this controller, the predicted ego-velocity and the target velocity after the delay time are calculated and used for the feedback to address the time delay problem.
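A minimal sketch of the idea, assuming the current acceleration is held constant over the delay interval (a simplification for illustration; the actual prediction may differ):

def predicted_velocity(current_vel, current_acc, delay_compensation_time):\n    # predict the ego velocity after the actuator delay; the feedback error is then\n    # computed against the target velocity at the same future time\n    return current_vel + current_acc * delay_compensation_time\n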

"},{"location":"control/autoware_pid_longitudinal_controller/#slope-compensation_1","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.

There are two sources of the slope information, which can be switched by a parameter.

"},{"location":"control/autoware_pid_longitudinal_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Smoothed target velocity and its acceleration shall be set in the trajectory
    1. The velocity command is not smoothed inside the controller (only noise may be removed).
    2. For step-like target signal, tracking is performed as fast as possible.
  2. The vehicle velocity must be an appropriate value
    1. The ego-velocity must be a signed-value corresponding to the forward/backward direction
    2. The ego-velocity should be given with appropriate noise processing.
    3. If there is a large amount of noise in the ego-velocity, the tracking performance will be significantly reduced.
  3. The output of this controller must be achieved by later modules (e.g. vehicle interface).
    1. If the vehicle interface does not have the target velocity or acceleration interface (e.g., the vehicle only has a gas pedal and brake interface), an appropriate conversion must be done after this controller.
"},{"location":"control/autoware_pid_longitudinal_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_pid_longitudinal_controller/#input","title":"Input","text":"

Set the following from the controller_node

"},{"location":"control/autoware_pid_longitudinal_controller/#output","title":"Output","text":"

Return LongitudinalOutput which contains the following to the controller node

"},{"location":"control/autoware_pid_longitudinal_controller/#pidcontroller-class","title":"PIDController class","text":"

The PIDController class is straightforward to use. First, gains and limits must be set (using setGains() and setLimits()) for the proportional (P), integral (I), and derivative (D) components. Then, the control output can be calculated by providing the current error and the time step duration to the calculate() function.
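A minimal Python sketch of the described usage pattern (this mirrors the interface in spirit only; the actual class is implemented in C++):

class PIDController:\n    def __init__(self):\n        self._gains = None\n        self._limits = None\n        self._integral = 0.0\n        self._prev_error = None\n\n    def set_gains(self, kp, ki, kd):\n        self._gains = (kp, ki, kd)\n\n    def set_limits(self, max_p, min_p, max_i, min_i, max_d, min_d):\n        self._limits = (max_p, min_p, max_i, min_i, max_d, min_d)\n\n    def calculate(self, error, dt):\n        kp, ki, kd = self._gains\n        max_p, min_p, max_i, min_i, max_d, min_d = self._limits\n        self._integral += error * dt\n        d_error = 0.0 if self._prev_error is None else (error - self._prev_error) / dt\n        self._prev_error = error\n        # each term is saturated individually, as described above\n        p = min(max(kp * error, min_p), max_p)\n        i = min(max(ki * self._integral, min_i), max_i)\n        d = min(max(kd * d_error, min_d), max_d)\n        return p + i + d\n

For example, with the default DRIVE gains from the tables below, one would call set_gains(1.0, 0.1, 0.0), set the per-term limits, and then call calculate(velocity_error, dt) each control cycle.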

"},{"location":"control/autoware_pid_longitudinal_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/longitudinal_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.

Name Type Description Default value delay_compensation_time double delay for longitudinal control [s] 0.17 enable_smooth_stop bool flag to enable transition to STOPPING true enable_overshoot_emergency bool flag to enable transition to EMERGENCY when the ego is over the stop line with a certain distance. See emergency_state_overshoot_stop_dist. true enable_large_tracking_error_emergency bool flag to enable transition to EMERGENCY when the closest trajectory point search is failed due to a large deviation between trajectory and ego pose. true enable_slope_compensation bool flag to modify output acceleration for slope compensation. The source of the slope angle can be selected from ego-pose or trajectory angle. See use_trajectory_for_pitch_calculation. true enable_brake_keeping_before_stop bool flag to keep a certain acceleration during DRIVE state before the ego stops. See Brake keeping. false enable_keep_stopped_until_steer_convergence bool flag to keep stopped condition until until the steer converges. true max_acc double max value of output acceleration [m/s^2] 3.0 min_acc double min value of output acceleration [m/s^2] -5.0 max_jerk double max value of jerk of output acceleration [m/s^3] 2.0 min_jerk double min value of jerk of output acceleration [m/s^3] -5.0 use_trajectory_for_pitch_calculation bool If true, the slope is estimated from trajectory z-level. Otherwise the pitch angle of the ego pose is used. false lpf_pitch_gain double gain of low-pass filter for pitch estimation 0.95 max_pitch_rad double max value of estimated pitch [rad] 0.1 min_pitch_rad double min value of estimated pitch [rad] -0.1"},{"location":"control/autoware_pid_longitudinal_controller/#state-transition","title":"State transition","text":"Name Type Description Default value drive_state_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 0.5 drive_state_offset_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 1.0 stopping_state_stop_dist double The state will transit to STOPPING when the distance to the stop point is shorter than stopping_state_stop_dist [m] 0.5 stopped_state_entry_vel double threshold of the ego velocity in transition to the STOPPED state [m/s] 0.01 stopped_state_entry_acc double threshold of the ego acceleration in transition to the STOPPED state [m/s^2] 0.1 emergency_state_overshoot_stop_dist double If enable_overshoot_emergency is true and the ego is emergency_state_overshoot_stop_dist-meter ahead of the stop point, the state will transit to EMERGENCY. 
[m] 1.5"},{"location":"control/autoware_pid_longitudinal_controller/#drive-parameter","title":"DRIVE Parameter","text":"Name Type Description Default value kp double p gain for longitudinal control 1.0 ki double i gain for longitudinal control 0.1 kd double d gain for longitudinal control 0.0 max_out double max value of PID's output acceleration during DRIVE state [m/s^2] 1.0 min_out double min value of PID's output acceleration during DRIVE state [m/s^2] -1.0 max_p_effort double max value of acceleration with p gain 1.0 min_p_effort double min value of acceleration with p gain -1.0 max_i_effort double max value of acceleration with i gain 0.3 min_i_effort double min value of acceleration with i gain -0.3 max_d_effort double max value of acceleration with d gain 0.0 min_d_effort double min value of acceleration with d gain 0.0 lpf_vel_error_gain double gain of low-pass filter for velocity error 0.9 enable_integration_at_low_speed bool Whether to enable integration of acceleration errors when the vehicle speed is lower than current_vel_threshold_pid_integration or not. false current_vel_threshold_pid_integration double Velocity error is integrated for I-term only when the absolute value of current velocity is larger than this parameter. [m/s] 0.5 time_threshold_before_pid_integration double How much time without the vehicle moving must past to enable PID error integration. [s] 5.0 brake_keeping_acc double If enable_brake_keeping_before_stop is true, a certain acceleration is kept during DRIVE state before the ego stops [m/s^2] See Brake keeping. 0.2"},{"location":"control/autoware_pid_longitudinal_controller/#stopping-parameter-smooth-stop","title":"STOPPING Parameter (smooth stop)","text":"

Smooth stop is enabled if enable_smooth_stop is true. In smooth stop, a strong acceleration (strong_acc) is output first to decrease the ego velocity. Then a weak acceleration (weak_acc) is output to stop smoothly by decreasing the ego jerk. If the ego does not stop within a certain time, or runs some meters past the stop point, a weak acceleration to stop right now (weak_stop_acc) is output. If the ego is still running after that, a strong acceleration (strong_stop_acc) to stop right now is output.
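A minimal Python sketch of this staged selection, using shortened versions of the parameter names from the table below (the control flow is our paraphrase of the description above; dist_to_stop is a signed remaining distance that becomes negative once past the stop point):

def smooth_stop_acc(running_fast, elapsed, dist_to_stop, strong_acc, weak_acc, weak_stop_acc, strong_stop_acc, weak_stop_time, weak_stop_dist, strong_stop_dist):\n    if running_fast:\n        return strong_acc            # first decrease the ego velocity\n    if elapsed > weak_stop_time or dist_to_stop < weak_stop_dist:\n        if dist_to_stop < strong_stop_dist:\n            return strong_stop_acc   # still running well past the stop point\n        return weak_stop_acc         # stop right now, but gently\n    return weak_acc                  # decrease jerk and stop smoothly\n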

Name Type Description Default value smooth_stop_max_strong_acc double max strong acceleration [m/s^2] -0.5 smooth_stop_min_strong_acc double min strong acceleration [m/s^2] -0.8 smooth_stop_weak_acc double weak acceleration [m/s^2] -0.3 smooth_stop_weak_stop_acc double weak acceleration to stop right now [m/s^2] -0.8 smooth_stop_strong_stop_acc double strong acceleration to be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m/s^2] -3.4 smooth_stop_max_fast_vel double max fast vel to judge the ego is running fast [m/s]. If the ego is running fast, strong acceleration will be output. 0.5 smooth_stop_min_running_vel double min ego velocity to judge if the ego is running or not [m/s] 0.01 smooth_stop_min_running_acc double min ego acceleration to judge if the ego is running or not [m/s^2] 0.01 smooth_stop_weak_stop_time double max time to output weak acceleration [s]. After this, strong acceleration will be output. 0.8 smooth_stop_weak_stop_dist double Weak acceleration will be output when the ego is smooth_stop_weak_stop_dist-meter before the stop point. [m] -0.3 smooth_stop_strong_stop_dist double Strong acceleration will be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m] -0.5"},{"location":"control/autoware_pid_longitudinal_controller/#stopped-parameter","title":"STOPPED Parameter","text":"

The STOPPED state assumes that the vehicle is completely stopped with the brakes fully applied. Therefore, stopped_acc should be set to a value that allows the vehicle to apply the strongest possible brake. If stopped_acc is not sufficiently low, there is a possibility of sliding down on steep slopes.

Name Type Description Default value stopped_vel double target velocity in STOPPED state [m/s] 0.0 stopped_acc double target acceleration in STOPPED state [m/s^2] -3.4 stopped_jerk double target jerk in STOPPED state [m/s^3] -5.0"},{"location":"control/autoware_pid_longitudinal_controller/#emergency-parameter","title":"EMERGENCY Parameter","text":"Name Type Description Default value emergency_vel double target velocity in EMERGENCY state [m/s] 0.0 emergency_acc double target acceleration in an EMERGENCY state [m/s^2] -5.0 emergency_jerk double target jerk in an EMERGENCY state [m/s^3] -3.0"},{"location":"control/autoware_pid_longitudinal_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/autoware_pid_longitudinal_controller/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/autoware_pid_longitudinal_controller/#related-issues","title":"Related issues","text":""},{"location":"control/autoware_pure_pursuit/","title":"Pure Pursuit Controller","text":""},{"location":"control/autoware_pure_pursuit/#pure-pursuit-controller","title":"Pure Pursuit Controller","text":"

The Pure Pursuit Controller module calculates the steering angle for tracking a desired trajectory using the pure pursuit algorithm. This is used as a lateral controller plugin in the autoware_trajectory_follower_node.

"},{"location":"control/autoware_pure_pursuit/#inputs","title":"Inputs","text":"

Set the following from the controller_node

"},{"location":"control/autoware_pure_pursuit/#outputs","title":"Outputs","text":"

Return LateralOutput which contains the following to the controller node

"},{"location":"control/autoware_shift_decider/","title":"Shift Decider","text":""},{"location":"control/autoware_shift_decider/#shift-decider","title":"Shift Decider","text":""},{"location":"control/autoware_shift_decider/#purpose","title":"Purpose","text":"

autoware_shift_decider is a module that decides the gear shift command from the Ackermann control command.

"},{"location":"control/autoware_shift_decider/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/autoware_shift_decider/#flow-chart","title":"Flow chart","text":""},{"location":"control/autoware_shift_decider/#algorithms","title":"Algorithms","text":""},{"location":"control/autoware_shift_decider/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_shift_decider/#input","title":"Input","text":"Name Type Description ~/input/control_cmd autoware_control_msgs::msg::Control Control command for vehicle."},{"location":"control/autoware_shift_decider/#output","title":"Output","text":"Name Type Description ~output/gear_cmd autoware_vehicle_msgs::msg::GearCommand Gear for drive forward / backward."},{"location":"control/autoware_shift_decider/#parameters","title":"Parameters","text":"

none.

"},{"location":"control/autoware_shift_decider/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/","title":"Index","text":""},{"location":"control/autoware_smart_mpc_trajectory_follower/#smart-mpc-trajectory-follower","title":"Smart MPC Trajectory Follower","text":"

Smart MPC (Model Predictive Control) is a control algorithm that combines model predictive control and machine learning. While inheriting the advantages of model predictive control, it solves its disadvantage of modeling difficulty with a data-driven method using machine learning.

This technology makes it relatively easy to operate model predictive control, which is expensive to implement, as long as an environment for collecting data can be prepared.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#provided-features","title":"Provided features","text":"

This package provides smart MPC logic for path-following control as well as mechanisms for learning and evaluation. These features are described below.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#trajectory-following-control-based-on-ilqrmppi","title":"Trajectory following control based on iLQR/MPPI","text":"

The control mode can be selected from \"ilqr\", \"mppi\", or \"mppi_ilqr\", and can be set as mpc_parameter:system:mode in mpc_param.yaml. In \"mppi_ilqr\" mode, the initial value of iLQR is given by the MPPI solution.

[!NOTE] With the default settings, the performance of \"mppi\" mode is limited due to an insufficient number of samples. This issue is being addressed with ongoing work to introduce GPU support.

To perform a simulation, run the following command:

ros2 launch autoware_launch planning_simulator.launch.xml map_path:=$HOME/autoware_map/sample-map-planning vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit trajectory_follower_mode:=smart_mpc_trajectory_follower\n

[!NOTE] When running with the nominal model set in nominal_param.yaml, set trained_model_parameter:control_application:use_trained_model to false in trained_model_param.yaml. To run using the trained model, set trained_model_parameter:control_application:use_trained_model to true, but the trained model must have been generated according to the following procedure.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#training-of-model-and-reflection-in-control","title":"Training of model and reflection in control","text":"

To obtain training data, start autoware, perform a drive, and record rosbag data with the following command.

ros2 bag record /localization/kinematic_state /localization/acceleration /vehicle/status/steering_status /control/command/control_cmd /control/trajectory_follower/control_cmd /control/trajectory_follower/lane_departure_checker_node/debug/deviation/lateral /control/trajectory_follower/lane_departure_checker_node/debug/deviation/yaw /system/operation_mode/state /vehicle/status/control_mode /sensing/imu/imu_data /debug_mpc_x_des /debug_mpc_y_des /debug_mpc_v_des /debug_mpc_yaw_des /debug_mpc_acc_des /debug_mpc_steer_des /debug_mpc_X_des_converted /debug_mpc_x_current /debug_mpc_error_prediction /debug_mpc_max_trajectory_err /debug_mpc_emergency_stop_mode /debug_mpc_goal_stop_mode /debug_mpc_total_ctrl_time /debug_mpc_calc_u_opt_time\n

Move rosbag2.bash to the rosbag directory recorded above and execute the following command in that directory:

bash rosbag2.bash\n

This converts rosbag data into CSV format for training models.

[!NOTE] Note that a large number of terminals are automatically opened at runtime, but they are automatically closed after rosbag data conversion is completed. From the time you begin this process until all terminals are closed, autoware should not be running.

Alternatively, the same result can be obtained by executing the following commands in a python environment:

from autoware_smart_mpc_trajectory_follower.training_and_data_check import train_drive_NN_model\nmodel_trainer = train_drive_NN_model.train_drive_NN_model()\nmodel_trainer.transform_rosbag_to_csv(rosbag_dir)\n

Here, rosbag_dir represents the rosbag directory. At this time, all CSV files in rosbag_dir are automatically deleted first.

We move on to an explanation of how the model is trained. If trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to true, training is performed on models that include LSTM, and if it is set to false, training is performed on models that do not include LSTM. When using LSTM, cell states and hidden states are updated based on historical time series data and reflected in the prediction.

Given the paths of the rosbag directories used for training and validation, dir_0, dir_1, dir_2, ..., dir_val_0, dir_val_1, dir_val_2, ..., and the directory save_dir where the models are to be saved, the model can be trained and saved in the python environment as follows:

from autoware_smart_mpc_trajectory_follower.training_and_data_check import train_drive_NN_model\nmodel_trainer = train_drive_NN_model.train_drive_NN_model()\nmodel_trainer.add_data_from_csv(dir_0, add_mode=\"as_train\")\nmodel_trainer.add_data_from_csv(dir_1, add_mode=\"as_train\")\nmodel_trainer.add_data_from_csv(dir_2, add_mode=\"as_train\")\n...\nmodel_trainer.add_data_from_csv(dir_val_0, add_mode=\"as_val\")\nmodel_trainer.add_data_from_csv(dir_val_1, add_mode=\"as_val\")\nmodel_trainer.add_data_from_csv(dir_val_2, add_mode=\"as_val\")\n...\nmodel_trainer.get_trained_model()\nmodel_trainer.save_models(save_dir)\n

If add_mode is not specified or validation data is not added, the training data is split to be used for training and validation.

After performing the polynomial regression, the NN can be trained on the residuals as follows:

model_trainer.get_trained_model(use_polynomial_reg=True)\n

[!NOTE] In the default setting, regression is performed by several preselected polynomials. When use_selected_polynomial=False is set as the argument of get_trained_model, the deg argument allows setting the maximum degree of the polynomial to be used.

To perform only polynomial regression without using an NN model, run the following command:

model_trainer.get_trained_model(use_polynomial_reg=True,force_NN_model_to_zero=True)\n

Move model_for_test_drive.pth and polynomial_reg_info.npz saved in save_dir to the home directory and set trained_model_parameter:control_application:use_trained_model in trained_model_param.yaml to true to reflect the trained model in the control.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#performance-evaluation","title":"Performance evaluation","text":"

Here, as an example, we describe the verification of the adaptive performance when the wheel base of the sample_vehicle is 2.79 m, but an incorrect value of 2.0 m is given to the controller side. To give the controller 2.0 m as the wheel base, set the value of nominal_parameter:vehicle_info:wheel_base in nominal_param.yaml to 2.0, and run the following command:

python3 -m smart_mpc_trajectory_follower.clear_pycache\n
"},{"location":"control/autoware_smart_mpc_trajectory_follower/#test-on-autoware","title":"Test on autoware","text":"

To perform a control test on autoware with the nominal model before training, make sure that trained_model_parameter:control_application:use_trained_model in trained_model_param.yaml is false and launch autoware in the manner described in \"Trajectory following control based on iLQR/MPPI\". This time, the following route will be used for the test:

Record rosbag and train the model in the manner described in "Training of model and reflection in control", and move the generated files model_for_test_drive.pth and polynomial_reg_info.npz to the home directory. Sample models, which work under the condition that trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to true, can be obtained at sample_models/wheel_base_changed.

[!NOTE] Although the data used for training is small, for the sake of simplicity, we will see how much performance can be improved with this amount of data.

To control using the trained model obtained here, set trained_model_parameter:control_application:use_trained_model to true, start autoware in the same way, and drive the same route while recording rosbag. After the driving is complete, convert the rosbag file to CSV format using the method described in "Training of model and reflection in control". A plot of the lateral deviation can be obtained by running the lateral_error_visualize function in control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/training_and_data_check/data_checker.ipynb for the nominal and trained model rosbag files rosbag_nominal and rosbag_trained, respectively, as follows:

lateral_error_visualize(dir_name=rosbag_nominal,ylim=[-1.2,1.2])\nlateral_error_visualize(dir_name=rosbag_trained,ylim=[-1.2,1.2])\n

The following results were obtained.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#test-on-python-simulator","title":"Test on python simulator","text":"

First, to give wheel base 2.79 m in the python simulator, create the following file and save it in control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator with the name sim_setting.json:

{ \"wheel_base\": 2.79 }\n

Next, after moving to control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator, run the following command to test slalom driving on the python simulator with the nominal control:

python3 run_python_simulator.py nominal_test\n

The result of the driving is stored in test_python_nominal_sim.

The following results were obtained.

The center of the upper row represents the lateral deviation.

To perform training using figure eight driving data collected under pure pursuit control, and then drive based on the obtained model, run the following command:

python3 run_python_simulator.py\n

The result of the driving is stored in test_python_trined_sim.

When trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to true, the following results were obtained.

When trained_model_parameter:memory_for_training:use_memory_for_training in trained_model_param.yaml is set to false, the following results were obtained.

It can be seen that the lateral deviation has improved significantly. However, the difference in driving with and without LSTM is not very apparent.

To see the difference, for example, we can experiment with parameters such as steer_time_delay.

First, to restore nominal model settings to default values, set the value of nominal_parameter:vehicle_info:wheel_base in nominal_param.yaml to 2.79, and run the following command:

python3 -m smart_mpc_trajectory_follower.clear_pycache\n

Next, modify sim_setting.json as follows:

{ \"steer_time_delay\": 1.01 }\n

In this way, an experiment is performed when steer_time_delay is set to 1.01 sec.

The result of the driving using the nominal model is as follows:

The result of the driving using the trained model with LSTM is as follows:

The result of the driving using the trained model without LSTM is as follows:

It can be seen that the performance with the model that includes LSTM is significantly better than with the model that does not.

The parameters that can be passed to the python simulator are as follows.

Parameter Type Description steer_bias float steer bias [rad] steer_rate_lim float steer rate limit [rad/s] vel_rate_lim float acceleration limit [m/s^2] wheel_base float wheel base [m] steer_dead_band float steer dead band [rad] adaptive_gear_ratio_coef list[float] List of floats of length 6 specifying information on speed-dependent gear ratios from tire angle to steering wheel angle. acc_time_delay float acceleration time delay [s] steer_time_delay float steer time delay [s] acc_time_constant float acceleration time constant [s] steer_time_constant float steer time constant [s] accel_map_scale float Parameter that magnifies the corresponding distortion from acceleration input values to actual acceleration realizations. Correspondence information is kept in control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator/accel_map.csv. acc_scaling float acceleration scaling steer_scaling float steer scaling vehicle_type int Take values from 0 to 4 for pre-designed vehicle types. A description of each vehicle type is given below.

For example, to give the simulation side 0.01 [rad] of steer bias and 0.001 [rad] of steer dead band, edit the sim_setting.json as follows.

{ \"steer_bias\": 0.01, \"steer_dead_band\": 0.001 }\n
"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_0","title":"vehicle_type_0","text":"

This vehicle type matches the default vehicle type used in the control.

Parameter value wheel_base 2.79 acc_time_delay 0.1 steer_time_delay 0.27 acc_time_constant 0.1 steer_time_constant 0.24 acc_scaling 1.0"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_1","title":"vehicle_type_1","text":"

This vehicle type is intended for a heavy bus.

Parameter value wheel_base 4.76 acc_time_delay 1.0 steer_time_delay 1.0 acc_time_constant 1.0 steer_time_constant 1.0 acc_scaling 0.2"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_2","title":"vehicle_type_2","text":"

This vehicle type is intended for a light bus.

Parameter value wheel_base 4.76 acc_time_delay 0.5 steer_time_delay 0.5 acc_time_constant 0.5 steer_time_constant 0.5 acc_scaling 0.5"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_3","title":"vehicle_type_3","text":"

This vehicle type is intended for a small vehicle.

Parameter value wheel_base 1.335 acc_time_delay 0.3 steer_time_delay 0.3 acc_time_constant 0.3 steer_time_constant 0.3 acc_scaling 1.5"},{"location":"control/autoware_smart_mpc_trajectory_follower/#vehicle_type_4","title":"vehicle_type_4","text":"

This vehicle type is intended for a small robot.

Parameter value wheel_base 0.395 acc_time_delay 0.2 steer_time_delay 0.2 acc_time_constant 0.2 steer_time_constant 0.2 acc_scaling 1.0"},{"location":"control/autoware_smart_mpc_trajectory_follower/#auto-test-on-python-simulator","title":"Auto test on python simulator","text":"

Here, we describe a method for testing adaptive performance by giving the simulation side a predefined range of model parameters while the control side is given constant model parameters.

To run a driving experiment within the parameter change range set in run_sim.py, for example, move to control/autoware_smart_mpc_trajectory_follower/autoware_smart_mpc_trajectory_follower/python_simulator and run the following command:

python3 run_sim.py --param_name steer_bias\n

Here we described the experimental procedure for steer bias, and the same method can be used for other parameters.

To run the test for all parameters except limits at once, run the following command:

python3 run_auto_test.py\n

The results are stored in the auto_test directory. After the executions were completed, the following results were obtained by running plot_auto_test_result.ipynb:

The orange line shows the intermediate model trained using pure pursuit figure eight drive, and the blue line shows the final model trained using data from both the intermediate model and the figure eight drive. In most cases, sufficient performance is obtained, but for vehicle_type_1, which is intended for a heavy bus, a lateral deviation of about 2 m was observed, which is not satisfactory.

In run_sim.py, the following parameters can be set:

Parameter Type Description USE_TRAINED_MODEL_DIFF bool Whether the derivative of the trained model is reflected in the control DATA_COLLECTION_MODE DataCollectionMode Which method will be used to collect the training data: \"DataCollectionMode.ff\": Straight line driving with feed-forward input \"DataCollectionMode.pp\": Figure eight driving with pure pursuit control \"DataCollectionMode.mpc\": Slalom driving with mpc USE_POLYNOMIAL_REGRESSION bool Whether to perform polynomial regression before NN USE_SELECTED_POLYNOMIAL bool When USE_POLYNOMIAL_REGRESSION is True, perform polynomial regression using only some preselected polynomials. The choice of polynomials is intended to be able to absorb the contribution of some parameter shifts based on the nominal model of the vehicle. FORCE_NN_MODEL_TO_ZERO bool Whether to force the NN model to zero (i.e., erase the contribution of the NN model). When USE_POLYNOMIAL_REGRESSION is True, setting FORCE_NN_MODEL_TO_ZERO to True allows the control to reflect the results of polynomial regression only, without using NN models. FIT_INTERCEPT bool Whether to include bias in polynomial regression. If it is False, perform the regression with a polynomial of the first degree or higher. USE_INTERCEPT bool When a polynomial regression including bias is performed, whether to use or discard the resulting bias information. It is meaningful only if FIT_INTERCEPT is True. If it is False, discard the bias in the polynomial regression in the hope that the NN model can remove the bias term, even if the polynomial regression is performed with the bias included.

[!NOTE] When run_sim.py is run, the use_trained_model_diff set in run_sim.py takes precedence over the trained_model_parameter:control_application:use_trained_model_diff set in trained_model_param.yaml.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#kernel-density-estimation-of-pure-pursuit-driving-data","title":"Kernel density estimation of pure pursuit driving data","text":"

The distribution of data obtained from pure pursuit runs can be displayed using Kernel density estimation. To do this, run density_estimation.ipynb.
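As an illustration of the technique (not the contents of density_estimation.ipynb itself), a kernel density estimate over collected driving data could be computed as follows, assuming scipy is available (the data here is a random placeholder):

import numpy as np\nfrom scipy.stats import gaussian_kde\n\n# placeholder for real driving data: rows of features, columns of samples\ndata = np.random.randn(2, 1000)\n\nkde = gaussian_kde(data)   # fit a Gaussian kernel density estimate\ndensities = kde(data)      # evaluate the density at each sample\nprint(densities.min())     # the minimum density value discussed below\n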

The correlation between the minimum value of the density estimate and the lateral deviation of the run results is low. A scalar indicator that better predicts the value of lateral deviation is under development.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#change-of-nominal-parameters-and-their-reloading","title":"Change of nominal parameters and their reloading","text":"

The nominal parameters of vehicle model can be changed by editing the file nominal_param.yaml. After changing the nominal parameters, the cache must be deleted by running the following command:

python3 -m smart_mpc_trajectory_follower.clear_pycache\n

The nominal parameters include the following:

Parameter Type Description nominal_parameter:vehicle_info:wheel_base float wheel base [m] nominal_parameter:acceleration:acc_time_delay float acceleration time delay [s] nominal_parameter:acceleration:acc_time_constant float acceleration time constant [s] nominal_parameter:steering:steer_time_delay float steer time delay [s] nominal_parameter:steering:steer_time_constant float steer time constant [s]"},{"location":"control/autoware_smart_mpc_trajectory_follower/#change-of-control-parameters-and-their-reloading","title":"Change of control parameters and their reloading","text":"

The control parameters can be changed by editing files mpc_param.yaml and trained_model_param.yaml. Although it is possible to reflect parameter changes by restarting autoware, the following command allows us to do so without leaving autoware running:

ros2 topic pub /pympc_reload_mpc_param_trigger std_msgs/msg/String \"data: ''\" --once\n

The main parameters among the control parameters are as follows.

"},{"location":"control/autoware_smart_mpc_trajectory_follower/#mpc_paramyaml","title":"mpc_param.yaml","text":"Parameter Type Description mpc_parameter:system:mode str control mode \"ilqr\": iLQR mode \"mppi\": MPPI mode \"mppi_ilqr\": the initial value of iLQR is given by the MPPI solution. mpc_parameter:cost_parameters:Q list[float] Stage cost for states. List of length 8, in order: straight deviation, lateral deviation, velocity deviation, yaw angle deviation, acceleration deviation, steer deviation, acceleration input deviation, steer input deviation cost weights. mpc_parameter:cost_parameters:Q_c list[float] Cost in the horizon corresponding to the following timing_Q_c for the states. The correspondence of the components of the list is the same as for Q. mpc_parameter:cost_parameters:Q_f list[float] Termination cost for the states. The correspondence of the components of the list is the same as for Q. mpc_parameter:cost_parameters:R list[float] A list of length 2 where R[0] is weight of cost for the change rate of acceleration input value and R[1] is weight of cost for the change rate of steer input value. mpc_parameter:mpc_setting:timing_Q_c list[int] Horizon numbers such that the stage cost for the states is set to Q_c. mpc_parameter:compensation:acc_fb_decay float Coefficient of damping in integrating the error between the observed and predicted acceleration values in the compensator outside the MPC. mpc_parameter:compensation:acc_fb_gain float Gain of acceleration compensation. mpc_parameter:compensation:max_error_acc float Maximum acceleration compensation (m/s^2) mpc_parameter:compensation:steer_fb_decay float Coefficient of damping in integrating the error between the observed and predicted steering values in the compensator outside the MPC. mpc_parameter:compensation:steer_fb_gain float Gain of steering compensation. mpc_parameter:compensation:max_error_steer float Maximum steering compensation (rad)"},{"location":"control/autoware_smart_mpc_trajectory_follower/#trained_model_paramyaml","title":"trained_model_param.yaml","text":"Parameter Type Description trained_model_parameter:control_application:use_trained_model bool Whether the trained model is reflected in the control or not. trained_model_parameter:control_application:use_trained_model_diff bool Whether the derivative of the trained model is reflected on the control or not. It is meaningful only when use_trained_model is True, and if False, the nominal model is used for the derivative of the dynamics, and trained model is used only for prediction. trained_model_parameter:memory_for_training:use_memory_for_training bool Whether to use the model that includes LSTM for learning or not. trained_model_parameter:memory_for_training:use_memory_diff bool Whether the derivative with respect to the cell state and hidden state at the previous time of LSTM is reflected in the control or not."},{"location":"control/autoware_smart_mpc_trajectory_follower/#request-to-release-the-slow-stop-mode","title":"Request to release the slow stop mode","text":"

If the predicted trajectory deviates too far from the target trajectory, the system enters a slow stop mode and the vehicle stops moving. To cancel the slow stop mode and make the vehicle ready to run again, run the following command:

ros2 topic pub /pympc_stop_mode_reset_request std_msgs/msg/String \"data: ''\" --once\n
"},{"location":"control/autoware_smart_mpc_trajectory_follower/#limitation","title":"Limitation","text":" "},{"location":"control/autoware_trajectory_follower_base/","title":"Trajectory Follower","text":""},{"location":"control/autoware_trajectory_follower_base/#trajectory-follower","title":"Trajectory Follower","text":"

This is the design document for the trajectory_follower package.

"},{"location":"control/autoware_trajectory_follower_base/#purpose-use-cases","title":"Purpose / Use cases","text":"

This package provides the interfaces for the longitudinal and lateral controllers used by the node in the autoware_trajectory_follower_node package. A concrete controller can be implemented by deriving from the longitudinal and lateral base interfaces.

"},{"location":"control/autoware_trajectory_follower_base/#design","title":"Design","text":"

There are lateral and longitudinal base interface classes, and each algorithm implements its controller by inheriting from the corresponding class. The interface classes have the following base functions, sketched below.

See the Design of Trajectory Follower Nodes for how these functions work in the node.
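For illustration, here is a minimal sketch of what such a base interface might look like, inferred from the isReady()/run()/sync() calls shown in the Trajectory Follower Nodes design; the actual class and type names in this package may differ.

// Hypothetical sketch of a lateral base interface (names are illustrative).
struct InputData;             // trajectory, odometry, steering status, etc.
struct LateralOutput;         // lateral control command + data for sync()
struct LongitudinalSyncData;  // data shared by the longitudinal controller

class LateralControllerBase
{
public:
  virtual ~LateralControllerBase() = default;
  // Returns true once all required inputs have been received.
  virtual bool isReady(const InputData & input_data) = 0;
  // Computes the lateral (steering) command for the current inputs.
  virtual LateralOutput run(const InputData & input_data) = 0;
  // Receives synchronization data from the longitudinal controller.
  virtual void sync(const LongitudinalSyncData & sync_data) = 0;
};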

"},{"location":"control/autoware_trajectory_follower_base/#separated-lateral-steering-and-longitudinal-velocity-controls","title":"Separated lateral (steering) and longitudinal (velocity) controls","text":"

This longitudinal controller assumes that the roles of lateral and longitudinal control are separated as follows.

Ideally, treating lateral and longitudinal control as a single coupled problem can achieve high performance. Nevertheless, there are two reasons, described below, to provide the velocity controller as a stand-alone function.

"},{"location":"control/autoware_trajectory_follower_base/#complex-requirements-for-longitudinal-motion","title":"Complex requirements for longitudinal motion","text":"

The longitudinal vehicle behavior that humans expect is difficult to express with a single logic. For example, to achieve a human-like movement, the expected behavior just before stopping differs depending on whether the ego position is ahead of or behind the stop line, or whether the current speed is higher or lower than the target speed.

In addition, some vehicles have difficulty measuring the ego-speed at extremely low speeds. In such cases, a configuration that can improve the functionality of the longitudinal control without affecting the lateral control is important.

There are many characteristics and needs that are unique to longitudinal control. Designing them separately from the lateral control keeps the modules less coupled and improves maintainability.

"},{"location":"control/autoware_trajectory_follower_base/#nonlinear-coupling-of-lateral-and-longitudinal-motion","title":"Nonlinear coupling of lateral and longitudinal motion","text":"

The mixed lateral-longitudinal control problem is very complex and requires nonlinear optimization to achieve high performance. Since it is difficult to guarantee the convergence of the nonlinear optimization, a simple control logic is also necessary for development.

Also, the benefits of simultaneous longitudinal and lateral control are small unless the vehicle moves at high speed.

"},{"location":"control/autoware_trajectory_follower_base/#related-issues","title":"Related issues","text":""},{"location":"control/autoware_trajectory_follower_node/","title":"Trajectory Follower Nodes","text":""},{"location":"control/autoware_trajectory_follower_node/#trajectory-follower-nodes","title":"Trajectory Follower Nodes","text":""},{"location":"control/autoware_trajectory_follower_node/#purpose","title":"Purpose","text":"

Generate control commands to follow a given Trajectory.

"},{"location":"control/autoware_trajectory_follower_node/#design","title":"Design","text":"

This node wraps the functionalities implemented in controller classes derived from the autoware_trajectory_follower_base package. It holds instances of those controllers, feeds them input data, and publishes the resulting control commands.

By default, a controller instance of the Controller class shown below is used.

The process flow of the Controller class is as follows.

// 1. create input data\nconst auto input_data = createInputData(*get_clock());\nif (!input_data) {\n  return;\n}\n\n// 2. check if controllers are ready\nconst bool is_lat_ready = lateral_controller_->isReady(*input_data);\nconst bool is_lon_ready = longitudinal_controller_->isReady(*input_data);\nif (!is_lat_ready || !is_lon_ready) {\n  return;\n}\n\n// 3. run controllers\nconst auto lat_out = lateral_controller_->run(*input_data);\nconst auto lon_out = longitudinal_controller_->run(*input_data);\n\n// 4. sync the controllers with each other\nlongitudinal_controller_->sync(lat_out.sync_data);\nlateral_controller_->sync(lon_out.sync_data);\n\n// 5. publish the control command\n// (out is assembled from lat_out and lon_out in the actual implementation)\ncontrol_cmd_pub_->publish(out);\n

Giving the longitudinal controller information about steering convergence allows it to control the steering while stopped, if the following parameters are true.

"},{"location":"control/autoware_trajectory_follower_node/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/autoware_trajectory_follower_node/#inputs","title":"Inputs","text":""},{"location":"control/autoware_trajectory_follower_node/#outputs","title":"Outputs","text":""},{"location":"control/autoware_trajectory_follower_node/#parameter","title":"Parameter","text":""},{"location":"control/autoware_trajectory_follower_node/#debugging","title":"Debugging","text":"

Debug information is published by the lateral and longitudinal controllers using autoware_internal_debug_msgs/Float32MultiArrayStamped messages.

A configuration file for PlotJuggler is provided in the config folder which, when loaded, automatically subscribes to and visualizes information useful for debugging.

In addition, the predicted MPC trajectory is published on the topic output/lateral/predicted_trajectory and can be visualized in RViz.

"},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/","title":"Simple Trajectory Follower","text":""},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#simple-trajectory-follower","title":"Simple Trajectory Follower","text":""},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#purpose","title":"Purpose","text":"

Provide a simple and flexible base implementation of a trajectory follower. This node calculates the control command based on a reference trajectory and the ego vehicle kinematics.

"},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#design","title":"Design","text":""},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#inputs-outputs","title":"Inputs / Outputs","text":"

Inputs

"},{"location":"control/autoware_trajectory_follower_node/design/simple_trajectory_follower-design/#parameters","title":"Parameters","text":"Name Type Description Default value use_external_target_vel bool use external target velocity defined by parameter when true, else follow the velocity on target trajectory points. false external_target_vel float target velocity used when use_external_target_vel is true. 0.0 lateral_deviation float target lateral deviation when following. 0.0"},{"location":"control/autoware_vehicle_cmd_gate/","title":"vehicle_cmd_gate","text":""},{"location":"control/autoware_vehicle_cmd_gate/#vehicle_cmd_gate","title":"vehicle_cmd_gate","text":""},{"location":"control/autoware_vehicle_cmd_gate/#purpose","title":"Purpose","text":"

vehicle_cmd_gate is the package that receives information from the emergency handler, the planning module, and external controllers, and sends command messages to the vehicle.

"},{"location":"control/autoware_vehicle_cmd_gate/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/autoware_vehicle_cmd_gate/#input","title":"Input","text":"Name Type Description ~/input/steering autoware_vehicle_msgs::msg::SteeringReport steering status ~/input/auto/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity from planning module ~/input/auto/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from planning module ~/input/auto/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command from planning module ~/input/auto/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command from planning module ~/input/external/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity from external ~/input/external/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from external ~/input/external/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command from external ~/input/external/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command from external ~/input/external_emergency_stop_heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/input/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/input/emergency/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity from emergency handler ~/input/emergency/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command from emergency handler ~/input/emergency/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command from emergency handler ~/input/engage autoware_vehicle_msgs::msg::Engage engage signal ~/input/operation_mode autoware_adapi_v1_msgs::msg::OperationModeState operation mode of Autoware"},{"location":"control/autoware_vehicle_cmd_gate/#output","title":"Output","text":"Name Type Description ~/output/vehicle_cmd_emergency tier4_vehicle_msgs::msg::VehicleEmergencyStamped emergency state which was originally in vehicle command ~/output/command/control_cmd autoware_control_msgs::msg::Control command for lateral and longitudinal velocity to vehicle ~/output/command/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command to vehicle ~/output/command/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand hazard lights command to vehicle ~/output/command/gear_cmd autoware_vehicle_msgs::msg::GearCommand gear command to vehicle ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/output/engage autoware_vehicle_msgs::msg::Engage engage signal ~/output/external_emergency tier4_external_api_msgs::msg::Emergency external emergency signal ~/output/operation_mode tier4_system_msgs::msg::OperationMode current operation mode of the vehicle_cmd_gate"},{"location":"control/autoware_vehicle_cmd_gate/#parameters","title":"Parameters","text":"Parameter Type Description update_period double update period use_emergency_handling bool true when emergency handler is used check_external_emergency_heartbeat bool true when checking heartbeat for emergency stop system_emergency_heartbeat_timeout double timeout for system emergency external_emergency_stop_heartbeat_timeout double timeout for external emergency filter_activated_count_threshold int threshold for filter activation filter_activated_velocity_threshold double velocity threshold for 
filter activation stop_hold_acceleration double longitudinal acceleration cmd when the vehicle should stop emergency_acceleration double longitudinal acceleration cmd when the vehicle stops in an emergency moderate_stop_service_acceleration double longitudinal acceleration cmd when the vehicle stops via the moderate stop service nominal.vel_lim double limit of longitudinal velocity (activated in AUTONOMOUS operation mode) nominal.reference_speed_point velocity points used as a reference when calculating the control command limit (activated in AUTONOMOUS operation mode). The size of this array must be equivalent to the size of the limit array. nominal.lon_acc_lim array of limits of longitudinal acceleration (activated in AUTONOMOUS operation mode) nominal.lon_jerk_lim array of limits of longitudinal jerk (activated in AUTONOMOUS operation mode) nominal.lat_acc_lim array of limits of lateral acceleration (activated in AUTONOMOUS operation mode) nominal.lat_jerk_lim array of limits of lateral jerk (activated in AUTONOMOUS operation mode) on_transition.vel_lim double limit of longitudinal velocity (activated in TRANSITION operation mode) on_transition.reference_speed_point velocity points used as a reference when calculating the control command limit (activated in TRANSITION operation mode). The size of this array must be equivalent to the size of the limit array. on_transition.lon_acc_lim array of limits of longitudinal acceleration (activated in TRANSITION operation mode) on_transition.lon_jerk_lim array of limits of longitudinal jerk (activated in TRANSITION operation mode) on_transition.lat_acc_lim array of limits of lateral acceleration (activated in TRANSITION operation mode) on_transition.lat_jerk_lim array of limits of lateral jerk (activated in TRANSITION operation mode)"},{"location":"control/autoware_vehicle_cmd_gate/#filter-function","title":"Filter function","text":"

This module applies a limitation filter to the control command right before it is published. Primarily for safety, this filter restricts the output range of all control commands published through Autoware.

The limitation values are calculated by 1D interpolation of the limitation array parameters. Below is an example for the longitudinal jerk limit.
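As a rough illustration, such a speed-dependent limit lookup could be implemented as the following 1D linear interpolation; the arrays follow the reference_speed_point / lon_jerk_lim parameters above, and the function itself is hypothetical rather than the actual vehicle_cmd_gate code.

// Minimal sketch of a speed-dependent limit obtained by 1D linear
// interpolation (illustrative names; the real implementation may differ).
#include <algorithm>
#include <vector>

double interpolate_limit(
  const std::vector<double> & reference_speed_point,  // ascending speeds [m/s]
  const std::vector<double> & lon_jerk_lim,           // limits at those speeds
  const double current_speed)
{
  // Clamp outside the defined speed range.
  if (current_speed <= reference_speed_point.front()) return lon_jerk_lim.front();
  if (current_speed >= reference_speed_point.back()) return lon_jerk_lim.back();

  // Find the first reference point above the current speed.
  const auto it = std::upper_bound(
    reference_speed_point.begin(), reference_speed_point.end(), current_speed);
  const std::size_t i = std::distance(reference_speed_point.begin(), it);

  // Linear interpolation between the two neighboring points.
  const double ratio = (current_speed - reference_speed_point[i - 1]) /
                       (reference_speed_point[i] - reference_speed_point[i - 1]);
  return lon_jerk_lim[i - 1] + ratio * (lon_jerk_lim[i] - lon_jerk_lim[i - 1]);
}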

Note: this filter is not designed to enhance ride comfort. Its main purpose is to detect and remove abnormal values in the control outputs during the final stage of Autoware. If this filter is frequently activated, the control module may need tuning. If you are aiming to smooth the signal via a low-pass filter or similar techniques, that should be handled in the control module. When the filter is activated, the topic ~/is_filter_activated is published.

Note 2: If you use vehicles in which the driving force is controlled by the accelerator/brake pedal, the jerk limit, which acts as the pedal rate limit, must be sufficiently relaxed at low speeds. Otherwise, quick pedal changes at start/stop will not be possible, resulting in slow starts and creeping down on hills. This start/stop functionality was once embedded in the source code but was removed because it was complex and could be achieved with parameters.

"},{"location":"control/autoware_vehicle_cmd_gate/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"control/autoware_vehicle_cmd_gate/#external-emergency-heartbeat","title":"External Emergency Heartbeat","text":"

The parameter check_external_emergency_heartbeat (true by default) enables an emergency stop request from external modules. This feature requires the ~/input/external_emergency_stop_heartbeat topic for health monitoring of the external module, and the vehicle_cmd_gate module will not start without it. The check_external_emergency_heartbeat parameter must be false when the "external emergency stop" function is not used.

"},{"location":"control/autoware_vehicle_cmd_gate/#commands-on-mode-changes","title":"Commands on Mode changes","text":"

The output command topics turn_indicators_cmd, hazard_lights_cmd, and gear_cmd are selected based on gate_mode. However, to ensure command continuity, these outputs do not change until the topics of the new input commands arrive, even if a mode change occurs.

"},{"location":"control/control_performance_analysis/","title":"control_performance_analysis","text":""},{"location":"control/control_performance_analysis/#control_performance_analysis","title":"control_performance_analysis","text":""},{"location":"control/control_performance_analysis/#purpose","title":"Purpose","text":"

control_performance_analysis is the package to analyze the tracking performance of a control module and monitor the driving status of the vehicle.

This package is a tool to quantify the results of the control module; therefore, it does not interfere with the core logic of autonomous driving.

Based on various inputs from planning, control, and the vehicle, it publishes the analysis results as control_performance_analysis::msg::ErrorStamped, defined in this package.

All results in the ErrorStamped message are calculated in the Frenet frame of the reference curve. The errors and error velocities are calculated using the paper below.

Werling, Moritz & Groell, Lutz & Bretthauer, Georg. (2010). Invariant Trajectory Tracking With a Full-Size Autonomous Road Vehicle. IEEE Transactions on Robotics. 26. 758 - 765. 10.1109/TRO.2010.2052325.

If you are interested in the calculations, the error and error velocity derivations can be found in Section C, "Asymptotical Trajectory Tracking With Orientation Control."

The error acceleration is calculated based on the error velocity calculations above.

"},{"location":"control/control_performance_analysis/#input-output","title":"Input / Output","text":""},{"location":"control/control_performance_analysis/#input-topics","title":"Input topics","text":"Name Type Description /planning/scenario_planning/trajectory autoware_planning_msgs::msg::Trajectory Output trajectory from planning module. /control/command/control_cmd autoware_control_msgs::msg::Control Output control command from control module. /vehicle/status/steering_status autoware_vehicle_msgs::msg::SteeringReport Steering information from vehicle. /localization/kinematic_state nav_msgs::msg::Odometry Use twist from odometry. /tf tf2_msgs::msg::TFMessage Extract ego pose from tf."},{"location":"control/control_performance_analysis/#output-topics","title":"Output topics","text":"Name Type Description /control_performance/performance_vars control_performance_analysis::msg::ErrorStamped The result of the performance analysis. /control_performance/driving_status control_performance_analysis::msg::DrivingMonitorStamped Driving status (acceleration, jerk etc.) monitoring"},{"location":"control/control_performance_analysis/#outputs","title":"Outputs","text":""},{"location":"control/control_performance_analysis/#control_performance_analysismsgdrivingmonitorstamped","title":"control_performance_analysis::msg::DrivingMonitorStamped","text":"Name Type Description longitudinal_acceleration float \\([ \\mathrm{m/s^2} ]\\) longitudinal_jerk float \\([ \\mathrm{m/s^3} ]\\) lateral_acceleration float \\([ \\mathrm{m/s^2} ]\\) lateral_jerk float \\([ \\mathrm{m/s^3} ]\\) desired_steering_angle float \\([ \\mathrm{rad} ]\\) controller_processing_time float Timestamp between last two control command messages \\([ \\mathrm{ms} ]\\)"},{"location":"control/control_performance_analysis/#control_performance_analysismsgerrorstamped","title":"control_performance_analysis::msg::ErrorStamped","text":"Name Type Description lateral_error float \\([ \\mathrm{m} ]\\) lateral_error_velocity float \\([ \\mathrm{m/s} ]\\) lateral_error_acceleration float \\([ \\mathrm{m/s^2} ]\\) longitudinal_error float \\([ \\mathrm{m} ]\\) longitudinal_error_velocity float \\([ \\mathrm{m/s} ]\\) longitudinal_error_acceleration float \\([ \\mathrm{m/s^2} ]\\) heading_error float \\([ \\mathrm{rad} ]\\) heading_error_velocity float \\([ \\mathrm{rad/s} ]\\) control_effort_energy float \\([ \\mathbf{u}^\\top \\mathbf{R} \\mathbf{u} ]\\) simplified to \\([ R \\cdot u^2 ]\\) error_energy float \\(e_{\\text{lat}}^2 + e_\\theta^2\\) (squared lateral error + squared heading error) value_approximation float \\(V = \\mathbf{x}^\\top \\mathbf{P} \\mathbf{x}\\); Value function from DARE Lyapunov matrix \\(\\mathbf{P}\\) curvature_estimate float \\([ \\mathrm{1/m} ]\\) curvature_estimate_pp float \\([ \\mathrm{1/m} ]\\) vehicle_velocity_error float \\([ \\mathrm{m/s} ]\\) tracking_curvature_discontinuity_ability float Measures the ability to track curvature changes \\(\\frac{\\lvert \\Delta(\\text{curvature}) \\rvert}{1 + \\lvert \\Delta(e_{\\text{lat}}) \\rvert}\\)"},{"location":"control/control_performance_analysis/#parameters","title":"Parameters","text":"Name Type Description curvature_interval_length double Used for estimating current curvature prevent_zero_division_value double Value to avoid zero division. Default is 0.001 odom_interval unsigned integer Interval between odom messages, increase it for smoother curve. 
acceptable_max_distance_to_waypoint double Maximum distance between trajectory point and vehicle [m] acceptable_max_yaw_difference_rad double Maximum yaw difference between trajectory point and vehicle [rad] low_pass_filter_gain double Low pass filter gain"},{"location":"control/control_performance_analysis/#usage","title":"Usage","text":" "},{"location":"control/control_performance_analysis/#future-improvements","title":"Future Improvements","text":""},{"location":"control/predicted_path_checker/","title":"Predicted Path Checker","text":""},{"location":"control/predicted_path_checker/#predicted-path-checker","title":"Predicted Path Checker","text":""},{"location":"control/predicted_path_checker/#purpose","title":"Purpose","text":"

The Predicted Path Checker package is designed for autonomous vehicles to check the predicted path generated by control modules. It handles potential collisions that the planning module might not be able to handle and that lie within the braking distance. If a collision is detected within the braking distance, the package sends a diagnostic message labeled \"ERROR\" so that the system triggers an emergency; if a collision point lies outside the reference trajectory, it sends a pause request to the pause interface to make the vehicle stop.

"},{"location":"control/predicted_path_checker/#algorithm","title":"Algorithm","text":"

The package algorithm evaluates the predicted trajectory against the reference trajectory and the predicted objects in the environment. It checks for potential collisions and, if necessary, generates an appropriate response to avoid them (an emergency or a pause request).

"},{"location":"control/predicted_path_checker/#inner-algorithm","title":"Inner Algorithm","text":"

cutTrajectory() -> It cuts the predicted trajectory to the input length. The length is calculated by multiplying the ego vehicle's velocity by the \"trajectory_check_time\" parameter, with \"min_trajectory_check_length\" as the lower bound.

filterObstacles() -> It filters the predicted objects in the environment, removing the objects that are not in front of the vehicle or that are far away from the predicted trajectory.

checkTrajectoryForCollision() -> It checks the predicted trajectory for collision with the predicted objects. It computes the polygons of both the trajectory points and the predicted objects and checks whether the polygons intersect. If they do, it calculates the nearest collision point and returns it together with the predicted object. It also checks the history of predicted objects that previously intersected with the footprint, to avoid unexpected behaviors. The history stores an object if it was detected less than \"chattering_threshold\" seconds ago.

If the \"enable_z_axis_obstacle_filtering\" parameter is set to true, it filters the predicted objects in the Z-axis by using \"z_axis_filtering_buffer\". If the object does not intersect with the Z-axis, it is filtered out.

calculateProjectedVelAndAcc() -> It calculates the velocity and acceleration of the predicted object projected onto the axes of the collision point on the predicted trajectory.

isInBrakeDistance() -> It checks whether the stop point is within the braking distance. It obtains the relative velocity and acceleration of the ego vehicle with respect to the predicted object, calculates the braking distance, and returns true if the stop point is within it, as sketched below.
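A minimal sketch of such a brake-distance check under constant deceleration follows; all names are illustrative, with max_deceleration and delay_time corresponding to the node parameters listed below, and the actual implementation may differ.

// Hypothetical brake-distance check (illustrative, not the actual code).
#include <algorithm>

bool is_in_brake_distance(
  double relative_velocity,       // [m/s] ego speed toward the object
  double relative_acceleration,   // [m/s^2] current relative acceleration
  double max_deceleration,        // [m/s^2] parameter (default 1.5)
  double delay_time,              // [s] parameter (default 0.17)
  double distance_to_stop_point)  // [m]
{
  if (relative_velocity <= 0.0) {
    return false;  // already moving away from the object
  }
  // Distance traveled during the response delay with the current acceleration.
  const double delay_distance =
    relative_velocity * delay_time +
    0.5 * relative_acceleration * delay_time * delay_time;
  const double velocity_after_delay =
    std::max(relative_velocity + relative_acceleration * delay_time, 0.0);
  // Stopping distance under constant maximum deceleration.
  const double braking_distance =
    velocity_after_delay * velocity_after_delay / (2.0 * max_deceleration);
  return distance_to_stop_point <= delay_distance + braking_distance;
}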

isItDiscretePoint() -> It checks whether the stop point on the predicted trajectory is a discrete point. If it is not, the planning module should handle the stop.

isThereStopPointOnRefTrajectory() -> It checks whether there is a stop point on the reference trajectory. If there is a stop point before the stop index, it returns true. Otherwise, it returns false, and the node calls the pause interface to make the vehicle stop.

"},{"location":"control/predicted_path_checker/#inputs","title":"Inputs","text":"Name Type Description ~/input/reference_trajectory autoware_planning_msgs::msg::Trajectory Reference trajectory ~/input/predicted_trajectory autoware_planning_msgs::msg::Trajectory Predicted trajectory ~/input/objects autoware_perception_msgs::msg::PredictedObject Dynamic objects in the environment ~/input/odometry nav_msgs::msg::Odometry Odometry message of vehicle to get current velocity ~/input/current_accel geometry_msgs::msg::AccelWithCovarianceStamped Current acceleration /control/vehicle_cmd_gate/is_paused tier4_control_msgs::msg::IsPaused Current pause state of the vehicle"},{"location":"control/predicted_path_checker/#outputs","title":"Outputs","text":"Name Type Description ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization ~/debug/virtual_wall visualization_msgs::msg::MarkerArray Virtual wall marker for visualization /control/vehicle_cmd_gate/set_pause tier4_control_msgs::srv::SetPause Pause service to make the vehicle stop /diagnostics diagnostic_msgs::msg::DiagnosticStatus Diagnostic status of vehicle"},{"location":"control/predicted_path_checker/#parameters","title":"Parameters","text":""},{"location":"control/predicted_path_checker/#node-parameters","title":"Node Parameters","text":"Name Type Description Default value update_rate double The update rate [Hz] 10.0 delay_time double he time delay considered for the emergency response [s] 0.17 max_deceleration double Max deceleration for ego vehicle to stop [m/s^2] 1.5 resample_interval double Interval for resampling trajectory [m] 0.5 stop_margin double The stopping margin [m] 0.5 ego_nearest_dist_threshold double The nearest distance threshold for ego vehicle [m] 3.0 ego_nearest_yaw_threshold double The nearest yaw threshold for ego vehicle [rad] 1.046 min_trajectory_check_length double The minimum trajectory check length in meters [m] 1.5 trajectory_check_time double The trajectory check time in seconds. [s] 3.0 distinct_point_distance_threshold double The distinct point distance threshold [m] 0.3 distinct_point_yaw_threshold double The distinct point yaw threshold [deg] 5.0 filtering_distance_threshold double It ignores the objects if distance is higher than this [m] 1.5 use_object_prediction bool If true, node predicts current pose of the objects wrt delta time [-] true"},{"location":"control/predicted_path_checker/#collision-checker-parameters","title":"Collision Checker Parameters","text":"Name Type Description Default value width_margin double The width margin for collision checking [Hz] 0.2 chattering_threshold double The chattering threshold for collision detection [s] 0.2 z_axis_filtering_buffer double The Z-axis filtering buffer [m] 0.3 enable_z_axis_obstacle_filtering bool A boolean flag indicating if Z-axis obstacle filtering is enabled false"},{"location":"evaluator/autoware_control_evaluator/","title":"Control Evaluator","text":""},{"location":"evaluator/autoware_control_evaluator/#control-evaluator","title":"Control Evaluator","text":""},{"location":"evaluator/autoware_control_evaluator/#purpose","title":"Purpose","text":"

This package provides nodes that generate metrics to evaluate the quality of control.

It publishes metric information about control modules' outputs as well as the ego vehicle's current kinematics and position.

"},{"location":"evaluator/autoware_control_evaluator/#evaluated-metrics","title":"Evaluated metrics","text":"

The control evaluator uses the metrics defined in include/autoware/control_evaluator/metrics/metric.hpp to calculate deviations in yaw and lateral distance from the ego's set-point. The control_evaluator can also be customized to offer metrics/evaluation for other control modules. Currently, it offers a simple metric output based on the autonomous_emergency_braking node's output, but this functionality can be extended to evaluate other control modules' performance.
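As a rough illustration, such yaw and lateral deviations from the closest set-point pose could be computed as in the following sketch, assuming the closest trajectory point (x_ref, y_ref, yaw_ref) has already been found; the actual formulas in metric.hpp may differ.

// Hypothetical deviation computation (illustrative names and formulas).
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

struct Deviations
{
  double lateral;  // signed lateral distance to the set-point [m]
  double yaw;      // normalized yaw difference [rad]
};

Deviations compute_deviations(
  double x, double y, double yaw,              // ego pose
  double x_ref, double y_ref, double yaw_ref)  // closest set-point pose
{
  Deviations d;
  // Project the position error onto the set-point's left-hand normal.
  d.lateral = -(x - x_ref) * std::sin(yaw_ref) + (y - y_ref) * std::cos(yaw_ref);
  // Normalize the yaw error to (-pi, pi].
  d.yaw = yaw - yaw_ref;
  while (d.yaw > kPi) d.yaw -= 2.0 * kPi;
  while (d.yaw <= -kPi) d.yaw += 2.0 * kPi;
  return d;
}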

"},{"location":"evaluator/autoware_control_evaluator/#kinematics-output","title":"Kinematics output","text":"

The control evaluator module also constantly publishes information regarding the ego vehicle's kinematics and position. It publishes the current ego lane id with the longitudinal s and lateral t arc coordinates. It also publishes the current ego speed, acceleration and jerk in its metric messages.

This information can be used by other nodes to establish automated evaluation using rosbags: by crosschecking the ego position and kinematics with the evaluated control module's output, it is possible to judge if the evaluated control modules reacted in a satisfactory way at certain interesting points of the rosbag reproduction.

"},{"location":"evaluator/autoware_planning_evaluator/","title":"Planning Evaluator","text":""},{"location":"evaluator/autoware_planning_evaluator/#planning-evaluator","title":"Planning Evaluator","text":""},{"location":"evaluator/autoware_planning_evaluator/#purpose","title":"Purpose","text":"

This package provides nodes that generate metrics to evaluate the quality of planning and control.

"},{"location":"evaluator/autoware_planning_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The evaluation node calculates metrics each time it receives a trajectory T(0). Metrics are calculated using the following information:

This information is maintained by an instance of the MetricsCalculator class, which is also responsible for calculating the metrics.

"},{"location":"evaluator/autoware_planning_evaluator/#stat","title":"Stat","text":"

Each metric is calculated using an autoware::universe_utils::Accumulator instance, which contains the minimum, maximum, and mean values calculated for the metric as well as the number of values measured.
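For illustration, a minimal accumulator with the behavior described above might look like the following sketch; the actual autoware::universe_utils::Accumulator interface may differ.

// Hypothetical statistics accumulator (min, max, mean, count).
#include <algorithm>
#include <cstddef>
#include <limits>

template <class T>
class Accumulator
{
public:
  void add(const T value)
  {
    min_ = std::min(min_, value);
    max_ = std::max(max_, value);
    sum_ += value;
    ++count_;
  }
  // Note: min()/max() return sentinel values until the first add().
  T min() const { return min_; }
  T max() const { return max_; }
  T mean() const { return count_ == 0 ? T{} : sum_ / static_cast<T>(count_); }
  std::size_t count() const { return count_; }

private:
  T min_{std::numeric_limits<T>::max()};
  T max_{std::numeric_limits<T>::lowest()};
  T sum_{};
  std::size_t count_{0};
};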

"},{"location":"evaluator/autoware_planning_evaluator/#metric-calculation-and-adding-more-metrics","title":"Metric calculation and adding more metrics","text":"

All possible metrics are defined in the Metric enumeration in include/planning_evaluator/metrics/metric.hpp. This file also defines conversions from/to strings as well as human-readable descriptions to be used as headers of the output file.
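For illustration, the enumeration-plus-conversion pattern described above might look like the following sketch (only a few illustrative entries are shown; the actual definitions live in metric.hpp).

// Hypothetical subset of the Metric enumeration with string conversions.
#include <string>
#include <unordered_map>

enum class Metric { curvature, lateral_deviation, yaw_deviation };

static const std::unordered_map<Metric, std::string> metric_to_str = {
  {Metric::curvature, "curvature"},
  {Metric::lateral_deviation, "lateral_deviation"},
  {Metric::yaw_deviation, "yaw_deviation"}};

// Human-readable descriptions used as headers of the output file.
static const std::unordered_map<Metric, std::string> metric_descriptions = {
  {Metric::curvature, "curvature of the trajectory [1/m]"},
  {Metric::lateral_deviation, "lateral deviation from the reference trajectory [m]"},
  {Metric::yaw_deviation, "yaw deviation from the reference trajectory [rad]"}};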

The MetricsCalculator is responsible for calculating metric statistics through calls to the function:

Accumulator<double> MetricsCalculator::calculate(const Metric metric, const Trajectory & traj) const;\n

Adding a new metric M requires the following steps:

"},{"location":"evaluator/autoware_planning_evaluator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"evaluator/autoware_planning_evaluator/#inputs","title":"Inputs","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::msg::Trajectory Main trajectory to evaluate ~/input/reference_trajectory autoware_planning_msgs::msg::Trajectory Reference trajectory to use for deviation metrics ~/input/objects autoware_perception_msgs::msg::PredictedObjects Obstacles"},{"location":"evaluator/autoware_planning_evaluator/#outputs","title":"Outputs","text":"

Each metric is published on a topic named after the metric name.

Name Type Description ~/metrics tier4_metric_msgs::msg::MetricArray MetricArray with many metrics of tier4_metric_msgs::msg::Metric

If output_metrics = true, the evaluation node writes the statistics of the metrics measured during its lifetime to <ros2_logging_directory>/autoware_metrics/<node_name>-<time_stamp>.json when it shuts down.

"},{"location":"evaluator/autoware_planning_evaluator/#parameters","title":"Parameters","text":"Name Type Description Default Range output_metrics boolean If true, the evaluation node writes the metrics' statics to <ros2_logging_directory>/autoware_metrics/<node_name>-<time_stamp>.json when the node shut down, false N/A ego_frame string reference frame of ego base_link N/A selected_metrics array metrics to collect/record ['curvature', 'point_interval', 'relative_angle', 'length', 'duration', 'velocity', 'acceleration', 'jerk', 'lateral_deviation', 'yaw_deviation', 'velocity_deviation', 'lateral_trajectory_displacement', 'stability', 'stability_frechet', 'obstacle_distance', 'obstacle_ttc', 'modified_goal_longitudinal_deviation', 'modified_goal_lateral_deviation', 'modified_goal_yaw_deviation'] N/A trajectory.min_point_dist_m float minimum distance between two successive points to use for angle calculation 0.1 N/A lookahead.max_dist_m float maximum distance from ego along the trajectory to use for calculation 5.0 N/A lookahead.max_time_s float maximum time ahead of ego along the trajectory to use for calculation 3.0 N/A obstacle.dist_thr_m float distance between ego and the obstacle below which a collision is considered 1.0 N/A"},{"location":"evaluator/autoware_planning_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is a strong assumption that when receiving a trajectory T(0), it has been generated using the last received reference trajectory and objects. This can be wrong if a new reference trajectory or objects are published while T(0) is being calculated.

Precision is currently limited by the resolution of the trajectories. It is possible to interpolate the trajectory and reference trajectory to increase precision, but doing so would make the computation significantly more expensive.

"},{"location":"evaluator/autoware_planning_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"evaluator/kinematic_evaluator/","title":"Kinematic Evaluator","text":""},{"location":"evaluator/kinematic_evaluator/#kinematic-evaluator","title":"Kinematic Evaluator","text":"

TBD

"},{"location":"evaluator/kinematic_evaluator/#parameters","title":"Parameters","text":"Name Type Description Default Range output_file string Name of the file to which kinematic metrics are written. If empty, metrics are not written to a file. kinematic_metrics.results N/A selected_metrics string The specific metrics selected for evaluation. velocity_stats N/A"},{"location":"evaluator/localization_evaluator/","title":"Localization Evaluator","text":""},{"location":"evaluator/localization_evaluator/#localization-evaluator","title":"Localization Evaluator","text":"

The Localization Evaluator evaluates the performance of the localization system and provides metrics.

"},{"location":"evaluator/localization_evaluator/#parameters","title":"Parameters","text":"Name Type Description Default Range output_file string if empty, metrics are not written to file loc_metrics.results N/A selected_metrics array metrics to be calculated ['lateral_error', 'absolute_error'] N/A"},{"location":"evaluator/perception_online_evaluator/","title":"Perception Evaluator","text":""},{"location":"evaluator/perception_online_evaluator/#perception-evaluator","title":"Perception Evaluator","text":"

A node for evaluating the output of perception systems.

"},{"location":"evaluator/perception_online_evaluator/#purpose","title":"Purpose","text":"

This module allows for the evaluation of how accurately perception results are generated without the need for annotations. It is capable of confirming performance and can evaluate results from a few seconds prior, enabling online execution.

"},{"location":"evaluator/perception_online_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The evaluated metrics are as follows:

"},{"location":"evaluator/perception_online_evaluator/#predicted-path-deviation-predicted-path-deviation-variance","title":"Predicted Path Deviation / Predicted Path Deviation Variance","text":"

Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from \\(T_N\\) seconds ago, where \\(T_N\\) is the maximum value of the prediction time horizon \\([T_1, T_2, ..., T_N]\\).

[!NOTE] The object from \\(T_N\\) seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.

\[ \begin{align} n_{points} = T / dt \\ ADE = \Sigma_{i=1}^{n_{points}} d_i / n_{points} \\ Var = \Sigma_{i=1}^{n_{points}} (d_i - ADE)^2 / n_{points} \end{align} \]
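As a rough illustration, the per-object ADE and variance above could be computed as in the following sketch, assuming the point-wise distances \(d_i\) have already been extracted (the actual node computes them from object histories).

// Hypothetical ADE/variance computation from precomputed distances.
#include <vector>

struct DeviationStats
{
  double ade{0.0};  // Average Displacement Error
  double var{0.0};  // variance of the displacement errors
};

DeviationStats compute_deviation(const std::vector<double> & distances)
{
  DeviationStats stats;
  if (distances.empty()) {
    return stats;
  }
  const double n = static_cast<double>(distances.size());
  for (const double d : distances) stats.ade += d;
  stats.ade /= n;
  for (const double d : distances) stats.var += (d - stats.ade) * (d - stats.ade);
  stats.var /= n;
  return stats;
}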

The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.

\\[ \\begin{align} ADE_{mean} = \\Sigma_{j=1}^{n_{objects}} ADE_j / n_{objects} \\\\ ADE_{max} = max(ADE_j) \\\\ ADE_{min} = min(ADE_j) \\end{align} \\] \\[ \\begin{align} Var_{mean} = \\Sigma_{j=1}^{n_{objects}} Var_j / n_{objects} \\\\ Var_{max} = max(Var_j) \\\\ Var_{min} = min(Var_j) \\end{align} \\]

The actual metric name is determined by the object class and time horizon. For example, predicted_path_deviation_variance_CAR_5.00

"},{"location":"evaluator/perception_online_evaluator/#lateral-deviation","title":"Lateral Deviation","text":"

Calculates lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is \\(T=T_n\\) seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
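As a rough illustration, the centered moving average used for smoothing could look like the following sketch, applied per coordinate and assuming an odd smoothing_window_size as noted above (hypothetical code, not the actual implementation).

// Hypothetical centered moving average with boundary clipping.
#include <vector>

std::vector<double> smooth_centered(
  const std::vector<double> & values, const int smoothing_window_size)
{
  const int n = static_cast<int>(values.size());
  const int half = smoothing_window_size / 2;
  std::vector<double> smoothed(values.size());
  for (int i = 0; i < n; ++i) {
    double sum = 0.0;
    int count = 0;
    // Average over the window centered at i, clipped at the boundaries.
    for (int j = i - half; j <= i + half; ++j) {
      if (j >= 0 && j < n) {
        sum += values[j];
        ++count;
      }
    }
    smoothed[i] = sum / count;
  }
  return smoothed;
}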

"},{"location":"evaluator/perception_online_evaluator/#yaw-deviation","title":"Yaw Deviation","text":"

Calculates the deviation between the recognized yaw angle of a past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter smoothing_window_size. The yaw deviation is calculated by comparing the yaw azimuth angle of the smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is \(T=T_n\) seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.

"},{"location":"evaluator/perception_online_evaluator/#yaw-rate","title":"Yaw Rate","text":"

Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle. Here, t2 is the timestamp that is \\(T_n\\) seconds ago.
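As a rough illustration, the yaw-rate computation described above could be sketched as follows, where yaw_t1/t1 come from the previous cycle's object and yaw_t2/t2 from the object \(T_n\) seconds ago (illustrative code only).

// Hypothetical yaw-rate computation from two consecutive yaw estimates.
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

double compute_yaw_rate(
  const double yaw_t1, const double t1, const double yaw_t2, const double t2)
{
  // Normalize the angle difference to (-pi, pi] to avoid 2*pi jumps.
  double diff = yaw_t2 - yaw_t1;
  while (diff > kPi) diff -= 2.0 * kPi;
  while (diff <= -kPi) diff += 2.0 * kPi;
  return std::fabs(diff / (t2 - t1));
}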

"},{"location":"evaluator/perception_online_evaluator/#object-counts","title":"Object Counts","text":"

Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent objects, not past objects.

In the provided illustration, the range \\(R\\) is determined by a combination of lists of radii (e.g., \\(r_1, r_2, \\ldots\\)) and heights (e.g., \\(h_1, h_2, \\ldots\\)). For example,

"},{"location":"evaluator/perception_online_evaluator/#total-object-count","title":"Total Object Count","text":"

Counts the number of unique objects for each class within the specified detection range. The total object count is calculated as follows:

\\[ \\begin{align} \\text{Total Object Count (Class, Range)} = \\left| \\bigcup_{t=0}^{T_{\\text{now}}} \\{ \\text{uuid} \\mid \\text{class}(t, \\text{uuid}) = C \\wedge \\text{position}(t, \\text{uuid}) \\in R \\} \\right| \\end{align} \\]

where \(C\) is the target object class, \(R\) is the detection range, and \(\text{uuid}\) identifies each object.
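As a rough illustration, the unique-uuid counting above could be sketched as follows, with illustrative types only (the actual node's data structures differ).

// Hypothetical unique-object counting per class and range.
#include <cstddef>
#include <set>
#include <string>
#include <vector>

struct DetectedObject
{
  std::string uuid;
  std::string object_class;
  bool in_range;  // whether position(t, uuid) lies inside the range R
};

std::size_t update_total_count(
  std::set<std::string> & seen_uuids,  // persists across frames
  const std::vector<DetectedObject> & frame, const std::string & target_class)
{
  for (const auto & obj : frame) {
    if (obj.object_class == target_class && obj.in_range) {
      seen_uuids.insert(obj.uuid);
    }
  }
  // The set size is the total object count for this class and range.
  return seen_uuids.size();
}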

"},{"location":"evaluator/perception_online_evaluator/#average-object-count","title":"Average Object Count","text":"

Counts the average number of objects for each class within the specified detection range. This metric measures how many objects were detected in one frame, without considering uuids. The average object count is calculated as follows:

\\[ \\begin{align} \\text{Average Object Count (Class, Range)} = \\frac{1}{N} \\sum_{t=0}^{T_{\\text{now}}} \\left| \\{ \\text{object} \\mid \\text{class}(t, \\text{object}) = C \\wedge \\text{position}(t, \\text{object}) \\in R \\} \\right| \\end{align} \\]

where \(N\) is the total number of frames from \(t = 0\) to \(T_{\text{now}}\).

"},{"location":"evaluator/perception_online_evaluator/#interval-object-count","title":"Interval Object Count","text":"

Counts the average number of objects for each class within the specified detection range over the last objects_count_window_seconds. This metric measures how many objects were detected in one frame, without considering uuids. The interval object count is calculated as follows:

\\[ \\begin{align} \\text{Interval Object Count (Class, Range)} = \\frac{1}{W} \\sum_{t=T_{\\text{now}} - T_W}^{T_{\\text{now}}} \\left| \\{ \\text{object} \\mid \\text{class}(t, \\text{object}) = C \\wedge \\text{position}(t, \\text{object}) \\in R \\} \\right| \\end{align} \\]

where \(W\) is the number of frames within the time window \(T_W\), which corresponds to objects_count_window_seconds.

"},{"location":"evaluator/perception_online_evaluator/#inputs-outputs","title":"Inputs / Outputs","text":"Name Type Description ~/input/objects autoware_perception_msgs::msg::PredictedObjects The predicted objects to evaluate. ~/metrics tier4_metric_msgs::msg::MetricArray Metric information about perception accuracy. ~/markers visualization_msgs::msg::MarkerArray Visual markers for debugging and visualization."},{"location":"evaluator/perception_online_evaluator/#parameters","title":"Parameters","text":"Name Type Description selected_metrics List Metrics to be evaluated, such as lateral deviation, yaw deviation, and predicted path deviation. smoothing_window_size Integer Determines the window size for smoothing path, should be an odd number. prediction_time_horizons list[double] Time horizons for prediction evaluation in seconds. stopped_velocity_threshold double threshold velocity to check if vehicle is stopped detection_radius_list list[double] Detection radius for objects to be evaluated.(used for objects count only) detection_height_list list[double] Detection height for objects to be evaluated. (used for objects count only) detection_count_purge_seconds double Time window for purging object detection counts. objects_count_window_seconds double Time window for keeping object detection counts. The number of object detections within this time window is stored in detection_count_vector_ target_object.*.check_lateral_deviation bool Whether to check lateral deviation for specific object types (car, truck, etc.). target_object.*.check_yaw_deviation bool Whether to check yaw deviation for specific object types (car, truck, etc.). target_object.*.check_predicted_path_deviation bool Whether to check predicted path deviation for specific object types (car, truck, etc.). target_object.*.check_yaw_rate bool Whether to check yaw rate for specific object types (car, truck, etc.). target_object.*.check_total_objects_count bool Whether to check total object count for specific object types (car, truck, etc.). target_object.*.check_average_objects_count bool Whether to check average object count for specific object types (car, truck, etc.). target_object.*.check_interval_average_objects_count bool Whether to check interval average object count for specific object types (car, truck, etc.). debug_marker.* bool Debugging parameters for marker visualization (history path, predicted path, etc.)."},{"location":"evaluator/perception_online_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

It is assumed that the current positions of PredictedObjects are reasonably accurate.

"},{"location":"evaluator/perception_online_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"evaluator/scenario_simulator_v2_adapter/","title":"scenario_simulator_v2 Adapter","text":""},{"location":"evaluator/scenario_simulator_v2_adapter/#macro-rendering-error","title":"Macro Rendering Error","text":"

File: evaluator/scenario_simulator_v2_adapter/README.md

FileNotFoundError: [Errno 2] No such file or directory: 'src/autoware/universe/evaluator/scenario_simulator_v2_adapter/schema/scenario_simulator_v2_adapter.schema.json'

"},{"location":"launch/tier4_autoware_api_launch/","title":"tier4_autoware_api_launch","text":""},{"location":"launch/tier4_autoware_api_launch/#tier4_autoware_api_launch","title":"tier4_autoware_api_launch","text":""},{"location":"launch/tier4_autoware_api_launch/#description","title":"Description","text":"

This package contains launch files that run nodes to convert Autoware internal topics into a consistent API used by external software (e.g., fleet management systems, simulators).

"},{"location":"launch/tier4_autoware_api_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_autoware_api_launch/#usage","title":"Usage","text":"

You can include autoware_api.launch.xml in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_autoware_api_launch)/launch/autoware_api.launch.xml\"/>\n
"},{"location":"launch/tier4_autoware_api_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_control_launch/","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#tier4_control_launch","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_control_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_control_launch/#usage","title":"Usage","text":"

You can include control.launch.py in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of control.launch.py.

<include file=\"$(find-pkg-share tier4_control_launch)/launch/control.launch.py\">\n<!-- options for lateral_controller_mode: mpc_follower, pure_pursuit -->\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n  <arg name=\"lateral_controller_mode\" value=\"mpc_follower\" />\n</include>\n
"},{"location":"launch/tier4_control_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_localization_launch/","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#tier4_localization_launch","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_localization_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_localization_launch/#usage","title":"Usage","text":"

Include localization.launch.xml in other launch files as follows.

You can select which localization methods to launch as the pose_estimator or twist_estimator by specifying pose_source and twist_source.

In addition, you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of localization.launch.xml.

  <include file=\"$(find-pkg-share tier4_localization_launch)/launch/localization.launch.xml\">\n<!-- Localization methods -->\n<arg name=\"pose_source\" value=\"...\"/>\n<arg name=\"twist_source\" value=\"...\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_map_launch/","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#tier4_map_launch","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_map_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_map_launch/#usage","title":"Usage","text":"

You can include map.launch.py in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of map.launch.py.

<arg name=\"map_path\" description=\"point cloud and lanelet2 map directory path\"/>\n<arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n<arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n\n<include file=\"$(find-pkg-share tier4_map_launch)/launch/map.launch.py\">\n<arg name=\"lanelet2_map_path\" value=\"$(var map_path)/$(var lanelet2_map_file)\" />\n<arg name=\"pointcloud_map_path\" value=\"$(var map_path)/$(var pointcloud_map_file)\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_map_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_perception_launch/","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#tier4_perception_launch","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_perception_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_perception_launch/#usage","title":"Usage","text":"

You can include perception.launch.xml in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of perception.launch.xml.

  <include file=\"$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml\">\n<!-- options for mode: camera_lidar_fusion, lidar, camera -->\n<arg name=\"mode\" value=\"lidar\" />\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_planning_launch/","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#tier4_planning_launch","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_planning_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_planning_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of planning.launch.xml.

<include file=\"$(find-pkg-share tier4_planning_launch)/launch/planning.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_sensing_launch/","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#tier4_sensing_launch","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_sensing_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_sensing_launch/#usage","title":"Usage","text":"

You can include sensing.launch.xml in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_sensing_launch)/launch/sensing.launch.xml\">\n<arg name=\"launch_driver\" value=\"true\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n<arg name=\"vehicle_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/vehicle_info.param.yaml\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/mirror.param.yaml\"/>\n</include>\n
"},{"location":"launch/tier4_sensing_launch/#launch-directory-structure","title":"Launch Directory Structure","text":"

This package finds the sensor settings of the specified sensor model in its launch files.

launch/\n\u251c\u2500\u2500 aip_x1 # Sensor model name\n\u2502   \u251c\u2500\u2500 camera.launch.xml # Camera\n\u2502   \u251c\u2500\u2500 gnss.launch.xml # GNSS\n\u2502   \u251c\u2500\u2500 imu.launch.xml # IMU\n\u2502   \u251c\u2500\u2500 lidar.launch.xml # LiDAR\n\u2502   \u2514\u2500\u2500 pointcloud_preprocessor.launch.py # for preprocessing pointcloud\n...\n
"},{"location":"launch/tier4_sensing_launch/#notes","title":"Notes","text":"

This package resolves settings using variables.

Example:

<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/$(var sensor_model)/lidar.launch.xml\">\n
"},{"location":"launch/tier4_simulator_launch/","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#tier4_simulator_launch","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_simulator_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_simulator_launch/#usage","title":"Usage","text":"
  <include file=\"$(find-pkg-share tier4_simulator_launch)/launch/simulator.launch.xml\">\n<arg name=\"vehicle_info_param_file\" value=\"VEHICLE_INFO_PARAM_FILE\" />\n<arg name=\"vehicle_model\" value=\"VEHICLE_MODEL\"/>\n</include>\n

The simulator model used in simple_planning_simulator is loaded from \"config/simulator_model.param.yaml\" in the \"VEHICLE_MODEL_description\" package.

"},{"location":"launch/tier4_system_launch/","title":"tier4_system_launch","text":""},{"location":"launch/tier4_system_launch/#tier4_system_launch","title":"tier4_system_launch","text":""},{"location":"launch/tier4_system_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_system_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_system_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of system.launch.xml.

  <include file=\"$(find-pkg-share tier4_system_launch)/launch/system.launch.xml\">\n<arg name=\"run_mode\" value=\"online\"/>\n<arg name=\"sensor_model\" value=\"SENSOR_MODEL\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_vehicle_launch/","title":"tier4_vehicle_launch","text":""},{"location":"launch/tier4_vehicle_launch/#tier4_vehicle_launch","title":"tier4_vehicle_launch","text":""},{"location":"launch/tier4_vehicle_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_vehicle_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_vehicle_launch/#usage","title":"Usage","text":"

To use vehicle.launch.xml, include it in your *.launch.xml as follows:

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n\n<include file=\"$(find-pkg-share tier4_vehicle_launch)/launch/vehicle.launch.xml\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n</include>\n
"},{"location":"launch/tier4_vehicle_launch/#notes","title":"Notes","text":"

This package locates some external packages and setting files using variables and package names.

Example:

<let name=\"vehicle_model_pkg\" value=\"$(find-pkg-share $(var vehicle_model)_description)\"/>\n
<arg name=\"config_dir\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)\"/>\n
"},{"location":"launch/tier4_vehicle_launch/#vehiclexacro","title":"vehicle.xacro","text":""},{"location":"launch/tier4_vehicle_launch/#arguments","title":"Arguments","text":"Name Type Description Default sensor_model String sensor model name \"\" vehicle_model String vehicle model name \"\""},{"location":"launch/tier4_vehicle_launch/#usage_1","title":"Usage","text":"

You can write the following in your *.launch.xml:

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n<arg name=\"model\" default=\"$(find-pkg-share tier4_vehicle_launch)/urdf/vehicle.xacro\"/>\n\n<node name=\"robot_state_publisher\" pkg=\"robot_state_publisher\" exec=\"robot_state_publisher\">\n<param name=\"robot_description\" value=\"$(command 'xacro $(var model) vehicle_model:=$(var vehicle_model) sensor_model:=$(var sensor_model)')\"/>\n</node>\n
"},{"location":"localization/autoware_ekf_localizer/","title":"Overview","text":""},{"location":"localization/autoware_ekf_localizer/#overview","title":"Overview","text":"

The Extended Kalman Filter Localizer estimates a robust and smooth (less noisy) robot pose and twist by integrating a 2D vehicle dynamics model with the input ego-pose and ego-twist messages. The algorithm is designed especially for fast-moving robots such as autonomous vehicles.

"},{"location":"localization/autoware_ekf_localizer/#flowchart","title":"Flowchart","text":"

The overall flowchart of the autoware_ekf_localizer is shown below.

"},{"location":"localization/autoware_ekf_localizer/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"localization/autoware_ekf_localizer/#node","title":"Node","text":""},{"location":"localization/autoware_ekf_localizer/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description measured_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Input pose source with the measurement covariance matrix. measured_twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped Input twist source with the measurement covariance matrix. initialpose geometry_msgs::msg::PoseWithCovarianceStamped Initial pose for EKF. The estimated pose is initialized with zeros at the start. It is initialized with this message whenever published."},{"location":"localization/autoware_ekf_localizer/#published-topics","title":"Published Topics","text":"Name Type Description ekf_odom nav_msgs::msg::Odometry Estimated odometry. ekf_pose geometry_msgs::msg::PoseStamped Estimated pose. ekf_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated pose with covariance. ekf_biased_pose geometry_msgs::msg::PoseStamped Estimated pose including the yaw bias ekf_biased_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated pose with covariance including the yaw bias ekf_twist geometry_msgs::msg::TwistStamped Estimated twist. ekf_twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped The estimated twist with covariance. diagnostics diagnostics_msgs::msg::DiagnosticArray The diagnostic information. debug/processing_time_ms autoware_internal_debug_msgs::msg::Float64Stamped The processing time [ms]."},{"location":"localization/autoware_ekf_localizer/#published-tf","title":"Published TF","text":""},{"location":"localization/autoware_ekf_localizer/#functions","title":"Functions","text":""},{"location":"localization/autoware_ekf_localizer/#predict","title":"Predict","text":"

The current robot state is predicted from previously estimated data using a given prediction model. This calculation is called at a constant interval (predict_frequency [Hz]). The prediction equation is described at the end of this page.
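
The following is a minimal Python sketch of one predict step, for illustration only (not the actual implementation); it assumes a hypothetical state vector [x, y, yaw, yaw_bias, vx, wz] and omits the covariance propagation P = F P F^T + Q.

import numpy as np\n\ndef predict_state(state, dt):\n    # state = [x, y, yaw, yaw_bias, vx, wz] (hypothetical layout for illustration)\n    x, y, yaw, b, vx, wz = state\n    heading = yaw + b  # heading angle including the yaw bias\n    return np.array([\n        x + vx * np.cos(heading) * dt,\n        y + vx * np.sin(heading) * dt,\n        yaw + wz * dt,\n        b,   # yaw bias: modeled as constant\n        vx,  # velocity: modeled as constant plus process noise\n        wz,\n    ])\n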

"},{"location":"localization/autoware_ekf_localizer/#measurement-update","title":"Measurement Update","text":"

Before the update, the Mahalanobis distance between the measured input and the predicted state is calculated; the measurement update is not performed for inputs whose Mahalanobis distance exceeds the given threshold.

The predicted state is updated with the latest measured inputs, measured_pose, and measured_twist. The updates are performed with the same frequency as prediction, usually at a high frequency, in order to enable smooth state estimation.
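
As a rough illustration of the gating step, the following Python sketch (a hypothetical helper, not the node's API) computes the squared Mahalanobis distance of a measurement against the prediction and compares it with the configured gate (pose_gate_dist / twist_gate_dist):

import numpy as np\n\ndef passes_mahalanobis_gate(gate_dist, measured, predicted, innovation_cov):\n    d = (measured - predicted).reshape(-1, 1)\n    squared_distance = float(d.T @ np.linalg.solve(innovation_cov, d))\n    # the measurement update is skipped when the distance exceeds the gate\n    return squared_distance <= gate_dist\n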

"},{"location":"localization/autoware_ekf_localizer/#parameter-description","title":"Parameter description","text":"

The parameters are set in launch/ekf_localizer.launch.

"},{"location":"localization/autoware_ekf_localizer/#for-node","title":"For Node","text":"Name Type Description Default Range show_debug_info boolean Flag to display debug info 0 N/A predict_frequency float Frequency for filtering and publishing [Hz] 50 N/A tf_rate float Frequency for tf broadcasting [Hz] 50 N/A extend_state_step integer Max delay step which can be dealt with in EKF. Large number increases computational cost. 50 N/A enable_yaw_bias_estimation boolean Flag to enable yaw bias estimation 1 N/A"},{"location":"localization/autoware_ekf_localizer/#for-pose-measurement","title":"For pose measurement","text":"Name Type Description Default Range pose_additional_delay float Additional delay time for pose measurement [s] 0 N/A pose_measure_uncertainty_time float Measured time uncertainty used for covariance calculation [s] 0.01 N/A pose_smoothing_steps integer A value for smoothing steps 5 N/A pose_gate_dist float Limit of Mahalanobis distance used for outliers detection 49.5 N/A"},{"location":"localization/autoware_ekf_localizer/#for-twist-measurement","title":"For twist measurement","text":"Name Type Description Default Range twist_additional_delay float Additional delay time for twist [s] 0 N/A twist_smoothing_steps integer A value for smoothing steps 2 N/A twist_gate_dist float Limit of Mahalanobis distance used for outliers detection 46.1 N/A"},{"location":"localization/autoware_ekf_localizer/#for-process-noise","title":"For process noise","text":"Name Type Description Default Range proc_stddev_vx_c float Standard deviation of process noise in time differentiation expression of linear velocity x, noise for d_vx = 0 10 N/A proc_stddev_wz_c float Standard deviation of process noise in time differentiation expression of angular velocity z, noise for d_wz = 0 5 N/A proc_stddev_yaw_c float Standard deviation of process noise in time differentiation expression of yaw, noise for d_yaw = omega 0.005 N/A

Note: the process noise for positions x and y is calculated automatically from the nonlinear dynamics.

"},{"location":"localization/autoware_ekf_localizer/#simple-1d-filter-parameters","title":"Simple 1D Filter Parameters","text":"Name Type Description Default Range z_filter_proc_dev float Simple1DFilter - Z filter process deviation 1 N/A roll_filter_proc_dev float Simple1DFilter - Roll filter process deviation 0.1 N/A pitch_filter_proc_dev float Simple1DFilter - Pitch filter process deviation 0.1 N/A"},{"location":"localization/autoware_ekf_localizer/#for-diagnostics","title":"For diagnostics","text":"Name Type Description Default Range pose_no_update_count_threshold_warn integer The threshold at which a WARN state is triggered due to the Pose Topic update not happening continuously for a certain number of times 50 N/A pose_no_update_count_threshold_error integer The threshold at which an ERROR state is triggered due to the Pose Topic update not happening continuously for a certain number of times 100 N/A twist_no_update_count_threshold_warn integer The threshold at which a WARN state is triggered due to the Twist Topic update not happening continuously for a certain number of times 50 N/A twist_no_update_count_threshold_error integer The threshold at which an ERROR state is triggered due to the Twist Topic update not happening continuously for a certain number of times 100 N/A ellipse_scale float The scale factor to apply the error ellipse size 3 N/A error_ellipse_size float The long axis size of the error ellipse to trigger a ERROR state 1.5 N/A warn_ellipse_size float The long axis size of the error ellipse to trigger a WARN state 1.2 N/A error_ellipse_size_lateral_direction float The lateral direction size of the error ellipse to trigger a ERROR state 0.3 N/A warn_ellipse_size_lateral_direction float The lateral direction size of the error ellipse to trigger a WARN state 0.25 N/A"},{"location":"localization/autoware_ekf_localizer/#misc","title":"Misc","text":"Name Type Description Default Range threshold_observable_velocity_mps float Minimum value for velocity that will be used for EKF. Mainly used for dead zone in velocity sensor [m/s] (0.0 means disabled) 0.0 N/A pose_frame_id string Parent frame_id of EKF output pose map N/A"},{"location":"localization/autoware_ekf_localizer/#how-to-tune-ekf-parameters","title":"How to tune EKF parameters","text":""},{"location":"localization/autoware_ekf_localizer/#0-preliminaries","title":"0. Preliminaries","text":""},{"location":"localization/autoware_ekf_localizer/#1-tune-sensor-parameters","title":"1. Tune sensor parameters","text":"

Set the standard deviation for each sensor. The pose_measure_uncertainty_time accounts for the uncertainty of the header timestamp data. You can also tune the number of smoothing steps for each observed sensor input via *_smoothing_steps. Increasing the number will improve the smoothness of the estimation, but may have an adverse effect on the estimation performance.

"},{"location":"localization/autoware_ekf_localizer/#2-tune-process-model-parameters","title":"2. Tune process model parameters","text":""},{"location":"localization/autoware_ekf_localizer/#3-tune-gate-parameters","title":"3. Tune gate parameters","text":"

EKF performs gating using the Mahalanobis distance before updating by observation. The gate size is determined by the pose_gate_dist and twist_gate_dist parameters. If the Mahalanobis distance is larger than this value, the observation is ignored.

This gating process is based on a statistical test using the chi-square distribution. Under this model, the Mahalanobis distance is assumed to follow a chi-square distribution with 3 degrees of freedom for pose and 2 degrees of freedom for twist.

Currently, the accuracy of covariance estimation itself is not very good, so it is recommended to set the significance level to a very small value to reduce rejection due to false positives.
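
The thresholds in the table below are chi-square quantiles and, assuming SciPy is available, can be reproduced as follows:

from scipy.stats import chi2\n\nfor alpha in [1e-2, 1e-3, 1e-4, 1e-10]:\n    # twist is gated with 2 degrees of freedom, pose with 3\n    print(alpha, chi2.isf(alpha, df=2), chi2.isf(alpha, df=3))\n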

Significance level Threshold for 2 dof Threshold for 3 dof \\(10 ^ {-2}\\) 9.21 11.3 \\(10 ^ {-3}\\) 13.8 16.3 \\(10 ^ {-4}\\) 18.4 21.1 \\(10 ^ {-5}\\) 23.0 25.9 \\(10 ^ {-6}\\) 27.6 30.7 \\(10 ^ {-7}\\) 32.2 35.4 \\(10 ^ {-8}\\) 36.8 40.1 \\(10 ^ {-9}\\) 41.4 44.8 \\(10 ^ {-10}\\) 46.1 49.5"},{"location":"localization/autoware_ekf_localizer/#kalman-filter-model","title":"Kalman Filter Model","text":""},{"location":"localization/autoware_ekf_localizer/#kinematics-model-in-update-function","title":"kinematics model in update function","text":"

where \\(\\theta_k\\) represents the vehicle's heading angle, including the mounting angle bias. \\(b_k\\) is a correction term for the yaw bias, and it is modeled so that \\((\\theta_k+b_k)\\) becomes the heading angle of the base_link. The pose_estimator is expected to publish the base_link in the map coordinate system. However, the yaw angle may be offset due to calibration errors. This model compensates for this error and improves estimation accuracy.

"},{"location":"localization/autoware_ekf_localizer/#time-delay-model","title":"time delay model","text":"

The measurement time delay is handled by an augmented state [1] (See, Section 7.3 FIXED-LAG SMOOTHING).

Note that, although the dimension of the state gets larger, the computational complexity does not significantly change, because an analytical expansion based on the specific structure of the augmented state can be applied.

"},{"location":"localization/autoware_ekf_localizer/#test-result-with-autoware-ndt","title":"Test Result with Autoware NDT","text":""},{"location":"localization/autoware_ekf_localizer/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/autoware_ekf_localizer/#the-conditions-that-result-in-a-warn-state","title":"The conditions that result in a WARN state","text":""},{"location":"localization/autoware_ekf_localizer/#the-conditions-that-result-in-an-error-state","title":"The conditions that result in an ERROR state","text":""},{"location":"localization/autoware_ekf_localizer/#known-issues","title":"Known issues","text":""},{"location":"localization/autoware_ekf_localizer/#reference","title":"reference","text":"

[1] Anderson, B. D. O., & Moore, J. B. (1979). Optimal filtering. Englewood Cliffs, NJ: Prentice-Hall.

"},{"location":"localization/autoware_geo_pose_projector/","title":"autoware_geo_pose_projector","text":""},{"location":"localization/autoware_geo_pose_projector/#autoware_geo_pose_projector","title":"autoware_geo_pose_projector","text":""},{"location":"localization/autoware_geo_pose_projector/#overview","title":"Overview","text":"

This is a simple node that subscribes to the geo-referenced pose topic and publishes the pose in the map frame.

"},{"location":"localization/autoware_geo_pose_projector/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description input_geo_pose geographic_msgs::msg::GeoPoseWithCovarianceStamped geo-referenced pose /map/map_projector_info autoware_map_msgs::msg::MapProjectedObjectInfo map projector info"},{"location":"localization/autoware_geo_pose_projector/#published-topics","title":"Published Topics","text":"Name Type Description output_pose geometry_msgs::msg::PoseWithCovarianceStamped pose in map frame /tf tf2_msgs::msg::TFMessage tf from parent link to the child link"},{"location":"localization/autoware_geo_pose_projector/#parameters","title":"Parameters","text":"Name Type Description Default Range publish_tf boolean whether to publish tf True N/A parent_frame string parent frame for published tf map N/A child_frame string child frame for published tf pose_estimator_base_link N/A"},{"location":"localization/autoware_geo_pose_projector/#limitations","title":"Limitations","text":"

The covariance conversion may be incorrect depending on the projection type you are using. The covariance of the input topic is expressed in (Latitude, Longitude, Altitude) as a diagonal matrix. Currently, we assume that the x axis is the east direction and the y axis is the north direction. Thus, the conversion may be incorrect when this assumption breaks, especially when the covariances of latitude and longitude differ.

"},{"location":"localization/autoware_gyro_odometer/","title":"autoware_gyro_odometer","text":""},{"location":"localization/autoware_gyro_odometer/#autoware_gyro_odometer","title":"autoware_gyro_odometer","text":""},{"location":"localization/autoware_gyro_odometer/#purpose","title":"Purpose","text":"

autoware_gyro_odometer is a package that estimates twist by combining the IMU angular velocity with the vehicle speed.
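
Conceptually, the linear velocity comes from the vehicle twist and the angular velocity from the IMU gyro transformed into the base_link frame. A minimal Python sketch with a hypothetical helper (the actual node additionally handles message queues, covariances, and timeouts):

def combine_twist(vehicle_linear_x, imu_gyro_in_base_link):\n    # linear velocity (x) from the vehicle speed sensor\n    linear = (vehicle_linear_x, 0.0, 0.0)\n    # angular velocity from the IMU gyro, already transformed to base_link\n    angular = imu_gyro_in_base_link  # (wx, wy, wz)\n    return linear, angular\n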

"},{"location":"localization/autoware_gyro_odometer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_gyro_odometer/#input","title":"Input","text":"Name Type Description vehicle/twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance from vehicle imu sensor_msgs::msg::Imu imu from sensor"},{"location":"localization/autoware_gyro_odometer/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped estimated twist with covariance"},{"location":"localization/autoware_gyro_odometer/#parameters","title":"Parameters","text":"Name Type Description Default Range output_frame string output's frame id base_link N/A message_timeout_sec float delay tolerance time for message 0.2 N/A"},{"location":"localization/autoware_gyro_odometer/#assumptions-known-limits","title":"Assumptions / Known limits","text":" "},{"location":"localization/autoware_gyro_odometer/#diagnostics","title":"Diagnostics","text":"Name Description Transition condition to Warning Transition condition to Error topic_time_stamp the time stamp of service calling. [nano second] none none is_arrived_first_vehicle_twist whether the vehicle twist topic has been received even once. not arrive yet none is_arrived_first_imu whether the imu topic has been received even once. not arrive yet none vehicle_twist_time_stamp_dt the time difference between the current time and the latest vehicle twist topic. [second] none the time is longer than message_timeout_sec imu_time_stamp_dt the time difference between the current time and the latest imu topic. [second] none the time is longer than message_timeout_sec vehicle_twist_queue_size the size of vehicle_twist_queue. none none imu_queue_size the size of gyro_queue. none none is_succeed_transform_imu whether transform imu is succeed or not. none failed"},{"location":"localization/autoware_landmark_based_localizer/","title":"Landmark Based Localizer","text":""},{"location":"localization/autoware_landmark_based_localizer/#landmark-based-localizer","title":"Landmark Based Localizer","text":"

This directory contains packages for landmark-based localization.

Landmarks are, for example

etc.

Since these landmarks are easy to detect and their poses are easy to estimate, the ego pose can be calculated from the pose of a detected landmark, provided the pose of the landmark is registered on the map in advance.

Currently, landmarks are assumed to be flat.

The following figure shows the principle of localization in the case of ar_tag_based_localizer.

This calculated ego pose is passed to the EKF, where it is fused with the twist information and used to estimate a more accurate ego pose.

"},{"location":"localization/autoware_landmark_based_localizer/#node-diagram","title":"Node diagram","text":""},{"location":"localization/autoware_landmark_based_localizer/#landmark_manager","title":"landmark_manager","text":"

The definitions of the landmarks written to the map are introduced in the next section. See Map Specifications.

The landmark_manager is a utility package to load landmarks from the map.

Users can define landmarks as Lanelet2 4-vertex polygons. In this case, it is possible to define an arrangement in which the four vertices cannot be considered to lie on the same plane, and the orientation of such a landmark is difficult to calculate. Therefore, the 4 vertices are treated as forming a tetrahedron; if its volume exceeds the volume_threshold parameter, tf_static is not published for that landmark.
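
For reference, the volume of the tetrahedron formed by the four vertices can be computed from a determinant; a minimal Python sketch assuming NumPy (hypothetical helpers, not the package's code):

import numpy as np\n\ndef tetrahedron_volume(p0, p1, p2, p3):\n    # zero when the four vertices are coplanar\n    return abs(np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0]))) / 6.0\n\ndef publishes_tf_static(vertices, volume_threshold):\n    # tf_static is published only when the volume is within the threshold\n    return tetrahedron_volume(*vertices) <= volume_threshold\n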

"},{"location":"localization/autoware_landmark_based_localizer/#landmark-based-localizer-packages","title":"Landmark based localizer packages","text":""},{"location":"localization/autoware_landmark_based_localizer/#map-specifications","title":"Map specifications","text":"

See https://github.com/autowarefoundation/autoware_lanelet2_extension/blob/main/autoware_lanelet2_extension/docs/lanelet2_format_extension.md#localization-landmarks

"},{"location":"localization/autoware_landmark_based_localizer/#about-consider_orientation","title":"About consider_orientation","text":"

The calculate_new_self_pose function in the LandmarkManager class includes a boolean argument named consider_orientation. This argument determines the method used to calculate the new self pose based on detected and mapped landmarks. The following image illustrates the difference between the two methods.

"},{"location":"localization/autoware_landmark_based_localizer/#consider_orientation-true","title":"consider_orientation = true","text":"

In this mode, the new self pose is calculated so that the relative Pose of the \"landmark detected from the current self pose\" is equal to the relative Pose of the \"landmark mapped from the new self pose\". This method can correct for orientation, but is strongly affected by the orientation error of the landmark detection.

"},{"location":"localization/autoware_landmark_based_localizer/#consider_orientation-false","title":"consider_orientation = false","text":"

In this mode, the new self pose is calculated so that only the relative position is correct for x, y, and z.

This method cannot correct for orientation, but it is not affected by the orientation error of the landmark detection.
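
In other words, the self pose is shifted by the difference between the mapped and the detected landmark positions while the orientation is kept. A conceptual Python sketch (hypothetical helper, assuming NumPy arrays expressed in the map frame):

import numpy as np\n\ndef new_self_position(current_self_pos, detected_landmark_pos, mapped_landmark_pos):\n    # shift the self position so that the detected landmark coincides with the\n    # mapped landmark; the orientation is left unchanged in this mode\n    return current_self_pos + (mapped_landmark_pos - detected_landmark_pos)\n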

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/","title":"AR Tag Based Localizer","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#ar-tag-based-localizer","title":"AR Tag Based Localizer","text":"

ArTagBasedLocalizer is a vision-based localization node.

This node uses the ArUco library to detect AR-Tags from camera images and calculates and publishes the pose of the ego vehicle based on these detections. The positions and orientations of the AR-Tags are assumed to be written in the Lanelet2 format.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#ar_tag_based_localizer-node","title":"ar_tag_based_localizer node","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#input","title":"Input","text":"Name Type Description ~/input/lanelet2_map autoware_map_msgs::msg::LaneletMapBin Data of lanelet2 ~/input/image sensor_msgs::msg::Image Camera Image ~/input/camera_info sensor_msgs::msg::CameraInfo Camera Info ~/input/ekf_pose geometry_msgs::msg::PoseWithCovarianceStamped EKF Pose without IMU correction. It is used to validate detected AR tags by filtering out False Positives. Only if the EKF Pose and the AR tag-detected Pose are within a certain temporal and spatial range, the AR tag-detected Pose is considered valid and published."},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#output","title":"Output","text":"Name Type Description ~/output/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated Pose ~/debug/result sensor_msgs::msg::Image [debug topic] Image in which marker detection results are superimposed on the input image ~/debug/marker visualization_msgs::msg::MarkerArray [debug topic] Loaded landmarks to visualize in Rviz as thin boards /tf geometry_msgs::msg::TransformStamped [debug topic] TF from camera to detected tag /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics outputs"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#parameters","title":"Parameters","text":"Name Type Description Default Range marker_size float marker_size 0.6 N/A target_tag_ids array target_tag_ids ['0','1','2','3','4','5','6'] N/A base_covariance array base_covariance [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02] N/A distance_threshold float distance_threshold(m) 13.0 N/A consider_orientation boolean consider_orientation false N/A detection_mode string detection_mode select from [DM_NORMAL, DM_FAST, DM_VIDEO_FAST] DM_NORMAL N/A min_marker_size float min_marker_size 0.02 N/A ekf_time_tolerance float ekf_time_tolerance(sec) 5.0 N/A ekf_position_tolerance float ekf_position_tolerance(m) 10.0 N/A"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#how-to-launch","title":"How to launch","text":"

When launching Autoware, set pose_source to artag.

ros2 launch autoware_launch ... \\\npose_source:=artag \\\n...\n
"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#rosbag","title":"Rosbag","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#sample-rosbag-and-map-awsim-data","title":"Sample rosbag and map (AWSIM data)","text":"

This data is simulated data created by AWSIM. Essentially, AR tag-based self-localization is not intended for this kind of public road driving, but for driving in smaller areas, so the maximum driving speed is set to 15 km/h.

It is a known problem that the timing at which each AR tag first becomes detected can cause significant changes in the estimation.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#sample-rosbag-and-map-real-world-data","title":"Sample rosbag and map (Real world data)","text":"

Please remap the topic names and play it.

ros2 bag play /path/to/ar_tag_based_localizer_sample_bag/ -r 0.5 -s sqlite3 \\\n--remap /sensing/camera/front/image:=/sensing/camera/traffic_light/image_raw \\\n/sensing/camera/front/image/info:=/sensing/camera/traffic_light/camera_info\n

This dataset contains issues such as missing IMU data, and the overall accuracy is low. Even when running AR tag-based self-localization, a significant difference from the true trajectory can be observed.

The image below shows the trajectory when the sample is executed and plotted.

The pull request video below should also be helpful.

https://github.com/autowarefoundation/autoware.universe/pull/4347#issuecomment-1663155248

"},{"location":"localization/autoware_landmark_based_localizer/autoware_ar_tag_based_localizer/#principle","title":"Principle","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/","title":"LiDAR Marker Localizer","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#lidar-marker-localizer","title":"LiDAR Marker Localizer","text":"

LiDARMarkerLocalizer is a localization node based on detecting reflectors.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#lidar_marker_localizer-node","title":"lidar_marker_localizer node","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#input","title":"Input","text":"Name Type Description ~/input/lanelet2_map autoware_map_msgs::msg::HADMapBin Data of lanelet2 ~/input/pointcloud sensor_msgs::msg::PointCloud2 PointCloud ~/input/ekf_pose geometry_msgs::msg::PoseWithCovarianceStamped EKF Pose"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#output","title":"Output","text":"Name Type Description ~/output/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped Estimated pose ~/debug/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped [debug topic] Estimated pose ~/debug/marker_detected geometry_msgs::msg::PoseArray [debug topic] Detected marker poses ~/debug/marker_mapped visualization_msgs::msg::MarkerArray [debug topic] Loaded landmarks to visualize in Rviz as thin boards ~/debug/marker_pointcloud sensor_msgs::msg::PointCloud2 [debug topic] PointCloud of the detected marker /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics outputs"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#parameters","title":"Parameters","text":"Name Type Description Default Range marker_name string The name of the markers listed in the HD map. reflector N/A resolution float Grid size for marker detection algorithm. [m] 0.05 \u22650.0 intensity_pattern array A sequence of high/low intensity to perform pattern matching. 1: high intensity (positive match), 0: not consider, -1: low intensity (negative match) [-1, -1, 0, 1, 1, 1, 1, 1, 0, -1, -1] N/A match_intensity_difference_threshold float Threshold for determining high/low. 20 \u22650 positive_match_num_threshold float Threshold for the number of required matches with the pattern. 3 \u22650 negative_match_num_threshold float Threshold for the number of required matches with the pattern. 3 \u22650 vote_threshold_for_detect_marker float Threshold for the number of rings matching the pattern needed to detect it as a marker. 20 \u22650 marker_height_from_ground float Height from the ground to the center of the marker. [m] 1.075 N/A self_pose_timeout_sec float Timeout for self pose. [sec] 1.0 \u22650.0 self_pose_distance_tolerance_m float Tolerance for the distance between two points when performing linear interpolation. [m] 1.0 \u22650.0 limit_distance_from_self_pose_to_nearest_marker float Distance limit for the purpose of determining whether the node should detect a marker. [m] 2.0 \u22650.0 limit_distance_from_self_pose_to_marker float Distance limit for avoiding miss detection. [m] 2.0 \u22650.0 base_covariance array Output covariance in the base_link coordinate. [0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 7.569e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 7.569e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00030625] N/A marker_width float Width of a marker. 
This param is used for visualizing the detected marker pointcloud[m] 0.8 \u22650.0 enable_save_log boolean False N/A save_file_directory_path string $(env HOME)/detected_reflector_intensity N/A save_file_name string detected_reflector_intensity N/A save_frame_id string velodyne_top N/A"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#how-to-launch","title":"How to launch","text":"

When launching Autoware, set pose_source to lidar-marker.

ros2 launch autoware_launch ... \\\npose_source:=lidar-marker \\\n...\n
"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#design","title":"Design","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#flowchart","title":"Flowchart","text":""},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#detection-algorithm","title":"Detection Algorithm","text":"
  1. Split the LiDAR point cloud into rings along the x-axis of the base_link coordinate system at intervals of the resolution size.
  2. Find the portion of intensity that matches the intensity_pattern.
  3. Perform steps 1 and 2 for each ring, accumulate the matching indices, and detect portions where the count exceeds the vote_threshold_for_detect_marker as markers.
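
A rough Python sketch of the per-ring pattern matching in step 2 is shown below. It is a simplification under stated assumptions: intensities are already binned at the resolution interval, and the high/low decision uses match_intensity_difference_threshold against a reference intensity approximated here by the window median.

import numpy as np\n\ndef match_positions(ring_intensities, pattern, diff_threshold,\n                    positive_threshold, negative_threshold):\n    # pattern example: [-1, -1, 0, 1, 1, 1, 1, 1, 0, -1, -1]\n    n = len(pattern)\n    positions = []\n    for i in range(len(ring_intensities) - n + 1):\n        window = ring_intensities[i : i + n]\n        center = float(np.median(window))  # reference intensity (assumption)\n        positive = sum(p == 1 and v - center > diff_threshold for p, v in zip(pattern, window))\n        negative = sum(p == -1 and center - v > diff_threshold for p, v in zip(pattern, window))\n        if positive >= positive_threshold and negative >= negative_threshold:\n            positions.append(i)\n    return positions\n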
"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#sample-dataset","title":"Sample Dataset","text":"

This dataset was acquired at the National Institute for Land and Infrastructure Management's full-scale tunnel experiment facility. The reflectors were installed by Taisei Corporation.

"},{"location":"localization/autoware_landmark_based_localizer/autoware_lidar_marker_localizer/#collaborators","title":"Collaborators","text":""},{"location":"localization/autoware_localization_error_monitor/","title":"autoware_localization_error_monitor","text":""},{"location":"localization/autoware_localization_error_monitor/#autoware_localization_error_monitor","title":"autoware_localization_error_monitor","text":""},{"location":"localization/autoware_localization_error_monitor/#purpose","title":"Purpose","text":"

autoware_localization_error_monitor is a package for diagnosing localization errors by monitoring the uncertainty of the localization results. The package monitors the following two values:

"},{"location":"localization/autoware_localization_error_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_localization_error_monitor/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization result"},{"location":"localization/autoware_localization_error_monitor/#output","title":"Output","text":"Name Type Description debug/ellipse_marker visualization_msgs::msg::Marker ellipse marker diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics outputs"},{"location":"localization/autoware_localization_error_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range scale float scale factor for monitored values 3 N/A error_ellipse_size float error threshold for long radius of confidence ellipse [m] 1.5 N/A warn_ellipse_size float warning threshold for long radius of confidence ellipse [m] 1.2 N/A error_ellipse_size_lateral_direction float error threshold for size of confidence ellipse along lateral direction [m] 0.3 N/A warn_ellipse_size_lateral_direction float warning threshold for size of confidence ellipse along lateral direction [m] 0.25 N/A"},{"location":"localization/autoware_ndt_scan_matcher/","title":"autoware_ndt_scan_matcher","text":""},{"location":"localization/autoware_ndt_scan_matcher/#autoware_ndt_scan_matcher","title":"autoware_ndt_scan_matcher","text":""},{"location":"localization/autoware_ndt_scan_matcher/#purpose","title":"Purpose","text":"

autoware_ndt_scan_matcher is a package for position estimation using the NDT scan matching method.

There are two main functions in this package:

One optional function is regularization. Please see the regularization section later in this page for details. It is disabled by default.

"},{"location":"localization/autoware_ndt_scan_matcher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_ndt_scan_matcher/#input","title":"Input","text":"Name Type Description ekf_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped initial pose points_raw sensor_msgs::msg::PointCloud2 sensor pointcloud sensing/gnss/pose_with_covariance sensor_msgs::msg::PoseWithCovarianceStamped base position for regularization term

sensing/gnss/pose_with_covariance is required only when regularization is enabled.

"},{"location":"localization/autoware_ndt_scan_matcher/#output","title":"Output","text":"Name Type Description ndt_pose geometry_msgs::msg::PoseStamped estimated pose ndt_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped estimated pose with covariance /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics points_aligned sensor_msgs::msg::PointCloud2 [debug topic] pointcloud aligned by scan matching points_aligned_no_ground sensor_msgs::msg::PointCloud2 [debug topic] no ground pointcloud aligned by scan matching initial_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped [debug topic] initial pose used in scan matching multi_ndt_pose geometry_msgs::msg::PoseArray [debug topic] estimated poses from multiple initial poses in real-time covariance estimation multi_initial_pose geometry_msgs::msg::PoseArray [debug topic] initial poses for real-time covariance estimation exe_time_ms autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] execution time for scan matching [ms] transform_probability autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching no_ground_transform_probability autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching based on no ground LiDAR scan iteration_num autoware_internal_debug_msgs::msg::Int32Stamped [debug topic] number of scan matching iterations initial_to_result_relative_pose geometry_msgs::msg::PoseStamped [debug topic] relative pose between the initial point and the convergence point initial_to_result_distance autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the initial point and the convergence point [m] initial_to_result_distance_old autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the older of the two initial points used in linear interpolation and the convergence point [m] initial_to_result_distance_new autoware_internal_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the newer of the two initial points used in linear interpolation and the convergence point [m] ndt_marker visualization_msgs::msg::MarkerArray [debug topic] markers for debugging monte_carlo_initial_pose_marker visualization_msgs::msg::MarkerArray [debug topic] particles used in initial position estimation"},{"location":"localization/autoware_ndt_scan_matcher/#service","title":"Service","text":"Name Type Description ndt_align_srv autoware_localization_srvs::srv::PoseWithCovarianceStamped service to estimate initial pose"},{"location":"localization/autoware_ndt_scan_matcher/#parameters","title":"Parameters","text":""},{"location":"localization/autoware_ndt_scan_matcher/#core-parameters","title":"Core Parameters","text":""},{"location":"localization/autoware_ndt_scan_matcher/#frame","title":"Frame","text":"Name Type Description Default Range base_frame string Vehicle reference frame. base_link N/A ndt_base_frame string NDT reference frame. ndt_base_link N/A map_frame string Map frame. map N/A"},{"location":"localization/autoware_ndt_scan_matcher/#sensor-points","title":"Sensor Points","text":"Name Type Description Default Range timeout_sec float Tolerance of timestamp difference between current time and sensor pointcloud. [sec] 1 \u22650.0 required_distance float Required distance of input sensor points [m]. If the max distance of input sensor points is lower than this value, the scan matching will not be performed. 
10 N/A"},{"location":"localization/autoware_ndt_scan_matcher/#ndt","title":"Ndt","text":"Name Type Description Default Range trans_epsilon float The maximum difference between two consecutive transformations in order to consider convergence. 0.01 \u22650.0 step_size float The newton line search maximum step length. 0.1 \u22650.0 resolution float The ND voxel grid resolution. 2 \u22650.0 max_iterations float The number of iterations required to calculate alignment. 30 \u22651 num_threads float Number of threads used for parallel computing. 4 \u22651"},{"location":"localization/autoware_ndt_scan_matcher/#initial-pose-estimation","title":"Initial Pose Estimation","text":"Name Type Description Default Range particles_num float The number of particles to estimate initial pose. 200 \u22651 n_startup_trials float The number of initial random trials in the TPE (Tree-Structured Parzen Estimator). This value should be equal to or less than 'initial_estimate_particles_num' and more than 0. If it is equal to 'initial_estimate_particles_num', the search will be the same as a full random search. 100 \u22651"},{"location":"localization/autoware_ndt_scan_matcher/#validation","title":"Validation","text":"Name Type Description Default Range initial_pose_timeout_sec float Tolerance of timestamp difference between initial_pose and sensor pointcloud. [sec] 1 \u22650.0 initial_pose_distance_tolerance_m float Tolerance of distance difference between two initial poses used for linear interpolation. [m] 10 \u22650.0 initial_to_result_distance_tolerance_m float Tolerance of distance difference from initial pose to result pose. [m] 3 \u22650.0 critical_upper_bound_exe_time_ms float The execution time which means probably NDT cannot matches scans properly. [ms] 100 \u22650.0 skipping_publish_num float Tolerance for the number for times rejected estimation results consecutively. 5 \u22651"},{"location":"localization/autoware_ndt_scan_matcher/#score-estimation","title":"Score Estimation","text":"Name Type Description Default Range converged_param_type float Converged param type. 0=TRANSFORM_PROBABILITY, 1=NEAREST_VOXEL_TRANSFORMATION_LIKELIHOOD 1 \u22650\u22641 converged_param_transform_probability float If converged_param_type is 0, threshold for deciding whether to trust the estimation result. 3 \u22650.0 converged_param_nearest_voxel_transformation_likelihood float If converged_param_type is 1, threshold for deciding whether to trust the estimation result. 2.3 \u22650.0"},{"location":"localization/autoware_ndt_scan_matcher/#covariance","title":"Covariance","text":"Name Type Description Default Range output_pose_covariance array The covariance of output pose. Note that this covariance matrix is empirically derived. [0.0225, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0225, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0225, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000625] N/A"},{"location":"localization/autoware_ndt_scan_matcher/#regularization","title":"Regularization","text":""},{"location":"localization/autoware_ndt_scan_matcher/#abstract","title":"Abstract","text":"

This is a function that adds the regularization term to the NDT optimization problem as follows.

\\[ \\begin{align} \\min_{\\mathbf{R},\\mathbf{t}} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) +\\mathrm{scale\\ factor}\\cdot \\left| \\mathbf{R}^\\top (\\mathbf{t}_{\\mathrm{base}}-\\mathbf{t}) \\cdot \\begin{pmatrix} 1\\\\ 0\\\\ 0 \\end{pmatrix} \\right|^2 \\end{align} \\]

where \\(\\mathbf{t}_{\\mathrm{base}}\\) is the base position measured by GNSS or other means, and \\(\\mathrm{NDT}(\\mathbf{R},\\mathbf{t})\\) stands for the pure NDT cost function. The regularization term shifts the optimal solution toward the base position along the longitudinal direction of the vehicle. Only errors along the longitudinal direction with respect to the base position are considered; errors along the Z-axis and the lateral axis are not.

Although the regularization term has the rotation as a parameter, the gradient and Hessian associated with it are not computed, in order to stabilize the optimization. Specifically, the gradients are computed as follows.

\\[ \\begin{align} &g_x=\\nabla_x \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\cos\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_y=\\nabla_y \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\sin\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_z=\\nabla_z \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\\\ &g_\\mathbf{R}=\\nabla_\\mathbf{R} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\end{align} \\]
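
For illustration, the additional gradient terms can be written directly from the expressions above (a Python sketch assuming NumPy, where R is the current rotation matrix and t, t_base are 3D positions):

import numpy as np\n\ndef regularization_gradient_xy(R, t, t_base, scale_factor):\n    # longitudinal error: x-component of (t_base - t) seen from the vehicle frame\n    e_longitudinal = float((R.T @ (t_base - t))[0])\n    theta_z = np.arctan2(R[1, 0], R[0, 0])\n    gx = 2.0 * scale_factor * np.cos(theta_z) * e_longitudinal\n    gy = 2.0 * scale_factor * np.sin(theta_z) * e_longitudinal\n    # no contribution is added to g_z or to the rotational gradient\n    return gx, gy\n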

Regularization is disabled by default. If you wish to use it, please edit the following parameters to enable it.

"},{"location":"localization/autoware_ndt_scan_matcher/#where-is-regularization-available","title":"Where is regularization available","text":"

This feature is effective on feature-less roads where GNSS is available, such as

By remapping the base position topic to something other than GNSS, as described below, this feature can also be used in other environments.

"},{"location":"localization/autoware_ndt_scan_matcher/#using-other-base-position","title":"Using other base position","text":"

Besides GNSS, you can provide other global position topics obtained from magnetic markers, visual markers, etc., if they are available in your environment. (Currently, Autoware does not provide a node that publishes such a pose.) To use your topic for regularization, you need to remap input_regularization_pose_topic to your topic in ndt_scan_matcher.launch.xml. By default, it is remapped to /sensing/gnss/pose_with_covariance.

"},{"location":"localization/autoware_ndt_scan_matcher/#limitations","title":"Limitations","text":"

Since this function determines the base position by linear interpolation from the recently subscribed poses, topics that are published at a low frequency relative to the driving speed cannot be used. Inappropriate linear interpolation may result in bad optimization results.

When using GNSS for the base position, the regularization can have negative effects in tunnels, indoors, and near skyscrapers. This is because, if the base position is far off from the true value, NDT scan matching may converge to an inappropriate optimum.

"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_1","title":"Parameters","text":"Name Type Description Default Range enable boolean Regularization switch. 0 N/A scale_factor float Regularization scale factor. 0.01 \u22650.0

Regularization is disabled by default because GNSS is not always accurate enough to provide an appropriate base position in all scenes.

If the scale_factor is too large, the NDT will be drawn to the base position and scan matching may fail. Conversely, if it is too small, the regularization benefit will be lost.

Note that setting scale_factor to 0 is equivalent to disabling regularization.

"},{"location":"localization/autoware_ndt_scan_matcher/#example","title":"Example","text":"

The following figures show tested maps.

The following figures show the trajectories estimated on the feature-less map with standard NDT and regularization-enabled NDT, respectively. The color of the trajectory indicates the error (meter) from the reference trajectory, which is computed with the feature-rich map.

"},{"location":"localization/autoware_ndt_scan_matcher/#dynamic-map-loading","title":"Dynamic map loading","text":"

Autoware supports a dynamic map loading feature for ndt_scan_matcher. Using this feature, NDT dynamically requests the surrounding pointcloud map from pointcloud_map_loader, and then receives and preprocesses the map in an online fashion.

Using this feature, ndt_scan_matcher can theoretically handle maps of any size in terms of memory usage. (Note that a limitation may still exist due to other factors, e.g., floating-point error.)

"},{"location":"localization/autoware_ndt_scan_matcher/#additional-interfaces","title":"Additional interfaces","text":""},{"location":"localization/autoware_ndt_scan_matcher/#additional-outputs","title":"Additional outputs","text":"Name Type Description debug/loaded_pointcloud_map sensor_msgs::msg::PointCloud2 pointcloud maps used for localization (for debug)"},{"location":"localization/autoware_ndt_scan_matcher/#additional-client","title":"Additional client","text":"Name Type Description client_map_loader autoware_map_msgs::srv::GetDifferentialPointCloudMap map loading client"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_2","title":"Parameters","text":"Name Type Description Default Range update_distance float Dynamic map loading distance. 20 \u22650.0 map_radius float Dynamic map loading loading radius. 150 \u22650.0 lidar_radius float Radius of input LiDAR range (used for diagnostics of dynamic map loading). 100 \u22650.0"},{"location":"localization/autoware_ndt_scan_matcher/#notes-for-dynamic-map-loading","title":"Notes for dynamic map loading","text":"

To use the dynamic map loading feature for ndt_scan_matcher, you also need to split the PCD files into grids (recommended size: 20[m] x 20[m]).

Note that dynamic map loading may FAIL if the map is split into two or more large maps (e.g. 1000[m] x 1000[m]). Please provide either of the following:

Here is a split PCD map for sample-map-rosbag from Autoware tutorial: sample-map-rosbag_split.zip

PCD files How NDT loads map(s) single file at once (standard) multiple files dynamically"},{"location":"localization/autoware_ndt_scan_matcher/#scan-matching-score-based-on-no-ground-lidar-scan","title":"Scan matching score based on no ground LiDAR scan","text":""},{"location":"localization/autoware_ndt_scan_matcher/#abstract_1","title":"Abstract","text":"

This function uses the de-grounded LiDAR scan (i.e., with ground points removed) to estimate the scan matching score. This score can reflect the current localization performance more accurately. See the related issue.

"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_3","title":"Parameters","text":"Name Type Description Default Range enable boolean A flag for using scan matching score based on de-grounded LiDAR scan. 0 N/A z_margin_for_ground_removal float If lidar_point.z - base_link.z <= this threshold , the point will be removed. 0.8 \u22650.0"},{"location":"localization/autoware_ndt_scan_matcher/#2d-real-time-covariance-estimation","title":"2D real-time covariance estimation","text":""},{"location":"localization/autoware_ndt_scan_matcher/#abstract_2","title":"Abstract","text":"

Initially, the covariance of NDT scan matching is a fixed value (FIXED_VALUE mode). Three modes are provided for estimating the 2D covariance (xx, xy, yx, yy) in real time: LAPLACE_APPROXIMATION, MULTI_NDT, and MULTI_NDT_SCORE. LAPLACE_APPROXIMATION calculates the inverse matrix of the XY (2x2) part of the Hessian obtained by NDT scan matching and uses it as the covariance matrix. On the other hand, MULTI_NDT and MULTI_NDT_SCORE use NDT convergence from multiple initial poses to obtain the 2D covariance. Ideally, the arrangement of multiple initial poses is efficiently limited by the Hessian matrix of the NDT score function. In this implementation, the number of initial positions is fixed to simplify the code. To obtain the covariance, MULTI_NDT computes until convergence at each initial position, while MULTI_NDT_SCORE uses the nearest voxel transformation likelihood. The covariance can be seen as an error ellipse by displaying ndt_pose_with_covariance in rviz2. See the original paper.
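
For example, the LAPLACE_APPROXIMATION mode essentially amounts to the following conceptual Python sketch (assuming NumPy and a 6x6 NDT Hessian whose leading block corresponds to x and y):

import numpy as np\n\ndef laplace_approximation_covariance_2d(ndt_hessian_6x6, scale_factor=1.0):\n    # invert the XY (2x2) block of the Hessian and use it as the 2D covariance\n    h_xy = ndt_hessian_6x6[:2, :2]\n    return scale_factor * np.linalg.inv(h_xy)\n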

Note that this function may spoil healthy system behavior if it consumes too many computational resources.

"},{"location":"localization/autoware_ndt_scan_matcher/#parameters_4","title":"Parameters","text":"

There are three methods for calculating the 2D covariance in real time. You can select the method by changing covariance_estimation_type. initial_pose_offset_model is rotated around (x,y) = (0,0) in the direction of the first principal component of the Hessian matrix. initial_pose_offset_model_x & initial_pose_offset_model_y must have the same number of elements. In MULTI_NDT_SCORE mode, the scale of the output 2D covariance can be adjusted according to the temperature.

Name Type Description Default Range covariance_estimation_type float 2D Real-time Converged estimation type. 0=FIXED_VALUE, 1=LAPLACE_APPROXIMATION, 2=MULTI_NDT, 3=MULTI_NDT_SCORE (FIXED_VALUE mode does not perform 2D Real-time Converged estimation) 0 \u22650\u22643 initial_pose_offset_model_x array Offset arrangement in covariance estimation [m]. initial_pose_offset_model_x & initial_pose_offset_model_y must have the same number of elements. [0.0, 0.0, 0.5, -0.5, 1.0, -1.0] N/A initial_pose_offset_model_y array Offset arrangement in covariance estimation [m]. initial_pose_offset_model_x & initial_pose_offset_model_y must have the same number of elements. [0.5, -0.5, 0.0, 0.0, 0.0, 0.0] N/A temperature float In MULTI_NDT_SCORE, the parameter that adjusts the estimated 2D covariance 0.05 >0 scale_factor float Scale value for adjusting the estimated covariance by a constant multiplication 1.0 >0"},{"location":"localization/autoware_ndt_scan_matcher/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/autoware_ndt_scan_matcher/#scan_matching_status","title":"scan_matching_status","text":"Name Description Transition condition to Warning Transition condition to Error Whether to reject the estimation result (affects skipping_publish_num) topic_time_stamp the time stamp of input topic none none no sensor_points_size the size of sensor points the size is 0 none yes sensor_points_delay_time_sec the delay time of sensor points the time is longer than sensor_points.timeout_sec none yes is_succeed_transform_sensor_points whether transform sensor points is succeed or not none failed yes sensor_points_max_distance the max distance of sensor points the max distance is shorter than sensor_points.required_distance none yes is_activated whether the node is in the \"activate\" state or not not \"activate\" state none if is_activated is false, then estimation is not executed and skipping_publish_num is set to 0. is_succeed_interpolate_initial_pose whether the interpolate of initial pose is succeed or not failed. (1) the size of initial_pose_buffer_ is smaller than 2. (2) the timestamp difference between initial_pose and sensor pointcloud is longer than validation.initial_pose_timeout_sec. 
(3) distance difference between two initial poses used for linear interpolation is longer than validation.initial_pose_distance_tolerance_m none yes is_set_map_points whether the map points is set or not not set none yes iteration_num the number of times calculate alignment the number of times is larger than ndt.max_iterations none yes local_optimal_solution_oscillation_num the number of times the solution is judged to be oscillating the number of times is larger than 10 none yes transform_probability the score of how well the map aligns with the sensor points the score is smaller thanscore_estimation.converged_param_transform_probability (only in the case of score_estimation.converged_param_type is 0=TRANSFORM_PROBABILITY) none yes transform_probability_diff the tp score difference for the current ndt optimization none none no transform_probability_before the tp score before the current ndt optimization none none no nearest_voxel_transformation_likelihood the score of how well the map aligns with the sensor points the score is smaller than score_estimation.converged_param_nearest_voxel_transformation_likelihood (only in the case of score_estimation.converged_param_type is 1=NEAREST_VOXEL_TRANSFORMATION_LIKELIHOOD) none yes nearest_voxel_transformation_likelihood_diff the nvtl score difference for the current ndt optimization none none no nearest_voxel_transformation_likelihood_before the nvtl score before the current ndt optimization none none no distance_initial_to_result the distance between the position before convergence processing and the position after the distance is longer than validation.initial_to_result_distance_tolerance_m none no execution_time the time for convergence processing the time is longer than validation.critical_upper_bound_exe_time_ms none no skipping_publish_num the number of times rejected estimation results consecutively the number of times is validation.skipping_publish_num or more none -

\u203b The sensor_points_callback shares the same callback group as the trigger_node_service and ndt_align_service. Consequently, if the initial pose estimation takes too long, this diagnostic may become stale.

"},{"location":"localization/autoware_ndt_scan_matcher/#initial_pose_subscriber_status","title":"initial_pose_subscriber_status","text":"Name Description Transition condition to Warning Transition condition to Error topic_time_stamp the time stamp of input topic none none is_activated whether the node is in the \"activate\" state or not not \"activate\" state none is_expected_frame_id whether the input frame_id is the same as frame.map_frame or not none not the same"},{"location":"localization/autoware_ndt_scan_matcher/#regularization_pose_subscriber_status","title":"regularization_pose_subscriber_status","text":"Name Description Transition condition to Warning Transition condition to Error topic_time_stamp the time stamp of input topic none none"},{"location":"localization/autoware_ndt_scan_matcher/#trigger_node_service_status","title":"trigger_node_service_status","text":"Name Description Transition condition to Warning Transition condition to Error service_call_time_stamp the time stamp of service calling none none is_activated whether the node is in the \"activate\" state or not none none is_succeed_service whether the process of service is succeed or not none none

\u203b This diagnostic is only published when the service is called, so it becomes stale after the initial pose estimation is completed.

"},{"location":"localization/autoware_ndt_scan_matcher/#ndt_align_service_status","title":"ndt_align_service_status","text":"Name Description Transition condition to Warning Transition condition to Error service_call_time_stamp the time stamp of service calling none none is_succeed_transform_initial_pose whether transform initial pose is succeed or not none failed is_need_rebuild whether it need to rebuild the map. If the map has not been loaded yet or if distance_last_update_position_to_current_position encounters is an Error state, it is considered necessary to reconstruct the map, and is_need_rebuild becomes True. none none maps_size_before the number of maps before update map none none is_succeed_call_pcd_loader whether call pcd_loader service is succeed or not failed none maps_to_add_size the number of maps to be added none none maps_to_remove_size the number of maps to be removed none none map_update_execution_time the time for map updating none none maps_size_after the number of maps after update map none none is_updated_map whether map is updated. If the map update couldn't be performed or there was no need to update the map, it becomes False none is_updated_map is False but is_need_rebuild is True is_set_map_points whether the map points is set or not not set none is_set_sensor_points whether the sensor points is set or not not set none best_particle_score the best score of particle none none is_succeed_service whether the process of service is succeed or not failed none

\u203b This diagnostic is only published when the service is called, so it becomes stale after the initial pose estimation is completed.

"},{"location":"localization/autoware_ndt_scan_matcher/#map_update_status","title":"map_update_status","text":"Name Description Transition condition to Warning Transition condition to Error timer_callback_time_stamp the time stamp of timer_callback calling none none is_activated whether the node is in the \"activate\" state or not not \"activate\" state none is_set_last_update_position whether the last_update_position is set or not not set none distance_last_update_position_to_current_position the distance of last_update_position to current position none (the distance + dynamic_map_loading.lidar_radius) is larger than dynamic_map_loading.map_radius is_need_rebuild whether it need to rebuild the map. If the map has not been loaded yet or if distance_last_update_position_to_current_position encounters is an Error state, it is considered necessary to reconstruct the map, and is_need_rebuild becomes True. none none maps_size_before the number of maps before update map none none is_succeed_call_pcd_loader whether call pcd_loader service is succeed or not failed none maps_to_add_size the number of maps to be added none none maps_to_remove_size the number of maps to be removed none none map_update_execution_time the time for map updating none none maps_size_after the number of maps after update map none none is_updated_map whether map is updated. If the map update couldn't be performed or there was no need to update the map, it becomes False none is_updated_map is False but is_need_rebuild is True"},{"location":"localization/autoware_ndt_scan_matcher/include/autoware/ndt_scan_matcher/ndt_omp/","title":"ndt_omp","text":""},{"location":"localization/autoware_ndt_scan_matcher/include/autoware/ndt_scan_matcher/ndt_omp/#ndt_omp","title":"ndt_omp","text":"cspell:ignore Kenji, Koide <-->

The code in this directory is derived from https://github.com/tier4/ndt_omp, which is a fork of https://github.com/koide3/ndt_omp.

We use this code in accordance with the LICENSE, and we have directly confirmed this with Dr. Kenji Koide.

We sincerely appreciate Dr. Koide\u2019s support and contributions.

"},{"location":"localization/autoware_pose2twist/","title":"autoware_pose2twist","text":""},{"location":"localization/autoware_pose2twist/#autoware_pose2twist","title":"autoware_pose2twist","text":""},{"location":"localization/autoware_pose2twist/#purpose","title":"Purpose","text":"

This autoware_pose2twist node calculates the velocity from the input pose history. In addition to the computed twist, it outputs the linear-x and angular-z components as float messages to simplify debugging.

The twist.linear.x is calculated as sqrt(dx * dx + dy * dy + dz * dz) / dt, and the values in the y and z fields are zero. The twist.angular is calculated as relative_rotation_vector / dt for each field.
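A minimal sketch of this calculation, assuming plain stand-in types for the ROS messages (all names below are illustrative, not the node's actual symbols):

```cpp
#include <cmath>

// Stand-ins for the geometry_msgs types (illustrative only).
struct Quat { double w, x, y, z; };
struct Pose { double x, y, z; Quat q; };
struct Twist { double linear_x, angular_x, angular_y, angular_z; };

// Relative rotation q_rel = conj(q_prev) * q_curr (Hamilton product).
Quat relative(const Quat & a, const Quat & b)
{
  const Quat c{a.w, -a.x, -a.y, -a.z};  // conjugate of a
  return {
    c.w * b.w - c.x * b.x - c.y * b.y - c.z * b.z,
    c.w * b.x + c.x * b.w + c.y * b.z - c.z * b.y,
    c.w * b.y - c.x * b.z + c.y * b.w + c.z * b.x,
    c.w * b.z + c.x * b.y - c.y * b.x + c.z * b.w};
}

Twist pose_to_twist(const Pose & prev, const Pose & curr, double dt)
{
  Twist t{};
  // twist.linear.x = sqrt(dx^2 + dy^2 + dz^2) / dt; linear.y and linear.z stay zero.
  const double dx = curr.x - prev.x, dy = curr.y - prev.y, dz = curr.z - prev.z;
  t.linear_x = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;

  // twist.angular = relative_rotation_vector / dt (axis * angle of q_rel).
  const Quat r = relative(prev.q, curr.q);
  const double sin_half = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
  if (sin_half > 1e-9) {
    const double angle = 2.0 * std::atan2(sin_half, r.w);
    t.angular_x = angle * r.x / sin_half / dt;
    t.angular_y = angle * r.y / sin_half / dt;
    t.angular_z = angle * r.z / sin_half / dt;
  }
  return t;
}
```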

"},{"location":"localization/autoware_pose2twist/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_pose2twist/#input","title":"Input","text":"Name Type Description pose geometry_msgs::msg::PoseStamped pose source to used for the velocity calculation."},{"location":"localization/autoware_pose2twist/#output","title":"Output","text":"Name Type Description twist geometry_msgs::msg::TwistStamped twist calculated from the input pose history. linear_x autoware_internal_debug_msgs::msg::Float32Stamped linear-x field of the output twist. angular_z autoware_internal_debug_msgs::msg::Float32Stamped angular-z field of the output twist."},{"location":"localization/autoware_pose2twist/#parameters","title":"Parameters","text":"

none.

"},{"location":"localization/autoware_pose2twist/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

none.

"},{"location":"localization/autoware_pose_covariance_modifier/","title":"Autoware Pose Covariance Modifier Node","text":""},{"location":"localization/autoware_pose_covariance_modifier/#autoware-pose-covariance-modifier-node","title":"Autoware Pose Covariance Modifier Node","text":""},{"location":"localization/autoware_pose_covariance_modifier/#purpose","title":"Purpose","text":"

This package makes it possible to use GNSS and NDT poses together in real-time localization.

"},{"location":"localization/autoware_pose_covariance_modifier/#function","title":"Function","text":"

This package takes in GNSS (Global Navigation Satellite System) and NDT (Normal Distribution Transform) poses with covariances.

It outputs a single pose with covariance:

"},{"location":"localization/autoware_pose_covariance_modifier/#assumptions","title":"Assumptions","text":""},{"location":"localization/autoware_pose_covariance_modifier/#requirements","title":"Requirements","text":""},{"location":"localization/autoware_pose_covariance_modifier/#description","title":"Description","text":"

GNSS and NDT nodes provide the pose with covariance data utilized in an Extended Kalman Filter (EKF).

Accurate covariance values are crucial for the effectiveness of the EKF in estimating the state.

The GNSS system generates reliable standard deviation values, which can be transformed into covariance measures.

However, we currently don't have a reliable way to determine the covariance values for the NDT poses, and the NDT matching system in Autoware outputs poses with preset covariance values.

For this reason, this package is designed to manage the selection of the pose source, based on the standard deviation values provided by the GNSS system.

It also tunes the covariance values of the NDT poses, based on the GNSS standard deviation values.

"},{"location":"localization/autoware_pose_covariance_modifier/#flowcharts","title":"Flowcharts","text":""},{"location":"localization/autoware_pose_covariance_modifier/#without-this-package","title":"Without this package","text":"

Only the NDT pose is used in localization; the GNSS pose is used only for initialization.

graph TD\n    ndt_scan_matcher[\"ndt_scan_matcher\"] --> |\"/localization/pose_estimator/pose_with_covariance\"| ekf_localizer[\"ekf_localizer\"]\n\nclassDef cl_node fill:#FFF2CC,stroke-width:3px,stroke:#D6B656;\n\nclass ndt_scan_matcher cl_node;\nclass ekf_localizer cl_node;
"},{"location":"localization/autoware_pose_covariance_modifier/#with-this-package","title":"With this package","text":"

Both NDT and GNSS poses are used in localization, depending on the standard deviation values coming from the GNSS system.

Here is a flowchart depicting the process and the predefined thresholds:

graph TD\n    gnss_poser[\"gnss_poser\"] --> |\"/sensing/gnss/<br/>pose_with_covariance\"| pose_covariance_modifier_node\n    ndt_scan_matcher[\"ndt_scan_matcher\"] --> |\"/localization/pose_estimator/ndt_scan_matcher/<br/>pose_with_covariance\"| pose_covariance_modifier_node\n\n    subgraph pose_covariance_modifier_node [\"Pose Covariance Modifier Node\"]\n        pc1{\"gnss_pose_yaw<br/>stddev\"}\n        pc1 -->|\"<= 0.3 rad\"| pc2{\"gnss_pose_z<br/>stddev\"}\n        pc2 -->|\"<= 0.1 m\"| pc3{\"gnss_pose_xy<br/>stddev\"}\n        pc2 -->|\"&gt; 0.1 m\"| ndt_pose(\"NDT Pose\")\n        pc3 -->|\"<= 0.1 m\"| gnss_pose(\"GNSS Pose\")\n        pc3 -->|\"0.1 m < x <= 0.2 m\"| gnss_ndt_pose(\"`Both GNSS and NDT Pose\n        (_with modified covariance_)`\")\n        pc3 -->|\"&gt; 0.2 m\"| ndt_pose\n        pc1 -->|\"&gt; 0.3 rad\"| ndt_pose\n    end\n\n    pose_covariance_modifier_node -->|\"/localization/pose_estimator/pose_with_covariance\"| ekf_localizer[\"ekf_localizer\"]\n\nclassDef cl_node fill:#FFF2CC,stroke-width:3px,stroke:#D6B656;\nclassDef cl_conditional fill:#FFE6CC,stroke-width:3px,stroke:#D79B00;\nclassDef cl_output fill:#D5E8D4,stroke-width:3px,stroke:#82B366;\n\nclass gnss_poser cl_node;\nclass ndt_scan_matcher cl_node;\nclass ekf_localizer cl_node;\nclass pose_covariance_modifier_node cl_node;\n\nclass pc1 cl_conditional;\nclass pc2 cl_conditional;\nclass pc3 cl_conditional;\n\nclass ndt_pose cl_output;\nclass gnss_pose cl_output;\nclass gnss_ndt_pose cl_output;
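The branching in the flowchart above can be sketched as a plain selection function; the thresholds are the defaults shown in the diagram, and the enum and function names are illustrative:

```cpp
enum class PoseSource { NdtOnly, GnssOnly, GnssAndNdt };

// Illustrative sketch of the flowchart's decision order; not the node's actual code.
PoseSource select_pose_source(
  double gnss_yaw_stddev_rad, double gnss_z_stddev_m, double gnss_xy_stddev_m)
{
  if (gnss_yaw_stddev_rad > 0.3) return PoseSource::NdtOnly;   // yaw too uncertain
  if (gnss_z_stddev_m > 0.1) return PoseSource::NdtOnly;       // z too uncertain
  if (gnss_xy_stddev_m <= 0.1) return PoseSource::GnssOnly;    // GNSS xy is precise
  if (gnss_xy_stddev_m <= 0.2) return PoseSource::GnssAndNdt;  // blend both sources
  return PoseSource::NdtOnly;                                  // GNSS xy too uncertain
}
```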
"},{"location":"localization/autoware_pose_covariance_modifier/#how-to-use-this-package","title":"How to use this package","text":"

This package is disabled by default in Autoware; you need to enable it manually.

To enable this package, you need to change the use_autoware_pose_covariance_modifier parameter to true within the pose_twist_estimator.launch.xml.

"},{"location":"localization/autoware_pose_covariance_modifier/#without-this-condition-default","title":"Without this condition (default)","text":""},{"location":"localization/autoware_pose_covariance_modifier/#with-this-condition","title":"With this condition","text":""},{"location":"localization/autoware_pose_covariance_modifier/#node","title":"Node","text":""},{"location":"localization/autoware_pose_covariance_modifier/#subscribed-topics","title":"Subscribed topics","text":"Name Type Description input_gnss_pose_with_cov_topic geometry_msgs::msg::PoseWithCovarianceStamped Input GNSS pose topic. input_ndt_pose_with_cov_topic geometry_msgs::msg::PoseWithCovarianceStamped Input NDT pose topic."},{"location":"localization/autoware_pose_covariance_modifier/#published-topics","title":"Published topics","text":"Name Type Description output_pose_with_covariance_topic geometry_msgs::msg::PoseWithCovarianceStamped Output pose topic. This topic is used by the ekf_localizer package. selected_pose_type std_msgs::msg::String Declares which pose sources are used in the output of this package output/ndt_position_stddev std_msgs::msg::Float64 Output pose ndt average standard deviation in position xy. It is published only when the enable_debug_topics is true. output/gnss_position_stddev std_msgs::msg::Float64 Output pose gnss average standard deviation in position xy. It is published only when the enable_debug_topics is true."},{"location":"localization/autoware_pose_covariance_modifier/#parameters","title":"Parameters","text":"

The parameters are set in config/pose_covariance_modifier.param.yaml.

Name Type Description Default Range threshold_gnss_stddev_yaw_deg_max float If GNSS yaw standard deviation values are larger than this, trust only NDT 0.3 N/A threshold_gnss_stddev_z_max float If GNSS position Z standard deviation values are larger than this, trust only NDT 0.1 N/A threshold_gnss_stddev_xy_bound_lower float If GNSS position XY standard deviation values are lower than this, trust only GNSS 0.1 N/A threshold_gnss_stddev_xy_bound_upper float If GNSS position XY standard deviation values are higher than this, trust only NDT 0.25 N/A ndt_std_dev_bound_lower float Lower bound value for standard deviation of NDT positions (x, y, z) when used with GNSS 0.15 N/A ndt_std_dev_bound_upper float Upper bound value for standard deviation of NDT positions (x, y, z) when used with GNSS 0.3 N/A gnss_pose_timeout_sec float If GNSS data is not received for this duration, trust only NDT 1 N/A enable_debug_topics boolean Publish additional debug topics 1 N/A"},{"location":"localization/autoware_pose_covariance_modifier/#faq","title":"FAQ","text":""},{"location":"localization/autoware_pose_covariance_modifier/#how-are-varying-frequency-rates-handled","title":"How are varying frequency rates handled?","text":"

The GNSS and NDT pose topics may have different frequencies, and the GNSS pose topic may have a higher frequency than the NDT one.

Let's assume that the inputs have the following frequencies:

Source Frequency GNSS 200 Hz NDT 10 Hz

This package publishes the output poses as they come in, depending on the mode.

End result:

Mode Output Freq GNSS Only 200 Hz GNSS + NDT 210 Hz NDT Only 10 Hz"},{"location":"localization/autoware_pose_covariance_modifier/#how-and-when-are-the-ndt-covariance-values-overwritten","title":"How and when are the NDT covariance values overwritten?","text":"Mode Outputs, Covariance GNSS Only GNSS, Unmodified GNSS + NDT GNSS: Unmodified, NDT: Interpolated NDT Only NDT, Unmodified

NDT covariance values are overwritten only in the GNSS + NDT mode.

This enables a smooth transition between GNSS Only and NDT Only modes.

In this mode, both NDT and GNSS poses are published from this node.

"},{"location":"localization/autoware_pose_covariance_modifier/#ndt-covariance-calculation","title":"NDT covariance calculation","text":"

As the gnss_std_dev increases within its bounds, ndt_std_dev should proportionally decrease within its own bounds.

To achieve this, we first linearly interpolate:
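A minimal sketch of this interpolation, assuming the parameter names from the table above and clamping the GNSS standard deviation to its bounds (the function name is illustrative):

```cpp
#include <algorithm>

// Illustrative sketch: map gnss_std_dev within
// [threshold_gnss_stddev_xy_bound_lower, threshold_gnss_stddev_xy_bound_upper]
// to an ndt_std_dev within [ndt_std_dev_bound_lower, ndt_std_dev_bound_upper],
// decreasing linearly as the GNSS standard deviation grows.
double interpolate_ndt_std_dev(
  double gnss_std_dev,
  double gnss_bound_lower, double gnss_bound_upper,  // e.g. 0.1, 0.25
  double ndt_bound_lower, double ndt_bound_upper)    // e.g. 0.15, 0.3
{
  // Normalized position of the GNSS stddev within its bounds, clamped to [0, 1].
  const double t = std::clamp(
    (gnss_std_dev - gnss_bound_lower) / (gnss_bound_upper - gnss_bound_lower), 0.0, 1.0);
  // As GNSS gets worse (t -> 1), NDT is trusted more, so its stddev shrinks.
  return ndt_bound_upper + t * (ndt_bound_lower - ndt_bound_upper);
}
```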

"},{"location":"localization/autoware_pose_estimator_arbiter/","title":"autoware_pose_estimator_arbiter","text":""},{"location":"localization/autoware_pose_estimator_arbiter/#autoware_pose_estimator_arbiter","title":"autoware_pose_estimator_arbiter","text":"

Table of contents:

"},{"location":"localization/autoware_pose_estimator_arbiter/#abstract","title":"Abstract","text":"

This package launches multiple pose estimators and provides the capability to stop or resume specific pose estimators based on the situation. It provides provisional switching rules and will be adaptable to a wide variety of rules in the future.

Please refer to this discussion for other ideas on implementation.

"},{"location":"localization/autoware_pose_estimator_arbiter/#why-do-we-need-a-stopresume-mechanism","title":"Why do we need a stop/resume mechanism?","text":"

It is possible to launch multiple pose_estimators and fuse them using a Kalman filter by editing launch files. However, this approach is not preferable due to computational costs.

Particularly, NDT and YabLoc are computationally intensive, and it's not recommended to run them simultaneously. Also, even if both can be activated at the same time, the Kalman filter may be adversely affected when one of them produces a bad output.

[!NOTE] Currently, there is ONLY ONE RULE implemented that always enables all pose_estimators. If users want to toggle pose_estimators with their own rules, they need to add new rules by referring to example_rule. The example_rule has source code that can be used as a reference for implementing the rules.

"},{"location":"localization/autoware_pose_estimator_arbiter/#supporting-pose_estimators","title":"Supporting pose_estimators","text":""},{"location":"localization/autoware_pose_estimator_arbiter/#demonstration","title":"Demonstration","text":"

The following video demonstrates the switching of four different pose estimators.

Users can reproduce the demonstration using the following data and launch command:

sample data (rosbag & map) The rosbag is simulated data created by AWSIM. The map is an edited version of the original map data published on the AWSIM documentation page to make it suitable for multiple pose_estimators.

ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=<your-map-path> \\\nvehicle_model:=sample_vehicle \\\nsensor_model:=awsim_sensor_kit \\\npose_source:=ndt_yabloc_artag_eagleye\n
"},{"location":"localization/autoware_pose_estimator_arbiter/#interfaces","title":"Interfaces","text":"Click to show details ### Parameters There are no parameters. ### Services | Name | Type | Description | | ---------------- | ------------------------------- | ------------------------------- | | `/config_logger` | logging_demo::srv::ConfigLogger | service to modify logging level | ### Clients | Name | Type | Description | | --------------------- | --------------------- | --------------------------------- | | `/yabloc_suspend_srv` | std_srv::srv::SetBool | service to stop or restart yabloc | ### Subscriptions For pose estimator arbitration: | Name | Type | Description | | ------------------------------------- | --------------------------------------------- | -------------- | | `/input/artag/image` | sensor_msgs::msg::Image | ArTag input | | `/input/yabloc/image` | sensor_msgs::msg::Image | YabLoc input | | `/input/eagleye/pose_with_covariance` | geometry_msgs::msg::PoseWithCovarianceStamped | Eagleye output | | `/input/ndt/pointcloud` | sensor_msgs::msg::PointCloud2 | NDT input | For switching rule: | Name | Type | Description | | ----------------------------- | ------------------------------------------------------------ | --------------------------------- | | `/input/vector_map` | autoware_map_msgs::msg::LaneletMapBin | vector map | | `/input/pose_with_covariance` | geometry_msgs::msg::PoseWithCovarianceStamped | localization final output | | `/input/initialization_state` | autoware_adapi_v1_msgs::msg::LocalizationInitializationState | localization initialization state | ### Publications | Name | Type | Description | | -------------------------------------- | --------------------------------------------- | ------------------------------------------------------ | | `/output/artag/image` | sensor_msgs::msg::Image | relayed ArTag input | | `/output/yabloc/image` | sensor_msgs::msg::Image | relayed YabLoc input | | `/output/eagleye/pose_with_covariance` | geometry_msgs::msg::PoseWithCovarianceStamped | relayed Eagleye output | | `/output/ndt/pointcloud` | sensor_msgs::msg::PointCloud2 | relayed NDT input | | `/output/debug/marker_array` | visualization_msgs::msg::MarkerArray | [debug topic] everything for visualization | | `/output/debug/string` | visualization_msgs::msg::MarkerArray | [debug topic] debug information such as current status |"},{"location":"localization/autoware_pose_estimator_arbiter/#trouble-shooting","title":"Trouble Shooting","text":"

If it does not seem to work, users can get more information in the following ways.

[!TIP]

ros2 service call /localization/autoware_pose_estimator_arbiter/config_logger logging_demo/srv/ConfigLogger \\\n'{logger_name: localization.autoware_pose_estimator_arbiter, level: debug}'\n
"},{"location":"localization/autoware_pose_estimator_arbiter/#architecture","title":"Architecture","text":"Click to show details ### Case of running a single pose estimator When each pose_estimator is run alone, this package does nothing. Following figure shows the node configuration when NDT, YabLoc Eagleye and AR-Tag are run independently. ### Case of running multiple pose estimators When running multiple pose_estimators, autoware_pose_estimator_arbiter is executed. It comprises a **switching rule** and **stoppers** corresponding to each pose_estimator. - Stoppers control the pose_estimator activity by relaying inputs or outputs, or by requesting a suspend service. - Switching rules determine which pose_estimator to use. Which stoppers and switching rules are instantiated depends on the runtime arguments at startup. Following figure shows the node configuration when all pose_estimator are run simultaneously. - **NDT** The NDT stopper relays topics in the front side of the point cloud pre-processor. - **YabLoc** The YabLoc stopper relays input image topics in the frontend of the image pre-processor. YabLoc includes a particle filter process that operates on a timer, and even when image topics are not streamed, the particle prediction process continues to work. To address this, the YabLoc stopper also has a service client for explicitly stopping and resuming YabLoc. - **Eagleye** The Eagleye stopper relays Eagleye's output pose topics in the backend of Eagleye's estimation process. Eagleye performs time-series processing internally, and it can't afford to stop the input stream. Furthermore, Eagleye's estimation process is lightweight enough to be run continuously without a significant load, so the relay is inserted in the backend. - **ArTag** The ArTag stopper relays image topics in the front side of the landmark localizer."},{"location":"localization/autoware_pose_estimator_arbiter/#how-to-launch","title":"How to launch","text":"Click to show details The user can launch the desired pose_estimators by giving the pose_estimator names as a concatenation of underscores for the runtime argument `pose_source`.
ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=<your-map-path> \\\nvehicle_model:=sample_vehicle \\\nsensor_model:=awsim_sensor_kit \\\npose_source:=ndt_yabloc_artag_eagleye\n
Even if `pose_source` includes an unexpected string, it will be filtered appropriately. Please see the table below for details. | given runtime argument | parsed autoware_pose_estimator_arbiter's param (pose_sources) | | ------------------------------------------- | ------------------------------------------------------------- | | `pose_source:=ndt` | `[\"ndt\"]` | | `pose_source:=nan` | `[]` | | `pose_source:=yabloc_ndt` | `[\"ndt\",\"yabloc\"]` | | `pose_source:=yabloc_ndt_ndt_ndt` | `[\"ndt\",\"yabloc\"]` | | `pose_source:=ndt_yabloc_eagleye` | `[\"ndt\",\"yabloc\",\"eagleye\"]` | | `pose_source:=ndt_yabloc_nan_eagleye_artag` | `[\"ndt\",\"yabloc\",\"eagleye\",\"artag\"]` |"},{"location":"localization/autoware_pose_estimator_arbiter/#switching-rules","title":"Switching Rules","text":"Click to show details Currently, **ONLY ONE RULE** (`enable_all_rule`) is implemented. In the future, several rules will be implemented and users will be able to select rules. > [!TIP] > There are presets available to extend the rules. If you want to extend the rules, please see [example_rule](./example_rule/README.md). ### Enable All Rule This is the default and simplest rule. This rule enables all pose_estimators regardless of their current state.
flowchart LR\n  A{ }\n  A --whatever --> _A[enable all pose_estimators]
"},{"location":"localization/autoware_pose_estimator_arbiter/#pose-initialization","title":"Pose Initialization","text":"

When using multiple pose_estimators, it is necessary to appropriately adjust the parameters provided to the pose_initializer.

Click to show details The following table is based on the runtime argument \"pose_source\" indicating which initial pose estimation methods are available and the parameters that should be provided to the pose_initialization node. To avoid making the application too complicated, a priority is established so that NDT is always used when it is available. (When `ndt_enabled` and `yabloc_enabled` are both `true`, the pose_initializer performs NDT-based initial pose estimation.) This table's usage is described from three perspectives: - **Autoware Users:** Autoware users do not need to consult this table. They simply provide the desired combinations of pose_estimators, and the appropriate parameters are automatically provided to the pose_initializer. - **Autoware Developers:** Autoware developers can consult this table to know which parameters are assigned. - **Those who implement New Pose Estimator Switching:** Developers must extend this table and implement the assignment of appropriate parameters to the pose_initializer. | pose_source | invoked initialization method | `ndt_enabled` | `yabloc_enabled` | `gnss_enabled` | `sub_gnss_pose_cov` | | :-------------------------: | ----------------------------- | ------------- | ---------------- | -------------- | -------------------------------------------- | | ndt | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | yabloc | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | eagleye | vehicle needs to run for a while | false | false | true | /localization/pose_estimator/eagleye/... | | artag | 2D Pose Estimate (RViz) | false | false | true | /sensing/gnss/pose_with_covariance | | ndt, yabloc | ndt | true | true | true | /sensing/gnss/pose_with_covariance | | ndt, eagleye | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | ndt, artag | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | yabloc, eagleye | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | yabloc, artag | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | eagleye, artag | vehicle needs to run for a while | false | false | true | /localization/pose_estimator/eagleye/pose... | | ndt, yabloc, eagleye | ndt | true | true | true | /sensing/gnss/pose_with_covariance | | ndt, eagleye, artag | ndt | true | false | true | /sensing/gnss/pose_with_covariance | | yabloc, eagleye, artag | yabloc | false | true | true | /sensing/gnss/pose_with_covariance | | ndt, yabloc, eagleye, artag | ndt | true | true | true | /sensing/gnss/pose_with_covariance |"},{"location":"localization/autoware_pose_estimator_arbiter/#future-plans","title":"Future Plans","text":"Click to show details ### gradually switching In the future, this package will provide not only ON/OFF switching, but also a mechanism for low frequency operation, such as 50% NDT & 50% YabLoc. ### stopper for pose_estimators to be added in the future The basic strategy is to realize ON/OFF switching by relaying the input or output topics of that pose_estimator. If a pose_estimator involves time-series processing with heavy computations, it's not possible to pause and resume with just topic relaying. In such cases, there may not be generally applicable solutions, but the following methods may help: 1. Completely stop and **reinitialize** time-series processing, as seen in the case of YabLoc. 2. 
Subscribe to `localization/kinematic_state` and **keep updating states** to ensure that the estimation does not break (relying on the output of the active pose_estimator). 3. The multiple pose_estimator framework **does not support** that particular pose_estimator. Please note that this issue is fundamental to realizing multiple pose_estimators, and it will arise regardless of the architecture proposed in this case."},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/","title":"example rule","text":""},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#example-rule","title":"example rule","text":"

The example rule provides a sample rule for controlling the arbiter. By combining the provided rules, it is possible to achieve demonstrations such as the one below. Users can extend the rules as needed by referencing this code, allowing them to control the arbiter as desired.

"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#demonstration","title":"Demonstration","text":"

The following video demonstrates the switching of four different pose estimators.

"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#switching-rules","title":"Switching Rules","text":""},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#pcd-map-based-rule","title":"Pcd Map Based Rule","text":"
flowchart LR\n  A{PCD is dense enough}\n  A --true--> B[enable NDT]\n  A --false--> C[enable YabLoc]
"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#vector-map-based-rule","title":"Vector Map Based Rule","text":"
flowchart LR\n  A{ }\n  A --whatever --> _A[When the ego vehicle is in a predetermined pose_estimator_area,\n it enables the corresponding pose_estimator.]
"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#rule-helpers","title":"Rule helpers","text":"

Rule helpers are auxiliary tools for describing switching rules.

"},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#pcd-occupancy","title":"PCD occupancy","text":""},{"location":"localization/autoware_pose_estimator_arbiter/example_rule/#pose-estimator-area","title":"Pose estimator area","text":"

The pose_estimator_area is a planar area described by a polygon in lanelet2. The height of the area is ignored; the rule judges whether the projection of the ego position is contained within the polygon or not.

A sample pose_estimator_area is shown below. The values provided below are placeholders. To be correctly read, the area should have the type \"pose_estimator_specify\" and the subtype should be one of ndt, yabloc, eagleye, or artag.

  <node id=\"1\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"10.0\"/>\n<tag k=\"local_y\" v=\"10.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n<node id=\"2\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"10.0\"/>\n<tag k=\"local_y\" v=\"20.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n<node id=\"3\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"20.0\"/>\n<tag k=\"local_y\" v=\"20.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n<node id=\"4\" lat=\"35.8xxxxx\" lon=\"139.6xxxxx\">\n<tag k=\"mgrs_code\" v=\"54SUE000000\"/>\n<tag k=\"local_x\" v=\"10.0\"/>\n<tag k=\"local_y\" v=\"20.0\"/>\n<tag k=\"ele\" v=\"1.0\"/>\n</node>\n\n...\n\n  <way id=\"5\">\n<nd ref=\"1\"/>\n<nd ref=\"2\"/>\n<nd ref=\"3\"/>\n<nd ref=\"4\"/>\n<tag k=\"type\" v=\"pose_estimator_specify\"/>\n<tag k=\"subtype\" v=\"eagleye\"/>\n<tag k=\"area\" v=\"yes\"/>\n</way>\n\n<way id=\"6\">\n<nd ref=\"7\"/>\n<nd ref=\"8\"/>\n<nd ref=\"9\"/>\n<nd ref=\"10\"/>\n<tag k=\"type\" v=\"pose_estimator_specify\"/>\n<tag k=\"subtype\" v=\"yabloc\"/>\n<tag k=\"area\" v=\"yes\"/>\n</way>\n
"},{"location":"localization/autoware_pose_initializer/","title":"autoware_pose_initializer","text":""},{"location":"localization/autoware_pose_initializer/#autoware_pose_initializer","title":"autoware_pose_initializer","text":""},{"location":"localization/autoware_pose_initializer/#purpose","title":"Purpose","text":"

The autoware_pose_initializer is the package that sends an initial pose to ekf_localizer. It receives a roughly estimated initial pose from GNSS or the user, passes the pose to ndt_scan_matcher, and obtains a calculated ego pose from ndt_scan_matcher via a service. Finally, it publishes the initial pose to ekf_localizer. This node depends on the map height fitter library. See here for more details.

"},{"location":"localization/autoware_pose_initializer/#interfaces","title":"Interfaces","text":""},{"location":"localization/autoware_pose_initializer/#parameters","title":"Parameters","text":"Name Type Description Default Range user_defined_initial_pose.enable string If true, user_defined_initial_pose.pose is set as the initial position. [boolean] false N/A user_defined_initial_pose.pose string initial pose (x, y, z, quat_x, quat_y, quat_z, quat_w). [array] [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0] N/A gnss_pose_timeout float The duration that the GNSS pose is valid. [sec] 3.0 \u22650.0 pose_error_check_enabled boolean If true, check error between initial pose result and GNSS pose. false N/A pose_error_threshold float The error threshold between GNSS and initial pose 5.0 \u22650.0 stop_check_enabled string If true, initialization is accepted only when the vehicle is stopped. true N/A stop_check_duration float The duration used for the stop check above. [sec] 3.0 \u22650.0 ekf_enabled string If true, EKF localizer is activated. true N/A gnss_enabled string If true, use the GNSS pose when no pose is specified. true N/A yabloc_enabled string If true, YabLocModule is used. true N/A ndt_enabled string If true, the pose will be estimated by NDT scan matcher, otherwise it is passed through. true N/A gnss_particle_covariance array gnss particle covariance [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.0] N/A output_pose_covariance array output pose covariance [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2] N/A"},{"location":"localization/autoware_pose_initializer/#services","title":"Services","text":"Name Type Description /localization/initialize tier4_localization_msgs::srv::InitializeLocalization initial pose from api"},{"location":"localization/autoware_pose_initializer/#clients","title":"Clients","text":"Name Type Description /localization/pose_estimator/ndt_align_srv tier4_localization_msgs::srv::PoseWithCovarianceStamped pose estimation service"},{"location":"localization/autoware_pose_initializer/#subscriptions","title":"Subscriptions","text":"Name Type Description /sensing/gnss/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped pose from gnss /sensing/vehicle_velocity_converter/twist_with_covariance geometry_msgs::msg::TwistStamped twist for stop check"},{"location":"localization/autoware_pose_initializer/#publications","title":"Publications","text":"Name Type Description /localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState pose initialization state /initialpose3d geometry_msgs::msg::PoseWithCovarianceStamped calculated initial ego pose /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics"},{"location":"localization/autoware_pose_initializer/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/autoware_pose_initializer/#pose_initializer_status","title":"pose_initializer_status","text":"

If the score of the initial pose estimation result is lower than the score threshold, an ERROR message is output to the /diagnostics topic.

"},{"location":"localization/autoware_pose_initializer/#connection-with-default-ad-api","title":"Connection with Default AD API","text":"

This autoware_pose_initializer is used via the default AD API. For a detailed description of the API, please refer to the description of autoware_default_adapi.

"},{"location":"localization/autoware_pose_initializer/#initialize-pose-via-cli","title":"Initialize pose via CLI","text":""},{"location":"localization/autoware_pose_initializer/#using-the-gnss-estimated-position","title":"Using the GNSS estimated position","text":"
ros2 service call /localization/initialize tier4_localization_msgs/srv/InitializeLocalization\n

The GNSS estimated position is used as the initial guess, and the localization algorithm automatically estimates a more accurate position.

"},{"location":"localization/autoware_pose_initializer/#using-the-input-position","title":"Using the input position","text":"
ros2 service call /localization/initialize tier4_localization_msgs/srv/InitializeLocalization \"\npose_with_covariance:\n  - header:\n      frame_id: map\n    pose:\n      pose:\n        position:\n          x: 89571.1640625\n          y: 42301.1875\n          z: -3.1565165519714355\n        orientation:\n          x: 0.0\n          y: 0.0\n          z: 0.28072773940524687\n          w: 0.9597874433062874\n      covariance: [0.25, 0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06853891909122467]\nmethod: 0\n\"\n

The input position is used as the initial guess, and the localization algorithm automatically estimates a more accurate position.

"},{"location":"localization/autoware_pose_initializer/#direct-initial-position-set","title":"Direct initial position set","text":"
ros2 service call /localization/initialize tier4_localization_msgs/srv/InitializeLocalization \"\npose_with_covariance:\n  - header:\n      frame_id: map\n    pose:\n      pose:\n        position:\n          x: 89571.1640625\n          y: 42301.1875\n          z: -3.1565165519714355\n        orientation:\n          x: 0.0\n          y: 0.0\n          z: 0.28072773940524687\n          w: 0.9597874433062874\n      covariance: [0.25, 0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.06853891909122467]\nmethod: 1\n\"\n

The initial position is set directly by the input position without going through localization algorithm.

"},{"location":"localization/autoware_pose_initializer/#via-ros2-topic-pub","title":"Via ros2 topic pub","text":"
ros2 topic pub --once /initialpose geometry_msgs/msg/PoseWithCovarianceStamped \"\nheader:\n  frame_id: map\npose:\n  pose:\n    position:\n      x: 89571.1640625\n      y: 42301.1875\n      z: 0.0\n    orientation:\n      x: 0.0\n      y: 0.0\n      z: 0.28072773940524687\n      w: 0.9597874433062874\n\"\n

It behaves the same as \"initialpose (from rviz)\". The position.z and the covariance will be overwritten by ad_api_adaptors, so there is no need to input them.

"},{"location":"localization/autoware_pose_instability_detector/","title":"autoware_pose_instability_detector","text":""},{"location":"localization/autoware_pose_instability_detector/#autoware_pose_instability_detector","title":"autoware_pose_instability_detector","text":"

The pose_instability_detector is a node designed to monitor the stability of /localization/kinematic_state, which is an output topic of the Extended Kalman Filter (EKF).

This node triggers periodic timer callbacks to compare two poses: the pose calculated by dead reckoning from the twist values, and the latest pose from the EKF output.

The results of this comparison are then output to the /diagnostics topic.

If this node outputs WARN messages to /diagnostics, it means that the EKF output is significantly different from the integrated twist values. In other words, WARN outputs indicate that the vehicle has moved to a place outside the expected range based on the twist values. This discrepancy suggests that there may be an issue with either the estimated pose or the input twist.

The following diagram provides an overview of what the procedure looks like:

"},{"location":"localization/autoware_pose_instability_detector/#dead-reckoning-algorithm","title":"Dead reckoning algorithm","text":"

Dead reckoning is a method of estimating the position of a vehicle based on its previous position and velocity. The procedure for dead reckoning is as follows:

  1. Capture the necessary twist values from the /input/twist topic.
  2. Integrate the twist values to calculate the pose transition.
  3. Apply the pose transition to the previous pose to obtain the current pose (a minimal sketch of steps 2 and 3 is shown after this list).
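A minimal sketch of steps 2 and 3, assuming planar motion for brevity (the real node integrates in 3D; all names are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Twist2d { double vx, wz, dt; };  // longitudinal velocity, yaw rate, time step
struct Pose2d { double x, y, yaw; };

// Integrate each buffered twist sample and apply the transition to the pose.
Pose2d dead_reckon(Pose2d pose, const std::vector<Twist2d> & twists)
{
  for (const auto & t : twists) {
    // Step 2: integrate the twist into a pose transition (simple Euler step).
    pose.x += t.vx * std::cos(pose.yaw) * t.dt;
    pose.y += t.vx * std::sin(pose.yaw) * t.dt;
    // Step 3: apply the rotational part of the transition as well.
    pose.yaw += t.wz * t.dt;
  }
  return pose;
}
```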
"},{"location":"localization/autoware_pose_instability_detector/#collecting-twist-values","title":"Collecting twist values","text":"

The pose_instability_detector node collects the twist values from the ~/input/twist topic to perform dead reckoning. Ideally, pose_instability_detector needs the twist values between the previous pose and the current pose. Therefore, pose_instability_detector snips the twist buffer and applies interpolation and extrapolation to obtain the twist values at the desired times.
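A minimal sketch of obtaining a twist value at an arbitrary time stamp from two buffered samples; a ratio outside [0, 1] corresponds to extrapolation (names are illustrative):

```cpp
struct StampedTwist { double stamp, vx, wz; };  // time [s], velocity, yaw rate

// Linearly interpolate (or extrapolate) between samples a and b at time t.
StampedTwist twist_at(const StampedTwist & a, const StampedTwist & b, double t)
{
  const double r = (t - a.stamp) / (b.stamp - a.stamp);  // r < 0 or r > 1 extrapolates
  return {t, a.vx + r * (b.vx - a.vx), a.wz + r * (b.wz - a.wz)};
}
```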

"},{"location":"localization/autoware_pose_instability_detector/#linear-transition-and-angular-transition","title":"Linear transition and angular transition","text":"

After the twist values are collected, the node calculates the linear transition and angular transition based on the twist values and adds them to the previous pose.

"},{"location":"localization/autoware_pose_instability_detector/#threshold-definition","title":"Threshold definition","text":"

The pose_instability_detector node compares the pose calculated by dead reckoning with the latest pose from the EKF output. These two poses are ideally the same, but in reality they differ due to errors in the twist values and the pose observation. If the two poses are so different that the absolute difference exceeds the threshold, the node outputs a WARN message to the /diagnostics topic. There are six thresholds (x, y, z, roll, pitch, and yaw) to determine whether the poses are significantly different, and these thresholds are defined in the following subsections.
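A minimal sketch of this comparison, taking the six thresholds as given (their derivations follow in the next subsections); all names and numbers are illustrative:

```cpp
#include <cmath>
#include <cstdio>

struct PoseDiff { double x, y, z, roll, pitch, yaw; };

// True when every component of the difference is within its threshold.
bool is_within_thresholds(const PoseDiff & d, const PoseDiff & tau)
{
  return std::abs(d.x) <= tau.x && std::abs(d.y) <= tau.y && std::abs(d.z) <= tau.z &&
         std::abs(d.roll) <= tau.roll && std::abs(d.pitch) <= tau.pitch &&
         std::abs(d.yaw) <= tau.yaw;
}

int main()
{
  const PoseDiff diff{0.20, 0.05, 0.01, 0.001, 0.001, 0.02};  // dead reckoning vs. EKF
  const PoseDiff tau{0.36, 0.30, 0.30, 0.03, 0.03, 0.03};     // example thresholds only
  if (!is_within_thresholds(diff, tau)) {
    std::puts("WARN: EKF pose deviates from dead reckoning beyond the tolerance");
  }
  return 0;
}
```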

"},{"location":"localization/autoware_pose_instability_detector/#diff_position_x","title":"diff_position_x","text":"

This threshold examines the difference in the longitudinal axis between the two poses, and checks whether the vehicle goes beyond the expected error. This threshold is the sum of the "maximum longitudinal error due to velocity scale factor error" and the "pose estimation error tolerance".

\\[ \\tau_x = v_{\\rm max}\\frac{\\beta_v}{100} \\Delta t + \\epsilon_x\\\\ \\] Symbol Description Unit \\(\\tau_x\\) Threshold for the difference in the longitudinal axis \\(m\\) \\(v_{\\rm max}\\) Maximum velocity \\(m/s\\) \\(\\beta_v\\) Scale factor tolerance for the maximum velocity \\(\\%\\) \\(\\Delta t\\) Time interval \\(s\\) \\(\\epsilon_x\\) Pose estimator (e. g. ndt_scan_matcher) error tolerance in the longitudinal axis \\(m\\)"},{"location":"localization/autoware_pose_instability_detector/#diff_position_y-and-diff_position_z","title":"diff_position_y and diff_position_z","text":"

These thresholds examine the difference in the lateral and vertical axes between the two poses, and check whether the vehicle goes beyond the expected error. The pose_instability_detector calculates the possible range where the vehicle can go, and gets the maximum difference between the nominal dead reckoning pose and the maximum limit pose.

In addition to this, the pose_instability_detector node considers the pose estimation error tolerance to determine the threshold.

\\[ \\tau_y = l + \\epsilon_y \\] Symbol Description Unit \\(\\tau_y\\) Threshold for the difference in the lateral axis \\(m\\) \\(l\\) Maximum lateral distance described in the image above (See the appendix how this is calculated) \\(m\\) \\(\\epsilon_y\\) Pose estimator (e. g. ndt_scan_matcher) error tolerance in the lateral axis \\(m\\)

Note that pose_instability_detector sets the threshold for the vertical axis to the same value as that for the lateral axis; only the pose estimator error tolerance differs.

"},{"location":"localization/autoware_pose_instability_detector/#diff_angle_x-diff_angle_y-and-diff_angle_z","title":"diff_angle_x, diff_angle_y, and diff_angle_z","text":"

These thresholds examine the differences in the roll, pitch, and yaw angles between the two poses. Each threshold is the sum of the "maximum angular error due to velocity scale factor error and bias error" and the "pose estimation error tolerance".

\\[ \\tau_\\phi = \\tau_\\theta = \\tau_\\psi = \\left(\\omega_{\\rm max}\\frac{\\beta_\\omega}{100} + b \\right) \\Delta t + \\epsilon_\\psi \\] Symbol Description Unit \\(\\tau_\\phi\\) Threshold for the difference in the roll angle \\({\\rm rad}\\) \\(\\tau_\\theta\\) Threshold for the difference in the pitch angle \\({\\rm rad}\\) \\(\\tau_\\psi\\) Threshold for the difference in the yaw angle \\({\\rm rad}\\) \\(\\omega_{\\rm max}\\) Maximum angular velocity \\({\\rm rad}/s\\) \\(\\beta_\\omega\\) Scale factor tolerance for the maximum angular velocity \\(\\%\\) \\(b\\) Bias tolerance of the angular velocity \\({\\rm rad}/s\\) \\(\\Delta t\\) Time interval \\(s\\) \\(\\epsilon_\\psi\\) Pose estimator (e. g. ndt_scan_matcher) error tolerance in the yaw angle \\({\\rm rad}\\)"},{"location":"localization/autoware_pose_instability_detector/#parameters","title":"Parameters","text":"Name Type Description Default Range timer_period float The period of timer_callback (sec). 0.5 >0 heading_velocity_maximum float The maximum of heading velocity (m/s). 16.667 \u22650.0 heading_velocity_scale_factor_tolerance float The tolerance of heading velocity scale factor (%). 3 \u22650.0 angular_velocity_maximum float The maximum of angular velocity (rad/s). 0.523 \u22650.0 angular_velocity_scale_factor_tolerance float The tolerance of angular velocity scale factor (%). 0.2 \u22650.0 angular_velocity_bias_tolerance float The tolerance of angular velocity bias (rad/s). 0.00698 \u22650.0 pose_estimator_longitudinal_tolerance float The tolerance of longitudinal position of pose estimator (m). 0.11 \u22650.0 pose_estimator_lateral_tolerance float The tolerance of lateral position of pose estimator (m). 0.11 \u22650.0 pose_estimator_vertical_tolerance float The tolerance of vertical position of pose estimator (m). 0.11 \u22650.0 pose_estimator_angular_tolerance float The tolerance of roll angle of pose estimator (rad). 0.0175 \u22650.0"},{"location":"localization/autoware_pose_instability_detector/#input","title":"Input","text":"Name Type Description ~/input/odometry nav_msgs::msg::Odometry Pose estimated by EKF ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped Twist"},{"location":"localization/autoware_pose_instability_detector/#output","title":"Output","text":"Name Type Description ~/debug/diff_pose geometry_msgs::msg::PoseStamped diff_pose /diagnostics diagnostic_msgs::msg::DiagnosticArray Diagnostics"},{"location":"localization/autoware_pose_instability_detector/#appendix","title":"Appendix","text":"

To calculate the maximum lateral distance \(l\), the pose_instability_detector node estimates the following poses.

Pose heading velocity \\(v\\) angular velocity \\(\\omega\\) Nominal dead reckoning pose \\(v_{\\rm max}\\) \\(\\omega_{\\rm max}\\) Dead reckoning pose of corner A \\(\\left(1+\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1+\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} + b\\) Dead reckoning pose of corner B \\(\\left(1-\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1+\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} + b\\) Dead reckoning pose of corner C \\(\\left(1-\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1-\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} - b\\) Dead reckoning pose of corner D \\(\\left(1+\\frac{\\beta_v}{100}\\right) v_{\\rm max}\\) \\(\\left(1-\\frac{\\beta_\\omega}{100}\\right) \\omega_{\\rm max} - b\\)

Given a heading velocity \(v\) and an angular velocity \(\omega\), the 2D theoretical variation seen from the previous pose is calculated as follows:

\\[ \\begin{align*} \\left[ \\begin{matrix} \\Delta x\\\\ \\Delta y \\end{matrix} \\right] &= \\left[ \\begin{matrix} \\int_{0}^{\\Delta t} v \\cos(\\omega t) dt\\\\ \\int_{0}^{\\Delta t} v \\sin(\\omega t) dt \\end{matrix} \\right] \\\\ &= \\left[ \\begin{matrix} \\frac{v}{\\omega} \\sin(\\omega \\Delta t)\\\\ \\frac{v}{\\omega} \\left(1 - \\cos(\\omega \\Delta t)\\right) \\end{matrix} \\right] \\end{align*} \\]

We calculate this variation for each corner and get the maximum value of the lateral distance \\(l\\) by comparing the distance between the nominal dead reckoning pose and the corner poses.
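A minimal sketch of this computation, evaluating the closed-form variation above for the nominal pose and the four corner poses (all names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

struct Variation { double x, y; };  // (dx, dy) seen from the previous pose

// Closed-form 2D variation for constant v and omega over dt (see the formula above).
Variation integrate(double v, double omega, double dt)
{
  if (std::abs(omega) < 1e-9) return {v * dt, 0.0};  // straight-line limit
  return {v / omega * std::sin(omega * dt), v / omega * (1.0 - std::cos(omega * dt))};
}

double max_lateral_distance(
  double v_max, double omega_max, double dt,
  double beta_v, double beta_omega, double bias)  // tolerances in %, %, rad/s
{
  const Variation nominal = integrate(v_max, omega_max, dt);
  const double v_hi = (1.0 + beta_v / 100.0) * v_max;
  const double v_lo = (1.0 - beta_v / 100.0) * v_max;
  const double w_hi = (1.0 + beta_omega / 100.0) * omega_max + bias;
  const double w_lo = (1.0 - beta_omega / 100.0) * omega_max - bias;

  const Variation corners[4] = {
    integrate(v_hi, w_hi, dt),   // corner A
    integrate(v_lo, w_hi, dt),   // corner B
    integrate(v_lo, w_lo, dt),   // corner C
    integrate(v_hi, w_lo, dt)};  // corner D

  double l = 0.0;
  for (const auto & c : corners) {
    // Distance between the nominal dead reckoning pose and each corner pose.
    l = std::max(l, std::hypot(c.x - nominal.x, c.y - nominal.y));
  }
  return l;
}
```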

"},{"location":"localization/autoware_stop_filter/","title":"stop_filter","text":""},{"location":"localization/autoware_stop_filter/#stop_filter","title":"stop_filter","text":""},{"location":"localization/autoware_stop_filter/#purpose","title":"Purpose","text":"

Before this function existed, each node used a different criterion to determine whether the vehicle was stopped, with the result that some nodes behaved as if the vehicle had stopped while others continued running in drive mode. This node aims to:

"},{"location":"localization/autoware_stop_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_stop_filter/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry"},{"location":"localization/autoware_stop_filter/#output","title":"Output","text":"Name Type Description output/odom nav_msgs::msg::Odometry odometry with suppressed longitudinal and yaw twist debug/stop_flag autoware_internal_debug_msgs::msg::BoolStamped flag to represent whether the vehicle is stopping or not"},{"location":"localization/autoware_stop_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range vx_threshold float Longitudinal velocity threshold to determine if the vehicle is stopping. [m/s] 0.01 \u22650.0 wz_threshold float Yaw velocity threshold to determine if the vehicle is stopping. [rad/s] 0.01 \u22650.0"},{"location":"localization/autoware_twist2accel/","title":"autoware_twist2accel","text":""},{"location":"localization/autoware_twist2accel/#autoware_twist2accel","title":"autoware_twist2accel","text":""},{"location":"localization/autoware_twist2accel/#purpose","title":"Purpose","text":"

This package is responsible for estimating acceleration using the output of ekf_localizer. It uses a low-pass filter to mitigate the noise.
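A minimal sketch of the estimate, assuming a first-order low-pass filter applied to the finite-difference acceleration (names are illustrative, not the node's actual code):

```cpp
// accel_lowpass_gain (e.g. 0.9) controls how strongly past estimates are kept.
double estimate_accel(
  double v_curr, double v_prev, double dt, double & accel_filtered, double gain)
{
  const double accel_raw = (v_curr - v_prev) / dt;  // finite-difference acceleration
  // First-order IIR low-pass: blend the previous estimate with the new sample.
  accel_filtered = gain * accel_filtered + (1.0 - gain) * accel_raw;
  return accel_filtered;
}
```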

"},{"location":"localization/autoware_twist2accel/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/autoware_twist2accel/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry input/twist geometry_msgs::msg::TwistWithCovarianceStamped twist"},{"location":"localization/autoware_twist2accel/#output","title":"Output","text":"Name Type Description output/accel geometry_msgs::msg::AccelWithCovarianceStamped estimated acceleration"},{"location":"localization/autoware_twist2accel/#parameters","title":"Parameters","text":"Name Type Description Default Range use_odom boolean use odometry if true, else use twist input. 1 N/A accel_lowpass_gain float lowpass gain for lowpass filter in estimating acceleration. 0.9 \u22650.0"},{"location":"localization/autoware_twist2accel/#future-work","title":"Future work","text":"

Future work includes integrating acceleration into the EKF state.

"},{"location":"localization/yabloc/","title":"YabLoc","text":""},{"location":"localization/yabloc/#yabloc","title":"YabLoc","text":"

YabLoc is vision-based localization with a vector map. https://youtu.be/Eaf6r_BNFfk

It estimates position by matching road surface markings extracted from images with a vector map. Point cloud maps and LiDAR are not required. YabLoc enables users to localize vehicles that are not equipped with LiDAR, and to localize in environments where point cloud maps are not available.

"},{"location":"localization/yabloc/#packages","title":"Packages","text":""},{"location":"localization/yabloc/#how-to-launch-yabloc-instead-of-ndt","title":"How to launch YabLoc instead of NDT","text":"

When launching Autoware, if you set pose_source:=yabloc as an argument, YabLoc will be launched instead of NDT. By default, pose_source is ndt.

A sample command to run YabLoc is as follows:

ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=$HOME/autoware_map/sample-map-rosbag \\\nvehicle_model:=sample_vehicle \\\nsensor_model:=sample_sensor_kit \\\npose_source:=yabloc\n
"},{"location":"localization/yabloc/#architecture","title":"Architecture","text":""},{"location":"localization/yabloc/#principle","title":"Principle","text":"

The diagram below illustrates the basic principle of YabLoc. It detects road surface markings by extracting line segments within the road area obtained from graph-based segmentation. The red line at the center-top of the diagram represents the line segments identified as road surface markings. YabLoc transforms these segments for each particle and determines the particle's weight by comparing them with the cost map generated from Lanelet2.
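A heavily simplified sketch of this particle weighting idea: transform the extracted segment points by each particle's pose and accumulate their cost-map values (the types and the stubbed cost map are illustrative; the real implementation differs in detail):

```cpp
#include <cmath>
#include <vector>

struct Point2d { double x, y; };
struct Particle { double x, y, yaw, weight; };

// Stub cost map: higher values would mean "closer to a mapped road marking".
double cost_at(const Point2d & /*p*/) { return 1.0; }

void weigh_particle(Particle & particle, const std::vector<Point2d> & segment_points)
{
  if (segment_points.empty()) return;
  double score = 0.0;
  const double c = std::cos(particle.yaw), s = std::sin(particle.yaw);
  for (const auto & pt : segment_points) {
    // Transform the segment point into map coordinates by the particle's pose.
    const Point2d q{particle.x + c * pt.x - s * pt.y, particle.y + s * pt.x + c * pt.y};
    score += cost_at(q);  // agreement with the Lanelet2-based cost map
  }
  particle.weight = score / static_cast<double>(segment_points.size());
}
```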

"},{"location":"localization/yabloc/#visualization","title":"Visualization","text":""},{"location":"localization/yabloc/#core-visualization-topics","title":"Core visualization topics","text":"

These topics are not visualized by default.

index topic name description 1 /localization/yabloc/pf/predicted_particle_marker particle distribution of the particle filter. Red particles are probable candidates. 2 /localization/yabloc/pf/scored_cloud 3D projected line segments. The color indicates how well they match the map. 3 /localization/yabloc/image_processing/lanelet2_overlay_image overlay of lanelet2 (yellow lines) onto the image based on the estimated pose. If they match well with the actual road markings, it means that the localization performs well."},{"location":"localization/yabloc/#image-topics-for-debug","title":"Image topics for debug","text":"

These topics are not visualized by default.

index topic name description 1 /localization/yabloc/pf/cost_map_image cost map made from lanelet2 2 /localization/yabloc/pf/match_image projected line segments 3 /localization/yabloc/image_processing/image_with_colored_line_segment classified line segments. green line segments are used in particle correction 4 /localization/yabloc/image_processing/lanelet2_overlay_image overlay of lanelet2 5 /localization/yabloc/image_processing/segmented_image graph based segmentation result"},{"location":"localization/yabloc/#limitation","title":"Limitation","text":""},{"location":"localization/yabloc/yabloc_common/","title":"yabloc_common","text":""},{"location":"localization/yabloc/yabloc_common/#yabloc_common","title":"yabloc_common","text":"

This package contains some executable nodes related to the map. It also provides the yabloc common library.

"},{"location":"localization/yabloc/yabloc_common/#ground_server","title":"ground_server","text":""},{"location":"localization/yabloc/yabloc_common/#purpose","title":"Purpose","text":"

It estimates the height and tilt of the ground from lanelet2.

"},{"location":"localization/yabloc/yabloc_common/#input-outputs","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input","title":"Input","text":"Name Type Description input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map input/pose geometry_msgs::msg::PoseStamped estimated self pose"},{"location":"localization/yabloc/yabloc_common/#output","title":"Output","text":"Name Type Description output/ground std_msgs::msg::Float32MultiArray estimated ground parameters. it contains x, y, z, normal_x, normal_y, normal_z. output/ground_markers visualization_msgs::msg::Marker visualization of estimated ground plane output/ground_status std_msgs::msg::String status log of ground plane estimation output/height std_msgs::msg::Float32 altitude output/near_cloud sensor_msgs::msg::PointCloud2 point cloud extracted from lanelet2 and used for ground tilt estimation"},{"location":"localization/yabloc/yabloc_common/#parameters","title":"Parameters","text":"Name Type Description Default Range force_zero_tilt boolean if true, the tilt is always determined to be horizontal False N/A K float the number of neighbors for ground search on a map 50 N/A R float radius for ground search on a map [m] 10 N/A"},{"location":"localization/yabloc/yabloc_common/#ll2_decomposer","title":"ll2_decomposer","text":""},{"location":"localization/yabloc/yabloc_common/#purpose_1","title":"Purpose","text":"

This node extracts from lanelet2 the elements that are related to road surface markings and used by yabloc.

"},{"location":"localization/yabloc/yabloc_common/#input-outputs_1","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input_1","title":"Input","text":"Name Type Description input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"localization/yabloc/yabloc_common/#output_1","title":"Output","text":"Name Type Description output/ll2_bounding_box sensor_msgs::msg::PointCloud2 bounding boxes extracted from lanelet2 output/ll2_road_marking sensor_msgs::msg::PointCloud2 road surface markings extracted from lanelet2 output/ll2_sign_board sensor_msgs::msg::PointCloud2 traffic sign boards extracted from lanelet2 output/sign_board_marker visualization_msgs::msg::MarkerArray visualized traffic sign boards"},{"location":"localization/yabloc/yabloc_common/#parameters_1","title":"Parameters","text":"Name Type Description Default Range road_marking_labels array line string types that indicating road surface markings in lanelet2 ['cross_walk', 'zebra_marking', 'line_thin', 'line_thick', 'pedestrian_marking', 'stop_line', 'road_border'] N/A sign_board_labels array line string types that indicating traffic sign boards in lanelet2 ['sign-board'] N/A bounding_box_labels array line string types that indicating not mapped areas in lanelet2 ['none'] N/A"},{"location":"localization/yabloc/yabloc_image_processing/","title":"yabloc_image_processing","text":""},{"location":"localization/yabloc/yabloc_image_processing/#yabloc_image_processing","title":"yabloc_image_processing","text":"

This package contains some executable nodes related to image processing.

"},{"location":"localization/yabloc/yabloc_image_processing/#line_segment_detector","title":"line_segment_detector","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose","title":"Purpose","text":"

This node extracts all line segments from a grayscale image.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input","title":"Input","text":"Name Type Description input/image_raw sensor_msgs::msg::Image undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output","title":"Output","text":"Name Type Description output/image_with_line_segments sensor_msgs::msg::Image image with line segments highlighted output/line_segments_cloud sensor_msgs::msg::PointCloud2 detected line segments as point cloud. each point contains x,y,z, normal_x, normal_y, normal_z and z, and normal_z are always empty."},{"location":"localization/yabloc/yabloc_image_processing/#graph_segmentation","title":"graph_segmentation","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_1","title":"Purpose","text":"

This node extracts the road surface region by graph-based segmentation.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_1","title":"Input","text":"Name Type Description input/image_raw sensor_msgs::msg::Image undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output_1","title":"Output","text":"Name Type Description output/mask_image sensor_msgs::msg::Image image with masked segments determined as road surface area output/segmented_image sensor_msgs::msg::Image segmented image for visualization"},{"location":"localization/yabloc/yabloc_image_processing/#parameters","title":"Parameters","text":"Name Type Description Default Range target_height_ratio float height on the image to retrieve the candidate road surface 0.85 N/A target_candidate_box_width float size of the square area to search for candidate road surfaces 15 N/A pickup_additional_graph_segment boolean if this is true, additional regions of similar color are retrieved 1 N/A similarity_score_threshold float threshold for picking up additional areas 0.8 N/A sigma float parameter for cv::ximgproc::segmentation::GraphSegmentation 0.5 N/A k float parameter for cv::ximgproc::segmentation::GraphSegmentation 300 N/A min_size float parameter for cv::ximgproc::segmentation::GraphSegmentation 100 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#segment_filter","title":"segment_filter","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_2","title":"Purpose","text":"

This node integrates the results of graph_segmentation and line_segment_detector to extract road surface markings.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_2","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_2","title":"Input","text":"Name Type Description input/line_segments_cloud sensor_msgs::msg::PointCloud2 detected line segment input/mask_image sensor_msgs::msg::Image image with masked segments determined as road surface area input/camera_info sensor_msgs::msg::CameraInfo undistorted camera info"},{"location":"localization/yabloc/yabloc_image_processing/#output_2","title":"Output","text":"Name Type Description output/line_segments_cloud sensor_msgs::msg::PointCloud2 filtered line segments for visualization output/projected_image sensor_msgs::msg::Image projected filtered line segments for visualization output/projected_line_segments_cloud sensor_msgs::msg::PointCloud2 projected filtered line segments"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_1","title":"Parameters","text":"Name Type Description Default Range min_segment_length float min length threshold (if it is negative, it is unlimited) 1.5 N/A max_segment_distance float max distance threshold (if it is negative, it is unlimited) 30 N/A max_lateral_distance float max lateral distance threshold (if it is negative, it is unlimited) 10 N/A publish_image_with_segment_for_debug boolean toggle whether to publish the filtered line segment for debug 1 N/A max_range float range of debug projection visualization 20 N/A image_size float image size of debug projection visualization 800 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#undistort","title":"undistort","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_3","title":"Purpose","text":"

This node performs image resizing and undistortion at the same time.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_3","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_3","title":"Input","text":"Name Type Description input/camera_info sensor_msgs::msg::CameraInfo camera info input/image_raw sensor_msgs::msg::Image raw camera image input/image_raw/compressed sensor_msgs::msg::CompressedImage compressed camera image

This node subscribes to both the compressed image and raw image topics. If a raw image is received even once, the compressed image topic will no longer be subscribed to. This avoids redundant decompression within Autoware.
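
A minimal rclpy sketch of this subscription policy (the node structure is an assumption for illustration; topic names follow the table above):

import rclpy\nfrom rclpy.node import Node\nfrom sensor_msgs.msg import CompressedImage, Image\n\n\nclass UndistortSketch(Node):\n    def __init__(self):\n        super().__init__("undistort_sketch")\n        self.sub_raw = self.create_subscription(Image, "input/image_raw", self.on_raw, 10)\n        self.sub_compressed = self.create_subscription(\n            CompressedImage, "input/image_raw/compressed", self.on_compressed, 10)\n\n    def on_raw(self, msg):\n        # Once a raw image arrives, drop the compressed subscription\n        # to avoid redundant decompression.\n        if self.sub_compressed is not None:\n            self.destroy_subscription(self.sub_compressed)\n            self.sub_compressed = None\n        # ... resize and undistort msg here ...\n\n    def on_compressed(self, msg):\n        # ... decompress, then resize and undistort ...\n        pass\n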

"},{"location":"localization/yabloc/yabloc_image_processing/#output_3","title":"Output","text":"Name Type Description output/camera_info sensor_msgs::msg::CameraInfo resized camera info output/image_raw sensor_msgs::msg::CompressedImage undistorted and resized image"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_2","title":"Parameters","text":"Name Type Description Default Range use_sensor_qos boolean whether to use sensor qos or not True N/A width float resized image width size 800 N/A override_frame_id string value for overriding the camera's frame_id. if blank, frame_id of static_tf is not overwritten N/A"},{"location":"localization/yabloc/yabloc_image_processing/#about-tf_static-overriding","title":"about tf_static overriding","text":"click to open Some nodes requires `/tf_static` from `/base_link` to the frame_id of `/sensing/camera/traffic_light/image_raw/compressed` (e.g. `/traffic_light_left_camera/camera_optical_link`). You can verify that the tf_static is correct with the following command.
ros2 run tf2_ros tf2_echo base_link traffic_light_left_camera/camera_optical_link\n
If a wrong `/tf_static` is broadcast due to using a prototype vehicle, not having accurate calibration data, or some other unavoidable reason, it is useful to give the frame_id in `override_camera_frame_id`. If you give it a non-empty string, `/image_processing/undistort_node` will rewrite the frame_id in camera_info. For example, you can give a different tf_static as follows.
ros2 launch yabloc_launch sample_launch.xml override_camera_frame_id:=fake_camera_optical_link\nros2 run tf2_ros static_transform_publisher \\\n--frame-id base_link \\\n--child-frame-id fake_camera_optical_link \\\n--roll -1.57 \\\n--yaw -1.570\n
"},{"location":"localization/yabloc/yabloc_image_processing/#lanelet2_overlay","title":"lanelet2_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_4","title":"Purpose","text":"

This node overlays lanelet2 on the camera image based on the estimated self-position.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_4","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_4","title":"Input","text":"Name Type Description input/pose geometry_msgs::msg::PoseStamped estimated self pose input/projected_line_segments_cloud sensor_msgs::msg::PointCloud2 projected line segments including non-road markings input/camera_info sensor_msgs::msg::CameraInfo undistorted camera info input/image_raw sensor_msgs::msg::Image undistorted camera image input/ground std_msgs::msg::Float32MultiArray ground tilt input/ll2_road_marking sensor_msgs::msg::PointCloud2 lanelet2 elements regarding road surface markings input/ll2_sign_board sensor_msgs::msg::PointCloud2 lanelet2 elements regarding traffic sign boards"},{"location":"localization/yabloc/yabloc_image_processing/#output_4","title":"Output","text":"Name Type Description output/lanelet2_overlay_image sensor_msgs::msg::Image lanelet2 overlaid image output/projected_marker visualization_msgs::msg::Marker 3d projected line segments including non-road markings"},{"location":"localization/yabloc/yabloc_image_processing/#line_segments_overlay","title":"line_segments_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_5","title":"Purpose","text":"

This node visualizes classified line segments on the camera image.

"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_5","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_5","title":"Input","text":"Name Type Description input/line_segments_cloud sensor_msgs::msg::PointCloud2 classified line segments input/image_raw sensor_msgs::msg::Image undistorted camera image"},{"location":"localization/yabloc/yabloc_image_processing/#output_5","title":"Output","text":"Name Type Description output/image_with_colored_line_segments sensor_msgs::msg::Image image with highlighted line segments"},{"location":"localization/yabloc/yabloc_monitor/","title":"yabloc_monitor","text":""},{"location":"localization/yabloc/yabloc_monitor/#yabloc_monitor","title":"yabloc_monitor","text":"

YabLoc monitor is a wrapper node that monitors the status of the YabLoc localization system and publishes the status as diagnostics.

"},{"location":"localization/yabloc/yabloc_monitor/#feature","title":"Feature","text":""},{"location":"localization/yabloc/yabloc_monitor/#availability","title":"Availability","text":"

The node monitors the final output pose of YabLoc to verify its availability.
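
Conceptually, the availability check reduces to a timestamp comparison like the following sketch (function and variable names are illustrative; the tolerance corresponds to availability/timestamp_tolerance below):

def yabloc_is_available(now_sec, last_pose_stamp_sec, timestamp_tolerance=1.0):\n    """Return True if the latest YabLoc output pose is fresh enough."""\n    # YabLoc is considered available while the newest output pose is\n    # younger than the tolerated age.\n    return (now_sec - last_pose_stamp_sec) <= timestamp_tolerance\n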

"},{"location":"localization/yabloc/yabloc_monitor/#others","title":"Others","text":"

To be added.

"},{"location":"localization/yabloc/yabloc_monitor/#interfaces","title":"Interfaces","text":""},{"location":"localization/yabloc/yabloc_monitor/#input","title":"Input","text":"Name Type Description ~/input/yabloc_pose geometry_msgs/PoseStamped The final output pose of YabLoc"},{"location":"localization/yabloc/yabloc_monitor/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"localization/yabloc/yabloc_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range availability/timestamp_tolerance float tolerable time difference between current time and latest estimated pose 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/","title":"yabLoc_particle_filter","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#yabloc_particle_filter","title":"yabLoc_particle_filter","text":"

This package contains some executable nodes related to the particle filter.

"},{"location":"localization/yabloc/yabloc_particle_filter/#particle_predictor","title":"particle_predictor","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#input","title":"Input","text":"Name Type Description input/initialpose geometry_msgs::msg::PoseWithCovarianceStamped to specify the initial position of particles input/twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped linear velocity and angular velocity of prediction update input/height std_msgs::msg::Float32 ground height input/weighted_particles yabloc_particle_filter::msg::ParticleArray particles weighted by corrector nodes"},{"location":"localization/yabloc/yabloc_particle_filter/#output","title":"Output","text":"Name Type Description output/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped particle centroid with covariance output/pose geometry_msgs::msg::PoseStamped particle centroid with covariance output/predicted_particles yabloc_particle_filter::msg::ParticleArray particles weighted by predictor nodes debug/init_marker visualization_msgs::msg::Marker debug visualization of initial position debug/particles_marker_array visualization_msgs::msg::MarkerArray particles visualization. published if visualize is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range visualize boolean whether particles are also published in visualization_msgs or not True N/A static_linear_covariance float overriding covariance of /twist_with_covariance 0.04 N/A static_angular_covariance float overriding covariance of /twist_with_covariance 0.006 N/A resampling_interval_seconds float the interval of particle resampling 1.0 N/A num_of_particles float the number of particles 500 N/A prediction_rate float frequency of forecast updates, in Hz 50.0 N/A cov_xx_yy array the covariance of initial pose [2.0, 0.25] N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services","title":"Services","text":"Name Type Description yabloc_trigger_srv std_srvs::srv::SetBool activation and deactivation of yabloc estimation"},{"location":"localization/yabloc/yabloc_particle_filter/#gnss_particle_corrector","title":"gnss_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_1","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#input_1","title":"Input","text":"Name Type Description input/height std_msgs::msg::Float32 ground height input/predicted_particles yabloc_particle_filter::msg::ParticleArray predicted particles input/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped gnss measurement. used if use_ublox_msg is false input/navpvt ublox_msgs::msg::NavPVT gnss measurement. used if use_ublox_msg is true"},{"location":"localization/yabloc/yabloc_particle_filter/#output_1","title":"Output","text":"Name Type Description output/weighted_particles yabloc_particle_filter::msg::ParticleArray weighted particles debug/gnss_range_marker visualization_msgs::msg::MarkerArray gnss weight distribution debug/particles_marker_array visualization_msgs::msg::MarkerArray particles visualization. 
published if visualize is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_1","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether publish particles as marker_array or not 0 N/A mahalanobis_distance_threshold float if the Mahalanobis distance to the GNSS for particle exceeds this, the correction skips. 30 N/A for_fixed/max_weight float gnss weight distribution used when observation is fixed 5 N/A for_fixed/flat_radius float gnss weight distribution used when observation is fixed 0.5 N/A for_fixed/max_radius float gnss weight distribution used when observation is fixed 10 N/A for_fixed/min_weight float gnss weight distribution used when observation is fixed 0.5 N/A for_not_fixed/max_weight float gnss weight distribution used when observation is not fixed 1 N/A for_not_fixed/flat_radius float gnss weight distribution used when observation is not fixed 5 N/A for_not_fixed/max_radius float gnss weight distribution used when observation is not fixed 20 N/A for_not_fixed/min_weight float gnss weight distribution used when observation is not fixed 0.5 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#camera_particle_corrector","title":"camera_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_2","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#inputs-outputs_2","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#input_2","title":"Input","text":"Name Type Description input/predicted_particles yabloc_particle_filter::msg::ParticleArray predicted particles input/ll2_bounding_box sensor_msgs::msg::PointCloud2 road surface markings converted to line segments input/ll2_road_marking sensor_msgs::msg::PointCloud2 road surface markings converted to line segments input/projected_line_segments_cloud sensor_msgs::msg::PointCloud2 projected line segments input/pose geometry_msgs::msg::PoseStamped reference to retrieve the area map around the self location"},{"location":"localization/yabloc/yabloc_particle_filter/#output_2","title":"Output","text":"Name Type Description output/weighted_particles yabloc_particle_filter::msg::ParticleArray weighted particles debug/cost_map_image sensor_msgs::msg::Image cost map created from lanelet2 debug/cost_map_range visualization_msgs::msg::MarkerArray cost map boundary debug/match_image sensor_msgs::msg::Image projected line segments image debug/scored_cloud sensor_msgs::msg::PointCloud2 weighted 3d line segments debug/scored_post_cloud sensor_msgs::msg::PointCloud2 weighted 3d line segments which are iffy debug/state_string std_msgs::msg::String string describing the node state debug/particles_marker_array visualization_msgs::msg::MarkerArray particles visualization. 
published if visualize is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_2","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether publish particles as marker_array or not 0 N/A image_size float image size of debug/cost_map_image 800 N/A max_range float width of hierarchical cost map 40 N/A gamma float gamma value of the intensity gradient of the cost map 5 N/A min_prob float minimum particle weight the corrector node gives 0.1 N/A far_weight_gain float exp(-far_weight_gain_ * squared_distance_from_camera) is weight gain. if this is large, the nearby road markings will be more important 0.001 N/A enabled_at_first boolean if it is false, this node is not activated at first. you can activate by service call 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services_1","title":"Services","text":"Name Type Description switch_srv std_srvs::srv::SetBool activation and deactivation of correction"},{"location":"localization/yabloc/yabloc_pose_initializer/","title":"yabloc_pose_initializer","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#yabloc_pose_initializer","title":"yabloc_pose_initializer","text":"

This package contains a node related to initial pose estimation.

This package requires a pre-trained semantic segmentation model at runtime. This model is usually downloaded by ansible during the environment preparation phase of the installation. It is also possible to download it manually. Even if the model is not downloaded, initialization will still complete, but the accuracy may be compromised.

To download and extract the model manually:

$ mkdir -p ~/autoware_data/yabloc_pose_initializer/\n$ wget -P ~/autoware_data/yabloc_pose_initializer/ \\\nhttps://s3.ap-northeast-2.wasabisys.com/pinto-model-zoo/136_road-segmentation-adas-0001/resources.tar.gz\n$ tar xzf ~/autoware_data/yabloc_pose_initializer/resources.tar.gz -C ~/autoware_data/yabloc_pose_initializer/\n
"},{"location":"localization/yabloc/yabloc_pose_initializer/#note","title":"Note","text":"

This package makes use of external code. The trained files are provided by apollo and are automatically downloaded during environment preparation.

Original model URL

https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/road-segmentation-adas-0001

Open Model Zoo is licensed under Apache License Version 2.0.

Converted model URL

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/136_road-segmentation-adas-0001

The model conversion scripts are released under the MIT license.

"},{"location":"localization/yabloc/yabloc_pose_initializer/#special-thanks","title":"Special thanks","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#camera_pose_initializer","title":"camera_pose_initializer","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#purpose","title":"Purpose","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#input","title":"Input","text":"Name Type Description input/camera_info sensor_msgs::msg::CameraInfo undistorted camera info input/image_raw sensor_msgs::msg::Image undistorted camera image input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"localization/yabloc/yabloc_pose_initializer/#output","title":"Output","text":"Name Type Description output/candidates visualization_msgs::msg::MarkerArray initial pose candidates"},{"location":"localization/yabloc/yabloc_pose_initializer/#parameters","title":"Parameters","text":"Name Type Description Default Range angle_resolution float how many divisions of 1 sigma angle range 30 N/A"},{"location":"localization/yabloc/yabloc_pose_initializer/#services","title":"Services","text":"Name Type Description yabloc_align_srv tier4_localization_msgs::srv::PoseWithCovarianceStamped initial pose estimation request"},{"location":"map/autoware_lanelet2_map_visualizer/","title":"autoware_lanelet2_map_visualizer package","text":""},{"location":"map/autoware_lanelet2_map_visualizer/#autoware_lanelet2_map_visualizer-package","title":"autoware_lanelet2_map_visualizer package","text":"

This package provides features for visualizing lanelet2 maps.

"},{"location":"map/autoware_lanelet2_map_visualizer/#lanelet2_map_visualization","title":"lanelet2_map_visualization","text":""},{"location":"map/autoware_lanelet2_map_visualizer/#feature","title":"Feature","text":"

lanelet2_map_visualization converts autoware_map_msgs/LaneletMapBin messages into visualization_msgs/MarkerArray for visualization.

"},{"location":"map/autoware_lanelet2_map_visualizer/#how-to-run","title":"How to Run","text":"

ros2 run autoware_lanelet2_map_visualizer lanelet2_map_visualization

"},{"location":"map/autoware_lanelet2_map_visualizer/#subscribed-topics","title":"Subscribed Topics","text":""},{"location":"map/autoware_lanelet2_map_visualizer/#published-topics","title":"Published Topics","text":""},{"location":"map/autoware_map_height_fitter/","title":"autoware_map_height_fitter","text":""},{"location":"map/autoware_map_height_fitter/#autoware_map_height_fitter","title":"autoware_map_height_fitter","text":"

This library fits a given point to the ground of the point cloud map. The map loading operation is switched by the parameter enable_partial_load of the node specified by map_loader_name. The node using this library must use a multi-threaded executor.

"},{"location":"map/autoware_map_height_fitter/#parameters","title":"Parameters","text":"Name Type Description Default Range map_loader_name string Node name of the map loader from which this map_height_fitter will retrieve its parameters /map/pointcloud_map_loader N/A target string Target map to fit (choose from 'pointcloud_map', 'vector_map') pointcloud_map N/A"},{"location":"map/autoware_map_height_fitter/#topic-subscription","title":"Topic subscription","text":"Topic Name Description ~/pointcloud_map The topic containing the whole pointcloud map (only used when enable_partial_load = false)"},{"location":"map/autoware_map_height_fitter/#service-client","title":"Service client","text":"Service Name Description ~/partial_map_load The service to load the partial map"},{"location":"map/autoware_map_loader/","title":"autoware_map_loader package","text":""},{"location":"map/autoware_map_loader/#autoware_map_loader-package","title":"autoware_map_loader package","text":"

This package provides features for loading various maps.

"},{"location":"map/autoware_map_loader/#pointcloud_map_loader","title":"pointcloud_map_loader","text":""},{"location":"map/autoware_map_loader/#feature","title":"Feature","text":"

pointcloud_map_loader provides pointcloud maps to other Autoware nodes in various configurations. Currently, it supports the following two types:

NOTE: We strongly recommend using divided maps when working with a large pointcloud map, to enable the latter two features (partial and differential load). Please go through the prerequisites section for more details, and follow the instructions for dividing the map and preparing the metadata.

"},{"location":"map/autoware_map_loader/#prerequisites","title":"Prerequisites","text":""},{"location":"map/autoware_map_loader/#prerequisites-on-pointcloud-map-files","title":"Prerequisites on pointcloud map file(s)","text":"

You may provide either a single .pcd file or multiple .pcd files. If you are using multiple .pcd files, they MUST obey the following rules:

  1. The pointcloud map should be projected onto the same coordinate system defined in map_projection_loader, in order to be consistent with the lanelet2 map and other packages that convert between local and geodetic coordinates. For more information, please refer to the readme of map_projection_loader.
  2. It must be divided by straight lines parallel to the x-axis and y-axis. The system does not support division by diagonal lines or curved lines.
  3. The division size along each axis should be equal.
  4. The division size should be about 20m x 20m. In particular, care should be taken because using too large a division size (for example, more than 100m) may adversely affect the dynamic map loading features in ndt_scan_matcher and autoware_compare_map_segmentation.
  5. The split maps must not overlap with each other.
  6. A metadata file must also be provided. The metadata structure is described below.
"},{"location":"map/autoware_map_loader/#metadata-structure","title":"Metadata structure","text":"

The metadata should look like this:

x_resolution: 20.0\ny_resolution: 20.0\nA.pcd: [1200, 2500] # -> 1200 < x < 1220, 2500 < y < 2520\nB.pcd: [1220, 2500] # -> 1220 < x < 1240, 2500 < y < 2520\nC.pcd: [1200, 2520] # -> 1200 < x < 1220, 2520 < y < 2540\nD.pcd: [1240, 2520] # -> 1240 < x < 1260, 2520 < y < 2540\n

where x_resolution and y_resolution are the grid sizes along each axis, and each .pcd entry lists the minimum x and y coordinates of the area covered by that file, as annotated in the comments above.

You may use pointcloud_divider to divide the pointcloud map and to generate the compatible metadata.yaml.
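
To illustrate how the metadata is interpreted, the following sketch (a hypothetical helper, not part of the package) looks up which .pcd file covers a query position:

import math\n\nimport yaml\n\ndef find_pcd_for_position(metadata_path, x, y):\n    """Return the .pcd file whose grid cell contains (x, y), or None."""\n    with open(metadata_path) as f:\n        meta = yaml.safe_load(f)\n    x_res = meta.pop("x_resolution")\n    y_res = meta.pop("y_resolution")\n    # Snap the query point down to the lower-left corner of its grid cell.\n    cell = [math.floor(x / x_res) * x_res, math.floor(y / y_res) * y_res]\n    for name, corner in meta.items():\n        if corner == cell:\n            return name\n    return None\n\n# With the metadata above, find_pcd_for_position("metadata.yaml", 1205.0, 2512.0)\n# returns "A.pcd" (1200 < x < 1220, 2500 < y < 2520).\n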

"},{"location":"map/autoware_map_loader/#directory-structure-of-these-files","title":"Directory structure of these files","text":"

If you only have one pointcloud map, Autoware will assume the following directory structure by default.

sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n

If you have multiple pointcloud map files, an example directory structure would be as follows. Note that you need to have a metadata file when you have multiple pointcloud map files.

sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n\u2502 \u251c\u2500\u2500 A.pcd\n\u2502 \u251c\u2500\u2500 B.pcd\n\u2502 \u251c\u2500\u2500 C.pcd\n\u2502 \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 map_projector_info.yaml\n\u2514\u2500\u2500 pointcloud_map_metadata.yaml\n
"},{"location":"map/autoware_map_loader/#specific-features","title":"Specific features","text":""},{"location":"map/autoware_map_loader/#publish-raw-pointcloud-map-ros-2-topic","title":"Publish raw pointcloud map (ROS 2 topic)","text":"

The node publishes the raw pointcloud map loaded from the .pcd file(s).

"},{"location":"map/autoware_map_loader/#publish-downsampled-pointcloud-map-ros-2-topic","title":"Publish downsampled pointcloud map (ROS 2 topic)","text":"

The node publishes the downsampled pointcloud map loaded from the .pcd file(s). You can specify the downsample resolution by changing the leaf_size parameter.

"},{"location":"map/autoware_map_loader/#publish-metadata-of-pointcloud-map-ros-2-topic","title":"Publish metadata of pointcloud map (ROS 2 topic)","text":"

The node publishes the pointcloud map metadata with an ID attached to each map file. The metadata is loaded from the .yaml file. Please see the description of PointCloudMapMetaData.msg for details.

"},{"location":"map/autoware_map_loader/#send-partial-pointcloud-map-ros-2-service","title":"Send partial pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query from a client node, the node sends a set of pointcloud maps that overlaps with the queried area. Please see the description of GetPartialPointCloudMap.srv for details.

"},{"location":"map/autoware_map_loader/#send-differential-pointcloud-map-ros-2-service","title":"Send differential pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query and set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the set of map IDs. Please see the description of GetDifferentialPointCloudMap.srv for details.
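
Conceptually, a client keeps its local map cache in sync with the area around the ego vehicle using this service. Below is a plain-Python sketch of that bookkeeping, with a hypothetical query function standing in for the actual GetDifferentialPointCloudMap service call:

def update_loaded_maps(loaded_ids, query_differential_map, ego_position, radius):\n    """Keep the client's map cache in sync with the area around the ego vehicle.\n\n    loaded_ids: set of map cell IDs currently held by the client.\n    query_differential_map: hypothetical stand-in for the actual service call.\n    """\n    # The server returns the maps overlapping the queried area that the\n    # client does not hold yet, plus the IDs of maps that left the area.\n    new_maps, ids_to_remove = query_differential_map(ego_position, radius, loaded_ids)\n    for map_id in ids_to_remove:\n        loaded_ids.discard(map_id)  # unload maps that are no longer nearby\n    for cell_id, _cloud in new_maps:  # (ID, pointcloud) pairs in this sketch\n        loaded_ids.add(cell_id)\n    return loaded_ids\n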

"},{"location":"map/autoware_map_loader/#send-selected-pointcloud-map-ros-2-service","title":"Send selected pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given an ID query from a client node, the node sends the set of pointcloud maps (each attached with a unique ID) specified by the query. Please see the description of GetSelectedPointCloudMap.srv for details.

"},{"location":"map/autoware_map_loader/#parameters","title":"Parameters","text":"Name Type Description Default Range enable_whole_load boolean Enable raw pointcloud map publishing True N/A enable_downsampled_whole_load boolean Enable downsampled pointcloud map publishing False N/A enable_partial_load boolean Enable partial pointcloud map server True N/A enable_selected_load boolean Enable selected pointcloud map server False N/A leaf_size float Downsampling leaf size (only used when enable_downsampled_whole_load is set true) 3.0 N/A pcd_paths_or_directory array Path(s) to pointcloud map file or directory [] N/A pcd_metadata_path string Path to pointcloud metadata file N/A"},{"location":"map/autoware_map_loader/#interfaces","title":"Interfaces","text":""},{"location":"map/autoware_map_loader/#lanelet2_map_loader","title":"lanelet2_map_loader","text":""},{"location":"map/autoware_map_loader/#feature_1","title":"Feature","text":"

lanelet2_map_loader loads a Lanelet2 file and publishes the map data as an autoware_map_msgs/LaneletMapBin message. The node projects lat/lon coordinates into the arbitrary coordinate system defined in /map/map_projector_info from map_projection_loader. Please see autoware_map_msgs/msg/MapProjectorInfo.msg for supported projector types.

"},{"location":"map/autoware_map_loader/#how-to-run","title":"How to run","text":"

ros2 run autoware_map_loader lanelet2_map_loader --ros-args -p lanelet2_map_path:=path/to/map.osm

"},{"location":"map/autoware_map_loader/#subscribed-topics","title":"Subscribed Topics","text":""},{"location":"map/autoware_map_loader/#published-topics","title":"Published Topics","text":""},{"location":"map/autoware_map_loader/#parameters_1","title":"Parameters","text":"Name Type Description Default Range allow_unsupported_version boolean Flag to load unsupported format_version anyway. If true, just prints warning. true N/A center_line_resolution float Resolution of the Lanelet center line [m] 5.0 N/A use_waypoints boolean If true, centerline in the Lanelet2 map will be used as a waypoints tag. True N/A lanelet2_map_path string The lanelet2 map path pointing to the .osm file N/A

use_waypoints decides how to handle the centerline. This flag enables the use of the overwriteLaneletsCenterlineWithWaypoints function instead of overwriteLaneletsCenterline. Please see the documentation of the autoware_lanelet2_extension package for details.

"},{"location":"map/autoware_map_projection_loader/","title":"autoware_map_projection_loader","text":""},{"location":"map/autoware_map_projection_loader/#autoware_map_projection_loader","title":"autoware_map_projection_loader","text":""},{"location":"map/autoware_map_projection_loader/#feature","title":"Feature","text":"

autoware_map_projection_loader is responsible for publishing map_projector_info, which defines the kind of coordinate system in which Autoware is operating. This information is especially necessary when you want to convert from global (geoid) to local coordinates, or the other way around.

"},{"location":"map/autoware_map_projection_loader/#map-projector-info-file-specification","title":"Map projector info file specification","text":"

You need to provide a YAML file named map_projector_info.yaml under the map_path directory. For pointcloud_map_metadata.yaml, please refer to the Readme of autoware_map_loader.

sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n\u251c\u2500\u2500 map_projector_info.yaml\n\u2514\u2500\u2500 pointcloud_map_metadata.yaml\n

There are three types of transformations from latitude and longitude to XYZ coordinate system as shown in the figure below. Please refer to the following details for the necessary parameters for each projector type.

"},{"location":"map/autoware_map_projection_loader/#using-local-coordinate","title":"Using local coordinate","text":"
# map_projector_info.yaml\nprojector_type: Local\n
"},{"location":"map/autoware_map_projection_loader/#limitation","title":"Limitation","text":"

Functionality that requires latitude and longitude will become unavailable.

The currently identified unavailable functionalities are:

"},{"location":"map/autoware_map_projection_loader/#using-mgrs","title":"Using MGRS","text":"

If you want to use MGRS, please specify the MGRS grid as well.

# map_projector_info.yaml\nprojector_type: MGRS\nvertical_datum: WGS84\nmgrs_grid: 54SUE\n
"},{"location":"map/autoware_map_projection_loader/#limitation_1","title":"Limitation","text":"

It cannot be used with maps that span two or more MGRS grids. Please use it only when the map falls within a single MGRS grid.

"},{"location":"map/autoware_map_projection_loader/#using-localcartesianutm","title":"Using LocalCartesianUTM","text":"

If you want to use local cartesian UTM, please specify the map origin as well.

# map_projector_info.yaml\nprojector_type: LocalCartesianUTM\nvertical_datum: WGS84\nmap_origin:\n  latitude: 35.6762 # [deg]\n  longitude: 139.6503 # [deg]\n  altitude: 0.0 # [m]\n
"},{"location":"map/autoware_map_projection_loader/#using-transversemercator","title":"Using TransverseMercator","text":"

If you want to use Transverse Mercator projection, please specify the map origin as well.

# map_projector_info.yaml\nprojector_type: TransverseMercator\nvertical_datum: WGS84\nmap_origin:\n  latitude: 35.6762 # [deg]\n  longitude: 139.6503 # [deg]\n  altitude: 0.0 # [m]\n
"},{"location":"map/autoware_map_projection_loader/#published-topics","title":"Published Topics","text":""},{"location":"map/autoware_map_projection_loader/#parameters","title":"Parameters","text":"

Note that these parameters are assumed to be passed from launch arguments, and writing them directly in map_projection_loader.param.yaml is not recommended.

Name Type Description Default Range map_projector_info_path string The path where map_projector_info.yaml is located $(var map_projector_info_path) N/A lanelet2_map_path string The path where the lanelet2 map file (.osm) is located $(var lanelet2_map_path) N/A"},{"location":"map/autoware_map_tf_generator/","title":"autoware_map_tf_generator","text":""},{"location":"map/autoware_map_tf_generator/#autoware_map_tf_generator","title":"autoware_map_tf_generator","text":""},{"location":"map/autoware_map_tf_generator/#purpose","title":"Purpose","text":"

The nodes in this package broadcast the viewer frame for visualization of the map in RViz.

Note that no module needs the viewer frame; it is used only for visualization.

The following are the supported methods to calculate the position of the viewer frame:

"},{"location":"map/autoware_map_tf_generator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"map/autoware_map_tf_generator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"map/autoware_map_tf_generator/#input","title":"Input","text":""},{"location":"map/autoware_map_tf_generator/#autoware_pcd_map_tf_generator","title":"autoware_pcd_map_tf_generator","text":"Name Type Description /map/pointcloud_map sensor_msgs::msg::PointCloud2 Subscribe pointcloud map to calculate position of viewer frames"},{"location":"map/autoware_map_tf_generator/#autoware_vector_map_tf_generator","title":"autoware_vector_map_tf_generator","text":"Name Type Description /map/vector_map autoware_map_msgs::msg::LaneletMapBin Subscribe vector map to calculate position of viewer frames"},{"location":"map/autoware_map_tf_generator/#output","title":"Output","text":"Name Type Description /tf_static tf2_msgs/msg/TFMessage Broadcast viewer frames"},{"location":"map/autoware_map_tf_generator/#parameters","title":"Parameters","text":""},{"location":"map/autoware_map_tf_generator/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"map/autoware_map_tf_generator/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range map_frame string The parent frame name of viewer frame map N/A viewer_frame string Name of viewer frame viewer N/A"},{"location":"map/autoware_map_tf_generator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"map/util/lanelet2_map_preprocessor/","title":"Index","text":"

This package has been moved to https://github.com/autowarefoundation/autoware_tools/tree/main/map/autoware_lanelet2_map_utils.

"},{"location":"perception/autoware_bytetrack/","title":"bytetrack","text":""},{"location":"perception/autoware_bytetrack/#bytetrack","title":"bytetrack","text":""},{"location":"perception/autoware_bytetrack/#purpose","title":"Purpose","text":"

The core algorithm, named ByteTrack, mainly aims to perform multi-object tracking. Because the algorithm associates almost every detection box, including ones with low detection scores, the number of false negatives is expected to decrease.

demo video
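
The key idea can be sketched in simplified Python (thresholds and the match() helper are illustrative, not the package's actual code): detections are split by confidence, high-score boxes are associated with tracks first, and the remaining tracks get a second chance against the low-score boxes.

def byte_associate(tracks, detections, match, high_thresh=0.6):\n    """Two-stage association at the heart of ByteTrack (simplified)."""\n    high = [d for d in detections if d.score >= high_thresh]\n    low = [d for d in detections if d.score < high_thresh]\n\n    # Stage 1: match predicted tracks against high-score detections (e.g. by IoU).\n    matched, unmatched_tracks, unmatched_dets = match(tracks, high)\n\n    # Stage 2: instead of discarding low-score boxes, try to match them to the\n    # still-unmatched tracks; this recovers occluded objects and cuts false negatives.\n    matched_low, still_unmatched_tracks, _ = match(unmatched_tracks, low)\n\n    return matched + matched_low, still_unmatched_tracks, unmatched_dets\n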

"},{"location":"perception/autoware_bytetrack/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_bytetrack/#cite","title":"Cite","text":""},{"location":"perception/autoware_bytetrack/#2d-tracking-modification-from-original-codes","title":"2d tracking modification from original codes","text":"

The paper only says that the 2D tracking algorithm is a simple Kalman filter. The original code uses the top-left corner, the aspect ratio, and the size as the state vector.

This is sometimes unstable because occlusion can change the aspect ratio, so we use the top-left corner and the size as the state vector.
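
For instance, a constant-velocity Kalman filter over that state could use an 8-dimensional vector like the sketch below (this layout is an assumption for illustration, not the package's exact formulation):

import numpy as np\n\n# State: top-left corner and box size, plus their velocities.\n# x = [tl_x, tl_y, w, h, v_tl_x, v_tl_y, v_w, v_h]\ndt = 1.0  # one frame\n\n# Constant-velocity transition: position and size integrate their velocities.\nF = np.eye(8)\nF[0, 4] = F[1, 5] = F[2, 6] = F[3, 7] = dt\n\n# Only the box itself (top-left corner and size) is observed.\nH = np.hstack([np.eye(4), np.zeros((4, 4))])\n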

Kalman filter settings can be controlled by the parameters in config/bytetrack_node.param.yaml.

"},{"location":"perception/autoware_bytetrack/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_bytetrack/#bytetrack_node","title":"bytetrack_node","text":""},{"location":"perception/autoware_bytetrack/#input","title":"Input","text":"Name Type Description in/rect tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes"},{"location":"perception/autoware_bytetrack/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/objects/debug/uuid tier4_perception_msgs/DynamicObjectArray The universally unique identifiers (UUID) for each object"},{"location":"perception/autoware_bytetrack/#bytetrack_visualizer","title":"bytetrack_visualizer","text":""},{"location":"perception/autoware_bytetrack/#input_1","title":"Input","text":"Name Type Description in/image sensor_msgs/Image or sensor_msgs/CompressedImage The input image on which object detection is performed in/rect tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes in/uuid tier4_perception_msgs/DynamicObjectArray The universally unique identifiers (UUID) for each object"},{"location":"perception/autoware_bytetrack/#output_1","title":"Output","text":"Name Type Description out/image sensor_msgs/Image The image that detection bounding boxes and their UUIDs are drawn"},{"location":"perception/autoware_bytetrack/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_bytetrack/#bytetrack_node_1","title":"bytetrack_node","text":"Name Type Default Value Description track_buffer_length int 30 The frame count that a tracklet is considered to be lost"},{"location":"perception/autoware_bytetrack/#bytetrack_visualizer_1","title":"bytetrack_visualizer","text":"Name Type Default Value Description use_raw bool false The flag for the node to switch sensor_msgs/Image or sensor_msgs/CompressedImage as input"},{"location":"perception/autoware_bytetrack/#assumptionsknown-limits","title":"Assumptions/Known limits","text":""},{"location":"perception/autoware_bytetrack/#reference-repositories","title":"Reference repositories","text":""},{"location":"perception/autoware_bytetrack/#license","title":"License","text":"

The code under the lib directory is copied from the original repository and modified. The original code is under the MIT license stated as follows, while this ported package is provided under Apache License 2.0:

MIT License

Copyright (c) 2021 Yifu Zhang

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

"},{"location":"perception/autoware_cluster_merger/","title":"autoware cluster merger","text":""},{"location":"perception/autoware_cluster_merger/#autoware-cluster-merger","title":"autoware cluster merger","text":""},{"location":"perception/autoware_cluster_merger/#purpose","title":"Purpose","text":"

autoware_cluster_merger is a package for merging pointcloud clusters represented as detected objects with feature type.

"},{"location":"perception/autoware_cluster_merger/#inner-working-algorithms","title":"Inner-working / Algorithms","text":"

The output clusters are simply the concatenation of the clusters from the input topics.

"},{"location":"perception/autoware_cluster_merger/#input-output","title":"Input / Output","text":""},{"location":"perception/autoware_cluster_merger/#input","title":"Input","text":"Name Type Description input/cluster0 tier4_perception_msgs::msg::DetectedObjectsWithFeature pointcloud clusters input/cluster1 tier4_perception_msgs::msg::DetectedObjectsWithFeature pointcloud clusters"},{"location":"perception/autoware_cluster_merger/#output","title":"Output","text":"Name Type Description output/clusters tier4_perception_msgs::msg::DetectedObjectsWithFeature merged clusters"},{"location":"perception/autoware_cluster_merger/#parameters","title":"Parameters","text":"Name Type Description Default value output_frame_id string The header frame_id of output topic. base_link"},{"location":"perception/autoware_cluster_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_cluster_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_cluster_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_cluster_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_cluster_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_compare_map_segmentation/","title":"autoware_compare_map_segmentation","text":""},{"location":"perception/autoware_compare_map_segmentation/#autoware_compare_map_segmentation","title":"autoware_compare_map_segmentation","text":""},{"location":"perception/autoware_compare_map_segmentation/#purpose","title":"Purpose","text":"

autoware_compare_map_segmentation is a package that filters ground points from the input pointcloud using map information (e.g., a .pcd file, an elevation map, or split pointcloud maps from the map_loader interface).

"},{"location":"perception/autoware_compare_map_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_compare_map_segmentation/#compare-elevation-map-filter","title":"Compare Elevation Map Filter","text":"

Compare the z of the input points with the value of the elevation_map. The height difference is calculated by the binary integration of neighboring cells. Points whose height difference is below height_diff_thresh are removed.
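
A simplified sketch of this comparison, with the elevation lookup abstracted into a function (names are illustrative, not the actual implementation):

def filter_by_elevation(points, elevation_at, height_diff_thresh=0.15):\n    """Keep points that stand out above the elevation map."""\n    kept = []\n    for x, y, z in points:\n        ground_z = elevation_at(x, y)  # interpolated elevation of the cell\n        # Points close to the ground surface are treated as ground and removed.\n        if z - ground_z >= height_diff_thresh:\n            kept.append((x, y, z))\n    return kept\n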

"},{"location":"perception/autoware_compare_map_segmentation/#distance-based-compare-map-filter","title":"Distance Based Compare Map Filter","text":"

This filter compares the input pointcloud with the map pointcloud using the nearestKSearch function of the k-d tree and removes points that are close to the map pointcloud. The map pointcloud can be loaded statically at once at the beginning or dynamically as the vehicle moves.
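
A minimal sketch of this distance check, with scipy's cKDTree standing in for PCL's k-d tree and distance_threshold as in the parameter table below:

import numpy as np\nfrom scipy.spatial import cKDTree\n\ndef distance_based_filter(input_points, map_points, distance_threshold=0.5):\n    """Remove input points whose nearest map point is within the threshold."""\n    tree = cKDTree(map_points)               # built once per map (re)load\n    dist, _ = tree.query(input_points, k=1)  # nearest map point per input point\n    return np.asarray(input_points)[dist > distance_threshold]\n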

"},{"location":"perception/autoware_compare_map_segmentation/#voxel-based-approximate-compare-map-filter","title":"Voxel Based Approximate Compare Map Filter","text":"

The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them.

"},{"location":"perception/autoware_compare_map_segmentation/#voxel-based-compare-map-filter","title":"Voxel Based Compare Map Filter","text":"

The filter loads the map pointcloud (either statically, loading the whole map at once at the beginning, or dynamically while the vehicle moves) and uses VoxelGrid to downsample the map pointcloud.

For each point of the input pointcloud, the filter uses getCentroidIndexAt combined with the getGridCoordinates function from the VoxelGrid class to check whether a downsampled map point exists around the input point. Input points that have a downsampled map point in the containing or nearby voxels are removed.
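
The voxel check can be pictured as quantizing coordinates to voxel indices and testing membership in a set built from the downsampled map; this is a conceptual sketch, not the PCL implementation:

import numpy as np\n\ndef voxel_based_filter(input_points, map_points, leaf_size=0.5):\n    """Remove input points that fall into or next to an occupied map voxel."""\n    occupied = {tuple(v) for v in np.floor(np.asarray(map_points) / leaf_size).astype(int)}\n    kept = []\n    for p in input_points:\n        ix, iy, iz = np.floor(np.asarray(p) / leaf_size).astype(int)\n        # Check the containing voxel and its neighbors, mirroring the\n        # "containing or being close to" condition described above.\n        near = any((ix + dx, iy + dy, iz + dz) in occupied\n                   for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1))\n        if not near:\n            kept.append(p)\n    return kept\n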

"},{"location":"perception/autoware_compare_map_segmentation/#voxel-distance-based-compare-map-filter","title":"Voxel Distance based Compare Map Filter","text":"

This filter is a combination of the distance_based_compare_map_filter and voxel_based_approximate_compare_map_filter. The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid and a k-d tree of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them. Points that do not belong to any voxel grid are compared again with the map point cloud using the radiusSearch function of the k-d tree and are removed if they are close enough to the map.

"},{"location":"perception/autoware_compare_map_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_compare_map_segmentation/#compare-elevation-map-filter_1","title":"Compare Elevation Map Filter","text":""},{"location":"perception/autoware_compare_map_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/elevation_map grid_map::msg::GridMap elevation map"},{"location":"perception/autoware_compare_map_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/autoware_compare_map_segmentation/#parameters","title":"Parameters","text":"Name Type Description Default value map_layer_name string elevation map layer name elevation map_frame float frame_id of the map that is temporarily used before elevation_map is subscribed map height_diff_thresh float Remove points whose height difference is below this value [m] 0.15"},{"location":"perception/autoware_compare_map_segmentation/#other-filters","title":"Other Filters","text":""},{"location":"perception/autoware_compare_map_segmentation/#input_1","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/map sensor_msgs::msg::PointCloud2 map (in case static map loading) /localization/kinematic_state nav_msgs::msg::Odometry current ego-vehicle pose (in case dynamic map loading)"},{"location":"perception/autoware_compare_map_segmentation/#output_1","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/autoware_compare_map_segmentation/#parameters_1","title":"Parameters","text":"Name Type Description Default value use_dynamic_map_loading bool map loading mode selection, true for dynamic map loading, false for static map loading, recommended for no-split map pointcloud true distance_threshold float Threshold distance to compare input points with map points [m] 0.5 map_update_distance_threshold float Threshold of vehicle movement distance when map update is necessary (in dynamic map loading) [m] 10.0 map_loader_radius float Radius of map need to be loaded (in dynamic map loading) [m] 150.0 timer_interval_ms int Timer interval to check if the map update is necessary (in dynamic map loading) [ms] 100 publish_debug_pcd bool Enable to publish voxelized updated map in debug/downsampled_map/pointcloud for debugging. 
It might cause additional computation cost false downsize_ratio_z_axis double Positive ratio to reduce voxel_leaf_size and neighbor point distance threshold in z axis 0.5"},{"location":"perception/autoware_compare_map_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_compare_map_segmentation/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/","title":"autoware_crosswalk_traffic_light_estimator","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#autoware_crosswalk_traffic_light_estimator","title":"autoware_crosswalk_traffic_light_estimator","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#purpose","title":"Purpose","text":"

autoware_crosswalk_traffic_light_estimator estimates pedestrian traffic signals which can be summarized as the following two tasks:

This module works without ~/input/route, but in that case it simply outputs the subscribed results as they are.

"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#input","title":"Input","text":"Name Type Description ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/route autoware_planning_msgs::msg::LaneletRoute optional: route ~/input/classified/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray classified signals"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray output that contains estimated pedestrian traffic signals"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#parameters","title":"Parameters","text":"Name Type Description Default value use_last_detect_color bool If this parameter is true, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is false, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) true last_detect_color_hold_time double The time threshold to hold for last detect color. The unit is second. 2.0 last_colors_hold_time double The time threshold to hold for history detected pedestrian traffic light color. The unit is second. 1.0"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

When the pedestrian traffic signals are detected by the perception pipeline

When the pedestrian traffic signals are NOT detected by the perception pipeline

"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#estimate-whether-pedestrian-traffic-signals-are-flashing","title":"Estimate whether pedestrian traffic signals are flashing","text":"
start\nif (the pedestrian traffic light classification result exists)then\n    : update the flashing flag according to the classification result(in_signal) and last_signals\n    if (the traffic light is flashing?)then(yes)\n      : update the traffic light state\n    else(no)\n      : the traffic light state is the same with the classification result\nif (the classification result not exists)\n    : the traffic light state is the same with the estimation\n : output the current traffic light state\nend\n
"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#update-flashing-flag","title":"Update flashing flag","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#update-traffic-light-status","title":"Update traffic light status","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#estimate-the-color-of-pedestrian-traffic-signals","title":"Estimate the color of pedestrian traffic signals","text":"

If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal may be RED in order to prevent pedestrians from crossing when the following conditions are satisfied.

"},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#situation1","title":"Situation1","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#situation2","title":"Situation2","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_crosswalk_traffic_light_estimator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_detected_object_feature_remover/","title":"autoware_detected_object_feature_remover","text":""},{"location":"perception/autoware_detected_object_feature_remover/#autoware_detected_object_feature_remover","title":"autoware_detected_object_feature_remover","text":""},{"location":"perception/autoware_detected_object_feature_remover/#purpose","title":"Purpose","text":"

autoware_detected_object_feature_remover is a package that converts the topic type from DetectedObjectWithFeatureArray to DetectedObjects.
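
The conversion itself is a field-wise copy, roughly like the sketch below (field names follow the tier4/autoware message definitions as far as known; treat it as illustrative):

from autoware_perception_msgs.msg import DetectedObjects\n\ndef remove_feature(msg_with_feature):\n    """Convert DetectedObjectsWithFeature to DetectedObjects by dropping features."""\n    out = DetectedObjects()\n    out.header = msg_with_feature.header\n    # Each element pairs an object with its feature (e.g. a pointcloud cluster);\n    # only the object part is forwarded ("feature_objects" is the assumed field name).\n    out.objects = [f.object for f in msg_with_feature.feature_objects]\n    return out\n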

"},{"location":"perception/autoware_detected_object_feature_remover/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_detected_object_feature_remover/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_feature_remover/#input","title":"Input","text":"Name Type Description ~/input tier4_perception_msgs::msg::DetectedObjectWithFeatureArray detected objects with feature field"},{"location":"perception/autoware_detected_object_feature_remover/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/autoware_detected_object_feature_remover/#parameters","title":"Parameters","text":"

None

"},{"location":"perception/autoware_detected_object_feature_remover/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_detected_object_validation/","title":"detected_object_validation","text":""},{"location":"perception/autoware_detected_object_validation/#detected_object_validation","title":"detected_object_validation","text":""},{"location":"perception/autoware_detected_object_validation/#purpose","title":"Purpose","text":"

The purpose of this package is to eliminate obvious false positives of DetectedObjects.

"},{"location":"perception/autoware_detected_object_validation/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/autoware_detected_object_validation/#node-parameters","title":"Node Parameters","text":""},{"location":"perception/autoware_detected_object_validation/#object_lanelet_filter","title":"object_lanelet_filter","text":"Name Type Description Default Range filter_target_label.UNKNOWN boolean If true, unknown objects are filtered. 1 N/A filter_target_label.CAR boolean If true, car objects are filtered. 0 N/A filter_target_label.TRUCK boolean If true, truck objects are filtered. 0 N/A filter_target_label.BUS boolean If true, bus objects are filtered. 0 N/A filter_target_label.TRAILER boolean If true, trailer objects are filtered. 0 N/A filter_target_label.MOTORCYCLE boolean If true, motorcycle objects are filtered. 0 N/A filter_target_label.BICYCLE boolean If true, bicycle objects are filtered. 0 N/A filter_target_label.PEDESTRIAN boolean If true, pedestrian objects are filtered. 0 N/A polygon_overlap_filter.enabled boolean If true, objects that are not in the lanelet polygon are filtered. 1 N/A lanelet_direction_filter.enabled boolean If true, objects that are not in the same direction as the lanelet are filtered. 0 N/A lanelet_direction_filter.velocity_yaw_threshold float If the yaw difference between the object and the lanelet is greater than this value, the object is filtered. 0.785398 N/A lanelet_direction_filter.object_speed_threshold float If the object speed is greater than this value, the object is filtered. 3 N/A"},{"location":"perception/autoware_detected_object_validation/#object_position_filter","title":"object_position_filter","text":"Name Type Description Default Range filter_target_label.UNKNOWN boolean Filter for UNKNOWN label 1 N/A filter_target_label.CAR boolean Filter for CAR label 0 N/A filter_target_label.TRUCK boolean Filter for TRUCK label 0 N/A filter_target_label.BUS boolean Filter for BUS label 0 N/A filter_target_label.TRAILER boolean Filter for TRAILER label 0 N/A filter_target_label.MOTORCYCLE boolean Filter for MOTORCYCLE label 0 N/A filter_target_label.BICYCLE boolean Filter for BICYCLE label 0 N/A filter_target_label.PEDESTRIAN boolean Filter for PEDESTRIAN label 0 N/A upper_bound_x float Upper bound for X coordinate 100 N/A lower_bound_x float Lower bound for X coordinate 0 N/A upper_bound_y float Upper bound for Y coordinate 10 N/A lower_bound_y float Lower bound for Y coordinate -10 N/A"},{"location":"perception/autoware_detected_object_validation/#obstacle_pointcloud_based_validator","title":"obstacle_pointcloud_based_validator","text":"Name Type Description Default Range min_points_num array The minimum number of obstacle point clouds in DetectedObjects [10, 10, 10, 10, 10, 10, 10, 10] N/A max_points_num array The max number of obstacle point clouds in DetectedObjects [10, 10, 10, 10, 10, 10, 10, 10] N/A min_points_and_distance_ratio array Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. [800, 800, 800, 800, 800, 800, 800, 800] N/A validate_max_distance_m float The maximum distance from the baselink to the object to be validated 70.0 N/A using_2d_validator boolean The xy-plane projected (2D) obstacle point clouds will be used for validation False N/A enable_debugger boolean Whether to create debug topics or not? 
False N/A"},{"location":"perception/autoware_detected_object_validation/#occupancy_grid_based_validator","title":"occupancy_grid_based_validator","text":"Name Type Description Default Range mean_threshold float The percentage threshold of allowed non-freespace. 0.6 N/A enable_debug boolean Whether to display debug images or not? 0 N/A"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/","title":"object_lanelet_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#object_lanelet_filter","title":"object_lanelet_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#purpose","title":"Purpose","text":"

The object_lanelet_filter is a node that filters detected objects using the vector map. Only objects inside the vector map are published.

"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#input","title":"Input","text":"Name Type Description input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map input/object autoware_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#output","title":"Output","text":"Name Type Description output/object autoware_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#parameters","title":"Parameters","text":"Name Type Description debug bool if true, publish debug markers lanelet_extra_margin double if > 0 lanelet polygons are expanded by extra margin, if <= 0 margin is disabled"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered."},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The lanelet filter is performed based on the shape polygon and bounding box of the objects.
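
For illustration, a minimal sketch of this overlap test using Boost.Geometry follows (the helper name and polygon types are assumptions; the actual node additionally applies the optional lanelet_extra_margin expansion and the direction filter):

#include <vector>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;
using Point2d = bg::model::d2::point_xy<double>;
using Polygon2d = bg::model::polygon<Point2d>;

// A filter-target object is kept only if its 2D footprint overlaps a lanelet polygon.
bool isInsideLanelets(const Polygon2d & footprint, const std::vector<Polygon2d> & lanelets)
{
  for (const auto & lanelet : lanelets) {
    if (bg::intersects(footprint, lanelet)) {
      return true;  // overlap found -> the object is on the road network
    }
  }
  return false;  // no overlap -> the object is filtered out
}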

"},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_detected_object_validation/object-lanelet-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/","title":"object_position_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#object_position_filter","title":"object_position_filter","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#purpose","title":"Purpose","text":"

The object_position_filter is a node that filters detected objects based on their x and y positions. Only objects inside the x, y bounds are published.

"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#input","title":"Input","text":"Name Type Description input/object autoware_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#output","title":"Output","text":"Name Type Description output/object autoware_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered. upper_bound_x float 100.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_x float 0.00 Bound for filtering. Only used if filter_by_xy_position is true upper_bound_y float 50.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_y float -50.00 Bound for filtering. Only used if filter_by_xy_position is true"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Filtering is performed based on the center position of the object.

"},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_detected_object_validation/object-position-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/","title":"obstacle pointcloud based validator","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#obstacle-pointcloud-based-validator","title":"obstacle pointcloud based validator","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If the number of obstacle points belonging to a DetectedObject is small, the object is considered a false positive and removed. The obstacle point cloud can be a point cloud after compare-map filtering or a ground-filtered point cloud.
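
For illustration, a minimal sketch of this count check follows (the helper names and the inverse-distance scaling are assumptions based on the parameter descriptions; the actual node counts the points that fall inside each object's shape):

#include <algorithm>
#include <cstddef>

// Required point count for an object: min_points_and_distance_ratio is defined
// at a 1 m distance from base_link, so the threshold shrinks with distance and
// is clamped between min_points_num and max_points_num.
std::size_t requiredPoints(
  double distance_from_baselink_m, double min_points_and_distance_ratio,
  std::size_t min_points_num, std::size_t max_points_num)
{
  const auto scaled = static_cast<std::size_t>(
    min_points_and_distance_ratio / std::max(distance_from_baselink_m, 1.0));
  return std::clamp(scaled, min_points_num, max_points_num);
}

// The object is kept only if enough obstacle points fall inside it.
bool isValidObject(std::size_t points_inside_object, double distance_from_baselink_m)
{
  return points_inside_object >= requiredPoints(distance_from_baselink_m, 800.0, 10, 10);
}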

In the debug image above, the red DetectedObject is the validated object. The blue object is the deleted object.

"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/obstacle_pointcloud sensor_msgs::msg::PointCloud2 Obstacle point cloud of dynamic objects"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#parameters","title":"Parameters","text":"Name Type Description using_2d_validator bool The xy-plane projected (2D) obstacle point clouds will be used for validation min_points_num int The minimum number of obstacle point clouds in DetectedObjects max_points_num int The max number of obstacle point clouds in DetectedObjects min_points_and_distance_ratio float Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. enable_debugger bool Whether to create debug topics or not?"},{"location":"perception/autoware_detected_object_validation/obstacle-pointcloud-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only objects represented as BoundingBox or Cylinder are supported.

"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/","title":"occupancy grid based validator","text":""},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#occupancy-grid-based-validator","title":"occupancy grid based validator","text":""},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The occupancy grid map is compared with each DetectedObject, and objects whose area lies largely in freespace are deleted.

Basically, it takes an occupancy grid map as input and generates a binary image that distinguishes freespace from everything else.

A mask image is generated for each DetectedObject and the average value (percentage) inside the mask is calculated. If the percentage is lower than mean_threshold, the object is deleted.
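
A minimal sketch of this mask-mean test with OpenCV follows (the image convention is an assumption: occupancy holds 255 for non-freespace and 0 for freespace, with footprint being the object polygon in grid pixels):

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Keep the object only if enough of its footprint lies on non-freespace cells.
bool isValidObject(
  const cv::Mat & occupancy, const std::vector<cv::Point> & footprint,
  double mean_threshold)
{
  cv::Mat mask = cv::Mat::zeros(occupancy.size(), CV_8UC1);
  cv::fillPoly(mask, std::vector<std::vector<cv::Point>>{footprint}, cv::Scalar(255));
  const double non_freespace_ratio = cv::mean(occupancy, mask)[0] / 255.0;
  return non_freespace_ratio >= mean_threshold;  // below the threshold -> deleted
}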

"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/occupancy_grid_map nav_msgs::msg::OccupancyGrid OccupancyGrid with no time series calculation is preferred."},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#parameters","title":"Parameters","text":"Name Type Description mean_threshold float The percentage threshold of allowed non-freespace. enable_debug bool Whether to display debug images or not?"},{"location":"perception/autoware_detected_object_validation/occupancy-grid-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only vehicles represented as BoundingBox are supported.

"},{"location":"perception/autoware_detection_by_tracker/","title":"autoware_detection_by_tracker","text":""},{"location":"perception/autoware_detection_by_tracker/#autoware_detection_by_tracker","title":"autoware_detection_by_tracker","text":""},{"location":"perception/autoware_detection_by_tracker/#purpose","title":"Purpose","text":"

This package feeds the tracked objects back to the detection module to keep detection stable and continuous.

The autoware detection by tracker takes as input an unknown object containing a cluster of points and a tracker. The unknown object is optimized to fit the size of the tracker so that it can continue to be detected.

"},{"location":"perception/autoware_detection_by_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The autoware detection by tracker receives an unknown object containing a point cloud and a tracker, where the unknown object is mainly shape-fitted using euclidean clustering. Shape fitting with euclidean clustering and similar methods suffers from problems known as under-segmentation and over-segmentation.

Adapted from [3]

Simply looking at the overlap between the unknown object and the tracker does not work, so measures against under-segmentation and over-segmentation are needed.

"},{"location":"perception/autoware_detection_by_tracker/#policy-for-dealing-with-over-segmentation","title":"Policy for dealing with over segmentation","text":"
  1. Merge the unknown objects inside the tracker into a single object.
  2. Perform shape fitting using the tracker information, such as angle and size, as reference.
"},{"location":"perception/autoware_detection_by_tracker/#policy-for-dealing-with-under-segmentation","title":"Policy for dealing with under segmentation","text":"
  1. Compare the tracker and unknown objects, and determine that those with large recall and small precision are under-segmented objects (a sketch of this test follows this list).
  2. To divide the cluster of an under-segmented object, iterate over the clustering parameters to produce smaller clusters.
  3. Adjust the parameters several times and adopt the result with the highest IoU.
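
A minimal sketch of the recall/precision test in step 1 follows (the 0.5 thresholds and the area inputs are assumptions; the actual node computes recall and precision from the 2D overlap of the object shapes):

// recall: how much of the tracker is covered by the unknown object;
// precision: how much of the unknown object lies inside the tracker.
struct OverlapAreas
{
  double tracker_area;
  double unknown_area;
  double intersection_area;
};

bool isUnderSegmented(const OverlapAreas & areas)
{
  const double recall = areas.intersection_area / areas.tracker_area;
  const double precision = areas.intersection_area / areas.unknown_area;
  return recall > 0.5 && precision < 0.5;  // large recall, small precision
}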
"},{"location":"perception/autoware_detection_by_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_detection_by_tracker/#input","title":"Input","text":"Name Type Description ~/input/initial_objects tier4_perception_msgs::msg::DetectedObjectsWithFeature unknown objects ~/input/tracked_objects tier4_perception_msgs::msg::TrackedObjects trackers"},{"location":"perception/autoware_detection_by_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::DetectedObjects objects"},{"location":"perception/autoware_detection_by_tracker/#parameters","title":"Parameters","text":"Name Type Description Default value tracker_ignore_label.UNKNOWN bool If true, the node will ignore the tracker if its label is unknown. true tracker_ignore_label.CAR bool If true, the node will ignore the tracker if its label is CAR. false tracker_ignore_label.PEDESTRIAN bool If true, the node will ignore the tracker if its label is pedestrian. false tracker_ignore_label.BICYCLE bool If true, the node will ignore the tracker if its label is bicycle. false tracker_ignore_label.MOTORCYCLE bool If true, the node will ignore the tracker if its label is MOTORCYCLE. false tracker_ignore_label.BUS bool If true, the node will ignore the tracker if its label is bus. false tracker_ignore_label.TRUCK bool If true, the node will ignore the tracker if its label is truck. false tracker_ignore_label.TRAILER bool If true, the node will ignore the tracker if its label is TRAILER. false"},{"location":"perception/autoware_detection_by_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_detection_by_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_detection_by_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_detection_by_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] M. Himmelsbach, et al. \"Tracking and classification of arbitrary objects with bottom-up/top-down detection.\" (2012).

[2] Arya Senna Abdul Rachman, Arya. \"3D-LIDAR Multi Object Tracking for Autonomous Driving: Multi-target Detection and Tracking under Urban Road Uncertainties.\" (2017).

[3] David Held, et al. \"A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues.\" (2016).

"},{"location":"perception/autoware_detection_by_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_elevation_map_loader/","title":"autoware_elevation_map_loader","text":""},{"location":"perception/autoware_elevation_map_loader/#autoware_elevation_map_loader","title":"autoware_elevation_map_loader","text":""},{"location":"perception/autoware_elevation_map_loader/#purpose","title":"Purpose","text":"

This package provides an elevation map for autoware_compare_map_segmentation.

"},{"location":"perception/autoware_elevation_map_loader/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Generates an elevation_map from the subscribed pointcloud_map and vector_map and publishes it. The generated elevation_map is saved locally and loaded from the next run.

The elevation value of each cell is the average z value of the points in the lowest cluster. Cells with no elevation value can be inpainted using the values of neighboring cells.
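
A minimal sketch of the inpainting step with OpenCV follows (treating the elevation layer as a 32-bit float image is an assumption; the radius corresponds to the inpaint_radius parameter below, converted to pixels using the map resolution):

#include <cmath>
#include <opencv2/core.hpp>
#include <opencv2/photo.hpp>

// Fill cells without an elevation value from their neighbours.
cv::Mat inpaintElevation(const cv::Mat & elevation_in, float inpaint_radius_m, float resolution_m)
{
  cv::Mat elevation = elevation_in.clone();  // keep the caller's layer untouched
  // Mark empty (NaN) cells in the mask, then replace the NaNs so inpaint can run.
  cv::Mat mask(elevation.size(), CV_8UC1, cv::Scalar(0));
  for (int r = 0; r < elevation.rows; ++r) {
    for (int c = 0; c < elevation.cols; ++c) {
      if (std::isnan(elevation.at<float>(r, c))) mask.at<uchar>(r, c) = 255;
    }
  }
  cv::patchNaNs(elevation, 0.0);
  cv::Mat filled;
  cv::inpaint(elevation, mask, filled, inpaint_radius_m / resolution_m, cv::INPAINT_NS);
  return filled;
}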

"},{"location":"perception/autoware_elevation_map_loader/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_elevation_map_loader/#input","title":"Input","text":"Name Type Description input/pointcloud_map sensor_msgs::msg::PointCloud2 The point cloud map input/vector_map autoware_map_msgs::msg::LaneletMapBin (Optional) The binary data of lanelet2 map input/pointcloud_map_metadata autoware_map_msgs::msg::PointCloudMapMetaData (Optional) The metadata of point cloud map"},{"location":"perception/autoware_elevation_map_loader/#output","title":"Output","text":"Name Type Description output/elevation_map grid_map_msgs::msg::GridMap The elevation map output/elevation_map_cloud sensor_msgs::msg::PointCloud2 (Optional) The point cloud generated from the value of elevation map"},{"location":"perception/autoware_elevation_map_loader/#service","title":"Service","text":"Name Type Description service/get_selected_pcd_map autoware_map_msgs::srv::GetSelectedPointCloudMap (Optional) service to request point cloud map. If pointcloud_map_loader uses selected pointcloud map loading via ROS 2 service, use this."},{"location":"perception/autoware_elevation_map_loader/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_elevation_map_loader/#node-parameters","title":"Node parameters","text":"Name Type Description Default value map_layer_name std::string elevation_map layer name elevation param_file_path std::string GridMap parameters config path_default elevation_map_directory std::string elevation_map file (bag2) path_default map_frame std::string map_frame when loading elevation_map file map use_inpaint bool Whether to inpaint empty cells true inpaint_radius float Radius of a circular neighborhood of each point inpainted that is considered by the algorithm [m] 0.3 use_elevation_map_cloud_publisher bool Whether to publish output/elevation_map_cloud false use_lane_filter bool Whether to filter elevation_map with vector_map false lane_margin float Margin distance from the lane polygon of the area to be included in the inpainting mask [m]. Used only when use_lane_filter=True. 0.0 use_sequential_load bool Whether to get point cloud map by service false sequential_map_load_num int The number of point cloud maps to load at once (only used when use_sequential_load is set true). This should not be larger than number of all point cloud map cells. 1"},{"location":"perception/autoware_elevation_map_loader/#gridmap-parameters","title":"GridMap parameters","text":"

The parameters are described in config/elevation_map_parameters.yaml.

"},{"location":"perception/autoware_elevation_map_loader/#general-parameters","title":"General parameters","text":"Name Type Description Default value pcl_grid_map_extraction/num_processing_threads int Number of threads for processing grid map cells. Filtering of the raw input point cloud is not parallelized. 12"},{"location":"perception/autoware_elevation_map_loader/#grid-map-parameters","title":"Grid map parameters","text":"

See: https://github.com/ANYbotics/grid_map/tree/ros2/grid_map_pcl

Resulting grid map parameters.

Name Type Description Default value pcl_grid_map_extraction/grid_map/min_num_points_per_cell int Minimum number of points in the point cloud that have to fall within any of the grid map cells. Otherwise the cell elevation will be set to NaN. 3 pcl_grid_map_extraction/grid_map/resolution float Resolution of the grid map. Width and length are computed automatically. 0.3 pcl_grid_map_extraction/grid_map/height_type int The parameter that determine the elevation of a cell 0: Smallest value among the average values of each cluster, 1: Mean value of the cluster with the most points 1 pcl_grid_map_extraction/grid_map/height_thresh float Height range from the smallest cluster (Only for height_type 1) 1.0"},{"location":"perception/autoware_elevation_map_loader/#point-cloud-pre-processing-parameters","title":"Point Cloud Pre-processing Parameters","text":""},{"location":"perception/autoware_elevation_map_loader/#rigid-body-transform-parameters","title":"Rigid body transform parameters","text":"

Rigid body transform that is applied to the point cloud before computing elevation.

Name Type Description Default value pcl_grid_map_extraction/cloud_transform/translation float Translation (xyz) that is applied to the input point cloud before computing elevation. 0.0 pcl_grid_map_extraction/cloud_transform/rotation float Rotation (intrinsic rotation, convention X-Y'-Z'') that is applied to the input point cloud before computing elevation. 0.0"},{"location":"perception/autoware_elevation_map_loader/#cluster-extraction-parameters","title":"Cluster extraction parameters","text":"

Cluster extraction is based on pcl algorithms. See https://pointclouds.org/documentation/tutorials/cluster_extraction.html for more details.

Name Type Description Default value pcl_grid_map_extraction/cluster_extraction/cluster_tolerance float Distance between points below which they will still be considered part of one cluster. 0.2 pcl_grid_map_extraction/cluster_extraction/min_num_points int Min number of points that a cluster needs to have (otherwise it will be discarded). 3 pcl_grid_map_extraction/cluster_extraction/max_num_points int Max number of points that a cluster can have (otherwise it will be discarded). 1000000"},{"location":"perception/autoware_elevation_map_loader/#outlier-removal-parameters","title":"Outlier removal parameters","text":"

See https://pointclouds.org/documentation/tutorials/statistical_outlier.html for more explanation on outlier removal.

Name Type Description Default value pcl_grid_map_extraction/outlier_removal/is_remove_outliers float Whether to perform statistical outlier removal. false pcl_grid_map_extraction/outlier_removal/mean_K float Number of neighbors to analyze for estimating statistics of a point. 10 pcl_grid_map_extraction/outlier_removal/stddev_threshold float Number of standard deviations under which points are considered to be inliers. 1.0"},{"location":"perception/autoware_elevation_map_loader/#subsampling-parameters","title":"Subsampling parameters","text":"

See https://pointclouds.org/documentation/tutorials/voxel_grid.html for more explanation on point cloud downsampling.

Name Type Description Default value pcl_grid_map_extraction/downsampling/is_downsample_cloud bool Whether to perform downsampling or not. false pcl_grid_map_extraction/downsampling/voxel_size float Voxel sizes (xyz) in meters. 0.02"},{"location":"perception/autoware_euclidean_cluster/","title":"autoware_euclidean_cluster","text":""},{"location":"perception/autoware_euclidean_cluster/#autoware_euclidean_cluster","title":"autoware_euclidean_cluster","text":""},{"location":"perception/autoware_euclidean_cluster/#purpose","title":"Purpose","text":"

autoware_euclidean_cluster is a package for clustering points into smaller parts to classify objects.

This package has two clustering methods: euclidean_cluster and voxel_grid_based_euclidean_cluster.

"},{"location":"perception/autoware_euclidean_cluster/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_euclidean_cluster/#euclidean_cluster","title":"euclidean_cluster","text":"

pcl::EuclideanClusterExtraction is applied to the points. See the official documentation for details.

"},{"location":"perception/autoware_euclidean_cluster/#voxel_grid_based_euclidean_cluster","title":"voxel_grid_based_euclidean_cluster","text":"
  1. A centroid in each voxel is calculated by pcl::VoxelGrid.
  2. The centroids are clustered by pcl::EuclideanClusterExtraction.
  3. The input points are clustered based on the clustered centroids (a sketch of steps 1 and 2 follows this list).
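
A minimal sketch of steps 1 and 2 with PCL follows (the leaf size, tolerance, and cluster-size values are illustrative, not the package defaults; step 3 is omitted):

#include <vector>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

std::vector<pcl::PointIndices> clusterVoxelCentroids(
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)
{
  // Step 1: compute one centroid per voxel (a huge z leaf ignores height).
  auto centroids = pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(input);
  voxel.setLeafSize(0.3f, 0.3f, 100000.0f);
  voxel.filter(*centroids);

  // Step 2: euclidean clustering on the centroids.
  auto tree = pcl::search::KdTree<pcl::PointXYZ>::Ptr(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(centroids);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> extraction;
  extraction.setClusterTolerance(0.7);
  extraction.setMinClusterSize(1);
  extraction.setMaxClusterSize(10000);
  extraction.setSearchMethod(tree);
  extraction.setInputCloud(centroids);
  extraction.extract(clusters);
  return clusters;
}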
"},{"location":"perception/autoware_euclidean_cluster/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_euclidean_cluster/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/autoware_euclidean_cluster/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature cluster pointcloud debug/clusters sensor_msgs::msg::PointCloud2 colored cluster pointcloud for visualization"},{"location":"perception/autoware_euclidean_cluster/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_euclidean_cluster/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/autoware_euclidean_cluster/#euclidean_cluster_1","title":"euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space"},{"location":"perception/autoware_euclidean_cluster/#voxel_grid_based_euclidean_cluster_1","title":"voxel_grid_based_euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space voxel_leaf_size float the voxel leaf size of x and y min_points_number_per_voxel int the minimum number of points for a voxel"},{"location":"perception/autoware_euclidean_cluster/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_euclidean_cluster/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The use_height option of voxel_grid_based_euclidean_cluster isn't implemented yet.

"},{"location":"perception/autoware_ground_segmentation/","title":"autoware_ground_segmentation","text":""},{"location":"perception/autoware_ground_segmentation/#autoware_ground_segmentation","title":"autoware_ground_segmentation","text":""},{"location":"perception/autoware_ground_segmentation/#purpose","title":"Purpose","text":"

The autoware_ground_segmentation is a node that removes the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Detailed descriptions of each ground segmentation algorithm are available at the following links.

Filter Name Description Detail ray_ground_filter A method of removing the ground based on the geometrical relationship between points lined up on radiation link scan_ground_filter Almost the same method as ray_ground_filter, but with slightly improved performance link ransac_ground_filter A method of removing the ground by approximating the ground to a plane link"},{"location":"perception/autoware_ground_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_ground_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"perception/autoware_ground_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/autoware_ground_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id has_static_tf_only bool false flag to listen TF only once max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use approximate sync option"},{"location":"perception/autoware_ground_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

autoware::pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"perception/autoware_ground_segmentation/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/","title":"RANSAC Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#ransac-ground-filter","title":"RANSAC Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A plane is fitted to the input points, and points beyond a certain distance from the plane are treated as non-ground points. Normally, when using this method, the input points are filtered beforehand so that they are almost flat. Since the drivable area is often flat, there are methods such as filtering by lane.
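
A minimal sketch of the plane fit using PCL's RANSAC segmentation follows (the threshold and iteration values stand in for outlier_threshold and max_iterations; the actual node additionally checks plane_slope_threshold and treats points above height_threshold as non-ground):

#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Returns the indices of points judged to lie on the ground plane.
pcl::PointIndices segmentGroundPlane(const pcl::PointCloud<pcl::PointXYZ>::Ptr & cloud)
{
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.1);  // outlier_threshold [m]
  seg.setMaxIterations(1000);     // max_iterations
  seg.setInputCloud(cloud);

  pcl::PointIndices inliers;
  pcl::ModelCoefficients coefficients;
  seg.segment(inliers, coefficients);
  return inliers;
}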

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame string base_link frame has_static_tf_only bool Flag to listen TF only once unit_axis string The axis which we need to search ground plane max_iterations int The maximum number of iterations outlier_threshold double The distance threshold to the model [m] plane_slope_threshold double The slope threshold to prevent mis-fitting [deg] voxel_size_x double voxel size x [m] voxel_size_y double voxel size y [m] voxel_size_z double voxel size z [m] height_threshold double The height threshold from ground plane for no ground points [m] debug bool whether to output debug information"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#referencesexternal-links","title":"References/External links","text":"

https://pcl.readthedocs.io/projects/tutorials/en/latest/planar_segmentation.html

"},{"location":"perception/autoware_ground_segmentation/docs/ransac-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/","title":"Ray Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#ray-ground-filter","title":"Ray Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The points are separated radially into rays, and on each ray the ground is classified sequentially, starting from the point closest to the ego vehicle, based on geometric information such as the distance and angle between points.
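
A simplified sketch of the per-ray test follows (reducing the check to a single local_max_slope comparison is an assumption for brevity; the actual node combines the local and global cones with min_height_threshold and the other parameters below):

#include <cmath>

// Walking outward along one ray: a point stays ground while the slope from the
// previous ground point is below local_max_slope (all angles in radians).
bool isGroundPoint(
  double prev_radius_m, double prev_height_m,
  double radius_m, double height_m, double local_max_slope_rad)
{
  const double dr = radius_m - prev_radius_m;              // radial gap to the previous point
  const double dz = std::fabs(height_m - prev_height_m);   // height change
  return std::atan2(dz, dr) < local_max_slope_rad;
}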

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description input_frame string frame id of input pointcloud output_frame string frame id of output pointcloud has_static_tf_only bool Flag to listen TF only once general_max_slope double The triangle created by general_max_slope is called the global cone. If the point is outside the global cone, it is judged to be a point that is no on the ground initial_max_slope double Generally, the point where the object first hits is far from ego-vehicle because of sensor blind spot, so resolution is different from that point and thereafter, so this parameter exists to set a separate local_max_slope local_max_slope double The triangle created by local_max_slope is called the local cone. This parameter for classification based on the continuity of points min_height_threshold double This parameter is used instead of height_threshold because it's difficult to determine continuity in the local cone when the points are too close to each other. radial_divider_angle double The angle of ray concentric_divider_distance double Only check points which radius is larger than concentric_divider_distance reclass_distance_threshold double To check if point is to far from previous one, if so classify again min_x double The parameter to set vehicle footprint manually max_x double The parameter to set vehicle footprint manually min_y double The parameter to set vehicle footprint manually max_y double The parameter to set vehicle footprint manually"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_ground_segmentation/docs/ray-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/","title":"Scan Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#scan-ground-filter","title":"Scan Ground Filter","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm works in the following steps:

  1. Divide the whole pointcloud into groups (so-called rays) by azimuth angle.
  2. Sort the points on each ray by radial distance (xy-distance).
  3. Divide the pointcloud on each ray into grids.
  4. Classify each point (a simplified sketch follows this list):
    1. Check the radial distance to the previous point; if the distance is large, the previous point is \"no ground\", and the height of the current point is greater than that of the previous point, the current point is classified as no ground.
    2. Check the vertical angle of the point compared with the previous ground grid.
    3. Check the height of the point compared with the predicted ground level.
    4. If the vertical angle is greater than local_slope_max and the height relative to the predicted ground level is greater than the \"non ground height threshold\", the point is classified as \"non ground\".
    5. If the vertical angle is in the range [-local_slope_max, local_slope_max] or the height relative to the predicted ground level is smaller than non_ground_height_threshold, the point is classified as \"ground\".
    6. If the vertical angle is lower than -local_slope_max or the height relative to the ground level is greater than detection_range_z_max, the point is classified as out of range.
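
A simplified sketch of the classification in steps 4.4-4.6 follows (the argument names mirror the parameters listed below; the grid bookkeeping and the radial-distance check of step 4.1 are omitted):

enum class PointLabel { GROUND, NON_GROUND, OUT_OF_RANGE };

PointLabel classifyPoint(
  double vertical_angle_rad, double height_from_predicted_ground_m,
  double local_slope_max_rad, double non_ground_height_threshold_m,
  double detection_range_z_max_m)
{
  if (vertical_angle_rad < -local_slope_max_rad ||
      height_from_predicted_ground_m > detection_range_z_max_m) {
    return PointLabel::OUT_OF_RANGE;
  }
  if (vertical_angle_rad > local_slope_max_rad &&
      height_from_predicted_ground_m > non_ground_height_threshold_m) {
    return PointLabel::NON_GROUND;
  }
  return PointLabel::GROUND;  // within the local slope cone or close to ground level
}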
"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description input_frame string \"base_link\" frame id of input pointcloud output_frame string \"base_link\" frame id of output pointcloud has_static_tf_only bool false Flag to listen TF only once global_slope_max_angle_deg double 8.0 The global angle to classify as the ground or object [deg].A large threshold may reduce false positive of high slope road classification but it may lead to increase false negative of non-ground classification, particularly for small objects. local_slope_max_angle_deg double 10.0 The local angle to classify as the ground or object [deg] when comparing with adjacent point.A small value enhance accuracy classification of object with inclined surface. This should be considered together with split_points_distance_tolerance value. radial_divider_angle_deg double 1.0 The angle which divide the whole pointcloud to sliced group [deg] split_points_distance_tolerance double 0.2 The xy-distance threshold to distinguish far and near [m] split_height_distance double 0.2 The height threshold to distinguish ground and non-ground pointcloud when comparing with adjacent points [m]. A small threshold improves classification of non-ground point, especially for high elevation resolution pointcloud lidar. However, it might cause false positive for small step-like road surface or misaligned multiple lidar configuration. use_virtual_ground_point bool true whether to use the ground center of front wheels as the virtual ground point. detection_range_z_max float 2.5 Maximum height of detection range [m], applied only for elevation_grid_mode center_pcl_shift float 0.0 The x-axis offset of addition LiDARs from vehicle center of mass [m], recommended to use only for additional LiDARs in elevation_grid_mode non_ground_height_threshold float 0.2 Height threshold of non ground objects [m] as split_height_distance and applied only for elevation_grid_mode grid_mode_switch_radius float 20.0 The distance where grid division mode change from by distance to by vertical angle [m], applied only for elevation_grid_mode grid_size_m float 0.5 The first grid size [m], applied only for elevation_grid_mode.A large value enhances the prediction stability for ground surface. suitable for rough surface or multiple lidar configuration. gnd_grid_buffer_size uint16 4 Number of grids using to estimate local ground slope, applied only for elevation_grid_mode low_priority_region_x float -20.0 The non-zero x threshold in back side from which small objects detection is low priority [m] elevation_grid_mode bool true Elevation grid scan mode option use_recheck_ground_cluster bool true Enable recheck ground cluster use_lowest_point bool true to select lowest point for reference in recheck ground cluster, otherwise select middle point"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

The elevation grid idea is referred from \"Shen Z, Liang H, Lin L, Wang Z, Huang W, Yu J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sensing. 2021; 13(16):3239. https://doi.org/10.3390/rs13163239\"

"},{"location":"perception/autoware_ground_segmentation/docs/scan-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/","title":"autoware_image_projection_based_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/#autoware_image_projection_based_fusion","title":"autoware_image_projection_based_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/#purpose","title":"Purpose","text":"

The autoware_image_projection_based_fusion is a package that fuses detected obstacles (bounding boxes or segmentation) from images with 3D pointclouds or obstacles (bounding boxes, clusters, or segmentation).

"},{"location":"perception/autoware_image_projection_based_fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_image_projection_based_fusion/#sync-algorithm","title":"Sync Algorithm","text":""},{"location":"perception/autoware_image_projection_based_fusion/#matching","title":"matching","text":"

The offset between each camera and the lidar is set according to their shutter timing. After applying the offset to the timestamp, if the interval between the timestamp of the pointcloud topic and that of the roi message is less than the match threshold, the two messages are matched.

The current default values in autoware.universe for the TIER IV Robotaxi are: - input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] - match_threshold_ms: 30.0
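
A minimal sketch of the matching rule follows (subtracting the per-camera offset from the roi timestamp is an assumption about the sign convention; all times are in milliseconds):

#include <cmath>

bool isMatched(
  double pointcloud_stamp_ms, double roi_stamp_ms,
  double input_offset_ms, double match_threshold_ms)
{
  // Compensate the camera shutter timing, then compare against the threshold.
  const double interval = std::fabs(pointcloud_stamp_ms - (roi_stamp_ms - input_offset_ms));
  return interval < match_threshold_ms;
}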

"},{"location":"perception/autoware_image_projection_based_fusion/#fusion-and-timer","title":"fusion and timer","text":"

The subscription status of each message is indicated with 'O'.

1. If a pointcloud message is received under the condition below:

pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O O

If the roi msgs can be matched, fuse them and postprocess the pointcloud message. Otherwise, fuse the matched roi msgs and cache the pointcloud.

2. If a pointcloud message is received under the condition below:

pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O

If the roi msgs can be matched, fuse them and cache the pointcloud.

3. If a pointcloud message is received under the condition below:

pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O O

If roi msg 3 arrives before the next pointcloud message or the timeout, fuse it if it matches; otherwise, wait for the next roi msg 3. If roi msg 3 does not arrive before the next pointcloud message or the timeout, postprocess the pointcloud message as it is.

The timeout threshold should be set according to the postprocessing time. E.g., if the postprocessing time is around 50 ms, the timeout threshold should be set to less than 50 ms so that the whole processing time stays below 100 ms. The current default value in autoware.universe for XX1 is: - timeout_ms: 50.0

"},{"location":"perception/autoware_image_projection_based_fusion/#the-build_only-option","title":"The build_only option","text":"

The pointpainting_fusion node has a build_only option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all ROS parameters into the .param.yaml file in Autoware Universe, the build_only option has not been moved to the .param.yaml file for now, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:

ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml model_name:=pointpainting model_path:=/home/autoware/autoware_data/image_projection_based_fusion model_param_path:=$(ros2 pkg prefix autoware_image_projection_based_fusion --share)/config/pointpainting.param.yaml build_only:=true\n
"},{"location":"perception/autoware_image_projection_based_fusion/#known-limits","title":"Known Limits","text":"

The rclcpp::TimerBase timer cannot break a for loop; therefore, even if the time runs out while fusing a roi msg in the middle, the program keeps running until all msgs are fused.

"},{"location":"perception/autoware_image_projection_based_fusion/#approximate-camera-projection","title":"Approximate camera projection","text":"

For algorithms like pointpainting_fusion, the computation required to project points onto an unrectified (raw) image can be substantial. To address this, an option is provided to reduce the computational load. Set the approximate_camera_projection parameter to true for each camera (ROIs). If the corresponding point_project_to_unrectified_image parameter is also set to true, the projections will be pre-cached.

The cached projections are calculated using a grid, with the grid size specified by the approximation_grid_width_size and approximation_grid_height_size parameters in the configuration file. The centers of the grid are used as the projected points.
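
A minimal sketch of such a cache follows (the class layout and names are assumptions; each cell stores the projection pre-computed at its centre):

#include <cstddef>
#include <vector>

struct Projection { float u; float v; };  // pixel coordinates on the raw image

class ApproximateProjectionGrid
{
public:
  ApproximateProjectionGrid(std::size_t cols, std::size_t rows, float cell_size)
  : cols_(cols), cell_size_(cell_size), cells_(cols * rows) {}

  // Look up the cached projection of the grid cell containing (x, y).
  const Projection & at(float x, float y) const
  {
    const auto col = static_cast<std::size_t>(x / cell_size_);
    const auto row = static_cast<std::size_t>(y / cell_size_);
    return cells_[row * cols_ + col];
  }

private:
  std::size_t cols_;
  float cell_size_;
  std::vector<Projection> cells_;
};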

"},{"location":"perception/autoware_image_projection_based_fusion/#detail-description-of-each-fusions-algorithm-is-in-the-following-links","title":"Detail description of each fusion's algorithm is in the following links","text":"Fusion Name Description Detail roi_cluster_fusion Overwrite a classification label of clusters by that of ROIs from a 2D object detector. link roi_detected_object_fusion Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. link pointpainting_fusion Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. link roi_pointcloud_fusion Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects link"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/","title":"pointpainting_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#pointpainting_fusion","title":"pointpainting_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#purpose","title":"Purpose","text":"

The pointpainting_fusion is a package for utilizing the class information detected by a 2D object detector in 3D object detection.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The lidar points are projected onto the output of an image-only 2D object detection network, and the class scores are appended to each point. The painted point cloud can then be fed to the centerpoint network.
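
A minimal sketch of the painting step for a single point and ROI follows (the plain pinhole projection with the intrinsic matrix K is a simplification; the actual node applies the camera extrinsics and can project onto the unrectified image):

#include <array>
#include <optional>

struct Roi
{
  float x_min, y_min, x_max, y_max;
  std::array<float, 3> class_scores;  // e.g. {car, pedestrian, bicycle}
};

// Project a point (already in the camera optical frame) and, if it falls inside
// the ROI, return the class scores to append to the point.
std::optional<std::array<float, 3>> paintPoint(
  float x, float y, float z, const std::array<float, 9> & K, const Roi & roi)
{
  if (z <= 0.0f) return std::nullopt;          // behind the image plane
  const float u = (K[0] * x + K[2] * z) / z;   // fx * x / z + cx
  const float v = (K[4] * y + K[5] * z) / z;   // fy * y / z + cy
  if (u < roi.x_min || u > roi.x_max || v < roi.y_min || v > roi.y_max) {
    return std::nullopt;
  }
  return roi.class_scores;
}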

"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 painted pointcloud ~/output/objects autoware_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file build_only bool false shutdown the node after TensorRT engine file is built trt_precision string fp16 TensorRT inference precision: fp32 or fp16 post_process_params.score_threshold double 0.4 detected objects with score less than threshold are ignored post_process_params.yaw_norm_thresholds list[double] [0.3, 0.3, 0.3, 0.3, 0.0] An array of distance threshold values of norm of yaw [rad]. post_process_params.iou_nms_search_distance_2d double 10.0 A maximum distance value to search the nearest objects. post_process_params.iou_nms_threshold double 0.1 A threshold value of NMS using IoU score. post_process_params.has_twist boolean false Indicates whether the model outputs twist value. densification_params.world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_params.num_past_frames int 1 the number of past frames to fuse with the current frame"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#referencesexternal-links","title":"References/External links","text":"

[1] Vora, Sourabh, et al. \"PointPainting: Sequential fusion for 3d object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[2] CVPR'20 Workshop on Scalability in Autonomous Driving, Waymo Open Dataset Challenge: https://youtu.be/9g9GsI33ol8?t=535 Ding, Zhuangzhuang, et al. \"1st Place Solution for Waymo Open Dataset Challenge--3D Detection and Domain Adaptation.\" arXiv preprint arXiv:2006.15505 (2020).

"},{"location":"perception/autoware_image_projection_based_fusion/docs/pointpainting-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/","title":"roi_cluster_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#roi_cluster_fusion","title":"roi_cluster_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#purpose","title":"Purpose","text":"

The roi_cluster_fusion is a package for filtering clusters that are less likely to be objects and for overwriting the labels of clusters with those of the Regions Of Interest (ROIs) from a 2D object detector.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The clusters are projected onto the image planes, and if the ROIs of the clusters overlap with the ROIs from the detector, the labels of the clusters are overwritten with those of the detector's ROIs. Intersection over Union (IoU) is used to determine whether they overlap.
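
A minimal sketch of the IoU computation for two axis-aligned ROIs follows (the rectangle type is an assumption; the iou_x / iou_y modes in the parameters below compare a single axis instead):

#include <algorithm>

struct Rect { double x_min, y_min, x_max, y_max; };

double iou(const Rect & a, const Rect & b)
{
  const double w = std::max(0.0, std::min(a.x_max, b.x_max) - std::max(a.x_min, b.x_min));
  const double h = std::max(0.0, std::min(a.y_max, b.y_max) - std::max(a.y_min, b.y_min));
  const double intersection = w * h;
  const double union_area =
    (a.x_max - a.x_min) * (a.y_max - a.y_min) +
    (b.x_max - b.x_min) * (b.y_max - b.y_min) - intersection;
  return union_area > 0.0 ? intersection / union_area : 0.0;
}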

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature clustered pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature labeled cluster pointcloud ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#parameters","title":"Parameters","text":"

The following figure is an overview of the inner pipeline of the RoI cluster fusion node. Please refer to it when setting parameters.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description fusion_distance double If the detected object's distance to frame_id is less than the threshold, the fusion will be processed trust_object_distance double if the detected object's distance is less than the trust_object_distance, trust_object_iou_mode will be used, otherwise non_trust_object_iou_mode will be used trust_object_iou_mode string select mode from 3 options {iou, iou_x, iou_y} to calculate IoU in range of [0, trust_distance]. iou: IoU along x-axis and y-axis iou_x: IoU along x-axis iou_y: IoU along y-axis non_trust_object_iou_mode string the IOU mode using in range of [trust_distance, fusion_distance] if trust_distance < fusion_distance use_cluster_semantic_type bool if false, the labels of clusters are overwritten by UNKNOWN before fusion only_allow_inside_cluster bool if true, the only clusters contained inside RoIs by a detector roi_scale_factor double the scale factor for offset of detector RoIs if only_allow_inside_cluster=true iou_threshold double the IoU threshold to overwrite a label of clusters with a label of roi unknown_iou_threshold double the IoU threshold to fuse cluster with unknown label of roi remove_unknown bool if true, remove all UNKNOWN labeled objects from output rois_number int the number of input rois debug_mode bool If true, subscribe and publish images for visualization."},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-cluster-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/","title":"roi_detected_object_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#roi_detected_object_fusion","title":"roi_detected_object_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#purpose","title":"Purpose","text":"

The roi_detected_object_fusion package overwrites the labels of detected objects with those of the Regions Of Interest (ROIs) produced by a 2D object detector.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In what follows, we describe the algorithm utilized by roi_detected_object_fusion (the meaning of each parameter can be found in the Parameters section):

  1. If the existence_probability of a detected object is greater than the threshold, it is accepted without any further processing and published in output.
  2. The remaining detected objects are projected onto the image planes, and if the resulting ROIs overlap with the ones from the 2D detector, they are published as fused objects in output. Intersection over Union (IoU) is used to determine whether the detections from input and the ROIs from input/rois overlap (see the sketch after this list).

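The two-step decision above can be sketched as follows (illustrative Python only, not the node's actual C++ code; the object dictionaries and the project_to_image/iou helpers passed in as callables are assumptions of this sketch):

def fuse(objects, rois, project_to_image, iou, passthrough_probability_threshold, min_iou_threshold):\n    fused, ignored = [], []\n    for obj in objects:\n        # Step 1: high-confidence objects pass through without fusion.\n        if obj['existence_probability'] > passthrough_probability_threshold:\n            fused.append(obj)\n            continue\n        # Step 2: project the object onto the image plane and match it against\n        # the 2D detector ROIs by IoU.\n        obj_roi = project_to_image(obj)\n        if any(iou(obj_roi, roi) > min_iou_threshold for roi in rois):\n            fused.append(obj)\n        else:\n            ignored.append(obj)\n    # fused and ignored correspond to ~/debug/fused_objects and ~/debug/ignored_objects.\n    return fused, ignored\n
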
The DetectedObject has three possible shape choices/implementations, where the polygon's vertices for each case are defined as follows:

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#input","title":"Input","text":"Name Type Description input autoware_perception_msgs::msg::DetectedObjects input detected objects input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes. input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image. input/image_raw[0-7] sensor_msgs::msg::Image images for visualization."},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#output","title":"Output","text":"Name Type Description output autoware_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization, ~/debug/fused_objects autoware_perception_msgs::msg::DetectedObjects fused detected objects ~/debug/ignored_objects autoware_perception_msgs::msg::DetectedObjects not fused detected objects"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number int the number of input rois debug_mode bool If set to true, the node subscribes to the image topic and publishes an image with debug drawings. passthrough_lower_bound_probability_thresholds vector[double] If the existence_probability of a detected object is greater than the threshold, it is published in output. trust_distances vector[double] If the distance of a detected object from the origin of frame_id is greater than the threshold, it is published in output. min_iou_threshold double If the iou between detected objects and rois is greater than min_iou_threshold, the objects are classified as fused. use_roi_probability float If set to true, the algorithm uses existence_probability of ROIs to match with the that of detected objects. roi_probability_threshold double If the existence_probability of ROIs is greater than the threshold, matched detected objects are published in output. can_assign_matrix vector[int] association matrix between rois and detected_objects to check that two rois on images can be match"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-detected-object-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

POLYGON, one of the possible shapes of a DetectedObject, isn't supported yet.

"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/","title":"roi_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#roi_pointcloud_fusion","title":"roi_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#purpose","title":"Purpose","text":"

The roi_pointcloud_fusion node clusters the pointcloud based on Regions Of Interest (ROIs) detected by a 2D object detector, specifically for ROIs with the UNKNOWN label.

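A rough sketch of the idea (illustrative Python under simplified assumptions; project_point is an assumed 3D-to-pixel projection helper and the ROI dictionary fields are hypothetical): points whose image projection falls inside an UNKNOWN-labeled ROI are collected and then clustered.

def points_in_roi(points, roi, project_point):\n    # Keep the points whose image projection falls inside the 2D ROI.\n    selected = []\n    for p in points:\n        u, v = project_point(p)\n        if roi['x_min'] <= u <= roi['x_max'] and roi['y_min'] <= v <= roi['y_max']:\n            selected.append(p)\n    return selected\n\n# For each UNKNOWN-labeled ROI, the selected points would then be grouped by a\n# 2D distance tolerance (cluster_2d_tolerance) and kept only if the resulting\n# cluster size lies within [min_cluster_size, max_cluster_size].\n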
"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 output pointcloud as default of interface output_clusters tier4_perception_msgs::msg::DetectedObjectsWithFeature output clusters debug/clusters sensor_msgs/msg/PointCloud2 colored cluster pointcloud for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid cluster_2d_tolerance double cluster tolerance measured in radial direction rois_number int the number of input rois"},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/roi-pointcloud-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/","title":"segmentation_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#segmentation_pointcloud_fusion","title":"segmentation_pointcloud_fusion","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#purpose","title":"Purpose","text":"

The segmentation_pointcloud_fusion node filters out pointcloud points that belong to less interesting regions, as defined by the semantic or instance segmentation output of a 2D image segmentation model.

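A minimal sketch of the mask-based filtering (illustrative Python; the projection helper and the point layout are assumptions of this sketch): each point is projected into the image, the semantic class id is read from the gray-scale mask at that pixel, and the point is dropped if its class is marked for filtering.

import numpy as np\n\ndef filter_by_mask(points, mask, project_point, filter_class_ids, max_distance=60.0):\n    # mask is the gray-scale segmentation image; each pixel value is a semantic class id.\n    kept = []\n    for p in points:\n        # Points beyond filter_distance_threshold are kept as-is.\n        if np.linalg.norm(p[:3]) > max_distance:\n            kept.append(p)\n            continue\n        u, v = project_point(p)  # assumed 3D -> pixel projection helper\n        ui, vi = int(u), int(v)\n        if 0 <= vi < mask.shape[0] and 0 <= ui < mask.shape[1]:\n            if mask[vi, ui] in filter_class_ids:\n                continue  # drop points belonging to filtered classes (e.g. ROAD, BUILDING)\n        kept.append(p)\n    return kept\n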
"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] sensor_msgs::msg::Image A gray-scale image of semantic segmentation mask, the pixel value is semantic class id input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 output filtered pointcloud"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range filter_semantic_label_target.UNKNOWN boolean If true, UNKNOWN class of semantic will be filtered. 0 N/A filter_semantic_label_target.BUILDING boolean If true, BUILDING class of semantic will be filtered. 1 N/A filter_semantic_label_target.WALL boolean If true, WALL class of semantic will be filtered. 1 N/A filter_semantic_label_target.OBSTACLE boolean If true, OBSTACLE class of semantic will be filtered. 0 N/A filter_semantic_label_target.TRAFFIC_LIGHT boolean If true, TRAFFIC_LIGHT class of semantic will be filtered. 0 N/A filter_semantic_label_target.TRAFFIC_SIGN boolean If true, TRAFFIC_SIGN class of semantic will be filtered. 0 N/A filter_semantic_label_target.PERSON boolean If true, PERSON class of semantic will be filtered. 0 N/A filter_semantic_label_target.VEHICLE boolean If true, VEHICLE class of semantic will be filtered. 0 N/A filter_semantic_label_target.BIKE boolean If true, BIKE class of semantic will be filtered. 0 N/A filter_semantic_label_target.ROAD boolean If true, ROAD class of semantic will be filtered. 1 N/A filter_semantic_label_target.SIDEWALK boolean If true, SIDEWALK class of semantic will be filtered. 0 N/A filter_semantic_label_target.ROAD_PAINT boolean If true, ROAD_PAINT class of semantic will be filtered. 0 N/A filter_semantic_label_target.CURBSTONE boolean If true, CURBSTONE class of semantic will be filtered. 0 N/A filter_semantic_label_target.CROSSWALK boolean If true, CROSSWALK class of semantic will be filtered. 0 N/A filter_semantic_label_target.VEGETATION boolean If true, VEGETATION class of semantic will be filtered. 1 N/A filter_semantic_label_target.SKY boolean If true, SKY class of semantic will be filtered. 0 N/A filter_distance_threshold float A maximum distance of pointcloud to apply filter [m]. 60 \u22650.0 is_publish_debug_mask boolean If true, debug mask image will be published. 
0 N/A"},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/","title":"autoware_lidar_apollo_instance_segmentation","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#autoware_lidar_apollo_instance_segmentation","title":"autoware_lidar_apollo_instance_segmentation","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#purpose","title":"Purpose","text":"

This node segments 3D pointcloud data from lidar sensors into obstacles, e.g., cars, trucks, bicycles, and pedestrians, based on a CNN model and an obstacle clustering method.

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

See the original design by Apollo.

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#input","title":"Input","text":"Name Type Description input/pointcloud sensor_msgs/PointCloud2 Pointcloud data from lidar sensors"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#output","title":"Output","text":"Name Type Description output/labeled_clusters tier4_perception_msgs/DetectedObjectsWithFeature Detected objects with labeled pointcloud cluster. debug/instance_pointcloud sensor_msgs/PointCloud2 Segmented pointcloud for visualization."},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold double 0.8 If the score of a detected object is lower than this value, the object is ignored. range int 60 Half of the length of feature map sides. [m] width int 640 The grid width of feature map. height int 640 The grid height of feature map. engine_file string \"vls-128.engine\" The name of TensorRT engine file for CNN model. prototxt_file string \"vls-128.prototxt\" The name of prototxt file for CNN model. caffemodel_file string \"vls-128.caffemodel\" The name of caffemodel file for CNN model. use_intensity_feature bool true The flag to use intensity feature of pointcloud. use_constant_feature bool false The flag to use direction and distance feature of pointcloud. target_frame string \"base_link\" Pointcloud data is transformed into this frame. z_offset int 2 z offset from target frame. [m] build_only bool false shutdown the node after TensorRT engine file is built"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is no training code for the CNN model.

"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#note","title":"Note","text":"

This package makes use of three external code bases. The trained files are provided by Apollo and are automatically downloaded during the build.

Original URL

Supported lidars are Velodyne 16, 64, and 128; Velodyne 32 and other lidars can also be used with good accuracy.

  1. apollo 3D Obstacle Perception description

    /******************************************************************************\n* Copyright 2017 The Apollo Authors. All Rights Reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*****************************************************************************/\n
  2. tensorRTWrapper : It is used under the lib directory.

    MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
  3. autoware_perception description

    /*\n* Copyright 2018-2019 Autoware Foundation. All rights reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n
"},{"location":"perception/autoware_lidar_apollo_instance_segmentation/#special-thanks","title":"Special thanks","text":""},{"location":"perception/autoware_lidar_centerpoint/","title":"autoware_lidar_centerpoint","text":""},{"location":"perception/autoware_lidar_centerpoint/#autoware_lidar_centerpoint","title":"autoware_lidar_centerpoint","text":""},{"location":"perception/autoware_lidar_centerpoint/#purpose","title":"Purpose","text":"

autoware_lidar_centerpoint is a package for detecting dynamic 3D objects.

"},{"location":"perception/autoware_lidar_centerpoint/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In this implementation, CenterPoint [1] uses a PointPillars-based [2] network and performs inference with TensorRT.

We trained the models using https://github.com/open-mmlab/mmdetection3d.

"},{"location":"perception/autoware_lidar_centerpoint/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_lidar_centerpoint/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/autoware_lidar_centerpoint/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects detected objects debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped cyclic time (msg) debug/processing_time_ms tier4_debug_msgs::msg::Float64Stamped processing time (ms)"},{"location":"perception/autoware_lidar_centerpoint/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_lidar_centerpoint/#ml-model-parameters","title":"ML Model Parameters","text":"

Note that these parameters are tied to the ONNX file and are fixed during the training phase. Be careful to change the ONNX file as well when changing any of these parameters, and whenever you update the ONNX file, do not forget to check these values.

Name Type Default Value Description model_params.class_names list[string] [\"CAR\", \"TRUCK\", \"BUS\", \"BICYCLE\", \"PEDESTRIAN\"] list of class names for model outputs model_params.point_feature_size int 4 number of features per point in the point cloud model_params.max_voxel_size int 40000 maximum number of voxels model_params.point_cloud_range list[double] [-76.8, -76.8, -4.0, 76.8, 76.8, 6.0] detection range [min_x, min_y, min_z, max_x, max_y, max_z] [m] model_params.voxel_size list[double] [0.32, 0.32, 10.0] size of each voxel [x, y, z] [m] model_params.downsample_factor int 1 downsample factor for coordinates model_params.encoder_in_feature_size int 9 number of input features to the encoder model_params.has_variance bool false true if the model outputs pose variance as well as pose for each bbox model_params.has_twist bool false true if the model outputs velocity as well as pose for each bbox"},{"location":"perception/autoware_lidar_centerpoint/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file build_only bool false shutdown the node after the TensorRT engine file is built trt_precision string fp16 TensorRT inference precision: fp32 or fp16 post_process_params.score_threshold double 0.4 detected objects with a score less than this threshold are ignored post_process_params.yaw_norm_thresholds list[double] [0.3, 0.3, 0.3, 0.3, 0.0] an array of per-class thresholds on the norm of the yaw direction vector [rad]; objects below the threshold are ignored post_process_params.iou_nms_search_distance_2d double - if two objects are farther apart than this value, NMS isn't applied post_process_params.iou_nms_threshold double - IoU threshold for the IoU-based Non-Maximum Suppression post_process_params.has_twist boolean false indicates whether the model outputs a twist value densification_params.world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_params.num_past_frames int 1 the number of past frames to fuse with the current frame"},{"location":"perception/autoware_lidar_centerpoint/#the-build_only-option","title":"The build_only option","text":"

The autoware_lidar_centerpoint node has a build_only option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all the ROS parameters into the .param.yaml file in Autoware Universe, the build_only option has not been moved to the .param.yaml file for now, because it may be used as a flag to execute the build as a pre-task. You can execute the build with the following command:

ros2 launch autoware_lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_tiny model_path:=/home/autoware/autoware_data/lidar_centerpoint model_param_path:=$(ros2 pkg prefix autoware_lidar_centerpoint --share)/config/centerpoint_tiny.param.yaml build_only:=true\n
"},{"location":"perception/autoware_lidar_centerpoint/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_lidar_centerpoint/#trained-models","title":"Trained Models","text":"

You can download the trained models in ONNX format by clicking the links below.

Centerpoint was trained on nuScenes (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. Centerpoint tiny was trained on Argoverse 2 (~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.

"},{"location":"perception/autoware_lidar_centerpoint/#training-centerpoint-model-and-deploying-to-the-autoware","title":"Training CenterPoint Model and Deploying to the Autoware","text":""},{"location":"perception/autoware_lidar_centerpoint/#overview","title":"Overview","text":"

This guide provides instructions on training a CenterPoint model using the mmdetection3d repository and seamlessly deploying it within Autoware.

"},{"location":"perception/autoware_lidar_centerpoint/#installation","title":"Installation","text":""},{"location":"perception/autoware_lidar_centerpoint/#install-prerequisites","title":"Install prerequisites","text":"

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-centerpoint python=3.8 -y\nconda activate train-centerpoint\n

Step 3. Install PyTorch

Please ensure you have PyTorch installed and compatible with CUDA 11.6, as this is a requirement for the current version of Autoware.

conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia\n
"},{"location":"perception/autoware_lidar_centerpoint/#install-mmdetection3d","title":"Install mmdetection3d","text":"

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim\nmim install mmengine\nmim install 'mmcv>=2.0.0rc4'\nmim install 'mmdet>=3.0.0rc5, <3.3.0'\n

Step 2. Install mmdetection3d forked repository

We have introduced several valuable enhancements in our fork of the mmdetection3d repository. Notably, we've made the PointPillars z voxel feature input optional to maintain compatibility with the original paper. In addition, we've integrated a PyTorch-to-ONNX converter and a T4 format reader for added functionality.

git clone https://github.com/autowarefoundation/mmdetection3d.git\ncd mmdetection3d\npip install -v -e .\n
"},{"location":"perception/autoware_lidar_centerpoint/#use-training-repository-with-docker","title":"Use Training Repository with Docker","text":"

Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.

Clone our fork of the mmdetection3d repository:

git clone https://github.com/autowarefoundation/mmdetection3d.git\n

Build the Docker image by running the following command:

cd mmdetection3d\ndocker build -t mmdetection3d -f docker/Dockerfile .\n

Run the Docker container:

docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d\n
"},{"location":"perception/autoware_lidar_centerpoint/#preparing-nuscenes-dataset-for-training","title":"Preparing NuScenes dataset for training","text":"

Step 1. Download the NuScenes dataset from the official website and extract the dataset to a folder of your choice.

Note: The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.

Step 2. Create a symbolic link to the dataset folder

ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/\n

Step 3. Prepare the NuScenes data by running:

cd mmdetection3d\npython tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes\n
"},{"location":"perception/autoware_lidar_centerpoint/#training-centerpoint-with-nuscenes-dataset","title":"Training CenterPoint with NuScenes Dataset","text":""},{"location":"perception/autoware_lidar_centerpoint/#prepare-the-config-file","title":"Prepare the config file","text":"

The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is located at mmdetection3d/projects/AutowareCenterPoint/configs. This configuration file is a derived version of this centerpoint configuration file from mmdetection3D. In this custom configuration, the use_voxel_center_z parameter is set as False to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as [32, 32].

The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file. This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.

"},{"location":"perception/autoware_lidar_centerpoint/#start-training","title":"Start training","text":"
python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom\n
"},{"location":"perception/autoware_lidar_centerpoint/#evaluation-of-the-trained-model","title":"Evaluation of the trained model","text":"

For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors: 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes, with 6905 car, 3951 pedestrian, 75 cyclist, 162 bus, and 326 truck 3D annotations. In the sample dataset, frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes, including training, evaluation, and fine-tuning of models. It is organized in the T4 format.

"},{"location":"perception/autoware_lidar_centerpoint/#download-the-sample-dataset","title":"Download the sample dataset","text":"
wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz\n#Extract the dataset to a folder of your choice\ntar -xvf lidar_detection_sample_dataset.tar.gz\n#Create a symbolic link to the dataset folder\nln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/\n
"},{"location":"perception/autoware_lidar_centerpoint/#prepare-dataset-and-evaluate-trained-model","title":"Prepare dataset and evaluate trained model","text":"

Create .pkl files for training, evaluation, and testing.

The dataset was formatted according to T4Dataset specifications, with 'sample_dataset' designated as one of its versions.

python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2\n

Run evaluation

python tools/test.py projects/AutowareCenterPoint/configs/centerpoint_custom_test.py /PATH/OF/THE/CHECKPOINT  --task lidar_det\n

Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at the base_link location of the vehicle.

"},{"location":"perception/autoware_lidar_centerpoint/#deploying-centerpoint-model-to-autoware","title":"Deploying CenterPoint model to Autoware","text":""},{"location":"perception/autoware_lidar_centerpoint/#convert-centerpoint-pytorch-model-to-onnx-format","title":"Convert CenterPoint PyTorch model to ONNX Format","text":"

The autoware_lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network, such as preprocessing operations, are implemented externally. In our fork of the mmdetection3d repository, we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format. You can find it in the mmdetection3d/projects/AutowareCenterPoint directory.

python projects/AutowareCenterPoint/centerpoint_onnx_converter.py --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models\n
"},{"location":"perception/autoware_lidar_centerpoint/#create-the-config-file-for-the-custom-model","title":"Create the config file for the custom model","text":"

Create a new config file named centerpoint_custom.param.yaml under the config directory of the autoware_lidar_centerpoint node. Set the parameters of the config file, such as point_cloud_range, point_feature_size, and voxel_size, according to the training config file.

/**:\n  ros__parameters:\n    class_names: [\"CAR\", \"TRUCK\", \"BUS\", \"BICYCLE\", \"PEDESTRIAN\"]\n    point_feature_size: 4\n    max_voxel_size: 40000\n    point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]\n    voxel_size: [0.2, 0.2, 8.0]\n    downsample_factor: 1\n    encoder_in_feature_size: 9\n    # post-process params\n    circle_nms_dist_threshold: 0.5\n    iou_nms_search_distance_2d: 10.0\n    iou_nms_threshold: 0.1\n    yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0]\n
"},{"location":"perception/autoware_lidar_centerpoint/#launch-the-lidar_centerpoint-node","title":"Launch the lidar_centerpoint node","text":"
cd /YOUR/AUTOWARE/PATH/Autoware\nsource install/setup.bash\nros2 launch autoware_lidar_centerpoint lidar_centerpoint.launch.xml  model_name:=centerpoint_custom  model_path:=/PATH/TO/ONNX/FILE/\n
"},{"location":"perception/autoware_lidar_centerpoint/#changelog","title":"Changelog","text":""},{"location":"perception/autoware_lidar_centerpoint/#v1-20220706","title":"v1 (2022/07/06)","text":"Name URLs Description centerpoint pts_voxel_encoder pts_backbone_neck_head There is a single change due to the limitation in the implementation of this package. num_filters=[32, 32] of PillarFeatureNet centerpoint_tiny pts_voxel_encoder pts_backbone_neck_head The same model as default of v0.

These changes are compared with this configuration.

"},{"location":"perception/autoware_lidar_centerpoint/#v0-20211203","title":"v0 (2021/12/03)","text":"Name URLs Description default pts_voxel_encoder pts_backbone_neck_head There are two changes from the original CenterPoint architecture. num_filters=[32] of PillarFeatureNet and ds_layer_strides=[2, 2, 2] of RPN"},{"location":"perception/autoware_lidar_centerpoint/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_lidar_centerpoint/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_lidar_centerpoint/#referencesexternal-links","title":"References/External links","text":"

[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).

[2] Lang, Alex H., et al. \"PointPillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[3] https://github.com/tianweiy/CenterPoint

[4] https://github.com/open-mmlab/mmdetection3d

[5] https://github.com/open-mmlab/OpenPCDet

[6] https://github.com/yukkysaito/autoware_perception

[7] https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars

[8] https://www.nuscenes.org/nuscenes

[9] https://www.argoverse.org/av2.html

"},{"location":"perception/autoware_lidar_centerpoint/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_lidar_centerpoint/#acknowledgment-deepenais-3d-annotation-tools-contribution","title":"Acknowledgment: deepen.ai's 3D Annotation Tools Contribution","text":"

Special thanks to Deepen AI for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset.

"},{"location":"perception/autoware_lidar_centerpoint/#legal-notice","title":"Legal Notice","text":"

The nuScenes dataset is released publicly for non-commercial use under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. Additional Terms of Use can be found at https://www.nuscenes.org/terms-of-use. To inquire about a commercial license please contact nuscenes@motional.com.

"},{"location":"perception/autoware_lidar_transfusion/","title":"autoware_lidar_transfusion","text":""},{"location":"perception/autoware_lidar_transfusion/#autoware_lidar_transfusion","title":"autoware_lidar_transfusion","text":""},{"location":"perception/autoware_lidar_transfusion/#purpose","title":"Purpose","text":"

The autoware_lidar_transfusion package is used for 3D object detection based on lidar data (x, y, z, intensity).

"},{"location":"perception/autoware_lidar_transfusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The implementation is based on the TransFusion [1] work. It uses the TensorRT library for data processing and network inference.

We trained the models using https://github.com/open-mmlab/mmdetection3d.

"},{"location":"perception/autoware_lidar_transfusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_lidar_transfusion/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 Input pointcloud."},{"location":"perception/autoware_lidar_transfusion/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_perception_msgs::msg::DetectedObjects Detected objects. debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped Cyclic time (ms). debug/pipeline_latency_ms tier4_debug_msgs::msg::Float64Stamped Pipeline latency time (ms). debug/processing_time/preprocess_ms tier4_debug_msgs::msg::Float64Stamped Preprocess (ms). debug/processing_time/inference_ms tier4_debug_msgs::msg::Float64Stamped Inference time (ms). debug/processing_time/postprocess_ms tier4_debug_msgs::msg::Float64Stamped Postprocess time (ms). debug/processing_time/total_ms tier4_debug_msgs::msg::Float64Stamped Total processing time (ms)."},{"location":"perception/autoware_lidar_transfusion/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_lidar_transfusion/#transfusion-node","title":"TransFusion node","text":"Name Type Description Default Range trt_precision string A precision of TensorRT engine. fp16 ['fp16', 'fp32'] cloud_capacity integer Capacity of the point cloud buffer (should be set to at least the maximum theoretical number of points). 2000000 \u22651 onnx_path string A path to ONNX model file. $(var model_path)/transfusion.onnx N/A engine_path string A path to TensorRT engine file. $(var model_path)/transfusion.engine N/A densification_num_past_frames integer A number of past frames to be considered as same input frame. 1 \u22650 densification_world_frame_id string A name of frame id where world coordinates system is defined with respect to. map N/A circle_nms_dist_threshold float A distance threshold between detections in NMS. 0.5 \u22650.0 iou_nms_search_distance_2d float A maximum distance value to search the nearest objects. 10.0 \u22650.0 iou_nms_threshold float A threshold value of NMS using IoU score. 0.1 \u22650.0\u22641.0 yaw_norm_thresholds array A thresholds array of direction vectors norm, all of objects with vector norm less than this threshold are ignored. [0.3, 0.3, 0.3, 0.3, 0.0] N/A score_threshold float A threshold value of confidence score, all of objects with score less than this threshold are ignored. 0.2 \u22650.0"},{"location":"perception/autoware_lidar_transfusion/#transfusion-model","title":"TransFusion model","text":"Name Type Description Default Range class_names array An array of class names will be predicted. ['CAR', 'TRUCK', 'BUS', 'BICYCLE', 'PEDESTRIAN'] N/A voxels_num array A maximum number of voxels [min, opt, max]. [5000, 30000, 60000] N/A point_cloud_range array An array of distance ranges of each class. [-76.8, -76.8, -3.0, 76.8, 76.8, 5.0] N/A voxel_size array Voxels size [x, y, z]. [0.3, 0.3, 8.0] N/A num_proposals integer Number of proposals at TransHead. 500 \u22651"},{"location":"perception/autoware_lidar_transfusion/#detection-class-remapper","title":"Detection class remapper","text":"Name Type Description Default Range allow_remapping_by_area_matrix array Whether to allow remapping of classes. The order of 8x8 matrix classes comes from ObjectClassification msg. 
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] N/A min_area_matrix array Minimum area for specific class to consider class remapping. [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.1, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] N/A max_area_matrix array Maximum area for specific class to consider class remapping. [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] N/A"},{"location":"perception/autoware_lidar_transfusion/#the-build_only-option","title":"The build_only option","text":"

The autoware_lidar_transfusion node has a build_only option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all the ROS parameters into the .param.yaml file in Autoware Universe, the build_only option has not been moved to the .param.yaml file for now, because it may be used as a flag to execute the build as a pre-task. You can execute the build with the following command:

ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml build_only:=true\n
"},{"location":"perception/autoware_lidar_transfusion/#the-log_level-option","title":"The log_level option","text":"

The default logging severity level for autoware_lidar_transfusion is info. For debugging purposes, the developer may decrease the severity level using the log_level parameter:

ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml log_level:=debug\n
"},{"location":"perception/autoware_lidar_transfusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This library operates on raw cloud data (bytes). It is assumed that the input pointcloud message has the following format:

[\n  sensor_msgs.msg.PointField(name='x', offset=0, datatype=7, count=1),\n  sensor_msgs.msg.PointField(name='y', offset=4, datatype=7, count=1),\n  sensor_msgs.msg.PointField(name='z', offset=8, datatype=7, count=1),\n  sensor_msgs.msg.PointField(name='intensity', offset=12, datatype=2, count=1)\n]\n

The input may contain other fields as well; the format shown is the required minimum. For debugging purposes, you can validate your pointcloud topic using this simple command:

ros2 topic echo <input_topic> --field fields\n
"},{"location":"perception/autoware_lidar_transfusion/#trained-models","title":"Trained Models","text":"

You can download the trained models in ONNX format by clicking the links below.

The model was trained on TIER IV's internal database (~11k lidar frames) for 50 epochs.

"},{"location":"perception/autoware_lidar_transfusion/#changelog","title":"Changelog","text":""},{"location":"perception/autoware_lidar_transfusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_lidar_transfusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_lidar_transfusion/#referencesexternal-links","title":"References/External links","text":"

[1] Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu and Chiew-Lan Tai. \"TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers.\" arXiv preprint arXiv:2203.11496 (2022).

[2] https://github.com/wep21/CUDA-TransFusion

[3] https://github.com/open-mmlab/mmdetection3d

[4] https://github.com/open-mmlab/OpenPCDet

[5] https://www.nuscenes.org/nuscenes

"},{"location":"perception/autoware_lidar_transfusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_map_based_prediction/","title":"map_based_prediction","text":""},{"location":"perception/autoware_map_based_prediction/#map_based_prediction","title":"map_based_prediction","text":""},{"location":"perception/autoware_map_based_prediction/#role","title":"Role","text":"

map_based_prediction is a module to predict the future paths (and their probabilities) of other vehicles and pedestrians according to the shape of the map and the surrounding environment.

"},{"location":"perception/autoware_map_based_prediction/#assumptions","title":"Assumptions","text":""},{"location":"perception/autoware_map_based_prediction/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_map_based_prediction/#flow-chart","title":"Flow chart","text":""},{"location":"perception/autoware_map_based_prediction/#path-prediction-for-road-users","title":"Path prediction for road users","text":""},{"location":"perception/autoware_map_based_prediction/#remove-old-object-history","title":"Remove old object history","text":"

Time-series object data is stored to determine the vehicle's route and to detect lane changes over several durations. The object data contains the object's position, speed, and time information.

"},{"location":"perception/autoware_map_based_prediction/#get-current-lanelet-and-update-object-history","title":"Get current lanelet and update Object history","text":"

Search one or more lanelets satisfying the following conditions for each target object and store them in the ObjectData.

"},{"location":"perception/autoware_map_based_prediction/#get-predicted-reference-path","title":"Get predicted reference path","text":"

The conditions for left lane change detection are:

The lane change logic is illustrated in the figure below. An example of how to tune the parameters is described later.

"},{"location":"perception/autoware_map_based_prediction/#tuning-lane-change-detection-logic","title":"Tuning lane change detection logic","text":"

Currently we provide three parameters to tune lane change detection:

You can change these parameters via rosparam; see the table below.

param name default value dist_threshold_for_lane_change_detection 1.0 [m] time_threshold_for_lane_change_detection 5.0 [s] cutoff_freq_of_velocity_for_lane_change_detection 0.1 [Hz]"},{"location":"perception/autoware_map_based_prediction/#tuning-threshold-parameters","title":"Tuning threshold parameters","text":"

Increasing these two parameters will slow down and stabilize the lane change estimation.

Normally, we recommend tuning only time_threshold_for_lane_change_detection because it is the more important factor for lane change decision.

"},{"location":"perception/autoware_map_based_prediction/#tuning-lateral-velocity-calculation","title":"Tuning lateral velocity calculation","text":"

Lateral velocity calculation is also a very important factor for lane change decision because it is used in the time domain decision.

The predicted time to reach the lane boundary is calculated by

\\[ t_{predicted} = \\dfrac{d_{lat}}{v_{lat}} \\]

where \\(d_{lat}\\) and \\(v_{lat}\\) represent the lateral distance to the lane boundary and the lateral velocity, respectively.

Lowering the cutoff frequency of the low-pass filter for lateral velocity will make the lane change decision more stable but slower. Our setting is very conservative, so you may increase this parameter if you want to make the lane change decision faster.

For additional information, here is how we calculate the lateral velocity.

lateral velocity calculation method equation description [applied] time derivative of lateral distance \\(\\dfrac{\\Delta d_{lat}}{\\Delta t}\\) Currently, we use this method to deal with winding roads. Since this time differentiation easily becomes noisy, we also use a low-pass filter to get smoothed velocity. [not applied] Object Velocity Projection to Lateral Direction \\(v_{obj} \\sin(\\theta)\\) Normally, object velocities are less noisy than the time derivative of lateral distance. But the yaw difference \\(\\theta\\) between the lane and object directions sometimes becomes discontinuous, so we did not adopt this method.

Currently, we use the former method with a low-pass filter to calculate the lateral velocity.

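As an illustration of the computation just described, here is a sketch (the first-order filter form is an assumption of this sketch, not necessarily the exact filter implemented in the node):

import math\n\ndef lowpass(prev_filtered, raw, dt, cutoff_freq_hz):\n    # First-order low-pass filter: a lower cutoff gives a smoother but slower estimate.\n    alpha = dt / (dt + 1.0 / (2.0 * math.pi * cutoff_freq_hz))\n    return prev_filtered + alpha * (raw - prev_filtered)\n\ndef time_to_lane_boundary(d_lat, v_lat):\n    # t_predicted = d_lat / v_lat; a non-positive lateral velocity means the\n    # object is not approaching the lane boundary.\n    if v_lat <= 0.0:\n        return float('inf')\n    return d_lat / v_lat\n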
"},{"location":"perception/autoware_map_based_prediction/#path-generation","title":"Path generation","text":"

Path generation is performed in the Frenet frame. The path is generated by the following steps:

  1. Get the Frenet frame of the reference path.
  2. Generate the Frenet coordinates of the object's current position and its final position.
  3. Optimize the path in the longitudinal and lateral coordinates of the Frenet frame, using quintic polynomials to fit the start and end conditions (see the sketch below).
  4. Convert the path back to global coordinates.

See paper [2] for more details.

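A minimal sketch of the quintic-polynomial fit used in step 3 (illustrative Python; the boundary conditions are position, velocity, and acceleration at the start and end of the planning horizon, and the example values are hypothetical):

import numpy as np\n\ndef quintic_coefficients(x0, v0, a0, xT, vT, aT, T):\n    # Solve for c in x(t) = c0 + c1*t + ... + c5*t**5 so that position, velocity,\n    # and acceleration match the boundary conditions at t = 0 and t = T.\n    A = np.array([\n        [1, 0, 0, 0, 0, 0],\n        [0, 1, 0, 0, 0, 0],\n        [0, 0, 2, 0, 0, 0],\n        [1, T, T**2, T**3, T**4, T**5],\n        [0, 1, 2 * T, 3 * T**2, 4 * T**3, 5 * T**4],\n        [0, 0, 2, 6 * T, 12 * T**2, 20 * T**3],\n    ], dtype=float)\n    b = np.array([x0, v0, a0, xT, vT, aT], dtype=float)\n    return np.linalg.solve(A, b)\n\n# Example: a lateral offset of 1.5 m converging to the lane center (0 m) in 4 s,\n# starting and ending with zero lateral velocity and acceleration.\ncoeffs = quintic_coefficients(1.5, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0)\n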
"},{"location":"perception/autoware_map_based_prediction/#tuning-lateral-path-shape","title":"Tuning lateral path shape","text":"

The lateral_control_time_horizon parameter supports tuning of the lateral path shape. This parameter is used to calculate the time to reach the reference path (mostly the center of the lane). The smaller the value, the more quickly the generated path converges to the reference path.

"},{"location":"perception/autoware_map_based_prediction/#pruning-predicted-paths-with-lateral-acceleration-constraint-for-vehicle-obstacles","title":"Pruning predicted paths with lateral acceleration constraint (for vehicle obstacles)","text":"

It is possible to apply a maximum lateral acceleration constraint to the generated vehicle paths. This check verifies whether the vehicle can follow the predicted path without surpassing the lateral acceleration threshold max_lateral_accel when taking a curve. If it cannot, the check verifies whether the vehicle can slow down in time to take the curve with a deceleration of min_acceleration_before_curve and comply with the constraint. If that is also not possible, the path is eliminated.

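A sketch of this check under simplified assumptions (illustrative Python; the constant-deceleration reachability test and the input layout are simplifications of what the node actually does):

def satisfies_lateral_accel_constraint(initial_speed, path_points,\n                                       max_lateral_accel=2.0,\n                                       min_acceleration_before_curve=-2.0):\n    # path_points: (speed [m/s], curvature [1/m], distance from start [m]) tuples.\n    for speed, curvature, distance in path_points:\n        # Lateral acceleration on a curve: a_lat = v^2 * curvature.\n        if speed * speed * abs(curvature) <= max_lateral_accel:\n            continue\n        # Squared speed that would satisfy the constraint at this curvature.\n        required_speed_sq = max_lateral_accel / abs(curvature)\n        # Constant-deceleration reachability: v^2 = v0^2 + 2 * a * d.\n        reachable_speed_sq = initial_speed**2 + 2.0 * min_acceleration_before_curve * distance\n        if reachable_speed_sq > required_speed_sq:\n            return False  # cannot slow down in time; the path is eliminated\n    return True\n
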
Currently we provide three parameters to tune the lateral acceleration constraint:

You can change these parameters via rosparam; see the table below.

param name default value check_lateral_acceleration_constraints false [bool] max_lateral_accel 2.0 [m/s^2] min_acceleration_before_curve -2.0 [m/s^2]"},{"location":"perception/autoware_map_based_prediction/#using-vehicle-acceleration-for-path-prediction-for-vehicle-obstacles","title":"Using Vehicle Acceleration for Path Prediction (for Vehicle Obstacles)","text":"

By default, the map_based_prediction module uses the current obstacle's velocity to compute its predicted path length. However, it is possible to use the obstacle's current acceleration to calculate its predicted path's length.

"},{"location":"perception/autoware_map_based_prediction/#decaying-acceleration-model","title":"Decaying Acceleration Model","text":"

Since this module tries to predict the vehicle's path several seconds into the future, it is not practical to consider the current vehicle's acceleration as constant (it is not assumed the vehicle will be accelerating for prediction_time_horizon seconds after detection). Instead, a decaying acceleration model is used. With the decaying acceleration model, a vehicle's acceleration is modeled as:

\[ a(t) = a_{t0} \cdot e^{-\lambda \cdot t} \]

where \(a_{t0}\) is the vehicle acceleration at the time of detection \(t_{0}\), and \(\lambda\) is the decay constant, \(\lambda = \ln(2) / hl\), where \(hl\) is the exponential's half-life.

Furthermore, integrating \(a(t)\) over time gives the equations for the velocity \(v(t)\) and the distance \(x(t)\) as:

\[ v(t) = v_{t0} + \dfrac{a_{t0}}{\lambda} \left(1 - e^{-\lambda \cdot t}\right) \]

and

\[ x(t) = x_{t0} + \left(v_{t0} + \dfrac{a_{t0}}{\lambda}\right) t + \dfrac{a_{t0}}{\lambda^{2}} \left(e^{-\lambda \cdot t} - 1\right) \]

With this model, the influence of the vehicle's detected instantaneous acceleration on the predicted path's length is diminished but still considered. This feature also considers that the obstacle might not accelerate past its road's speed limit (multiplied by a tunable factor).

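The three equations above transcribe directly into code; the following is a sketch (the default half-life value mirrors the acceleration_exponential_half_life parameter described below):

import math\n\ndef decaying_accel_state(t, x0, v0, a0, half_life=2.5):\n    # Decay constant: lambda = ln(2) / half-life.\n    lam = math.log(2.0) / half_life\n    a = a0 * math.exp(-lam * t)\n    v = v0 + a0 / lam * (1.0 - math.exp(-lam * t))\n    x = x0 + (v0 + a0 / lam) * t + a0 / lam**2 * (math.exp(-lam * t) - 1.0)\n    return x, v, a\n\n# The predicted speed would additionally be capped at the road speed limit\n# multiplied by speed_limit_multiplier (see the parameter table below).\n
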
Currently, we provide three parameters to tune the use of obstacle acceleration for path prediction:

You can change these parameters via rosparam; see the table below.

Param Name Default Value use_vehicle_acceleration false [bool] acceleration_exponential_half_life 2.5 [s] speed_limit_multiplier 1.5 []"},{"location":"perception/autoware_map_based_prediction/#path-prediction-for-crosswalk-users","title":"Path prediction for crosswalk users","text":"

This module treats pedestrians and bicycles as objects that use the crosswalk, and outputs predicted paths based on the map and the object's estimated velocity, assuming the object intends to cross the crosswalk, if the object satisfies at least one of the following conditions:

If there is a reachable crosswalk entry point within the prediction_time_horizon and the object satisfies the above condition, this module outputs an additional predicted path that crosses to the opposite side via the crosswalk entry point.

This module takes the corresponding traffic light information into account. When a RED signal is indicated, we assume the target object will not walk across. In addition, if the target object is stopped (not moving) despite a GREEN signal, we assume the target object will not walk across either. This prediction comes from the assumption that the object should be moving if the traffic light is green and the object intends to cross.

If the target object is inside the road or crosswalk, this module outputs one or two additional predicted paths that reach the exit points of the crosswalk. The number of predicted paths depends on whether the object is moving. If the object is moving, this module outputs one predicted path toward the exit point that lies in the direction of the object's movement. On the other hand, if the object has stopped, it is impossible to infer which exit point the object wants to reach, so this module outputs two predicted paths, one toward each exit point. The traffic-light gating described above is sketched below.

"},{"location":"perception/autoware_map_based_prediction/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_map_based_prediction/#input","title":"Input","text":"Name Type Description ~/perception/object_recognition/tracking/objects autoware_perception_msgs::msg::TrackedObjects tracking objects without predicted path. ~/vector_map autoware_map_msgs::msg::LaneletMapBin binary data of Lanelet2 Map. ~/perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray rearranged information on the corresponding traffic lights"},{"location":"perception/autoware_map_based_prediction/#output","title":"Output","text":"Name Type Description ~/input/objects autoware_perception_msgs::msg::TrackedObjects tracking objects. Default is set to /perception/object_recognition/tracking/objects ~/output/objects autoware_perception_msgs::msg::PredictedObjects tracking objects with predicted path. ~/objects_path_markers visualization_msgs::msg::MarkerArray marker for visualization. ~/debug/processing_time_ms std_msgs::msg::Float64 processing time of this module. ~/debug/cyclic_time_ms std_msgs::msg::Float64 cyclic time of this module."},{"location":"perception/autoware_map_based_prediction/#parameters","title":"Parameters","text":"Parameter Unit Type Description enable_delay_compensation [-] bool flag to enable the time delay compensation for the position of the object prediction_time_horizon [s] double predict time duration for predicted path lateral_control_time_horizon [s] double time duration for predicted path will reach the reference path (mostly center of the lane) prediction_sampling_delta_time [s] double sampling time for points in predicted path min_velocity_for_map_based_prediction [m/s] double apply map-based prediction to the objects with higher velocity than this value min_crosswalk_user_velocity [m/s] double minimum velocity used when crosswalk user's velocity is calculated max_crosswalk_user_delta_yaw_threshold_for_lanelet [rad] double maximum yaw difference between crosswalk user and lanelet to use in path prediction for crosswalk users dist_threshold_for_searching_lanelet [m] double The threshold of the angle used when searching for the lane to which the object belongs delta_yaw_threshold_for_searching_lanelet [rad] double The threshold of the angle used when searching for the lane to which the object belongs sigma_lateral_offset [m] double Standard deviation for lateral position of objects sigma_yaw_angle_deg [deg] double Standard deviation yaw angle of objects object_buffer_time_length [s] double Time span of object history to store the information history_time_length [s] double Time span of object information used for prediction prediction_time_horizon_rate_for_validate_shoulder_lane_length [-] double prediction path will disabled when the estimated path length exceeds lanelet length. This parameter control the estimated path length"},{"location":"perception/autoware_map_based_prediction/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_map_based_prediction/#reference","title":"Reference","text":"
  1. M. Werling, J. Ziegler, S. Kammel, and S. Thrun, \u201cOptimal trajectory generation for dynamic street scenarios in a Frenet frame,\u201d IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA, May 2010.
  2. A. Houenou, P. Bonnifait, V. Cherfaoui, and Wen Yao, \u201cVehicle trajectory prediction based on motion model and maneuver recognition,\u201d in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, nov 2013, pp. 4363-4369.
"},{"location":"perception/autoware_multi_object_tracker/","title":"multi_object_tracker","text":""},{"location":"perception/autoware_multi_object_tracker/#multi_object_tracker","title":"multi_object_tracker","text":""},{"location":"perception/autoware_multi_object_tracker/#purpose","title":"Purpose","text":"

The detection results are processed over a time series. The main purpose is to assign IDs to detected objects and to estimate their velocities.

"},{"location":"perception/autoware_multi_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This multi-object tracker consists of data association and an EKF (Extended Kalman Filter).

"},{"location":"perception/autoware_multi_object_tracker/#data-association","title":"Data association","text":"

The data association performs maximum-score matching, formulated as a min-cost max-flow problem. In this package, muSSP [1] is used as the solver. In addition, when associating observations to trackers, the data association applies gates such as the object's area from the BEV, the Mahalanobis distance, and the maximum distance, depending on the class label.
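For illustration, a simplified sketch of the gating step (hypothetical names; the actual gates and thresholds are class-dependent and configured via the matrices listed below):

// Sketch: an observation is considered for association with a tracker only if\n// all gates pass; surviving pairs are handed to the muSSP solver.\nbool passes_gates(double distance, double bev_area, double mahalanobis_distance,\ndouble max_distance, double min_area, double max_area, double max_mahalanobis)\n{\nif (distance > max_distance) return false; // maximum distance gate\nif (bev_area < min_area || bev_area > max_area) return false; // BEV area gate\nif (mahalanobis_distance > max_mahalanobis) return false; // Mahalanobis distance gate\nreturn true;\n}\n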

"},{"location":"perception/autoware_multi_object_tracker/#ekf-tracker","title":"EKF Tracker","text":"

Models for pedestrians, bicycles (motorcycles), cars, and unknown objects are available. The pedestrian and bicycle trackers run simultaneously with their respective EKF models to enable transitions between pedestrian and bicycle tracking. Big vehicles such as trucks and buses are difficult to distinguish from passenger cars and tend to be unstable, so separate models are prepared for passenger cars and big vehicles, and both are run simultaneously with their respective EKF models to ensure stability.

"},{"location":"perception/autoware_multi_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_multi_object_tracker/#input","title":"Input","text":"

Multiple inputs are pre-defined in the input channel parameters (described below), and the set of inputs to use can be configured.

Name Type Description selected_input_channels std::vector<std::string> array of channel names "},{"location":"perception/autoware_multi_object_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::TrackedObjects tracked objects"},{"location":"perception/autoware_multi_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_multi_object_tracker/#input-channel-parameters","title":"Input Channel parameters","text":"Name Type Description Default Range topic string The ROS topic name for the input channel. /perception/object_recognition/detection/objects N/A can_spawn_new_tracker boolean Indicates if the input channel can spawn new trackers. True N/A optional.name string The name of the input channel. detected_objects N/A optional.short_name string The short name of the input channel. all N/A"},{"location":"perception/autoware_multi_object_tracker/#core-parameters","title":"Core Parameters","text":" Name Type Description Default Range car_tracker string Tracker model for car class. multi_vehicle_tracker N/A truck_tracker string Tracker model for truck class. multi_vehicle_tracker N/A bus_tracker string Tracker model for bus class. multi_vehicle_tracker N/A trailer_tracker string Tracker model for trailer class. multi_vehicle_tracker N/A pedestrian_tracker string Tracker model for pedestrian class. pedestrian_and_bicycle_tracker N/A bicycle_tracker string Tracker model for bicycle class. pedestrian_and_bicycle_tracker N/A motorcycle_tracker string Tracker model for motorcycle class. pedestrian_and_bicycle_tracker N/A publish_rate float Timer frequency to output with delay compensation. 10.0 N/A world_frame_id string Object kinematics definition frame. map N/A enable_delay_compensation boolean If True, tracker use timers to schedule publishers and use prediction step to extrapolate object state at desired timestamp. False N/A consider_odometry_uncertainty boolean If True, consider odometry uncertainty in tracking. False N/A tracker_lifetime float Lifetime of the tracker in seconds. 1.0 N/A min_known_object_removal_iou float Minimum IOU between associated objects with known label to remove younger tracker 0.1 N/A min_unknown_object_removal_iou float Minimum IOU between associated objects with unknown label to remove unknown tracker 0.001 N/A distance_threshold float Distance threshold in meters. 5.0 N/A confident_count_threshold.UNKNOWN float Number of measurements to consider tracker as confident for unknown object classes. 3 N/A confident_count_threshold.CAR float Number of measurements to consider tracker as confident for car object classes. 3 N/A confident_count_threshold.TRUCK float Number of measurements to consider tracker as confident for truck object classes. 3 N/A confident_count_threshold.BUS float Number of measurements to consider tracker as confident for bus object classes. 3 N/A confident_count_threshold.TRAILER float Number of measurements to consider tracker as confident for trailer object classes. 3 N/A confident_count_threshold.MOTORBIKE float Number of measurements to consider tracker as confident for motorbike object classes. 3 N/A confident_count_threshold.BICYCLE float Number of measurements to consider tracker as confident for bicycle object classes. 3 N/A confident_count_threshold.PEDESTRIAN float Number of measurements to consider tracker as confident for pedestrian object classes. 3 N/A publish_processing_time boolean Enable to publish debug message of process time information. 
False N/A publish_tentative_objects boolean Enable to publish tentative tracked objects, which have lower confidence. False N/A publish_debug_markers boolean Enable to publish debug markers, which indicates association of multi-inputs, existence probability of each detection. False N/A diagnostics_warn_delay float Delay threshold for warning diagnostics in seconds. 0.5 N/A diagnostics_error_delay float Delay threshold for error diagnostics in seconds. 1.0 N/A Name Type Description Default Range can_assign_matrix array Assignment table for data association. [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1] N/A max_dist_matrix array Maximum distance table for data association. [4.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 3.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0, 3.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0] N/A max_area_matrix array Maximum area table for data association. [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 12.1, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 36.0, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 60.0, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 60.0, 12.1, 36.0, 60.0, 60.0, 10000.0, 10000.0, 10000.0, 2.5, 10000.0, 10000.0, 10000.0, 10000.0, 2.5, 2.5, 1.0, 2.5, 10000.0, 10000.0, 10000.0, 10000.0, 2.5, 2.5, 1.0, 2.0, 10000.0, 10000.0, 10000.0, 10000.0, 1.5, 1.5, 1.0] N/A min_area_matrix array Minimum area table for data association. [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.6, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 6.0, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 10.0, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 10.0, 3.6, 6.0, 10.0, 10.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1, 0.001, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1, 0.001, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1] N/A max_rad_matrix array Maximum angle table for data association. [3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15] N/A min_iou_matrix array A matrix that represents the minimum Intersection over Union (IoU) limit allowed for assignment. [0.0001, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.0001] N/A"},{"location":"perception/autoware_multi_object_tracker/#simulation-parameters","title":"Simulation parameters","text":"Name Type Description Default Range car_tracker string Tracker model for car class. pass_through_tracker N/A truck_tracker string Tracker model for truck class. pass_through_tracker N/A bus_tracker string Tracker model for bus class. pass_through_tracker N/A pedestrian_tracker string Tracker model for pedestrian class. pass_through_tracker N/A bicycle_tracker string Tracker model for bicycle class. 
pass_through_tracker N/A motorcycle_tracker string Tracker model for motorcycle class. pass_through_tracker N/A"},{"location":"perception/autoware_multi_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

See the model explanations.

"},{"location":"perception/autoware_multi_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_multi_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_multi_object_tracker/#evaluation-of-mussp","title":"Evaluation of muSSP","text":"

According to our evaluation, muSSP is faster than normal SSP when the matrix size is more than 100.

Execution time for varying matrix size at 95% sparsity. In real data, the sparsity was often around 95%.

Execution time for varying the sparsity with matrix size 100.

"},{"location":"perception/autoware_multi_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

This package makes use of external code.

Name License Original Repository muSSP Apache-2.0 https://github.com/yu-lab-vt/muSSP

[1] C. Wang, Y. Wang, Y. Wang, C.-t. Wu, and G. Yu, \u201cmuSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking,\u201d NeurIPS, 2019

"},{"location":"perception/autoware_multi_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_multi_object_tracker/models/","title":"Models used in this module","text":""},{"location":"perception/autoware_multi_object_tracker/models/#models-used-in-this-module","title":"Models used in this module","text":""},{"location":"perception/autoware_multi_object_tracker/models/#tracking-model","title":"Tracking model","text":""},{"location":"perception/autoware_multi_object_tracker/models/#ctrv-model-1","title":"CTRV model [1]","text":"

The CTRV (Constant Turn Rate and Velocity) model assumes a constant turn rate and a constant velocity magnitude.

\\[ \\begin{aligned} x_{k+1} & = x_{k} + v_{k} \\cos(\\psi_k) \\cdot {d t} \\\\ y_{k+1} & = y_{k} + v_{k} \\sin(\\psi_k) \\cdot {d t} \\\\ \\psi_{k+1} & = \\psi_k + \\dot\\psi_{k} \\cdot {d t} \\\\ v_{k+1} & = v_{k} \\\\ \\dot\\psi_{k+1} & = \\dot\\psi_{k} \\\\ \\end{aligned} \\] \\[ A = \\begin{bmatrix} 1 & 0 & -v \\sin(\\psi) \\cdot dt & \\cos(\\psi) \\cdot dt & 0 \\\\ 0 & 1 & v \\cos(\\psi) \\cdot dt & \\sin(\\psi) \\cdot dt & 0 \\\\ 0 & 0 & 1 & 0 & dt \\\\ 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 1 \\\\ \\end{bmatrix} \\]"},{"location":"perception/autoware_multi_object_tracker/models/#kinematic-bicycle-model-2","title":"Kinematic bicycle model [2]","text":"

The kinematic bicycle model uses the slip angle \\(\\beta\\) and the velocity \\(v\\) to calculate the yaw update. The merit of this model is that it prevents unintended yaw rotation when the vehicle is stopped.

\\[ \\begin{aligned} x_{k+1} & = x_{k} + v_{k} \\cos \\left( \\psi_{k}+\\beta_{k} \\right) {d t} -\\frac{1}{2} \\left\\lbrace w_k v_k \\sin \\left(\\psi_{k}+\\beta_{k} \\right) \\right\\rbrace {d t}^2\\\\ y_{k+1} & = y_{k} + v_{k} \\sin \\left( \\psi_{k}+\\beta_{k} \\right) {d t} +\\frac{1}{2} \\left\\lbrace w_k v_k \\cos \\left(\\psi_{k}+\\beta_{k} \\right) \\right\\rbrace {d t}^2\\\\ \\psi_{k+1} & =\\psi_{k} + w_k {d t} \\\\ v_{k+1} & = v_{k} \\\\ \\beta_{k+1} & =\\beta_{k} \\end{aligned} \\] \\[ \\frac{\\partial f}{\\partial \\mathrm x}=\\left[\\begin{array}{ccccc} 1 & 0 & v \\cos (\\psi+\\beta) {d t} - \\frac{1}{2} \\left\\lbrace w v \\cos \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & \\sin (\\psi+\\beta) {d t} - \\left\\lbrace w \\sin \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & -v \\sin (\\psi+\\beta) {d t} - \\frac{v^2}{2l_r} \\left\\lbrace \\cos(\\beta)\\sin(\\psi+\\beta)+\\sin(\\beta)\\cos(\\psi+\\beta) \\right\\rbrace {d t}^2 \\\\ 0 & 1 & v \\sin (\\psi+\\beta) {d t} - \\frac{1}{2} \\left\\lbrace w v \\sin \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & \\cos (\\psi+\\beta) {d t} + \\left\\lbrace w \\cos \\left( \\psi+\\beta \\right) \\right\\rbrace {d t}^2 & v \\cos (\\psi+\\beta) {d t} + \\frac{v^2}{2l_r} \\left\\lbrace \\cos(\\beta)\\cos(\\psi+\\beta)-\\sin(\\beta)\\sin(\\psi+\\beta) \\right\\rbrace {d t}^2 \\\\ 0 & 0 & 1 & \\frac{1}{l_r} \\sin \\beta {d t} & \\frac{v}{l_r} \\cos \\beta d t \\\\ 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 1 \\end{array}\\right] \\]"},{"location":"perception/autoware_multi_object_tracker/models/#remarks-on-the-output-twist","title":"remarks on the output twist","text":"

Note that the velocity \\(v_{k}\\) is the norm of the vehicle's velocity, not the longitudinal velocity. The output twist in the object coordinate frame \\((x,y)\\) is therefore calculated as follows.

\\[ \\begin{aligned} v_{x} &= v_{k} \\cos \\left(\\beta_{k}\\right) \\\\ v_{y} &= v_{k} \\sin \\left(\\beta_{k}\\right) \\end{aligned} \\]"},{"location":"perception/autoware_multi_object_tracker/models/#anchor-point-based-estimation","title":"Anchor point based estimation","text":"

To separate the estimation of the position and the shape, we use anchor point based position estimation.

"},{"location":"perception/autoware_multi_object_tracker/models/#anchor-point-and-tracking-relationships","title":"Anchor point and tracking relationships","text":"

The anchor point is set when tracking is initialized. Its position is set to the center of the first tracked bounding box.

The following shows how the anchor point is used in tracking.

Raw detections are converted to the anchor point coordinate, and tracking is performed in that coordinate.

"},{"location":"perception/autoware_multi_object_tracker/models/#manage-anchor-point-offset","title":"Manage anchor point offset","text":"

The anchor point should stay at the same position on the object. In other words, the offset value must be adjusted so that the planes of the input bounding box and of the output bounding box closest to the ego vehicle are at the same position.
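As a one-dimensional sketch of this adjustment (a hypothetical simplification along the heading axis, not the actual implementation):

// Sketch: when the estimated box length changes, shift the output center along\n// the heading axis so that the face nearest to the ego vehicle stays in place.\nstruct Box1D { double center; double length; };\n\nBox1D keep_nearest_face_fixed(const Box1D & input, double new_length)\n{\nconst double nearest_face = input.center - input.length / 2.0; // face assumed nearest to ego\nreturn {nearest_face + new_length / 2.0, new_length};\n}\n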

"},{"location":"perception/autoware_multi_object_tracker/models/#known-limits-drawbacks","title":"Known limits, drawbacks","text":""},{"location":"perception/autoware_multi_object_tracker/models/#references","title":"References","text":"

[1] Schubert, Robin & Richter, Eric & Wanielik, Gerd. (2008). Comparison and evaluation of advanced motion models for vehicle tracking. 1 - 6. 10.1109/ICIF.2008.4632283.

[2] Kong, Jason & Pfeiffer, Mark & Schildbach, Georg & Borrelli, Francesco. (2015). Kinematic and dynamic vehicle models for autonomous driving control design. 1094-1099. 10.1109/IVS.2015.7225830.

"},{"location":"perception/autoware_object_merger/","title":"object_merger","text":""},{"location":"perception/autoware_object_merger/#object_merger","title":"object_merger","text":""},{"location":"perception/autoware_object_merger/#purpose","title":"Purpose","text":"

object_merger is a package for merging detected objects from two methods by data association.

"},{"location":"perception/autoware_object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The successive shortest path algorithm is used to solve the data association problem (a minimum-cost flow problem). The cost is calculated from the distance between two objects, and gate functions, such as the maximum distance, the maximum area, and the minimum area, are applied to reset the cost.
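As a sketch (hypothetical names), the gate functions can be thought of as resetting the association cost to an invalid value whenever a pair violates a gate:

#include <limits>\n\n// Sketch: distance-based association cost with gate functions.\ndouble association_cost(double distance, double area, double max_distance, double min_area, double max_area)\n{\nconst double invalid = std::numeric_limits<double>::max();\nif (distance > max_distance) return invalid; // maximum distance gate\nif (area < min_area || area > max_area) return invalid; // area gates\nreturn distance; // cost grows with the distance between the two objects\n}\n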

"},{"location":"perception/autoware_object_merger/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_object_merger/#input","title":"Input","text":"Name Type Description input/object0 autoware_perception_msgs::msg::DetectedObjects detection objects input/object1 autoware_perception_msgs::msg::DetectedObjects detection objects"},{"location":"perception/autoware_object_merger/#output","title":"Output","text":"Name Type Description output/object autoware_perception_msgs::msg::DetectedObjects modified Objects"},{"location":"perception/autoware_object_merger/#parameters","title":"Parameters","text":" Name Type Description Default Range sync_queue_size integer The size of the synchronization queue. 20 N/A precision_threshold_to_judge_overlapped float The precision threshold to judge if objects are overlapped. 0.4 N/A recall_threshold_to_judge_overlapped float The recall threshold to judge if objects are overlapped. 0.5 N/A remove_overlapped_unknown_objects boolean Flag to remove overlapped unknown objects. True N/A base_link_frame_id string The frame ID of the association frame. base_link N/A priority_mode integer Index for the priority_mode. 3 [0, 1, 2, 3] Name Type Description Default Range can_assign_matrix array Assignment table for data association. [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1] N/A max_dist_matrix array Maximum distance table for data association. [4.0, 4.0, 5.0, 5.0, 5.0, 2.0, 2.0, 2.0, 4.0, 2.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 2.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 2.0] N/A max_rad_matrix array Maximum angle table for data association. If value is greater than pi, it will be ignored. [3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 1.047, 1.047, 1.047, 1.047, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15, 3.15] N/A min_iou_matrix array Minimum IoU threshold matrix for data association. If value is negative, it will be ignored. [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.3, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1] N/A Name Type Description Default Range distance_threshold_list array Distance threshold for each class used in judging overlap. [9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0] N/A generalized_iou_threshold array Generalized IoU threshold for each class. 
[-0.1, -0.1, -0.1, -0.6, -0.6, -0.1, -0.1, -0.1] N/A"},{"location":"perception/autoware_object_merger/#tips","title":"Tips","text":""},{"location":"perception/autoware_object_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_object_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_object_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_object_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_object_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The data association algorithm was originally the same as that of multi_object_tracker, but multi_object_tracker's algorithm has since been updated.

"},{"location":"perception/autoware_object_range_splitter/","title":"`autoware_object_range_splitter`","text":""},{"location":"perception/autoware_object_range_splitter/#autoware_object_range_splitter","title":"autoware_object_range_splitter","text":""},{"location":"perception/autoware_object_range_splitter/#purpose","title":"Purpose","text":"

autoware_object_range_splitter is a package to divide detected objects into two messages by the distance from the origin.
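A minimal sketch of the split criterion (a hypothetical simplification):

#include <cmath>\n\n// Sketch: objects farther than split_range from the origin are published as\n// long-range objects, the rest as short-range objects.\nbool is_long_range(double x, double y, double split_range)\n{\nreturn std::hypot(x, y) > split_range;\n}\n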

"},{"location":"perception/autoware_object_range_splitter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_object_range_splitter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_object_range_splitter/#input","title":"Input","text":"Name Type Description input/object autoware_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/autoware_object_range_splitter/#output","title":"Output","text":"Name Type Description output/long_range_object autoware_perception_msgs::msg::DetectedObjects long range detected objects output/short_range_object autoware_perception_msgs::msg::DetectedObjects short range detected objects"},{"location":"perception/autoware_object_range_splitter/#parameters","title":"Parameters","text":"Name Type Description Default Range split_range float object_range_splitter is a package to divide detected objects into two messages by the distance from the origin 30 N/A"},{"location":"perception/autoware_object_range_splitter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_object_range_splitter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_object_range_splitter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_object_range_splitter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_object_range_splitter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_object_velocity_splitter/","title":"autoware_object_velocity_splitter","text":""},{"location":"perception/autoware_object_velocity_splitter/#autoware_object_velocity_splitter","title":"autoware_object_velocity_splitter","text":"

This package contains an object filter module for autoware_perception_msgs/msg/DetectedObject. It can split DetectedObjects into two messages by the object's speed.

"},{"location":"perception/autoware_object_velocity_splitter/#interface","title":"Interface","text":""},{"location":"perception/autoware_object_velocity_splitter/#input","title":"Input","text":""},{"location":"perception/autoware_object_velocity_splitter/#output","title":"Output","text":""},{"location":"perception/autoware_object_velocity_splitter/#parameters","title":"Parameters","text":"

This parameter is the velocity threshold used to split the objects.
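A minimal sketch of the split criterion (a hypothetical simplification using the norm of the object's twist):

#include <cmath>\n\n// Sketch: objects faster than velocity_threshold go to the high-speed output,\n// the rest to the low-speed output.\nbool is_high_speed(double vx, double vy, double velocity_threshold)\n{\nreturn std::hypot(vx, vy) > velocity_threshold;\n}\n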

"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/","title":"autoware_occupancy_grid_map_outlier_filter","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#autoware_occupancy_grid_map_outlier_filter","title":"autoware_occupancy_grid_map_outlier_filter","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#purpose","title":"Purpose","text":"

This node is an outlier filter based on an occupancy grid map. Since the occupancy grid map expresses occupancy probabilities over a time series, it can be regarded as a time-series outlier filter, depending on the implementation of the occupancy grid map.

"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"
  1. Use the occupancy grid map to separate point clouds into those with low occupancy probability and those with high occupancy probability.

  2. The point clouds with low occupancy probability are not necessarily outliers. In particular, the top of a moving object tends to have low occupancy probability. Therefore, if use_radius_search_2d_filter is true, a radius-search 2D outlier filter is applied to the point cloud that is determined to have low occupancy probability.

    1. For each low-occupancy-probability point, determine whether it is an outlier from the radius (radius_search_2d_filter/search_radius) and the number of surrounding points. The points referenced here are not only the low-occupancy-probability points but all points, including those with high occupancy probability.
    2. The required number of points scales with radius_search_2d_filter/min_points_and_distance_ratio and the distance from base_link, but is clamped between a minimum and a maximum number of points (see the sketch below).
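A plausible sketch of this distance-dependent threshold (hypothetical names; the exact formula is defined by the implementation):

#include <algorithm>\n\n// Sketch: the required number of neighbors within search_radius shrinks with\n// the distance from base_link, clamped to [min_points, max_points].\nint required_neighbor_count(double distance_from_base_link, double min_points_and_distance_ratio,\nint min_points, int max_points)\n{\nconst int raw = static_cast<int>(min_points_and_distance_ratio / distance_from_base_link);\nreturn std::clamp(raw, min_points, max_points);\n}\n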

The following video is a sample. Yellow points have high occupancy probability, green points have low occupancy probability but are not outliers, and red points are outliers. At around 0:15 and 1:16 in the first video, a bird crosses the road and is classified as an outlier.

"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs/PointCloud2 Obstacle point cloud with ground removed. ~/input/occupancy_grid_map nav_msgs/OccupancyGrid A map in which the probability of the presence of an obstacle is occupancy probability map"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#output","title":"Output","text":"Name Type Description ~/output/pointcloud sensor_msgs/PointCloud2 Point cloud with outliers removed. trajectory ~/output/debug/outlier/pointcloud sensor_msgs/PointCloud2 Point clouds removed as outliers. ~/output/debug/low_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a low probability of occupancy in the occupancy grid map. However, it is not considered as an outlier. ~/output/debug/high_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a high probability of occupancy in the occupancy grid map. trajectory"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range radius_search_2d_filter.search_radius float Radius when calculating the density 1.0 N/A radius_search_2d_filter.min_points_and_distance_ratio float Threshold value of the number of point clouds per radius when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink 400.0 N/A radius_search_2d_filter.min_points float Minimum number of point clouds per radius 4 N/A radius_search_2d_filter.max_points float Maximum number of point clouds per radius 70 N/A radius_search_2d_filter.max_filter_points_nb float Maximum number of point clouds to be filtered 15000 N/A map_frame string map frame id map N/A base_link_frame string base link frame id base_link N/A cost_threshold float Cost threshold of occupancy grid map (0~100). 
100 means 100% probability that there is an obstacle, close to 50 means that it is indistinguishable whether it is an obstacle or free space, 0 means that there is no obstacle 45 N/A use_radius_search_2d_filter boolean Whether or not to apply density-based outlier filters to objects that are judged to have low probability of occupancy on the occupancy grid map True N/A enable_debugger boolean Whether to output the point cloud for debugging False N/A"},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_occupancy_grid_map_outlier_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/","title":"autoware_probabilistic_occupancy_grid_map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#autoware_probabilistic_occupancy_grid_map","title":"autoware_probabilistic_occupancy_grid_map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#purpose","title":"Purpose","text":"

This package outputs the probability that an obstacle exists as an occupancy grid map.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#settings","title":"Settings","text":"

The occupancy grid map is generated on map_frame, and the grid orientation is fixed.

You may need to choose scan_origin_frame and gridmap_origin_frame, which denote the sensor origin and the grid map origin respectively. In particular, setting your main LiDAR sensor frame (e.g. velodyne_top in sample_vehicle) as the scan_origin_frame results in better performance.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#parameters","title":"Parameters","text":" "},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#downsample-input-pointcloudoptional","title":"Downsample input pointcloud(Optional)","text":"

If you set downsample_input_pointcloud to true, the input pointcloud will be downsampled, and the following topics are also used. This feature is currently available only for the pointcloud-based occupancy grid map.

# downsampled raw and obstacle pointcloud\n/perception/occupancy_grid_map/obstacle/downsample/pointcloud\n/perception/occupancy_grid_map/raw/downsample/pointcloud\n
# downsampled raw and obstacle pointcloud\n/perception/occupancy_grid_map/obstacle/downsample/pointcloud\n/perception/occupancy_grid_map/<sensor_name>/raw/downsample/pointcloud\n
"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/#test","title":"Test","text":"

This package provides unit tests using gtest. You can run them with the following command.

colcon test --packages-select autoware_probabilistic_occupancy_grid_map --event-handlers console_direct+\n

The tests cover the following.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/","title":"laserscan based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#laserscan-based-occupancy-grid-map","title":"laserscan based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The basic idea is to take a 2D laserscan and ray trace it to create a time-series processed occupancy grid map.

  1. The node takes a laserscan and makes a single-frame occupancy grid map. Ray tracing is done with Bresenham's line algorithm.
  2. Optionally, obstacle point clouds and raw point clouds can be received and reflected in the occupancy grid map. The reason is that a laserscan keeps only the most foreground point per angle in the polar coordinate system, so it throws away a lot of information; as a result, the occupancy grid map consists almost entirely of UNKNOWN cells. Therefore, the obstacle point cloud and the raw point cloud are used to reflect in the occupancy grid map what is judged to be ground and what is judged to be an obstacle. The black and red dots represent raw point clouds, and the red dots represent obstacle point clouds; in other words, the black points are determined to be ground, and the red points are determined to be obstacles. The gray cells are UNKNOWN cells.

  3. Using the previous occupancy grid map, update the existence probability with a binary Bayesian filter (1). The unobserved cells are also time-decayed, analogous to the system noise of a Kalman filter (2).
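A minimal per-cell sketch of updates (1) and (2), whose closed forms are given below (function names are hypothetical):

// Sketch of the per-cell updates: (1) binary Bayes filter with measurement\n// probability p_z, (2) time decay of unobserved cells toward 0.5.\ndouble bayes_update(double p_o, double p_z)\n{\nreturn (p_o * p_z) / (p_o * p_z + (1.0 - p_o) * (1.0 - p_z));\n}\n\ndouble time_decay(double p_o, double ratio)\n{\nreturn (p_o + 0.5 / ratio) / (1.0 / ratio + 1.0);\n}\n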

\\[ \\hat{P_{o}} = \\frac{(P_{o} *P_{z})}{(P_{o}* P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1} \\] \\[ \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2} \\]"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/laserscan sensor_msgs::LaserScan laserscan ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame input_obstacle_pointcloud bool whether to use the optional obstacle point cloud. If true, the ~/input/obstacle_pointcloud topic will be received. input_obstacle_and_raw_pointcloud bool whether to use the optional obstacle and raw point clouds. If true, the ~/input/obstacle_pointcloud and ~/input/raw_pointcloud topics will be received. use_height_filter bool whether to apply a height filter to ~/input/obstacle_pointcloud and ~/input/raw_pointcloud. By default, the height range is -1~2m. map_length double The length of the map [m]. 100 if the map covers -50~50[m] map_resolution double The map cell resolution [m]"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places we have modified external code written under the BSD-3 license.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

Bresenham's_line_algorithm

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/","title":"pointcloud based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#pointcloud-based-occupancy-grid-map","title":"pointcloud based occupancy grid map","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#1st-step","title":"1st step","text":"

First of all, the input obstacle/raw pointclouds are transformed into polar coordinates centered on scan_origin and divided into circular bins per angle_increment. Each point belonging to a bin is stored as range data. In addition, the x,y position in the map coordinate is also stored for ray tracing on the map coordinate. The bin contains the following information for each point.

The following figure shows each of the bins from side view.
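A minimal sketch of the binning described above (hypothetical names):

#include <cmath>\n#include <cstddef>\n\n// Sketch: assign a point (expressed relative to scan_origin) to an angle bin.\nstd::size_t bin_index(double x, double y, double angle_increment)\n{\nconst double angle = std::atan2(y, x) + M_PI; // shift to [0, 2*pi)\nreturn static_cast<std::size_t>(angle / angle_increment);\n}\n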

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#2nd-step","title":"2nd step","text":"

The ray trace is performed in three steps for each cell, using Bresenham's line algorithm.

  1. Initialize freespace to the farthest point of each bin.

  2. Fill in the unknown cells. Based on the assumption that UNKNOWN lies behind an obstacle, the cells that are more than a distance margin beyond each obstacle point are filled with UNKNOWN.

    There are three reasons for setting a distance margin.

    When the parameter grid_map_type is \"OccupancyGridMapProjectiveBlindSpot\" and the scan_origin is a sensor frame such as velodyne_top, then for each obstacle point, if there is no visible raw point located above the ray projected from the scan_origin through that obstacle point, the cells between the obstacle point and the projected point are filled with UNKNOWN. Note that the scan_origin should not be base_link in this mode, because otherwise all the cells behind the obstacle points would be filled with UNKNOWN.

  3. Fill in the occupied cells. The cell where an obstacle point is located is filled as OCCUPIED. In addition, if the distance between obstacle points is less than or equal to the distance margin, the interval between them is also filled as OCCUPIED, because the input may be inaccurate and obstacle points may not always be detected.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#3rd-step","title":"3rd step","text":"

Using the previous occupancy grid map, update the existence probability with a binary Bayesian filter (1). The unobserved cells are also time-decayed, analogous to the system noise of a Kalman filter (2).

\\[ \\hat{P_{o}} = \\frac{(P_{o} *P_{z})}{(P_{o}* P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1} \\] \\[ \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2} \\]"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#related-topics","title":"Related topics","text":"

If you set downsample_input_pointcloud to true, the input pointcloud will be downsampled, and the following topics are also used.

# downsampled raw and obstacle pointcloud\n/perception/occupancy_grid_map/obstacle/downsample/pointcloud\n/perception/occupancy_grid_map/raw/downsample/pointcloud\n
"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame use_height_filter bool whether to height filter for ~/input/obstacle_pointcloud and ~/input/raw_pointcloud? By default, the height is set to -1~2m. map_length double The length of the map. -100 if it is 50~50[m] map_resolution double The map cell resolution [m] grid_map_type string The type of grid map for estimating UNKNOWN region behind obstacle point clouds scan_origin string The origin of the scan. It should be a sensor frame. pub_debug_grid bool Whether to publish debug grid maps downsample_input_pointcloud bool Whether to downsample the input pointclouds. The downsampled pointclouds are used for the ray tracing. downsample_voxel_size double The voxel size for the downsampled pointclouds."},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places we have modified external code written under the BSD-3 license.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#how-to-debug","title":"How to debug","text":"

If grid_map_type is \"OccupancyGridMapProjectiveBlindSpot\" and pub_debug_grid is true, it is possible to check the each process of grid map generation by running

ros2 launch autoware_probabilistic_occupancy_grid_map debug.launch.xml\n

and visualizing the following occupancy grid map topics (which are listed in config/grid_map_param.yaml):

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/","title":"synchronized OGM fusion","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#synchronized-ogm-fusion","title":"synchronized OGM fusion","text":"

For simplicity, we use OGM to mean occupancy grid map.

This package is used to fuse the OGMs from synchronized sensors, especially LiDARs.

The following shows example OGMs for this synchronized OGM fusion.

left lidar OGM right lidar OGM top lidar OGM

This package is not suitable for fusing OGMs from asynchronous sensor outputs; asynchronous OGM fusion is under construction.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#processing-flow","title":"Processing flow","text":"

The processing flow of this package is shown in the following figure.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#io","title":"I/O","text":"Input topic name Type Description input_ogm_topics list of nav_msgs::msg::OccupancyGrid List of input topics for Occupancy Grid Maps. This parameter is given in list, so Output topic name Type Description ~/output/occupancy_grid_map nav_msgs::msg::OccupancyGrid Output topic name of the fused Occupancy Grid Map. ~/debug/single_frame_map nav_msgs::msg::OccupancyGrid (debug topic) Output topic name of the single frame fused Occupancy Grid Map."},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#parameters","title":"Parameters","text":"

Synchronized OGM fusion node parameters are shown in the following table. The main parameters to consider in the fusion node are shown in bold.

Ros param name Sample value Description input_ogm_topics [\"topic1\", \"topic2\"] List of input topics for Occupancy Grid Maps input_ogm_reliabilities [0.8, 0.2] Weights for the reliability of each input topic fusion_method \"overwrite\" Method of fusion (\"overwrite\", \"log-odds\", \"dempster-shafer\") match_threshold_sec 0.01 Matching threshold in seconds timeout_sec 0.1 Timeout duration in seconds input_offset_sec [0.0, 0.0] Offset time in seconds for each input topic mapframe \"map\" Frame name for the fused map baselink_frame \"base_link\" Frame name for the base link gridmap_origin_frame \"base_link\" Frame name for the origin of the grid map fusion_map_length_x 100.0 Length of the fused map along the X-axis fusion_map_length_y 100.0 Length of the fused map along the Y-axis fusion_map_resolution 0.5 Resolution of the fused map

Since this node assumes that the OGMs from synchronized sensors are generated at the same time, the match_threshold_sec, timeout_sec, and input_offset_sec parameters need to be tuned to fuse the OGMs successfully.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#fusion-methods","title":"Fusion methods","text":"

For the single frame fusion, the following fusion methods are supported.

Fusion Method in parameter Description overwrite The value of the cell in the fused OGM is overwritten by the value of the cell in the OGM with the highest priority. We set priority as Occupied > Free > Unknown. log-odds The value of the cell in the fused OGM is calculated by the log-odds ratio method, which is known as a Bayesian fusion method. The log-odds of a probability \\(p\\) can be written as \\(l_p = \\log(\\frac{p}{1-p})\\). The fused log-odds is then calculated as the sum of the log-odds: \\(l_f = \\Sigma l_p\\) dempster-shafer The value of the cell in the fused OGM is calculated by the Dempster-Shafer theory [1]. This is also a popular method to handle multiple pieces of evidence. This package applies the conflict-escape logic from [2] for performance. See the references for the algorithm details.
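For illustration, a minimal sketch of the log-odds fusion of per-cell probabilities from multiple OGMs (hypothetical names):

#include <cmath>\n#include <vector>\n\n// Sketch: fuse cell probabilities by summing their log-odds, then convert back.\ndouble fuse_log_odds(const std::vector<double> & probabilities)\n{\ndouble log_odds_sum = 0.0;\nfor (const double p : probabilities) {\nlog_odds_sum += std::log(p / (1.0 - p)); // l_p = log(p / (1 - p))\n}\nreturn 1.0 / (1.0 + std::exp(-log_odds_sum)); // inverse of the log-odds transform\n}\n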

For the multi-frame fusion, only the log-odds fusion method is currently supported.

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#how-to-use","title":"How to use","text":""},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#launch-fusion-node","title":"launch fusion node","text":"

A minimal node launch looks like the following.

<?xml version=\"1.0\"?>\n<launch>\n<arg name=\"output_topic\" default=\"~/output/occupancy_grid_map\"/>\n<arg name=\"fusion_node_param_path\" default=\"$(find-pkg-share autoware_probabilistic_occupancy_grid_map)/config/synchronized_grid_map_fusion_node.param.yaml\"/>\n\n<node name=\"synchronized_grid_map_fusion_node\" exec=\"synchronized_grid_map_fusion_node\" pkg=\"autoware_probabilistic_occupancy_grid_map\" output=\"screen\">\n<remap from=\"~/output/occupancy_grid_map\" to=\"$(var output_topic)\"/>\n<param from=\"$(var fusion_node_param_path)\"/>\n</node>\n</launch>\n
"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#optional-generate-ogms-in-each-sensor-frame","title":"(Optional) Generate OGMs in each sensor frame","text":"

You need to generate OGMs in each sensor frame before performing grid map fusion.

The autoware_probabilistic_occupancy_grid_map package supports generating OGMs for each sensor from its point cloud data.

Example launch.xml:
<include file=\"$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml\">\n<arg name=\"input/obstacle_pointcloud\" value=\"/perception/obstacle_segmentation/single_frame/pointcloud\"/>\n<arg name=\"input/raw_pointcloud\" value=\"/sensing/lidar/right/outlier_filtered/pointcloud_synchronized\"/>\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/right_lidar/map\"/>\n<arg name=\"map_frame\" value=\"base_link\"/>\n<arg name=\"scan_origin\" value=\"velodyne_right\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"param_file\" value=\"$(find-pkg-share autoware_probabilistic_occupancy_grid_map)/config/pointcloud_based_occupancy_grid_map_fusion.param.yaml\"/>\n</include>\n\n\nThe minimum parameter for the OGM generation in each frame is shown in the following table.\n\n|Parameter|Description|\n|--|--|\n|`input/obstacle_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is segmented as the obstacle.|\n|`input/raw_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is not segmented as the obstacle. |\n|`output`| The output topic of the OGM. |\n|`map_frame`| The tf frame for the OGM center origin. |\n|`scan_origin`| The tf frame for the sensor origin. |\n|`method`| The method for the OGM generation. Currently we support `pointcloud_based_occupancy_grid_map` and `laser_scan_based_occupancy_grid_map`. The pointcloud based method is recommended. |\n|`param_file`| The parameter file for the OGM generation. See [example parameter file](config/pointcloud_based_occupancy_grid_map_for_fusion.param.yaml) |\n

We recommend using the same map_frame, size, and resolution for the OGMs from synchronized sensors. Also, remember to set enable_single_frame_mode and filter_obstacle_pointcloud_by_raw_pointcloud to true in the autoware_probabilistic_occupancy_grid_map package (you do not need to set these parameters if you use the example config file above).

"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#run-both-ogm-generation-node-and-fusion-node","title":"Run both OGM generation node and fusion node","text":"

We provide a launch file that runs both the OGM generation nodes and the fusion node: grid_map_fusion_with_synchronized_pointclouds.launch.py.

You can include this launch file like the following.

<include file=\"$(find-pkg-share autoware_probabilistic_occupancy_grid_map)/launch/grid_map_fusion_with_synchronized_pointclouds.launch.py\">\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/fusion/map\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"fusion_config_file\" value=\"$(var fusion_config_file)\"/>\n<arg name=\"ogm_config_file\" value=\"$(var ogm_config_file)\"/>\n</include>\n

The minimum parameter for the launch file is shown in the following table.

Parameter Description output The output topic of the final fused OGM. method The method for the OGM generation. Currently we support pointcloud_based_occupancy_grid_map and laser_scan_based_occupancy_grid_map. The pointcloud based method is recommended. fusion_config_file The parameter file for the grid map fusion. See example parameter file ogm_config_file The parameter file for the OGM generation. See example parameter file"},{"location":"perception/autoware_probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#references","title":"References","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/","title":"autoware_radar_crossing_objects_noise_filter","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#autoware_radar_crossing_objects_noise_filter","title":"autoware_radar_crossing_objects_noise_filter","text":"

This package contains a radar noise filter module for autoware_perception_msgs/msg/DetectedObject. It can filter out noise objects whose motion crosses the ego vehicle's direction of travel.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#design","title":"Design","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#background","title":"Background","text":"

This package aims to filter out noise objects that cross the path of the ego vehicle. The reasons why these objects are noise are described below.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#1-the-objects-with-doppler-velocity-can-be-trusted-more-than-those-with-vertical-velocity-to-it","title":"1. The objects with doppler velocity can be trusted more than those with vertical velocity to it","text":"

Radars measure object velocity as Doppler velocity, but cannot directly measure the velocity component perpendicular to the Doppler direction. Some radars output objects with not only Doppler velocity but also an estimated perpendicular velocity. If this perpendicular velocity estimation is poor, noise objects are produced. In other words, objects that appear to have a crossing twist as viewed from the ego vehicle tend to be noise objects.

An example is shown in the figure below: velocity estimation fails on static objects, resulting in ghost objects crossing in front of the ego vehicle.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#2-turning-around-by-ego-vehicle-affect-the-output-from-radar","title":"2. Turning around by ego vehicle affect the output from radar","text":"

When the ego vehicle turns, radars that output object-level data sometimes fail to estimate the twist of objects correctly, even if radar_tracks_msgs_converter compensates for the ego vehicle's twist. So if an object detected by radar appears to move in a circular motion when viewed from base_link, its speed has likely been estimated incorrectly and the object is probably static.

An example is shown in the figure below: when the ego vehicle turns right, the surrounding objects appear to move in a leftward circular motion.

"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#algorithm","title":"Algorithm","text":"

To filter out objects crossing the ego vehicle's path, this package applies the following algorithm.

  // If the velocity of an object is larger than velocity_threshold,\n// and crossing_yaw is near vertical, i.e.\n// angle_threshold < crossing_yaw < pi - angle_threshold\nif (\nvelocity > node_param_.velocity_threshold &&\nabs(std::cos(crossing_yaw)) < abs(std::cos(node_param_.angle_threshold))) {\n// Object is noise object;\n} else {\n// Object is not noise object;\n}\n
"},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#interface","title":"Interface","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#input","title":"Input","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#output","title":"Output","text":""},{"location":"perception/autoware_radar_crossing_objects_noise_filter/#parameters","title":"Parameters","text":"

This parameter is the angle threshold used by the filter, with the constraint 0 < angle_threshold < pi / 2. If the crossing angle of an object is larger than this threshold (i.e., closer to perpendicular), the object is a candidate noise object; otherwise it passes the filter. The smaller this parameter is set, the more objects are judged as noise. See the algorithm section for details.

This parameter is the velocity threshold used by the filter. If the velocity of an object is larger than this threshold, the object is a candidate noise object; otherwise it passes the filter. The smaller this parameter is set, the more objects are judged as noise. See the algorithm section for details.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/","title":"`autoware_radar_fusion_to_detected_object`","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#autoware_radar_fusion_to_detected_object","title":"autoware_radar_fusion_to_detected_object","text":"

This package contains a sensor fusion module for radar-detected objects and 3D detected objects.

The fusion node can:

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#design","title":"Design","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#background","title":"Background","text":"

This package fuses LiDAR-based 3D detection output with radar data. LiDAR-based 3D detection can estimate the position and size of objects with high precision, but it cannot estimate their velocity. Radar data can estimate the Doppler velocity of objects, but it cannot estimate their position and size with high precision. This fusion package aims to combine these complementary data sources to estimate the position, size, and velocity of objects with high precision.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#algorithm","title":"Algorithm","text":"

The documentation of the core algorithm is here.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#interface-for-core-algorithm","title":"Interface for core algorithm","text":"

The parameters for the core algorithm can be set as core_params.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#parameters-for-sensor-fusion","title":"Parameters for sensor fusion","text":"

This parameter is the distance to extend the 2D bird's-eye view bounding box on each side. This parameter is used as a threshold to find radar centroids falling inside the extended box.

This parameter is the velocity threshold used to decide whether to split an object into two objects based on radar information. Note that this feature is not currently implemented.

This parameter is the yaw orientation threshold. If the yaw difference between a LiDAR-based detected object and the radar velocity direction is smaller than this threshold, the radar information is attached to the output object.
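
Putting the two gates together, the association check can be sketched as follows (a minimal illustration with hypothetical names; not the package's actual implementation):

#include <cmath>\n\n// Hypothetical sketch: a radar centroid is associated with a detection when\n// it lies inside the bounding box extended by bounding_box_margin on each\n// side, and the yaw difference between the object and the radar velocity\n// direction is below the yaw threshold.\nbool isAssociated(\ndouble local_x, double local_y,       // radar centroid in the box frame\ndouble box_length, double box_width,  // LiDAR-based bounding box size\ndouble object_yaw, double radar_velocity_yaw,\ndouble bounding_box_margin, double threshold_yaw_diff)\n{\nconst bool inside_extended_box =\nstd::abs(local_x) < box_length / 2.0 + bounding_box_margin &&\nstd::abs(local_y) < box_width / 2.0 + bounding_box_margin;\nconst double yaw_diff =\nstd::abs(std::remainder(object_yaw - radar_velocity_yaw, 2.0 * M_PI));\nreturn inside_extended_box && yaw_diff < threshold_yaw_diff;\n}\n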

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#weight-parameters-for-velocity-estimation","title":"Weight parameters for velocity estimation","text":"

To tune these weight parameters, please see the documentation for details.

This parameter is the weight coefficient applied to the average twist of the radar data in velocity estimation.

This parameter is the weight coefficient applied to the median twist of the radar data in velocity estimation.

This parameter is the weight coefficient applied to the twist of the radar data nearest to the center of the bounding box in velocity estimation.

This parameter is the weight coefficient applied to the target-value-weighted average twist in velocity estimation. The target value is amplitude when using a radar pointcloud and probability when using radar objects.

This parameter is the weight coefficient applied to the twist of the radar data with the highest target value in velocity estimation. The target value is amplitude when using a radar pointcloud and probability when using radar objects.
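
Conceptually, the estimated velocity is a weighted sum of the five candidate statistics described above. A minimal sketch with hypothetical names (the actual node operates on twist messages):

// Hypothetical sketch: combine the candidate velocities computed from the\n// radar data of one object using the five weight parameters above.\ndouble estimateVelocity(\ndouble v_average, double v_median, double v_nearest,\ndouble v_target_weighted_average, double v_top_target,\ndouble w_average, double w_median, double w_nearest,\ndouble w_target_weighted_average, double w_top_target)\n{\nreturn w_average * v_average + w_median * v_median + w_nearest * v_nearest +\nw_target_weighted_average * v_target_weighted_average +\nw_top_target * v_top_target;\n}\n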

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#parameters-for-fixed-object-information","title":"Parameters for fixed object information","text":"

This parameter is a flag that determines whether to convert Doppler velocity to twist using the yaw information of a detected object.

This parameter is the threshold used to filter output objects. If the probability of an output object is lower than this parameter and the output object has no associated radar points/objects, the object is deleted.

This parameter is the flag to use probability compensation. If it is true, the probability of objects is compensated up to the threshold probability.
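
The interaction of the last two parameters can be sketched as follows (illustrative names only):

// Hypothetical sketch of the filtering and compensation described above.\nbool keepObject(\ndouble & existence_probability, bool has_radar_data,\ndouble threshold_probability, bool use_probability_compensation)\n{\nif (existence_probability < threshold_probability && !has_radar_data) {\nreturn false;  // delete low-probability objects without radar support\n}\nif (use_probability_compensation && existence_probability < threshold_probability) {\nexistence_probability = threshold_probability;  // raise to the threshold\n}\nreturn true;\n}\n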

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#interface-for-autoware_radar_object_fusion_to_detected_object","title":"Interface for autoware_radar_object_fusion_to_detected_object","text":"

Sensor fusion with radar objects and a detected object.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_fusion_to_detected_object radar_object_to_detected_object.launch.xml\n
"},{"location":"perception/autoware_radar_fusion_to_detected_object/#input","title":"Input","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#output","title":"Output","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/#parameters","title":"Parameters","text":"

The parameters for this node can be set as node_params.

This parameter is the update rate for the onTimer function. It should match the frame rate of the input topics.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/#interface-for-radar_scan_fusion_to_detected_object-tbd","title":"Interface for radar_scan_fusion_to_detected_object (TBD)","text":"

Under implementation.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/","title":"Algorithm","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#common-algorithm","title":"Common Algorithm","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#1-link-between-3d-bounding-box-and-radar-data","title":"1. Link between 3d bounding box and radar data","text":"

Choose radar pointclouds/objects that fall within the 3D bounding box from LiDAR-based detection, extended with margin space, in bird's-eye view.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#2-feature-support-split-the-object-going-in-a-different-direction","title":"2. [Feature support] Split the object going in a different direction","text":""},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#3-estimate-twist-of-object","title":"3. Estimate twist of object","text":"

Estimate the twist from the chosen radar pointclouds/objects using their twist and target values (the target value is amplitude when using a radar pointcloud and probability when using radar objects). First, the estimation function calculates candidate twist statistics from the associated radar data.

Second, the estimation function calculates the weighted average of these candidates. Third, the estimated twist is attached to the object.

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#4-feature-support-option-convert-doppler-velocity-to-twist","title":"4. [Feature support] [Option] Convert doppler velocity to twist","text":"

If the twist information from the radar is Doppler velocity, it is converted to twist using the yaw angle of the DetectedObject. Because a radar pointcloud has only Doppler velocity information, radar pointcloud fusion should use this feature. On the other hand, because radar objects already have twist information, radar object fusion should not.
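
As a rough sketch of such a conversion: if the object is assumed to move along its own yaw, the Doppler velocity is the projection of its speed onto the sensor's line of sight, so the speed can be recovered by dividing by the cosine of the angle between the two directions (hypothetical helper, not the node's actual code):

#include <cmath>\n\n// Hypothetical sketch: recover speed from Doppler velocity, assuming the\n// object moves along its estimated yaw. Near-perpendicular geometry is\n// guarded, since the projection then carries almost no information.\ndouble dopplerToSpeed(double doppler_velocity, double object_yaw, double sight_line_yaw)\n{\nconst double cos_diff = std::cos(object_yaw - sight_line_yaw);\nif (std::abs(cos_diff) < 1e-3) {\nreturn 0.0;  // cannot be recovered reliably\n}\nreturn doppler_velocity / cos_diff;\n}\n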

"},{"location":"perception/autoware_radar_fusion_to_detected_object/docs/algorithm/#5-delete-objects-with-low-probability","title":"5. Delete objects with low probability","text":""},{"location":"perception/autoware_radar_object_clustering/","title":"`autoware_radar_object_clustering`","text":""},{"location":"perception/autoware_radar_object_clustering/#autoware_radar_object_clustering","title":"autoware_radar_object_clustering","text":"

This package contains a radar object clustering module for autoware_perception_msgs/msg/DetectedObject input.

This package creates clustered objects from radar DetectedObjects, i.e., objects converted from RadarTracks by radar_tracks_msgs_converter and processed by a noise filter. In other words, this package combines multiple radar detections originating from one object into a single object and adjusts its class and size.

"},{"location":"perception/autoware_radar_object_clustering/#design","title":"Design","text":""},{"location":"perception/autoware_radar_object_clustering/#background","title":"Background","text":"

Radars that output object-level data sometimes produce multiple detection results from one object, especially for large vehicles such as trucks and trailers. These multiple detection results cause object separation in the tracking module. Therefore, this package clusters the multiple detection results into one object in advance.

"},{"location":"perception/autoware_radar_object_clustering/#algorithm","title":"Algorithm","text":""},{"location":"perception/autoware_radar_object_clustering/#1-sort-by-distance-from-base_link","title":"1. Sort by distance from base_link","text":"

First, to prevent the result from depending on the order of objects in DetectedObjects, the input objects are sorted by distance from base_link. Sorting from nearest to farthest also allows matching to be applied in order of closeness, which accounts for occlusion, as the sketch below illustrates.
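
A minimal sketch of this pre-sorting step (hypothetical types; the actual node sorts DetectedObjects):

#include <algorithm>\n#include <cmath>\n#include <vector>\n\nstruct SimpleObject\n{\ndouble x;  // position in the base_link frame\ndouble y;\n};\n\n// Sort objects from nearest to farthest from base_link so that the\n// clustering result does not depend on the input order and matching\n// proceeds in closeness order.\nvoid sortByDistance(std::vector<SimpleObject> & objects)\n{\nstd::sort(\nobjects.begin(), objects.end(), [](const SimpleObject & a, const SimpleObject & b) {\nreturn std::hypot(a.x, a.y) < std::hypot(b.x, b.y);\n});\n}\n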

"},{"location":"perception/autoware_radar_object_clustering/#2-clustering","title":"2. Clustering","text":"

If two radar objects are close to each other, and their yaw angle directions and velocities are similar (the degrees of similarity are defined by parameters), they are clustered. Note that radar characteristics affect the parameters for this matching. For example, if the range or angular resolution is low and the velocity accuracy is high, the distance_threshold parameter should be larger and the matching should rely more strongly on velocity similarity.

After grouping all radar objects, if multiple radar objects are grouped together, the kinematics of the new clustered object is calculated from their average, and the label and shape of the new clustered object are taken from the radar object with the highest confidence.
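
A minimal sketch of this merging rule (a hypothetical flat struct instead of the actual message types):

#include <vector>\n\nstruct GroupMember\n{\ndouble x, y, vx;  // simplified kinematics\ndouble confidence;\nint label;\n};\n\n// Average the kinematics over the group; take the label (and, analogously,\n// the shape) from the member with the highest confidence.\n// Assumes a non-empty group.\nGroupMember mergeGroup(const std::vector<GroupMember> & group)\n{\nGroupMember merged{0.0, 0.0, 0.0, 0.0, 0};\nfor (const auto & obj : group) {\nmerged.x += obj.x / group.size();\nmerged.y += obj.y / group.size();\nmerged.vx += obj.vx / group.size();\nif (obj.confidence > merged.confidence) {\nmerged.confidence = obj.confidence;\nmerged.label = obj.label;\n}\n}\nreturn merged;\n}\n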

"},{"location":"perception/autoware_radar_object_clustering/#3-fixed-label-correction","title":"3. Fixed label correction","text":"

When the label information from the radar output lacks accuracy, it is recommended to set the is_fixed_label parameter to true. If the parameter is true, the label of a clustered object is overwritten by the label set in the fixed_label parameter. If this package is used for faraway dynamic object detection with radar, it is recommended to set the label to VEHICLE.

"},{"location":"perception/autoware_radar_object_clustering/#4-fixed-size-correction","title":"4. Fixed size correction","text":"

When the size information from the radar output lacks accuracy, it is recommended to set the is_fixed_size parameter to true. If the parameter is true, the size of a clustered object is overwritten by the values set in the size_x, size_y, and size_z parameters. If this package is used for faraway dynamic object detection with radar, it is recommended to set size_x, size_y, and size_z to the average size of a vehicle. Note that for use with multi_object_tracker, the size parameters need to exceed its min_area_matrix parameters.

"},{"location":"perception/autoware_radar_object_clustering/#limitation","title":"Limitation","text":"

For now, size estimation for clustered objects is not implemented. So it is recommended to set the is_fixed_size parameter to true and to set the size parameters to values near the average size of vehicles.

"},{"location":"perception/autoware_radar_object_clustering/#interface","title":"Interface","text":""},{"location":"perception/autoware_radar_object_clustering/#input","title":"Input","text":""},{"location":"perception/autoware_radar_object_clustering/#output","title":"Output","text":""},{"location":"perception/autoware_radar_object_clustering/#parameter","title":"Parameter","text":"

These parameters are the angle, distance, and velocity thresholds used to judge whether radar detections come from one object during the "clustering" processing, which is described in detail in the algorithm section. If the differences in angle, distance, and velocity between two objects are all less than the thresholds, the two objects are merged into one clustered object. The larger these parameters are, the more objects are merged into one clustered object.

These are used in the isSameObject function as shown below.

bool RadarObjectClusteringNode::isSameObject(\nconst DetectedObject & object_1, const DetectedObject & object_2)\n{\nconst double angle_diff = std::abs(autoware::universe_utils::normalizeRadian(\ntf2::getYaw(object_1.kinematics.pose_with_covariance.pose.orientation) -\ntf2::getYaw(object_2.kinematics.pose_with_covariance.pose.orientation)));\nconst double velocity_diff = std::abs(\nobject_1.kinematics.twist_with_covariance.twist.linear.x -\nobject_2.kinematics.twist_with_covariance.twist.linear.x);\nconst double distance = autoware::universe_utils::calcDistance2d(\nobject_1.kinematics.pose_with_covariance.pose.position,\nobject_2.kinematics.pose_with_covariance.pose.position);\n\nif (\ndistance < node_param_.distance_threshold && angle_diff < node_param_.angle_threshold &&\nvelocity_diff < node_param_.velocity_threshold) {\nreturn true;\n} else {\nreturn false;\n}\n}\n

is_fixed_label is the flag to use a fixed label. If it is true, the label of a clustered object is overwritten by the label set in the fixed_label parameter. If the radar objects do not have label information, it is recommended to use a fixed label.

is_fixed_size is the flag to use fixed size parameters. If it is true, the size of a clustered object is overwritten by the values set in the size_x, size_y, and size_z parameters.

"},{"location":"perception/autoware_radar_object_tracker/","title":"`autoware_radar_object_tracker`","text":""},{"location":"perception/autoware_radar_object_tracker/#autoware_radar_object_tracker","title":"autoware_radar_object_tracker","text":""},{"location":"perception/autoware_radar_object_tracker/#purpose","title":"Purpose","text":"

This package provides a radar object tracking node that processes sequences of detected objects to assign consistent identities to them and estimate their velocities.

"},{"location":"perception/autoware_radar_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This radar object tracker is a combination of data association and tracking algorithms.

"},{"location":"perception/autoware_radar_object_tracker/#data-association","title":"Data Association","text":"

The data association algorithm matches detected objects to existing tracks.

"},{"location":"perception/autoware_radar_object_tracker/#tracker-models","title":"Tracker Models","text":"

The tracker models used in this package vary based on the class of the detected object. See more details in the models.md.

"},{"location":"perception/autoware_radar_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_radar_object_tracker/#input","title":"Input","text":"Name Type Description ~/input autoware_perception_msgs::msg::DetectedObjects Detected objects /vector/map autoware_map_msgs::msg::LaneletMapBin Map data"},{"location":"perception/autoware_radar_object_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_perception_msgs::msg::TrackedObjects Tracked objects"},{"location":"perception/autoware_radar_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_radar_object_tracker/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description publish_rate double 10.0 The rate at which to publish the output messages world_frame_id string \"map\" The frame ID of the world coordinate system enable_delay_compensation bool false Whether to enable delay compensation. If set to true, output topic is published by timer with publish_rate. tracking_config_directory string \"./config/tracking/\" The directory containing the tracking configuration files enable_logging bool false Whether to enable logging logging_file_path string \"/tmp/association_log.json\" The path to the file where logs should be written tracker_lifetime double 1.0 The lifetime of the tracker in seconds use_distance_based_noise_filtering bool true Whether to use distance based filtering minimum_range_threshold double 70.0 Minimum distance threshold for filtering in meters use_map_based_noise_filtering bool true Whether to use map based filtering max_distance_from_lane double 5.0 Maximum distance from lane for filtering in meters max_angle_diff_from_lane double 0.785398 Maximum angle difference from lane for filtering in radians max_lateral_velocity double 5.0 Maximum lateral velocity for filtering in m/s can_assign_matrix array An array of integers used in the data association algorithm max_dist_matrix array An array of doubles used in the data association algorithm max_area_matrix array An array of doubles used in the data association algorithm min_area_matrix array An array of doubles used in the data association algorithm max_rad_matrix array An array of doubles used in the data association algorithm min_iou_matrix array An array of doubles used in the data association algorithm

See more details in the models.md.

"},{"location":"perception/autoware_radar_object_tracker/#tracker-parameters","title":"Tracker parameters","text":"

Currently, this package supports the following trackers:

Default settings for each tracker are defined in ./config/tracking/ and described in models.md.

"},{"location":"perception/autoware_radar_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_radar_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_radar_object_tracker/models/","title":"models","text":""},{"location":"perception/autoware_radar_object_tracker/models/#models","title":"models","text":"

Tracking models can be chosen from the ros parameter ~tracking_model:

Each model has its own parameters, which can be set in the ros parameter server.

"},{"location":"perception/autoware_radar_object_tracker/models/#linear-constant-acceleration-model","title":"linear constant acceleration model","text":" \\[ \\begin{bmatrix} x_{k+1} \\\\ y_{k+1} \\\\ v_{x_{k+1}} \\\\ v_{y_{k+1}} \\\\ a_{x_{k+1}} \\\\ a_{y_{k+1}} \\end{bmatrix} = \\begin{bmatrix} 1 & 0 & dt & 0 & \\frac{1}{2}dt^2 & 0 \\\\ 0 & 1 & 0 & dt & 0 & \\frac{1}{2}dt^2 \\\\ 0 & 0 & 1 & 0 & dt & 0 \\\\ 0 & 0 & 0 & 1 & 0 & dt \\\\ 0 & 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 1 \\\\ \\end{bmatrix} \\begin{bmatrix} x_k \\\\ y_k \\\\ v_{x_k} \\\\ v_{y_k} \\\\ a_{x_k} \\\\ a_{y_k} \\end{bmatrix} + noise \\] "},{"location":"perception/autoware_radar_object_tracker/models/#constant-turn-rate-and-velocity-model","title":"constant turn rate and velocity model","text":"

This is just an idea and is not implemented yet.

\\[ \\begin{align} x_{k+1} &= x_k + \\frac{v_k}{\\omega_k} (sin(\\theta_k + \\omega_k dt) - sin(\\theta_k)) \\\\ y_{k+1} &= y_k + \\frac{v_k}{\\omega_k} (cos(\\theta_k) - cos(\\theta_k + \\omega_k dt)) \\\\ v_{k+1} &= v_k \\\\ \\theta_{k+1} &= \\theta_k + \\omega_k dt \\\\ \\omega_{k+1} &= \\omega_k \\end{align} \\]"},{"location":"perception/autoware_radar_object_tracker/models/#noise-filtering","title":"Noise filtering","text":"

Radar sensors often produce noisy measurements, so we use the following filters to reduce false positive objects.

The figure below shows the current noise filtering process.

"},{"location":"perception/autoware_radar_object_tracker/models/#minimum-range-filter","title":"minimum range filter","text":"

In most cases, radar sensors are used together with other sensors such as LiDAR and camera, and are responsible for detecting objects far away. So we can filter out objects that are too close to the sensor.

The use_distance_based_noise_filtering parameter is used to enable/disable this filter, and the minimum_range_threshold parameter sets the threshold.
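
A minimal sketch of this filter (hypothetical signature):

#include <cmath>\n\n// Hypothetical sketch: drop objects closer to the sensor than the minimum\n// range, on the assumption that nearby objects are covered by LiDAR/camera.\nbool passesRangeFilter(\ndouble x, double y, bool use_distance_based_noise_filtering,\ndouble minimum_range_threshold)\n{\nif (!use_distance_based_noise_filtering) {\nreturn true;\n}\nreturn std::hypot(x, y) >= minimum_range_threshold;\n}\n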

"},{"location":"perception/autoware_radar_object_tracker/models/#lanelet-based-filter","title":"lanelet based filter","text":"

With lanelet map information, we can filter out false positive objects that are unlikely to be important obstacles.

We filter out objects that satisfy the following conditions:

Each condition can be set by the following parameters:
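
A minimal sketch of how these conditions can combine (hypothetical scalar inputs; the actual node evaluates them against the lanelet map):

#include <cmath>\n\n// Hypothetical sketch: keep an object only when it is close enough to a\n// lane, roughly aligned with the lane direction, and not moving laterally\n// faster than a plausible road user.\nbool passesLaneletFilter(\ndouble distance_from_lane, double angle_diff_from_lane, double lateral_velocity,\ndouble max_distance_from_lane, double max_angle_diff_from_lane,\ndouble max_lateral_velocity)\n{\nreturn distance_from_lane <= max_distance_from_lane &&\nstd::abs(angle_diff_from_lane) <= max_angle_diff_from_lane &&\nstd::abs(lateral_velocity) <= max_lateral_velocity;\n}\n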

"},{"location":"perception/autoware_radar_tracks_msgs_converter/","title":"radar_tracks_msgs_converter","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#radar_tracks_msgs_converter","title":"radar_tracks_msgs_converter","text":"

This package converts radar_msgs/msg/RadarTracks into autoware_perception_msgs/msg/DetectedObject and autoware_perception_msgs/msg/TrackedObject.

"},{"location":"perception/autoware_radar_tracks_msgs_converter/#design","title":"Design","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#background","title":"Background","text":"

Autoware uses radar_msgs/msg/RadarTracks.msg as the radar object input data. To make radar object data easy to use in the Autoware perception module, radar_tracks_msgs_converter converts the message type from radar_msgs/msg/RadarTracks.msg to autoware_perception_msgs/msg/DetectedObject. In addition, because many detection modules assume the base_link frame, radar_tracks_msgs_converter provides a frame_id transformation function.

"},{"location":"perception/autoware_radar_tracks_msgs_converter/#note","title":"Note","text":"

radar_tracks_msgs_converter converts the label from radar_msgs/msg/RadarTrack.msg to an Autoware label. The label IDs are defined as below.

|Label|RadarTrack|Autoware|\n|--|--|--|\n|UNKNOWN|32000|0|\n|CAR|32001|1|\n|TRUCK|32002|2|\n|BUS|32003|3|\n|TRAILER|32004|4|\n|MOTORCYCLE|32005|5|\n|BICYCLE|32006|6|\n|PEDESTRIAN|32007|7|

In radar_msgs/msg/RadarTrack.msg, additional vendor-specific classifications are permitted starting from 32000. The Autoware object labels are defined in ObjectClassification.
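
Since the RadarTrack classifications in the table above are consecutive from 32000, the conversion can be sketched as a simple offset (illustrative only, not the package's actual code):

#include <cstdint>\n\n// Hypothetical sketch of the label conversion in the table above:\n// RadarTrack classifications start at 32000, Autoware labels at 0 (UNKNOWN).\nuint8_t convertRadarTrackLabel(uint16_t radar_track_classification)\n{\nconstexpr uint16_t kOffset = 32000;\nif (radar_track_classification < kOffset || radar_track_classification > kOffset + 7) {\nreturn 0;  // UNKNOWN\n}\nreturn static_cast<uint8_t>(radar_track_classification - kOffset);\n}\n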

"},{"location":"perception/autoware_radar_tracks_msgs_converter/#interface","title":"Interface","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#input","title":"Input","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#output","title":"Output","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_radar_tracks_msgs_converter/#parameter-summary","title":"Parameter Summary","text":"Name Type Description Default Range update_rate_hz float The update rate [hz] of the output topic 20.0 \u22650.0 new_frame_id string The header frame_id of the output topic base_link N/A use_twist_compensation boolean Flag to enable the linear compensation of ego vehicle's twist false N/A use_twist_yaw_compensation boolean Flag to enable the compensation of yaw rotation of ego vehicle's twist false N/A static_object_speed_threshold float Threshold to treat detected objects as static objects 1.0 N/A"},{"location":"perception/autoware_radar_tracks_msgs_converter/#parameter-description","title":"Parameter Description","text":"

This parameter is the update rate for the onTimer function. It should match the frame rate of the input topics.

This parameter is the header frame_id of the output topic.

This parameter is the flag to enable compensation for the linear motion of the ego vehicle. If it is true, the twist of the output objects is compensated by the ego vehicle's linear motion.

This parameter is the flag to enable compensation for the yaw rotation of the ego vehicle. If it is true, the ego motion compensation also considers the yaw motion of the ego vehicle.

This parameter is the threshold used to determine the is_stationary flag. If the velocity is lower than this parameter, the is_stationary flag of the DetectedObject is set to true and the object is treated as static.
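
A minimal sketch of how these three parameters can interact, with all quantities in the base_link frame (hypothetical helper; the actual node works on TwistWithCovariance):

#include <cmath>\n\nstruct Velocity2d\n{\ndouble vx;\ndouble vy;\n};\n\n// Hypothetical sketch: add the ego velocity at the object's position\n// (linear part, plus the yaw-rate term omega x r), then flag slow objects\n// as stationary.\nVelocity2d compensateEgoMotion(\nVelocity2d relative_velocity, double object_x, double object_y,\ndouble ego_vx, double ego_yaw_rate,\nbool use_twist_compensation, bool use_twist_yaw_compensation,\ndouble static_object_speed_threshold, bool & is_stationary)\n{\nVelocity2d ground_velocity = relative_velocity;\nif (use_twist_compensation) {\nground_velocity.vx += ego_vx;  // linear ego-motion compensation\n}\nif (use_twist_yaw_compensation) {\nground_velocity.vx += -ego_yaw_rate * object_y;  // (omega x r).x\nground_velocity.vy += ego_yaw_rate * object_x;   // (omega x r).y\n}\nis_stationary =\nstd::hypot(ground_velocity.vx, ground_velocity.vy) < static_object_speed_threshold;\nreturn ground_velocity;\n}\n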

"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/","title":"low_intensity_cluster_filter","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#low_intensity_cluster_filter","title":"low_intensity_cluster_filter","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#purpose","title":"Purpose","text":"

The low_intensity_cluster_filter is a node that filters clusters based on the intensity of their pointcloud.

It mainly focuses on filtering out unknown objects with very low-intensity pointclouds, such as false detections caused by raindrops or water splashes from the ego vehicle or other fast-moving vehicles.

"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#input","title":"Input","text":"Name Type Description input/object tier4_perception_msgs::msg::DetectedObjectsWithFeature input detected objects"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#output","title":"Output","text":"Name Type Description output/object tier4_perception_msgs::msg::DetectedObjectsWithFeature filtered detected objects"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered. max_x float 60.00 Maximum of x of the filter effective range min_x float -20.00 Minimum of x of the filter effective range max_y float 20.00 Maximum of y of the filter effective range min_y float -20.00 Minium of y of the filter effective range intensity_threshold float 1.0 The threshold of average intensity for filter existence_probability_threshold float 0.2 The existence probability threshold to apply the filter"},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_raindrop_cluster_filter/raindrop_cluster_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_shape_estimation/","title":"autoware_shape_estimation","text":""},{"location":"perception/autoware_shape_estimation/#autoware_shape_estimation","title":"autoware_shape_estimation","text":""},{"location":"perception/autoware_shape_estimation/#purpose","title":"Purpose","text":"

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

"},{"location":"perception/autoware_shape_estimation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_shape_estimation/#fitting-algorithms","title":"Fitting algorithms","text":" "},{"location":"perception/autoware_shape_estimation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_shape_estimation/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster"},{"location":"perception/autoware_shape_estimation/#output","title":"Output","text":"Name Type Description output/objects autoware_perception_msgs::msg::DetectedObjects detected objects with refined shape"},{"location":"perception/autoware_shape_estimation/#parameters","title":"Parameters","text":"Name Type Description Default Range use_corrector boolean The flag to apply rule-based corrector. true N/A use_filter boolean The flag to apply rule-based filter true N/A use_vehicle_reference_yaw boolean The flag to use vehicle reference yaw for corrector false N/A use_vehicle_reference_shape_size boolean The flag to use vehicle reference shape size false N/A use_boost_bbox_optimizer boolean The flag to use boost bbox optimizer false N/A model_params.use_ml_shape_estimator boolean The flag to apply use ml bounding box estimator. true N/A model_params.minimum_points integer The minimum number of points to fit a bounding box. 16 N/A model_params.precision string The precision of the model. fp32 N/A model_params.batch_size integer The batch size of the model. 32 N/A model_params.build_only boolean The flag to build the model only. false N/A"},{"location":"perception/autoware_shape_estimation/#ml-based-shape-implementation","title":"ML Based Shape Implementation","text":"

The model takes a point cloud and an object label (provided by camera detections/Apollo instance segmentation) as input and outputs the 3D bounding box of the object.

The ML-based shape estimation algorithm uses a PointNet model as its backbone to estimate the 3D bounding box of the object. The model is trained on the NuScenes dataset with vehicle labels (Car, Truck, Bus, Trailer).

The implemented model combines an STN (Spatial Transformer Network), which learns the transformation of the input point cloud into a canonical space, with a PointNet that predicts the 3D bounding box of the object. The bounding box estimation part of the paper Frustum PointNets for 3D Object Detection from RGB-D Data was used as a reference.

The model predicts the following outputs for each object:

"},{"location":"perception/autoware_shape_estimation/#training-ml-based-shape-estimation-model","title":"Training ML Based Shape Estimation Model","text":"

To train the model, you need ground truth 3D bounding box annotations. When using the mmdetection3d repository for training a 3D object detection algorithm, these ground truth annotations are saved and utilized for data augmentation. These annotations are used as an essential dataset for training the shape estimation model effectively.

"},{"location":"perception/autoware_shape_estimation/#preparing-the-dataset","title":"Preparing the Dataset","text":""},{"location":"perception/autoware_shape_estimation/#install-mmdetection3d-prerequisites","title":"Install MMDetection3D prerequisites","text":"

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name train-shape-estimation python=3.8 -y\nconda activate train-shape-estimation\n

Step 3. Install PyTorch

conda install pytorch torchvision -c pytorch\n
"},{"location":"perception/autoware_shape_estimation/#install-mmdetection3d","title":"Install mmdetection3d","text":"

Step 1. Install MMEngine, MMCV, and MMDetection using MIM

pip install -U openmim\nmim install mmengine\nmim install 'mmcv>=2.0.0rc4'\nmim install 'mmdet>=3.0.0rc5, <3.3.0'\n

Step 2. Install Autoware's MMDetection3D fork

git clone https://github.com/autowarefoundation/mmdetection3d.git\ncd mmdetection3d\npip install -v -e .\n
"},{"location":"perception/autoware_shape_estimation/#preparing-nuscenes-dataset-for-training","title":"Preparing NuScenes dataset for training","text":"

Step 1. Download the NuScenes dataset from the official website and extract the dataset to a folder of your choice.

Note: The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.

Step 2. Create a symbolic link to the dataset folder

ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/\n

Step 3. Prepare the NuScenes data by running:

cd mmdetection3d\npython tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --only-gt-database True\n
"},{"location":"perception/autoware_shape_estimation/#clone-bounding-box-estimator-model","title":"Clone Bounding Box Estimator model","text":"
git clone https://github.com/autowarefoundation/bbox_estimator.git\n
"},{"location":"perception/autoware_shape_estimation/#split-the-dataset-into-training-and-validation-sets","title":"Split the dataset into training and validation sets","text":"
cd bbox_estimator\npython3 utils/split_dbinfos.py --dataset_path /path/to/mmdetection3d/data/nuscenes --classes 'car' 'truck' 'bus' 'trailer'  --train_ratio 0.8\n
"},{"location":"perception/autoware_shape_estimation/#training-and-deploying-the-model","title":"Training and Deploying the model","text":""},{"location":"perception/autoware_shape_estimation/#training-the-model","title":"Training the model","text":"
# Detailed training options can be found in the training script\n# For more details, run `python3 train.py --help`\npython3 train.py --dataset_path /path/to/mmdetection3d/data/nuscenes\n
"},{"location":"perception/autoware_shape_estimation/#deploying-the-model","title":"Deploying the model","text":"
# Convert the trained model to ONNX format\npython3 onnx_converter.py --weight_path /path/to/best_checkpoint.pth --output_path /path/to/output.onnx\n

Give the output path of the ONNX model to the model_path parameter in the shape_estimation node launch file.

"},{"location":"perception/autoware_shape_estimation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD

"},{"location":"perception/autoware_shape_estimation/#referencesexternal-links","title":"References/External links","text":"

L-shape fitting implementation of the paper:

@conference{Zhang-2017-26536,\nauthor = {Xiao Zhang and Wenda Xu and Chiyu Dong and John M. Dolan},\ntitle = {Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners},\nbooktitle = {2017 IEEE Intelligent Vehicles Symposium},\nyear = {2017},\nmonth = {June},\nkeywords = {autonomous driving, laser scanner, perception, segmentation},\n}\n

Frustum PointNets for 3D Object Detection from RGB-D Data:

@inproceedings{qi2018frustum,\ntitle={Frustum pointnets for 3d object detection from rgb-d data},\nauthor={Qi, Charles R and Liu, Wei and Wu, Chenxia and Su, Hao and Guibas, Leonidas J},\nbooktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},\npages={918--927},\nyear={2018}\n}\n
"},{"location":"perception/autoware_simple_object_merger/","title":"autoware_simple_object_merger","text":""},{"location":"perception/autoware_simple_object_merger/#autoware_simple_object_merger","title":"autoware_simple_object_merger","text":"

This package can merge multiple topics of autoware_perception_msgs/msg/DetectedObject with low computational cost.

"},{"location":"perception/autoware_simple_object_merger/#design","title":"Design","text":""},{"location":"perception/autoware_simple_object_merger/#background","title":"Background","text":"

object_merger is mainly used to merge DetectedObjects. It has two characteristics. First, object_merger solves the matching problem with a data association algorithm such as the Hungarian algorithm, which has a computational cost. Second, object_merger can handle only 2 DetectedObjects topics per node. To merge 6 DetectedObjects topics, 6 object_merger nodes currently need to run.

Therefore, autoware_simple_object_merger aims to merge multiple DetectedObjects with low computational cost. The package does not use a data association algorithm, which reduces the computational cost, and it can handle more than 2 topics in one node, which avoids launching a large number of nodes.

"},{"location":"perception/autoware_simple_object_merger/#use-case","title":"Use case","text":"

autoware_simple_object_merger can be used for multiple radar detections. By combining multiple radar topics into one topic, the pipeline for faraway detection with radar becomes simpler.

"},{"location":"perception/autoware_simple_object_merger/#limitation","title":"Limitation","text":"

Merged objects will not be published until data from all topics has been received at initialization. In addition, to handle sensor data drops and delays, this package has a timeout parameter. When the latest data of a topic is older than the timeout, that topic is not merged into the output objects. Under the current specification, if data from all topics is received once and a topic's data subsequently drops, the merged objects are published without the objects from topics judged as timed out. The timeout parameter should be chosen based on the sensor cycle time.

Because this package does not perform matching, overlapping objects may appear depending on the input objects. So the output objects should only be used together with post-processing. For now, clustering can be used as post-processing.

"},{"location":"perception/autoware_simple_object_merger/#interface","title":"Interface","text":""},{"location":"perception/autoware_simple_object_merger/#input","title":"Input","text":"

Input topics are defined by the input_topics parameter (List[string]). The type of the input topics is autoware_perception_msgs/msg/DetectedObjects.msg.

"},{"location":"perception/autoware_simple_object_merger/#output","title":"Output","text":""},{"location":"perception/autoware_simple_object_merger/#parameters","title":"Parameters","text":"

This parameter is the update rate for the onTimer function. It should match the frame rate of the input topics.

This parameter is the header frame_id of the output topic. If the output topics are used by the perception module, it should be set to \"base_link\".

This parameter is the threshold for the timeout judgement. If the time difference between the first topic of input_topics and an input topic exceeds this parameter, the objects of that topic are not merged into the output objects.

  for (size_t i = 0; i < input_topic_size; i++) {\ndouble time_diff = rclcpp::Time(objects_data_.at(i)->header.stamp).seconds() -\nrclcpp::Time(objects_data_.at(0)->header.stamp).seconds();\nif (std::abs(time_diff) < node_param_.timeout_threshold) {\n// merge objects\n}\n}\n

This parameter is the list of input topic names. For example, when this package is used for radar objects, the following

input_topics:\n[\n\"/sensing/radar/front_center/detected_objects\",\n\"/sensing/radar/front_left/detected_objects\",\n\"/sensing/radar/rear_left/detected_objects\",\n\"/sensing/radar/rear_center/detected_objects\",\n\"/sensing/radar/rear_right/detected_objects\",\n\"/sensing/radar/front_right/detected_objects\",\n]\n

can be set in the config yaml file. For now, the time difference is calculated from the header times of the first topic in input_topics and each other input topic, so the most important objects to detect should be listed first in input_topics.

"},{"location":"perception/autoware_tensorrt_classifier/","title":"TensorRT Classification for Efficient Dynamic Batched Inference","text":""},{"location":"perception/autoware_tensorrt_classifier/#tensorrt-classification-for-efficient-dynamic-batched-inference","title":"TensorRT Classification for Efficient Dynamic Batched Inference","text":""},{"location":"perception/autoware_tensorrt_classifier/#purpose","title":"Purpose","text":"

This package classifies arbitrary categories using TensorRT for efficient and faster inference. Specifically, it optimizes preprocessing for efficient inference on embedded platforms. Moreover, we support dynamic batched inference on GPUs and DLAs.

"},{"location":"perception/autoware_tensorrt_common/","title":"autoware_tensorrt_common","text":""},{"location":"perception/autoware_tensorrt_common/#autoware_tensorrt_common","title":"autoware_tensorrt_common","text":"

This package provides a high-level API to work with TensorRT. This library simplifies the process of loading, building, and executing TensorRT inference engines using ONNX models. It also includes utilities for profiling and managing TensorRT execution contexts, making it easier to integrate TensorRT-based packages in Autoware.

"},{"location":"perception/autoware_tensorrt_common/#usage","title":"Usage","text":"

Here is an example usage of the library. For the full API documentation, please refer to the doxygen documentation (see header file).

#include <autoware/tensorrt_common/tensorrt_common.hpp>\n\n#include <memory>\n#include <utility>\n#include <vector>\n\nusing autoware::tensorrt_common::TrtCommon;\nusing autoware::tensorrt_common::TrtCommonConfig;\nusing autoware::tensorrt_common::TensorInfo;\nusing autoware::tensorrt_common::NetworkIO;\nusing autoware::tensorrt_common::ProfileDims;\n\nstd::unique_ptr<TrtCommon> trt_common_;\n
"},{"location":"perception/autoware_tensorrt_common/#create-a-tensorrt-common-instance-and-setup-engine","title":"Create a tensorrt common instance and setup engine","text":"
trt_common_ = std::make_unique<TrtCommon>(TrtCommonConfig(\"/path/to/onnx/model.onnx\"));\ntrt_common_->setup();\n
trt_common_ = std::make_unique<TrtCommon>(TrtCommonConfig(\"/path/to/onnx/model.onnx\", \"fp16\", \"/path/to/engine/model.engine\", (1ULL << 30U), -1, false));\n\nstd::vector<NetworkIO> network_io{\nNetworkIO(\"sample_input\", {3, {-1, 64, 512}}), NetworkIO(\"sample_output\", {1, {50}})};\nstd::vector<ProfileDims> profile_dims{\nProfileDims(\"sample_input\", {3, {1, 64, 512}}, {3, {3, 64, 512}}, {3, {9, 64, 512}})};\n\nauto network_io_ptr = std::make_unique<std::vector<NetworkIO>>(network_io);\nauto profile_dims_ptr = std::make_unique<std::vector<ProfileDims>>(profile_dims);\n\ntrt_common_->setup(std::move(profile_dims_ptr), std::move(network_io_ptr));\n

By defining network IO names and dimensions, an extra shape validation is performed after building / loading the engine. This is useful to ensure the model is compatible with the current preprocessing code, which might consist of operations dependent on tensor shapes.

Profile dimension is used to specify the min, opt, and max dimensions for dynamic shapes.

Network IO and/or profile dimensions can be omitted if not needed.

"},{"location":"perception/autoware_tensorrt_common/#setting-input-and-output-tensors","title":"Setting input and output tensors","text":"
bool success = true;\nsuccess &= trt_common_->setTensor(\"sample_input\", sample_input_d_.get(), nvinfer1::Dims{3, {var_size, 64, 512}});\nsuccess &= trt_common_->setTensor(\"sample_output\", sample_output_d_.get());\nreturn success;\n
"},{"location":"perception/autoware_tensorrt_common/#execute-inference","title":"Execute inference","text":"
auto success = trt_common_->enqueueV3(stream_);\nreturn success;\n
"},{"location":"perception/autoware_tensorrt_yolox/","title":"autoware_tensorrt_yolox","text":""},{"location":"perception/autoware_tensorrt_yolox/#autoware_tensorrt_yolox","title":"autoware_tensorrt_yolox","text":""},{"location":"perception/autoware_tensorrt_yolox/#purpose","title":"Purpose","text":"

This package detects target objects, e.g., cars, trucks, bicycles, and pedestrians, and segments target objects such as cars, trucks, buses, pedestrians, buildings, vegetation, roads, and sidewalks in an image, based on a YOLOX model with a multi-header structure.

"},{"location":"perception/autoware_tensorrt_yolox/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_tensorrt_yolox/#cite","title":"Cite","text":"

Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, \"YOLOX: Exceeding YOLO Series in 2021\", arXiv preprint arXiv:2107.08430, 2021 [ref]

"},{"location":"perception/autoware_tensorrt_yolox/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_tensorrt_yolox/#input","title":"Input","text":"Name Type Description in/image sensor_msgs/Image The input image"},{"location":"perception/autoware_tensorrt_yolox/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/image sensor_msgs/Image The image with 2D bounding boxes for visualization out/mask sensor_msgs/Image The semantic segmentation mask out/color_mask sensor_msgs/Image The colorized image of semantic segmentation mask for visualization"},{"location":"perception/autoware_tensorrt_yolox/#parameters","title":"Parameters","text":"Name Type Description Default Range model_path string Path to onnx model. \\((var data_path)/tensorrt_yolox/\\)(var model_name).onnx N/A label_path string Path to label file. $(var data_path)/tensorrt_yolox/label.txt N/A score_threshold float A threshold value of existence probability score, all of objects with score less than this threshold are ignored. 0.35 \u22650.0\u22641.0 nms_threshold float A threshold value of NMS. 0.7 \u22650.0\u22641.0 precision string Operation precision to be used on inference. Valid value is one of: [fp32, fp16, int8]. int8 N/A calibration_algorithm string Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: [Entropy, (Legacy Percentile), MinMax]. Entropy dla_core_id float If positive ID value is specified, the node assign inference task to the DLA core. -1 N/A quantize_first_layer boolean If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8. False N/A quantize_last_layer boolean If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8. False N/A profile_per_layer boolean If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. False N/A clip_value float If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration. 6.0 N/A preprocess_on_gpu boolean If true, pre-processing is performed on GPU. True N/A gpu_id integer GPU ID for selecting CUDA device 0 N/A calibration_image_list_path string Path to a file which contains path to images. Those images will be used for int8 quantization. N/A Name Type Description Default Range ----------------------------- --------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------- --------------- model_path string Path to onnx model. \\((var data_path)/tensorrt_yolox/\\)(var model_name).onnx N/A label_path string Path to label file. $(var data_path)/tensorrt_yolox/label.txt N/A score_threshold float A threshold value of existence probability score, all of objects with score less than this threshold are ignored. 0.35 \u22650.0\u22641.0 nms_threshold float A threshold value of NMS. 
0.7 \u22650.0\u22641.0 precision string Operation precision to be used on inference. Valid value is one of: [fp32, fp16, int8]. fp16 N/A calibration_algorithm string Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: [Entropy, (Legacy Percentile), MinMax]. MinMax dla_core_id float If positive ID value is specified, the node assign inference task to the DLA core. -1 N/A quantize_first_layer boolean If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8. False N/A quantize_last_layer boolean If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8. False N/A profile_per_layer boolean If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. False N/A clip_value float If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration. 0.0 N/A preprocess_on_gpu boolean If true, pre-processing is performed on GPU. True N/A gpu_id integer GPU ID for selecting CUDA device 0 N/A calibration_image_list_path string Path to a file which contains path to images. Those images will be used for int8 quantization. N/A"},{"location":"perception/autoware_tensorrt_yolox/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The label contained in the detected 2D bounding boxes (i.e., out/objects) will be one of the following:

If other labels (case insensitive) are contained in the file specified via the label_file parameter, those are labeled as UNKNOWN, while detected rectangles are drawn in the visualization result (out/image).

The semantic segmentation mask is a grayscale image in which each pixel is the index of one of the following classes:

|index|semantic name|\n|--|--|\n|0|road|\n|1|building|\n|2|wall|\n|3|obstacle|\n|4|traffic_light|\n|5|traffic_sign|\n|6|person|\n|7|vehicle|\n|8|bike|\n|9|road|\n|10|sidewalk|\n|11|roadPaint|\n|12|curbstone|\n|13|crosswalk_others|\n|14|vegetation|\n|15|sky|\n"},{"location":"perception/autoware_tensorrt_yolox/#onnx-model","title":"Onnx model","text":"

A sample model (named yolox-tiny.onnx) is downloaded by the ansible script during the environment preparation stage; if not, please follow Manual downloading of artifacts. To accelerate non-maximum suppression (NMS), which is a common post-process after object detection inference, an EfficientNMS_TRT module is attached after the original YOLOX (tiny) network. The EfficientNMS_TRT module contains fixed values for score_threshold and nms_threshold, hence these parameters are ignored when users specify ONNX models that include this module.

This package accepts both EfficientNMS_TRT-attached ONNX models and models published from the official YOLOX repository (we refer to them as \"plain\" models).

In addition to yolox-tiny.onnx, a custom model named yolox-sPlus-opt-pseudoV2-T4-960x960-T4-seg16cls is also available. This model has a multi-header structure based on YOLOX-s, tuned to perform more accurate detection at almost comparable execution speed to yolox-tiny. To get better results with this model, users are recommended to use some specific running arguments such as precision:=int8, calibration_algorithm:=Entropy, clip_value:=6.0. Users can refer to launch/yolox_sPlus_opt.launch.xml to see how this model can be used. Besides the detection result, this model also outputs an image semantic segmentation result for pointcloud filtering purposes.

All models are automatically converted to TensorRT format. These converted files will be saved in the same directory as the specified ONNX files with the .engine filename extension and reused from the next run. The conversion process may take a while (typically 10 to 20 minutes), and the inference process is blocked until the conversion completes, so it will take some time until detection results are published (or even appear in the topic list) on the first run.

"},{"location":"perception/autoware_tensorrt_yolox/#package-acceptable-model-generation","title":"Package acceptable model generation","text":"

To convert a user's own model saved in PyTorch pth format into ONNX, users can use the converter offered by the official repository. For convenience, only the procedures are described below. Please refer to the official documentation for more detail.

"},{"location":"perception/autoware_tensorrt_yolox/#for-plain-models","title":"For plain models","text":"
  1. Install dependencies

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\n
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth\n
"},{"location":"perception/autoware_tensorrt_yolox/#for-efficientnms_trt-embedded-models","title":"For EfficientNMS_TRT embedded models","text":"
  1. Install dependencies

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\npip3 install git+ssh://git@github.com/wep21/yolox_onnx_modifier.git --user\n
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth \\\n--decode_in_inference\n
  3. Embed EfficientNMS_TRT to the end of YOLOX

    yolox_onnx_modifier YOUR_YOLOX.onnx -o YOUR_YOLOX_WITH_NMS.onnx\n
"},{"location":"perception/autoware_tensorrt_yolox/#label-file","title":"Label file","text":"

A sample label file (named label.txt) and a semantic segmentation color map file (named semseg_color_map.csv) are also downloaded automatically during the environment preparation process (NOTE: this file is incompatible with models that output labels for the COCO dataset (e.g., models from the official YOLOX repository)).

This file represents the correspondence between class indices (integers output from the YOLOX network) and class labels (human-readable strings). This package maps class IDs (starting from 0) to labels according to the order in this file.
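
For illustration, the mapping implied by this file can be reproduced in a few lines of Python (a minimal sketch; the label.txt path is an assumption):

# Read label.txt and map class IDs (starting from 0) to label strings.\nwith open('label.txt') as f:\n    labels = [line.strip() for line in f if line.strip()]\n\nid_to_label = dict(enumerate(labels))\nprint(id_to_label.get(2, 'UNKNOWN'))  # label for class index 2\n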

"},{"location":"perception/autoware_tensorrt_yolox/#reference-repositories","title":"Reference repositories","text":""},{"location":"perception/autoware_tracking_object_merger/","title":"Tracking Object Merger","text":""},{"location":"perception/autoware_tracking_object_merger/#tracking-object-merger","title":"Tracking Object Merger","text":""},{"location":"perception/autoware_tracking_object_merger/#purpose","title":"Purpose","text":"

This package merges two sets of tracked objects from different sensors.

"},{"location":"perception/autoware_tracking_object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Merging tracked objects from different sensors is a combination of data association and state fusion algorithms.

The detailed process depends on the merger policy.

"},{"location":"perception/autoware_tracking_object_merger/#decorative_tracker_merger","title":"decorative_tracker_merger","text":"

In decorative_tracker_merger, we assume there are dominant tracking objects and sub tracking objects. The name decorative means that the sub tracking objects are used to complement the dominant objects.

Usually the dominant tracking objects are from LiDAR and sub tracking objects are from Radar or Camera.

The processing pipeline is shown below.

"},{"location":"perception/autoware_tracking_object_merger/#time-sync","title":"time sync","text":"

Sub objects (Radar or Camera) often arrive at a higher frequency than dominant objects (LiDAR), so we need to synchronize the sub objects' timestamps to the dominant objects'.
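
One way to do this is to interpolate a sub object's state between its two nearest measurements. A minimal 1-D sketch of that idea (illustrative only, not the package's actual implementation):

def interpolate_state(t_prev, x_prev, t_next, x_next, t_query):\n    # Linearly interpolate a scalar state (e.g., position) to the dominant stamp.\n    if t_next == t_prev:\n        return x_prev\n    ratio = (t_query - t_prev) / (t_next - t_prev)\n    return x_prev + ratio * (x_next - x_prev)\n\n# Sub object measured at t=0.00 and t=0.05; dominant object stamped at t=0.02.\nprint(interpolate_state(0.00, 10.0, 0.05, 10.5, 0.02))  # -> 10.2\n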

"},{"location":"perception/autoware_tracking_object_merger/#data-association","title":"data association","text":"

In the data association step, we use the following rules to determine whether two tracked objects represent the same object.

"},{"location":"perception/autoware_tracking_object_merger/#tracklet-update","title":"tracklet update","text":"

Sub tracking objects are merged into dominant tracking objects.

Depending on each tracklet's input sensor state, we update the tracklet state with different rules.

state\\priority 1st 2nd 3rd Kinematics except velocity LiDAR Radar Camera Forward velocity Radar LiDAR Camera Object classification Camera LiDAR Radar"},{"location":"perception/autoware_tracking_object_merger/#tracklet-management","title":"tracklet management","text":"

We use the existence_probability to manage tracklets.

These parameters can be set in config/decorative_tracker_merger.param.yaml.

tracker_state_parameter:\nremove_probability_threshold: 0.3\npublish_probability_threshold: 0.6\ndefault_lidar_existence_probability: 0.7\ndefault_radar_existence_probability: 0.6\ndefault_camera_existence_probability: 0.6\ndecay_rate: 0.1\nmax_dt: 1.0\n
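
These parameters suggest a simple life-cycle rule: the existence probability is raised on new measurements, decays otherwise, and thresholds decide publishing and removal. A hedged sketch of such logic (illustrative only, not the package source):

def update_existence(prob, measured, default_prob=0.7, decay_rate=0.1, dt=0.1):\n    # Raise the probability on a new measurement, otherwise decay it over time.\n    if measured:\n        return max(prob, default_prob)\n    return max(0.0, prob - decay_rate * dt)\n\ndef tracklet_action(prob, remove_th=0.3, publish_th=0.6):\n    # Decide what to do with a tracklet based on its existence probability.\n    if prob < remove_th:\n        return 'remove'\n    return 'publish' if prob >= publish_th else 'keep'\n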
"},{"location":"perception/autoware_tracking_object_merger/#inputparameters","title":"input/parameters","text":"topic name message type description ~input/main_object autoware_perception_msgs::TrackedObjects Dominant tracking objects. Output will be published with this dominant object stamps. ~input/sub_object autoware_perception_msgs::TrackedObjects Sub tracking objects. output/object autoware_perception_msgs::TrackedObjects Merged tracking objects. debug/interpolated_sub_object autoware_perception_msgs::TrackedObjects Interpolated sub tracking objects.

Default parameters are set in config/decorative_tracker_merger.param.yaml.

parameter name description default value base_link_frame_id base link frame id. This is used to transform the tracking object. \"base_link\" time_sync_threshold time sync threshold. If the time difference between two tracking objects is smaller than this value, we consider these two tracking objects are the same object. 0.05 sub_object_timeout_sec sub object timeout. If the sub object is not updated for this time, we consider this object is not exist. 0.5 main_sensor_type main sensor type. This is used to determine the dominant tracking object. \"lidar\" sub_sensor_type sub sensor type. This is used to determine the sub tracking object. \"radar\" tracker_state_parameter tracker state parameter. This is used to manage the tracklet. "},{"location":"perception/autoware_tracking_object_merger/#tuning","title":"tuning","text":"

As explained in tracklet management, this tracker merger tends to maintain both input tracking objects.

If there are many false positive tracking objects, consider adjusting the parameters above, for example by lowering the default existence probability of the less reliable sensor or by increasing decay_rate.

"},{"location":"perception/autoware_tracking_object_merger/#equivalent_tracker_merger","title":"equivalent_tracker_merger","text":"

This is future work.

"},{"location":"perception/autoware_traffic_light_arbiter/","title":"autoware_traffic_light_arbiter","text":""},{"location":"perception/autoware_traffic_light_arbiter/#autoware_traffic_light_arbiter","title":"autoware_traffic_light_arbiter","text":""},{"location":"perception/autoware_traffic_light_arbiter/#purpose","title":"Purpose","text":"

This package receives traffic signals from perception and external (e.g., V2X) components and combines them using either a confidence-based or an external-preference-based approach.

"},{"location":"perception/autoware_traffic_light_arbiter/#trafficlightarbiter","title":"TrafficLightArbiter","text":"

A node that merges traffic light/signal states from image recognition and external (e.g., V2X) systems, and provides the result to a planning component.

"},{"location":"perception/autoware_traffic_light_arbiter/#signal-match-validator","title":"Signal Match Validator","text":"

When enable_signal_matching is set to true, this node validates the match between perception signals and external signals. The table below outlines how the matching process determines the output based on the combination of perception and external signal colors. Each cell represents the outcome when a specific color from a perception signal (columns) intersects with a color from an external signal (rows).
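
Equivalently, the matching reduces to a simple rule: publish a color only when both sources agree on it, and output UNKNOWN otherwise (treating a missing signal as UNKNOWN). A minimal sketch of that rule (hypothetical helper, not the node's code):

def validate_match(perception_color, external_color):\n    # Publish the agreed color; any mismatch or missing signal becomes UNKNOWN.\n    if perception_color == external_color and perception_color != 'UNKNOWN':\n        return perception_color\n    return 'UNKNOWN'\n\nprint(validate_match('RED', 'RED'))    # -> RED\nprint(validate_match('RED', 'GREEN'))  # -> UNKNOWN\n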

External \\ Perception RED AMBER GREEN UNKNOWN Not Received RED RED UNKNOWN UNKNOWN UNKNOWN UNKNOWN AMBER UNKNOWN AMBER UNKNOWN UNKNOWN UNKNOWN GREEN UNKNOWN UNKNOWN GREEN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN Not Received UNKNOWN UNKNOWN UNKNOWN UNKNOWN UNKNOWN"},{"location":"perception/autoware_traffic_light_arbiter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_arbiter/#input","title":"Input","text":"Name Type Description ~/sub/vector_map autoware_map_msgs::msg::LaneletMapBin The vector map to get valid traffic signal ids. ~/sub/perception_traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The traffic signals from the image recognition pipeline. ~/sub/external_traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The traffic signals from an external system."},{"location":"perception/autoware_traffic_light_arbiter/#output","title":"Output","text":"Name Type Description ~/pub/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The merged traffic signal state."},{"location":"perception/autoware_traffic_light_arbiter/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_arbiter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description external_delay_tolerance double 5.0 The duration in seconds an external message is considered valid for merging in comparison with current time external_time_tolerance double 5.0 The duration in seconds an external message is considered valid for merging in comparison with a timestamp of perception message perception_time_tolerance double 1.0 The duration in seconds a perception message is considered valid for merging in comparison with a timestamp of external message external_priority bool false Whether or not externals signals take precedence over perception-based ones. If false, the merging uses confidence as a criteria enable_signal_matching bool false Decide whether to validate the match between perception signals and external signals. If set to true, verify that the colors match and only publish them if they are identical"},{"location":"perception/autoware_traffic_light_classifier/","title":"autoware_traffic_light_classifier","text":""},{"location":"perception/autoware_traffic_light_classifier/#autoware_traffic_light_classifier","title":"autoware_traffic_light_classifier","text":""},{"location":"perception/autoware_traffic_light_classifier/#purpose","title":"Purpose","text":"

autoware_traffic_light_classifier is a package for classifying traffic light labels using a cropped image around a traffic light. This package has two classifier models: cnn_classifier and hsv_classifier.

"},{"location":"perception/autoware_traffic_light_classifier/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If the height and width of ~/input/rois are 0, the color, shape, and confidence of ~/output/traffic_signals become UNKNOWN, CIRCLE, and 0.0, respectively. If ~/input/rois is judged as backlight, the color, shape, and confidence of ~/output/traffic_signals become UNKNOWN, UNKNOWN, and 0.0, respectively.

"},{"location":"perception/autoware_traffic_light_classifier/#cnn_classifier","title":"cnn_classifier","text":"

Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. We trained classifiers for vehicular signals and pedestrian signals separately. For vehicular signals, a total of 83,400 TIER IV internal images of Japanese traffic lights (58,600 for training, 14,800 for evaluation, and 10,000 for testing) were used for fine-tuning.

Name Input Size Test Accuracy EfficientNet-b1 128 x 128 99.76% MobileNet-v2 224 x 224 99.81%

For pedestrian signals, a total of 21,199 TIER IV internal images of Japanese traffic lights (17,860 for training, 2,114 for evaluation, and 1,225 for testing) were used for fine-tuning. The information of the models is listed here:

Name Input Size Test Accuracy EfficientNet-b1 128 x 128 97.89% MobileNet-v2 224 x 224 99.10%"},{"location":"perception/autoware_traffic_light_classifier/#hsv_classifier","title":"hsv_classifier","text":"

Traffic light colors (green, yellow, and red) are classified in the HSV color model.
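
A minimal sketch of HSV-range classification (assuming OpenCV; the threshold values below are placeholders for the green_min_h, green_max_h, etc. parameters listed later):

import cv2\nimport numpy as np\n\ndef classify_hsv(bgr_roi):\n    # Count pixels inside each color's HSV range and pick the dominant one.\n    # (Red hue also wraps around 180 in OpenCV; omitted here for brevity.)\n    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)\n    ranges = {\n        'green': ((45, 100, 100), (90, 255, 255)),\n        'yellow': ((20, 100, 100), (35, 255, 255)),\n        'red': ((0, 100, 100), (10, 255, 255)),\n    }\n    counts = {name: cv2.countNonZero(cv2.inRange(hsv, np.array(lo), np.array(hi)))\n              for name, (lo, hi) in ranges.items()}\n    return max(counts, key=counts.get)\n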

"},{"location":"perception/autoware_traffic_light_classifier/#about-label","title":"About Label","text":"

The message type is designed to comply with the unified road signs proposed at the Vienna Convention. This idea has also been proposed in Autoware.Auto.

There are rules for naming the labels that nodes receive. One traffic light is represented by a comma-separated string of the form color1-shape1, color2-shape2.

For example, the label for a traffic light showing a red circle and a red cross must be expressed as \"red-circle, red-cross\".
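
Parsing such a label string into (color, shape) pairs is straightforward; a small sketch:

def parse_label(label):\n    # 'red-circle, red-cross' -> [('red', 'circle'), ('red', 'cross')]\n    return [tuple(token.strip().split('-')) for token in label.split(',')]\n\nprint(parse_label('red-circle, red-cross'))\n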

These colors and shapes are assigned to the message as follows:

"},{"location":"perception/autoware_traffic_light_classifier/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_classifier/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs::msg::Image input image ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray rois of traffic lights"},{"location":"perception/autoware_traffic_light_classifier/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals tier4_perception_msgs::msg::TrafficLightArray classified signals ~/output/debug/image sensor_msgs::msg::Image image for debugging"},{"location":"perception/autoware_traffic_light_classifier/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_classifier/#node-parameters","title":"Node Parameters","text":"Name Type Description classifier_type int If the value is 1, cnn_classifier is used data_path str Packages data and artifacts directory path backlight_threshold double If the intensity of light is grater than this threshold, the color and shape of the corresponding ROI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to 0.0. The value should be set in the range of [0.0, 1.0]. If you wouldn't like to use this feature, please set it to 1.0. classify_traffic_light_type int If the value is 0, vehicular signals are classified. If the value is 1, pedestrian signals are classified."},{"location":"perception/autoware_traffic_light_classifier/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/autoware_traffic_light_classifier/#cnn_classifier_1","title":"cnn_classifier","text":"Name Type Description classifier_label_path str path to the model file classifier_model_path str path to the label file classifier_precision str TensorRT precision, fp16 or int8 classifier_mean vector\\ 3-channel input image mean classifier_std vector\\ 3-channel input image std apply_softmax bool whether or not apply softmax"},{"location":"perception/autoware_traffic_light_classifier/#hsv_classifier_1","title":"hsv_classifier","text":"Name Type Description green_min_h int the minimum hue of green color green_min_s int the minimum saturation of green color green_min_v int the minimum value (brightness) of green color green_max_h int the maximum hue of green color green_max_s int the maximum saturation of green color green_max_v int the maximum value (brightness) of green color yellow_min_h int the minimum hue of yellow color yellow_min_s int the minimum saturation of yellow color yellow_min_v int the minimum value (brightness) of yellow color yellow_max_h int the maximum hue of yellow color yellow_max_s int the maximum saturation of yellow color yellow_max_v int the maximum value (brightness) of yellow color red_min_h int the minimum hue of red color red_min_s int the minimum saturation of red color red_min_v int the minimum value (brightness) of red color red_max_h int the maximum hue of red color red_max_s int the maximum saturation of red color red_max_v int the maximum value (brightness) of red color"},{"location":"perception/autoware_traffic_light_classifier/#training-traffic-light-classifier-model","title":"Training Traffic Light Classifier Model","text":""},{"location":"perception/autoware_traffic_light_classifier/#overview","title":"Overview","text":"

This guide provides detailed instructions on training a traffic light classifier model using the mmlab/mmpretrain repository and deploying it using mmlab/mmdeploy. If you wish to create a custom traffic light classifier model with your own dataset, please follow the steps outlined below.

"},{"location":"perception/autoware_traffic_light_classifier/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/autoware_traffic_light_classifier/#use-sample-dataset","title":"Use Sample Dataset","text":"

Autoware offers a sample dataset that illustrates the training procedures for traffic light classification. This dataset comprises 1045 images categorized into red, green, and yellow labels. To utilize this sample dataset, please download it from link and extract it to a designated folder of your choice.

"},{"location":"perception/autoware_traffic_light_classifier/#use-your-custom-dataset","title":"Use Your Custom Dataset","text":"

To train a traffic light classifier, adopt a structured subfolder format in which each subfolder represents a distinct class. Below is an illustrative dataset structure example:

DATASET_ROOT\n    \u251c\u2500\u2500 TRAIN\n    \u2502    \u251c\u2500\u2500 RED\n    \u2502    \u2502   \u251c\u2500\u2500 001.png\n    \u2502    \u2502   \u251c\u2500\u2500 002.png\n    \u2502    \u2502   \u2514\u2500\u2500 ...\n    \u2502    \u2502\n    \u2502    \u251c\u2500\u2500 GREEN\n    \u2502    \u2502    \u251c\u2500\u2500 001.png\n    \u2502    \u2502    \u251c\u2500\u2500 002.png\n    \u2502    \u2502    \u2514\u2500\u2500...\n    \u2502    \u2502\n    \u2502    \u251c\u2500\u2500 YELLOW\n    \u2502    \u2502    \u251c\u2500\u2500 001.png\n    \u2502    \u2502    \u251c\u2500\u2500 002.png\n    \u2502    \u2502    \u2514\u2500\u2500...\n    \u2502    \u2514\u2500\u2500 ...\n    \u2502\n    \u251c\u2500\u2500 VAL\n    \u2502       \u2514\u2500\u2500...\n    \u2502\n    \u2502\n    \u2514\u2500\u2500 TEST\n           \u2514\u2500\u2500 ...\n
"},{"location":"perception/autoware_traffic_light_classifier/#installation","title":"Installation","text":""},{"location":"perception/autoware_traffic_light_classifier/#prerequisites","title":"Prerequisites","text":"

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name tl-classifier python=3.8 -y\nconda activate tl-classifier\n

Step 3. Install PyTorch

Please ensure you have PyTorch installed, compatible with CUDA 11.6, as it is a requirement for current Autoware

conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia\n
"},{"location":"perception/autoware_traffic_light_classifier/#install-mmlabmmpretrain","title":"Install mmlab/mmpretrain","text":"

Step 1. Install mmpretrain from source

cd ~/\ngit clone https://github.com/open-mmlab/mmpretrain.git\ncd mmpretrain\npip install -U openmim && mim install -e .\n
"},{"location":"perception/autoware_traffic_light_classifier/#training","title":"Training","text":"

MMPretrain offers a training script that is controlled through a configuration file. Leveraging an inheritance design pattern, you can effortlessly tailor the training script using Python files as configuration files.

In the example, we demonstrate the training steps on the MobileNetV2 model, but you have the flexibility to employ alternative classification models such as EfficientNetV2, EfficientNetV3, ResNet, and more.

"},{"location":"perception/autoware_traffic_light_classifier/#create-a-config-file","title":"Create a config file","text":"

Generate a configuration file for your preferred model within the configs folder

touch ~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n

Open the configuration file in your preferred text editor and copy the provided content into it. Adjust the data_root variable to match the path of your dataset. You are welcome to customize the configuration parameters for the model, dataset, and scheduler to suit your preferences.

# Inherit model, schedule and default_runtime from base model\n_base_ = [\n    '../_base_/models/mobilenet_v2_1x.py',\n    '../_base_/schedules/imagenet_bs256_epochstep.py',\n    '../_base_/default_runtime.py'\n]\n\n# Set the number of classes to the model\n# You can also change other model parameters here\n# For detailed descriptions of model parameters, please refer to link below\n# (Customize model)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/modules.html]\nmodel = dict(head=dict(num_classes=3, topk=(1, 3)))\n\n# Set max epochs and validation interval\ntrain_cfg = dict(by_epoch=True, max_epochs=50, val_interval=5)\n\n# Set optimizer and lr scheduler\noptim_wrapper = dict(\n    optimizer=dict(type='SGD', lr=0.001, momentum=0.9))\nparam_scheduler = dict(type='StepLR', by_epoch=True, step_size=1, gamma=0.98)\n\ndataset_type = 'CustomDataset'\ndata_root = \"/PATH/OF/YOUR/DATASET\"\n\n# Customize data preprocessing and dataloader pipeline for training set\n# These parameters calculated for the sample dataset\ndata_preprocessor = dict(\n    mean=[0.2888 * 256, 0.2570 * 256, 0.2329 * 256],\n    std=[0.2106 * 256, 0.2037 * 256, 0.1864 * 256],\n    num_classes=3,\n    to_rgb=True,\n)\n\n# Customize data preprocessing and dataloader pipeline for train set\n# For detailed descriptions of data pipeline, please refer to link below\n# (Customize data pipeline)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/pipeline.html]\ntrain_pipeline = [\n    dict(type='LoadImageFromFile'),\n    dict(type='Resize', scale=224),\n    dict(type='RandomFlip', prob=0.5, direction='horizontal'),\n    dict(type='PackInputs'),\n]\ntrain_dataloader = dict(\n    dataset=dict(\n        type=dataset_type,\n        data_root=data_root,\n        ann_file='',\n        data_prefix='train',\n        with_label=True,\n        pipeline=train_pipeline,\n    ),\n    num_workers=8,\n    batch_size=32,\n    sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\n# Customize data preprocessing and dataloader pipeline for test set\ntest_pipeline = [\n    dict(type='LoadImageFromFile'),\n    dict(type='Resize', scale=224),\n    dict(type='PackInputs'),\n]\n\n# Customize data preprocessing and dataloader pipeline for validation set\nval_cfg = dict()\nval_dataloader = dict(\n    dataset=dict(\n        type=dataset_type,\n        data_root=data_root,\n        ann_file='',\n        data_prefix='val',\n        with_label=True,\n        pipeline=test_pipeline,\n    ),\n    num_workers=8,\n    batch_size=32,\n    sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\nval_evaluator = dict(topk=(1, 3,), type='Accuracy')\n\ntest_dataloader = val_dataloader\ntest_evaluator = val_evaluator\n
"},{"location":"perception/autoware_traffic_light_classifier/#start-training","title":"Start training","text":"
cd ~/mmpretrain\npython tools/train.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n

Training logs and weights will be saved in the work_dirs/mobilenet-v2_8xb32_custom folder.

"},{"location":"perception/autoware_traffic_light_classifier/#convert-pytorch-model-to-onnx-model","title":"Convert PyTorch model to ONNX model","text":""},{"location":"perception/autoware_traffic_light_classifier/#install-mmdeploy","title":"Install mmdeploy","text":"

The 'mmdeploy' toolset is designed for deploying your trained model onto various target devices. With its capabilities, you can seamlessly convert PyTorch models into the ONNX format.

# Activate your conda environment\nconda activate tl-classifier\n\n# Install mmengine and mmcv\nmim install mmengine\nmim install \"mmcv>=2.0.0rc2\"\n\n# Install mmdeploy\npip install mmdeploy==1.2.0\n\n# Support onnxruntime\npip install mmdeploy-runtime==1.2.0\npip install mmdeploy-runtime-gpu==1.2.0\npip install onnxruntime-gpu==1.8.1\n\n# Clone mmdeploy repository\ncd ~/\ngit clone -b main https://github.com/open-mmlab/mmdeploy.git\n
"},{"location":"perception/autoware_traffic_light_classifier/#convert-pytorch-model-to-onnx-model_1","title":"Convert PyTorch model to ONNX model","text":"
cd ~/mmdeploy\n\n# Run deploy.py script\n# deploy.py script takes 5 main arguments in this order: config file path, train config file path,\n# checkpoint file path, demo image path, and work directory path\npython tools/deploy.py \\\n~/mmdeploy/configs/mmpretrain/classification_onnxruntime_static.py \\\n~/mmpretrain/configs/mobilenet_v2/train_mobilenet_v2.py \\\n~/mmpretrain/work_dirs/train_mobilenet_v2/epoch_300.pth \\\n/SAMPLE/IMAGE/DIRECTORY \\\n--work-dir mmdeploy_model/mobilenet_v2\n

Converted ONNX model will be saved in the mmdeploy/mmdeploy_model/mobilenet_v2 folder.

After obtaining your ONNX model, update the parameters defined in the launch file (e.g., model_file_path, label_file_path, input_h, input_w...). Note that we only support labels defined in tier4_perception_msgs::msg::TrafficLightElement.

"},{"location":"perception/autoware_traffic_light_classifier/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_traffic_light_classifier/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_traffic_light_classifier/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_traffic_light_classifier/#referencesexternal-links","title":"References/External links","text":"

[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, \"MobileNetV2: Inverted Residuals and Linear Bottlenecks,\" 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

[2] Tan, Mingxing, and Quoc Le. \"EfficientNet: Rethinking model scaling for convolutional neural networks.\" International conference on machine learning. PMLR, 2019.

"},{"location":"perception/autoware_traffic_light_classifier/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/autoware_traffic_light_fine_detector/","title":"autoware_traffic_light_fine_detector","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#autoware_traffic_light_fine_detector","title":"autoware_traffic_light_fine_detector","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#purpose","title":"Purpose","text":"

This is a package for traffic light detection using YOLOX-s.

"},{"location":"perception/autoware_traffic_light_fine_detector/#training-information","title":"Training Information","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#pretrained-model","title":"Pretrained Model","text":"

The model is based on YOLOX, and the pretrained model can be downloaded from here.

"},{"location":"perception/autoware_traffic_light_fine_detector/#training-data","title":"Training Data","text":"

The model was fine-tuned on around 17,000 TIER IV internal images of Japanese traffic lights.

"},{"location":"perception/autoware_traffic_light_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":"

You can download the ONNX file using these instructions. Please visit autoware-documentation for more information.

"},{"location":"perception/autoware_traffic_light_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Based on the camera image and the global ROI array detected by the map_based_detector node, a CNN-based detection method enables highly accurate traffic light detection. If a traffic light cannot be detected, the x_offset, y_offset, height, and width of the output ROI become 0. The ROIs detected by YOLOX are selected using a combination with expect/rois; at this time, the set of ROIs is evaluated as a whole, not each ROI in isolation.

"},{"location":"perception/autoware_traffic_light_fine_detector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs::msg::Image The full size camera image ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray The array of ROIs detected by map_based_detector ~/expect/rois tier4_perception_msgs::msg::TrafficLightRoiArray The array of ROIs detected by map_based_detector without any offset, used to select the best detection results"},{"location":"perception/autoware_traffic_light_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois tier4_perception_msgs::msg::TrafficLightRoiArray The detected accurate rois ~/debug/exe_time_ms tier4_debug_msgs::msg::Float32Stamped The time taken for inference"},{"location":"perception/autoware_traffic_light_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description fine_detector_score_thresh double 0.3 If the objectness score is less than this value, the object is ignored fine_detector_nms_thresh double 0.65 IoU threshold to perform Non-Maximum Suppression"},{"location":"perception/autoware_traffic_light_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description data_path string \"$(env HOME)/autoware_data\" packages data and artifacts directory path fine_detector_model_path string \"\" The onnx file name for yolo model fine_detector_label_path string \"\" The label file with label names for detected objects written on it fine_detector_precision string \"fp16\" The inference mode: \"fp32\", \"fp16\" approximate_sync bool false Flag for whether to ues approximate sync policy gpu_id integer 0 ID for the selecting CUDA GPU device"},{"location":"perception/autoware_traffic_light_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_traffic_light_fine_detector/#reference-repositories","title":"Reference repositories","text":"

YOLOX github repository

"},{"location":"perception/autoware_traffic_light_map_based_detector/","title":"autoware_traffic_light_map_based_detector","text":""},{"location":"perception/autoware_traffic_light_map_based_detector/#autoware_traffic_light_map_based_detector","title":"autoware_traffic_light_map_based_detector","text":""},{"location":"perception/autoware_traffic_light_map_based_detector/#overview","title":"Overview","text":"

autoware_traffic_light_map_based_detector calculates where the traffic lights will appear in the image based on the HD map.

Calibration and vibration errors can be entered as parameters, and the size of the detected RegionOfInterest will change according to the error.

If the node receives route information, it only looks at traffic lights on that route. If the node receives no route information, it looks at all traffic lights within a radius of max_detection_range whose angle to the camera is less than car_traffic_light_max_angle_range or pedestrian_traffic_light_max_angle_range.
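
Conceptually, the no-route filtering is a per-light range and angle check. A hedged geometric sketch (hypothetical helper; using the bearing from camera to light is one plausible reading of the angle check):

import math\n\ndef is_candidate(camera_xy, camera_yaw, light_xy, max_range, max_angle_deg):\n    # Keep a traffic light if it is within range and within the angle range.\n    dx = light_xy[0] - camera_xy[0]\n    dy = light_xy[1] - camera_xy[1]\n    if math.hypot(dx, dy) > max_range:\n        return False\n    angle = abs(math.atan2(dy, dx) - camera_yaw)\n    angle = min(angle, 2.0 * math.pi - angle)  # wrap to [0, pi]\n    return math.degrees(angle) <= max_angle_deg\n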

"},{"location":"perception/autoware_traffic_light_map_based_detector/#input-topics","title":"Input topics","text":"Name Type Description ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/camera_info sensor_msgs::msg::CameraInfo target camera parameter ~/input/route autoware_planning_msgs::msg::LaneletRoute optional: route"},{"location":"perception/autoware_traffic_light_map_based_detector/#output-topics","title":"Output topics","text":"Name Type Description ~/output/rois tier4_perception_msgs::msg::TrafficLightRoiArray location of traffic lights in image corresponding to the camera info ~/expect/rois tier4_perception_msgs::msg::TrafficLightRoiArray location of traffic lights in image without any offset ~/debug/markers visualization_msgs::msg::MarkerArray markers which show a line that combines from camera to each traffic light"},{"location":"perception/autoware_traffic_light_map_based_detector/#node-parameters","title":"Node parameters","text":"Parameter Type Description max_vibration_pitch double Maximum error in pitch direction. If -5~+5, it will be 10. max_vibration_yaw double Maximum error in yaw direction. If -5~+5, it will be 10. max_vibration_height double Maximum error in height direction. If -5~+5, it will be 10. max_vibration_width double Maximum error in width direction. If -5~+5, it will be 10. max_vibration_depth double Maximum error in depth direction. If -5~+5, it will be 10. max_detection_range double Maximum detection range in meters. Must be positive. min_timestamp_offset double Minimum timestamp offset when searching for corresponding tf. max_timestamp_offset double Maximum timestamp offset when searching for corresponding tf. timestamp_sample_len double Sampling length between min_timestamp_offset and max_timestamp_offset. car_traffic_light_max_angle_range double Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive. pedestrian_traffic_light_max_angle_range double Maximum angle between the pedestrian traffic light and the camera in degrees. Must be positive."},{"location":"perception/autoware_traffic_light_multi_camera_fusion/","title":"autoware_traffic_light_multi_camera_fusion","text":""},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#autoware_traffic_light_multi_camera_fusion","title":"autoware_traffic_light_multi_camera_fusion","text":""},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#overview","title":"Overview","text":"

autoware_traffic_light_multi_camera_fusion performs traffic light signal fusion which can be summarized as the following two tasks:

  1. Multi-Camera-Fusion: fusing each traffic light signal detected by different cameras.
  2. Group-Fusion: fusing traffic light signals within the same group, i.e., traffic lights that share the same regulatory element ID defined in the lanelet2 map.

The fusion method applies the following rules, in order (a small sketch follows the list):

  1. Prefer the result with the newer timestamp if the results are from the same sensor
  2. Prefer results that are not elements.size() == 1 && color == UNKNOWN && shape == UNKNOWN
  3. Prefer results in which no vertex of the ROI is at the edge of the image
  4. Prefer results with higher confidence
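
A pairwise sketch of these preference rules (illustrative dictionaries, not the node's actual types):

def is_uninformative(result):\n    # Rule 2: a single-element UNKNOWN/UNKNOWN result carries no information.\n    e = result['elements']\n    return len(e) == 1 and e[0] == ('UNKNOWN', 'UNKNOWN')\n\ndef prefer(a, b):\n    # Apply rules 1-4 in order to pick the better of two results.\n    if a['sensor'] == b['sensor'] and a['stamp'] != b['stamp']:\n        return a if a['stamp'] > b['stamp'] else b  # rule 1: newer timestamp\n    if is_uninformative(a) != is_uninformative(b):\n        return b if is_uninformative(a) else a  # rule 2: informative result\n    if a['roi_on_edge'] != b['roi_on_edge']:\n        return b if a['roi_on_edge'] else a  # rule 3: ROI fully inside image\n    return a if a['confidence'] >= b['confidence'] else b  # rule 4: confidence\n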
"},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#input-topics","title":"Input topics","text":"

For every camera, the following three topics are subscribed:

Name Type Description ~/<camera_namespace>/camera_info sensor_msgs::msg::CameraInfo camera info from map_based_detector ~/<camera_namespace>/detection/rois tier4_perception_msgs::msg::TrafficLightRoiArray detection roi from fine_detector ~/<camera_namespace>/classification/traffic_signals tier4_perception_msgs::msg::TrafficLightArray classification result from classifier

You don't need to configure these topics manually; just provide the camera_namespaces parameter and the node will automatically extract each <camera_namespace> and create the subscribers.

"},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#output-topics","title":"Output topics","text":"Name Type Description ~/output/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray traffic light signal fusion result"},{"location":"perception/autoware_traffic_light_multi_camera_fusion/#node-parameters","title":"Node parameters","text":"Parameter Type Description camera_namespaces vector\\ Camera Namespaces to be fused message_lifespan double The maximum timestamp span to be fused approximate_sync bool Whether work in Approximate Synchronization Mode perform_group_fusion bool Whether perform Group Fusion"},{"location":"perception/autoware_traffic_light_occlusion_predictor/","title":"autoware_traffic_light_occlusion_predictor","text":""},{"location":"perception/autoware_traffic_light_occlusion_predictor/#autoware_traffic_light_occlusion_predictor","title":"autoware_traffic_light_occlusion_predictor","text":""},{"location":"perception/autoware_traffic_light_occlusion_predictor/#overview","title":"Overview","text":"

autoware_traffic_light_occlusion_predictor receives the detected traffic light ROIs and calculates the occlusion ratio of each ROI using the point cloud. If an ROI is judged as occluded, the color, shape, and confidence of ~/output/traffic_signals become UNKNOWN, UNKNOWN, and 0.0, respectively. This node publishes whenever the car or pedestrian traffic_signals message arrives and is processed.

For each traffic light ROI, hundreds of pixels are selected and projected into 3D space. Then, from the camera's point of view, the number of projected pixels occluded by the point cloud is counted and used to calculate the occlusion ratio for the ROI. As shown in the following image, the red pixels are occluded, and the occlusion ratio is the number of red pixels divided by the total number of pixels.

If no point cloud is received, or all point clouds have a very large timestamp difference from the camera image, the occlusion ratio of each ROI is set to 0.
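
In other words, the ratio is simply occluded sampled pixels over all sampled pixels, with a zero fallback. A trivial sketch:

def occlusion_ratio(occluded_count, sampled_count, cloud_usable=True):\n    # Fraction of sampled ROI pixels hidden by the point cloud;\n    # 0 without a usable point cloud.\n    if not cloud_usable or sampled_count == 0:\n        return 0.0\n    return occluded_count / sampled_count\n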

"},{"location":"perception/autoware_traffic_light_occlusion_predictor/#input-topics","title":"Input topics","text":"Name Type Description ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/car/traffic_signals tier4_perception_msgs::msg::TrafficLightArray vehicular traffic light signals ~/input/pedestrian/traffic_signals tier4_perception_msgs::msg::TrafficLightArray pedestrian traffic light signals ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray traffic light detections ~/input/camera_info sensor_msgs::msg::CameraInfo target camera parameter ~/input/cloud sensor_msgs::msg::PointCloud2 LiDAR point cloud"},{"location":"perception/autoware_traffic_light_occlusion_predictor/#output-topics","title":"Output topics","text":"Name Type Description ~/output/traffic_signals tier4_perception_msgs::msg::TrafficLightArray traffic light signals which occluded image results overwritten"},{"location":"perception/autoware_traffic_light_occlusion_predictor/#node-parameters","title":"Node parameters","text":"Parameter Type Description azimuth_occlusion_resolution_deg double azimuth resolution of LiDAR point cloud (degree) elevation_occlusion_resolution_deg double elevation resolution of LiDAR point cloud (degree) max_valid_pt_dist double The points within this distance would be used for calculation max_image_cloud_delay double The maximum delay between LiDAR point cloud and camera image max_wait_t double The maximum time waiting for the LiDAR point cloud max_occlusion_ratio int The maximum occlusion ratio for setting signal as unknown"},{"location":"perception/autoware_traffic_light_visualization/","title":"autoware_traffic_light_visualization","text":""},{"location":"perception/autoware_traffic_light_visualization/#autoware_traffic_light_visualization","title":"autoware_traffic_light_visualization","text":""},{"location":"perception/autoware_traffic_light_visualization/#purpose","title":"Purpose","text":"

The autoware_traffic_light_visualization is a package that includes two visualization nodes:

"},{"location":"perception/autoware_traffic_light_visualization/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/autoware_traffic_light_visualization/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_map_visualizer","title":"traffic_light_map_visualizer","text":""},{"location":"perception/autoware_traffic_light_visualization/#input","title":"Input","text":"Name Type Description ~/input/tl_state tier4_perception_msgs::msg::TrafficLightGroupArray status of traffic lights ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"perception/autoware_traffic_light_visualization/#output","title":"Output","text":"Name Type Description ~/output/traffic_light visualization_msgs::msg::MarkerArray marker array that indicates status of traffic lights"},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_roi_visualizer","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/autoware_traffic_light_visualization/#input_1","title":"Input","text":"Name Type Description ~/input/tl_state tier4_perception_msgs::msg::TrafficLightArray status of traffic lights ~/input/image sensor_msgs::msg::Image the image captured by perception cameras ~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_fine_detector ~/input/rough/rois (option) tier4_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_map_based_detector"},{"location":"perception/autoware_traffic_light_visualization/#output_1","title":"Output","text":"Name Type Description ~/output/image sensor_msgs::msg::Image output image with ROIs"},{"location":"perception/autoware_traffic_light_visualization/#parameters","title":"Parameters","text":""},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_map_visualizer_1","title":"traffic_light_map_visualizer","text":"

None

"},{"location":"perception/autoware_traffic_light_visualization/#traffic_light_roi_visualizer_1","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/autoware_traffic_light_visualization/#node-parameters","title":"Node Parameters","text":"Name Type Description Default Range enable_fine_detection boolean whether to visualize result of the traffic light fine detection false N/A use_image_transport boolean whether to apply image transport to compress the output debugging image in the traffic light fine detection true N/A"},{"location":"perception/autoware_traffic_light_visualization/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/autoware_traffic_light_visualization/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/perception_utils/","title":"perception_utils","text":""},{"location":"perception/perception_utils/#perception_utils","title":"perception_utils","text":""},{"location":"perception/perception_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions that are useful across the perception module.

"},{"location":"planning/","title":"Planning Components","text":""},{"location":"planning/#planning-components","title":"Planning Components","text":""},{"location":"planning/#getting-started","title":"Getting Started","text":"

The Autoware.Universe Planning Modules represent a cutting-edge component within the broader open-source autonomous driving software stack. These modules play a pivotal role in autonomous vehicle navigation, skillfully handling route planning, dynamic obstacle avoidance, and real-time adaptation to varied traffic conditions.

"},{"location":"planning/#planning-module","title":"Planning Module","text":"

The Module in the Planning Component refers to the various components that collectively form the planning system of the software. These modules cover a range of functionalities necessary for autonomous vehicle planning. Autoware's planning modules are modularized, meaning users can customize which functions are enabled by changing the configuration. This modular design allows for flexibility and adaptability to different scenarios and requirements in autonomous vehicle operations.

"},{"location":"planning/#how-to-enable-or-disable-planning-module","title":"How to Enable or Disable Planning Module","text":"

Enabling and disabling modules involves managing settings in key configuration and launch files.

"},{"location":"planning/#key-files-for-configuration","title":"Key Files for Configuration","text":"

The default_preset.yaml file acts as the primary configuration file, where planning modules can be enabled or disabled. Furthermore, users can also select which motion planner to use from among the available motion planners. For example:

Note

Click here to view the default_preset.yaml.

The launch files reference the settings defined in default_preset.yaml to apply the configurations when the behavior path planner's node is running. For instance, the parameter avoidance.enable_module in

<param name=\"avoidance.enable_module\" value=\"$(var launch_avoidance_module)\"/>\n

corresponds to launch_avoidance_module from default_preset.yaml.

"},{"location":"planning/#parameter-configuration","title":"Parameter Configuration","text":"

There are multiple parameters available for configuration, and users have the option to modify them here. It's important to note that not all parameters are adjustable via rqt_reconfigure. To ensure changes take effect, modify the parameters and then restart Autoware. Additionally, detailed information about each parameter is available in the corresponding documents under the planning tab.

"},{"location":"planning/#integrating-a-custom-module-into-autoware-a-step-by-step-guide","title":"Integrating a Custom Module into Autoware: A Step-by-Step Guide","text":"

This guide outlines the steps for integrating your custom module into Autoware:

- arg:\nname: launch_intersection_module\ndefault: \"true\"\n
<arg name=\"launch_intersection_module\" default=\"true\"/>\n\n<let\nname=\"behavior_velocity_planner_launch_modules\"\nvalue=\"$(eval &quot;'$(var behavior_velocity_planner_launch_modules)' + 'behavior_velocity_planner::IntersectionModulePlugin, '&quot;)\"\nif=\"$(var launch_intersection_module)\"\n/>\n
<arg name=\"behavior_velocity_planner_intersection_module_param_path\" value=\"$(var behavior_velocity_config_path)/intersection.param.yaml\"/>\n
<param from=\"$(var behavior_velocity_planner_intersection_module_param_path)\"/>\n

Note

Depending on the specific module you wish to add, the relevant files and steps may vary. This guide provides a general overview and serves as a starting point. It's important to adapt these instructions to the specifics of your module.

"},{"location":"planning/#join-our-community-driven-effort","title":"Join Our Community-Driven Effort","text":"

Autoware thrives on community collaboration. Every contribution, big or small, is invaluable to us. Whether it's reporting bugs, suggesting improvements, offering new ideas, or anything else you can think of \u2013 we welcome all contributions with open arms.

"},{"location":"planning/#how-to-contribute","title":"How to Contribute?","text":"

Ready to contribute? Great! To get started, simply visit our Contributing Guidelines where you'll find all the information you need to jump in. This includes instructions on submitting bug reports, proposing feature enhancements, and even contributing to the codebase.

"},{"location":"planning/#join-our-planning-control-working-group-meetings","title":"Join Our Planning & Control Working Group Meetings","text":"

The Planning & Control working group is an integral part of our community. We meet bi-weekly to discuss our current progress, upcoming challenges, and brainstorm new ideas. These meetings are a fantastic opportunity to directly contribute to our discussions and decision-making processes.

Meeting Details:

Interested in joining our meetings? We\u2019d love to have you! For more information on how to participate, visit the following link: How to participate in the working group.

"},{"location":"planning/#citations","title":"Citations","text":"

Occasionally, we publish papers specific to the Planning Component in Autoware. We encourage you to explore these publications and find valuable insights for your work. If you find them useful and incorporate any of our methodologies or algorithms in your projects, citing our papers would be immensely helpful. This support allows us to reach a broader audience and continue contributing to the field.

If you use the Jerk Constrained Velocity Planning algorithm in the Motion Velocity Smoother module in the Planning Component, we kindly request you cite the relevant paper.

Y. Shimizu, T. Horibe, F. Watanabe and S. Kato, \"Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach,\" 2022 International Conference on Robotics and Automation (ICRA)

@inproceedings{shimizu2022,\n  author={Shimizu, Yutaka and Horibe, Takamasa and Watanabe, Fumiya and Kato, Shinpei},\n  booktitle={2022 International Conference on Robotics and Automation (ICRA)},\n  title={Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach},\n  year={2022},\n  pages={5814-5820},\n  doi={10.1109/ICRA46639.2022.9812155}}\n
"},{"location":"planning/autoware_costmap_generator/","title":"costmap_generator","text":""},{"location":"planning/autoware_costmap_generator/#costmap_generator","title":"costmap_generator","text":""},{"location":"planning/autoware_costmap_generator/#costmap_generator_node","title":"costmap_generator_node","text":"

This node reads PointCloud and/or DynamicObjectArray and creates an OccupancyGrid and a GridMap. VectorMap (Lanelet2) is optional.
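
A common way to combine such inputs into a single costmap is a per-cell maximum over the individual layers. A hedged numpy sketch of that idea (not this node's exact implementation):

import numpy as np\n\n# Example layers with costs in [0.0, 1.0] (objects, ground-removed points, map).\nobjects_layer = np.zeros((200, 200))\npoints_layer = np.zeros((200, 200))\nvectormap_layer = np.zeros((200, 200))\n\n# The combined cost per cell is the most pessimistic (maximum) of all layers.\ncombined = np.maximum.reduce([objects_layer, points_layer, vectormap_layer])\n\n# Scale to the OccupancyGrid convention (0 to 100).\noccupancy_grid = (combined * 100).astype(np.int8)\n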

"},{"location":"planning/autoware_costmap_generator/#input-topics","title":"Input topics","text":"Name Type Description ~input/objects autoware_perception_msgs::PredictedObjects predicted objects, for obstacles areas ~input/points_no_ground sensor_msgs::PointCloud2 ground-removed points, for obstacle areas which can't be detected as objects ~input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map, for drivable areas ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/autoware_costmap_generator/#output-topics","title":"Output topics","text":"Name Type Description ~output/grid_map grid_map_msgs::GridMap costmap as GridMap, values are from 0.0 to 1.0 ~output/occupancy_grid nav_msgs::OccupancyGrid costmap as OccupancyGrid, values are from 0 to 100"},{"location":"planning/autoware_costmap_generator/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/autoware_costmap_generator/#how-to-launch","title":"How to launch","text":"
  1. Execute the command source install/setup.bash to set up the environment

  2. Run ros2 launch costmap_generator costmap_generator.launch.xml to launch the node

"},{"location":"planning/autoware_costmap_generator/#parameters","title":"Parameters","text":"Name Type Description update_rate double timer's update rate activate_by_scenario bool if true, activate by scenario = parking. Otherwise, activate if vehicle is inside parking lot. use_objects bool whether using ~input/objects or not use_points bool whether using ~input/points_no_ground or not use_wayarea bool whether using wayarea from ~input/vector_map or not use_parkinglot bool whether using parkinglot from ~input/vector_map or not costmap_frame string created costmap's coordinate vehicle_frame string vehicle's coordinate map_frame string map's coordinate grid_min_value double minimum cost for gridmap grid_max_value double maximum cost for gridmap grid_resolution double resolution for gridmap grid_length_x int size of gridmap for x direction grid_length_y int size of gridmap for y direction grid_position_x int offset from coordinate in x direction grid_position_y int offset from coordinate in y direction maximum_lidar_height_thres double maximum height threshold for pointcloud data (relative to the vehicle_frame) minimum_lidar_height_thres double minimum height threshold for pointcloud data (relative to the vehicle_frame) expand_rectangle_size double expand object's rectangle with this value size_of_expansion_kernel int kernel size for blurring effect on object's costmap"},{"location":"planning/autoware_costmap_generator/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_external_velocity_limit_selector/","title":"External Velocity Limit Selector","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#external-velocity-limit-selector","title":"External Velocity Limit Selector","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#purpose","title":"Purpose","text":"

The external_velocity_limit_selector_node is a node that keeps external velocity limits consistent. This module subscribes to

  1. velocity limit command sent by API,
  2. velocity limit command sent by Autoware internal modules.

VelocityLimit.msg contains not only the max velocity but also information about the acceleration/jerk constraints on deceleration. The external_velocity_limit_selector_node integrates the lowest velocity limit and the strictest jerk constraint to calculate the hardest velocity limit, which respects all the deceleration points and max velocities sent by the API and Autoware internal modules.
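
A hedged sketch of that selection: take the minimum max velocity and the most restrictive acceleration/jerk bounds across all received limits (illustrative field names, not the node's actual message types):

def select_hardest(limits):\n    # limits: list of dicts with 'max_velocity', 'min_acc', 'min_jerk', 'max_jerk'.\n    return {\n        'max_velocity': min(l['max_velocity'] for l in limits),\n        # The strongest allowed deceleration and the sharpest jerk bounds win.\n        'min_acc': min(l['min_acc'] for l in limits),\n        'min_jerk': min(l['min_jerk'] for l in limits),\n        'max_jerk': max(l['max_jerk'] for l in limits),\n    }\n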

"},{"location":"planning/autoware_external_velocity_limit_selector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

WIP

"},{"location":"planning/autoware_external_velocity_limit_selector/#inputs","title":"Inputs","text":"Name Type Description ~input/velocity_limit_from_api tier4_planning_msgs::VelocityLimit velocity limit from api ~input/velocity_limit_from_internal tier4_planning_msgs::VelocityLimit velocity limit from autoware internal modules ~input/velocity_limit_clear_command_from_internal tier4_planning_msgs::VelocityLimitClearCommand velocity limit clear command"},{"location":"planning/autoware_external_velocity_limit_selector/#outputs","title":"Outputs","text":"Name Type Description ~output/max_velocity tier4_planning_msgs::VelocityLimit current information of the hardest velocity limit"},{"location":"planning/autoware_external_velocity_limit_selector/#parameters","title":"Parameters","text":"Parameter Type Description max_velocity double default max velocity [m/s] normal.min_acc double minimum acceleration [m/ss] normal.max_acc double maximum acceleration [m/ss] normal.min_jerk double minimum jerk [m/sss] normal.max_jerk double maximum jerk [m/sss] limit.min_acc double minimum acceleration to be observed [m/ss] limit.max_acc double maximum acceleration to be observed [m/ss] limit.min_jerk double minimum jerk to be observed [m/sss] limit.max_jerk double maximum jerk to be observed [m/sss]"},{"location":"planning/autoware_external_velocity_limit_selector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/autoware_external_velocity_limit_selector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_freespace_planner/","title":"The `autoware_freespace_planner`","text":""},{"location":"planning/autoware_freespace_planner/#the-autoware_freespace_planner","title":"The autoware_freespace_planner","text":""},{"location":"planning/autoware_freespace_planner/#freespace_planner_node","title":"freespace_planner_node","text":"

freespace_planner_node is a global path planner node that plans a trajectory in spaces containing static/dynamic obstacles. This node is currently based on the Hybrid A* search algorithm in the freespace_planning_algorithms package. Other algorithms such as RRT* will also be added and made selectable in the future.

Note Due to the constraints of trajectory following, the output trajectory is split so that it includes only single-direction paths. In other words, the output trajectory doesn't include both forward and backward trajectories at once.
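
Splitting at driving-direction changes can be sketched as follows (points carry a signed longitudinal velocity; the structure is hypothetical):

def split_by_direction(points):\n    # Cut the trajectory wherever the sign of the longitudinal velocity flips,\n    # so that each segment is forward-only or backward-only.\n    segments, current = [], [points[0]]\n    for prev, curr in zip(points, points[1:]):\n        if prev['velocity'] * curr['velocity'] < 0:  # direction change\n            segments.append(current)\n            current = []\n        current.append(curr)\n    segments.append(current)\n    return segments\n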

"},{"location":"planning/autoware_freespace_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/route autoware_planning_msgs::Route route and goal pose ~input/occupancy_grid nav_msgs::OccupancyGrid costmap, for drivable areas ~input/odometry nav_msgs::Odometry vehicle velocity, for checking whether vehicle is stopped ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/autoware_freespace_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_planning_msgs::Trajectory trajectory to be followed is_completed bool (implemented as rosparam) whether all split trajectory are published"},{"location":"planning/autoware_freespace_planner/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/autoware_freespace_planner/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in freespace_planner.launch or add args when executing roslaunch
  2. roslaunch freespace_planner freespace_planner.launch
"},{"location":"planning/autoware_freespace_planner/#parameters","title":"Parameters","text":"Name Type Description Default Range planning_algorithm string Planning algorithm to use, options: astar, rrtstar. astar ['astar', 'rrtstar'] waypoints_velocity float velocity in output trajectory (currently, only constant velocity is supported. 5.0 N/A update_rate float timer's update rate 10.0 N/A th_arrived_distance_m float Threshold for considering the vehicle has arrived at its goal. 1.0 N/A th_stopped_time_sec float Time threshold for considering the vehicle as stopped. 1.0 N/A th_stopped_velocity_mps float Velocity threshold for considering the vehicle as stopped. 0.01 N/A th_course_out_distance_m float Threshold distance for considering the vehicle has deviated from its course. 1.0 N/A th_obstacle_time_sec float Time threshold for checking obstacles along the trajectory. 1.0 N/A vehicle_shape_margin_m float Margin around the vehicle shape for planning. 1.0 N/A replan_when_obstacle_found boolean Replan path when an obstacle is found. True N/A replan_when_course_out boolean Replan path when the vehicle deviates from its course. True N/A time_limit float Time limit for the planner. 30000.0 N/A max_turning_ratio float Maximum turning ratio, relative to the actual turning limit of the vehicle. 0.5 N/A turning_steps float Number of steps for turning. 1 N/A theta_size float Number of discretized angles for search. 144 N/A angle_goal_range float Range of acceptable angles at the goal. 6.0 N/A lateral_goal_range float Lateral range of acceptable goal positions. 0.5 N/A longitudinal_goal_range float Longitudinal range of acceptable goal positions. 2.0 N/A curve_weight float Weight for curves in the cost function. 0.5 N/A reverse_weight float Weight for reverse movement in the cost function. 1.0 N/A direction_change_weight float Weight for direction changes in the cost function. 1.5 N/A obstacle_threshold float Threshold for considering an obstacle in the costmap. 100 N/A astar.search_method string Search method to use, options: forward, backward. forward ['forward', 'backward'] astar.only_behind_solutions boolean Consider only solutions behind the vehicle. False N/A astar.use_back boolean Allow reverse motion in A* search. True N/A astar.adapt_expansion_distance boolean Allow varying A* expansion distance based on space configuration. True N/A astar.expansion_distance float Distance for expanding nodes in A* search. 0.5 N/A astar.near_goal_distance float Distance threshold to consider near goal. 4.0 N/A astar.distance_heuristic_weight float Weight for the distance heuristic in A* search. 1.0 N/A astar.smoothness_weight float Weight for the smoothness (change in curvature) in A* search. 0.5 N/A astar.obstacle_distance_weight float Weight for distance to obstacle in A* search. 0.5 N/A astar.goal_lat_distance_weight float Weight for lateral distance from original goal. 0.5 N/A rrtstar.enable_update boolean Enable updates in RRT*. True N/A rrtstar.use_informed_sampling boolean Use informed sampling in RRT*. True N/A rrtstar.max_planning_time float Maximum planning time for RRT*. 150.0 N/A rrtstar.neighbor_radius float Radius for neighboring nodes in RRT*. 8.0 N/A rrtstar.margin float Margin for RRT* sampling. 
0.1 N/A"},{"location":"planning/autoware_freespace_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description planning_algorithms string algorithms used in the node vehicle_shape_margin_m float collision margin in planning algorithm update_rate double timer's update rate waypoints_velocity double velocity in output trajectory (currently, only constant velocity is supported) th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped th_course_out_distance_m double threshold distance to check if vehicle is out of course th_obstacle_time_sec double threshold time to check if obstacle is on the trajectory vehicle_shape_margin_m double vehicle margin replan_when_obstacle_found bool whether to replan when an obstacle has been found on the trajectory replan_when_course_out bool whether to replan when the vehicle is out of course"},{"location":"planning/autoware_freespace_planner/#planner-common-parameters","title":"Planner common parameters","text":"Parameter Type Description time_limit double time limit of planning maximum_turning_ratio double max ratio of actual turning range to use turning_steps double number of turning steps within turning range theta_size double the number of discretized angles lateral_goal_range double goal range of lateral position longitudinal_goal_range double goal range of longitudinal position angle_goal_range double goal range of angle curve_weight double additional cost factor for curve actions reverse_weight double additional cost factor for reverse actions direction_change_weight double additional cost factor for switching direction obstacle_threshold double threshold for regarding a certain grid as obstacle"},{"location":"planning/autoware_freespace_planner/#a-search-parameters","title":"A* search parameters","text":"Parameter Type Description search_method string method of searching, start to goal or vice versa only_behind_solutions bool whether to restrict the solutions to be behind the goal use_back bool whether to use backward trajectories adapt_expansion_distance bool if true, adapt expansion distance based on environment expansion_distance double length of expansion for node transitions near_goal_distance double near goal distance threshold distance_heuristic_weight double heuristic weight for estimating node's cost smoothness_weight double cost factor for change in curvature obstacle_distance_weight double cost factor for distance to obstacle goal_lat_distance_weight double cost factor for lateral distance from goal"},{"location":"planning/autoware_freespace_planner/#rrt-search-parameters","title":"RRT* search parameters","text":"Parameter Type Description max_planning_time double maximum planning time [msec] (used only when enable_update is set true) enable_update bool whether to keep updating the solution after a feasible solution is found, until max_planning_time elapses use_informed_sampling bool Use informed RRT* (of Gammell et al.) 
neighbor_radius double neighbor radius of RRT* algorithm margin double safety margin ensured in path's collision checking in RRT* algorithm"},{"location":"planning/autoware_freespace_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_freespace_planning_algorithms/","title":"freespace planning algorithms","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#freespace-planning-algorithms","title":"freespace planning algorithms","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#role","title":"Role","text":"

This package is for development of path planning algorithms in free space.

"},{"location":"planning/autoware_freespace_planning_algorithms/#implemented-algorithms","title":"Implemented algorithms","text":"

Please see rrtstar.md for a note on the implementation of informed RRT*.

NOTE: For RRT*, one can choose whether to keep updating the solution after a feasible solution is found. If not, the algorithm is almost (but not exactly, because of the rewiring procedure) the same as vanilla RRT. If updating is enabled, one can additionally choose whether the sampling after a feasible solution is found is \"informed\". If set to true, the algorithm is equivalent to the informed RRT* of Gammell et al. 2014.

"},{"location":"planning/autoware_freespace_planning_algorithms/#algorithm-selection","title":"Algorithm selection","text":"

There is a trade-off between algorithm speed and resulting solution quality. Sorting the algorithms along the spectrum from (high-quality solution / slow) to (low-quality solution / fast) gives A* -> informed RRT* -> RRT. Note that in almost all cases informed RRT* yields better solution quality than RRT* given the same computational time budget, so RRT* is omitted from the comparison.

Some selection criteria would be:

"},{"location":"planning/autoware_freespace_planning_algorithms/#guide-to-implement-a-new-algorithm","title":"Guide to implement a new algorithm","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#running-the-standalone-tests-and-visualization","title":"Running the standalone tests and visualization","text":"

Build the package with ros-test and run the tests:

colcon build --packages-select autoware_freespace_planning_algorithms\ncolcon test --packages-select autoware_freespace_planning_algorithms\n

Inside the test, simulation results are stored in /tmp/fpalgos-{algorithm_type}-case{scenario_number} as a rosbag. By loading these result files with test/debug_plot.py, one can create plots visualizing the path and obstacles, as shown in the figures below. The created figures are then again saved in /tmp.

"},{"location":"planning/autoware_freespace_planning_algorithms/#a-single-curvature-case","title":"A* (single curvature case)","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#informed-rrt-with-200-msec-time-budget","title":"informed RRT* with 200 msec time budget","text":""},{"location":"planning/autoware_freespace_planning_algorithms/#rrt-without-update-almost-same-as-rrt","title":"RRT* without update (almost same as RRT)","text":"

The black cells, green box, and red box indicate obstacles, the start configuration, and the goal configuration, respectively. The sequence of blue boxes indicates the solution path.

"},{"location":"planning/autoware_freespace_planning_algorithms/#extension-to-python-module-only-a-supported","title":"Extension to Python module (only A* supported)","text":"

There is an implementation of the extension to the Python module. You can try A* search via Python by setting the following:

Then, you can get

The example code is scripts/example/example.py. Note that you need to build this package and source the setup shell script in advance.

"},{"location":"planning/autoware_freespace_planning_algorithms/#license-notice","title":"License notice","text":"

Files src/reeds_shepp.cpp and include/astar_search/reeds_shepp.h are fetched from pyReedsShepp. Note that the implementation in pyReedsShepp is also heavily based on the code in ompl. Both pyReedsShepp and ompl are distributed under the 3-clause BSD license.

"},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/","title":"Rrtstar","text":""},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/#note-on-implementation-of-informed-rrt","title":"Note on implementation of informed RRT*","text":""},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/#preliminary-knowledge-on-informed-rrt","title":"Preliminary knowledge on informed-RRT*","text":"

Let us define \\(f(x)\\) as the minimum cost of the path when the path is constrained to pass through \\(x\\) (so the path will be \\(x_{\\mathrm{start}} \\to x \\to x_{\\mathrm{goal}}\\)). Also, let us define \\(c_{\\mathrm{best}}\\) as the current minimum cost of the feasible paths. Let us define a set \\(X(f) = \\left\\{ x \\in X \\mid f(x) < c_{\\mathrm{best}} \\right\\}\\). If we could sample a new point from \\(X(f)\\) instead of \\(X\\) as in vanilla RRT*, the chance that \\(c_{\\mathrm{best}}\\) is updated increases, and thus the convergence rate is improved.

In most cases, \\(f(x)\\) is unknown, and thus it is straightforward to approximate the function \\(f\\) by a heuristic function \\(\\hat{f}\\). A heuristic function is admissible if \\(\\forall x \\in X, \\hat{f}(x) < f(x)\\), which is a sufficient condition for convergence to the optimal path. A good heuristic function \\(\\hat{f}\\) has two properties: 1) it is an admissible, tight lower bound of \\(f\\) and 2) sampling from \\(X(\\hat{f})\\) is easy.

According to Gammell et al. [1], a good heuristic function when the path is always straight is \\(\\hat{f}(x) = ||x_{\\mathrm{start}} - x|| + ||x - x_{\\mathrm{goal}}||\\). If we do not assume any obstacle information, this heuristic is the tightest. Also, \\(X(\\hat{f})\\) is a hyper-ellipsoid, and hence sampling from it can be done analytically.

"},{"location":"planning/autoware_freespace_planning_algorithms/rrtstar/#modification-to-fit-reeds-sheep-path-case","title":"Modification to fit reeds-sheep path case","text":"

In the vehicle case, the state is \\(x = (x_{1}, x_{2}, \\theta)\\). Unlike normal informed RRT*, where we can connect states by a straight line, here we connect the vehicle path by a Reeds-Shepp path. So, we need to modify the original algorithm a bit. To this end, one might first consider a heuristic function \\(\\hat{f}_{\\mathrm{RS}}(x) = \\mathrm{RS}(x_{\\mathrm{start}}, x) + \\mathrm{RS}(x, x_{\\mathrm{goal}}) < f(x)\\), where \\(\\mathrm{RS}\\) computes the Reeds-Shepp distance. Though it is good in the sense of tightness, sampling from \\(X(\\hat{f}_{RS})\\) is really difficult. Therefore, we use \\(\\hat{f}_{euc} = ||\\mathrm{pos}(x_{\\mathrm{start}}) - \\mathrm{pos}(x)|| + ||\\mathrm{pos}(x) - \\mathrm{pos}(x_{\\mathrm{goal}})||\\), which is admissible because \\(\\forall x \\in X, \\hat{f}_{euc}(x) < \\hat{f}_{\\mathrm{RS}}(x) < f(x)\\). Here, the \\(\\mathrm{pos}\\) function returns the position \\((x_{1}, x_{2})\\) of the vehicle.

Sampling from \\(X(\\hat{f}_{\\mathrm{euc}})\\) is easy because \\(X(\\hat{f}_{\\mathrm{euc}}) = \\mathrm{Ellipse} \\times (-\\pi, \\pi]\\). Here, \\(\\mathrm{Ellipse}\\)'s focal points are \\(x_{\\mathrm{start}}\\) and \\(x_{\\mathrm{goal}}\\) and its conjugate diameter is \\(\\sqrt{c_{\\mathrm{best}}^{2} - ||\\mathrm{pos}(x_{\\mathrm{start}}) - \\mathrm{pos}(x_{\\mathrm{goal}})||^{2}}\\) (similar to the normal informed RRT*'s ellipsoid). Please notice that \\(\\theta\\) can be arbitrary because \\(\\hat{f}_{\\mathrm{euc}}\\) is independent of \\(\\theta\\).

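As a reference, a minimal sketch of this sampling scheme (the names below, such as sampleInformed, are illustrative and not the package's actual API):

#include <cmath>\n#include <random>\n\n// Sample a state (x1, x2, theta) from X(f_hat_euc) = Ellipse x (-pi, pi].\n// The ellipse's focal points are the start and goal positions, its transverse\n// diameter is c_best, and its conjugate diameter is sqrt(c_best^2 - c_min^2).\nstruct State { double x1; double x2; double theta; };\n\nState sampleInformed(double sx, double sy, double gx, double gy, double c_best, std::mt19937 & rng)\n{\n  const double c_min = std::hypot(gx - sx, gy - sy);  // distance between focal points\n  const double a = c_best / 2.0;                      // semi-transverse axis\n  const double b = std::sqrt(c_best * c_best - c_min * c_min) / 2.0;  // semi-conjugate axis\n  std::uniform_real_distribution<double> unit(-1.0, 1.0);\n  double ux = 0.0; double uy = 0.0;\n  do {  // rejection-sample the unit disk, then stretch it onto the ellipse\n    ux = unit(rng); uy = unit(rng);\n  } while (ux * ux + uy * uy > 1.0);\n  const double yaw = std::atan2(gy - sy, gx - sx);  // align the ellipse with the start-goal axis\n  const double ex = a * ux; const double ey = b * uy;\n  State s;\n  s.x1 = (sx + gx) / 2.0 + ex * std::cos(yaw) - ey * std::sin(yaw);\n  s.x2 = (sy + gy) / 2.0 + ex * std::sin(yaw) + ey * std::cos(yaw);\n  std::uniform_real_distribution<double> angle(-M_PI, M_PI);\n  s.theta = angle(rng);  // theta is arbitrary since f_hat_euc does not depend on it\n  return s;\n}\n
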
[1] Gammell et al., \"Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic.\" IROS (2014)

"},{"location":"planning/autoware_mission_planner/","title":"Mission Planner","text":""},{"location":"planning/autoware_mission_planner/#mission-planner","title":"Mission Planner","text":""},{"location":"planning/autoware_mission_planner/#purpose","title":"Purpose","text":"

Mission Planner calculates a route that navigates from the current ego pose to the goal pose following the given check points. The route is made of a sequence of lanes on a static map. Dynamic objects (e.g. pedestrians and other vehicles) and dynamic map information (e.g. road construction that blocks some lanes) are not considered during route planning. Therefore, the output topic is published only when the goal pose or check points are given, and is latched until a new goal pose or check points are given.

The core implementation does not depend on a map format. Any planning algorithm can be added as a plugin module. In the current Autoware Universe, only the plugin for the Lanelet2 map format is supported.

This package also manages routes for MRM. The route_selector node duplicates the mission_planner interface and provides it for normal operation and MRM respectively. It distributes route requests and planning results according to the current MRM operation state.

"},{"location":"planning/autoware_mission_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/autoware_mission_planner/#parameters","title":"Parameters","text":"Name Type Description map_frame string The frame name for map arrival_check_angle_deg double Angle threshold for goal check arrival_check_distance double Distance threshold for goal check arrival_check_duration double Duration threshold for goal check goal_angle_threshold double Max goal pose angle for goal approve enable_correct_goal_pose bool Enabling correction of goal pose according to the closest lanelet orientation reroute_time_threshold double If the time to the rerouting point at the current velocity is greater than this threshold, rerouting is possible minimum_reroute_length double Minimum Length for publishing a new route consider_no_drivable_lanes bool This flag is for considering no_drivable_lanes in planning or not. allow_reroute_in_autonomous_mode bool This is a flag to allow reroute in autonomous driving mode. If false, reroute fails. If true, only safe reroute is allowed"},{"location":"planning/autoware_mission_planner/#services","title":"Services","text":"Name Type Description /planning/mission_planning/mission_planner/clear_route tier4_planning_msgs/srv/ClearRoute route clear request /planning/mission_planning/mission_planner/set_waypoint_route tier4_planning_msgs/srv/SetWaypointRoute route request with lanelet waypoints. /planning/mission_planning/mission_planner/set_lanelet_route tier4_planning_msgs/srv/SetLaneletRoute route request with pose waypoints. /planning/mission_planning/route_selector/main/clear_route tier4_planning_msgs/srv/ClearRoute main route clear request /planning/mission_planning/route_selector/main/set_waypoint_route tier4_planning_msgs/srv/SetWaypointRoute main route request with lanelet waypoints. /planning/mission_planning/route_selector/main/set_lanelet_route tier4_planning_msgs/srv/SetLaneletRoute main route request with pose waypoints. /planning/mission_planning/route_selector/mrm/clear_route tier4_planning_msgs/srv/ClearRoute mrm route clear request /planning/mission_planning/route_selector/mrm/set_waypoint_route tier4_planning_msgs/srv/SetWaypointRoute mrm route request with lanelet waypoints. 
/planning/mission_planning/route_selector/mrm/set_lanelet_route tier4_planning_msgs/srv/SetLaneletRoute mrm route request with lanelet waypoints."},{"location":"planning/autoware_mission_planner/#subscriptions","title":"Subscriptions","text":"Name Type Description input/vector_map autoware_map_msgs/msg/LaneletMapBin vector map of Lanelet2 input/modified_goal geometry_msgs/PoseWithUuidStamped modified goal pose input/operation_mode_state autoware_adapi_v1_msgs/OperationModeState operation mode state input/odometry nav_msgs/msg/Odometry vehicle odometry"},{"location":"planning/autoware_mission_planner/#publications","title":"Publications","text":"Name Type Description /planning/mission_planning/state tier4_planning_msgs/msg/RouteState route state /planning/mission_planning/route autoware_planning_msgs/LaneletRoute route /planning/mission_planning/route_selector/main/state tier4_planning_msgs/msg/RouteState main route state /planning/mission_planning/route_selector/main/route autoware_planning_msgs/LaneletRoute main route /planning/mission_planning/route_selector/mrm/state tier4_planning_msgs/msg/RouteState mrm route state /planning/mission_planning/route_selector/mrm/route autoware_planning_msgs/LaneletRoute mrm route ~/debug/route_marker visualization_msgs/msg/MarkerArray route marker for debug ~/debug/goal_footprint visualization_msgs/msg/MarkerArray goal footprint for debug"},{"location":"planning/autoware_mission_planner/#route-section","title":"Route section","text":"

A route section, whose type is autoware_planning_msgs/LaneletSegment, is a \"slice\" of a road that bundles lane-changeable lanes. Note that the most atomic unit of the route is autoware_planning_msgs/LaneletPrimitive, which has the unique id of a lane in a vector map and its type. Therefore, the route message does not contain geometric information about the lane, since we did not want the planning module's messages to depend on the map data structure.

The ROS message of the route section contains the following three elements for each route section.

"},{"location":"planning/autoware_mission_planner/#goal-validation","title":"Goal Validation","text":"

The mission planner has a control mechanism to validate the given goal pose and create a route. If the angle between the goal pose's yaw and the lanelet at the goal pose is greater than the goal_angle_threshold parameter, the goal is rejected. Another control mechanism is the creation of a footprint of the goal pose according to the dimensions of the vehicle, and checking whether this footprint is within the lanelets. If the goal footprint exceeds the lanelets, the goal is rejected. A sketch of the angle check follows.

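A minimal sketch of the angle check, assuming the goal lanelet's orientation at the goal position is available (the function names here are illustrative, not the node's actual implementation):

#include <cmath>\n\n// Normalize an angle into (-pi, pi].\ndouble normalizeRadian(double angle)\n{\n  while (angle > M_PI) { angle -= 2.0 * M_PI; }\n  while (angle <= -M_PI) { angle += 2.0 * M_PI; }\n  return angle;\n}\n\n// Reject the goal if its yaw deviates from the goal lanelet's orientation by\n// more than goal_angle_threshold (the footprint check is performed separately).\nbool isGoalAngleValid(double goal_yaw, double lanelet_yaw, double goal_angle_threshold)\n{\n  return std::fabs(normalizeRadian(goal_yaw - lanelet_yaw)) <= goal_angle_threshold;\n}\n
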
The image below shows sample goal pose validation cases.

"},{"location":"planning/autoware_mission_planner/#implementation","title":"Implementation","text":""},{"location":"planning/autoware_mission_planner/#mission-planner_1","title":"Mission Planner","text":"

Two callbacks (goal and check points) are the triggers for route planning. The routing graph, which plans the route in Lanelet2, must be created before those callbacks, and this routing graph is created in the vector map callback.

plan route is explained in detail in the following section.

"},{"location":"planning/autoware_mission_planner/#route-planner","title":"Route Planner","text":"

plan route is executed with check points including the current ego pose and the goal pose.

plan path between each check points first calculates the lanes closest to the start and goal poses. Then, the routing graph of Lanelet2 plans the shortest path from the start pose to the goal pose.

initialize route lanelets initializes the route handler and calculates route_lanelets. route_lanelets, all of which will be registered in the route sections, are the lanelets next to the lanelets in the planned path, and are used when planning lane changes. route_lanelets are calculated as follows (see the sketch after this list).

  1. All the neighboring (right and left) lanes of the planned path that are lane-changeable are memorized as route_lanelets.
  2. All the neighboring (right and left) lanes of the planned path that are not lane-changeable are memorized as candidate_lanelets.
  3. If both the following and previous lanelets of a candidate_lanelet are route_lanelets, the candidate_lanelet is registered as a route_lanelet.

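A rough sketch of this expansion, with lanelets represented by ids and the adjacency queries passed in as assumed helpers (illustrative only; the real implementation uses lanelet2 and the route handler):

#include <functional>\n#include <vector>\n\nstd::vector<long> calcRouteLanelets(\n  const std::vector<long> & changeable_neighbors,  // step 1: path lanelets and lane-changeable neighbors\n  const std::vector<long> & candidate_lanelets,    // step 2: non-lane-changeable neighbors\n  const std::function<bool(long)> & is_route,      // true if the id is already in route_lanelets\n  const std::function<long(long)> & following, const std::function<long(long)> & previous)\n{\n  std::vector<long> route_lanelets = changeable_neighbors;\n  for (const long id : candidate_lanelets) {\n    // step 3: promote a candidate when both its following and previous\n    // lanelets already belong to route_lanelets\n    if (is_route(following(id)) && is_route(previous(id))) {\n      route_lanelets.push_back(id);\n    }\n  }\n  return route_lanelets;\n}\n
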
get preferred lanelets extracts preferred_primitive from route_lanelets with the route handler.

create route sections extracts primitives from route_lanelets for each route section with the route handler, and creates route sections.

"},{"location":"planning/autoware_mission_planner/#rerouting","title":"Rerouting","text":"

Reroute here means changing the route while driving. Unlike initial route setting, it is required to keep a certain distance from the vehicle to the point where the route changes. If the ego vehicle is not in the autonomous driving state, the safety checking process is skipped.

There are three use cases that require rerouting.

"},{"location":"planning/autoware_mission_planner/#route-change-api","title":"Route change API","text":"

It is used when changing the destination while driving or when driving a divided loop route. When the vehicle is driving on an MRM route, normal rerouting by this interface is not allowed.

"},{"location":"planning/autoware_mission_planner/#emergency-route","title":"Emergency route","text":"

This is the interface for the MRM that pulls over to the road shoulder. The vehicle has to stop as soon as possible, so a reroute is required. The MRM route has priority over the normal route. When the MRM route is cleared, the planner tries to return to the normal route, also with a rerouting safety check.

"},{"location":"planning/autoware_mission_planner/#goal-modification","title":"Goal modification","text":"

This is a goal change performed by a planning component to pull over, avoid parked vehicles, and so on. If the modified goal is outside the calculated route, a reroute is required. This goal modification is executed by checking the local environment and path safety as the vehicle actually approaches the destination. This modification is allowed for both the normal_route and the mrm_route. The new route generated here is sent to the AD API so that it can also be referenced by the application. Note, however, that the specifications here are subject to change in the future.

"},{"location":"planning/autoware_mission_planner/#rerouting-limitations","title":"Rerouting Limitations","text":""},{"location":"planning/autoware_mission_planner/#limitations","title":"Limitations","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/","title":"Objects Of Interest Marker Interface","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#objects-of-interest-marker-interface","title":"Objects Of Interest Marker Interface","text":"

Warning

Under Construction

"},{"location":"planning/autoware_objects_of_interest_marker_interface/#purpose","title":"Purpose","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_objects_of_interest_marker_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_obstacle_cruise_planner/","title":"Obstacle Cruise Planner","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#obstacle-cruise-planner","title":"Obstacle Cruise Planner","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#overview","title":"Overview","text":"

The autoware_obstacle_cruise_planner package has the following modules.

"},{"location":"planning/autoware_obstacle_cruise_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::Trajectory input trajectory ~/input/objects autoware_perception_msgs::PredictedObjects dynamic objects ~/input/odometry nav_msgs::msg::Odometry ego odometry"},{"location":"planning/autoware_obstacle_cruise_planner/#output-topics","title":"Output topics","text":"Name Type Description ~/output/trajectory autoware_planning_msgs::Trajectory output trajectory ~/output/velocity_limit tier4_planning_msgs::VelocityLimit velocity limit for cruising ~/output/clear_velocity_limit tier4_planning_msgs::VelocityLimitClearCommand clear command for velocity limit"},{"location":"planning/autoware_obstacle_cruise_planner/#design","title":"Design","text":"

Design for the following functions is defined here.

A data structure for cruise and stop planning is as follows. This planner data is created first, and then sent to the planning algorithm.

struct PlannerData\n{\nrclcpp::Time current_time;\nautoware_planning_msgs::msg::Trajectory traj;  // input trajectory\ngeometry_msgs::msg::Pose current_pose;  // current ego pose\ndouble ego_vel;  // current ego velocity\ndouble current_acc;  // current ego acceleration\nstd::vector<Obstacle> target_obstacles;  // obstacles to be handled by the planner\n};\n
struct Obstacle\n{\nrclcpp::Time stamp;  // This is not the current stamp, but when the object was observed.\ngeometry_msgs::msg::Pose pose;  // interpolated with the current stamp\nbool orientation_reliable;\nTwist twist;\nbool twist_reliable;\nObjectClassification classification;\nstd::string uuid;\nShape shape;\nstd::vector<PredictedPath> predicted_paths;\n};\n
"},{"location":"planning/autoware_obstacle_cruise_planner/#behavior-determination-against-obstacles","title":"Behavior determination against obstacles","text":"

Obstacles for cruising, stopping, and slowing down are selected in this order based on their pose and velocity. Obstacles that are not in front of the ego are ignored.

The behavior determination flowchart is shown below.

"},{"location":"planning/autoware_obstacle_cruise_planner/#determine-cruise-vehicles","title":"Determine cruise vehicles","text":"

The obstacles meeting the following condition are determined as obstacles for cruising.

Parameter Type Description common.cruise_obstacle_type.inside.unknown bool flag to consider unknown objects for cruising common.cruise_obstacle_type.inside.car bool flag to consider car objects for cruising common.cruise_obstacle_type.inside.truck bool flag to consider truck objects for cruising ... bool ... common.cruise_obstacle_type.outside.unknown bool flag to consider unknown objects for cruising common.cruise_obstacle_type.outside.car bool flag to consider car objects for cruising common.cruise_obstacle_type.outside.truck bool flag to consider truck objects for cruising ... bool ... behavior_determination.cruise.max_lat_margin double maximum lateral margin for cruise obstacles behavior_determination.obstacle_velocity_threshold_from_cruise_to_stop double maximum obstacle velocity for cruise obstacles inside the trajectory behavior_determination.cruise.outside_obstacle.obstacle_velocity_threshold double maximum obstacle velocity for cruise obstacles outside the trajectory behavior_determination.cruise.outside_obstacle.ego_obstacle_overlap_time_threshold double maximum overlap time of the collision between the ego and the obstacle"},{"location":"planning/autoware_obstacle_cruise_planner/#yield-for-vehicles-that-might-cut-in-into-the-egos-lane","title":"Yield for vehicles that might cut into the ego's lane","text":"

It is also possible to yield (cruise) behind vehicles in neighboring lanes if those vehicles might cut into the ego vehicle's current lane.

The obstacles meeting the following condition are determined as obstacles for yielding (cruising).

If the above conditions are met, the ego vehicle will cruise behind the moving obstacle, yielding to it so that it can cut into the ego's lane to avoid the stopped obstacle.

"},{"location":"planning/autoware_obstacle_cruise_planner/#determine-stop-vehicles","title":"Determine stop vehicles","text":"

Among obstacles which are not for cruising, the obstacles meeting the following condition are determined as obstacles for stopping.

Parameter Type Description common.stop_obstacle_type.unknown bool flag to consider unknown objects for stopping common.stop_obstacle_type.car bool flag to consider car objects for stopping common.stop_obstacle_type.truck bool flag to consider truck objects for stopping ... bool ... behavior_determination.stop.max_lat_margin double maximum lateral margin for stop obstacles behavior_determination.crossing_obstacle.obstacle_velocity_threshold double maximum crossing obstacle velocity to ignore behavior_determination.obstacle_velocity_threshold_from_stop_to_cruise double maximum obstacle velocity for stop"},{"location":"planning/autoware_obstacle_cruise_planner/#determine-slow-down-vehicles","title":"Determine slow down vehicles","text":"

Among obstacles which are not for cruising and stopping, the obstacles meeting the following condition are determined as obstacles for slowing down.

Parameter Type Description common.slow_down_obstacle_type.unknown bool flag to consider unknown objects for slowing down common.slow_down_obstacle_type.car bool flag to consider car objects for slowing down common.slow_down_obstacle_type.truck bool flag to consider truck objects for slowing down ... bool ... behavior_determination.slow_down.max_lat_margin double maximum lateral margin for slow down obstacles"},{"location":"planning/autoware_obstacle_cruise_planner/#note","title":"NOTE","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#1-crossing-obstacles","title":"*1: Crossing obstacles","text":"

A crossing obstacle is an object whose yaw angle relative to the ego's trajectory is smaller than behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold.

Parameter Type Description behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold double maximum angle against the ego's trajectory to judge the obstacle is crossing the trajectory [rad]"},{"location":"planning/autoware_obstacle_cruise_planner/#2-enough-collision-time-margin","title":"*2: Enough collision time margin","text":"

We predict the collision area and its time by assuming constant-velocity motion for the ego and using the obstacle's predicted path. Then, we calculate the collision time margin, which is the difference between the time when the ego will be inside the collision area and the time when the obstacle will be inside it. When this time margin is smaller than behavior_determination.stop.crossing_obstacle.collision_time_margin, the margin is not enough. A sketch of this check follows.

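A minimal sketch of this check, assuming the two entry times have already been computed from the ego's constant-velocity motion and the obstacle's predicted path (illustrative names, not the node's actual code):

#include <cmath>\n\nbool hasEnoughCollisionTimeMargin(\n  double time_ego_enters_collision_area, double time_obstacle_enters_collision_area,\n  double collision_time_margin)  // behavior_determination.stop.crossing_obstacle.collision_time_margin\n{\n  // The margin is the difference between the two entry times; if it is\n  // smaller than the parameter, the margin is not enough.\n  const double margin = std::abs(time_ego_enters_collision_area - time_obstacle_enters_collision_area);\n  return margin >= collision_time_margin;\n}\n
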
Parameter Type Description behavior_determination.stop.crossing_obstacle.collision_time_margin double maximum collision time margin of the ego and obstacle"},{"location":"planning/autoware_obstacle_cruise_planner/#stop-planning","title":"Stop planning","text":"Parameter Type Description common.min_strong_accel double ego's minimum acceleration to stop [m/ss] common.safe_distance_margin double distance to obstacles for stopping [m] common.terminal_safe_distance_margin double terminal distance to obstacles for stopping, which cannot exceed the safe distance margin [m]

The role of the stop planning is to keep a safe distance from static vehicle objects and dynamic/static non-vehicle objects.

The stop planning just inserts a stop point in the trajectory to keep a distance from obstacles. The safe distance is parameterized as common.safe_distance_margin. When the ego stops at the end of the trajectory with an obstacle at the same point, the safe distance becomes terminal_safe_distance_margin.

When inserting the stop point, the acceleration required for the ego to stop in front of the stop point is calculated. If the acceleration is less than common.min_strong_accel, the stop planning is canceled, since this package does not assume a strong sudden brake for emergencies. A sketch of this feasibility check follows.

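A minimal sketch of this cancellation rule, derived from the constant-deceleration relation \\(v^{2} = v_{0}^{2} + 2 a d\\) (illustrative, not the package's actual code):

bool isStopPlanFeasible(double ego_velocity, double dist_to_stop_point, double min_strong_accel)\n{\n  if (dist_to_stop_point <= 0.0) { return false; }  // the stop point is already behind the ego\n  // acceleration required to reach zero velocity exactly at the stop point\n  const double required_accel = -(ego_velocity * ego_velocity) / (2.0 * dist_to_stop_point);\n  // cancel the stop planning when the required deceleration is stronger\n  // (more negative) than common.min_strong_accel\n  return required_accel >= min_strong_accel;\n}\n
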
"},{"location":"planning/autoware_obstacle_cruise_planner/#cruise-planning","title":"Cruise planning","text":"Parameter Type Description common.safe_distance_margin double minimum distance with obstacles for cruise [m]

The role of the cruise planning is to keep a safe distance from dynamic vehicle objects with a smoothed velocity transition. This includes not only cruising behind a front vehicle, but also reacting to cut-in and cut-out vehicles.

The safe distance is calculated dynamically based on Responsibility-Sensitive Safety (RSS) with the following equation.

\\[ d_{rss} = v_{ego} t_{idling} + \\frac{1}{2} a_{ego} t_{idling}^2 + \\frac{v_{ego}^2}{2 a_{ego}} - \\frac{v_{obstacle}^2}{2 a_{obstacle}}, \\]

assuming that \\(d_{rss}\\) is the calculated safe distance, \\(t_{idling}\\) is the idling time for the ego to detect the front vehicle's deceleration, \\(v_{ego}\\) is the ego's current velocity, \\(v_{obstacle}\\) is the front obstacle's current velocity, \\(a_{ego}\\) is the ego's acceleration, and \\(a_{obstacle}\\) is the obstacle's acceleration. These values are parameterized as follows. Other common values, such as the ego's minimum acceleration, are defined in common.param.yaml.

Parameter Type Description common.idling_time double idling time for the ego to detect the front vehicle starting deceleration [s] common.min_ego_accel_for_rss double ego's acceleration for RSS [m/ss] common.min_object_accel_for_rss double front obstacle's acceleration for RSS [m/ss]

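A direct transcription of the RSS equation above (here \\(a_{ego}\\) and \\(a_{obstacle}\\) are treated as deceleration magnitudes, i.e. positive values; the actual signs follow the parameters in common.param.yaml):

double calcRssDistance(\n  double v_ego, double v_obstacle, double t_idling, double a_ego, double a_obstacle)\n{\n  // idling distance + braking distance of the ego - braking distance of the obstacle\n  return v_ego * t_idling + 0.5 * a_ego * t_idling * t_idling +\n         (v_ego * v_ego) / (2.0 * a_ego) - (v_obstacle * v_obstacle) / (2.0 * a_obstacle);\n}\n
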
The detailed formulation is as follows.

\\[ \\begin{align} d_{error} & = d - d_{rss} \\\\ d_{normalized} & = lpf(d_{error} / d_{obstacle}) \\\\ d_{quad, normalized} & = sign(d_{normalized}) \\cdot d_{normalized}^{2} \\\\ v_{pid} & = pid(d_{quad, normalized}) \\\\ v_{add} & = v_{pid} > 0 ? v_{pid} \\cdot w_{acc} : v_{pid} \\\\ v_{target} & = max(v_{ego} + v_{add}, v_{min, cruise}) \\end{align} \\] Variable Description d actual distance to obstacle d_{rss} ideal distance to obstacle based on RSS v_{min, cruise} min_cruise_target_vel w_{acc} output_ratio_during_accel lpf(val) apply low-pass filter to val pid(val) apply pid to val"},{"location":"planning/autoware_obstacle_cruise_planner/#block-diagram","title":"Block diagram","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#slow-down-planning","title":"Slow down planning","text":"Parameter Type Description slow_down.labels vector(string) A vector of labels for customizing obstacle-label-based slow down behavior. Each label represents an obstacle type that will be treated differently when applying slow down. The possible labels are (\"default\" (Mandatory), \"unknown\",\"car\",\"truck\",\"bus\",\"trailer\",\"motorcycle\",\"bicycle\" or \"pedestrian\") slow_down.default.static.min_lat_velocity double minimum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_velocity double maximum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.min_lat_margin double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_margin double maximum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.moving.min_lat_velocity double minimum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_velocity double maximum velocity to linearly calculate slow down velocity [m/s]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.min_lat_margin double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_margin double maximum lateral margin to linearly calculate slow down velocity [m]. 
Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving (optional) slow_down.\"label\".(static & moving).min_lat_velocity double minimum velocity to linearly calculate slow down velocity [m/s]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value (optional) slow_down.\"label\".(static & moving).max_lat_velocity double maximum velocity to linearly calculate slow down velocity [m/s]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value (optional) slow_down.\"label\".(static & moving).min_lat_margin double minimum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value (optional) slow_down.\"label\".(static & moving).max_lat_margin double maximum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels. Requires a static and a moving value

The role of the slow down planning is to insert a slow down velocity in the trajectory where the trajectory points are close to obstacles. The parameters can be customized depending on the obstacle type (see slow_down.labels), making it possible to adjust the slow down behavior depending on whether the obstacle is a pedestrian, bicycle, car, etc. Each obstacle type has a static and a moving parameter set, so it is possible to customize the slow down response of the ego vehicle according to the obstacle type and whether it is moving or not. If an obstacle is determined to be moving, the corresponding moving set of parameters will be used to compute the vehicle slow down; otherwise, the static parameters will be used. The static and moving separation is useful for customizing the ego vehicle's slow down behavior to, for example, slow down more significantly when passing stopped vehicles that might cause occlusion or that might suddenly open their doors.

An obstacle is classified as static if its total speed is less than the moving_object_speed_threshold parameter. Furthermore, a hysteresis-based approach is used to avoid chattering: it uses the moving_object_hysteresis_range parameter and the obstacle's previous state (moving or static) to determine whether the obstacle is moving or not. In other words, if an obstacle was previously classified as static, it will not change its classification to moving unless its total speed is greater than moving_object_speed_threshold + moving_object_hysteresis_range. Likewise, an obstacle previously classified as moving will only change to static if its speed is lower than moving_object_speed_threshold - moving_object_hysteresis_range. A sketch of this logic follows.

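A minimal sketch of this hysteresis logic (illustrative, not the package's actual code):

bool isObstacleMoving(\n  bool was_moving, double obstacle_speed,\n  double moving_object_speed_threshold, double moving_object_hysteresis_range)\n{\n  if (was_moving) {\n    // a moving obstacle becomes static only below threshold - hysteresis\n    return obstacle_speed >= moving_object_speed_threshold - moving_object_hysteresis_range;\n  }\n  // a static obstacle becomes moving only above threshold + hysteresis\n  return obstacle_speed > moving_object_speed_threshold + moving_object_hysteresis_range;\n}\n
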
The closest point on the obstacle to the ego's trajectory is calculated. Then, the slow down velocity is calculated by linear interpolation with the distance between the point and trajectory as follows.

Variable Description v_{out} calculated velocity for slow down v_{min} slow_down.min_lat_velocity v_{max} slow_down.max_lat_velocity l_{min} slow_down.min_lat_margin l_{max} slow_down.max_lat_margin l'_{max} behavior_determination.slow_down.max_lat_margin

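A sketch of this interpolation, assuming it is linear and clamped between the minimum and maximum values in the table above (illustrative, not the package's actual code):

#include <algorithm>\n\ndouble calcSlowDownVelocity(\n  double lat_dist_to_obstacle, double v_min, double v_max, double l_min, double l_max)\n{\n  // the closer the obstacle is laterally, the lower the slow down velocity\n  const double ratio = std::clamp((lat_dist_to_obstacle - l_min) / (l_max - l_min), 0.0, 1.0);\n  return v_min + ratio * (v_max - v_min);\n}\n
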
The calculated velocity is inserted in the trajectory where the obstacle is inside the area with behavior_determination.slow_down.max_lat_margin.

"},{"location":"planning/autoware_obstacle_cruise_planner/#implementation","title":"Implementation","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#flowchart","title":"Flowchart","text":"

autoware_obstacle_cruise_planner consists of the following successive functions.

Various algorithms for stop and cruise planning will be implemented, and one of them is selected depending on the use case. The core algorithm implementation generateTrajectory depends on the designated algorithm.

"},{"location":"planning/autoware_obstacle_cruise_planner/#algorithm-selection-for-cruise-planner","title":"Algorithm selection for cruise planner","text":"

Currently, only a PID-based planner is supported. Each planner will be explained in the following.

Parameter Type Description common.planning_method string cruise and stop planning algorithm, selected from \"pid_base\""},{"location":"planning/autoware_obstacle_cruise_planner/#pid-based-planner","title":"PID-based planner","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#stop-planning_1","title":"Stop planning","text":"

In the pid_based_planner namespace,

Parameter Type Description obstacle_velocity_threshold_from_cruise_to_stop double obstacle velocity threshold to be stopped from cruised [m/s]

Only one obstacle is targeted for the stop planning. It is the obstacle among the obstacle candidates whose velocity is less than obstacle_velocity_threshold_from_cruise_to_stop, and which is the nearest to the ego along the trajectory. A stop point is inserted keeping a common.safe_distance_margin distance between the ego and the obstacle.

Note that, as explained in the stop planning design, a stop planning which requires a strong acceleration (less than common.min_strong_accel) will be canceled.

"},{"location":"planning/autoware_obstacle_cruise_planner/#cruise-planning_1","title":"Cruise planning","text":"

In the pid_based_planner namespace,

Parameter Type Description kp double p gain for pid control [-] ki double i gain for pid control [-] kd double d gain for pid control [-] output_ratio_during_accel double The output velocity will be multiplied by the ratio during acceleration to follow the front vehicle. [-] vel_to_acc_weight double target acceleration is target velocity * vel_to_acc_weight [-] min_cruise_target_vel double minimum target velocity during cruise [m/s]

In order to keep the safe distance, the target velocity and acceleration are calculated and sent as an external velocity limit to the velocity smoothing package (motion_velocity_smoother by default). The target velocity and acceleration are respectively calculated with the PID controller according to the error between the reference safe distance and the actual distance. A sketch of this computation follows.

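A sketch of this computation following the detailed formulation in the cruise planning design above (the low-pass filter and PID controller are passed in as assumed stateful helpers; illustrative, not the node's actual code):

#include <algorithm>\n#include <cmath>\n#include <functional>\n\ndouble calcCruiseTargetVelocity(\n  double dist_to_obstacle, double rss_dist, double ego_vel,\n  double output_ratio_during_accel, double min_cruise_target_vel,\n  const std::function<double(double)> & lpf, const std::function<double(double)> & pid)\n{\n  const double d_error = dist_to_obstacle - rss_dist;\n  const double d_normalized = lpf(d_error / dist_to_obstacle);\n  // square the normalized error while keeping its sign\n  const double d_quad_normalized = std::copysign(d_normalized * d_normalized, d_normalized);\n  const double v_pid = pid(d_quad_normalized);\n  const double v_add = v_pid > 0.0 ? v_pid * output_ratio_during_accel : v_pid;\n  return std::max(ego_vel + v_add, min_cruise_target_vel);\n}\n
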
"},{"location":"planning/autoware_obstacle_cruise_planner/#optimization-based-planner","title":"Optimization-based planner","text":"

under construction

"},{"location":"planning/autoware_obstacle_cruise_planner/#minor-functions","title":"Minor functions","text":""},{"location":"planning/autoware_obstacle_cruise_planner/#prioritization-of-behavior-modules-stop-point","title":"Prioritization of behavior module's stop point","text":"

When stopping for a pedestrian walking on a crosswalk, the behavior module inserts zero velocity in the trajectory in front of the crosswalk. The stop planning of autoware_obstacle_cruise_planner also works, and the ego may not reach the behavior module's stop point, since the safe distance defined in autoware_obstacle_cruise_planner may be longer than the behavior module's safe distance. To resolve this misalignment of the stop points between the behavior module and autoware_obstacle_cruise_planner, common.min_behavior_stop_margin is defined. In the crosswalk case described above, autoware_obstacle_cruise_planner inserts the stop point with a distance of at least common.min_behavior_stop_margin between the ego and the obstacle.

Parameter Type Description common.min_behavior_stop_margin double minimum stop margin when stopping with the behavior module enabled [m]"},{"location":"planning/autoware_obstacle_cruise_planner/#a-function-to-keep-the-closest-stop-obstacle-in-target-obstacles","title":"A function to keep the closest stop obstacle in target obstacles","text":"

In order to keep the closest stop obstacle in the target obstacles, we check whether it has disappeared from the target obstacles in the checkConsistency function. If the previous closest stop obstacle is removed from the list, we keep it in the list for stop_obstacle_hold_time_threshold seconds. Note that if a new stop obstacle appears and the previous closest obstacle is removed from the list, we do not add it to the target obstacles again.

Parameter Type Description behavior_determination.stop_obstacle_hold_time_threshold double maximum time for holding closest stop obstacle [s]"},{"location":"planning/autoware_obstacle_cruise_planner/#how-to-debug","title":"How To Debug","text":"

How to debug can be seen here.

"},{"location":"planning/autoware_obstacle_cruise_planner/#known-limits","title":"Known Limits","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/","title":"Debug","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#debug","title":"Debug","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#debug-visualization","title":"Debug visualization","text":""},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#detection-area","title":"Detection area","text":"

The green polygons, which represent the detection area, are visualized by detection_polygons in the ~/debug/marker topic. To determine each behavior (cruise, stop, and slow down), if behavior_determination.*.max_lat_margin is not zero, the polygons are expanded by the additional width.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#collision-points","title":"Collision points","text":"

The red points, which are the collision points with obstacles, are visualized by *_collision_points for each behavior in the ~/debug/marker topic.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#obstacle-for-cruise","title":"Obstacle for cruise","text":"

The orange spheres, which are obstacles for cruise, are visualized by obstacles_to_cruise in the ~/debug/marker topic.

The orange wall, which indicates the safe cruise distance when the ego's front meets it, is visualized in the ~/debug/cruise/virtual_wall topic.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#obstacle-for-stop","title":"Obstacle for stop","text":"

The red spheres, which are obstacles for stop, are visualized by obstacles_to_stop in the ~/debug/marker topic.

The red wall, which indicates the safe stop distance when the ego's front meets it, is visualized in the ~/virtual_wall topic.

"},{"location":"planning/autoware_obstacle_cruise_planner/docs/debug/#obstacle-for-slow-down","title":"Obstacle for slow down","text":"

The yellow spheres, which are obstacles for slow_down, are visualized by obstacles_to_slow_down in the ~/debug/marker topic.

The yellow wall, which indicates the safe slow_down distance when the ego's front meets it, is visualized in the ~/debug/slow_down/virtual_wall topic.

"},{"location":"planning/autoware_obstacle_stop_planner/","title":"Obstacle Stop Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-stop-planner","title":"Obstacle Stop Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#overview","title":"Overview","text":"

obstacle_stop_planner has the following modules.

"},{"location":"planning/autoware_obstacle_stop_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/trajectory autoware_planning_msgs::Trajectory trajectory ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/odometry nav_msgs::Odometry vehicle velocity ~/input/dynamic_objects autoware_perception_msgs::PredictedObjects dynamic objects ~/input/expand_stop_range tier4_planning_msgs::msg::ExpandStopRange expand stop range"},{"location":"planning/autoware_obstacle_stop_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/autoware_obstacle_stop_planner/#common-parameter","title":"Common Parameter","text":"Name Type Description Default Range max_vel float max velocity limit [m/s] 11.1 N/A normal.min_acc float min deceleration [m/ss] -0.5 N/A normal.max_acc float max acceleration [m/ss] 1 N/A normal.min_jerk float min jerk [m/sss] -0.5 N/A normal.max_jerk float max jerk [m/sss] 1 N/A limit.min_acc float min deceleration limit [m/ss] -2.5 N/A limit.max_acc float max acceleration [m/ss] 1 N/A limit.min_jerk float min jerk [m/sss] -1.5 N/A limit.max_jerk float max jerk [m/sss] 1.5 N/A Parameter Type Description enable_slow_down bool enable slow down planner [-] max_velocity double max velocity [m/s] chattering_threshold double even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] enable_z_axis_obstacle_filtering bool filter obstacles in z axis (height) [-] z_axis_filtering_buffer double additional buffer for z axis filtering [m] use_predicted_objects bool whether to use predicted objects for collision and slowdown detection [-] predicted_object_filtering_threshold double threshold for filtering predicted objects [valid only publish_obstacle_polygon true] [m] publish_obstacle_polygon bool if use_predicted_objects is true, node publishes collision polygon [-]"},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-stop-planner_1","title":"Obstacle Stop Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#role","title":"Role","text":"

This module inserts a stop point before the obstacle with a margin. In the nominal case, the margin is the sum of baselink_to_front and max_longitudinal_margin. baselink_to_front means the distance between the baselink (the center of the rear-wheel axis) and the front of the car. The detection area is generated along the processed trajectory as in the following figure. (This module cuts off the input trajectory behind the ego position and decimates the trajectory points to reduce computational cost.)

parameters for obstacle stop planner

target for obstacle stop planner

If another stop point has already been inserted by other modules within max_longitudinal_margin, the margin is the sum of baselink_to_front and min_longitudinal_margin. This feature exists to avoid stopping at an unnatural position. (For example, without this feature the ego would stop unnaturally far away from the stop line of a crosswalk that pedestrians cross.)

minimum longitudinal margin

The module searches for the obstacle pointcloud within the detection area. When a pointcloud is found, the Adaptive Cruise Controller module starts to work. Only when the Adaptive Cruise Controller module does not insert a target velocity is the stop point inserted into the trajectory. The stop point means a point with 0 velocity.

"},{"location":"planning/autoware_obstacle_stop_planner/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it will overshoot a stopping position that should be strictly observed when it starts moving in order to approach a nearby stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters from the module's stop point, the module judges that the vehicle has already stopped for its stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors. A sketch of this judgment follows.

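A minimal sketch of this judgment (illustrative, not the module's actual code):

bool shouldKeepStopping(\n  bool is_vehicle_stopped, double dist_to_module_stop_point, double hold_stop_margin_distance)\n{\n  // if the ego has already stopped within the hold margin of the module's\n  // stop point, hold the current position instead of creeping forward\n  return is_vehicle_stopped && 0.0 <= dist_to_module_stop_point &&\n         dist_to_module_stop_point < hold_stop_margin_distance;\n}\n
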
parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/autoware_obstacle_stop_planner/#parameters","title":"Parameters","text":"Name Type Description Default Range chattering_threshold float even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] 0.5 N/A lowpass_gain float gain parameter for low pass filter [-] 0.9 N/A max_velocity float max velocity [m/s] 20.0 N/A enable_slow_down boolean whether to use slow down planner [-] false N/A enable_z_axis_obstacle_filtering boolean filter obstacles in z axis (height) [-] true N/A z_axis_filtering_buffer float additional buffer for z axis filtering [m] 0.0 N/A voxel_grid_x float voxel grid x parameter for filtering pointcloud [m] 0.05 N/A voxel_grid_y float voxel grid y parameter for filtering pointcloud [m] 0.05 N/A voxel_grid_z float voxel grid z parameter for filtering pointcloud [m] 100000.0 N/A use_predicted_objects boolean whether to use predicted objects [-] false N/A publish_obstacle_polygon boolean whether to publish obstacle polygon [-] false N/A predicted_object_filtering_threshold float threshold for filtering predicted objects (valid only if publish_obstacle_polygon is true) [m] 1.5 N/A stop_position.max_longitudinal_margin float stop margin distance from obstacle on the path [m] 5.0 N/A stop_position.max_longitudinal_margin_behind_goal float stop margin distance from obstacle behind the goal on the path [m] 3.0 N/A stop_position.min_longitudinal_margin float stop margin distance when any other stop point is inserted in stop margin [m] 2.0 N/A stop_position.hold_stop_margin_distance float the ego keeps stopping if the ego is in this margin [m] 0.0 N/A detection_area.lateral_margin float margin [m] 0.0 N/A detection_area.vehicle_lateral_margin float margin of vehicle footprint [m] 0.0 N/A detection_area.pedestrian_lateral_margin float margin of pedestrian footprint [m] 0.0 N/A detection_area.unknown_lateral_margin float margin of unknown footprint [m] 0.0 N/A detection_area.step_length float step length for pointcloud search range [m] 1.0 N/A detection_area.enable_stop_behind_goal_for_obstacle boolean enable extending the trajectory after the goal lane for obstacle detection true N/A slow_down_section.longitudinal_forward_margin float margin distance from slow down point to vehicle front [m] 5.0 N/A slow_down_section.longitudinal_backward_margin float margin distance from slow down point to vehicle rear [m] 5.0 N/A slow_down_section.longitudinal_margin_span float fineness param for relaxing slow down margin (use this param if consider_constraints is True) [m] -0.1 N/A slow_down_section.min_longitudinal_forward_margin float min margin for relaxing slow down margin (use this param if consider_constraints is True) [m] 1.0 N/A detection_area.lateral_margin float offset from vehicle side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A detection_area.vehicle_lateral_margin float offset from vehicle side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A detection_area.pedestrian_lateral_margin float offset from pedestrian side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A detection_area.unknown_lateral_margin float offset from unknown side edge for expanding the search area of the surrounding point cloud [m] 1.0 N/A target_velocity.max_slow_down_velocity float max slow down velocity (use this param if consider_constraints is False) [m/s] 1.38 N/A target_velocity.min_slow_down_velocity float min slow down velocity (use this param if consider_constraints is False) [m/s] 0.28 N/A target_velocity.slow_down_velocity float target slow down velocity (use this param if consider_constraints is True) [m/s] 1.38 N/A constraints.jerk_min_slow_down float min slow down jerk constraint [m/sss] -0.3 N/A constraints.jerk_span float fineness param for planning deceleration jerk [m/sss] -0.01 N/A constraints.jerk_start float init jerk used for deceleration planning [m/sss] -0.1 N/A slow_down_planner.consider_constraints boolean set 'True' if, when no decel plan is found under the jerk/decel constraints, the target slow down velocity should be relaxed [-] false N/A slow_down_planner.velocity_threshold_decel_complete float used to judge whether the ego velocity has converged to the target slow down velocity [m/s] 0.2 N/A slow_down_planner.acceleration_threshold_decel_complete float used to judge whether the ego velocity has converged to the target slow down velocity [m/ss] 0.1 N/A"},{"location":"planning/autoware_obstacle_stop_planner/#stop-position","title":"Stop position","text":"Parameter Type Description max_longitudinal_margin double margin between obstacle and the ego's front [m] max_longitudinal_margin_behind_goal double margin between obstacle and the ego's front when the stop point is behind the goal [m] min_longitudinal_margin double if any obstacle exists within max_longitudinal_margin, this module sets the stop margin to min_longitudinal_margin [m] hold_stop_margin_distance double parameter for restart prevention (see the section above) [m]"},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-detection-area","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin double lateral margin from the vehicle footprint for collision obstacle detection area [m] step_length double step length for pointcloud search range [m] enable_stop_behind_goal_for_obstacle bool enable extending the trajectory after the goal lane for obstacle detection"},{"location":"planning/autoware_obstacle_stop_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_obstacle_stop_planner/#slow-down-planner","title":"Slow Down Planner","text":""},{"location":"planning/autoware_obstacle_stop_planner/#role_1","title":"Role","text":"

This module inserts a slow down section before the obstacle, with a forward margin and a backward margin. The forward margin is the sum of baselink_to_front and longitudinal_forward_margin, and the backward margin is the sum of baselink_to_front and longitudinal_backward_margin. The ego keeps the slow down velocity within the slow down section. The velocity is calculated with the following equation.

\\(v_{target} = v_{min} + \\frac{l_{ld} - l_{vw}/2}{l_{margin}} (v_{max} - v_{min} )\\)

The above equation means that the smaller the lateral deviation of the pointcloud, the lower the velocity of the slow down section.

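A direct transcription of the equation above, with the result clamped to the valid range as an added safeguard (illustrative, not the module's actual code):

#include <algorithm>\n\ndouble calcSlowDownTargetVelocity(\n  double lateral_deviation,  // l_ld: lateral deviation of the nearest pointcloud point\n  double vehicle_width,      // l_vw\n  double lateral_margin,     // l_margin\n  double v_min, double v_max)\n{\n  // the smaller the lateral deviation, the lower the slow down velocity\n  const double ratio = std::clamp(\n    (lateral_deviation - vehicle_width / 2.0) / lateral_margin, 0.0, 1.0);\n  return v_min + ratio * (v_max - v_min);\n}\n
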
parameters for slow down planner

target for slow down planner"},{"location":"planning/autoware_obstacle_stop_planner/#parameters_1","title":"Parameters","text":"Name Type Description Default Range adaptive_cruise_control.use_object_to_estimate_vel boolean use tracking objects for estimating object velocity or not true N/A adaptive_cruise_control.use_pcl_to_estimate_vel boolean use pcl for estimating object velocity or not true N/A adaptive_cruise_control.consider_obj_velocity boolean consider the forward vehicle velocity in ACC or not true N/A adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc float start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] 1.5 N/A adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc float stop adaptive cruise control when the velocity of the forward obstacle falls below this value [m/s] 1.0 N/A adaptive_cruise_control.emergency_stop_acceleration float supposed minimum acceleration (deceleration) in emergency stop [m/ss] -5.0 N/A adaptive_cruise_control.emergency_stop_idling_time float supposed idling time to start emergency stop [s] 0.5 N/A adaptive_cruise_control.min_dist_stop float minimum distance of emergency stop [m] 4.0 N/A adaptive_cruise_control.obstacle_emergency_stop_acceleration float supposed minimum acceleration (deceleration) of the obstacle in emergency stop [m/ss] -5.0 N/A adaptive_cruise_control.max_standard_acceleration float supposed maximum acceleration in active cruise control [m/ss] 0.5 N/A adaptive_cruise_control.min_standard_acceleration float supposed minimum acceleration (deceleration) in active cruise control [m/ss] -1.0 N/A adaptive_cruise_control.standard_idling_time float supposed idling time to react to the object in active cruise control [s] 0.5 N/A adaptive_cruise_control.min_dist_standard float minimum distance in active cruise control [m] 4.0 N/A adaptive_cruise_control.obstacle_min_standard_acceleration float supposed minimum acceleration of the forward obstacle [m/ss] -1.5 N/A adaptive_cruise_control.margin_rate_to_change_vel float margin rate to insert the upper velocity [-] 0.3 N/A adaptive_cruise_control.use_time_compensation_to_calc_distance boolean use time compensation to calculate the distance to the forward vehicle true N/A adaptive_cruise_control.p_coefficient_positive float coefficient P in PID control (used when target_dist - current_dist >= 0) [-] 0.1 N/A adaptive_cruise_control.p_coefficient_negative float coefficient P in PID control (used when target_dist - current_dist < 0) [-] 0.3 N/A adaptive_cruise_control.d_coefficient_positive float coefficient D in PID control (used when delta_dist >= 0) [-] 0.0 N/A adaptive_cruise_control.d_coefficient_negative float coefficient D in PID control (used when delta_dist < 0) [-] 0.2 N/A adaptive_cruise_control.object_polygon_length_margin float The distance to extend the polygon length of the object in pointcloud-object matching [m] 2.0 N/A adaptive_cruise_control.object_polygon_width_margin float The distance to extend the polygon width of the object in pointcloud-object matching [m] 0.5 N/A adaptive_cruise_control.valid_estimated_vel_diff_time float Maximum time difference treated as continuous points in speed estimation using a point cloud [s] 1.0 N/A adaptive_cruise_control.valid_vel_que_time float Time width of information used for speed estimation using a point cloud [s] 0.5 N/A adaptive_cruise_control.valid_estimated_vel_max float Maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] 20.0 N/A adaptive_cruise_control.valid_estimated_vel_min
float Minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] -20.0 N/A adaptive_cruise_control.thresh_vel_to_stop float Embed a stop line if the maximum speed calculated by ACC is lower than this speed [m/s] 1.5 N/A adaptive_cruise_control.lowpass_gain_of_upper_velocity float Lowpass gain of the upper velocity 0.75 N/A adaptive_cruise_control.use_rough_velocity_estimation boolean Use the roughly estimated velocity if the velocity estimation fails (#### If this parameter is true, the vehicle may collide with the front car. Be careful. ####) false N/A adaptive_cruise_control.rough_velocity_rate float In the rough velocity estimation, the velocity of the front car is estimated as the ego's current velocity * this value 0.9 N/A"},{"location":"planning/autoware_obstacle_stop_planner/#slow-down-section","title":"Slow down section","text":"Parameter Type Description longitudinal_forward_margin double margin between the obstacle and the ego's front [m] longitudinal_backward_margin double margin between the obstacle and the ego's rear [m]"},{"location":"planning/autoware_obstacle_stop_planner/#obstacle-detection-area_1","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin double lateral margin from the vehicle footprint for the slow down obstacle detection area [m]"},{"location":"planning/autoware_obstacle_stop_planner/#slow-down-target-velocity","title":"Slow down target velocity","text":"Parameter Type Description max_slow_down_velocity double max slow down velocity [m/s] min_slow_down_velocity double min slow down velocity [m/s]"},{"location":"planning/autoware_obstacle_stop_planner/#flowchart_1","title":"Flowchart","text":""},{"location":"planning/autoware_obstacle_stop_planner/#adaptive-cruise-controller","title":"Adaptive Cruise Controller","text":""},{"location":"planning/autoware_obstacle_stop_planner/#role_2","title":"Role","text":"

The Adaptive Cruise Controller module embeds a maximum velocity in the trajectory when there is a dynamic point cloud on the trajectory. The maximum velocity depends on the ego velocity, the velocity of the point cloud (= the velocity of the front car), and the distance to the point cloud (= the distance to the front car).

Parameter Type Description adaptive_cruise_control.use_object_to_estimate_vel bool use dynamic objects for estimating object velocity or not (valid only if osp.use_predicted_objects is false) adaptive_cruise_control.use_pcl_to_estimate_vel bool use raw pointclouds for estimating object velocity or not (valid only if osp.use_predicted_objects is false) adaptive_cruise_control.consider_obj_velocity bool consider the forward vehicle velocity when calculating the target velocity in adaptive cruise or not adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc double start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc double stop adaptive cruise control when the velocity of the forward obstacle falls below this value [m/s] adaptive_cruise_control.emergency_stop_acceleration double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.emergency_stop_idling_time double supposed idling time to start emergency stop [s] adaptive_cruise_control.min_dist_stop double minimum distance of emergency stop [m] adaptive_cruise_control.obstacle_emergency_stop_acceleration double supposed minimum acceleration (deceleration) of the obstacle in emergency stop [m/ss] adaptive_cruise_control.max_standard_acceleration double supposed maximum acceleration in active cruise control [m/ss] adaptive_cruise_control.min_standard_acceleration double supposed minimum acceleration (deceleration) in active cruise control [m/ss] adaptive_cruise_control.standard_idling_time double supposed idling time to react to the object in active cruise control [s] adaptive_cruise_control.min_dist_standard double minimum distance in active cruise control [m] adaptive_cruise_control.obstacle_min_standard_acceleration double supposed minimum acceleration of the forward obstacle [m/ss] adaptive_cruise_control.margin_rate_to_change_vel double rate of the margin distance to insert the target velocity [-] adaptive_cruise_control.use_time_compensation_to_calc_distance bool use time compensation to calculate the distance to the forward vehicle adaptive_cruise_control.p_coefficient_positive double coefficient P in PID control (used when target_dist - current_dist >= 0) [-] adaptive_cruise_control.p_coefficient_negative double coefficient P in PID control (used when target_dist - current_dist < 0) [-] adaptive_cruise_control.d_coefficient_positive double coefficient D in PID control (used when delta_dist >= 0) [-] adaptive_cruise_control.d_coefficient_negative double coefficient D in PID control (used when delta_dist < 0) [-] adaptive_cruise_control.object_polygon_length_margin double The distance to extend the polygon length of the object in pointcloud-object matching [m] adaptive_cruise_control.object_polygon_width_margin double The distance to extend the polygon width of the object in pointcloud-object matching [m] adaptive_cruise_control.valid_estimated_vel_diff_time double Maximum time difference treated as continuous points in speed estimation using a point cloud [s] adaptive_cruise_control.valid_vel_que_time double Time width of information used for speed estimation using a point cloud [s] adaptive_cruise_control.valid_estimated_vel_max double Maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.valid_estimated_vel_min double Minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.thresh_vel_to_stop double Embed a stop line if the maximum
speed calculated by ACC is lower than this speed [m/s] adaptive_cruise_control.lowpass_gain_of_upper_velocity double Lowpass gain of the target velocity adaptive_cruise_control.use_rough_velocity_estimation bool Use the roughly estimated velocity if the velocity estimation fails (valid only if osp.use_predicted_objects is false) adaptive_cruise_control.rough_velocity_rate double In the rough velocity estimation, the velocity of the front car is estimated as the ego's current velocity * this value"},{"location":"planning/autoware_obstacle_stop_planner/#flowchart_2","title":"Flowchart","text":"

(*1) The target vehicle point is calculated as the obstacle point cloud closest to the ego vehicle along the trajectory.

(*2) The sources of velocity estimation can be changed by the following ROS parameters.

This module works only when the target point is found in the detection area of the Obstacle stop planner module.

The first process of this module is to estimate the velocity of the target vehicle point. The velocity estimation uses the velocity information of dynamic objects or the travel distance of the target vehicle point from the previous step. The dynamic object information is primary, and the travel-distance-based estimation is used as a backup in case of perception failure. If the target vehicle point is geometrically contained in the bounding box of a dynamic object, the velocity of the dynamic object is used as the target point velocity. Otherwise, the target point velocity is calculated from the travel distance of the target point since the previous step, that is, (current_position - previous_position) / dt. Note that this travel-distance-based estimation fails when the target point is detected for the first time (which mainly happens in cut-in situations). To improve the stability of the estimation, the median of the calculation results over several steps is used.
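
A minimal sketch of this travel-distance-based fallback with a median filter (the class name and buffer size are illustrative assumptions, not the actual implementation):

#include <algorithm>\n#include <cstddef>\n#include <deque>\n#include <vector>\n\n// Illustrative sketch: velocity from the travel distance of the target point,\n// stabilized by taking the median over the last several steps.\nclass PointVelocityEstimator\n{\npublic:\ndouble estimate(const double current_position, const double previous_position, const double dt)\n{\nconst double raw_velocity = (current_position - previous_position) / dt;\nhistory_.push_back(raw_velocity);\nif (history_.size() > buffer_size_) history_.pop_front();\n\nstd::vector<double> sorted(history_.begin(), history_.end());\nstd::sort(sorted.begin(), sorted.end());\nreturn sorted[sorted.size() / 2];  // median suppresses outliers such as first-detection jumps\n}\n\nprivate:\nstd::deque<double> history_;\nstatic constexpr std::size_t buffer_size_ = 5;  // assumed number of steps\n};\n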

If the calculated velocity is within the threshold range, it is used as the target point velocity.

Only when the estimation succeeds and the estimated velocity exceeds the value of obstacle_velocity_thresh_*, the distance from the ego position to the point cloud is calculated. To prevent chattering in the mode transition, obstacle_velocity_thresh_to_start_acc is used as the threshold to start adaptive cruise, and obstacle_velocity_thresh_to_stop_acc is used as the threshold to stop it. When the calculated distance exceeds the emergency distance \\(d\\_{emergency}\\) calculated from the emergency_stop parameters, the target velocity to insert is calculated.
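
A sketch of this hysteresis (function and variable names are illustrative, not the actual implementation):

// Illustrative mode-transition hysteresis: separate start/stop thresholds prevent chattering.\nbool updateAccMode(\nconst bool is_acc_active, const double obstacle_velocity,\nconst double thresh_to_start_acc, const double thresh_to_stop_acc)\n{\nif (!is_acc_active && obstacle_velocity > thresh_to_start_acc) return true;  // start adaptive cruise\nif (is_acc_active && obstacle_velocity < thresh_to_stop_acc) return false;  // fall back to stop planning\nreturn is_acc_active;  // keep the current mode inside the hysteresis band\n}\n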

The emergency distance \\(d\\_{emergency}\\) is calculated as follows.

\\(d_{emergency} = d_{margin_{emergency}} + t_{idling_{emergency}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_ {emergency}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{emergency}}})\\)
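
For intuition, with the default parameters (min_dist_stop = 4.0 [m], emergency_stop_idling_time = 0.5 [s], and both emergency stop accelerations -5.0 [m/ss]) and assumed example speeds \\(v_{ego} = 10\\) [m/s] and \\(v_{obj} = 5\\) [m/s], the emergency distance becomes:

\\[ d_{emergency} = 4.0 + 0.5 \\cdot 10 + \\frac{10^2}{2 \\cdot 5.0} - \\frac{5^2}{2 \\cdot 5.0} = 4.0 + 5.0 + 10.0 - 2.5 = 16.5 \\ \\mathrm{[m]} \\]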

The target velocity is determined to keep the distance from the ego vehicle to the obstacle point cloud at the standard distance \\(d\\_{standard}\\) calculated as follows. Therefore, if the distance to the obstacle point cloud is longer than the standard distance, the target velocity becomes higher than the current velocity, and vice versa. A PID controller is used to keep the distance.

\\(d_{standard} = d_{margin_{standard}} + t_{idling_{standard}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_ {standard}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{standard}}})\\)
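
A minimal sketch of this distance-keeping control with the P and D terms switched by sign, mirroring the p_/d_coefficient parameters above (function and variable names are illustrative assumptions, not the actual implementation):

// Illustrative PD control: dist_diff follows the target_dist - current_dist convention\n// of the parameter descriptions, so a positive dist_diff means the ego is too close.\ndouble calcUpperVelocity(\nconst double standard_dist, const double current_dist, const double prev_dist_diff,\nconst double current_velocity, const double dt, const double p_positive,\nconst double p_negative, const double d_positive, const double d_negative)\n{\nconst double dist_diff = standard_dist - current_dist;\nconst double delta_dist = (dist_diff - prev_dist_diff) / dt;\nconst double p_gain = (dist_diff >= 0.0) ? p_positive : p_negative;\nconst double d_gain = (delta_dist >= 0.0) ? d_positive : d_negative;\n// too close (dist_diff > 0) -> lower the target velocity; too far -> raise it\nreturn current_velocity - (p_gain * dist_diff + d_gain * delta_dist);\n}\n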

If the target velocity exceeds the value of thresh_vel_to_stop, the target velocity is embedded in the trajectory.

"},{"location":"planning/autoware_obstacle_stop_planner/#known-limits","title":"Known Limits","text":" "},{"location":"planning/autoware_path_optimizer/","title":"Path optimizer","text":""},{"location":"planning/autoware_path_optimizer/#path-optimizer","title":"Path optimizer","text":""},{"location":"planning/autoware_path_optimizer/#purpose","title":"Purpose","text":"

This package generates a trajectory that is kinematically feasible to drive and collision-free, based on the input path and drivable area. Only the position and orientation of the trajectory are updated in this module; the velocity is taken over from the input path.

"},{"location":"planning/autoware_path_optimizer/#feature","title":"Feature","text":"

This package is able to

Note that the velocity is just taken over from the input path.

"},{"location":"planning/autoware_path_optimizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_path_optimizer/#input","title":"input","text":"Name Type Description ~/input/path autoware_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry nav_msgs/msg/Odometry Current Velocity of ego vehicle"},{"location":"planning/autoware_path_optimizer/#output","title":"output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/msg/Trajectory Optimized trajectory that is feasible to drive and collision-free"},{"location":"planning/autoware_path_optimizer/#flowchart","title":"Flowchart","text":"

The flowchart of the functions is explained here.

"},{"location":"planning/autoware_path_optimizer/#createplannerdata","title":"createPlannerData","text":"

The following data for planning is created.

struct PlannerData\n{\n// input\nHeader header;\nstd::vector<TrajectoryPoint> traj_points; // converted from the input path\nstd::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n\n// ego\ngeometry_msgs::msg::Pose ego_pose;\ndouble ego_vel;\n};\n
"},{"location":"planning/autoware_path_optimizer/#check-replan","title":"check replan","text":"

When one of the following conditions is met, trajectory optimization is executed. Otherwise, the previously optimized trajectory is reused, with its velocity updated from the latest input path.

max_path_shape_around_ego_lat_dist

"},{"location":"planning/autoware_path_optimizer/#getmodelpredictivetrajectory","title":"getModelPredictiveTrajectory","text":"

This module makes the trajectory kinematically feasible and collision-free. We define the vehicle pose in the Frenet coordinate frame and minimize tracking errors by optimization. This optimization considers the vehicle kinematics and collision checking with the road boundary and obstacles. To decrease the computation cost, the optimization is applied to a shorter part of the trajectory (default: 50 [m]) rather than the whole trajectory, and the remaining part is concatenated with the optimized one at the end.

The trajectory just in front of the ego vehicle must not change significantly so that the steering remains stable. Therefore, we reuse the previously generated trajectory in front of the ego vehicle.

The optimization center, the point on the vehicle that is made to track the trajectory, can be tuned along the vehicle's longitudinal axis. This parameter, mpt.kinematics.optimization_center_offset, is defined as the signed length from the back-wheel center to the optimization center. Some examples are shown in the following figure: the swept shape of the vehicle differs according to the optimization center even if the reference trajectory (the green one) is the same.

More details can be seen here.

"},{"location":"planning/autoware_path_optimizer/#applyinputvelocity","title":"applyInputVelocity","text":"

The velocity in the optimized trajectory is assigned from the velocity in the behavior path. Since the shapes of the optimized trajectory and the path are different, for each trajectory point the nearest path point is searched, and the velocity is interpolated with zero-order hold.
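
An illustrative sketch of this nearest-point search and zero-order hold (simplified types, not the package's actual code):

#include <cstddef>\n#include <limits>\n#include <vector>\n\nstruct Point { double x; double y; double velocity; };\n\n// nearest path point to a given position (Euclidean distance)\nstd::size_t findNearestIndex(const std::vector<Point> & path, const double x, const double y)\n{\nstd::size_t nearest = 0;\ndouble min_sq_dist = std::numeric_limits<double>::max();\nfor (std::size_t i = 0; i < path.size(); ++i) {\nconst double sq_dist = (path[i].x - x) * (path[i].x - x) + (path[i].y - y) * (path[i].y - y);\nif (sq_dist < min_sq_dist) { min_sq_dist = sq_dist; nearest = i; }\n}\nreturn nearest;\n}\n\n// zero-order hold: each trajectory point takes the velocity of its nearest path point\nvoid applyInputVelocity(std::vector<Point> & trajectory, const std::vector<Point> & path)\n{\nfor (auto & point : trajectory) {\npoint.velocity = path[findNearestIndex(path, point.x, point.y)].velocity;\n}\n}\n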

"},{"location":"planning/autoware_path_optimizer/#insertzerovelocityoutsidedrivablearea","title":"insertZeroVelocityOutsideDrivableArea","text":"

The optimized trajectory is too short for velocity planning, so it is extended by concatenating the optimized trajectory and the behavior path, considering drivability. The generated trajectory is checked to see whether it is inside the drivable area, and if it is outside, a trajectory inside the drivable area is output using the behavior path or the previously generated trajectory.

As described above, the behavior path is separated into two paths: one for optimization and the remainder. The first path becomes the optimized trajectory, and the second path is simply converted to a trajectory. Then a trajectory inside the drivable area is calculated as follows.

Optimization failure is handled in the same way as when the optimized trajectory is outside the drivable area. The output trajectory is memorized as the previously generated trajectory for the next cycle.

Rationale: In the current design, since there are some modeling errors, the constraints are treated as soft constraints. Therefore, we have to check whether the optimized trajectory is inside the drivable area after optimization.

"},{"location":"planning/autoware_path_optimizer/#limitation","title":"Limitation","text":""},{"location":"planning/autoware_path_optimizer/#comparison-to-other-methods","title":"Comparison to other methods","text":"

The trajectory planning problem with kinematic-feasibility and collision-free constraints has two main characteristics that make it hard to solve: it is non-convex and high-dimensional. Based on these characteristics, we investigate the pros and cons of the typical planning methods: optimization-based, sampling-based, and learning-based.

"},{"location":"planning/autoware_path_optimizer/#optimization-based-method","title":"Optimization-based method","text":""},{"location":"planning/autoware_path_optimizer/#sampling-based-method","title":"Sampling-based method","text":""},{"location":"planning/autoware_path_optimizer/#learning-based-method","title":"Learning-based method","text":"

Based on these pros and cons, we chose the optimization-based planner first. Although it has the drawback of converging to local minima, it can obtain a good solution through preprocessing that approximates the problem with a convex one that is almost equal to the original non-convex problem.

"},{"location":"planning/autoware_path_optimizer/#how-to-tune-parameters","title":"How to Tune Parameters","text":""},{"location":"planning/autoware_path_optimizer/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":" "},{"location":"planning/autoware_path_optimizer/#computation-time","title":"Computation time","text":""},{"location":"planning/autoware_path_optimizer/#robustness","title":"Robustness","text":""},{"location":"planning/autoware_path_optimizer/#other-options","title":"Other options","text":""},{"location":"planning/autoware_path_optimizer/#how-to-debug","title":"How To Debug","text":"

How to debug can be seen here.

"},{"location":"planning/autoware_path_optimizer/docs/debug/","title":"Debug","text":""},{"location":"planning/autoware_path_optimizer/docs/debug/#debug","title":"Debug","text":""},{"location":"planning/autoware_path_optimizer/docs/debug/#debug-visualization","title":"Debug visualization","text":"

The visualization markers of the planning flow (Input, Model Predictive Trajectory, and Output) are explained here.

All the following markers can be visualized by

ros2 launch autoware_path_optimizer launch_visualization.launch.xml vehicle_model:=sample_vehicle\n

The vehicle_model must be specified to make footprints with the vehicle's size.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#input","title":"Input","text":" "},{"location":"planning/autoware_path_optimizer/docs/debug/#model-predictive-trajectory-mpt","title":"Model Predictive Trajectory (MPT)","text":" "},{"location":"planning/autoware_path_optimizer/docs/debug/#output","title":"Output","text":" "},{"location":"planning/autoware_path_optimizer/docs/debug/#calculation-time","title":"Calculation time","text":"

The path_optimizer consists of many functions, such as boundary width calculation, collision-free planning, etc. We can see the calculation time for each function as follows.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#raw-data","title":"Raw data","text":"

Enable option.enable_calculation_time_info or echo the topic as follows.

$ ros2 topic echo /planning/scenario_planning/lane_driving/motion_planning/path_optimizer/debug/calculation_time --field data\n---\n        insertFixedPoint:= 0.008 [ms]\ngetPaddedTrajectoryPoints:= 0.002 [ms]\nupdateConstraint:= 0.741 [ms]\noptimizeTrajectory:= 0.101 [ms]\nconvertOptimizedPointsToTrajectory:= 0.014 [ms]\ngetEBTrajectory:= 0.991 [ms]\nresampleReferencePoints:= 0.058 [ms]\nupdateFixedPoint:= 0.237 [ms]\nupdateBounds:= 0.22 [ms]\nupdateVehicleBounds:= 0.509 [ms]\ncalcReferencePoints:= 1.649 [ms]\ncalcMatrix:= 0.209 [ms]\ncalcValueMatrix:= 0.015 [ms]\ncalcObjectiveMatrix:= 0.305 [ms]\ncalcConstraintMatrix:= 0.641 [ms]\ninitOsqp:= 6.896 [ms]\nsolveOsqp:= 2.796 [ms]\ncalcOptimizedSteerAngles:= 9.856 [ms]\ncalcMPTPoints:= 0.04 [ms]\ngetModelPredictiveTrajectory:= 12.782 [ms]\noptimizeTrajectory:= 12.981 [ms]\napplyInputVelocity:= 0.577 [ms]\ninsertZeroVelocityOutsideDrivableArea:= 0.81 [ms]\ngetDebugMarker:= 0.684 [ms]\npublishDebugMarker:= 4.354 [ms]\npublishDebugMarkerOfOptimization:= 5.047 [ms]\ngenerateOptimizedTrajectory:= 20.374 [ms]\nextendTrajectory:= 0.326 [ms]\npublishDebugData:= 0.008 [ms]\nonPath:= 20.737 [ms]\n
"},{"location":"planning/autoware_path_optimizer/docs/debug/#plot","title":"Plot","text":"

With the following script, the calculation time of any of the above functions can be plotted.

ros2 run autoware_path_optimizer calculation_time_plotter.py\n

You can specify functions to plot with the -f option.

ros2 run autoware_path_optimizer calculation_time_plotter.py -f \"onPath, generateOptimizedTrajectory, calcReferencePoints\"\n
"},{"location":"planning/autoware_path_optimizer/docs/debug/#qa-for-debug","title":"Q&A for Debug","text":""},{"location":"planning/autoware_path_optimizer/docs/debug/#the-output-frequency-is-low","title":"The output frequency is low","text":"

Check which function is comparatively heavy, referring to this information.

For your information, the following functions for optimization and its initialization may be heavy in some complicated cases.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#when-a-part-of-the-trajectory-has-high-curvature","title":"When a part of the trajectory has high curvature","text":"

Some of the following may have an issue. Please check whether there is something unusual using the visualization.

"},{"location":"planning/autoware_path_optimizer/docs/debug/#when-the-trajectorys-shape-is-zigzag","title":"When the trajectory's shape is zigzag","text":"

Some of the following may have an issue. Please check whether there is something unusual using the visualization.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/","title":"Model predictive trajectory","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#model-predictive-trajectory","title":"Model predictive trajectory","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#abstract","title":"Abstract","text":"

Model Predictive Trajectory (MPT) calculates the trajectory that meets the following conditions.

The collision-free condition is treated not as a hard constraint but as a soft constraint. When the optimization fails or the optimized trajectory is not collision-free, the output trajectory will be the previously generated trajectory.

The trajectory near the ego vehicle must be stable. Therefore, the condition that the trajectory points near the ego are the same as those of the previously generated trajectory is imposed, and this is the only hard constraint in MPT.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#vehicle-kinematics","title":"Vehicle kinematics","text":"

As shown in the following figure, we consider the bicycle kinematics model in the Frenet frame to track the reference path. At time step \\(k\\), we define the lateral distance to the reference path, the heading angle against the reference path, and the steer angle as \\(y_k\\), \\(\\theta_k\\), and \\(\\delta_k\\), respectively.

Assuming that the commanded steer angle is \\(\\delta_{des, k}\\), the kinematics model in the Frenet frame is formulated as follows. We also assume that the steer angle \\(\\delta_k\\) follows the commanded one with a first-order lag.

\\[ \\begin{align} y_{k+1} & = y_{k} + v \\sin \\theta_k dt \\\\\\ \\theta_{k+1} & = \\theta_k + \\frac{v \\tan \\delta_k}{L}dt - \\kappa_k v \\cos \\theta_k dt \\\\\\ \\delta_{k+1} & = \\delta_k - \\frac{\\delta_k - \\delta_{des,k}}{\\tau}dt \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#linearization","title":"Linearization","text":"

Then we linearize these equations. \\(y_k\\) and \\(\\theta_k\\) are tracking errors, so we assume they are small enough. Therefore \\(\\sin \\theta_k \\approx \\theta_k\\).

Since \\(\\delta_k\\) is a steer angle, it is not always small. By using a reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) calculated by the reference path curvature \\(\\kappa_k\\), we express \\(\\delta_k\\) with a small value \\(\\Delta \\delta_k\\).

Note that the steer angle \\(\\delta_k\\) is within the steer angle limitation \\(\\delta_{\\max}\\). If the reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) were larger than \\(\\delta_{\\max}\\) and used for the linearization as-is, then \\(\\Delta \\delta_k = \\delta_k - \\delta_{\\mathrm{ref}, k} = \\delta_{\\max} - \\delta_{\\mathrm{ref}, k}\\), and its absolute value would get larger. Therefore, we have to apply the steer angle limitation to \\(\\delta_{\\mathrm{ref}, k}\\) as well.

\\[ \\begin{align} \\delta_{\\mathrm{ref}, k} & = \\mathrm{clamp}(\\arctan(L \\kappa_k), -\\delta_{\\max}, \\delta_{\\max}) \\\\\\ \\delta_k & = \\delta_{\\mathrm{ref}, k} + \\Delta \\delta_k, \\ \\Delta \\delta_k \\ll 1 \\\\\\ \\end{align} \\]

\\(\\mathrm{clamp}(v, v_{\\min}, v_{\\max})\\) is a function that saturates \\(v\\) so that it is no smaller than \\(v_{\\min}\\) and no larger than \\(v_{\\max}\\).
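
As a small illustration of this saturation (a sketch, not the actual implementation), the reference steer angle can be computed with std::clamp:

#include <algorithm>\n#include <cmath>\n\n// reference steer angle from the path curvature kappa [1/m] and wheelbase L [m],\n// saturated to the steer angle limitation as in the equation above\ndouble calcReferenceSteer(const double L, const double kappa, const double delta_max)\n{\nreturn std::clamp(std::atan(L * kappa), -delta_max, delta_max);\n}\n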

Using this \\(\\delta_{\\mathrm{ref}, k}\\), \\(\\tan \\delta_k\\) is linearized as follows.

\\[ \\begin{align} \\tan \\delta_k & \\approx \\tan \\delta_{\\mathrm{ref}, k} + \\lbrace\\frac{d \\tan \\delta}{d \\delta}\\rbrace|_{\\delta = \\delta_{\\mathrm{ref}, k}} \\Delta \\delta_k \\\\\\ & = \\tan \\delta_{\\mathrm{ref}, k} + \\lbrace\\frac{d \\tan \\delta}{d \\delta}\\rbrace|_{\\delta = \\delta_{\\mathrm{ref}, k}} (\\delta_k - \\delta_{\\mathrm{ref}, k}) \\\\\\ & = \\tan \\delta_{\\mathrm{ref}, k} - \\frac{\\delta_{\\mathrm{ref}, k}}{\\cos^2 \\delta_{\\mathrm{ref}, k}} + \\frac{1}{\\cos^2 \\delta_{\\mathrm{ref}, k}} \\delta_k \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#one-step-state-equation","title":"One-step state equation","text":"

Based on the linearization, the error kinematics is formulated with the following linear equations,

\\[ \\begin{align} \\begin{pmatrix} y_{k+1} \\\\\\ \\theta_{k+1} \\end{pmatrix} = \\begin{pmatrix} 1 & v dt \\\\\\ 0 & 1 \\end{pmatrix} \\begin{pmatrix} y_k \\\\\\ \\theta_k \\end{pmatrix} + \\begin{pmatrix} 0 \\\\\\ \\frac{v dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} \\end{pmatrix} \\delta_{k} + \\begin{pmatrix} 0 \\\\\\ \\frac{v \\tan(\\delta_{\\mathrm{ref}, k}) dt}{L} - \\frac{v \\delta_{\\mathrm{ref}, k} dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} - \\kappa_k v dt \\end{pmatrix} \\end{align} \\]

which can be formulated as follows with the state \\(\\mathbf{x}\\), control input \\(u\\) and some matrices, where \\(\\mathbf{x}_k = (y_k, \\theta_k)^T\\)

\\[ \\begin{align} \\mathbf{x}_{k+1} = A_k \\mathbf{x}_k + \\mathbf{b}_k u_k + \\mathbf{w}_k \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#time-series-state-equation","title":"Time-series state equation","text":"

Then, we formulate the time-series state equation by concatenating the states, control inputs and matrices respectively as

\\[ \\begin{align} \\mathbf{x} = A \\mathbf{x}_0 + B \\mathbf{u} + \\mathbf{w} \\end{align} \\]

where

\\[ \\begin{align} \\mathbf{x} = (\\mathbf{x}^T_1, \\mathbf{x}^T_2, \\mathbf{x}^T_3, \\dots, \\mathbf{x}^T_{n-1})^T \\\\\\ \\mathbf{u} = (u_0, u_1, u_2, \\dots, u_{n-2})^T \\\\\\ \\mathbf{w} = (\\mathbf{w}^T_0, \\mathbf{w}^T_1, \\mathbf{w}^T_2, \\dots, \\mathbf{w}^T_{n-2})^T. \\\\\\ \\end{align} \\]

In detail, each matrix is constructed as follows.

\\[ \\begin{align} \\begin{pmatrix} \\mathbf{x}_1 \\\\\\ \\mathbf{x}_2 \\\\\\ \\mathbf{x}_3 \\\\\\ \\vdots \\\\\\ \\mathbf{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} A_0 \\\\\\ A_1 A_0 \\\\\\ A_2 A_1 A_0\\\\\\ \\vdots \\\\\\ \\prod\\limits_{k=0}^{n-1} A_{k} \\end{pmatrix} \\mathbf{x}_0 + \\begin{pmatrix} B_0 & 0 & & \\dots & 0 \\\\\\ A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\\\ A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} u_0 \\\\\\ u_1 \\\\\\ u_2 \\\\\\ \\vdots \\\\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} I & 0 & & \\dots & 0 \\\\\\ A_0 & I & 0 & \\dots & 0 \\\\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\mathbf{w}_0 \\\\\\ \\mathbf{w}_1 \\\\\\ \\mathbf{w}_2 \\\\\\ \\vdots \\\\\\ \\mathbf{w}_{n-2} \\end{pmatrix} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#free-boundary-conditioned-time-series-state-equation","title":"Free-boundary-conditioned time-series state equation","text":"

For path planning that does not start from the current ego pose, \\(\\mathbf{x}_0\\) should be a design variable of the optimization. Therefore, we make \\(\\mathbf{u}'\\) by concatenating \\(\\mathbf{x}_0\\) and \\(\\mathbf{u}\\), and redefine \\(\\mathbf{x}\\) as follows.

\\[ \\begin{align} \\mathbf{u}' & = (\\mathbf{x}^T_0, \\mathbf{u}^T)^T \\\\\\ \\mathbf{x} & = (\\mathbf{x}^T_0, \\mathbf{x}^T_1, \\mathbf{x}^T_2, \\dots, \\mathbf{x}^T_{n-1})^T \\end{align} \\]

Then we get the following state equation

\\[ \\begin{align} \\mathbf{x} = B \\mathbf{u}' + \\mathbf{w}, \\end{align} \\]

which is in detail

\\[ \\begin{align} \\begin{pmatrix} \\mathbf{x}_0 \\\\\\ \\mathbf{x}_1 \\\\\\ \\mathbf{x}_2 \\\\\\ \\mathbf{x}_3 \\\\\\ \\vdots \\\\\\ \\mathbf{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} I & 0 & \\dots & & & 0 \\\\\\ A_0 & B_0 & 0 & & \\dots & 0 \\\\\\ A_1 A_0 & A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\\\ A_2 A_1 A_0 & A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\\\ \\vdots & \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-1} A_k & \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} \\mathbf{x}_0 \\\\\\ u_0 \\\\\\ u_1 \\\\\\ u_2 \\\\\\ \\vdots \\\\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} 0 & \\dots & & & 0 \\\\\\ I & 0 & & \\dots & 0 \\\\\\ A_0 & I & 0 & \\dots & 0 \\\\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\mathbf{w}_0 \\\\\\ \\mathbf{w}_1 \\\\\\ \\mathbf{w}_2 \\\\\\ \\vdots \\\\\\ \\mathbf{w}_{n-2} \\end{pmatrix}. \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#objective-function","title":"Objective function","text":"

The objective function for smoothing and tracking is shown as follows, which can be formulated with value function matrices \\(Q, R\\).

\\[ \\begin{align} J_1 (\\mathbf{x}, \\mathbf{u}') & = w_y \\sum_{k} y_k^2 + w_{\\theta} \\sum_{k} \\theta_k^2 + w_{\\delta} \\sum_k \\delta_k^2 + w_{\\dot \\delta} \\sum_k \\dot \\delta_k^2 + w_{\\ddot \\delta} \\sum_k \\ddot \\delta_k^2 \\\\\\ & = \\mathbf{x}^T Q \\mathbf{x} + \\mathbf{u}'^T R \\mathbf{u}' \\\\\\ & = \\mathbf{u}'^T H \\mathbf{u}' + \\mathbf{u}'^T \\mathbf{f} \\end{align} \\]
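
Substituting \\(\\mathbf{x} = B \\mathbf{u}' + \\mathbf{w}\\) into \\(J_1\\) gives \\(H = B^T Q B + R\\) and \\(\\mathbf{f} = 2 B^T Q \\mathbf{w}\\) up to a constant term. A minimal Eigen sketch of this assembly (an illustration, not the package's actual code):

#include <Eigen/Dense>\n\n// Assemble the QP objective J1 = u'^T H u' + f^T u' (+ const) from\n// x = B u' + w and J1 = x^T Q x + u'^T R u', assuming Q is symmetric.\nvoid buildObjective(\nconst Eigen::MatrixXd & B, const Eigen::VectorXd & w, const Eigen::MatrixXd & Q,\nconst Eigen::MatrixXd & R, Eigen::MatrixXd & H, Eigen::VectorXd & f)\n{\nH = B.transpose() * Q * B + R;    // quadratic term\nf = 2.0 * B.transpose() * Q * w;  // linear term from the affine offset w\n}\n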

As mentioned before, the constraints to be collision free with obstacles and road boundaries are formulated to be soft constraints. Assuming that the lateral distance to the road boundaries or obstacles from the back wheel center, front wheel center, and the point between them are \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) respectively, and slack variables for each point are \\(\\lambda_{\\mathrm{base}}, \\lambda_{\\mathrm{top}}, \\lambda_{\\mathrm{mid}}\\), the soft constraints can be formulated as follows.

\\[ y_{\\mathrm{base}, k, \\min} - \\lambda_{\\mathrm{base}, k} \\leq y_{\\mathrm{base}, k} (y_k) \\leq y_{\\mathrm{base}, k, \\max} + \\lambda_{\\mathrm{base}, k}\\\\\\ y_{\\mathrm{top}, k, \\min} - \\lambda_{\\mathrm{top}, k} \\leq y_{\\mathrm{top}, k} (y_k) \\leq y_{\\mathrm{top}, k, \\max} + \\lambda_{\\mathrm{top}, k}\\\\\\ y_{\\mathrm{mid}, k, \\min} - \\lambda_{\\mathrm{mid}, k} \\leq y_{\\mathrm{mid}, k} (y_k) \\leq y_{\\mathrm{mid}, k, \\max} + \\lambda_{\\mathrm{mid}, k} \\\\\\ 0 \\leq \\lambda_{\\mathrm{base}, k} \\\\\\ 0 \\leq \\lambda_{\\mathrm{top}, k} \\\\\\ 0 \\leq \\lambda_{\\mathrm{mid}, k} \\]

Since \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) are formulated as linear functions of \\(y_k\\), the objective function for the soft constraints is formulated as follows.

\\[ \\begin{align} J_2 & (\\mathbf{\\lambda}_\\mathrm{base}, \\mathbf{\\lambda}_\\mathrm{top}, \\mathbf {\\lambda}_\\mathrm{mid})\\\\\\ & = w_{\\mathrm{base}} \\sum_{k} \\lambda_{\\mathrm{base}, k} + w_{\\mathrm{mid}} \\sum_k \\lambda_{\\mathrm{mid}, k} + w_{\\mathrm{top}} \\sum_k \\lambda_{\\mathrm{top}, k} \\end{align} \\]

Slack variables are also design variables for the optimization. We define a vector \\(\\mathbf{v}\\) that concatenates all the design variables.

\\[ \\begin{align} \\mathbf{v} = \\begin{pmatrix} \\mathbf{u}'^T & \\mathbf{\\lambda}_\\mathrm{base}^T & \\mathbf{\\lambda}_\\mathrm{top}^T & \\mathbf{\\lambda}_\\mathrm{mid}^T \\end{pmatrix}^T \\end{align} \\]

The summation of these two objective functions is the objective function for the optimization problem.

\\[ \\begin{align} \\min_{\\mathbf{v}} J (\\mathbf{v}) = \\min_{\\mathbf{v}} J_1 (\\mathbf{u}') + J_2 (\\mathbf{\\lambda}_\\mathrm{base}, \\mathbf{\\lambda}_\\mathrm{top}, \\mathbf{\\lambda}_\\mathrm{mid}) \\end{align} \\]

As mentioned before, we impose hard constraints that fix some trajectory points in front of the ego to the previously generated trajectory points. This hard constraint is formulated as follows.

\\[ \\begin{align} \\delta_k = \\delta_{k}^{\\mathrm{prev}} (0 \\leq k \\leq N_{\\mathrm{fix}}) \\end{align} \\]

Finally we transform those objective functions to the following QP problem, and solve it.

\\[ \\begin{align} \\min_{\\mathbf{v}} \\ & \\frac{1}{2} \\mathbf{v}^T \\mathbf{H} \\mathbf{v} + \\mathbf{f}^T \\mathbf{v} \\\\\\ \\mathrm{s.t.} \\ & \\mathbf{b}_{lower} \\leq \\mathbf{A} \\mathbf{v} \\leq \\mathbf{b}_{upper} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#constraints","title":"Constraints","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#steer-angle-limitation","title":"Steer angle limitation","text":"

The steer angle is limited by \\(\\delta_{max}\\) and \\(\\delta_{min}\\). Therefore, we add the following linear inequality constraints.

\\[ \\begin{align} \\delta_{min} \\leq \\delta_k \\leq \\delta_{max} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#collision-free","title":"Collision free","text":"

To realize collision-free trajectory planning, the constraints that the vehicle stays inside the road and does not collide with obstacles have to be formulated as linear equations. For linearity, we implemented several methods that approximate the vehicle shape with a set of circles, which is reliable and easy to implement.

  1. Bicycle Model
  2. Uniform Circles
  3. Fitting Uniform Circles

Now we formulate the linear constraints so that the set of circles on each trajectory point is collision-free. Using the drivable area, we calculate the upper and lower boundaries along the reference points, which can be interpolated at any position on the trajectory. Note that the upper and lower boundaries correspond to the left and right sides, respectively.

Assuming that the lower and upper boundaries are \\(b_l\\) and \\(b_u\\) respectively, and \\(r\\) is the radius of a circle, the lateral deviation of the circle center \\(y'\\) has to satisfy

\\[ b_l + r \\leq y' \\leq b_u - r. \\]

Based on the following figure, \\(y'\\) can be formulated as follows.

\\[ \\begin{align} y' & = L \\sin(\\theta + \\beta) + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\\\ & = L \\sin \\theta \\cos \\beta + L \\cos \\theta \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\\\ & \\approx L \\theta \\cos \\beta + L \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\end{align} \\] \\[ b_l + r - \\lambda \\leq y' \\leq b_u - r + \\lambda. \\] \\[ \\begin{align} y' & = C_1 \\mathbf{x} + C_2 \\\\\\ & = C_1 (B \\mathbf{v} + \\mathbf{w}) + C_2 \\\\\\ & = C_1 B \\mathbf{v} + C_1 \\mathbf{w} + C_2 \\end{align} \\]

Note that the longitudinal position of the circle center and that of the trajectory point used to calculate the boundaries are different. However, since each boundary is perpendicular to the trajectory and road boundaries do not change much longitudinally, the distortion caused by this longitudinal difference is small. By contrast, if the boundaries were not perpendicular to the trajectory and there were a certain longitudinal difference between the circle center and the trajectory point, the comparison between the lateral deviation and the boundaries would be much more distorted.

\\[ \\begin{align} A_{blk} & = \\begin{pmatrix} C_1 B & O & \\dots & O & I_{N_{ref} \\times N_{ref}} & O \\dots & O\\\\\\ -C_1 B & O & \\dots & O & I & O \\dots & O\\\\\\ O & O & \\dots & O & I & O \\dots & O \\end{pmatrix} \\in \\mathbf{R}^{3 N_{ref} \\times D_v + N_{circle} N_{ref}} \\\\\\ \\mathbf{b}_{lower, blk} & = \\begin{pmatrix} \\mathbf{b}_{lower} - C_1 \\mathbf{w} - C_2 \\\\\\ -\\mathbf{b}_{upper} + C_1 \\mathbf{w} + C_2 \\\\\\ O \\end{pmatrix} \\in \\mathbf{R}^{3 N_{ref}} \\\\\\ \\mathbf{b}_{upper, blk} & = \\mathbf{\\infty} \\in \\mathbf{R}^{3 N_{ref}} \\end{align} \\]

We will explain options for optimization.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#l-infinity-optimization","title":"L-infinity optimization","text":"

The above formulation uses the L2 norm for the slack variables. Instead, when l_inf_norm is enabled, the L-infinity norm is used, where the slack variables are shared among the constraints, and the constraint matrix becomes as follows.

\\[ \\begin{align} A_{blk} = \\begin{pmatrix} C_1 B & I_{N_{ref} \\times N_{ref}} \\\\\\ -C_1 B & I \\\\\\ O & I \\end{pmatrix} \\in \\mathbf{R}^{3N_{ref} \\times D_v + N_{ref}} \\end{align} \\]"},{"location":"planning/autoware_path_optimizer/docs/mpt/#tips-for-stable-trajectory-planning","title":"Tips for stable trajectory planning","text":"

In order to make the trajectory optimization problem more stable to solve, the boundary constraint, inside which the trajectory footprints should stay, and the optimization weights are modified.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#keep-minimum-boundary-width","title":"Keep minimum boundary width","text":"

The drivable area's width is sometimes smaller than the vehicle width since the behavior module does not consider the width. To realize stable trajectory optimization, the drivable area's width is guaranteed, in a rule-based way, to be larger than the vehicle width plus an additional margin.

Since we cannot distinguish boundaries formed by roads from boundaries formed by obstacles to avoid in the motion planner, the drivable area is modified in the following multiple steps, assuming that \\(l_{width}\\) is the vehicle width and \\(l_{margin}\\) is an additional margin.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#extend-violated-boundary","title":"Extend violated boundary","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#avoid-sudden-steering","title":"Avoid sudden steering","text":"

When an obstacle that the behavior module decides to avoid suddenly appears, the shape of the drivable area just in front of the ego vehicle changes, resulting in sudden steering. To prevent this, the shape of the drivable area close to the ego is kept fixed to the previous drivable area's shape.

Assume that \\(v_{ego}\\) is the ego velocity, and \\(t_{fix}\\) is the time to fix the forward drivable area's shape.

"},{"location":"planning/autoware_path_optimizer/docs/mpt/#calculate-avoidance-cost","title":"Calculate avoidance cost","text":""},{"location":"planning/autoware_path_optimizer/docs/mpt/#change-optimization-weights","title":"Change optimization weights","text":"\\[ \\begin{align} r & = \\mathrm{lerp}(w^{\\mathrm{steer}}_{\\mathrm{normal}}, w^{\\mathrm{steer}}_{\\mathrm{avoidance}}, c) \\\\\\ w^{\\mathrm{lat}} & = \\mathrm{lerp}(w^{\\mathrm{lat}}_{\\mathrm{normal}}, w^{\\mathrm{lat}}_{\\mathrm{avoidance}}, r) \\\\\\ w^{\\mathrm{yaw}} & = \\mathrm{lerp}(w^{\\mathrm{yaw}}_{\\mathrm{normal}}, w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}, r) \\end{align} \\]

Assume that \\(c\\) is the normalized avoidance cost, \\(w^{\\mathrm{lat}}\\) is the weight for lateral error, \\(w^{\\mathrm{yaw}}\\) is the weight for yaw error, and other variables are as follows.

Parameter Type Description \\(w^{\\mathrm{steer}}_{\\mathrm{normal}}\\) double weight for steering minimization in normal cases \\(w^{\\mathrm{steer}}_{\\mathrm{avoidance}}\\) double weight for steering minimization in avoidance cases \\(w^{\\mathrm{lat}}_{\\mathrm{normal}}\\) double weight for lateral error minimization in normal cases \\(w^{\\mathrm{lat}}_{\\mathrm{avoidance}}\\) double weight for lateral error minimization in avoidance cases \\(w^{\\mathrm{yaw}}_{\\mathrm{normal}}\\) double weight for yaw error minimization in normal cases \\(w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}\\) double weight for yaw error minimization in avoidance cases"},{"location":"planning/autoware_path_smoother/","title":"Path Smoothing","text":""},{"location":"planning/autoware_path_smoother/#path-smoothing","title":"Path Smoothing","text":""},{"location":"planning/autoware_path_smoother/#purpose","title":"Purpose","text":"

This package contains code to smooth a path or trajectory.

"},{"location":"planning/autoware_path_smoother/#features","title":"Features","text":""},{"location":"planning/autoware_path_smoother/#elastic-band","title":"Elastic Band","text":"

More details about the elastic band can be found here.

"},{"location":"planning/autoware_path_smoother/docs/eb/","title":"Elastic band","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#elastic-band","title":"Elastic band","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#abstract","title":"Abstract","text":"

The elastic band smooths the input path. Since the subsequent optimization (model predictive trajectory) is calculated in the Frenet frame, path smoothing is applied here so that the subsequent optimization will be stable.

Note that this smoothing process does not consider collision checking. Therefore, the output path may collide with road boundaries or obstacles.

"},{"location":"planning/autoware_path_smoother/docs/eb/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#general-parameters","title":"General parameters","text":"Parameter Type Description eb.common.num_points int points for elastic band optimization eb.common.delta_arc_length double delta arc length for elastic band optimization"},{"location":"planning/autoware_path_smoother/docs/eb/#parameters-for-optimization","title":"Parameters for optimization","text":"Parameter Type Description eb.option.enable_warm_start bool flag to use warm start eb.weight.smooth_weight double weight for smoothing eb.weight.lat_error_weight double weight for minimizing the lateral error"},{"location":"planning/autoware_path_smoother/docs/eb/#parameters-for-validation","title":"Parameters for validation","text":"Parameter Type Description eb.option.enable_optimization_validation bool flag to validate optimization eb.validation.max_error double max lateral error by optimization"},{"location":"planning/autoware_path_smoother/docs/eb/#formulation","title":"Formulation","text":""},{"location":"planning/autoware_path_smoother/docs/eb/#objective-function","title":"Objective function","text":"

We formulate a quadratic problem minimizing, at each point, the diagonal length of the rhombus formed by the current point and its previous and next points, shown as the red vector's length.

Assuming that the \\(k\\)'th point is \\(\\mathbf{p}_k = (x_k, y_k)\\), the objective function is as follows.
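
This quadratic form is \\(D^T D\\) per coordinate axis, where \\(D\\) is the second-difference operator; a minimal Eigen sketch of that structure (an illustration, not the package's actual code), with the full matrix expanded below:

#include <Eigen/Dense>\n\n// Build P = D^T D for one coordinate axis, where D is the (n-2) x n\n// second-difference operator: (D p)_k = p_{k+1} - 2 p_k + p_{k-1}.\n// Applying it to x and y separately yields the block matrix expanded below.\nEigen::MatrixXd buildSmoothnessMatrix(const int n)\n{\nEigen::MatrixXd D = Eigen::MatrixXd::Zero(n - 2, n);\nfor (int k = 0; k < n - 2; ++k) {\nD(k, k) = 1.0;\nD(k, k + 1) = -2.0;\nD(k, k + 2) = 1.0;\n}\nreturn D.transpose() * D;  // banded matrix with the 1, -4, 6, -4, 1 pattern\n}\n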

\\[ \\begin{align} \\ J & = \\min \\sum_{k=1}^{n-2} ||(\\mathbf{p}_{k+1} - \\mathbf{p}_{k}) - (\\mathbf{p}_{k} - \\mathbf{p}_{k-1})||^2 \\\\\\ \\ & = \\min \\sum_{k=1}^{n-2} ||\\mathbf{p}_{k+1} - 2 \\mathbf{p}_{k} + \\mathbf{p}_{k-1}||^2 \\\\\\ \\ & = \\min \\sum_{k=1}^{n-2} \\{(x_{k+1} - 2 x_k + x_{k-1})^2 + (y_{k+1} - 2 y_k + y_{k-1})^2\\} \\\\\\ \\ & = \\min \\begin{pmatrix} \\ x_0 \\\\\\ \\ x_1 \\\\\\ \\ x_2 \\\\\\ \\vdots \\\\\\ \\ x_{n-3}\\\\\\ \\ x_{n-2} \\\\\\ \\ x_{n-1} \\\\\\ \\ y_0 \\\\\\ \\ y_1 \\\\\\ \\ y_2 \\\\\\ \\vdots \\\\\\ \\ y_{n-3}\\\\\\ \\ y_{n-2} \\\\\\ \\ y_{n-1} \\\\\\ \\end{pmatrix}^T \\begin{pmatrix} 1 & -2 & 1 & 0 & \\dots& \\\\\\ -2 & 5 & -4 & 1 & 0 &\\dots \\\\\\ 1 & -4 & 6 & -4 & 1 & \\\\\\ 0 & 1 & -4 & 6 & -4 & \\\\\\ \\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\\\ & \\vdots & & & \\\\\\ & & & 1 & -4 & 6 & -4 & 1 \\\\\\ & & & & 1 & -4 & 5 & -2 \\\\\\ & & & & & 1 & -2 & 1& \\\\\\ & & & & & & & &1 & -2 & 1 & 0 & \\dots& \\\\\\ & & & & & & & &-2 & 5 & -4 & 1 & 0 &\\dots \\\\\\ & & & & & & & &1 & -4 & 6 & -4 & 1 & \\\\\\ & & & & & & & &0 & 1 & -4 & 6 & -4 & \\\\\\ & & & & & & & &\\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\\\ & & & & & & & & & \\vdots & & & \\\\\\ & & & & & & & & & & & 1 & -4 & 6 & -4 & 1 \\\\\\ & & & & & & & & & & & & 1 & -4 & 5 & -2 \\\\\\ & & & & & & & & & & & & & 1 & -2 & 1& \\\\\\ \\end{pmatrix} \\begin{pmatrix} \\ x_0 \\\\\\ \\ x_1 \\\\\\ \\ x_2 \\\\\\ \\vdots \\\\\\ \\ x_{n-3}\\\\\\ \\ x_{n-2} \\\\\\ \\ x_{n-1} \\\\\\ \\ y_0 \\\\\\ \\ y_1 \\\\\\ \\ y_2 \\\\\\ \\vdots \\\\\\ \\ y_{n-3}\\\\\\ \\ y_{n-2} \\\\\\ \\ y_{n-1} \\\\\\ \\end{pmatrix} \\end{align} \\]"},{"location":"planning/autoware_path_smoother/docs/eb/#constraint","title":"Constraint","text":"

The distance that each point can move is limited so that the path will not change significantly while becoming smoother. In detail, the longitudinal distance that each point can move is zero, and the lateral distance is parameterized by eb.clearance.clearance_for_fix, eb.clearance.clearance_for_joint and eb.clearance.clearance_for_smooth.

The following figure describes how to constrain the lateral distance to move. The red line is where the point can move. The points for the upper and lower bound are described as \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), respectively.

Based on the line equation whose slope angle is \\(\\theta_k\\) and that passes through \\((x_k, y_k)\\), \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), the lateral constraint is formulated as follows.

\\[ C_k^l \\leq C_k \\leq C_k^u \\]

In addition, the beginning point is fixed, and the end point as well if it is considered the goal. These constraints can be applied with the above equation by changing the distance that each point can move.

"},{"location":"planning/autoware_path_smoother/docs/eb/#debug","title":"Debug","text":" "},{"location":"planning/autoware_planning_factor_interface/","title":"planning factor interface","text":""},{"location":"planning/autoware_planning_test_manager/","title":"Autoware Planning Test Manager","text":""},{"location":"planning/autoware_planning_test_manager/#autoware-planning-test-manager","title":"Autoware Planning Test Manager","text":""},{"location":"planning/autoware_planning_test_manager/#background","title":"Background","text":"

In each node of the planning module, when exceptional input, such as an unusual route or a significantly deviated ego position, is given, the node may not be prepared for it and could crash. As a result, debugging node crashes can be time-consuming. For example, if an empty trajectory is given as input and this was not anticipated during implementation, the node might crash due to the unaddressed exceptional input when changes are merged, during scenario testing, or while the system is running on an actual vehicle.

"},{"location":"planning/autoware_planning_test_manager/#purpose","title":"Purpose","text":"

The purpose is to provide a utility for implementing tests to ensure that a node operates correctly when receiving exceptional input. By utilizing this utility and implementing tests for exceptional input, measures for exceptional input are required before PRs are merged, reducing the number of bugs that are only discovered when actually running the system.

"},{"location":"planning/autoware_planning_test_manager/#features","title":"Features","text":""},{"location":"planning/autoware_planning_test_manager/#confirmation-of-normal-operation","title":"Confirmation of normal operation","text":"

For the test target node, confirm that the node operates correctly and publishes the messages required by subsequent nodes. To do this, the test_node publishes the necessary messages and confirms that the target node's output is being published.

"},{"location":"planning/autoware_planning_test_manager/#robustness-confirmation-for-special-inputs","title":"Robustness confirmation for special inputs","text":"

After confirming normal operation, ensure that the test target node does not crash when given exceptional input. To do this, provide exceptional input from the test_node and confirm that the node does not crash.

(WIP)

"},{"location":"planning/autoware_planning_test_manager/#usage","title":"Usage","text":"
TEST(PlanningModuleInterfaceTest, NodeTestWithExceptionTrajectory)\n{\nrclcpp::init(0, nullptr);\n\n// instantiate test_manager with PlanningInterfaceTestManager type\nauto test_manager = std::make_shared<autoware::planning_test_manager::PlanningInterfaceTestManager>();\n\n// get package directories for necessary configuration files\nconst auto autoware_test_utils_dir =\nament_index_cpp::get_package_share_directory(\"autoware_test_utils\");\nconst auto target_node_dir =\nament_index_cpp::get_package_share_directory(\"target_node\");\n\n// set arguments to get the config files\nrclcpp::NodeOptions node_options;\nnode_options.arguments(\n{\"--ros-args\", \"--params-file\",\nautoware_test_utils_dir + \"/config/test_vehicle_info.param.yaml\", \"--params-file\",\ntarget_node_dir + \"/config/target_node.param.yaml\"});\n\n// instantiate the TargetNode with node_options\nauto test_target_node = std::make_shared<TargetNode>(node_options);\n\n// publish the necessary topics from test_manager (the second argument is the topic name)\ntest_manager->publishOdometry(test_target_node, \"/localization/kinematic_state\");\ntest_manager->publishMaxVelocity(\ntest_target_node, \"velocity_smoother/input/external_velocity_limit_mps\");\n\n// set the target node's input topic name (this topic is published by the test manager)\ntest_manager->setTrajectoryInputTopicName(\"input/parking/trajectory\");\n\n// test with normal trajectory\nASSERT_NO_THROW(test_manager->testWithNominalTrajectory(test_target_node));\n\n// make sure target_node is running\nEXPECT_GE(test_manager->getReceivedTopicNum(), 1);\n\n// test with trajectory input with empty/one point/overlapping point\nASSERT_NO_THROW(test_manager->testWithAbnormalTrajectory(test_target_node));\n\n// shutdown ROS context\nrclcpp::shutdown();\n}\n
"},{"location":"planning/autoware_planning_test_manager/#implemented-tests","title":"Implemented tests","text":"Node Test name exceptional input output Exceptional input pattern autoware_planning_validator NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points velocity_smoother NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_cruise_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_stop_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_velocity_limiter NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points path_optimizer NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points scenario_selector NodeTestWithExceptionTrajectoryLaneDrivingMode NodeTestWithExceptionTrajectoryParkingMode trajectory scenario Empty, single point, path with duplicate points for scenarios:LANEDRIVING and PARKING freespace_planner NodeTestWithExceptionRoute route trajectory Empty route behavior_path_planner NodeTestWithExceptionRoute NodeTestWithOffTrackEgoPose route route odometry Empty route Off-lane ego-position behavior_velocity_planner NodeTestWithExceptionPathWithLaneID path_with_lane_id path Empty path"},{"location":"planning/autoware_planning_test_manager/#important-notes","title":"Important Notes","text":"

During test execution, when a node is launched, parameters are loaded from the parameter file within each package. Therefore, when adding parameters, it is necessary to add the required parameters to the parameter file in the target node package. This prevents the node from failing to launch due to missing parameters when they are retrieved from the parameter file at node launch.

"},{"location":"planning/autoware_planning_test_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"

(WIP)

"},{"location":"planning/autoware_planning_topic_converter/","title":"Planning Topic Converter","text":""},{"location":"planning/autoware_planning_topic_converter/#planning-topic-converter","title":"Planning Topic Converter","text":""},{"location":"planning/autoware_planning_topic_converter/#purpose","title":"Purpose","text":"

This package provides tools that convert topic types among those defined in https://github.com/autowarefoundation/autoware_msgs.

"},{"location":"planning/autoware_planning_topic_converter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_planning_topic_converter/#usage-example","title":"Usage example","text":"

The tools in this package are provided as composable ROS 2 component nodes, so that they can be spawned into an existing process, launched from launch files, or invoked from the command line.

<load_composable_node target=\"container_name\">\n<composable_node pkg=\"planning_topic_converter\" plugin=\"autoware::planning_topic_converter::PathToTrajectory\" name=\"path_to_trajectory_converter\" namespace=\"\">\n<!-- params -->\n<param name=\"input_topic\" value=\"foo\"/>\n<param name=\"output_topic\" value=\"bar\"/>\n<!-- composable node config -->\n<extra_arg name=\"use_intra_process_comms\" value=\"false\"/>\n</composable_node>\n</load_composable_node>\n
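
As a rough sketch of what such a converter looks like internally (a simplified illustration, not the package's actual source), a component node subscribes to the input topic and republishes the converted type:

#include <rclcpp/rclcpp.hpp>\n\n#include <autoware_planning_msgs/msg/path.hpp>\n#include <autoware_planning_msgs/msg/trajectory.hpp>\n\n#include <string>\n\n// Simplified Path -> Trajectory converter; the actual plugin also converts each point.\nclass PathToTrajectoryExample : public rclcpp::Node\n{\npublic:\nexplicit PathToTrajectoryExample(const rclcpp::NodeOptions & options)\n: Node(\"path_to_trajectory_converter\", options)\n{\nconst auto input_topic = declare_parameter<std::string>(\"input_topic\", \"input\");\nconst auto output_topic = declare_parameter<std::string>(\"output_topic\", \"output\");\npub_ = create_publisher<autoware_planning_msgs::msg::Trajectory>(output_topic, 1);\nsub_ = create_subscription<autoware_planning_msgs::msg::Path>(\ninput_topic, 1, [this](const autoware_planning_msgs::msg::Path::SharedPtr msg) {\nautoware_planning_msgs::msg::Trajectory trajectory;\ntrajectory.header = msg->header;\n// ... convert each path point to a trajectory point here ...\npub_->publish(trajectory);\n});\n}\n\nprivate:\nrclcpp::Publisher<autoware_planning_msgs::msg::Trajectory>::SharedPtr pub_;\nrclcpp::Subscription<autoware_planning_msgs::msg::Path>::SharedPtr sub_;\n};\n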
"},{"location":"planning/autoware_planning_topic_converter/#parameters","title":"Parameters","text":"Name Type Description input_topic string input topic name. output_topic string output topic name."},{"location":"planning/autoware_planning_topic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_planning_topic_converter/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_planning_validator/","title":"Planning Validator","text":""},{"location":"planning/autoware_planning_validator/#planning-validator","title":"Planning Validator","text":"

The autoware_planning_validator is a module that checks the validity of a trajectory before it is published. The status of the validation can be viewed in the /diagnostics and /validation_status topics. When an invalid trajectory is detected, the autoware_planning_validator processes the trajectory according to the selected option: \"0. publish the trajectory as it is\", \"1. stop publishing the trajectory\", or \"2. publish the last validated trajectory\".

"},{"location":"planning/autoware_planning_validator/#supported-features","title":"Supported features","text":"

The following features are supported for trajectory validation and can have thresholds set by parameters:

The following features are to be implemented.

"},{"location":"planning/autoware_planning_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"planning/autoware_planning_validator/#inputs","title":"Inputs","text":"

The autoware_planning_validator takes in the following inputs:

Name Type Description ~/input/kinematics nav_msgs/Odometry ego pose and twist ~/input/trajectory autoware_planning_msgs/Trajectory target trajectory to be validated in this node"},{"location":"planning/autoware_planning_validator/#outputs","title":"Outputs","text":"

It outputs the following:

Name Type Description ~/output/trajectory autoware_planning_msgs/Trajectory validated trajectory ~/output/validation_status planning_validator/PlanningValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"planning/autoware_planning_validator/#parameters","title":"Parameters","text":"

The following parameters can be set for the autoware_planning_validator:

"},{"location":"planning/autoware_planning_validator/#system-parameters","title":"System parameters","text":"Name Type Description Default value invalid_trajectory_handling_type int set the operation when the invalid trajectory is detected. 0: publish the trajectory even if it is invalid, 1: stop publishing the trajectory, 2: publish the last validated trajectory. 0 publish_diag bool the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true diag_error_count_threshold int if true, diagnostics msg is published. true display_on_terminal bool show error msg on terminal true"},{"location":"planning/autoware_planning_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"planning/autoware_planning_validator/#thresholds","title":"Thresholds","text":"

The input trajectory is detected as invalid if any of the checked metrics exceeds the following thresholds.

Name Type Description Default value thresholds.interval double invalid threshold of the distance of two neighboring trajectory points [m] 100.0 thresholds.relative_angle double invalid threshold of the relative angle of two neighboring trajectory points [rad] 2.0 thresholds.curvature double invalid threshold of the curvature in each trajectory point [1/m] 1.0 thresholds.lateral_acc double invalid threshold of the lateral acceleration in each trajectory point [m/ss] 9.8 thresholds.longitudinal_max_acc double invalid threshold of the maximum longitudinal acceleration in each trajectory point [m/ss] 9.8 thresholds.longitudinal_min_acc double invalid threshold of the minimum longitudinal deceleration in each trajectory point [m/ss] -9.8 thresholds.steering double invalid threshold of the steering angle in each trajectory point [rad] 1.414 thresholds.steering_rate double invalid threshold of the steering angle rate in each trajectory point [rad/s] 10.0 thresholds.velocity_deviation double invalid threshold of the velocity deviation between the ego velocity and the trajectory point closest to ego [m/s] 100.0 thresholds.distance_deviation double invalid threshold of the distance deviation between the ego position and the trajectory point closest to ego [m] 100.0 parameters.longitudinal_distance_deviation double invalid threshold of the longitudinal distance deviation between the ego position and the trajectory [m] 2.0"},{"location":"planning/autoware_planning_validator/#parameters_1","title":"Parameters","text":"

Parameters used, for example, to calculate the thresholds.

Name Type Description Default value parameters.forward_trajectory_length_acceleration double This value is used to calculate the required trajectory length. -5.0 parameters.forward_trajectory_length_margin double A margin of the required trajectory length not to raise an error when the ego slightly exceeds the trajectory end point. 2.0
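
As a worked example, the required trajectory length checked with these parameters can be derived from the braking distance at the configured deceleration plus the margin. A minimal sketch, using a hypothetical helper that is not part of the validator's API:

#include <cmath>\n\n// Required forward trajectory length: braking distance at the configured\n// deceleration (default -5.0 m/ss) plus a margin (default 2.0 m).\ndouble requiredTrajectoryLength(\n  const double ego_velocity, const double forward_trajectory_length_acceleration,\n  const double forward_trajectory_length_margin)\n{\n  const double braking_distance =\n    (ego_velocity * ego_velocity) / (2.0 * std::abs(forward_trajectory_length_acceleration));\n  return braking_distance + forward_trajectory_length_margin;\n}\n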

"},{"location":"planning/autoware_remaining_distance_time_calculator/","title":"Index","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#remaining-distance-and-time-calculator","title":"Remaining Distance and Time Calculator","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#role","title":"Role","text":"

This package aims to provide mission remaining distance and remaining time calculations.

"},{"location":"planning/autoware_remaining_distance_time_calculator/#activation-and-timing","title":"Activation and Timing","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#module-parameters","title":"Module Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period. [Hz]"},{"location":"planning/autoware_remaining_distance_time_calculator/#inner-workings","title":"Inner-workings","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#remaining-distance-calculation","title":"Remaining Distance Calculation","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#remaining-time-calculation","title":"Remaining Time Calculation","text":""},{"location":"planning/autoware_remaining_distance_time_calculator/#future-work","title":"Future Work","text":""},{"location":"planning/autoware_route_handler/","title":"route handler","text":""},{"location":"planning/autoware_route_handler/#route-handler","title":"route handler","text":"

route_handler is a library for calculating the driving route on the lanelet map.

"},{"location":"planning/autoware_route_handler/#unit-testing","title":"Unit Testing","text":"

The unit testing depends on the autoware_test_utils package. autoware_test_utils is a library that provides several common functions to simplify unit test creation.

By default, a route file is necessary to create the tests. The following illustrates the route that is used in the unit tests.

"},{"location":"planning/autoware_route_handler/#lane-change-test-route","title":"Lane change test route","text":""},{"location":"planning/autoware_rtc_interface/","title":"RTC Interface","text":""},{"location":"planning/autoware_rtc_interface/#rtc-interface","title":"RTC Interface","text":""},{"location":"planning/autoware_rtc_interface/#purpose","title":"Purpose","text":"

RTC Interface is an interface for publishing the decision status of behavior planning modules and receiving execution commands from outside the autonomous driving system.

"},{"location":"planning/autoware_rtc_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_rtc_interface/#usage-example","title":"Usage example","text":"
// Generate instance (in this example, \"intersection\" is selected)\nautoware::rtc_interface::RTCInterface rtc_interface(node, \"intersection\");\n\n// Generate UUID\nconst unique_identifier_msgs::msg::UUID uuid = generateUUID(getModuleId());\n\n// Repeat while module is running\nwhile (...) {\n// Get safety status of the module corresponding to the module id\nconst bool safe = ...\n\n// Get distance to the object corresponding to the module id\nconst double start_distance = ...\nconst double finish_distance = ...\n\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Update status\nrtc_interface.updateCooperateStatus(uuid, safe, start_distance, finish_distance, stamp);\n\nif (rtc_interface.isActivated(uuid)) {\n// Execute planning\n} else {\n// Stop planning\n}\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Publish status topic\nrtc_interface.publishCooperateStatus(stamp);\n}\n\n// Remove the status from array\nrtc_interface.removeCooperateStatus(uuid);\n
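
Note that generateUUID in the example above is not provided by RTCInterface. A minimal sketch of such a helper, assuming only that unique_identifier_msgs::msg::UUID holds a 16-byte uuid array (the module-id argument is omitted for brevity):

#include <cstdint>\n#include <random>\n\n#include <unique_identifier_msgs/msg/uuid.hpp>\n\n// Hypothetical helper: fill a UUID message with random bytes.\nunique_identifier_msgs::msg::UUID generateUUID()\n{\n  static std::mt19937 gen(std::random_device{}());\n  std::uniform_int_distribution<int> dist(0, 255);\n  unique_identifier_msgs::msg::UUID uuid;\n  for (auto & byte : uuid.uuid) {\n    byte = static_cast<uint8_t>(dist(gen));\n  }\n  return uuid;\n}\n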
"},{"location":"planning/autoware_rtc_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_rtc_interface/#rtcinterface-constructor","title":"RTCInterface (Constructor)","text":"
autoware::rtc_interface::RTCInterface(rclcpp::Node & node, const std::string & name);\n
"},{"location":"planning/autoware_rtc_interface/#description","title":"Description","text":"

A constructor for autoware::rtc_interface::RTCInterface.

"},{"location":"planning/autoware_rtc_interface/#input","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output","title":"Output","text":"

An instance of RTCInterface

"},{"location":"planning/autoware_rtc_interface/#publishcooperatestatus","title":"publishCooperateStatus","text":"
autoware::rtc_interface::publishCooperateStatus(const rclcpp::Time & stamp)\n
"},{"location":"planning/autoware_rtc_interface/#description_1","title":"Description","text":"

Publish registered cooperate status.

"},{"location":"planning/autoware_rtc_interface/#input_1","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_1","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#updatecooperatestatus","title":"updateCooperateStatus","text":"
autoware::rtc_interface::updateCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid, const bool safe, const double start_distance, const double finish_distance, const rclcpp::Time & stamp)\n
"},{"location":"planning/autoware_rtc_interface/#description_2","title":"Description","text":"

Update the cooperate status corresponding to uuid. If no cooperate status corresponding to uuid is registered yet, add a new cooperate status.

"},{"location":"planning/autoware_rtc_interface/#input_2","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_2","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#removecooperatestatus","title":"removeCooperateStatus","text":"
autoware::rtc_interface::removeCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/autoware_rtc_interface/#description_3","title":"Description","text":"

Remove the cooperate status corresponding to uuid from the registered statuses.

"},{"location":"planning/autoware_rtc_interface/#input_3","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_3","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#clearcooperatestatus","title":"clearCooperateStatus","text":"
autoware::rtc_interface::clearCooperateStatus()\n
"},{"location":"planning/autoware_rtc_interface/#description_4","title":"Description","text":"

Remove all cooperate statuses.

"},{"location":"planning/autoware_rtc_interface/#input_4","title":"Input","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#output_4","title":"Output","text":"

Nothing

"},{"location":"planning/autoware_rtc_interface/#isactivated","title":"isActivated","text":"
autoware::rtc_interface::isActivated(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/autoware_rtc_interface/#description_5","title":"Description","text":"

Return the received command status corresponding to uuid.

"},{"location":"planning/autoware_rtc_interface/#input_5","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_5","title":"Output","text":"

If auto mode is enabled, the result is based on the safety status. Otherwise, return true if the received command is ACTIVATED, and false if not.
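
The decision described above can be summarized with a short sketch (the types and member names below are assumptions for illustration, not the actual implementation):

// Illustrative types (assumed, not the real message definitions).\nenum class Command { ACTIVATE, DEACTIVATE };\nstruct CooperateStatusExample\n{\n  bool auto_mode;\n  bool safe;\n  Command command;\n};\n\nbool isActivatedSketch(const CooperateStatusExample & status)\n{\n  if (status.auto_mode) {\n    return status.safe;  // auto mode: follow the module's own safety status\n  }\n  return status.command == Command::ACTIVATE;  // otherwise follow the received command\n}\n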

"},{"location":"planning/autoware_rtc_interface/#isregistered","title":"isRegistered","text":"
autoware::rtc_interface::isRegistered(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/autoware_rtc_interface/#description_6","title":"Description","text":"

Return true if uuid is registered.

"},{"location":"planning/autoware_rtc_interface/#input_6","title":"Input","text":""},{"location":"planning/autoware_rtc_interface/#output_6","title":"Output","text":"

If uuid is registered, return true. If not, return false.

"},{"location":"planning/autoware_rtc_interface/#debugging-tools","title":"Debugging Tools","text":"

There is a debugging tool called RTC Replayer for the RTC interface.

"},{"location":"planning/autoware_rtc_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_rtc_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_scenario_selector/","title":"autoware_scenario_selector","text":""},{"location":"planning/autoware_scenario_selector/#autoware_scenario_selector","title":"autoware_scenario_selector","text":""},{"location":"planning/autoware_scenario_selector/#scenario_selector_node","title":"scenario_selector_node","text":"

scenario_selector_node is a node that switches between the trajectories of each scenario.
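
A minimal sketch of the selection, assuming the layout of tier4_planning_msgs::msg::Scenario (the actual node additionally checks message timing, parking completion, and the ego state):

#include <autoware_planning_msgs/msg/trajectory.hpp>\n#include <tier4_planning_msgs/msg/scenario.hpp>\n\n// Forward the trajectory of the currently active scenario.\nautoware_planning_msgs::msg::Trajectory selectTrajectory(\n  const tier4_planning_msgs::msg::Scenario & scenario,\n  const autoware_planning_msgs::msg::Trajectory & lane_driving,\n  const autoware_planning_msgs::msg::Trajectory & parking)\n{\n  using tier4_planning_msgs::msg::Scenario;\n  return scenario.current_scenario == Scenario::PARKING ? parking : lane_driving;\n}\n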

"},{"location":"planning/autoware_scenario_selector/#input-topics","title":"Input topics","text":"Name Type Description ~input/lane_driving/trajectory autoware_planning_msgs::Trajectory trajectory of LaneDriving scenario ~input/parking/trajectory autoware_planning_msgs::Trajectory trajectory of Parking scenario ~input/lanelet_map autoware_map_msgs::msg::LaneletMapBin ~input/route autoware_planning_msgs::LaneletRoute route and goal pose ~input/odometry nav_msgs::Odometry for checking whether vehicle is stopped is_parking_completed bool (implemented as rosparam) whether all split trajectory of Parking are published"},{"location":"planning/autoware_scenario_selector/#output-topics","title":"Output topics","text":"Name Type Description ~output/scenario tier4_planning_msgs::Scenario current scenario and scenarios to be activated ~output/trajectory autoware_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/autoware_scenario_selector/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/autoware_scenario_selector/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in scenario_selector.launch or add args when executing roslaunch
  2. roslaunch autoware_scenario_selector scenario_selector.launch
"},{"location":"planning/autoware_scenario_selector/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float timer's update rate 10 \u22650.0 th_max_message_delay_sec float threshold time of input messages' maximum delay 1 \u22650.0 th_arrived_distance_m float threshold distance to check if vehicle has arrived at the trajectory's endpoint 1 \u22650.0 th_stopped_time_sec float threshold time to check if vehicle is stopped 1 \u22650.0 th_stopped_velocity_mps float threshold velocity to check if vehicle is stopped 0.01 \u22650.0 enable_mode_switching boolean enable switching between scenario modes when ego is stuck in parking area 1 N/A"},{"location":"planning/autoware_scenario_selector/#flowchart","title":"Flowchart","text":""},{"location":"planning/autoware_surround_obstacle_checker/","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/#surround-obstacle-checker","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/#purpose","title":"Purpose","text":"

This module subscribes to the required data (ego pose, obstacles, etc.) and publishes a zero velocity limit to keep the vehicle stopped if any of the stop conditions are satisfied.

"},{"location":"planning/autoware_surround_obstacle_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_surround_obstacle_checker/#algorithms","title":"Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/#check-data","title":"Check data","text":"

Check that surround_obstacle_checker has received the no-ground pointcloud, dynamic object, and current velocity data.

"},{"location":"planning/autoware_surround_obstacle_checker/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest object. This function computes the minimum distance between the polygon of the ego vehicle and all points in the pointcloud as well as the polygons of the dynamic objects.
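
A minimal sketch of this distance computation using boost::geometry (illustrative only; the container types are assumptions):

#include <algorithm>\n#include <limits>\n#include <vector>\n\n#include <boost/geometry.hpp>\n#include <boost/geometry/geometries/point_xy.hpp>\n#include <boost/geometry/geometries/polygon.hpp>\n\nnamespace bg = boost::geometry;\nusing Point2d = bg::model::d2::point_xy<double>;\nusing Polygon2d = bg::model::polygon<Point2d>;\n\n// Minimum distance from the ego footprint to any pointcloud point or object polygon.\ndouble minDistanceToObstacles(\n  const Polygon2d & ego_footprint, const std::vector<Point2d> & no_ground_points,\n  const std::vector<Polygon2d> & object_polygons)\n{\n  double min_dist = std::numeric_limits<double>::max();\n  for (const auto & p : no_ground_points) {\n    min_dist = std::min(min_dist, bg::distance(ego_footprint, p));\n  }\n  for (const auto & poly : object_polygons) {\n    min_dist = std::min(min_dist, bg::distance(ego_footprint, poly));\n  }\n  return min_dist;\n}\n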

"},{"location":"planning/autoware_surround_obstacle_checker/#stop-requirement","title":"Stop requirement","text":"

If all of the following conditions are satisfied, the module plans a stop.

"},{"location":"planning/autoware_surround_obstacle_checker/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages two states. As mentioned in the stop condition section, chattering is prevented by changing the threshold used to detect surrounding obstacles depending on the state.
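
The two states and the hysteresis can be sketched as follows (an illustration of the transition rule, not the actual implementation; parameter names follow the tables below):

// Transition with hysteresis: leaving the \"exist-surrounding-obstacle\" state\n// requires a larger clearance than entering it, which prevents chattering.\nenum class State { PASS, STOP };\n\nState updateState(\n  const State state, const double distance_to_obstacle, const double surround_check_distance,\n  const double surround_check_hysteresis_distance)\n{\n  if (state == State::PASS && distance_to_obstacle < surround_check_distance) {\n    return State::STOP;\n  }\n  if (\n    state == State::STOP &&\n    distance_to_obstacle > surround_check_distance + surround_check_hysteresis_distance) {\n    return State::PASS;\n  }\n  return state;  // inside the hysteresis band: keep the current state\n}\n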

"},{"location":"planning/autoware_surround_obstacle_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_surround_obstacle_checker/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/autoware_surround_obstacle_checker/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization ~/debug/footprint geometry_msgs::msg::PolygonStamped Ego vehicle base footprint for visualization ~/debug/footprint_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_distance offset for visualization ~/debug/footprint_recover_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_recover_distance offset for visualization"},{"location":"planning/autoware_surround_obstacle_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range pointcloud.enable_check boolean enable to check surrounding pointcloud false N/A pointcloud.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.enable_check boolean enable to check surrounding unknown objects true N/A unknown.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.enable_check boolean enable to check surrounding car true N/A car.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.enable_check boolean enable to check surrounding truck true N/A truck.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. 
[m] 0.5 \u22650.0 truck.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.enable_check boolean enable to check surrounding bus true N/A bus.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.enable_check boolean enable to check surrounding trailer true N/A trailer.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.enable_check boolean enable to check surrounding motorcycle true N/A motorcycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.enable_check boolean enable to check surrounding bicycle true N/A bicycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.enable_check boolean enable to check surrounding pedestrian true N/A pedestrian.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 surround_check_hysteresis_distance float If no object exists in this hysteresis distance added to the above distance, transit to \"non-surrounding-obstacle\" status [m] 0.3 \u22650.0 state_clear_time float Threshold to clear stop state [s] 2.0 \u22650.0 stop_state_ego_speed float Threshold to check ego vehicle stopped [m/s] 0.1 \u22650.0 publish_debug_footprints boolean Publish vehicle footprint & footprints with surround_check_distance and surround_check_recover_distance offsets. true N/A debug_footprint_label string select the label for debug footprint car ['pointcloud', 'unknown', 'car', 'truck', 'bus', 'trailer', 'motorcycle', 'bicycle', 'pedestrian'] Name Type Description Default value enable_check bool Indicates whether each object is considered in the obstacle check target. 
true for objects; false for point clouds surround_check_front_distance double If there are objects or point clouds within this distance in front, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_side_distance double If there are objects or point clouds within this side distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_back_distance double If there are objects or point clouds within this back distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_hysteresis_distance double If no object exists within surround_check_xxx_distance plus this additional distance, transition to the \"non-surrounding-obstacle\" status [m]. 0.3 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Time duration threshold to judge the ego vehicle as stopped [s] 0.1 publish_debug_footprints bool Publish vehicle footprint with/without offsets true"},{"location":"planning/autoware_surround_obstacle_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform stop planning, it is necessary to obtain obstacle pointcloud data. Hence, the module does not plan a stop if the obstacle is in a blind spot.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#surround-obstacle-checker","title":"Surround Obstacle Checker","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#purpose","title":"Purpose","text":"

surround_obstacle_checker is a module that, while the ego vehicle is stopped, plans a stop so that the vehicle does not start moving when obstacles exist around it.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#algorithms","title":"Algorithms","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#check-data","title":"Check data","text":"

Check whether the pointcloud, dynamic object, and ego velocity data have been received.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest obstacle. Here, the polygon of the ego vehicle is computed, and the distance to the nearest obstacle is obtained by calculating the distance to each point in the pointcloud and to the polygon of each dynamic object.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#stop-condition","title":"Stop condition","text":"

When all of the following conditions are satisfied, the ego vehicle plans a stop.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages its state. As described in the stop condition section, chattering is prevented by changing the obstacle detection threshold depending on the state.

"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true surround_check_distance double If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance double If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Threshold to check ego vehicle stopped [s] 0.1"},{"location":"planning/autoware_surround_obstacle_checker/surround_obstacle_checker-design.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since this function requires observation of obstacle pointclouds, it does not plan a stop if the obstacle is in a blind spot.

"},{"location":"planning/autoware_velocity_smoother/","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/#velocity-smoother","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/#purpose","title":"Purpose","text":"

autoware_velocity_smoother outputs a desired velocity profile on a reference trajectory. This module plans a velocity profile within the limits on velocity, acceleration, and jerk to achieve both velocity maximization and ride quality. We call this module autoware_velocity_smoother because limiting the acceleration and the jerk corresponds to smoothing the velocity profile.

"},{"location":"planning/autoware_velocity_smoother/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_velocity_smoother/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_velocity_smoother/#extract-trajectory","title":"Extract trajectory","text":"

From the point on the reference trajectory closest to the center of the vehicle's rear wheel axle, it extracts the reference path from extract_behind_dist behind to extract_ahead_dist ahead.

"},{"location":"planning/autoware_velocity_smoother/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

It applies the velocity limit given from outside autoware_velocity_smoother. Note that this external velocity limit is different from the velocity limits already set on the map and the reference trajectory. The external velocity limit is applied from the position where the vehicle can decelerate to it within the deceleration and jerk constraints set as parameters.
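
A simplified sketch of where the external limit can start (constant deceleration, jerk ignored; the helper is illustrative):

// Distance needed to decelerate from v0 to v_limit at deceleration a_dec (a_dec < 0).\n// The external velocity limit is inserted at least this far ahead of the ego vehicle.\ndouble distanceToReachVelocityLimit(const double v0, const double v_limit, const double a_dec)\n{\n  if (v0 <= v_limit) {\n    return 0.0;  // already at or below the limit\n  }\n  return (v_limit * v_limit - v0 * v0) / (2.0 * a_dec);\n}\n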

"},{"location":"planning/autoware_velocity_smoother/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

It applies a velocity limit near the stopping point. This function is used to approach close to an obstacle or to improve stopping accuracy.

"},{"location":"planning/autoware_velocity_smoother/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

It applies a velocity limit to decelerate at curves. It calculates the velocity limit from the curvature of the reference trajectory and the maximum lateral acceleration max_lateral_accel. The velocity limit is set so as not to fall below min_curve_velocity.

Note: a velocity limit that would require a jerk larger than nominal.jerk is not applied. In other words, even if a sharp curve is planned just in front of the ego vehicle, no deceleration is performed.
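
The limit follows from a_lat = v^2 * |kappa|, i.e. v_max = sqrt(max_lateral_accel / |kappa|). A minimal sketch:

#include <algorithm>\n#include <cmath>\n\n// Velocity limit from curvature, clamped from below by min_curve_velocity.\ndouble curveVelocityLimit(\n  const double curvature, const double max_lateral_accel, const double min_curve_velocity)\n{\n  constexpr double eps = 1e-6;  // avoid division by zero on straight segments\n  const double v = std::sqrt(max_lateral_accel / std::max(std::abs(curvature), eps));\n  return std::max(v, min_curve_velocity);\n}\n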

"},{"location":"planning/autoware_velocity_smoother/#apply-steering-rate-limit","title":"Apply steering rate limit","text":"

It calculates the desired steering angles of the trajectory points and applies the steering rate limit. If steering_angle_rate > max_steering_angle_rate, it decreases the velocity of the trajectory point to an acceptable velocity.
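
The acceptable velocity follows from steering_angle_rate = (delta steering) * v / (delta arc length). A minimal sketch, assuming the steering angles are already computed per point (e.g. delta = atan(wheelbase * curvature)) and the rate limit has been converted to rad/s:

#include <cmath>\n#include <limits>\n\n// Largest velocity such that the steering rate between two consecutive\n// trajectory points stays below max_steering_angle_rate [rad/s].\ndouble acceptableVelocity(\n  const double steer_prev, const double steer_curr, const double ds,\n  const double max_steering_angle_rate)\n{\n  const double dsteer = std::abs(steer_curr - steer_prev);\n  if (dsteer < 1e-9) {\n    return std::numeric_limits<double>::max();  // no steering change: no limit\n  }\n  // rate = dsteer * v / ds <= max  =>  v <= max * ds / dsteer\n  return max_steering_angle_rate * ds / dsteer;\n}\n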

"},{"location":"planning/autoware_velocity_smoother/#resample-trajectory","title":"Resample trajectory","text":"

It resamples the points on the reference trajectory at a designated time interval. Note that the length of the trajectory is kept between min_trajectory_length and max_trajectory_length, and the distance between two points is not shorter than min_trajectory_interval_distance. It samples densely up to the distance traveled during resample_time at the current velocity, and sparsely after that. By sampling according to the velocity, both low computational load and high accuracy are achieved, since the trajectory is sampled finely at low velocity and coarsely at high velocity.
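
A minimal sketch of the interval selection (parameter names follow the resampling parameter tables below; this is an illustration of the rule, not the actual implementation):

#include <algorithm>\n\n// Time-based resampling interval with a distance floor: dense within the\n// distance covered in resample_time at the current velocity, sparse beyond it.\ndouble sampleInterval(\n  const double arc_length_from_ego, const double current_velocity, const double resample_time,\n  const double dense_dt, const double dense_min_interval_distance, const double sparse_dt,\n  const double sparse_min_interval_distance)\n{\n  const bool dense = arc_length_from_ego <= current_velocity * resample_time;\n  const double dt = dense ? dense_dt : sparse_dt;\n  const double min_ds = dense ? dense_min_interval_distance : sparse_min_interval_distance;\n  return std::max(current_velocity * dt, min_ds);\n}\n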

"},{"location":"planning/autoware_velocity_smoother/#calculate-initial-state","title":"Calculate initial state","text":"

Calculate initial values for velocity planning. The initial values are calculated according to the situation as shown in the following table.

Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviate between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/autoware_velocity_smoother/#smooth-velocity","title":"Smooth velocity","text":"

It plans the velocity. The velocity planning algorithm is selected from JerkFiltered, L2, and Linf in the launch file. All of these algorithms use OSQP [1] as the optimization solver.

"},{"location":"planning/autoware_velocity_smoother/#jerkfiltered","title":"JerkFiltered","text":"

It minimizes the sum of the negative of the squared velocity (to maximize velocity), the squared jerk, and the squared violations of the velocity limit, the acceleration limit, and the jerk limit.
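
Schematically, introducing slack variables \sigma for each limit violation and using the weights from the JerkFiltered table below, the objective has roughly the following form (a sketch of the structure, not the exact discretized formulation used in the solver):

$$ \min \sum_i \left( -v_i^2 + w_j \, j_i^2 + w_v \, \sigma_{v,i}^2 + w_a \, \sigma_{a,i}^2 + w_{jl} \, \sigma_{j,i}^2 \right) $$

Here w_j corresponds to jerk_weight, w_v to over_v_weight, w_a to over_a_weight, and w_{jl} to over_j_weight.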

"},{"location":"planning/autoware_velocity_smoother/#l2","title":"L2","text":"

It minimizes the sum of the negative of the squared velocity, the squared pseudo-jerk [2], and the squared violations of the velocity limit and the acceleration limit.

"},{"location":"planning/autoware_velocity_smoother/#linf","title":"Linf","text":"

It minimizes the sum of the negative of the squared velocity, the maximum absolute value of the pseudo-jerk [2], and the squared violations of the velocity limit and the acceleration limit.

"},{"location":"planning/autoware_velocity_smoother/#post-process","title":"Post process","text":"

It performs post-processing of the planned velocity.

After the optimization, a resampling called post resampling is performed before passing the optimized trajectory to the next node. Since the path interval required by the optimization may differ from the one required by the next module, post resampling fills this gap. Therefore, the post resampling parameters must be determined by checking the path specification of the following module. Note that if the computational load of the optimization algorithm is high and the path interval after the first resampling is sparser than the specification of the following module, post resampling resamples the trajectory densely. Conversely, if the computational load of the optimization algorithm is low and the path interval after the first resampling is denser than the specification of the following module, the path is resampled sparsely according to that specification.

"},{"location":"planning/autoware_velocity_smoother/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_velocity_smoother/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/autoware_velocity_smoother/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_steering_rate_limited autoware_planning_msgs/Trajectory Steering angle rate limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/autoware_velocity_smoother/#parameters","title":"Parameters","text":""},{"location":"planning/autoware_velocity_smoother/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/autoware_velocity_smoother/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/autoware_velocity_smoother/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value enable_lateral_acc_limit bool To toggle the lateral acceleration filter on and off. You can switch it dynamically at runtime. true max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0 min_decel_for_lateral_acc_lim_filter double Deceleration limit to avoid sudden braking by the lateral acceleration filter [m/ss]. 
Strong limitation degrades the deceleration response to the appearance of sharp curves due to obstacle avoidance, etc. -2.5"},{"location":"planning/autoware_velocity_smoother/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/autoware_velocity_smoother/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/autoware_velocity_smoother/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/autoware_velocity_smoother/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/autoware_velocity_smoother/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/autoware_velocity_smoother/#limit-steering-angle-rate-parameters","title":"Limit steering angle rate parameters","text":"Name Type Description Default value enable_steering_rate_limit bool To toggle the steer rate filter on and off. You can switch it dynamically at runtime. 
true max_steering_angle_rate double Maximum steering angle rate [degree/s] 40.0 resample_ds double Distance between trajectory points [m] 0.1 curvature_threshold double If curvature > curvature_threshold, steeringRateLimit is triggered [1/m] 0.02 curvature_calculation_distance double Distance between points used for the curvature calculation [m] 1.0"},{"location":"planning/autoware_velocity_smoother/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/autoware_velocity_smoother/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/autoware_velocity_smoother/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_velocity_smoother/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/autoware_velocity_smoother/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/autoware_velocity_smoother/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185

"},{"location":"planning/autoware_velocity_smoother/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#velocity-smoother","title":"Velocity Smoother","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#purpose","title":"Purpose","text":"

autoware_velocity_smoother is a module that plans and outputs the desired vehicle speed at each point on a target trajectory. To achieve both maximization of the velocity and ride comfort, the module plans the velocity within the pre-specified limits on velocity, acceleration, and jerk. Since limiting the acceleration and the jerk corresponds to smoothing the change of the vehicle speed, this module is called autoware_velocity_smoother.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#extract-trajectory","title":"Extract trajectory","text":"

From the point on the reference path closest to the center of the ego vehicle's rear wheel axle, the reference path is extracted from the point extract_behind_dist behind to the point extract_ahead_dist ahead.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

The velocity limit specified from outside the module is applied. The external velocity limit handled here is passed via the /planning/scenario_planning/max_velocity topic and is separate from the velocity limits already set on the reference path, such as those defined on the map. The externally specified velocity limit is applied from the position where the vehicle can decelerate to it within the deceleration and jerk limits specified by the parameters.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

The velocity when approaching a stopping point is set. This is used, for example, to approach close to an obstacle or to improve stopping accuracy.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

According to the curvature of the path, a velocity that does not exceed the specified maximum lateral acceleration max_lateral_accel is set as the velocity limit. However, the velocity limit is set so as not to fall below min_curve_velocity.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#resample-trajectory","title":"Resample trajectory","text":"

The path points are resampled at the specified time interval. The resampling is performed so that the total length of the path is between min_trajectory_length and max_trajectory_length, and the interval between points does not become smaller than min_trajectory_interval_distance. Points are sampled densely up to the distance traveled during resample_time at the current velocity, and sparsely after that. Sampling in this way is dense at low speed and sparse at high speed, achieving both stopping accuracy and reduced computational load.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#calculate-initial-state","title":"Calculate initial state","text":"

The initial values for velocity planning are calculated. The initial values are computed according to the situation as shown in the table below.

Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviation between the current and planned velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/autoware_velocity_smoother/README.ja/#smooth-velocity","title":"Smooth velocity","text":"

The velocity is planned. The velocity planning algorithm is selected in the configuration from three types: JerkFiltered, L2, and Linf. OSQP [1] is used as the optimization solver.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#jerkfiltered","title":"JerkFiltered","text":"

The sum of the squared velocity (expressed as a negative value since this is a minimization), the squared violation of the velocity limit, the squared violation of the acceleration limit, the squared violation of the jerk limit, and the squared jerk is minimized.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#l2","title":"L2","text":"

The sum of the squared velocity (expressed as a negative value since this is a minimization), the squared violation of the velocity limit, the squared violation of the acceleration limit, and the squared pseudo-jerk [2] is minimized.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#linf","title":"Linf","text":"

The sum of the squared velocity (expressed as a negative value since this is a minimization), the squared violation of the velocity limit, and the squared violation of the acceleration limit, plus the maximum absolute value of the pseudo-jerk [2], is minimized.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#post-process","title":"Post process","text":"

Post-processing of the planned trajectory is performed.

After the optimization finishes, a resampling called post resampling is performed before passing the path to the next node. The reason for resampling again here is that the path interval required before the optimization does not necessarily match the path interval to be passed to the subsequent module, and this resampling fills that gap. Therefore, in post resampling, the parameters must be determined by checking the path specification of the subsequent module. Note that if the computational load of the optimization algorithm is high and the path interval after the first resampling is sparser than the path specification of the subsequent module, post resampling resamples the path densely. Conversely, if the computational load of the optimization algorithm is low and the path interval after the first resampling is denser than the path specification of the subsequent module, post resampling resamples the path sparsely according to that specification.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/autoware_velocity_smoother/README.ja/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/autoware_velocity_smoother/README.ja/#parameters","title":"Parameters","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/autoware_velocity_smoother/README.ja/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/autoware_velocity_smoother/README.ja/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this 
velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/autoware_velocity_smoother/README.ja/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/autoware_velocity_smoother/README.ja/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_resample_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_resample_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_resample_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_resample_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 
1000.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/autoware_velocity_smoother/README.ja/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/autoware_velocity_smoother/README.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185.

"},{"location":"planning/autoware_velocity_smoother/README.ja/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/","title":"Avoidance by lane change design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#avoidance-by-lane-change-design","title":"Avoidance by lane change design","text":"

This is a sub-module that avoids obstacles by performing a lane change maneuver.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#purpose-role","title":"Purpose / Role","text":"

This module is designed as one of the obstacle avoidance features and generates a lane change path if the following conditions are satisfied.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Basically, this module is implemented by reusing the avoidance target filtering logic of the existing Static Object Avoidance Module and the path generation logic of the Normal Lane Change Module. On the other hand, the conditions under which the module is activated differ from those of a normal avoidance module.

The module checks that the following conditions are satisfied after the filtering process for the avoidance targets.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#number-of-the-avoidance-target-objects","title":"Number of the avoidance target objects","text":"

This module is launched when the number of avoidance target objects on the EGO DRIVING LANE is greater than execute_object_num. If there are no avoidance targets in the ego driving lane, or if their number does not exceed the parameter, the obstacle is avoided by the normal avoidance behavior (if the normal avoidance module is registered).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#lane-change-end-point-condition","title":"Lane change end point condition","text":"

Unlike the normal avoidance module, which specifies the shift line end point, this module does not specify its end point when generating a lane change path. On the other hand, setting execute_only_when_lane_change_finish_before_object to true will activate this module only if the lane change can be completed before the avoidance target object.

Although setting the parameter to false would increase the number of scenes where avoidance by lane change is performed, sufficient lateral margin may not be ensured in some cases because the vehicle passes beside obstacles during the lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_avoidance_by_lane_change_module/#parameters","title":"Parameters","text":"Name Unit Type Description Default value execute_object_num [-] int Number of avoidance target objects on ego driving lane is greater than this value, this module will be launched. 1 execute_object_longitudinal_margin [m] double [maybe unused] Only when distance between the ego and avoidance target object is longer than this value, this module will be launched. 0.0 execute_only_when_lane_change_finish_before_object [-] bool If this flag set true, this module will be launched only when the lane change end point is NOT behind the avoidance target object. true"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/","title":"Avoidance module for dynamic objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#avoidance-module-for-dynamic-objects","title":"Avoidance module for dynamic objects","text":"

This module is under development.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#purpose-role","title":"Purpose / Role","text":"

This module provides avoidance functions for vehicles, pedestrians, and obstacles in the vicinity of the ego's path in combination with the autoware_path_optimizer. Each module performs the following roles: the Dynamic Avoidance module cuts off the drivable area according to the position and velocity of the target to be avoided, and the Obstacle Avoidance module modifies the path to be followed so that it fits within the received drivable area.

Avoidance functions for static obstacles are also provided by the Static Obstacle Avoidance module, but these modules have different roles. The Static Obstacle Avoidance module performs avoidance through the outside of the ego's lanes but cannot avoid moving objects. On the other hand, this module can avoid moving objects; for this reason, the word \"dynamic\" is used in the module's name. The table below lists the avoidance modules that can handle each situation.

avoid within the own lane avoid through the outside of own lanes avoid not-moving objects Avoidance Module Dynamic Avoidance Module + Obstacle Avoidance Module Avoidance Module avoid moving objects Dynamic Avoidance Module + Obstacle Avoidance Module No Module (Under Development)"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#policy-of-algorithms","title":"Policy of algorithms","text":"

Here, we describe the policy of the inner algorithms. The inner algorithms can be separated into two parts: the first decides whether to avoid an obstacle, and the second cuts off the drivable area against the corresponding obstacle.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#select-obstacles-to-avoid","title":"Select obstacles to avoid","text":"

To decide whether to avoid an object, both the predicted path and the state (pose and twist) of each object are used. The type of objects the user wants this module to avoid is also required. Using this information, the module decides to avoid objects that obstruct the ego's passage and can be avoided.

The definition of obstruct the ego's passage is implemented as an object that is predicted to collide with the ego within a few seconds. The other condition, can be avoided, denotes whether the object can be avoided without risk to the passengers or to other vehicles. For this purpose, the module judges whether the obstacle can be avoided while satisfying the constraints of lateral acceleration and lateral jerk. For example, the module decides not to avoid an object that is too close or too fast in the lateral direction, as in the simplified sketch below.
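
As a simplified illustration of this feasibility judgment (the helper below is hypothetical and ignores the jerk limit, which would tighten the bound further):

#include <cmath>\n\n// A simplified sketch: can the ego achieve the required lateral shift before\n// the predicted collision time under a lateral acceleration limit? A bang-bang\n// profile with constant maximum acceleration covers at most 0.5 * a * t^2;\n// a lateral jerk limit would further reduce the reachable shift.\nbool can_be_avoided(double required_lateral_shift, double time_to_collision, double max_lat_accel)\n{\n  if (time_to_collision <= 0.0) return false;  // no time left to avoid\n  const double reachable_shift = 0.5 * max_lat_accel * time_to_collision * time_to_collision;\n  return std::fabs(required_lateral_shift) <= reachable_shift;\n}\n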

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#cuts-off-the-drivable-area-against-the-selected-vehicles","title":"Cuts off the drivable area against the selected vehicles","text":"

For the selected obstacles to be avoided, the module cuts off the drivable area. As inputs to decide the shapes of the cut-off polygons, the poses of the obstacles are mainly used, assuming they move parallel to the ego's path, instead of their predicted paths. This design arises from the fact that the predicted paths of objects are not accurate enough to be used for path modification (at least currently). Furthermore, the output drivable area shape is designed as a rectangular cutout along the ego's path to keep the computation scalar rather than planar.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#determination-of-lateral-dimension","title":"Determination of lateral dimension","text":"

The lateral dimensions of the polygon are calculated as follows. The polygon's width to extract from the drivable area is the obstacle width plus drivable_area_generation.lat_offset_from_obstacle. The lateral shift length can be limited by drivable_area_generation.max_lat_offset_to_avoid, as in the sketch below.
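
In sketch form, using the parameter names from the table below (the clamping helper is a hypothetical simplification):

#include <algorithm>\n\n// A minimal sketch of the lateral dimension: the cut-off polygon width is the\n// obstacle width plus lat_offset_from_obstacle, and the resulting lateral\n// shift of the ego path is capped by max_lat_offset_to_avoid.\ndouble polygon_width(double obstacle_width, double lat_offset_from_obstacle)\n{\n  return obstacle_width + lat_offset_from_obstacle;\n}\n\ndouble limited_lateral_shift(double desired_shift, double max_lat_offset_to_avoid)\n{\n  return std::min(desired_shift, max_lat_offset_to_avoid);\n}\n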

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#determination-of-longitudinal-dimension","title":"Determination of longitudinal dimension","text":"

Then, the extraction of same-directional and opposite-directional obstacles from the drivable area works as follows, considering TTC (time to collision).

Regarding same-directional obstacles, obstacles whose TTC is negative are ignored (e.g., the obstacle is in front of the ego and its velocity is larger than the ego's velocity).

Same directional obstacles (Parameter names may differ from implementation)

Opposite directional obstacles (Parameter names may differ from implementation)
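
A minimal sketch of the TTC computation described above (the sign conventions are assumptions; the implementation may differ):

#include <limits>\n\n// A minimal sketch of time to collision along the ego path. For a\n// same-directional obstacle the closing speed is ego_vel - obj_vel; for an\n// oncoming obstacle it is ego_vel + obj_vel. A negative TTC, e.g. a faster\n// same-directional obstacle ahead of the ego, means the gap is opening.\ndouble time_to_collision(double lon_distance, double ego_vel, double obj_vel, bool same_direction)\n{\n  const double closing_speed = same_direction ? ego_vel - obj_vel : ego_vel + obj_vel;\n  if (closing_speed == 0.0) return std::numeric_limits<double>::infinity();\n  return lon_distance / closing_speed;\n}\n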

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#cuts-off-the-drivable-area-against-the-selected-pedestrians","title":"Cuts off the drivable area against the selected pedestrians","text":"

Then, we describe the logic to generate the drivable areas against pedestrians to be avoided. Objects of this type are considered to have priority over the ego vehicle, while a minimum level of safety for the ego vehicle is still ensured. In other words, the module assigns a drivable area to an obstacle with a specific margin based on the predicted paths with specific confidences for a specific time interval, as shown in the following figure.

Restriction areas are generated from each pedestrian's predicted paths

Apart from the polygons for objects, the module also generates another polygon to ensure the ego's safety, i.e., to avoid abrupt steering or significant changes from the path. This is similar to the avoidance against vehicles and takes precedence over keeping a safe distance from the object to be avoided. As a result, as shown in the figure below, the object polygons, reduced by the polygon secured for the ego, are subtracted from the ego's drivable area.

Ego's minimum requirements are prioritized against object margin"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#example","title":"Example","text":"Avoidance for the bus departure Avoidance on curve Avoidance against the opposite direction vehicle Avoidance for multiple vehicle"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#future-works","title":"Future works","text":"

Currently, the path shifting length is limited to 0.5 meters or less by drivable_area_generation.max_lat_offset_to_avoid. This is caused by the lack of functionality to cooperate with other modules and by the structure of the planning component. Due to this issue, the module can only handle situations where a small avoidance width is sufficient, which is its most significant limitation. In addition, the ability to extend the drivable area as needed is also required.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_dynamic_obstacle_avoidance_module/#parameters","title":"Parameters","text":"

Under development

Name Unit Type Description Default value target_object.car [-] bool The flag whether to avoid cars or not true target_object.truck [-] bool The flag whether to avoid trucks or not true ... [-] bool ... ... target_object.min_obstacle_vel [m/s] double Minimum obstacle velocity to avoid 1.0 drivable_area_generation.lat_offset_from_obstacle [m] double Lateral offset to avoid from obstacles 0.8 drivable_area_generation.max_lat_offset_to_avoid [m] double Maximum lateral offset to avoid 0.5 drivable_area_generation.overtaking_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.overtaking_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 4.0 drivable_area_generation.overtaking_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 5.0 drivable_area_generation.overtaking_object.duration_to_hold_avoidance [s] double Duration to hold avoidance after passing by obstacles 3.0 drivable_area_generation.oncoming_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.oncoming_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 9.0 drivable_area_generation.oncoming_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#goal-planner-design","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#purpose-role","title":"Purpose / Role","text":"

Plan path around the goal.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#design","title":"Design","text":"

If goal modification is not allowed, the vehicle parks at the designated fixed goal (fixed_goal_planner in the figure below). When allowed, it parks in accordance with the specified policy (e.g., pull over on the left/right side of the lane) (rough_goal_planner in the figure below). Currently, the rough goal planner only supports the pull_over feature, but it would be desirable to accommodate various parking policies in the future.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#start-condition","title":"start condition","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#fixed_goal_planner","title":"fixed_goal_planner","text":"

This is a very simple function that plans a smooth path to a specified goal. This function does not require approval and always runs with the other modules. NOTE: this planner does not perform several of the features described below, such as \"goal search\", \"collision check\", \"safety check\", etc.

Executed when both conditions are met.

If the target path contains a goal, modify the points of the path so that the path and the goal are connected smoothly. This process changes the shape of the path within the distance refine_goal_search_radius_range from the goal. Note that this logic depends on the interpolation algorithm that will be executed in a later module (at the moment it uses spline interpolation), so it needs to be updated in the future.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#rough_goal_planner","title":"rough_goal_planner","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#pull-over-on-road-lane","title":"pull over on road lane","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#pull-over-on-shoulder-lane","title":"pull over on shoulder lane","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#finish-condition","title":"finish condition","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#general-parameters-for-goal_planner","title":"General parameters for goal_planner","text":"Name Unit Type Description Default value th_arrived_distance [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time [s] double time threshold for arrival of path termination 2.0 center_line_path_interval [m] double reference center line path point interval 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#goal-search","title":"Goal Search","text":"

To realize pull over even when an obstacle exists near the original goal, a collision free area is searched within a certain range around the original goal. The goal found will be published as /planning/scenario_planning/modified_goal.

goal search video

  1. The original goal is set, and the refined goal pose is obtained by moving in the direction normal to the lane center line and keeping margin_from_boundary from the edge of the lane.

  2. Using refined_goal as the base goal, search for candidate goals in the range of -forward_goal_search_length to backward_goal_search_length in the longitudinal direction and longitudinal_margin to longitudinal_margin+max_lateral_offset in the lateral direction based on refined_goal.

  3. Each candidate goal is prioritized, and a path is generated by each planner for each goal. The priority of a candidate goal is determined by its distance from the base goal, and the ego vehicle tries to park at the highest-priority goal possible. The distance is determined by the selected policy. In the case of minimum_longitudinal_distance, candidates are sorted with smaller longitudinal distances taking precedence over smaller lateral distances. In the case of minimum_weighted_distance, candidates are sorted by the sum of the weighted lateral distance and the longitudinal distance, i.e., the distance is calculated as longitudinal_distance + lateral_weight*lateral_distance (a sketch of both policies follows below). The following figure is an example of minimum_weighted_distance. The white number indicates the goal candidate priority; the smaller the number, the higher the priority. The goal numbered 0 indicates the base goal.

  4. If the ego footprint at a goal candidate is within object_recognition_collision_check_margin of an object's footprint, the candidate is determined to be unsafe and is not selected. If use_occupancy_grid_for_goal_search is enabled, collision detection on the occupancy grid is also performed with occupancy_grid_collision_check_margin.

Red goal candidates in the image indicate unsafe ones.

It is possible to keep a gap of longitudinal_margin in the longitudinal direction between the goal candidate and obstacles, separately from the collision margin. This is intended to provide natural spacing for parking and efficient departure.

Also, if prioritize_goals_before_objects is enabled, the number of objects that need to be avoided to arrive at each goal is counted, and the goals with the lowest count are given priority.

The images represent a count of objects to be avoided at each range, with priority given to those with the lowest number, regardless of the aforementioned distances.

The gray numbers represent objects to avoid, and you can see that the goal in front has a higher priority in this case.
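
To make the two goal_priority sorting policies concrete, here is a minimal sketch (the struct and function names are hypothetical; the real implementation also applies the object-count priority described above):

#include <algorithm>\n#include <vector>\n\nstruct GoalCandidate\n{\n  double lon_distance;  // longitudinal distance from the base goal [m]\n  double lat_distance;  // lateral distance from the base goal [m]\n};\n\n// minimum_weighted_distance: sort by longitudinal_distance + lateral_weight * lateral_distance.\nvoid sort_by_weighted_distance(std::vector<GoalCandidate> & goals, double lateral_weight)\n{\n  std::sort(goals.begin(), goals.end(), [&](const GoalCandidate & a, const GoalCandidate & b) {\n    return a.lon_distance + lateral_weight * a.lat_distance <\n           b.lon_distance + lateral_weight * b.lat_distance;\n  });\n}\n\n// minimum_longitudinal_distance: smaller longitudinal distance wins; ties are\n// broken by the smaller lateral distance.\nvoid sort_by_longitudinal_distance(std::vector<GoalCandidate> & goals)\n{\n  std::sort(goals.begin(), goals.end(), [](const GoalCandidate & a, const GoalCandidate & b) {\n    if (a.lon_distance != b.lon_distance) return a.lon_distance < b.lon_distance;\n    return a.lat_distance < b.lat_distance;\n  });\n}\n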

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-goal-search","title":"Parameters for goal search","text":"Name Unit Type Description Default value goal_priority [-] string In case minimum_longitudinal_distance, sort with smaller longitudinal distances taking precedence over smaller lateral distances. In case minimum_weighted_distance, sort with the sum of weighted lateral distance and longitudinal distance minimum_weighted_distance lateral_weight [-] double Weight for lateral distance used when minimum_weighted_distance 40.0 prioritize_goals_before_objects [-] bool If there are objects that may need to be avoided, prioritize the goal in front of them true forward_goal_search_length [m] double length of forward range to be explored from the original goal 20.0 backward_goal_search_length [m] double length of backward range to be explored from the original goal 20.0 goal_search_interval [m] double distance interval for goal search 2.0 longitudinal_margin [m] double margin between ego-vehicle at the goal position and obstacles 3.0 max_lateral_offset [m] double maximum offset of goal search in the lateral direction 0.5 lateral_offset_interval [m] double distance interval of goal search in the lateral direction 0.25 ignore_distance_from_lane_start [m] double This parameter ensures that the distance between the start of the shoulder lane and the goal is not less than the specified value. It's used to prevent setting goals too close to the beginning of the shoulder lane, which might lead to unsafe or impractical pull-over maneuvers. Increasing this value will force the system to ignore potential goal positions near the start of the shoulder lane, potentially leading to safer and more comfortable pull-over locations. 0.0 margin_from_boundary [m] double distance margin from edge of the shoulder lane 0.5"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#pull-over","title":"Pull Over","text":"

There are three path generation methods. The path is generated with a certain margin (default: 0.75 m) from the boundary of the shoulder lane.

The process is time-consuming because multiple planners are used to generate paths for each candidate goal. Therefore, in this module, path generation is performed in a thread different from the main thread. Path generation is performed whenever the shape of the output path of the previous module changes. If a new module launches, it is expected to return to the stage before the goal planner, in which case the goal planner re-generates the path. The goal planner is expected to run at the end of multiple modules, which is achieved by keep_last in the planner manager.

Threads in the goal planner are shown below.

The main thread will be the one called from the planner manager flow.

Name Unit Type Description Default value pull_over_minimum_request_length [m] double when the ego-vehicle approaches the goal by this distance or a safe distance to stop, pull over is activated. 100.0 pull_over_velocity [m/s] double decelerate to this speed by the goal search area 3.0 pull_over_minimum_velocity [m/s] double speed of pull_over after stopping once. this prevents excessive acceleration. 1.38 decide_path_distance [m] double decide path if it approaches this distance relative to the parking position. after that, no path planning and goal search are performed 10.0 maximum_deceleration [m/s2] double maximum deceleration. it prevents sudden deceleration when a parking path cannot be found suddenly 1.0 path_priority [-] string In case efficient_path use a goal that can generate an efficient path which is set in efficient_path_order. In case close_goal use the closest goal to the original one. efficient_path efficient_path_order [-] string efficient order of pull over planner along lanes excluding freespace pull over [\"SHIFT\", \"ARC_FORWARD\", \"ARC_BACKWARD\"] lane_departure_check_expansion_margin [m] double margin to expand the ego vehicle footprint when doing lane departure checks 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#shift-parking","title":"shift parking","text":"

The pull over distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for between the predetermined minimum and maximum values, and the value that satisfies the conditions described above is output.

  1. Apply a uniform offset to the centerline of the shoulder lane to ensure the margin
  2. In the section between the merge start and end, the path is shifted by the same method used to generate avoidance paths (four segmental constant-jerk polynomials; see the sketch below)
  3. Combine this path with the centerline of the road lane

shift_parking video
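
For step 2, a four-segment constant-jerk profile (jerk +j, -j, -j, +j over four equal quarters of the total time T) has a closed form: the lateral shift is L = j * T^3 / 32, so T = cbrt(32 * L / j). A minimal sketch, assuming no lateral acceleration limit is reached:

#include <cmath>\n\n// A minimal sketch, assuming a pure four-segment constant-jerk profile with no\n// lateral acceleration limit: lateral shift L = j * T^3 / 32, so T = cbrt(32 L / j).\ndouble shift_time_from_jerk(double lateral_shift, double lateral_jerk)\n{\n  return std::cbrt(32.0 * lateral_shift / lateral_jerk);\n}\n\n// Longitudinal length of the shift section, assuming constant speed during the shift.\ndouble shift_distance(double velocity, double lateral_shift, double lateral_jerk)\n{\n  return velocity * shift_time_from_jerk(lateral_shift, lateral_jerk);\n}\n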

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-shift-parking","title":"Parameters for shift parking","text":"Name Unit Type Description Default value enable_shift_parking [-] bool flag whether to enable shift parking true shift_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 deceleration_interval [m] double distance of deceleration section 15.0 after_shift_straight_distance [m] double straight line distance after pull over end point 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#geometric-parallel-parking","title":"geometric parallel parking","text":"

Generate two arc paths with discontinuous curvature. The vehicle stops twice in the middle of the path to steer on the spot. There are two path generation methods: forward and backward. See also [1] for details of the algorithm. There is also a simple python implementation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-geometric-parallel-parking","title":"Parameters geometric parallel parking","text":"Name Unit Type Description Default value arc_path_interval [m] double interval between arc path points 1.0 pull_over_max_steer_rad [rad] double maximum steer angle for path generation. it may not be possible to control steer up to max_steer_angle in vehicle_info when stopped 0.35"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#arc-forward-parking","title":"arc forward parking","text":"

Generate two forward arc paths.

arc_forward_parking video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-arc-forward-parking","title":"Parameters arc forward parking","text":"Name Unit Type Description Default value enable_arc_forward_parking [-] bool flag whether to enable arc forward parking true after_forward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 forward_parking_velocity [m/s] double velocity when forward parking 1.38 forward_parking_lane_departure_margin [m/s] double lane departure margin for front left corner of ego-vehicle when forward parking 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#arc-backward-parking","title":"arc backward parking","text":"

Generate two backward arc paths.


arc_backward_parking video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-arc-backward-parking","title":"Parameters arc backward parking","text":"Name Unit Type Description Default value enable_arc_backward_parking [-] bool flag whether to enable arc backward parking true after_backward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 backward_parking_velocity [m/s] double velocity when backward parking -1.38 backward_parking_lane_departure_margin [m/s] double lane departure margin for front right corner of ego-vehicle when backward 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#freespace-parking","title":"freespace parking","text":"

If the vehicle gets stuck with lane_parking, freespace_parking is run. To use this feature, you need to add a parking_lot to the map, set activate_by_scenario of costmap_generator to false, and set enable_freespace_parking to true.

Simultaneous execution with avoidance_module in the flowchart is under development.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-freespace-parking","title":"Parameters freespace parking","text":"Name Unit Type Description Default value enable_freespace_parking [-] bool This flag enables freespace parking, which runs when the vehicle is stuck due to e.g. obstacles in the parking area. true

See freespace_planner for other parameters.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#collision-check-for-path-generation","title":"collision check for path generation","text":"

To select a safe one from the path candidates, a collision check with obstacles is performed.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#occupancy-grid-based-collision-check","title":"occupancy grid based collision check","text":"

Footprints are generated from the ego-vehicle path points, and obstacle collision is determined from the occupancy grid value of the corresponding cells, as in the sketch below.
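
In sketch form (a hypothetical flat grid layout; the actual check also uses theta_size orientations and the margins below):

#include <cstdint>\n#include <utility>\n#include <vector>\n\n// A minimal sketch: a footprint collides if any cell it covers has an\n// occupancy value at or above obstacle_threshold. footprint_cells holds\n// (x, y) grid indices covered by the ego footprint along the path.\nbool footprint_collides(\n  const std::vector<int8_t> & grid, int width,\n  const std::vector<std::pair<int, int>> & footprint_cells, int obstacle_threshold)\n{\n  for (const auto & cell : footprint_cells) {\n    if (grid[cell.second * width + cell.first] >= obstacle_threshold) return true;\n  }\n  return false;\n}\n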

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-occupancy-grid-based-collision-check","title":"Parameters for occupancy grid based collision check","text":"Name Unit Type Description Default value use_occupancy_grid_for_goal_search [-] bool flag whether to use occupancy grid for goal search collision check true use_occupancy_grid_for_goal_longitudinal_margin [-] bool flag whether to use occupancy grid for keeping longitudinal margin false use_occupancy_grid_for_path_collision_check [-] bool flag whether to use occupancy grid for collision check false occupancy_grid_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.0 theta_size [-] int size of theta angle to be considered. angular resolution for collision check will be 2\\(\\pi\\) / theta_size [rad]. 360 obstacle_threshold [-] int threshold of cell values to be considered as obstacles 60"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#object-recognition-based-collision-check","title":"object recognition based collision check","text":"

A collision decision is made for each of the path candidates, and a collision-free path is selected. There are three main margins at this point.

Then there is the concept of soft and hard margins. Although not currently parameterized, if a collision-free path can be generated with a margin several times larger than object_recognition_collision_check_margin, it is given a higher priority.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-object-recognition-based-collision-check","title":"Parameters for object recognition based collision check","text":"Name Unit Type Description Default value use_object_recognition [-] bool flag whether to use object recognition for collision check true object_recognition_collision_check_soft_margins [m] vector[double] soft margins for collision check when path generation. It is not strictly the distance between footprints, but the maximum distance when ego and objects are oriented. [5.0, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0] object_recognition_collision_check_hard_margins [m] vector[double] hard margins for collision check when path generation [0.6] object_recognition_collision_check_max_extra_stopping_margin [m] double maximum value when adding longitudinal distance margin for collision check considering stopping distance 1.0 detection_bound_offset [m] double expand pull over lane with this offset to make detection area for collision check of path generation 15.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#safety-check","title":"safety check","text":"

Perform safety checks on moving objects. If an object is determined to be dangerous, no path decision is made and no approval is given.

This module has two methods of safety check, RSS and integral_predicted_polygon.

RSS method is a method commonly used by other behavior path planner modules, see RSS based safety check utils explanation.

integral_predicted_polygon is a more safety-oriented method. This method is implemented because speeds during pull over are lower than during driving, and fewer objects travel along the edge of the lane. (It is sometimes too conservative and may reduce availability.) This method integrates the footprints of the ego and objects over a given time horizon and checks for collisions between them.

In addition, the safety check has a time hysteresis: if the path is judged \"safe\" for a certain period of time (keep_unsafe_time), it is finally treated as \"safe\".

                         ==== is_safe\n                         ---- current_is_safe\n       is_safe\n        ^\n        |\n        |                   time\n      1 +--+    +---+       +---=========   +--+\n        |  |    |   |       |           |   |  |\n        |  |    |   |       |           |   |  |\n        |  |    |   |       |           |   |  |\n        |  |    |   |       |           |   |  |\n      0 =========================-------==========--> t\n
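
A minimal sketch of this hysteresis (a hypothetical helper reproducing the behavior in the diagram above):

// A minimal sketch of the time hysteresis: the path is finally treated as\n// safe only after current_is_safe has held true for keep_unsafe_time seconds.\nstruct SafetyHysteresis\n{\n  double safe_start_time = -1.0;  // time when current_is_safe last became true\n\n  bool update(bool current_is_safe, double now, double keep_unsafe_time)\n  {\n    if (!current_is_safe) {\n      safe_start_time = -1.0;  // any unsafe judgment resets the timer\n      return false;\n    }\n    if (safe_start_time < 0.0) safe_start_time = now;\n    return now - safe_start_time >= keep_unsafe_time;\n  }\n};\n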
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-safety-check","title":"Parameters for safety check","text":"Name Unit Type Description Default value enable_safety_check [-] bool flag whether to use safety check true method [-] string method for safety check. RSS or integral_predicted_polygon integral_predicted_polygon keep_unsafe_time [s] double safety check Hysteresis time. if the path is judged \"safe\" for the time it is finally treated as \"safe\". 3.0 check_all_predicted_path - bool Flag to check all predicted paths true publish_debug_marker - bool Flag to publish debug markers false collision_check_yaw_diff_threshold [rad] double Maximum yaw difference between ego and object when executing rss-based collision checking 3.1416"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-rss-safety-check","title":"Parameters for RSS safety check","text":"Name Unit Type Description Default value rear_vehicle_reaction_time [s] double Reaction time for rear vehicles 2.0 rear_vehicle_safety_time_margin [s] double Safety time margin for rear vehicles 1.0 lateral_distance_max_threshold [m] double Maximum lateral distance threshold 2.0 longitudinal_distance_min_threshold [m] double Minimum longitudinal distance threshold 3.0 longitudinal_velocity_delta_time [s] double Delta time for longitudinal velocity 0.8"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#parameters-for-integral_predicted_polygon-safety-check","title":"Parameters for integral_predicted_polygon safety check","text":"Name Unit Type Description Default value forward_margin [m] double forward margin for ego footprint 1.0 backward_margin [m] double backward margin for ego footprint 1.0 lat_margin [m] double lateral margin for ego footprint 1.0 time_horizon [s] double Time width to integrate each footprint 10.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#path-deciding","title":"path deciding","text":"

When the ego vehicle gets closer than decide_path_distance to the start of the pull over, if the path is collision-free at that time and safe against the predicted paths of the objects, the module transitions to DECIDING. If it remains safe for a certain period of time, the module moves to DECIDED.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_goal_planner_module/#unimplemented-parts-limitations","title":"Unimplemented parts / limitations","text":"

Unimplemented parts / limitations for freespace parking

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/","title":"Lane Change design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-design","title":"Lane Change design","text":"

The Lane Change module is activated when a lane change is needed and can be safely executed.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-requirement","title":"Lane Change Requirement","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#generating-lane-change-candidate-path","title":"Generating Lane Change Candidate Path","text":"

The lane change candidate path is divided into two phases: preparation and lane-changing. The following figure illustrates each phase of the lane change candidate path.

The following chart illustrates the process of sampling candidate paths for lane change.

While the following chart demonstrates the process of generating a valid candidate path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#preparation-phase","title":"Preparation phase","text":"

The preparation trajectory is the first, straight portion of the candidate path, generated along the ego vehicle's current lane. The length of the preparation trajectory is computed as follows.

lane_change_prepare_distance = current_speed * lane_change_prepare_duration + 0.5 * deceleration * lane_change_prepare_duration^2\n

During the preparation phase, the turn signal will be activated when the remaining distance is equal to or less than lane_change_search_distance.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-changing-phase","title":"Lane-changing phase","text":"

The lane-changing phase consists of the shifted path that moves the ego from the current lane to the target lane. The total distance of the lane-changing phase is as follows. Note that during the lane-changing phase, the ego vehicle travels at a constant speed.

lane_change_prepare_velocity = std::max(current_speed + deceleration * lane_change_prepare_duration, minimum_lane_changing_velocity)\nlane_changing_distance = lane_change_prepare_velocity * lane_changing_duration\n

The backward_length_buffer_for_end_of_lane is added to allow some window for any possible delay, such as control or mechanical delays (e.g., brake lag).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#multiple-candidate-path-samples-longitudinal-acceleration","title":"Multiple candidate path samples (longitudinal acceleration)","text":"

Lane change velocity is affected by the ego vehicle's current velocity. A high velocity requires a longer preparation and lane-changing distance. However, we also need to plan lane-changing trajectories in case the ego vehicle slows down. Candidate paths that assume the ego vehicle slows down are computed by substituting predetermined deceleration values into the prepare_length, prepare_velocity, and lane_changing_length equations.

The predetermined longitudinal acceleration values are a set of values that starts from longitudinal_acceleration = maximum_longitudinal_acceleration and decreases by longitudinal_acceleration_resolution until it reaches longitudinal_acceleration = -maximum_longitudinal_deceleration. Both maximum_longitudinal_acceleration and maximum_longitudinal_deceleration are calculated from the vehicle common parameters (defined in the common.param file, e.g., normal.min_acc) and the lane change parameters as follows:

maximum_longitudinal_acceleration = min(common_param.max_acc, lane_change_param.max_acc)\nmaximum_longitudinal_deceleration = max(common_param.min_acc, lane_change_param.min_acc)\n

where common_param is the vehicle common parameter set, which defines the vehicle's common maximum longitudinal acceleration and deceleration, whereas lane_change_param holds the maximum longitudinal acceleration and deceleration for the lane change module. For example, if a user sets common_param.max_acc=1.0 and lane_change_param.max_acc=0.0, maximum_longitudinal_acceleration becomes 0.0, and the vehicle does not accelerate in the lane-changing phase.

The longitudinal_acceleration_resolution is determined by the following:

longitudinal_acceleration_resolution = (maximum_longitudinal_acceleration - minimum_longitudinal_acceleration) / longitudinal_acceleration_sampling_num\n

Note that when the current_velocity is lower than minimum_lane_changing_velocity, the vehicle needs to accelerate to minimum_lane_changing_velocity. In this case, the longitudinal acceleration becomes a positive value (the vehicle does not decelerate).

The chart illustrates the conditions under which longitudinal acceleration values are sampled.

The following chart describes the process by which longitudinal accelerations are sampled.

The following figure illustrates when longitudinal_acceleration_sampling_num = 4. Assuming that maximum_deceleration = 1.0, then a0 == 0.0 (no deceleration), a1 == 0.25, a2 == 0.5, a3 == 0.75, and a4 == 1.0 (maximum_deceleration). a0 is the expected lane change trajectory should the ego vehicle not decelerate, and a1 is the expected lane change trajectory should the ego vehicle decelerate at 0.25 m/s^2.

Which path is ultimately chosen depends on the validity and collision checks. A minimal sketch of the acceleration sampling follows.
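
A minimal sketch of this sampling, under the naming assumptions noted in the comments:

#include <vector>\n\n// A minimal sketch of the sampling, using the names from the equations above\n// (minimum_longitudinal_acceleration corresponds to the maximum deceleration):\n// sampling_num + 1 values from the maximum acceleration down to the minimum,\n// i.e. a0 ... aN in the figure above.\nstd::vector<double> sample_longitudinal_accelerations(\n  double maximum_longitudinal_acceleration, double minimum_longitudinal_acceleration,\n  int longitudinal_acceleration_sampling_num)\n{\n  const double resolution =\n    (maximum_longitudinal_acceleration - minimum_longitudinal_acceleration) /\n    longitudinal_acceleration_sampling_num;\n  std::vector<double> samples;\n  for (int i = 0; i <= longitudinal_acceleration_sampling_num; ++i) {\n    samples.push_back(maximum_longitudinal_acceleration - i * resolution);\n  }\n  return samples;\n}\n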

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#multiple-candidate-path-samples-lateral-acceleration","title":"Multiple candidate path samples (lateral acceleration)","text":"

In addition to sampling longitudinal acceleration, we also sample lane change paths by adjusting the value of lateral acceleration. Since lateral acceleration influences the duration of a lane change, a lower lateral acceleration value results in a longer lane change path, while a higher lateral acceleration value leads to a shorter lane change path. This allows the lane change module to generate a shorter lane change path by increasing the lateral acceleration when there is limited space for the lane change.

The maximum and minimum lateral accelerations are defined in the lane change parameter file as a map. The range of lateral acceleration is determined for each velocity by linearly interpolating the values in the map. Let's assume we have the following map

Ego Velocity Minimum lateral acceleration Maximum lateral acceleration 0.0 0.2 0.3 2.0 0.2 0.4 4.0 0.3 0.4 6.0 0.3 0.5

In this case, when the current velocity of the ego vehicle is 3.0, the minimum and maximum lateral accelerations are 0.25 and 0.4 respectively. These values are obtained by linearly interpolating the second and third rows of the map, which provide the minimum and maximum lateral acceleration values.

Within this range, we sample the lateral acceleration for the ego vehicle. Similar to the method used for sampling longitudinal acceleration, the resolution of lateral acceleration (lateral_acceleration_resolution) is determined by the following:

lateral_acceleration_resolution = (maximum_lateral_acceleration - minimum_lateral_acceleration) / lateral_acceleration_sampling_num\n
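
A minimal sketch of the map lookup and interpolation (a hypothetical helper; the example call reproduces the numbers from the text above):

#include <cstddef>\n#include <utility>\n#include <vector>\n\n// A minimal sketch: linearly interpolate the [min, max] lateral acceleration\n// range for the current ego velocity from the parameter map.\nstd::pair<double, double> lat_acc_range(\n  const std::vector<double> & vel, const std::vector<double> & min_acc,\n  const std::vector<double> & max_acc, double ego_vel)\n{\n  if (ego_vel <= vel.front()) return {min_acc.front(), max_acc.front()};\n  if (ego_vel >= vel.back()) return {min_acc.back(), max_acc.back()};\n  std::size_t i = 0;\n  while (vel[i + 1] < ego_vel) ++i;  // bracketing segment\n  const double r = (ego_vel - vel[i]) / (vel[i + 1] - vel[i]);\n  return {min_acc[i] + r * (min_acc[i + 1] - min_acc[i]),\n          max_acc[i] + r * (max_acc[i + 1] - max_acc[i])};\n}\n// With the example map above, lat_acc_range({0, 2, 4, 6}, {0.2, 0.2, 0.3, 0.3},\n// {0.3, 0.4, 0.4, 0.5}, 3.0) yields {0.25, 0.4}, matching the text.\n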
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#candidate-paths-validity-check","title":"Candidate Path's validity check","text":"

A candidate path is considered valid if it meets the following criteria:

  1. The distance from the ego vehicle's current position to the end of the current lanes is sufficient to perform a single lane change.
  2. The distance from the ego vehicle's current position to the goal along the current lanes is adequate to complete multiple lane changes.
  3. The distance from the ego vehicle's current position to the end of the target lanes is adequate for completing multiple lane changes.
  4. The distance from the ego vehicle's current position to the next regulatory element is adequate to perform a single lane change.
  5. The lane change can be completed after passing a parked vehicle.
  6. The lane change is deemed safe to execute.

The following flow chart illustrates the validity check.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#delay-lane-change-check","title":"Delay Lane Change Check","text":"

In certain situations, when there are stopped vehicles ahead of the ego vehicle along the target lane, it is desirable to perform the lane change maneuver past the stopped vehicle to avoid getting stuck. To do so, all static objects ahead of the ego along the target lane are checked in order from closest to furthest; if any object satisfies the following conditions, the lane change is delayed and the candidate path is rejected.

  1. The distance from the object to the terminal end is sufficient to perform a lane change
  2. The distance to the object is less than the lane changing length
  3. The distance from the object to the next object is sufficient to perform a lane change

If the parameter check_only_parked_vehicle is set to true, the check will only consider objects which are determined as parked.

The following flow chart illustrates the delay lane change check.

The following figures demonstrate different situations under which will or will not be triggered:

  1. Delay lane change will be triggered as there is sufficient distance ahead.
  2. Delay lane change will NOT be triggered as there is not sufficient distance ahead.
  3. Delay lane change will be triggered by the first NPC as there is sufficient distance ahead.
  4. Delay lane change will be triggered by the second NPC as there is sufficient distance ahead.
  5. Delay lane change will NOT be triggered as there is not sufficient distance ahead.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#candidate-paths-safety-check","title":"Candidate Path's Safety check","text":"

See safety check utils explanation

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#objects-selection-and-classification","title":"Objects selection and classification","text":"

First, we divide the target objects into obstacles in the target lane, obstacles in the current lane, and obstacles in other lanes. The target lane is the lane that the ego vehicle will reach after the lane change, and the current lane is the lane the ego vehicle is following before the lane change. Other lanes are lanes that belong to neither the target nor the current lane. The following picture describes objects on each lane. Note that users can exclude objects on the current and other lanes from the safety check via the flags check_objects_on_current_lanes and check_objects_on_other_lanes.

Furthermore, to change lanes behind a vehicle waiting at a traffic light, we skip the safety check for stopped vehicles near the traffic light. The explanation of parked car detection is written in the documentation for the avoidance module.

The detection area for the target lane can be expanded beyond its original boundaries to enable detection of objects that are outside the target lane's limits.

Without Lane Expansion With Lane Expansion"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#object-filtering","title":"Object filtering","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#collision-check-in-prepare-phase","title":"Collision check in prepare phase","text":"

The ego vehicle may need to secure ample inter-vehicle distance ahead of the target vehicle before attempting a lane change. The flag enable_collision_check_at_prepare_phase can be enabled to gain this behavior. The following image illustrates the differences between the false and true cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#if-the-lane-is-blocked-and-multiple-lane-changes","title":"If the lane is blocked and multiple lane changes","text":"

When driving on the public road with other vehicles, there exist scenarios where lane changes cannot be executed. Suppose the candidate path is evaluated as unsafe, for example, due to incoming vehicles in the adjacent lane. In that case, the ego vehicle can't change lanes, and it is impossible to reach the goal. Therefore, the ego vehicle must stop earlier at a certain distance and wait for the adjacent lane to be evaluated as safe. The minimum stopping distance can be computed from shift length and minimum lane changing velocity.

lane_changing_time = f(shift_length, lat_acceleration, lat_jerk)\nminimum_lane_change_distance = minimum_prepare_length + minimum_lane_changing_velocity * lane_changing_time + lane_change_finish_judge_buffer\n

The following figure illustrates when the lane is blocked in multiple lane changes cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#stopping-behavior","title":"Stopping behavior","text":"

The stopping behavior of the ego vehicle is determined based on various factors, such as the number of lane changes required, the presence of obstacles, and the position of blocking objects in relation to the lane change plan. The objective is to choose a suitable stopping point that allows for a safe and effective lane change while adapting to different traffic scenarios.

The following flowchart and subsections explain the conditions for deciding where to insert a stop point when an obstacle is ahead.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#ego-vehicles-stopping-position-when-an-object-exists-ahead","title":"Ego vehicle's stopping position when an object exists ahead","text":"

When the ego vehicle encounters an obstacle ahead, it stops while maintaining a safe distance to prepare for a possible lane change. The exact stopping position depends on factors like whether the target lane is clear or if the lane change needs to be delayed. The following explains how different stopping scenarios are handled:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-the-near-the-end-of-the-lane-change","title":"When the near the end of the lane change","text":"

Whether the target lane has obstacles or is clear, the ego vehicle stops while keeping a safe distance from the obstacle ahead, ensuring there is enough room for the lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-the-ego-vehicle-is-not-near-the-end-of-the-lane-change","title":"When the ego vehicle is not near the end of the lane change","text":"

The ego vehicle stops while maintaining a safe distance from the obstacle ahead, ensuring there is enough space for a lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#ego-vehicles-stopping-position-when-an-object-exists-in-the-lane-changing-section","title":"Ego vehicle's stopping position when an object exists in the lane changing section","text":"

If there are objects within the lane change section of the target lane, the ego vehicle stops closer to the obstacle ahead, without maintaining the usual distance for a lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-near-the-end-of-the-lane-change","title":"When near the end of the lane change","text":"

Regardless of whether there are obstacles in the target lane, the ego vehicle stops while keeping a safe distance from the obstacle ahead, allowing for the lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-not-near-the-end-of-the-lane-change","title":"When not near the end of the lane change","text":"

If there are no obstacles in the lane change section of the target lane, the ego vehicle stops while keeping a safe distance from the obstacle ahead to accommodate the lane change.

If there are obstacles within the lane change section of the target lane, the ego vehicle stops closer to the obstacle ahead, without keeping the usual distance needed for a lane change.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#when-the-target-lane-is-far-away","title":"When the target lane is far away","text":"

If the target lane for the lane change is far away and not next to the current lane, the ego vehicle stops closer to the obstacle ahead, as maintaining the usual distance for a lane change is not necessary.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-when-stuck","title":"Lane Change When Stuck","text":"

The ego vehicle is considered stuck if it is stopped and meets any of the following conditions:

In this case, the safety check for the lane change is relaxed compared to normal operation. Please refer to the 'stuck' section under 'Collision checks during lane change' for more details. The stopping behavior described in the previous section, which keeps a margin from the forward obstacle, is used to achieve this feature.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-regulations","title":"Lane change regulations","text":"

If you want to regulate lane changes on crosswalks, intersections, or traffic lights, set regulation.crosswalk, regulation.intersection, or regulation.traffic_light to true; the lane change module is then disabled near any of them. If the ego vehicle gets stuck, lane changes are enabled in crosswalks and intersections to avoid remaining stuck. The ego vehicle is regarded as stuck if it stops for more than stuck_detection.stop_time seconds, and as stopping if its velocity is smaller than stuck_detection.velocity.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-completion-checks","title":"Lane change completion checks","text":"

To determine if the ego vehicle has successfully changed lanes, one of two criteria must be met: either the longitudinal or the lateral criteria.

For the longitudinal criteria, the ego vehicle must pass the lane-changing end pose and be within the finish_judge_buffer distance from it. The module then checks if the ego vehicle is in the target lane. If true, the module returns success. This check ensures that the planner manager updates the root lanelet correctly based on the ego vehicle's current pose. Without this check, if the ego vehicle is changing lanes while avoiding an obstacle and its current pose is in the original lane, the planner manager might set the root lanelet as the original lane. This would force the ego vehicle to perform the lane change again. With the target lane check, the ego vehicle is confirmed to be in the target lane, and the planner manager can correctly update the root lanelets.

If the longitudinal criteria are not met, the module evaluates the lateral criteria. For the lateral criteria, the ego vehicle must be within finish_judge_lateral_threshold distance from the target lane's centerline, and the angle deviation must be within finish_judge_lateral_angle_deviation degrees. The angle deviation check ensures there is no sudden steering. If the angle deviation is set too high, the ego vehicle's orientation could deviate significantly from the centerline, causing the trajectory follower to aggressively correct the steering to return to the centerline. Keeping the angle deviation value as small as possible avoids this issue.

The process of determining lane change completion is shown in the following diagram.
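
As a rough sketch of these two criteria (illustrative only; the argument names are assumptions, and all geometric quantities are assumed to be precomputed by the caller):

#include <cmath>\n\nbool is_lane_change_completed(\n  const bool has_passed_end_pose, const double dist_from_end_pose,\n  const bool is_ego_in_target_lane, const double lateral_dist_to_centerline,\n  const double yaw_deviation_deg, const double finish_judge_buffer,\n  const double th_lateral_m, const double th_angle_deg)\n{\n  // Longitudinal criteria: past the end pose, within the buffer, and in the target lane.\n  const bool longitudinal_ok =\n    has_passed_end_pose && dist_from_end_pose <= finish_judge_buffer && is_ego_in_target_lane;\n  if (longitudinal_ok) {\n    return true;\n  }\n  // Lateral criteria: close to the target centerline with a small heading deviation.\n  return std::abs(lateral_dist_to_centerline) < th_lateral_m &&\n         std::abs(yaw_deviation_deg) < th_angle_deg;\n}\n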

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#terminal-lane-change-path","title":"Terminal Lane Change Path","text":"

Depending on the space configuration around the ego vehicle, a valid lane change path cannot always be generated. If that happens, the ego vehicle would get stuck at terminal_start, unable to proceed. The terminal lane change path feature addresses this: when the ego vehicle gets near the terminal point (the distance to terminal_start is less than the maximum lane change length), a terminal lane change path is computed, starting from the terminal start point on the current lane and connecting to the target lane. The terminal path only needs to be computed once per execution of the lane change module. If no valid candidate paths are found in the path generation process, the terminal path is used as a fallback candidate path. Note that the safety of the terminal path is not ensured, so it can only be force approved. The following images illustrate the expected behavior without and with the terminal path feature, respectively:

Additionally, if the terminal path feature is enabled and a path has been computed, the stop point placement can be configured to be at the edge of the current lane instead of at the terminal_start position, as indicated by the dashed red line in the image above.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#generating-path-using-frenet-planner","title":"Generating Path Using Frenet Planner","text":"

Warning

Path generation using the Frenet planner applies only when the ego vehicle is near the terminal start.

If the ego vehicle is far from the terminal, the lane change module defaults to using the path shifter. This ensures that the lane change is completed while the target lane remains a neighbor of the current lane. However, this approach may result in high curvature paths near the terminal, potentially causing long vehicles to deviate from the lane.

To address this, the lane change module provides an option to choose between the path shifter and the Frenet planner. The Frenet planner allows for some flexibility in the lane change endpoint, extending the lane changing end point slightly beyond the current lane's neighbors.

The following table compares the two planners.

With Path Shifter With Frenet Planner

Note

The planner can be enabled or disabled using the frenet.enable flag.

Note

Since only a segment of the target lane is used as input to generate the lane change path, the end pose of the lane change segment may not smoothly connect to the target lane centerline. To address this, increase the value of frenet.th_curvature_smoothing to improve the smoothness.

Note

The yaw difference threshold (frenet.th_yaw_diff) limits the maximum yaw difference between the end of the prepare segment and the start of the lane changing segment. This threshold might prevent the generation of a lane change path when the lane curvature is high. In such cases, you can increase the frenet.th_yaw_diff value. However, note that if the prepare path was initially shifted by other modules, the resultant steering may not be continuous.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#aborting-a-previously-approved-lane-change","title":"Aborting a Previously Approved Lane Change","text":"

Once the lane change path is approved, there are several situations where we may need to abort the maneuver. The abort process is triggered when any of the following conditions is met:

  1. The ego vehicle is near a traffic light, crosswalk, or intersection, and it is possible to complete the lane change after the ego vehicle passes these areas.
  2. The target object list is updated, requiring the lane change to be delayed.
  3. The lane change is forcefully canceled via RTC.
  4. The path has become unsafe.

Furthermore, if the path has become unsafe, there are three possible outcomes for the maneuver:

  1. CANCEL: The approved lane change path is canceled, and the ego vehicle resumes its previous maneuver.
  2. ABORT: The lane change module generates a return path to bring the ego vehicle back to its current lane.
  3. CRUISE or STOP: If aborting is not feasible, the ego vehicle continues with the lane change. Another module should decide whether the ego vehicle should cruise or stop in this scenario.

CANCEL can be enabled by setting the cancel.enable_on_prepare_phase flag to true, and ABORT can be enabled by setting the cancel.enable_on_lane_changing_phase flag to true.

Warning

Enabling CANCEL is a prerequisite for enabling ABORT.

Warning

When CANCEL is disabled, all maneuvers will default to either CRUISE or STOP.

The following chart shows the high-level flow of the lane change abort process.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#preventing-oscillating-paths-when-unsafe","title":"Preventing Oscillating Paths When Unsafe","text":"

Lane change paths can oscillate when conditions switch between safe and unsafe. To address this, a hysteresis count check is added before executing an abort maneuver. When the path is unsafe, the unsafe_hysteresis_count_ increases. If it exceeds the unsafe_hysteresis_threshold, an abort condition check is triggered. This logic stabilizes the path approval process and prevents abrupt changes caused by temporary unsafe conditions.
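
A minimal sketch of this hysteresis filter (variable names follow the description above; the surrounding control flow is assumed):

// Illustrative hysteresis: the abort condition check is triggered only after\n// the path has been judged unsafe for more consecutive cycles than the threshold.\nvoid update_unsafe_hysteresis(\n  const bool is_path_safe, int & unsafe_hysteresis_count, const int unsafe_hysteresis_threshold,\n  bool & trigger_abort_condition_check)\n{\n  unsafe_hysteresis_count = is_path_safe ? 0 : unsafe_hysteresis_count + 1;\n  trigger_abort_condition_check = unsafe_hysteresis_count > unsafe_hysteresis_threshold;\n}\n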

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#evaluating-ego-vehicles-position-to-prevent-abrupt-maneuvers","title":"Evaluating Ego Vehicle's Position to Prevent Abrupt Maneuvers","text":"

To avoid abrupt maneuvers during CANCEL or ABORT, the lane change module ensures the ego vehicle can safely return to the original lane. This is done through geometric checks that verify whether the ego vehicle remains within the lane boundaries.

The edges of the ego vehicle\u2019s footprint are compared against the boundary of the current lane to determine if they exceed the overhang tolerance, cancel.overhang_tolerance. If the distance from any edge of the footprint to the boundary exceeds this threshold, the vehicle is considered to be diverging.

The footprints checked against the lane boundary include:

  1. Current Footprint: Based on the ego vehicle's current position.
  2. Future Footprint: Based on the ego vehicle's estimated position after traveling a distance, calculated as \(d_{est} = v_{ego} \cdot \Delta_{t}\), where \(v_{ego}\) is the ego vehicle's current velocity and \(\Delta_{t}\) is the parameterized time delay (cancel.delta_time),

    as depicted in the following diagram

Note

The ego vehicle is considered capable of safely returning to the current lane only if BOTH the current and future footprint checks are true.
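
A minimal sketch of this combined check, assuming the overhang of each footprint against the lane boundary has already been computed by hypothetical geometry helpers:

// Illustrative divergence check. The overhangs of the current footprint and of\n// the future footprint (evaluated at d_est = v_ego * dt) are assumed inputs.\nbool can_safely_return_to_current_lane(\n  const double current_footprint_overhang, const double future_footprint_overhang,\n  const double overhang_tolerance)  // cancel.overhang_tolerance\n{\n  // BOTH the current and the future footprint must stay within the tolerance.\n  return current_footprint_overhang <= overhang_tolerance &&\n         future_footprint_overhang <= overhang_tolerance;\n}\n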

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#checking-approved-path-safety","title":"Checking Approved Path Safety","text":"

The lane change module samples accelerations along the path and recalculates velocity to perform safety checks. The motivation for this feature is explained in the Limitation section.

The computation of sampled accelerations is as follows:

Let

\\[ \\text{resolution} = \\frac{a_{\\text{min}} - a_{\\text{LC}}}{N} \\]

The sequence of sampled accelerations is then given as

\\[ \\text{acc} = a_{\\text{LC}} + k \\cdot \\text{resolution}, \\quad k = [0, N] \\]

where \(a_{\text{min}}\) is the minimum (most negative) longitudinal acceleration limit, \(a_{\text{LC}}\) is the acceleration used to generate the approved lane change path, and \(N\) is the number of samples (cancel.deceleration_sampling_num).

If none of the sampled accelerations pass the safety check, the lane change path will be canceled, subject to the hysteresis check.
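
A short sketch of the sampling computation defined by the formulas above (function and argument names are illustrative):

#include <vector>\n\nstd::vector<double> sample_accelerations(\n  const double a_min, const double a_lane_change, const int num_samples)\n{\n  // resolution = (a_min - a_LC) / N\n  const double resolution = (a_min - a_lane_change) / num_samples;\n  std::vector<double> accelerations;\n  // acc = a_LC + k * resolution, for k in [0, N]\n  for (int k = 0; k <= num_samples; ++k) {\n    accelerations.push_back(a_lane_change + k * resolution);\n  }\n  return accelerations;\n}\n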

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#cancel","title":"Cancel","text":"

When lane change is canceled, the approved path is reset. After the reset, the ego vehicle will return to following the original reference path (the last approved path before the lane change started), as illustrated in the following image

The following parameters can be configured to tune the behavior of the cancel process:

  1. Safety constraints for cancel.
  2. Safety constraints for parked vehicle.

Note

To ensure feasible behavior, all safety constraint values must be equal to or less than their corresponding parameters in the execution settings.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#abort","title":"Abort","text":"

During the prepare phase, the ego vehicle follows the previously approved path. However, once the ego vehicle begins the lane change, its heading starts to diverge from this path. Resetting to the previously approved path in this situation would cause abrupt steering, as the controller would attempt to rapidly realign the vehicle with the reference trajectory.

Instead, the lane change module generates an abort path. This return path is specifically designed to guide the ego vehicle back to the current lane, avoiding any sudden maneuvers. The following image provides an illustration of the abort process.

The abort path is generated by shifting the approved lane change path using the path shifter. This ensures continuity in lateral velocity and prevents abrupt changes in the vehicle\u2019s movement. The abort start shift and abort end shift are computed as follows:

  1. Start Shift: \\(d_{start}^{abort} = v_{ego} \\cdot \\Delta_{t}\\)
  2. End Shift: \\(d_{end}^{abort} = v_{ego} \\cdot ( \\Delta_{t} + t_{end} )\\)

as depicted in the following diagram
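
These two shift distances follow directly from the formulas above. The sketch below assumes \(\Delta_{t}\) and \(t_{end}\) correspond to the cancel.delta_time and cancel.duration parameters, which is an inference from the parameter descriptions rather than a confirmed mapping:

struct AbortShift\n{\n  double start;  // d_start^abort = v_ego * delta_t\n  double end;    // d_end^abort = v_ego * (delta_t + t_end)\n};\n\nAbortShift calc_abort_shift(const double v_ego, const double delta_t, const double t_end)\n{\n  return AbortShift{v_ego * delta_t, v_ego * (delta_t + t_end)};\n}\n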

Note

When executing the abort process, comfort is not a primary concern. However, due to safety considerations, only limited real-world testing has been conducted to tune or validate this behavior. Currently, the maximum lateral jerk is set to an arbitrary value. To avoid generating a path with excessive lateral jerk, this value can be configured using cancel.max_lateral_jerk.

Note

The lane change module returns ModuleStatus::FAILURE once the abort is completed.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#stopcruise","title":"Stop/Cruise","text":"

Once canceling or aborting the lane change is no longer an option, the ego vehicle will proceed with the lane change. This can happen in the following situations:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#parameters","title":"Parameters","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#essential-lane-change-parameters","title":"Essential lane change parameters","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value time_limit [ms] double Time limit for lane change candidate path generation 50.0 backward_lane_length [m] double The backward length to check incoming objects in the lane change target lane. 200.0 backward_length_buffer_for_end_of_lane [m] double The end-of-lane buffer to ensure the ego vehicle has enough distance to start the lane change 3.0 backward_length_buffer_for_blocking_object [m] double The end-of-lane buffer to ensure the ego vehicle has enough distance to start the lane change when there is an object in front 3.0 backward_length_from_intersection [m] double Distance threshold from the last intersection to invalidate or cancel the lane change path 5.0 enable_stopped_vehicle_buffer [-] bool If true, keeps enough distance from a stopped object ahead in the current lane to perform the lane change when possible true trajectory.max_prepare_duration [s] double The maximum preparation time for the ego vehicle to be ready to perform the lane change. 4.0 trajectory.min_prepare_duration [s] double The minimum preparation time for the ego vehicle to be ready to perform the lane change. 2.0 trajectory.lateral_jerk [m/s3] double Lateral jerk value for lane change path generation 0.5 trajectory.minimum_lane_changing_velocity [m/s] double Minimum speed during the lane changing process. 2.78 trajectory.lon_acc_sampling_num [-] int Number of candidate lane-changing trajectories influenced by longitudinal acceleration 3 trajectory.lat_acc_sampling_num [-] int Number of candidate lane-changing trajectories influenced by lateral acceleration 3 trajectory.max_longitudinal_acc [m/s2] double Maximum longitudinal acceleration for lane change 1.0 trajectory.min_longitudinal_acc [m/s2] double Minimum longitudinal acceleration (maximum deceleration) for lane change -1.0 trajectory.lane_changing_decel_factor [-] double Longitudinal deceleration factor during the lane changing phase 0.5 trajectory.th_prepare_curvature [-] double If the maximum curvature of the prepare segment exceeds the threshold, the prepare segment is invalid. 0.03 min_length_for_turn_signal_activation [m] double The turn signal is activated when the ego vehicle comes within this distance of the minimum lane change length 10.0 lateral_acceleration.velocity [m/s] double Reference velocity for lateral acceleration calculation (lookup table) [0.0, 4.0, 10.0] lateral_acceleration.min_values [m/s2] double Min lateral acceleration values corresponding to velocity (lookup table) [0.4, 0.4, 0.4] lateral_acceleration.max_values [m/s2] double Max lateral acceleration values corresponding to velocity (lookup table) [0.65, 0.65, 0.65]"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#parameter-to-judge-if-lane-change-is-completed","title":"Parameter to judge if lane change is completed","text":"

The following parameters are used to judge lane change completion.

Name Unit Type Description Default value lane_change_finish_judge_buffer [m] double The longitudinal distance starting from the lane change end pose. 2.0 finish_judge_lateral_threshold [m] double The lateral distance from the target lanes' centerline. Used in conjunction with finish_judge_lateral_angle_deviation 0.1 finish_judge_lateral_angle_deviation [deg] double Ego angle deviation with respect to the target lanes' centerline. Used in conjunction with finish_judge_lateral_threshold 2.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#lane-change-regulations_1","title":"Lane change regulations","text":"Name Unit Type Description Default value regulation.crosswalk [-] boolean Regulate lane change on crosswalks true regulation.intersection [-] boolean Regulate lane change in intersections true regulation.traffic_light [-] boolean Regulate lane change at traffic lights true"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#ego-vehicle-stuck-detection","title":"Ego vehicle stuck detection","text":"Name Unit Type Description Default value stuck_detection.velocity [m/s] double Velocity threshold for ego vehicle stuck detection 0.1 stuck_detection.stop_time [s] double Stop time threshold for ego vehicle stuck detection 3.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#delay-lane-change","title":"Delay Lane Change","text":"Name Unit Type Description Default value delay_lane_change.enable [-] bool Flag to enable/disable the lane change delay feature true delay_lane_change.check_only_parked_vehicle [-] bool Flag to limit the delay feature to parked vehicles only false delay_lane_change.min_road_shoulder_width [m] double Width treated as a road shoulder when the lane has none, used when checking for parked vehicles 0.5 delay_lane_change.th_parked_vehicle_shift_ratio [-] double Stopped vehicles shifted beyond this ratio from the centerline are considered parked 0.6"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#terminal-lane-change-path_1","title":"Terminal Lane Change Path","text":"

The following parameters are used to configure the terminal lane change path feature.

Name Unit Type Description Default value terminal_path.enable [-] bool Flag to enable/disable the terminal path feature true terminal_path.disable_near_goal [-] bool Flag to disable the terminal path feature if the ego vehicle is near the goal true terminal_path.stop_at_boundary [-] bool If true, the ego vehicle stops at the current lane boundary instead of the middle of the lane false"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#generating-lane-changing-path-using-frenet-planner","title":"Generating Lane Changing Path using Frenet Planner","text":"

Warning

Only applicable when ego is near terminal start

Name Unit Type Description Default value frenet.enable [-] bool Flag to enable/disable the frenet planner when the ego vehicle is near the terminal start. true frenet.th_yaw_diff [deg] double If the yaw difference between the prepare segment's end and the lane changing segment's start exceeds the threshold, the lane changing segment is invalid. 10.0 frenet.th_curvature_smoothing [-] double Filters and appends target path points with curvature below the threshold to the candidate path. 0.1"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#collision-checks","title":"Collision checks","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#target-objects","title":"Target Objects","text":"Name Unit Type Description Default value target_object.car [-] boolean Include car objects for safety check true target_object.truck [-] boolean Include truck objects for safety check true target_object.bus [-] boolean Include bus objects for safety check true target_object.trailer [-] boolean Include trailer objects for safety check true target_object.unknown [-] boolean Include unknown objects for safety check true target_object.bicycle [-] boolean Include bicycle objects for safety check true target_object.motorcycle [-] boolean Include motorcycle objects for safety check true target_object.pedestrian [-] boolean Include pedestrian objects for safety check true"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#common","title":"common","text":"Name Unit Type Description Default value safety_check.lane_expansion.left_offset [m] double Expand the left boundary of the detection area, allowing objects previously outside on the left to be detected and registered as targets. 0.0 safety_check.lane_expansion.right_offset [m] double Expand the right boundary of the detection area, allowing objects previously outside on the right to be detected and registered as targets. 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#additional-parameters","title":"Additional parameters","text":"Name Unit Type Description Default value collision_check.enable_for_prepare_phase.general_lanes [-] boolean Perform the collision check starting from the prepare phase for situations not explicitly covered by other settings (e.g., intersections). If false, the collision check is only evaluated during the lane changing phase. false collision_check.enable_for_prepare_phase.intersection [-] boolean Perform the collision check starting from the prepare phase when the ego vehicle is in an intersection. If false, the collision check is only evaluated during the lane changing phase. true collision_check.enable_for_prepare_phase.turns [-] boolean Perform the collision check starting from the prepare phase when the ego vehicle is in a lanelet with turn direction tags. If false, the collision check is only evaluated during the lane changing phase. true collision_check.check_current_lanes [-] boolean If true, the lane change module always checks objects in the current lanes for collision assessment. If false, it only checks objects in the current lanes when the ego vehicle is stuck. false collision_check.check_other_lanes [-] boolean If true, the lane change module includes objects in other lanes when performing collision assessment. false collision_check.use_all_predicted_paths [-] boolean If false, use only the predicted path that has the maximum confidence. true collision_check.prediction_time_resolution [s] double Time resolution for the object's path interpolation and collision check. 
0.5 collision_check.yaw_diff_threshold [rad] double Maximum yaw difference between the predicted ego pose and the predicted object pose when executing RSS-based collision checking 3.1416 collision_check.th_incoming_object_yaw [rad] double Maximum yaw difference between the current ego pose and the current object pose. Objects with a yaw difference exceeding this value are excluded from the safety check. 2.3562"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-during-lane-change-path-is-computed","title":"safety constraints during lane change path is computed","text":"Name Unit Type Description Default value safety_check.execution.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.execution.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 safety_check.execution.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the sudden braking of the front vehicle until the driver steps on the brake. 2.0 safety_check.execution.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 1.0 safety_check.execution.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is sufficient and whether the lane change is safe. 2.0 safety_check.execution.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is sufficient and whether the lane change is safe. 3.0 safety_check.execution.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.8 safety_check.execution.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-specifically-for-stopped-or-parked-vehicles","title":"safety constraints specifically for stopped or parked vehicles","text":"Name Unit Type Description Default value safety_check.parked.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.parked.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -2.0 safety_check.parked.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the sudden braking of the front vehicle until the driver steps on the brake. 1.0 safety_check.parked.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 0.8 safety_check.parked.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is sufficient and whether the lane change is safe. 
1.0 safety_check.parked.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is sufficient and whether the lane change is safe. 3.0 safety_check.parked.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.8 safety_check.parked.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-to-cancel-lane-change-path","title":"safety constraints to cancel lane change path","text":"Name Unit Type Description Default value safety_check.cancel.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.cancel.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -2.0 safety_check.cancel.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the sudden braking of the front vehicle until the driver steps on the brake. 1.5 safety_check.cancel.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 0.8 safety_check.cancel.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is sufficient and whether the lane change is safe. 1.0 safety_check.cancel.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is sufficient and whether the lane change is safe. 2.5 safety_check.cancel.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.6 safety_check.cancel.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#safety-constraints-used-during-lane-change-path-is-computed-when-ego-is-stuck","title":"safety constraints used during lane change path is computed when ego is stuck","text":"Name Unit Type Description Default value safety_check.stuck.expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.stuck.expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 safety_check.stuck.rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, starting from when the driver notices the sudden braking of the front vehicle until the driver steps on the brake. 2.0 safety_check.stuck.rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 
1.0 safety_check.stuck.lateral_distance_max_threshold [m] double The lateral distance threshold used to determine whether the lateral distance between two objects is sufficient and whether the lane change is safe. 2.0 safety_check.stuck.longitudinal_distance_min_threshold [m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is sufficient and whether the lane change is safe. 3.0 safety_check.stuck.longitudinal_velocity_delta_time [s] double The time multiplier used to compute the actual gap between vehicles at each predicted point (not the RSS distance) 0.8 safety_check.stuck.extended_polygon_policy [-] string Policy used to determine the polygon shape for the safety check. Available options are: rectangle or along-path. rectangle

(*1) The value must be negative.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#abort-lane-change","title":"Abort lane change","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value cancel.enable_on_prepare_phase [-] boolean Enable lane change cancel true cancel.enable_on_lane_changing_phase [-] boolean Enable lane change abort false cancel.delta_time [s] double The time taken to start steering to return to the center line. 3.0 cancel.duration [s] double The time taken to complete returning to the center line. 3.0 cancel.max_lateral_jerk [m/s3] double The maximum lateral jerk for the abort path 1000.0 cancel.overhang_tolerance [m] double Lane change cancel is prohibited if the vehicle head exceeds the lane boundary by more than this tolerance distance 0.0 cancel.unsafe_hysteresis_threshold [-] int Threshold that helps prevent frequent switching between safe and unsafe decisions 10 cancel.deceleration_sampling_num [-] int Number of deceleration patterns sampled for the safety check when canceling the lane change 5"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#debug","title":"Debug","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value publish_debug_marker [-] boolean Flag to publish debug marker false"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#debug-marker-visualization","title":"Debug Marker & Visualization","text":"

To enable the debug marker, execute (no restart is needed)

ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner lane_change.publish_debug_marker true\n

or simply set publish_debug_marker to true in lane_change.param.yaml for a permanent effect (restart required).

Then add the marker

/planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/lane_change_left\n

in rviz2.

Available information:

  1. Ego to object relation, plus safety check information
  2. Ego vehicle interpolated pose up to the latest safety check position.
  3. Whether the object is safe or not, shown by the color of the polygon (green = safe, red = unsafe)
  4. Valid candidate paths.
  5. Positions where the lane change starts and ends.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_lane_change_module/#limitation","title":"Limitation","text":"
  1. When a lane change is canceled, the lane change module returns ModuleStatus::FAILURE. As the module is removed from the approved module stack (see Failure modules), a new instance of the lane change module is initiated. Due to this, any information stored prior to the reset is lost. For example, the lane_change_prepare_duration in the TransientData is reset to its maximum value.
  2. The lane change module has no knowledge of any velocity modifications introduced to the path after it is approved. This is because other modules may add deceleration points after subscribing to the behavior path planner output, and the final velocity is managed by the velocity smoother. Since this limitation affects CANCEL, the lane change module mitigates it by sampling accelerations along the approved lane change path. These sampled accelerations are used during safety checks to estimate the velocity that might occur if the ego vehicle decelerates.
  3. Ideally, the abort path should account for whether its execution would affect trailing vehicles in the current lane. However, the lane change module does not evaluate such interactions or assess whether the abort path is safe. As a result, the abort path is not guaranteed to be safe. To minimize the risk of unsafe situations, the abort maneuver is only permitted if the ego vehicle has not yet diverged from the current lane.
  4. Due to limited resources, the abort path logic is not fully optimized. The generated path may overshoot, causing the return trajectory to slightly shift toward the opposite lane. This can be dangerous, especially if the opposite lane has traffic moving in the opposite direction. Furthermore, the logic does not account for different vehicle types, which can lead to varying effects. For instance, the behavior might differ significantly between a bus and a small passenger car.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/","title":"Behavior Path Planner","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#behavior-path-planner","title":"Behavior Path Planner","text":"

The Behavior Path Planner's main objective is to significantly enhance the safety of autonomous vehicles by minimizing the risk of accidents. It improves driving efficiency through time conservation and underpins reliability with its rule-based approach. Additionally, it allows users to integrate their own custom behavior modules or use it with different types of vehicles, such as cars, buses, and delivery robots, as well as in various environments, from busy urban streets to open highways.

The module begins by thoroughly analyzing the ego vehicle's current situation, including its position, speed, and surrounding environment. This analysis leads to essential driving decisions about lane changes or stopping and subsequently generates a path that is both safe and efficient. It considers road geometry, traffic rules, and dynamic conditions while also incorporating obstacle avoidance to respond to static and dynamic obstacles such as other vehicles, pedestrians, or unexpected roadblocks, ensuring safe navigation.

Moreover, the planner responds to the behavior of other traffic participants, predicting their actions and accordingly adjusting the vehicle's path. This ensures not only the safety of the autonomous vehicle but also contributes to smooth traffic flow. Its adherence to traffic laws, including speed limits and compliance with traffic signals, further guarantees lawful and predictable driving behavior. The planner is also designed to minimize sudden or abrupt maneuvers, aiming for a comfortable and natural driving experience.

Note

The Planning Component Design documentation outlines the foundational philosophy guiding the design and future development of the Behavior Path Planner module. We strongly encourage readers to consult this document to understand the rationale behind its current configuration and the direction of its ongoing development.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#purpose-use-cases","title":"Purpose / Use Cases","text":"

Essentially, the module has three primary responsibilities:

  1. Creating a path based on the traffic situation.
  2. Generating drivable area, i.e. the area within which the vehicle can maneuver.
  3. Generating turn signal commands to be relayed to the vehicle interface.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#features","title":"Features","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#supported-scene-modules","title":"Supported Scene Modules","text":"

Behavior Path Planner has the following scene modules:

Name Description Details Lane Following This module generates a reference path from the lanelet centerline. LINK Static Obstacle Avoidance This module generates an avoidance path when there are objects that should be avoided. LINK Dynamic Obstacle Avoidance WIP LINK Avoidance By Lane Change This module generates a lane change path when there are objects that should be avoided. LINK Lane Change This module is executed when a lane change is necessary and the collision check with other vehicles is cleared. LINK External Lane Change WIP LINK Goal Planner This module is executed when the ego vehicle is in a driving lane and the goal is in the shoulder lane. The ego vehicle will stop at the goal. LINK Start Planner This module is executed when the ego vehicle is stationary and the footprint of the ego vehicle is included in the shoulder lane. This module ends when the ego vehicle merges into the road. LINK Side Shift This module shifts the path to the left or right based on external instructions, intended for remote control applications. LINK

Note

Click on the following images to view videos of their execution

Note

Users can refer to Planning component design for some additional behavior.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#how-to-add-or-implement-new-module","title":"How to add or implement new module","text":"

All scene modules are implemented by inheriting from the base class defined in scene_module_interface.hpp.

Warning

The remainder of this subsection is a work in progress (WIP).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#planner-manager","title":"Planner Manager","text":"

The Planner Manager's responsibilities include:

  1. Activating the relevant scene module in response to the specific situation faced by the autonomous vehicle. For example, when a parked vehicle blocks the ego vehicle's driving lane, the manager would engage the avoidance module.
  2. Managing the execution order when multiple modules are running simultaneously. For instance, if both the lane-changing and avoidance modules are operational, the manager decides which should take precedence.
  3. Merging paths from multiple modules when they are activated simultaneously and each generates its own path, thereby creating a single functional path.

Note

To check the scene module's transition \u2013 i.e., registered, approved and candidate modules \u2013 set verbose: true in the Behavior Path Planner configuration file.

Note

For more in-depth information, refer to the Manager design document.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#input","title":"Input","text":"Name Required? Type Description ~/input/odometry \u25cb nav_msgs::msg::Odometry For ego velocity ~/input/accel \u25cb geometry_msgs::msg::AccelWithCovarianceStamped For ego acceleration ~/input/objects \u25cb autoware_perception_msgs::msg::PredictedObjects Dynamic objects from the perception module ~/input/occupancy_grid_map \u25cb nav_msgs::msg::OccupancyGrid Occupancy grid map from the perception module. This is used for only the Goal Planner module ~/input/traffic_signals \u25cb autoware_perception_msgs::msg::TrafficLightGroupArray Traffic signal information from the perception module ~/input/vector_map \u25cb autoware_map_msgs::msg::LaneletMapBin Vector map information ~/input/route \u25cb autoware_planning_msgs::msg::LaneletRoute Current route from start to goal ~/input/scenario \u25cb tier4_planning_msgs::msg::Scenario Launches Behavior Path Planner if current scenario == Scenario:LaneDriving ~/input/lateral_offset \u25b3 tier4_planning_msgs::msg::LateralOffset Lateral offset to trigger side shift ~/system/operation_mode/state \u25cb autoware_adapi_v1_msgs::msg::OperationModeState Allows the planning module to know if vehicle is in autonomous mode or if it can be controlledref "},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#output","title":"Output","text":"Name Type Description QoS Durability ~/output/path tier4_planning_msgs::msg::PathWithLaneId The path generated by modules volatile ~/output/turn_indicators_cmd autoware_vehicle_msgs::msg::TurnIndicatorsCommand Turn indicators command volatile ~/output/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand Hazard lights command volatile ~/output/modified_goal autoware_planning_msgs::msg::PoseWithUuidStamped Output modified goal commands transient_local ~/output/reroute_availability tier4_planning_msgs::msg::RerouteAvailability The path the module is about to take. To be executed as soon as external approval is obtained volatile"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#debug","title":"Debug","text":"Name Type Description QoS Durability ~/debug/avoidance_debug_message_array tier4_planning_msgs::msg::AvoidanceDebugMsgArray Debug message for avoidance. Notifies users of reasons avoidance path cannot be generated volatile ~/debug/lane_change_debug_message_array tier4_planning_msgs::msg::LaneChangeDebugMsgArray Debug message for lane change. Notifies users of unsafe conditions during lane-changing process volatile ~/debug/maximum_drivable_area visualization_msgs::msg::MarkerArray Shows maximum static drivable area volatile ~/debug/turn_signal_info visualization_msgs::msg::MarkerArray TBA volatile ~/debug/bound visualization_msgs::msg::MarkerArray Debug for static drivable area volatile ~/planning/path_candidate/* autoware_planning_msgs::msg::Path The path before approval volatile ~/planning/path_reference/* autoware_planning_msgs::msg::Path Reference path generated by each module volatile

Note

For specific information about which topics are being subscribed to and published, refer to behavior_path_planner.xml.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#how-to-enable-or-disable-modules","title":"How to Enable or Disable Modules","text":"

Enabling and disabling the modules in the Behavior Path Planner is primarily managed through two key files: default_preset.yaml and behavior_path_planner.launch.xml.

The default_preset.yaml file acts as a configuration file for enabling or disabling specific modules within the planner. It contains a series of arguments which represent the Behavior Path Planner's modules or features.

Note

Click here to view the default_preset.yaml.

The behavior_path_planner.launch.xml file references the settings defined in default_preset.yaml to apply the configurations when the Behavior Path Planner's node is running. For instance, the parameter static_obstacle_avoidance.enable_module in the following segment corresponds to launch_static_obstacle_avoidance_module from default_preset.yaml:

<param name=\"static_obstacle_avoidance.enable_module\" value=\"$(var launch_static_obstacle_avoidance_module)\"/>\n

Therefore, to enable or disable a module, simply set the corresponding module in default_preset.yaml to true or false. These changes will be applied upon the next launch of Autoware.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#generating-path","title":"Generating Path","text":"

A sophisticated methodology is used for path generation, particularly focusing on maneuvers like lane changes and avoidance. At the core of this design is the smooth lateral shifting of the reference path, achieved through a constant-jerk profile. This approach ensures a consistent rate of change in acceleration, facilitating smooth transitions and minimizing abrupt changes in lateral dynamics, crucial for passenger comfort and safety.

The design involves complex mathematical formulations for calculating the lateral shift of the vehicle's path over time. These calculations include determining lateral displacement, velocity, and acceleration, while considering the vehicle's lateral acceleration and velocity limits. This is essential for ensuring that the vehicle's movements remain safe and manageable.

The ShiftLine struct (as seen here) is utilized to represent points along the path where the lateral shift starts and ends. It includes details like the start and end points in absolute coordinates, the relative shift lengths at these points compared to the reference path, and the associated indexes on the reference path. This struct is integral to managing the path shifts, as it allows the path planner to dynamically adjust the trajectory based on the vehicle's current position and planned maneuver.

Furthermore, the design and its implementation incorporate various equations and mathematical models to calculate essential parameters for the path shift. These include the total distance of the lateral shift, the maximum allowable lateral acceleration and jerk, and the total time required for the shift. Practical considerations are also noted, such as simplifying assumptions in the absence of a specific time interval for most lane change and avoidance cases.
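
As one concrete illustration (a simplified case, not the module's full formulation, which also accounts for lateral acceleration limits): for a symmetric constant-jerk profile with jerk magnitude \(j\) and zero lateral velocity and acceleration at both ends, the total lateral shift \(l\) and the shift duration \(T\) are related by

\\[ l = \\frac{j T^{3}}{32}, \\qquad T = \\left( \\frac{32 l}{j} \\right)^{1/3} \\]

so, for a given jerk limit, the required shift time grows with the cube root of the shift length.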

The shifted path generation logic enables the Behavior Path Planner to dynamically generate safe and efficient paths, precisely controlling the vehicle\u2019s lateral movements to ensure the smooth execution of lane changes and avoidance maneuvers. This careful planning and execution adhere to the vehicle's dynamic capabilities and safety constraints, maximizing efficiency and safety in autonomous vehicle navigation.

Note

If you're a math lover, refer to Path Generation Design for the nitty-gritty.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#collision-assessment-safety-check","title":"Collision Assessment / Safety Check","text":"

The purpose of the collision assessment function in the Behavior Path Planner is to evaluate the potential for collisions with target objects across all modules. It is utilized in two scenarios:

  1. During candidate path generation, to ensure that the generated candidate path is collision-free.
  2. When the path is approved by the manager, and the ego vehicle is executing the current module. If the current situation is deemed unsafe, depending on each module's requirements, the planner will either cancel the execution or opt to execute another module.

The safety check process involves several steps. Initially, it obtains the pose of the target object at a specific time, typically through interpolation of the predicted path. It then checks for any overlap between the ego vehicle and the target object at this time. If an overlap is detected, the path is deemed unsafe. The function also identifies which vehicle is in front by using the arc length along the given path. The function operates under the assumption that accurate data on the position, velocity, and shape of both the ego vehicle (the autonomous vehicle) and any target objects are available. It also relies on the yaw angle of each point in the predicted paths of these objects, which is expected to point towards the next path point.

A critical part of the safety check is the calculation of the RSS (Responsibility-Sensitive Safety) distance-inspired algorithm. This algorithm considers factors such as reaction time, safety time margin, and the velocities and decelerations of both vehicles. Extended object polygons are created for both the ego and target vehicles. Notably, the rear object\u2019s polygon is extended by the RSS distance longitudinally and by a lateral margin. The function finally checks for overlap between this extended rear object polygon and the front object polygon. Any overlap indicates a potential unsafe situation.
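
A minimal sketch of such an RSS-inspired longitudinal distance, written in terms of the rear vehicle's reaction time \(t_{reaction}\), the safety time margin \(t_{margin}\), and the expected decelerations of both vehicles (the module's exact formulation may differ), is

\\[ d_{rss} = v_{rear} (t_{reaction} + t_{margin}) + \\frac{v_{rear}^{2}}{2 |a_{rear}|} - \\frac{v_{front}^{2}}{2 |a_{front}|} \\]

The rear object's polygon is then extended longitudinally by this distance before the overlap check.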

However, the module does have a limitation concerning the yaw angle of each point in the predicted paths of target objects, which may not always accurately point to the next point, leading to potential inaccuracies in some edge cases.

Note

For further reading on the collision assessment method, please refer to Safety check utils

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#generating-drivable-area","title":"Generating Drivable Area","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#static-drivable-area-logic","title":"Static Drivable Area logic","text":"

The drivable area is used to determine the area in which the ego vehicle can travel. The primary goal of static drivable area expansion is to ensure safe travel by generating an area that encompasses only the necessary spaces for the vehicle's current behavior, while excluding non-essential areas. For example, while the avoidance module is running, the drivable area includes additional space needed for maneuvers around obstacles, and it limits the behavior by not extending the avoidance path outside of lanelet areas.

Static drivable area expansion operates under assumptions about the correct arrangement of lanes and the coverage of both the front and rear of the vehicle within the left and right boundaries. Key parameters for drivable area generation include extra footprint offsets for the ego vehicle, the handling of dynamic objects, the maximum expansion distance, and the specific expansion methods. Additionally, since each module generates its own drivable area, the system sorts the drivable lanes based on the vehicle's passage order before passing them as input to the next running module's drivable area generation, or before generating a unified drivable area. This ensures the correct definition of the lanes used in drivable area generation.

Note

Further details can be found in Drivable Area Design.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#dynamic-drivable-area-logic","title":"Dynamic Drivable Area Logic","text":"

Large vehicles require much more space, which sometimes causes them to veer out of their current lane. A typical example is a bus making a turn at a corner. In such cases, relying on a static drivable area is insufficient, since the static method depends on lane information provided by high-definition maps. To overcome the limitations of the static approach, the dynamic drivable area expansion algorithm adjusts the navigable space for an autonomous vehicle in real time. It conserves computational power by reusing previously calculated path data, updating only when there is a significant change in the vehicle's position. The system evaluates the minimum lane width necessary to accommodate the vehicle's turning radius and other dynamic factors. It then calculates the optimal expansion of the drivable area's boundaries to ensure there is adequate space for safe maneuvering, taking into account the vehicle's path curvature. The rate at which these boundaries can expand or contract is moderated to maintain stability in the vehicle's navigation. The algorithm aims to maximize the drivable space while avoiding fixed obstacles and adhering to legal driving limits. Finally, it applies these boundary adjustments and smooths out the path curvature calculations to ensure a safe and legally compliant navigable path is maintained throughout the vehicle's operation.

Note

The feature can be enabled in the drivable_area_expansion.param.yaml.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#generating-turn-signal","title":"Generating Turn Signal","text":"

The Behavior Path Planner module uses the autoware_vehicle_msgs::msg::TurnIndicatorsCommand to output turn signal commands (see TurnIndicatorsCommand.idl). The system evaluates the driving context and determines when to activate turn signals based on its maneuver planning\u2014like turning, lane changing, or obstacle avoidance.

Within this framework, the system differentiates between desired and required blinker activations. Desired activations are those recommended by traffic laws for typical driving scenarios, such as signaling before a lane change or turn. Required activations are those that are deemed mandatory for safety reasons, like signaling an abrupt lane change to avoid an obstacle.

The TurnIndicatorsCommand message structure has a command field that can take one of several constants: NO_COMMAND indicates no signal is necessary, DISABLE to deactivate signals, ENABLE_LEFT to signal a left turn, and ENABLE_RIGHT to signal a right turn. The Behavior Path Planner sends these commands at the appropriate times, based on its rules-based system that considers both the desired and required scenarios for blinker activation.
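
As a small usage sketch (the helper function is hypothetical; the message type and its constants are those named above):

#include <autoware_vehicle_msgs/msg/turn_indicators_command.hpp>\n#include <rclcpp/time.hpp>\n\n// Build a command that signals a left turn at the given time stamp.\nautoware_vehicle_msgs::msg::TurnIndicatorsCommand make_left_signal_command(const rclcpp::Time & now)\n{\n  autoware_vehicle_msgs::msg::TurnIndicatorsCommand cmd;\n  cmd.stamp = now;\n  cmd.command = autoware_vehicle_msgs::msg::TurnIndicatorsCommand::ENABLE_LEFT;\n  return cmd;\n}\n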

Note

For more in-depth information, refer to Turn Signal Design document.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#rerouting","title":"Rerouting","text":"

Warning

The rerouting feature is under development. Further information will be included at a later date.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#parameters-and-configuration","title":"Parameters and Configuration","text":"

The configuration files are organized in a hierarchical directory structure for ease of navigation and management. Each subdirectory contains specific configuration files relevant to its module. The root directory holds general configuration files that apply to the overall behavior of the planner. The following is an overview of the directory structure with the respective configuration files.

behavior_path_planner\n\u251c\u2500\u2500 behavior_path_planner.param.yaml\n\u251c\u2500\u2500 drivable_area_expansion.param.yaml\n\u251c\u2500\u2500 scene_module_manager.param.yaml\n\u251c\u2500\u2500 static_obstacle_avoidance\n\u2502   \u2514\u2500\u2500 static_obstacle_avoidance.param.yaml\n\u251c\u2500\u2500 avoidance_by_lc\n\u2502   \u2514\u2500\u2500 avoidance_by_lc.param.yaml\n\u251c\u2500\u2500 dynamic_obstacle_avoidance\n\u2502   \u2514\u2500\u2500 dynamic_obstacle_avoidance.param.yaml\n\u251c\u2500\u2500 goal_planner\n\u2502   \u2514\u2500\u2500 goal_planner.param.yaml\n\u251c\u2500\u2500 lane_change\n\u2502   \u2514\u2500\u2500 lane_change.param.yaml\n\u251c\u2500\u2500 side_shift\n\u2502   \u2514\u2500\u2500 side_shift.param.yaml\n\u2514\u2500\u2500 start_planner\n    \u2514\u2500\u2500 start_planner.param.yaml\n

Similarly, the common directory contains configuration files that are used across various modules, providing shared parameters and settings essential for the functioning of the Behavior Path Planner:

common\n\u251c\u2500\u2500 common.param.yaml\n\u251c\u2500\u2500 costmap_generator.param.yaml\n\u2514\u2500\u2500 nearest_search.param.yaml\n

The preset directory contains the configurations for managing the operational state of various modules. It includes the default_preset.yaml file, which specifically caters to enabling and disabling modules within the system.

preset\n\u2514\u2500\u2500 default_preset.yaml\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/#limitations-future-work","title":"Limitations & Future Work","text":"
  1. The Goal Planner module cannot be simultaneously executed together with other modules.
  2. The module is not designed as a plugin. Integrating a custom module is not straightforward. Users have to modify part of the Behavior Path Planner's main code.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_interface_design/","title":"Interface design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_interface_design/#interface-design","title":"Interface design","text":"

Warning

Under Construction

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/","title":"Limitations","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitations","title":"Limitations","text":"

The document describes the limitations that are currently present in the behavior_path_planner module.

The following items (among others) fall within the scope of these limitations:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-multiple-connected-opposite-lanes-require-linestring-with-shared-id","title":"Limitation: Multiple connected opposite lanes require Linestring with shared ID","text":"

To fully utilize the Lanelet2's API, the design of the vector map (.osm) needs to follow all the criteria described in Lanelet2 documentation. Specifically, in the case of 2 or more lanes, the Linestrings that divide the current lane with the opposite/adjacent lane need to have a matching Linestring ID. Assume the following ideal case.

In the image, Linestring ID51 is shared by Lanelet A and Lanelet B. Hence we can directly use the left, adjacentLeft, right, adjacentRight and findUsages methods within Lanelet2's API to query the direction and the availability of the opposite lane.

// lanelet to the right that can be reached by a lane change\nconst auto right_lane = routing_graph_ptr_->right(lanelet);\n// right neighbor that cannot be reached by a lane change\nconst auto adjacent_right_lane = routing_graph_ptr_->adjacentRight(lanelet);\n// opposite-direction lanelets that use the inverted right bound\nconst auto opposite_right_lane = lanelet_map_ptr_->laneletLayer.findUsages(lanelet.rightBound().invert());\n

The following images show situations where these APIs do not work directly. This means that we cannot use them straight away, and several assumptions and logical instructions are needed to make these APIs work.

In this example (multiple Linestring issues), Lanelet C contains Linestring ID61 and ID62, while Lanelet D contains Linestring ID63 and ID64. Although Linestring ID62 and ID64 have identical point IDs and seem visually connected, the API will treat these Linestrings as though they were separate. When it searches for any Lanelet that is connected via Linestring ID62, it will return NULL, since ID62 connects only to Lanelet C and to no other Lanelet.

In this case, it is possible to forcefully search for the lanelet by checking which lanelets contain the points, using the getLaneletFromPoint method. However, the implementation requires complex rules to make it work. Take the following images as an example.

Assume Object X is in Lanelet F. We can forcefully search Lanelet E via Point 7, and this works if Point 7 is used by only two lanelets. However, the complexity increases when we want to start searching for the direction of the opposite lane. We can infer the direction of the lanelet by using mathematical operations (the dot product of the vectors V_ID72 (Point 6 minus Point 9) and V_ID74 (Point 7 minus Point 8)). Notice that we did not use Point 7 in V_ID72; searching for it would require an iteration, adding unnecessary computation.
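
The direction check itself reduces to a sign test on the dot product of the two boundary vectors. The following is a minimal sketch of that test; the Point2d type and the helper names are illustrative assumptions, not part of the actual implementation.

// Illustrative sketch: infer whether two boundary vectors (e.g. V_ID72 and\n// V_ID74) point in roughly opposite directions via the sign of their dot product.\nstruct Point2d\n{\ndouble x;\ndouble y;\n};\n\n// direction vector from p_from to p_to\nPoint2d makeVector(const Point2d & p_from, const Point2d & p_to)\n{\nreturn {p_to.x - p_from.x, p_to.y - p_from.y};\n}\n\n// a negative dot product means the two vectors (and thus the lanes) face opposite ways\nbool isOppositeDirection(const Point2d & v1, const Point2d & v2)\n{\nreturn (v1.x * v2.x + v1.y * v2.y) < 0.0;\n}\n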

Suppose the points are used by more than two lanelets. In that case, we have to find the differences for all lanelets, and the result might be undefined. The reason is that the differences between the coordinates do not reflect the actual shape of the lanelet. The following image demonstrates this point.

There are many other possible solutions, but further attempts to solve this might cause issues in the future, especially for maintaining or scaling up the software.

In conclusion, the multiple-Linestring issue will not be supported. Covering these scenarios might give the user an \"everything is possible\" impression, which is dangerous, since any attempt to create a non-standardized vector map is not compliant with safety regulations.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-avoidance-at-corners-and-intersections","title":"Limitation: Avoidance at Corners and Intersections","text":"

Currently, the implementation doesn't cover avoidance at corners and intersections. The reason is similar to the Linestring issue described above. However, this case can still be supported in the future (assuming the vector map is defined correctly).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-chattering-shifts","title":"Limitation: Chattering shifts","text":"

The shifted path may chatter as a result of various factors, for example the bounding box shape or position from the perception input. Sometimes it is difficult for perception to get complete information about the object's size. As the object size is updated, the object length will also be updated, which may cause the shift points to be recalculated and therefore result in chattering shift points.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#manager-design","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#purpose-role","title":"Purpose / Role","text":"

The manager launches and executes scene modules in behavior_path_planner depending on the use case, and has been developed to achieve the following features:

Movie

Support status:

Name Simple exclusive execution Advanced simultaneous execution Avoidance Avoidance By Lane Change Lane Change External Lane Change Goal Planner (without goal modification) Goal Planner (with goal modification) Pull Out Side Shift

Click here for supported scene modules.

Warning

It is still under development and some functions may be unstable.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#overview","title":"Overview","text":"

The manager is the core part of the behavior_path_planner implementation. It outputs a path based on the latest data.

The manager has sub-managers for each scene module, and its main task is:

Additionally, the manager generates the root reference path, and if no other module requests execution, that path is used as the planning result of behavior_path_planner.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#slot","title":"Slot","text":"

The manager owns several containers of sub-managers, namely slots, each of which holds/runs several sub-managers and sends its output to the next slot. Given the initial reference path, each slot processes the input path, and the output path is processed by the next slot. The output of the final slot is used as the output of the manager. The slot passes the following information:

struct SlotOutput\n{\nBehaviorModuleOutput valid_output;\n\n// if a candidate module is running, valid_output contains the planning by the candidate module.\n// In that case, downstream slots will just run approved modules and do not try to launch new\n// modules\nbool is_upstream_candidate_exclusive{false};\n\n// if this slot failed, downstream slots need to refresh approved/candidate modules and just\n// forward the valid_output of this slot's output\nbool is_upstream_failed_approved{false};\n\n// if the approved module in this slot returned to isWaitingApproval, downstream slots need to\n// refresh candidates once\nbool is_upstream_waiting_approved{false};\n};\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#sub-managers","title":"Sub-managers","text":"

The sub-manager's main task is:

sub-managers

A sub-manager is registered on the manager with the following function.

/**\n * @brief register managers.\n * @param manager pointer.\n */\nvoid registerSceneModuleManager(const SceneModuleManagerPtr & manager_ptr)\n{\nRCLCPP_INFO(logger_, \"register %s module\", manager_ptr->getModuleName().c_str());\nmanager_ptrs_.push_back(manager_ptr);\nprocessing_time_.emplace(manager_ptr->getModuleName(), 0.0);\n}\n

Code is here

Each sub-manager has the following parameters, which the manager needs in order to manage the launched modules; they can be set for each module.

struct ModuleConfigParameters\n{\nbool enable_module{false};\nbool enable_rtc{false};\nbool enable_simultaneous_execution_as_approved_module{false};\nbool enable_simultaneous_execution_as_candidate_module{false};\nuint8_t priority{0};\n};\n

Code is here

| Name | Type | Description |
| --- | --- | --- |
| enable_module | bool | if true, the sub-manager is registered on the manager. |
| enable_rtc | bool | if true, the scene module's path modification must be approved by the RTC (request to cooperate) function; if false, the module can run without approval from RTC. |
| enable_simultaneous_execution_as_candidate_module | bool | if true, the manager allows its scene modules to run with other scene modules as candidate modules. |
| enable_simultaneous_execution_as_approved_module | bool | if true, the manager allows its scene modules to run with other scene modules as approved modules. |
| priority | uint8_t | the manager decides execution priority based on this parameter; the smaller the number, the higher the priority. |
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-modules","title":"Scene modules","text":"

Scene modules receive necessary data and RTC commands, and output candidate path(s), a reference path, and the RTC cooperate status. When multiple modules run in series, the output of the previous module is received as input and used to generate a new modified path, as shown in the following figure. When one module runs alone, it receives, as the previous module's output, a reference path generated from the centerline of the lane in which Ego is currently driving.

scene module

| I/O | Type | Description |
| --- | --- | --- |
| IN | behavior_path_planner::BehaviorModuleOutput | previous module output; contains data necessary for path planning. |
| IN | behavior_path_planner::PlannerData | contains data necessary for path planning. |
| IN | tier4_planning_msgs::srv::CooperateCommands | contains approval data for the scene module's path modification. (details) |
| OUT | behavior_path_planner::BehaviorModuleOutput | contains the modified path, turn signal information, etc. |
| OUT | tier4_planning_msgs::msg::CooperateStatus | contains the RTC cooperate status. (details) |
| OUT | autoware_planning_msgs::msg::Path | candidate path output by a module that has not received approval for the path change; once approved, the path Ego follows is switched to this path. (just for visualization) |
| OUT | autoware_planning_msgs::msg::Path | reference path generated from the centerline of the lane Ego is going to follow. (just for visualization) |
| OUT | visualization_msgs::msg::MarkerArray | virtual walls, debug info, etc. |

Scene modules running on the manager are stored on the candidate modules stack or the approved modules stack, depending on whether their path modification has been approved or not.

| Stack | Approval condition | Description |
| --- | --- | --- |
| candidate modules | Not approved | Candidate modules whose modified path has not yet been approved by RTC are stored in the vector candidate_module_ptrs_ in the manager. The candidate modules stack is updated in the following order: 1. the manager selects only the modules that can be executed based on the configuration of the sub-manager whose scene module requests execution; 2. it determines the execution priority; 3. it executes the modules as candidate modules. All of these modules receive the decided (approved) path from the approved modules stack and RUN in PARALLEL. |
| approved modules | Already approved | When a path modification is approved via RTC commands, the manager moves the candidate module to the approved modules stack. These modules are stored in approved_module_ptrs_. In this stack, all scene modules RUN in SERIES. |
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#process-flow","title":"Process flow","text":"

There are 6 steps in one process:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step1","title":"Step1","text":"

First, the manager sets the latest planner data, runs all approved modules, and gets the output path. At this time, the manager checks each module's status and removes expired modules from the approved modules stack.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step2","title":"Step2","text":"

The approved modules' output and the necessary data are input to all registered modules, and each module judges the necessity of path modification based on them. The manager then checks which modules make an execution request.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step3","title":"Step3","text":"

Check whether any module has made an execution request.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step4","title":"Step4","text":"

The manager decides which modules to execute as candidate modules from among the modules that requested path modification.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step5","title":"Step5","text":"

Decide the priority order of execution among the candidate modules, and run all of them. Each module outputs a reference path and the RTC cooperate status.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step6","title":"Step6","text":"

Move approved modules from the candidate modules stack to the approved modules stack.

Within a single planning cycle, these steps are repeated until the following conditions are satisfied:

  while (rclcpp::ok()) {\n/**\n     * STEP1: get approved modules' output\n     */\nconst auto approved_modules_output = runApprovedModules(data);\n\n/**\n     * STEP2: check modules that need to be launched\n     */\nconst auto request_modules = getRequestModules(approved_modules_output);\n\n/**\n     * STEP3: if there is no module that needs to be launched, return the approved modules' output\n     */\nif (request_modules.empty()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n     * STEP4: if there is a module that should be launched, execute it\n     */\nconst auto [highest_priority_module, candidate_modules_output] =\nrunRequestModules(request_modules, data, approved_modules_output);\nif (!highest_priority_module) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n     * STEP5: if the candidate module's modification is NOT approved yet, return the result.\n     * NOTE: the result is the output of the candidate module, but the output path doesn't contain\n     * the path shape modification that needs approval. On the other hand, it could include a\n     * velocity profile modification.\n     */\nif (highest_priority_module->isWaitingApproval()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn candidate_modules_output;\n}\n\n/**\n     * STEP6: if the candidate module is approved, push the module into approved_module_ptrs_\n     */\naddApprovedModule(highest_priority_module);\nclearCandidateModules();\n}\n

Code is here

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#priority-of-execution-request","title":"Priority of execution request","text":"

The priority parameters of the sub-managers are compared to determine the order of execution, based on the config. Therefore, the priority between sub-modules does NOT change at runtime.

  /**\n   * @brief swap the module order based on priority.\n   * @param modules.\n   * @details for now, the priority is decided in the config file and doesn't change at runtime.\n   */\nvoid sortByPriority(std::vector<SceneModulePtr> & modules) const\n{\n// TODO(someone) enhance this priority decision method.\nstd::sort(modules.begin(), modules.end(), [this](auto a, auto b) {\nreturn getManager(a)->getPriority() < getManager(b)->getPriority();\n});\n}\n

Code is here

In the future, however, we are considering having the priorities change dynamically depending on the situation in order to achieve more complex use cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-request-modules-to-run","title":"How to decide which request modules to run?","text":"

In this manager, multiple scene modules may request path modification at the same time. In that case, the modules to be executed as candidate modules are determined in the following order.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step1_1","title":"Step1","text":"

Push back the modules that make a request to request_modules.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step2_1","title":"Step2","text":"

Check the approved modules stack, and remove non-executable modules from request_modules based on the following conditions.

Executable or not:

| Condition A | Condition B | Condition C | Executable as candidate modules? |
| --- | --- | --- | --- |
| YES | - | YES | YES |
| YES | - | NO | YES |
| NO | YES | YES | YES |
| NO | YES | NO | NO |
| NO | NO | YES | NO |
| NO | NO | NO | NO |

If a module that doesn't support simultaneous execution exists in the approved modules stack (i.e., Condition B is NOT satisfied), no more modules can be added to the stack, and therefore none of the request modules can be executed as candidates.

For example, if an approved module's enable_simultaneous_execution_as_approved_module setting is ENABLE, then only modules whose setting is also ENABLE proceed to the next step.
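
A minimal sketch of this filtering step is shown below; the function name and the isSimultaneousExecutableAsApprovedModule accessor are illustrative assumptions based on the parameter table above, not the actual implementation.

// Illustrative sketch: keep only the request modules that may run alongside\n// the already approved modules (Condition B in the table above).\nstd::vector<SceneModulePtr> filterByApprovedModules(\nconst std::vector<SceneModulePtr> & request_modules,\nconst std::vector<SceneModulePtr> & approved_modules) const\n{\n// if any approved module forbids simultaneous execution, nothing can be added\nfor (const auto & approved : approved_modules) {\nif (!getManager(approved)->isSimultaneousExecutableAsApprovedModule()) {\nreturn {};\n}\n}\n\n// otherwise, keep the request modules that themselves allow simultaneous execution\nstd::vector<SceneModulePtr> executable;\nfor (const auto & request : request_modules) {\nif (approved_modules.empty() || getManager(request)->isSimultaneousExecutableAsApprovedModule()) {\nexecutable.push_back(request);\n}\n}\nreturn executable;\n}\n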

Other examples:

If the approved modules stack is empty, all request modules proceed to the next step, regardless of their enable_simultaneous_execution_as_approved_module setting.

If an approved module's enable_simultaneous_execution_as_approved_module setting is DISABLE, all request modules are discarded.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step3_1","title":"Step3","text":"

Sort request_modules by priority.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step4_1","title":"Step4","text":"

Check and pick up executable modules as candidates in order of priority, based on the following conditions.

Executable or not:

| Condition A | Condition B | Condition C | Executable as candidate modules? |
| --- | --- | --- | --- |
| YES | - | YES | YES |
| YES | - | NO | YES |
| NO | YES | YES | YES |
| NO | YES | NO | NO |
| NO | NO | YES | NO |
| NO | NO | NO | NO |

For example, if the highest-priority module's enable_simultaneous_execution_as_candidate_module setting is DISABLE, then all modules from the second priority onward are discarded.

Other examples:

If a module with a higher priority exists, lower-priority modules whose enable_simultaneous_execution_as_candidate_module setting is DISABLE are discarded.

If all modules' enable_simultaneous_execution_as_candidate_module settings are ENABLE, all modules proceed to the next step.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#step5_1","title":"Step5","text":"

Run all candidate modules.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-modules-output-to-use","title":"How to decide which module's output to use?","text":"

Sometimes, multiple candidate modules are running simultaneously.

In this case, the manager selects the candidate module whose output path is used as the behavior_path_planner output based on the approval condition, following the rules below.

| Module A's approval condition | Module A's priority | Module B's approval condition | Module B's priority | Final priority |
| --- | --- | --- | --- | --- |
| Approved | 1 | Approved | 99 | Module A > Module B |
| Approved | 1 | Not approved | 99 | Module A > Module B |
| Not approved | 1 | Approved | 99 | Module B > Module A |
| Not approved | 1 | Not approved | 99 | Module A > Module B |

Note

The smaller the number is, the higher the priority is.

module priority

Additionally, the manager moves the highest-priority module to the approved modules stack if it is already approved.
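
These rules can be summarized in a short sketch: an approved candidate always wins over a non-approved one, and ties are broken by the priority parameter. The function below is an illustration, not the actual implementation.

// Illustrative sketch: select the candidate module whose output is used.\nSceneModulePtr selectHighestPriorityModule(std::vector<SceneModulePtr> & candidate_modules) const\n{\nif (candidate_modules.empty()) {\nreturn nullptr;\n}\n\nsortByPriority(candidate_modules);  // highest priority (smallest number) first\n\n// prefer the highest priority module that is already approved\nfor (const auto & candidate : candidate_modules) {\nif (!candidate->isWaitingApproval()) {\nreturn candidate;\n}\n}\n\n// if none is approved, fall back to the highest priority module\nreturn candidate_modules.front();\n}\n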

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-module-unregister-process","title":"Scene module unregister process","text":"

The manager removes expired modules from the approved modules stack based on each module's status.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#waiting-approval-modules","title":"Waiting approval modules","text":"

If a module requests multiple path changes, it may return to the waiting-approval state again. In this case, the manager moves the module back to the candidate modules stack. Any module that was pushed onto the approved modules stack later than the waiting-approval module is also removed from the approved modules stack.

This is because module C plans its output path with the output of module B as input; if module B is removed from the approved modules stack and the input of module C changes, the output path of module C may also change greatly, and the output path would be unstable.

As a result, module A's output is used as the output of the approved modules stack.

If this case happens in a slot, is_upstream_waiting_approved is set to true.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#failure-modules","title":"Failure modules","text":"

If a module returns ModuleStatus::FAILURE, the manager removes the failed module. Additionally, all modules after the failed module are removed, even if they did not return ModuleStatus::FAILURE. These modules are not added back to the candidate modules stack and will instead run again from the beginning. Once these modules are removed, the output of the module prior to the failed module will be used as the planner's output.

As shown in the example below, modules B, A, and C are running. When module A returns ModuleStatus::FAILURE, both module A and C are removed from the approved modules stack. Module B's output is then used as the final output of the planner.

If this case happens in a slot, is_upstream_failed_approved is set to true.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#succeeded-modules","title":"Succeeded modules","text":"

Succeeded modules return the status ModuleStatus::SUCCESS. The manager removes those modules based on a Last In, First Out (LIFO) policy. In other words, if a module that was added later to the approved modules stack is still running (is in ModuleStatus::RUNNING), the manager doesn't remove the succeeded module. The reason is the same as in the removal of waiting-approval modules: to prevent sudden changes of the running module's output.

As an exception, if the Lane Change module returns status ModuleStatus::SUCCESS, the manager doesn't remove any modules until all modules are in status ModuleStatus::SUCCESS. This is because, when the manager removes the Lane Change (normal LC, external LC, avoidance by LC) module as a succeeded module, it updates the information of the lane Ego is currently driving in, so the root reference path (= module A's input path) changes significantly at that moment.

When the manager removes succeeded modules, the last added module's output is used as the output of the approved modules stack.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#slot-output-propagation","title":"Slot output propagation","text":"

Initially, the following SlotOutput is passed to the first slot.

  SlotOutput result_output = SlotOutput{\ngetReferencePath(data),\nfalse,  // is_upstream_candidate_exclusive\nfalse,  // is_upstream_failed_approved\nfalse,  // is_upstream_waiting_approved\n};\n

If a slot's output has is_upstream_failed_approved set, then all subsequent slots are refreshed and have all of their approved_modules and candidate_modules cleared. The valid_output is just transferred to the end without any modification.

If a slot's output has is_upstream_waiting_approved set, then all subsequent slots clear their candidate_modules once and run their approved_modules to obtain the slot output.

If a slot's output has is_upstream_candidate_exclusive set, it means that a module that is not simultaneously executable as a candidate is running in that slot. All subsequent slots then just run their approved_modules without trying to launch new candidate_modules.
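
These propagation rules can be sketched as the following loop over the slots; the slot method names are illustrative assumptions, not the actual implementation.

// Illustrative sketch: propagate SlotOutput through the chain of slots.\nSlotOutput runSlots(const std::shared_ptr<PlannerData> & data)\n{\nSlotOutput result_output = SlotOutput{getReferencePath(data), false, false, false};\n\nfor (auto & slot : slots_) {\nif (result_output.is_upstream_failed_approved) {\n// an upstream slot failed: clear this slot and just forward the output\nslot.clearApprovedAndCandidateModules();\n} else if (result_output.is_upstream_waiting_approved) {\n// refresh the candidates once, then run only the approved modules\nresult_output = slot.runApprovedWithRefreshedCandidates(data, result_output);\n} else if (result_output.is_upstream_candidate_exclusive) {\n// an exclusive candidate runs upstream: apply approved modules only\nresult_output = slot.runApprovedOnly(data, result_output);\n} else {\nresult_output = slot.run(data, result_output);\n}\n}\nreturn result_output;\n}\n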

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#reference-path-generation","title":"Reference path generation","text":"

The reference path is generated from the centerline of the lanelet sequence obtained from the current route lanelet, and it is not only used as an input to the first added module of approved modules stack, but also used as the output of behavior_path_planner if none of the modules are running.

root reference path generation

The current route lanelet keeps track of the route lanelet currently followed by the planner. It is initialized as the closest lanelet within the route. It is then updated as ego travels along the route such that (1) it follows the previous current route lanelet and (2) it is the closest lanelet within the route.

The current route lanelet can be reset to the closest lanelet within the route, ignoring whether it follows the previous current route lanelet.

The manager needs to know the ego behavior and then generate a root reference path from the lanes that Ego should follow.

For example, during autonomous driving, even if Ego moves into the next lane in order to avoid a parked vehicle, the target lanes that Ego should follow will NOT change because Ego will return to the original lane after the avoidance maneuver. Therefore, the manager does NOT reset the current route lanelet, even if the avoidance maneuver is finished.

On the other hand, if the lane change is successful, the manager resets the current route lanelet because the lane that Ego should follow changes.

In addition, while manually driving (i.e., either the OperationModeState is different from AUTONOMOUS or the Autoware control is not engaged), the manager resets the current route lanelet at each iteration because the ego vehicle may move to an adjacent lane regardless of the decision of the autonomous driving system. The only exception is when a module is already approved, allowing testing the module's behavior while manually driving.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#drivable-area-generation","title":"Drivable area generation","text":"

Warning

Under Construction

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner/docs/behavior_path_planner_manager_design/#turn-signal-management","title":"Turn signal management","text":"

Warning

Under Construction

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/","title":"Drivable Area design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-design","title":"Drivable Area design","text":"

The Drivable Area represents the area in which the ego vehicle can travel.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#purpose-role","title":"Purpose / Role","text":"

In order to define the area where the ego vehicle can travel safely, we generate the drivable area in the behavior path planner module. The drivable area is represented by two line strings: the left_bound line and the right_bound line. Both left_bound and right_bound are created from the left and right boundaries of the lanelets. Note that left_bound and right_bound are generated by the generateDrivableArea function.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#assumption","title":"Assumption","text":"

Our drivable area has several assumptions.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#limitations","title":"Limitations","text":"

Currently, when clipping the left or right bound, more of the bound than necessary may be clipped, making the generated path conservative.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#parameters-for-drivable-area-generation","title":"Parameters for drivable area generation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion","title":"Static expansion","text":"Name Unit Type Description Default value drivable_area_right_bound_offset [m] double right offset length to expand drivable area 5.0 drivable_area_left_bound_offset [m] double left offset length to expand drivable area 5.0 drivable_area_types_to_skip [-] string linestring types (as defined in the lanelet map) that will not be expanded road_border"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion","title":"Dynamic expansion","text":"Name Unit Type Description Default value enabled [-] boolean if true, dynamically expand the drivable area based on the path curvature true print_runtime [-] boolean if true, runtime is logged by the node true max_expansion_distance [m] double maximum distance by which the original drivable area can be expanded (no limit if set to 0) 0.0 smoothing.curvature_average_window [-] int window size used for smoothing the curvatures using a moving window average 3 smoothing.max_bound_rate [m/m] double maximum rate of change of the bound lateral distance over its arc length 1.0 smoothing.arc_length_range [m] double arc length range where an expansion distance is initially applied 2.0 ego.extra_wheel_base [m] double extra ego wheelbase 0.0 ego.extra_front_overhang [m] double extra ego overhang 0.5 ego.extra_width [m] double extra ego width 1.0 dynamic_objects.avoid [-] boolean if true, the drivable area is not expanded in the predicted path of dynamic objects true dynamic_objects.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.5 path_preprocessing.max_arc_length [m] double maximum arc length along the path where the ego footprint is projected (0.0 means no limit) 100.0 path_preprocessing.resample_interval [m] double fixed interval between resampled path points (0.0 means path points are directly used) 2.0 path_preprocessing.reuse_max_deviation [m] double if the path changes by more than this value, the curvatures are recalculated. Otherwise they are reused 0.5 avoid_linestring.types [-] string array linestring types in the lanelet maps that will not be crossed when expanding the drivable area [\"road_border\", \"curbstone\"] avoid_linestring.distance [m] double distance to keep between the drivable area and the linestrings to avoid 0.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This section gives details of the generation of the drivable area (left_bound and right_bound).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-lanes-generation","title":"Drivable Lanes Generation","text":"

Before generating the drivable area, the drivable lanes need to be sorted. Drivable lanes are selected in each module (Lane Follow, Avoidance, Lane Change, Goal Planner, Pull Out, etc.), so more details about the selection of drivable lanes can be found in each module's document. We use the following structure to define the drivable lanes.

struct DrivableLanes\n{\nlanelet::ConstLanelet right_lanelet; // right most lane\nlanelet::ConstLanelet left_lanelet; // left most lane\nlanelet::ConstLanelets middle_lanelets; // middle lanes\n};\n

The image of the sorted drivable lanes is depicted in the following picture.

Note that the order of the drivable lanes becomes

drivable_lanes = {DrivableLanes1, DrivableLanes2, DrivableLanes3, DrivableLanes4, DrivableLanes5}\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-generation","title":"Drivable Area Generation","text":"

In this section, the drivable area is created using the drivable lanes arranged in the order in which vehicles pass them. We create left_bound from the left boundary of the leftmost lanelet and right_bound from the right boundary of the rightmost lanelet. The image of the created drivable area is shown as the blue lines below. Note that the drivable area is defined in the Path and PathWithLaneId messages as

std::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n

and each point of the right and left bounds has a position in the absolute coordinate system.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-expansion","title":"Drivable Area Expansion","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion_1","title":"Static Expansion","text":"

Each module can statically expand the left and right bounds of the target lanes by parameter-defined values. This enables large vehicles to pass narrow curves. The image of this process can be described as

Note that we only expand the right bound of the rightmost lane and the left bound of the leftmost lane.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion_1","title":"Dynamic Expansion","text":"

The drivable area can also be expanded dynamically based on a minimum width calculated from the path curvature and the ego vehicle's properties. If static expansion is also enabled, the dynamic expansion will be done after the static expansion such that both expansions are applied.

Without dynamic expansion / With dynamic expansion

Next we detail the algorithm used to expand the drivable area bounds.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#1-calculate-and-smooth-the-path-curvature","title":"1 Calculate and smooth the path curvature","text":"

To avoid sudden changes of the dynamically expanded drivable area, we first try to reuse as much of the previous path and its calculated curvatures as possible. Previous path points and curvatures are reused up to the first previous path point that deviates from the new path by more than the reuse_max_deviation parameter. At this stage, the path is also resampled according to resample_interval and cropped according to max_arc_length. With the resulting preprocessed path points and previous curvatures, the curvatures of the new path points are calculated using the 3-point method and smoothed using a moving window average with window size curvature_average_window.
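
As an illustration, the moving window average used for the smoothing could look like the following sketch (illustrative code, not the actual implementation):

#include <algorithm>\n#include <vector>\n\n// Illustrative sketch: smooth discrete curvature values with a moving window\n// average of half-width \"window\" (cf. the curvature_average_window parameter).\nstd::vector<double> smoothCurvatures(const std::vector<double> & curvatures, const int window)\n{\nconst int size = static_cast<int>(curvatures.size());\nstd::vector<double> smoothed(curvatures.size(), 0.0);\nfor (int i = 0; i < size; ++i) {\ndouble sum = 0.0;\nint count = 0;\n// average over the window centered at i, clamped at the path ends\nfor (int j = std::max(0, i - window); j <= std::min(size - 1, i + window); ++j) {\nsum += curvatures[j];\n++count;\n}\nsmoothed[i] = sum / count;\n}\nreturn smoothed;\n}\n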

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#2-for-each-path-point-calculate-the-closest-bound-segment-and-the-minimum-drivable-area-width","title":"2 For each path point, calculate the closest bound segment and the minimum drivable area width","text":"

Each path point is projected on the original left and right drivable area bounds to calculate its corresponding bound index, its original distance from the bounds, and the projected point. Additionally, for each path point, the minimum drivable area width \(W\) is calculated from the front overhang of ego \(a\), the wheelbase of ego \(l\), the width of ego \(w\), and the path curvature \(k\), using an equation derived from the work of Lim, H., Kim, C., and Jo, A., \"Model Predictive Control-Based Lateral Control of Autonomous Large-Size Bus on Road with Large Curvature,\" SAE Technical Paper 2021-01-0099, 2021.
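
The exact equation is not reproduced here. As a point of reference only, standard swept-path geometry for a vehicle whose rear axle follows an arc of curvature \(k\) yields a width of the following form; the equation used in the implementation may differ in detail.

\[ \begin{align} W = \sqrt{\left(\frac{1}{k} + \frac{w}{2}\right)^2 + (l + a)^2} - \left(\frac{1}{k} - \frac{w}{2}\right) \end{align} \]

In this form, the required width grows with the curvature, the wheelbase, and the front overhang, matching the intuition that long vehicles sweep a wider area in tight curves.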

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#3-calculate-maximum-expansion-distances-of-each-bound-point-based-on-dynamic-objects-and-linestring-of-the-vector-map-optional","title":"3 Calculate maximum expansion distances of each bound point based on dynamic objects and linestring of the vector map (optional)","text":"

For each drivable area bound point, we calculate its maximum expansion distance as its distance to the closest \"obstacle\" (either a map linestring with a type listed in avoid_linestring.types, or a dynamic object footprint if dynamic_objects.avoid is set to true). If max_expansion_distance is not 0.0, it is used here if it is smaller than the distance to the closest obstacle.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#4-calculate-by-how-much-each-bound-point-should-be-pushed-away-from-the-path","title":"4 Calculate by how much each bound point should be pushed away from the path","text":"

For each bound point, a shift distance is calculated such that the resulting width between corresponding left and right bound points is as close as possible to the minimum width calculated in step 2, while each individual shift distance stays below the previously calculated maximum expansion distance.
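
A minimal sketch of this clamping, assuming the expansion is split evenly between the left and right bounds (illustrative, not the actual implementation):

#include <algorithm>\n\n// Illustrative sketch: expand one side of the drivable area only as far as\n// needed to reach the minimum width, clamped by the maximum expansion distance.\ndouble calcBoundShift(\nconst double current_width, const double minimum_width, const double max_expansion_distance)\n{\n// half of the missing width is assigned to this side of the drivable area\nconst double required_shift = std::max(0.0, (minimum_width - current_width) / 2.0);\nreturn std::min(required_shift, max_expansion_distance);\n}\n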

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#5-shift-bound-points-by-the-values-calculated-in-step-4-and-remove-all-loops-in-the-resulting-bound","title":"5 Shift bound points by the values calculated in step 4 and remove all loops in the resulting bound","text":"

Finally, each bound point is shifted away from the path by the distance calculated in step 4. Once all points have been shifted, loops are removed from the bound and we obtain our final expanded drivable area.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#visualizing-maximum-drivable-area-debug","title":"Visualizing maximum drivable area (Debug)","text":"

Sometimes, developers might get different results between two maps that look identical during visual inspection.

For example, in the same area, one map allows avoidance while the other does not. This might be related to maximum drivable area issues caused by a non-compliant vector map design.

To debug the issue, the maximum drivable area boundary can be visualized.

The maximum drivable area can be visualized by adding the marker from /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/maximum_drivable_area

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#expansion-with-hatched-road-markings-area","title":"Expansion with hatched road markings area","text":"

If the hatched road markings area is defined in the lanelet map, the area can be used as a drivable area. Since the area is expressed in the polygon format of Lanelet2, several steps are required for correct expansion.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/","title":"Path Generation design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#path-generation-design","title":"Path Generation design","text":"

This document explains how the path is generated for lane change and avoidance, etc. The implementation can be found in path_shifter.hpp.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#overview","title":"Overview","text":"

The base idea of the path generation in lane change and avoidance is to smoothly shift the reference path, such as the center line, in the lateral direction. This is achieved by using a constant-jerk profile as in the figure below. More details on how it is used can be found in README. It is assumed that the reference path is smooth enough for this algorithm.

The figure below explains how the application of a constant lateral jerk \\(l^{'''}(s)\\) can be used to induce lateral shifting. In order to comply with the limits on lateral acceleration and velocity, zero-jerk time is employed in the figure ( \\(T_a\\) and \\(T_v\\) ). In each interval where constant jerk is applied, the shift position \\(l(s)\\) can be characterized by a third-degree polynomial. Therefore the shift length from the reference path can then be calculated by combining spline curves.

Note that, since \(T_v\) rarely occurs in lane change and avoidance, it is not considered in the current implementation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#mathematical-derivation","title":"Mathematical Derivation","text":"

With initial longitudinal velocity \\(v_0^{\\rm lon}\\) and longitudinal acceleration \\(a^{\\rm lon}\\), longitudinal position \\(s(t)\\) and longitudinal velocity at each time \\(v^{\\rm lon}(t)\\) can be derived as:

\\[ \\begin{align} s_1&= v^{\\rm lon}_0 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_1&= v^{\\rm lon}_0 + a^{\\rm lon} T_j \\\\ s_2&= v^{\\rm lon}_1 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_2&= v^{\\rm lon}_1 + a^{\\rm lon} T_a \\\\ s_3&= v^{\\rm lon}_2 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_3&= v^{\\rm lon}_2 + a^{\\rm lon} T_j \\\\ s_4&= v^{\\rm lon}_3 T_v + \\frac{1}{2} a^{\\rm lon} T_v^2 \\\\ v_4&= v^{\\rm lon}_3 + a^{\\rm lon} T_v \\\\ s_5&= v^{\\rm lon}_4 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_5&= v^{\\rm lon}_4 + a^{\\rm lon} T_j \\\\ s_6&= v^{\\rm lon}_5 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_6&= v^{\\rm lon}_5 + a^{\\rm lon} T_a \\\\ s_7&= v^{\\rm lon}_6 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_7&= v^{\\rm lon}_6 + a^{\\rm lon} T_j \\end{align} \\]

By applying simple integral operations, the following analytical equations can be derived to describe the shift distance \\(l(t)\\) at each time under lateral jerk, lateral acceleration, and velocity constraints.

\\[ \\begin{align} l_1&= \\frac{1}{6}jT_j^3\\\\[10pt] l_2&= \\frac{1}{6}j T_j^3 + \\frac{1}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_3&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_4&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_5&= \\frac{11}{6} j T_j^3 + \\frac{5}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v \\\\[10pt] l_6&= \\frac{11}{6} j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_7&= 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v \\end{align} \\]

These equations are used to determine the shape of a path. Additionally, by applying further mathematical operations to these basic equations, the expressions of the following subsections can be derived.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-maximum-acceleration-from-transition-time-and-final-shift-length","title":"Calculation of Maximum Acceleration from transition time and final shift length","text":"

In the case where there are no limitations on lateral velocity and lateral acceleration, the maximum lateral acceleration during the shifting can be calculated as follows. The constant-jerk time is given by \\(T_j = T_{\\rm total}/4\\) because of its symmetric property. Since \\(T_a=T_v=0\\), the final shift length \\(L=l_7=2jT_j^3\\) can be determined using the above equation. The maximum lateral acceleration is then given by \\(a_{\\rm max} =jT_j\\). This results in the following expression for the maximum lateral acceleration:

\\[ \\begin{align} a_{\\rm max}^{\\rm lat} = \\frac{8L}{T_{\\rm total}^2} \\end{align} \\]"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-ta-tj-and-jerk-from-acceleration-limit","title":"Calculation of Ta, Tj and jerk from acceleration limit","text":"

In the case where there are no limitations on lateral velocity, the constant-jerk and acceleration times, as well as the required jerk can be calculated from the acceleration limit, total time, and final shift length as follows. Since \\(T_v=0\\), the final shift length is given by:

\\[ \\begin{align} L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j \\end{align} \\]

Additionally, the velocity profile reveals the following relations:

\\[ \\begin{align} a_{\\rm lim}^{\\rm lat} &= j T_j\\\\ T_{\\rm total} &= 4T_j + 2T_a \\end{align} \\]

By solving these three equations, the following can be obtained:

\[ \begin{align} T_j&=\frac{T_{\rm total}}{2} - \frac{2L}{a_{\rm lim}^{\rm lat} T_{\rm total}}\\[10pt] T_a&=\frac{4L}{a_{\rm lim}^{\rm lat} T_{\rm total}} - \frac{T_{\rm total}}{2}\\[10pt] jerk&=\frac{2\left(a_{\rm lim}^{\rm lat}\right)^2T_{\rm total}}{a_{\rm lim}^{\rm lat} T_{\rm total}^2-4L} \end{align} \]

where \\(T_j\\) is the constant-jerk time, \\(T_a\\) is the constant acceleration time, \\(j\\) is the required jerk, \\(a_{\\rm lim}^{\\rm lat}\\) is the lateral acceleration limit, and \\(L\\) is the final shift length.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-required-time-from-jerk-and-acceleration-constraint","title":"Calculation of Required Time from Jerk and Acceleration Constraint","text":"

In the case where there are no limitations on lateral velocity, the total time required for shifting can be calculated from the lateral jerk and lateral acceleration limits and the final shift length as follows. By solving the two equations given above:

\\[ L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j,\\quad a_{\\rm lim}^{\\rm lat} = j T_j \\]

we obtain the following expressions:

\[ \begin{align} T_j &= \frac{a_{\rm lim}^{\rm lat}}{j}\\[10pt] T_a &= \frac{1}{2}\sqrt{\left(\frac{a_{\rm lim}^{\rm lat}}{j}\right)^2 + \frac{4L}{a_{\rm lim}^{\rm lat}}} - \frac{3a_{\rm lim}^{\rm lat}}{2j} \end{align} \]

The total time required for shifting can then be calculated as \\(T_{\\rm total}=4T_j+2T_a\\).
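
These equations translate directly into code. The following sketch computes the total shift time from the jerk and lateral acceleration limits and the final shift length, under the stated no-lateral-velocity-limit assumption (an illustration, not the actual implementation):

#include <cmath>\n\n// Illustrative sketch: total time T_total = 4 T_j + 2 T_a from the equations above.\ndouble calcShiftTimeFromJerk(const double lateral_jerk, const double lateral_acc_limit, const double shift_length)\n{\nconst double tj = lateral_acc_limit / lateral_jerk;\nconst double ta = 0.5 * std::sqrt(std::pow(lateral_acc_limit / lateral_jerk, 2.0) + 4.0 * shift_length / lateral_acc_limit) - 1.5 * lateral_acc_limit / lateral_jerk;\n\nif (ta < 0.0) {\n// the acceleration limit is never reached: T_a = 0 and L = 2 j T_j^3,\n// so T_j is recomputed from the shift length alone\nreturn 4.0 * std::cbrt(shift_length / (2.0 * lateral_jerk));\n}\nreturn 4.0 * tj + 2.0 * ta;\n}\n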

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/","title":"Safety Check Utils","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#safety-check-utils","title":"Safety Check Utils","text":"

The safety check function checks whether a given path will collide with a given target object.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#purpose-role","title":"Purpose / Role","text":"

In the behavior path planner, certain modules (e.g., lane change) need to perform collision checks to ensure the safe navigation of the ego vehicle. These utility functions assist the user in conducting safety checks with other road participants.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#assumptions","title":"Assumptions","text":"

The safety check module is based on the following assumptions:

  1. Users must provide the position, velocity, and shape of both the ego and target objects to the utility functions.
  2. The yaw angle of each point in the predicted path of both the ego and target objects should point to the next point in the path.
  3. The safety check module uses RSS distance to determine the safety of a potential collision with other objects.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#limitations","title":"Limitations","text":"

Currently, the yaw angle of each point of a target object's predicted path does not point to the next point. Therefore, the safety check function might return incorrect results in some edge cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#inner-working-algorithm","title":"Inner working / Algorithm","text":"

The flow of the safety check algorithm is described in the following explanations.

Here we explain each step of the algorithm flow.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#1-get-pose-of-the-target-object-at-a-given-time","title":"1. Get pose of the target object at a given time","text":"

For the first step, we obtain the pose of the target object at a given time. This can be done by interpolating the predicted path of the object.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#2-check-overlap","title":"2. Check overlap","text":"

With the interpolated pose obtained in step 1, we check whether the object and the ego vehicle overlap at the given time. If they overlap, the given path is considered unsafe.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#3-get-front-object","title":"3. Get front object","text":"

After the overlap check, the safety check is performed over a broader range. In this step, we judge whether ego or the target object is in front of the other. We use the arc length of the front point of each object along the given path to judge which one is in front. In the following example, the target object (red rectangle) is driving in front of the ego vehicle (black rectangle).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#4-calculate-rss-distance","title":"4. Calculate RSS distance","text":"

After determining which vehicle is running ahead of the other, we compute the RSS distance. With the reaction time \(t_{reaction}\) and the safety time margin \(t_{margin}\), the RSS distance can be described as:

\[ rss_{dist} = v_{rear} (t_{reaction} + t_{margin}) + \frac{v_{rear}^2}{2|a_{rear, decel}|} - \frac{v_{front}^2}{2|a_{front, decel}|} \]

where \\(V_{front}\\), \\(v_{rear}\\) are front and rear vehicle velocity respectively and \\(a_{rear, front}\\), \\(a_{rear, decel}\\) are front and rear vehicle deceleration. Note that RSS distance is normally used for objects traveling in the same direction, if the yaw difference between a given ego pose and object pose is more than a user-defined yaw difference threshold, the rss collision check will be skipped for that specific pair of poses.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#5-create-extended-ego-and-target-object-polygons","title":"5. Create extended ego and target object polygons","text":"

In this step, we compute extended ego and target object polygons. The extended polygons can be described as:

As the picture shows, we expand the rear object polygon. For the longitudinal side, we extend it with the RSS distance, and for the lateral side, we extend it by the lateral margin.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_safety_check/#6-check-overlap","title":"6. Check overlap","text":"

Similar to the previous step, we check the overlap of the extended rear object polygon and the front object polygon. If they overlap, we regard the situation as unsafe.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/","title":"Turn Signal design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#turn-signal-design","title":"Turn Signal design","text":"

The Turn Signal decider determines the necessary blinkers.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#purpose-role","title":"Purpose / Role","text":"

This module is responsible for activating the necessary blinker during driving. It uses a rule-based algorithm to determine the blinkers, and the details of this algorithm are described in the following sections. Note that this algorithm is strictly based on the Japanese Road Traffic Law.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#assumptions","title":"Assumptions","text":"

Autoware has the following order of priorities for turn signals.

  1. Activate turn signal to safely navigate ego vehicle and protect other road participants
  2. Follow traffic laws
  3. Follow human driving practices
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#limitations","title":"Limitations","text":"

Currently, this algorithm can sometimes give unnatural (not wrong) blinkers in complicated situations. This is because it tries to follow the road traffic law and cannot solve blinker conflicts clearly in that environment.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#parameters-for-turn-signal-decider","title":"Parameters for turn signal decider","text":"Name Unit Type Description Default value turn_signal_intersection_search_distance [m] double constant search distance to decide activation of blinkers at intersections 30 turn_signal_intersection_angle_threshold_degree deg double angle threshold to determined the end point of intersection required section 15 turn_signal_minimum_search_distance [m] double minimum search distance for avoidance and lane change 10 turn_signal_search_time [s] double search time for to decide activation of blinkers 3.0 turn_signal_shift_length_threshold [m] double shift length threshold to decide activation of blinkers 0.3 turn_signal_remaining_shift_length_threshold [m] double When the ego's current shift length minus its end shift length is less than this threshold, the turn signal will be turned off. 0.1 turn_signal_on_swerving [-] bool flag to activate blinkers when swerving true

Note that the default values for turn_signal_intersection_search_distance and turn_signal_search_time strictly follow the Japanese Road Traffic Law. If these default values are not suitable for your country, you should change them in the configuration files.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm assumes that each blinker has two sections: a desired section and a required section. These two sections are depicted in the following diagram.

These two sections have the following meanings.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-desired-section","title":"- Desired Section","text":"
- This section is defined by road traffic laws. It cannot be longer or shorter than the designated length defined by the law.\n- In this section, you do not have to activate the designated blinkers if it is dangerous to do so.\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-required-section","title":"- Required Section","text":"
- In this section, the ego vehicle must activate the designated blinkers. However, if there are blinker conflicts, it must resolve them based on the algorithm mentioned later in this document.\n- The required section cannot be longer than the desired section.\n

When turning on a blinker, the module decides whether to activate it based on the distance from the front of the ego vehicle to the start point of each section. Conversely, when turning off a blinker, it calculates the distance from the base link of the ego vehicle to the end point of each section and decides whether to deactivate it based on that.
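
A minimal sketch of this on/off decision is shown below; the names are hypothetical, and deriving the search distance from turn_signal_search_time and turn_signal_minimum_search_distance is an assumption based on the parameter table above:

#include <algorithm>\n\n// Sketch of the on/off decision described above (hypothetical names; distances in meters along the path).\nbool should_turn_on(const double dist_ego_front_to_section_start, const double ego_speed, const double search_time, const double min_search_distance)\n{\n  // Activate once the section start point comes within the search distance from the ego front.\n  const double search_distance = std::max(ego_speed * search_time, min_search_distance);\n  return dist_ego_front_to_section_start < search_distance;\n}\n\nbool should_turn_off(const double dist_base_link_to_section_end)\n{\n  // Deactivate once the ego base link has passed the end point of the section.\n  return dist_base_link_to_section_end < 0.0;\n}\n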

For left turn, right turn, avoidance, lane change, goal planner and pull out, we define these two sections, which are elaborated in the following sections.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#1-left-and-right-turn","title":"1. Left and Right turn","text":"

The turn signal decider checks whether each lanelet on the map has turn_direction information. If a lanelet has this information, it activates the necessary blinker accordingly.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#2-avoidance","title":"2. Avoidance","text":"

Avoidance can be separated into two sections: the first section and the second section. The first section extends from the start point of the path shift to the end of the path shift. The second section extends from the end point of the shift to the end of the avoidance maneuver. Note that the avoidance module will not activate a turn signal when its shift length is below turn_signal_shift_length_threshold.

First section

Second section

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#3-lane-change","title":"3. Lane Change","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#4-pull-out","title":"4. Pull out","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#5-goal-planner","title":"5. Goal Planner","text":"

When handling several blinkers, the turn signal decider gives priority to the blinker whose section starts first. However, this rule alone sometimes activates unnatural blinkers, so the turn signal decider uses the following five rules to decide the necessary turn signal.

Based on these five rules, the turn signal decider can resolve blinker conflicts. The following pictures show some examples of such conflicts.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-several-right-and-left-turns-on-short-sections","title":"- Several right and left turns on short sections","text":"

In this scenario, the ego vehicle has to pass several turns that are close to each other. Since this pattern can be solved by the pattern1 rule, the overall result is depicted in the following picture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-1","title":"- Avoidance with left turn (1)","text":"

In this scene, the ego vehicle has to deal with an obstacle on its original path as well as make a left turn. The overall result varies with the position of the obstacle, but an example of the result is shown in the following picture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-2","title":"- Avoidance with left turn (2)","text":"

As in the previous scenario, the ego vehicle has to avoid the obstacle as well as make a left turn. In this scene, however, the obstacle is parked beyond the intersection. As before, the overall result varies with the position of the obstacle, but an example of the result is shown in the following picture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-lane-change-and-left-turn","title":"- Lane change and left turn","text":"

In this scenario, the ego vehicle has to perform a lane change before making a left turn. In the following example, the ego vehicle does not activate the left turn signal until it reaches the end point of the lane change path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/","title":"Behavior Path Sampling Based Planner","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#behavior-path-sampling-based-planner","title":"Behavior Path Sampling Based Planner","text":"

WARNING: This module is experimental and has not been properly tested on a real vehicle; use it only in simulations.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#purpose","title":"Purpose","text":"

This package implements a node that uses sampling based planning to generate a drivable trajectory for the behavior path planner. It is heavily based on the sampling_based_planner module.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#features","title":"Features","text":"

This package is able to:

Note that the velocity is simply carried over from the input path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#algorithm","title":"Algorithm","text":"

Sampling based planning is decomposed into 3 successive steps (see the sketch after this list):

  1. Sampling: candidate trajectories are generated.
  2. Pruning: invalid candidates are discarded.
  3. Selection: the best remaining valid candidate is selected.
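
As a rough illustration of this pipeline, the sketch below (hypothetical types and cost/constraint functions) shows how pruning and selection could be combined once the candidates from the sampling step are available:

#include <limits>\n#include <vector>\n\nstruct Trajectory { /* sampled points, curvature profile, etc. */ };\n\n// Hypothetical helpers: hard-constraint validity check and weighted soft-constraint cost.\nbool satisfies_hard_constraints(const Trajectory &) { return true; }\ndouble soft_constraint_cost(const Trajectory &) { return 0.0; }\n\n// Step 1 (sampling) produces the candidates; steps 2 and 3 are shown below.\nconst Trajectory * select_best(const std::vector<Trajectory> & candidates)\n{\n  const Trajectory * best = nullptr;\n  double best_cost = std::numeric_limits<double>::max();\n  for (const auto & candidate : candidates) {\n    if (!satisfies_hard_constraints(candidate)) continue;  // step 2: pruning\n    const double cost = soft_constraint_cost(candidate);   // step 3: selection\n    if (cost < best_cost) { best_cost = cost; best = &candidate; }\n  }\n  return best;  // nullptr means no valid candidate was found\n}\n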
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#sampling","title":"Sampling","text":"

Candidate trajectories are generated based on the current ego state and some target state. One sampling algorithm is currently implemented: polynomials in the Frenet frame.
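
As an illustration, one common choice for the lateral movement is a quintic polynomial d(s) over arc length s; the boundary conditions below (zero first and second derivatives at both ends) are an assumption for this sketch, not necessarily the module's exact formulation:

#include <array>\n\n// Quintic lateral polynomial d(s) in the Frenet frame with assumed boundary\n// conditions d(0)=d0, d'(0)=d''(0)=0 and d(S)=dT, d'(S)=d''(S)=0.\nstd::array<double, 6> quintic_coefficients(const double d0, const double dT, const double S)\n{\n  const double D = dT - d0;\n  const double S3 = S * S * S;\n  return {d0, 0.0, 0.0, 10.0 * D / S3, -15.0 * D / (S3 * S), 6.0 * D / (S3 * S * S)};\n}\n\n// d(s) = a0 + a1*s + a2*s^2 + a3*s^3 + a4*s^4 + a5*s^5\ndouble evaluate(const std::array<double, 6> & a, const double s)\n{\n  double result = 0.0, power = 1.0;\n  for (const double coeff : a) { result += coeff * power; power *= s; }\n  return result;\n}\n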

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#pruning","title":"Pruning","text":"

The validity of each candidate trajectory is checked using a set of hard constraints.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#selection","title":"Selection","text":"

Among the valid candidate trajectories, the best one is determined using a set of soft constraints (i.e., objective functions).

Each soft constraint is associated with a weight to allow tuning of the preferences.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#limitations","title":"Limitations","text":"

The quality of the candidates generated with polynomials in the Frenet frame greatly depends on the reference path. If the reference path is not smooth, the resulting candidates will probably be undrivable.

Failure to find a valid trajectory currently results in a trajectory that stops suddenly.

The module has trouble generating paths that converge rapidly to the goal lanelet. Ideally, after overcoming all obstacles, the module should prioritize paths that make the ego vehicle rapidly converge back to its goal lane (i.e., paths with low average and final lateral deviation). However, this does not function properly at the moment.

Detection of proper merging can be rough: sometimes, when the module detects that the ego has converged onto the goal lanelet and that there are no more obstacles, the planner transitions to the goal planner, but the transition is not very smooth and could cause discomfort for the user.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#future-works","title":"Future works","text":"

Some possible improvements for this module include:

- Implementing a dynamic weight tuning algorithm: dynamically change weights depending on the scenario (i.e., prioritize paths with low curvature and low average lateral deviation once all obstacles have been avoided).

- Implementing multi-objective optimization to improve computing time and possibly allow more dynamic soft constraint weight tuning. Related publication.

- Implementing Bézier curves as another method to obtain samples; see the sampling_based_planner module.

- Exploring the possibility of replacing several other behavior path modules with the sampling based behavior path module.

- Performing real-life tests of this module once it has matured and some of its limitations have been solved.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_sampling_planner_module/#other-possibilities","title":"Other possibilities","text":"

The module is currently aimed at creating paths for static obstacle avoidance. However, the nature of the sampling planner allows this module to be expanded or repurposed for other tasks such as lane changes, dynamic avoidance and general goal reaching. With good candidate pruning and dynamic, scenario-dependent soft constraint weight tuning, the sampling planning approach could replace the other behavior path modules.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/","title":"Side Shift design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#side-shift-design","title":"Side Shift design","text":"

(For remote control) Shift the path to the left or right according to an external instruction.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#overview-of-the-side-shift-module-process","title":"Overview of the Side Shift Module Process","text":"
  1. Receive the required lateral offset input.
  2. Update requested_lateral_offset_ under the following conditions: a. Verify that sufficient time has elapsed since the last update. b. Ensure the required lateral offset value is different from the previous one.
  3. Insert the shift points into the path if the side shift module's status is not in the SHIFTING status.

Please be aware that requested_lateral_offset_ is continuously updated with the latest values and is not queued.
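
A minimal sketch of this update rule, with hypothetical member names and an assumed update interval, could look as follows:

#include <cmath>\n\n// Sketch of the requested_lateral_offset_ update conditions (hypothetical names).\nclass SideShiftInput\n{\npublic:\n  void on_request(const double new_offset, const double now_sec)\n  {\n    const bool time_elapsed = (now_sec - last_update_sec_) > min_update_interval_sec_;\n    const bool value_changed = std::abs(new_offset - requested_lateral_offset_) > 1e-3;\n    if (time_elapsed && value_changed) {\n      requested_lateral_offset_ = new_offset;  // only the latest value is kept, not queued\n      last_update_sec_ = now_sec;\n    }\n  }\n\nprivate:\n  double requested_lateral_offset_{0.0};\n  double last_update_sec_{0.0};\n  double min_update_interval_sec_{1.0};  // assumed value\n};\n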

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#statuses-of-the-side-shift","title":"Statuses of the Side Shift","text":"

The side shift has three distinct statuses, sketched after this list. Note that during the SHIFTING status, the path cannot be updated:

  1. BEFORE_SHIFT: Preparing for shift.
  2. SHIFTING: Currently in the process of shifting.
  3. AFTER_SHIFT: Shift completed.
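
These statuses can be thought of as a simple state machine; the sketch below (status names taken from the list above, everything else hypothetical) highlights that path updates are blocked while shifting:

// Sketch of the three statuses; during SHIFTING the path cannot be updated.\nenum class SideShiftStatus { BEFORE_SHIFT, SHIFTING, AFTER_SHIFT };\n\nbool can_update_path(const SideShiftStatus status)\n{\n  return status != SideShiftStatus::SHIFTING;\n}\n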

side shift status"},{"location":"planning/behavior_path_planner/autoware_behavior_path_side_shift_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/","title":"Start Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#start-planner-design","title":"Start Planner design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#purpose-role","title":"Purpose / Role","text":"

This module generates and plans a path for safely merging from the shoulder lane or the side of a road lane into the center of the road lane.

Specifically, it includes the following features:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-cases","title":"Use Cases","text":"

The following gives a typical example of how path generation is executed. The examples show path generation starting from the shoulder lane, but paths starting from the side of a road lane can also be generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-1-shift-pull-out","title":"Use Case 1: Shift pull out","text":"

In the shoulder lane, when there are no parked vehicles ahead and the shoulder lane is sufficiently long, a forward shift pull out path (a clothoid curve with consistent jerk) is generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-2-geometric-pull-out","title":"Use Case 2: Geometric pull out","text":"

In the shoulder lane, when there are objects in front and the lane is not sufficiently long behind, a geometric pull out path is generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-3-backward-and-shift-pull-out","title":"Use Case 3: Backward and shift pull out","text":"

In the shoulder lane, when there are parked vehicles ahead and the lane is sufficiently long behind, a path that involves reversing before generating a forward shift pull out path is created.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-4-backward-and-geometric-pull-out","title":"Use Case 4: Backward and geometric pull out","text":"

In the shoulder lane, when there is an object ahead and not enough space to reverse sufficiently, a path that involves reversing before making a geometric pull out is generated.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#use-case-5-freespace-pull-out","title":"Use Case 5: Freespace pull out","text":"

If the map is annotated with the information that a free space path can be generated, a path based on the free space algorithm will be generated in situations where both shift and geometric pull out paths are impossible to create.

Note that the patterns for generating these paths are based on the default parameters. As explained in the following sections, it is possible to control aspects of the behavior, such as making paths that involve reversing more likely to be generated, or making geometric paths more likely to be generated, by changing the path generation policy or adjusting the margin around static objects.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#limitations","title":"Limitations","text":"

Here are some notable limitations:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#startend-conditions","title":"Start/End Conditions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#start-conditions","title":"Start Conditions","text":"

The StartPlannerModule is designed to initiate its execution based on specific criteria evaluated by the isExecutionRequested function. The module will not start under the following conditions:

  1. Start pose in the middle of the road: The module will not initiate if the start pose of the vehicle is determined to be in the middle of the road. This ensures the planner starts from a roadside position.

  2. Vehicle is far from start position: If the vehicle is far from the start position, the module will not execute. This prevents redundant execution when a new goal is given.

  3. Vehicle reached goal: The module will not start if the vehicle has already reached its goal position, avoiding unnecessary execution when the destination is attained.

  4. Vehicle in motion: If the vehicle is still moving, the module will defer starting. This ensures that planning occurs from a stable, stationary state for safety.

  5. Goal behind in same route segment: The module will not initiate if the goal position is behind the ego vehicle within the same route segment. This condition is checked to avoid complications with planning routes that require the vehicle to move backward on its current path, which is currently not supported.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#end-conditions","title":"End Conditions","text":"

The StartPlannerModule terminates when specific conditions are met, depending on the type of planner being used. The canTransitSuccessState function determines whether the module should transition to the success state based on the following criteria:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#when-the-freespace-planner-is-active","title":"When the Freespace Planner is active","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#when-any-other-type-of-planner-is-active","title":"When any other type of planner is active","text":"

The transition to the success state is determined by the following conditions:

The flowchart below illustrates the decision-making process in the canTransitSuccessState function:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#concept-of-safety-assurance","title":"Concept of safety assurance","text":"

The approach to collision safety is divided into two main components: generating paths that consider static information, and detecting collisions with dynamic obstacles to ensure the safety of the generated paths.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#1-generating-path-with-static-information","title":"1. Generating path with static information","text":"

Here is an illustration of the start pose searching steps, assuming collision_check_margins is set to [2.0, 1.0, 0.5, 0.1] as an example (see the sketch after this list). The process is as follows:

  1. Generating start pose candidates

  2. Starting search at maximum margin

  3. Repeating search according to threshold levels

  4. Continuing the search

  5. Generating a stop path
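
The following is a minimal sketch of this coarse-to-fine search, with hypothetical types and a placeholder collision check:

#include <optional>\n#include <vector>\n\nstruct Pose { double x, y, yaw; };\n\n// Hypothetical helper: footprint collision check against static objects with a margin [m].\nbool is_collision_free(const Pose &, const double /*margin*/) { return true; }\n\n// Try every start pose candidate at the largest margin first, then retry the whole\n// search at each smaller margin. If all levels fail, the caller generates a stop path.\nstd::optional<Pose> find_start_pose(const std::vector<Pose> & candidates, const std::vector<double> & margins /* e.g. {2.0, 1.0, 0.5, 0.1} */)\n{\n  for (const double margin : margins) {\n    for (const auto & pose : candidates) {\n      if (is_collision_free(pose, margin)) return pose;\n    }\n  }\n  return std::nullopt;  // step 5: no pose found -> stop path\n}\n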

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#search-priority","title":"search priority","text":"

If a safe path with sufficient clearance for static obstacles cannot be generated forward, a backward search from the vehicle's current position is conducted to locate a suitable start point for a pull out path generation.

During this backward search, different policies can be applied based on search_priority parameters:

Selecting efficient_path focuses on creating a shift pull out path, regardless of how far back the vehicle needs to move. Opting for short_back_distance aims to find a location with the least possible backward movement.

PriorityOrder is defined as a vector of pairs, where each element consists of a size_t index representing a start pose candidate index, and the planner type. The PriorityOrder vector is processed sequentially from the beginning, meaning that the pairs listed at the top of the vector are given priority in the selection process for pull out path generation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#efficient_path","title":"efficient_path","text":"

When search_priority is set to efficient_path and the preference is for prioritizing shift_pull_out, the PriorityOrder array is populated in such a way that shift_pull_out is grouped together for all start pose candidates before moving on to the next planner type. This prioritization is reflected in the order of the array, with shift_pull_out being listed before geometric_pull_out.

Index Planner Type 0 shift_pull_out 1 shift_pull_out ... ... N shift_pull_out 0 geometric_pull_out 1 geometric_pull_out ... ... N geometric_pull_out

This approach prioritizes trying all candidates with shift_pull_out before proceeding to geometric_pull_out, which may be efficient in situations where shift_pull_out is likely to be appropriate.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#short_back_distance","title":"short_back_distance","text":"

For search_priority set to short_back_distance, the array alternates between planner types for each start pose candidate, which can minimize the distance the vehicle needs to move backward if the earlier candidates are successful.

Index Planner Type 0 shift_pull_out 0 geometric_pull_out 1 shift_pull_out 1 geometric_pull_out ... ... N shift_pull_out N geometric_pull_out

This ordering is beneficial when the priority is to minimize the backward distance traveled, giving an equal chance for each planner to succeed at the closest possible starting position.
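
As an illustration, the two orderings could be generated as follows (a sketch with planner types simplified to strings; the actual code uses planner objects):

#include <cstddef>\n#include <string>\n#include <utility>\n#include <vector>\n\nusing PriorityOrder = std::vector<std::pair<size_t, std::string>>;\n\nPriorityOrder make_priority_order(const size_t num_candidates, const std::string & search_priority)\n{\n  const std::vector<std::string> planners = {\"shift_pull_out\", \"geometric_pull_out\"};\n  PriorityOrder order;\n  if (search_priority == \"efficient_path\") {\n    // All candidates for one planner type before moving to the next type.\n    for (const auto & planner : planners) {\n      for (size_t i = 0; i < num_candidates; ++i) order.emplace_back(i, planner);\n    }\n  } else {  // \"short_back_distance\"\n    // Alternate planner types for each candidate, closest candidates first.\n    for (size_t i = 0; i < num_candidates; ++i) {\n      for (const auto & planner : planners) order.emplace_back(i, planner);\n    }\n  }\n  return order;\n}\n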

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#2-collision-detection-with-dynamic-obstacles","title":"2. Collision detection with dynamic obstacles","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#example-of-safety-check-performed-range-for-shift-pull-out","title":"example of safety check performed range for shift pull out","text":"

The following is an example of the safety check range for shift pull out. The safety check is performed from the start of the shift to the end of the shift. If the vehicle footprint does not leave enough space for a rear vehicle to drive through, the safety check against dynamic objects is disabled.

As a note, no safety check is performed during backward movements. Safety verification commences at the point where the backward motion ceases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#rtc-interface","title":"RTC interface","text":"

The system operates distinctly under manual and auto mode, especially concerning behavior before departure and interaction with dynamic obstacles. Below are the specific behaviors for each mode:

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#when-approval-is-required","title":"When approval is required?","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#forward-driving","title":"Forward driving","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#backward-driving-forward-driving","title":"Backward driving + forward driving","text":"

This differentiation ensures that the vehicle operates safely by requiring human intervention in manual mode (enable_rtc is true) at critical junctures and incorporating automatic safety checks in both modes to prevent unsafe operations in the presence of dynamic obstacles.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#general-parameters-for-start_planner","title":"General parameters for start_planner","text":"Name Unit Type Description Default value th_arrived_distance [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time [s] double time threshold for arrival of path termination 1.0 th_distance_to_middle_of_the_road [m] double distance threshold to determine if the vehicle is on the middle of the road 0.1 collision_check_margins [m] double Obstacle collision check margins list [2.0, 1.0, 0.5, 0.1] shift_collision_check_distance_from_end [m] double collision check distance from end shift end pose -10.0 geometric_collision_check_distance_from_end [m] double collision check distance from end geometric end pose 0.0 collision_check_margin_from_front_object [m] double collision check margin from front object 5.0 skip_rear_vehicle_check - bool flag to skip rear vehicle check (rear vehicle check is performed to skip safety check and proceed with departure when the ego vehicle is obstructing the progress of a rear vehicle) false extra_width_margin_for_rear_obstacle [m] double extra width that is added to the perceived rear obstacle's width when deciding if the rear obstacle can overtake the ego vehicle while it is merging to a lane from the shoulder lane 0.5 object_types_to_check_for_path_generation.check_car - bool flag to check car for path generation true object_types_to_check_for_path_generation.check_truck - bool flag to check truck for path generation true object_types_to_check_for_path_generation.check_bus - bool flag to check bus for path generation true object_types_to_check_for_path_generation.check_bicycle - bool flag to check bicycle for path generation true object_types_to_check_for_path_generation.check_motorcycle - bool flag to check motorcycle for path generation true object_types_to_check_for_path_generation.check_pedestrian - bool flag to check pedestrian for path generation true object_types_to_check_for_path_generation.check_unknown - bool flag to check unknown for path generation true center_line_path_interval [m] double reference center line path point interval 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#ego-vehicles-velocity-planning","title":"Ego vehicle's velocity planning","text":"

WIP

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#safety-check-in-free-space-area","title":"Safety check in free space area","text":"

WIP

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-safety-check","title":"Parameters for safety check","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#stop-condition-parameters","title":"Stop Condition Parameters","text":"

Parameters under stop_condition define the criteria for stopping conditions.

Name Unit Type Description Default value maximum_deceleration_for_stop [m/s^2] double Maximum deceleration allowed for a stop 1.0 maximum_jerk_for_stop [m/s^3] double Maximum jerk allowed for a stop 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#ego-predicted-path-parameters","title":"Ego Predicted Path Parameters","text":"

Parameters under path_safety_check.ego_predicted_path specify the ego vehicle's predicted path characteristics.

Name Unit Type Description Default value min_velocity [m/s] double Minimum velocity of the ego vehicle's predicted path 0.0 acceleration [m/s^2] double Acceleration for the ego vehicle's predicted path 1.0 max_velocity [m/s] double Maximum velocity of the ego vehicle's predicted path 1.0 time_horizon_for_front_object [s] double Time horizon for predicting front objects 10.0 time_horizon_for_rear_object [s] double Time horizon for predicting rear objects 10.0 time_resolution [s] double Time resolution for the ego vehicle's predicted path 0.5 delay_until_departure [s] double Delay until the ego vehicle departs 1.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#target-object-filtering-parameters","title":"Target Object Filtering Parameters","text":"

Parameters under target_filtering are related to filtering target objects for safety check.

Name Unit Type Description Default value safety_check_time_horizon [s] double Time horizon for predicted paths of the ego and dynamic objects 5.0 safety_check_time_resolution [s] double Time resolution for predicted paths of the ego and dynamic objects 1.0 object_check_forward_distance [m] double Forward distance for object detection 10.0 object_check_backward_distance [m] double Backward distance for object detection 100.0 ignore_object_velocity_threshold [m/s] double Velocity threshold below which objects are ignored 1.0 object_types_to_check.check_car - bool Flag to check cars true object_types_to_check.check_truck - bool Flag to check trucks true object_types_to_check.check_bus - bool Flag to check buses true object_types_to_check.check_trailer - bool Flag to check trailers true object_types_to_check.check_bicycle - bool Flag to check bicycles true object_types_to_check.check_motorcycle - bool Flag to check motorcycles true object_types_to_check.check_pedestrian - bool Flag to check pedestrians true object_types_to_check.check_unknown - bool Flag to check unknown object types false object_lane_configuration.check_current_lane - bool Flag to check the current lane true object_lane_configuration.check_right_side_lane - bool Flag to check the right side lane true object_lane_configuration.check_left_side_lane - bool Flag to check the left side lane true object_lane_configuration.check_shoulder_lane - bool Flag to check the shoulder lane true object_lane_configuration.check_other_lane - bool Flag to check other lanes false include_opposite_lane - bool Flag to include the opposite lane in check false invert_opposite_lane - bool Flag to invert the opposite lane check false check_all_predicted_path - bool Flag to check all predicted paths true use_all_predicted_path - bool Flag to use all predicted paths true use_predicted_path_outside_lanelet - bool Flag to use predicted paths outside of lanelets false"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#safety-check-parameters","title":"Safety Check Parameters","text":"

Parameters under safety_check_params define the configuration for safety check.

Name Unit Type Description Default value enable_safety_check - bool Flag to enable safety check true check_all_predicted_path - bool Flag to check all predicted paths true publish_debug_marker - bool Flag to publish debug markers false rss_params.rear_vehicle_reaction_time [s] double Reaction time for rear vehicles 2.0 rss_params.rear_vehicle_safety_time_margin [s] double Safety time margin for rear vehicles 1.0 rss_params.lateral_distance_max_threshold [m] double Maximum lateral distance threshold 2.0 rss_params.longitudinal_distance_min_threshold [m] double Minimum longitudinal distance threshold 3.0 rss_params.longitudinal_velocity_delta_time [s] double Delta time for longitudinal velocity 0.8 hysteresis_factor_expand_rate - double Rate to expand/shrink the hysteresis factor 1.0 collision_check_yaw_diff_threshold - double Maximum yaw difference between ego and object when executing rss-based collision checking 1.578"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#path-generation","title":"Path Generation","text":"

There are two path generation methods.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#shift-pull-out","title":"shift pull out","text":"

This is the most basic method of starting path planning and is used on road lanes and shoulder lanes when there is no particular obstruction.

The pull out distance is calculated from the speed, lateral deviation, and lateral jerk. The lateral jerk is searched for between predetermined minimum and maximum values, and a value that generates a safe path is selected (see the sketch below).
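
For illustration, assuming a bang-bang (four-phase constant) lateral jerk profile, a lateral shift of l with jerk magnitude j takes T = 4 (l / (2 j))^(1/3) seconds, so the pull out distance at speed v is roughly v T. The following is a minimal sketch of this relationship; it is an assumption for illustration, not the module's exact implementation:

#include <cmath>\n\n// Sketch: longitudinal distance needed to achieve a lateral shift [m]\n// at a given speed [m/s] under a constant-magnitude lateral jerk [m/s^3].\ndouble pull_out_distance(const double lateral, const double jerk, const double velocity)\n{\n  const double shift_time = 4.0 * std::cbrt(0.5 * lateral / jerk);  // total duration of the shift\n  return velocity * shift_time;\n}\n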

shift pull out video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-shift-pull-out","title":"parameters for shift pull out","text":"Name Unit Type Description Default value enable_shift_pull_out [-] bool flag whether to enable shift pull out true check_shift_path_lane_departure [-] bool flag whether to check if shift path footprints are out of lane true allow_check_shift_path_lane_departure_override [-] bool flag to override/cancel lane departure check if the ego vehicle's starting pose is already out of lane false shift_pull_out_velocity [m/s] double velocity of shift pull out 2.0 pull_out_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.1 minimum_shift_pull_out_distance [m] double minimum shift pull out distance. if calculated pull out distance is shorter than this, use this for path generation. 0.0 maximum_curvature [1/m] double maximum curvature. Calculate the required pull out distance from this maximum curvature, assuming the reference path is considered a straight line and shifted by two approximate arcs. This does not compensate for curvature in a shifted path or curve. 0.07 end_pose_curvature_threshold [1/m] double The curvature threshold which is used for calculating the shift pull out distance. The shift end pose is shifted forward so that curvature on shift end pose is less than this value. This is to prevent the generated path from having a large curvature when the end pose is on a curve. If a shift end pose with a curvature below the threshold is not found, the shift pull out distance is used as the distance to the point with the lowest curvature among the points beyond a certain distance. 0.01"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#geometric-pull-out","title":"geometric pull out","text":"

Two arc paths with discontinuous curvature are generated. The ego vehicle stops once in the middle of the path to turn the steering wheel on the spot. See [1] for details of the algorithm.

geometric pull out video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-geometric-pull-out","title":"parameters for geometric pull out","text":"Name Unit Type Description Default value enable_geometric_pull_out [-] bool flag whether to enable geometric pull out true divide_pull_out_path [-] bool flag whether to divide arc paths. The path is assumed to be divided because the curvature is not continuous. But it requires a stop during the departure. false geometric_pull_out_velocity [m/s] double velocity of geometric pull out 1.0 lane_departure_margin [m] double margin of deviation to lane right 0.2 lane_departure_check_expansion_margin [m] double margin to expand the ego vehicle footprint when doing lane departure checks 0.0 pull_out_max_steer_angle [rad] double maximum steer angle for path generation 0.26"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#backward-pull-out-start-point-search","title":"backward pull out start point search","text":"

If a safe path cannot be generated from the current position, search backwards for a pull out start point at regular intervals (default: 2.0 m).

pull out after backward driving video

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-for-backward-pull-out-start-point-search","title":"parameters for backward pull out start point search","text":"Name Unit Type Description Default value enable_back [-] bool flag whether to search backward for start_point true search_priority [-] string In the case of efficient_path, use efficient paths even if the back distance is longer. In case of short_back_distance, use a path with as short a back distance efficient_path max_back_distance [m] double maximum back distance 30.0 backward_search_resolution [m] double distance interval for searching backward pull out start point 2.0 backward_path_update_duration [s] double time interval for searching backward pull out start point. this prevents chattering between back driving and pull_out 3.0 ignore_distance_from_lane_end [m] double If distance from shift start pose to end of shoulder lane is less than this value, this start pose candidate is ignored 15.0"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#freespace-pull-out","title":"freespace pull out","text":"

If the vehicle gets stuck during a pull out along lanes, freespace pull out is executed. To run this feature, you need to add parking_lot information to the map, set activate_by_scenario of costmap_generator to false, and set enable_freespace_planner to true.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#unimplemented-parts-limitations-for-freespace-pull-out","title":"Unimplemented parts / limitations for freespace pull out","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_start_planner_module/#parameters-freespace-parking","title":"Parameters freespace parking","text":"Name Unit Type Description Default value enable_freespace_planner [-] bool this flag activates a free space pullout that is executed when a vehicle is stuck due to obstacles in the lanes where the ego is located true end_pose_search_start_distance [m] double distance from ego to the start point of the search for the end point in the freespace_pull_out driving lane 20.0 end_pose_search_end_distance [m] double distance from ego to the end point of the search for the end point in the freespace_pull_out driving lane 30.0 end_pose_search_interval [m] bool interval to search for the end point in the freespace_pull_out driving lane 2.0

See freespace_planner for other parameters.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/","title":"Avoidance module for static objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#avoidance-module-for-static-objects","title":"Avoidance module for static objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#purposerole","title":"Purpose/Role","text":"

This is a rule-based avoidance module, which runs based on perception output data, HDMap, current path and route. This module is designed to create avoidance paths for static (=stopped) objects in simple situations. Currently, this module doesn't support dynamic (=moving) objects.

This module has an RTC interface, and the user can select operation mode from MANUAL/AUTO depending on the performance of the vehicle's sensors. If the user selects MANUAL mode, this module outputs a candidate avoidance path and awaits operator approval. In the case where the sensor/perception performance is insufficient and false positives occur, we recommend MANUAL mode to prevent unnecessary avoidance maneuvers.

If the user selects AUTO mode, this module modifies the current following path without operator approval. If the sensor/perception performance is sufficient, use AUTO mode.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitations","title":"Limitations","text":"

This module allows developers to design vehicle behavior in avoidance planning using specific rules. Due to the nature of rule-based planning, the algorithm cannot guarantee collision avoidance in complex cases. This is a trade-off between being \"intuitive and easy to design\" and being \"hard to tune but able to handle many cases\". This module adopts the former policy, and therefore its output should be checked more strictly in the later stage. In the .iv reference implementation, there is another avoidance module in the motion planning stage that uses optimization to handle avoidance in complex cases. (Note that the motion planner needs to be adjusted so that the behavior result does not change much in simple cases; this is a typical challenge for the behavior-motion hierarchical architecture.)

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#why-is-avoidance-in-behavior-module","title":"Why is avoidance in behavior module?","text":"

This module executes avoidance over lanes, and the decision requires the lane structure information to take care of traffic rules (e.g. it needs to send an indicator signal when the vehicle crosses a lane). The difference between the motion and behavior modules in the planning stack is whether the planner takes traffic rules into account, which is why this avoidance module exists in the behavior module.

If you would like to know the overview rather than the detail, please skip the next section and refer to FAQ.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#inner-workingsalgorithms","title":"Inner workings/Algorithms","text":"

This module mainly has two parts: target filtering and path generation. First, all objects are filtered by several conditions. In this step, the module checks avoidance feasibility and necessity. After that, this module generates an avoidance path outline, which we call the shift line, based on the filtered objects. The shift lines are set into the path shifter, a library for path generation, to create a smooth shift path. Additionally, this module has a feature to check non-target objects so that the ego can avoid target objects safely. This feature receives a generated avoidance path and surrounding objects, and judges the current situation. Lastly, this module updates the current ego behavior.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#target-object-filtering","title":"Target object filtering","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#overview","title":"Overview","text":"

The module uses the following conditions to filter avoidance target objects.

Check condition Target class Details If conditions are not met Is an avoidance target class object? All The user can select avoidance target classes from the config file. Never avoid it. Is a stopped object? All Objects that keep a speed higher than th_moving_speed for longer than th_moving_time are judged as moving. Never avoid it. Is within detection area? All The module creates a detection area to roughly filter target objects based on the lateral margin in the config file. (see here) Never avoid it. Isn't there enough lateral distance between the object and path? All - Never avoid it. Is near the centerline of ego lane? All - It depends on other conditions. Is there a crosswalk near the object? Pedestrian, Bicycle The module doesn't avoid Pedestrians and Bicycles near a crosswalk because the ego should stop in front of them if they're crossing the road. (see here) Never avoid it. Is the distance between the object and traffic light along the path longer than the threshold? Car, Truck, Bus, Trailer The module uses this condition when there is ambiguity about whether the vehicle is parked. It depends on other conditions. Is the distance between the object and the crosswalk along the path longer than the threshold? Car, Truck, Bus, Trailer Same as above. It depends on other conditions. Is the stopping time longer than threshold? Car, Truck, Bus, Trailer Same as above. It depends on other conditions. Is within intersection? Car, Truck, Bus, Trailer The module assumes that there isn't any parked vehicle within intersection. It depends on other conditions. Is on ego lane? Car, Truck, Bus, Trailer - It depends on other conditions. Is a parked vehicle? Car, Truck, Bus, Trailer The module judges whether the vehicle is a parked vehicle based on its lateral offset. (see here) It depends on other conditions. Is merging into ego lane from other lane? Car, Truck, Bus, Trailer The module judges the vehicle behavior based on its yaw angle and offset direction. (see here) It depends on other conditions. Is merging into other lane from ego lane? Car, Truck, Bus, Trailer Same as above. It depends on other conditions."},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#common-conditions","title":"Common conditions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#detection-area","title":"Detection area","text":"

The module generates detection area for target filtering based on the following parameters:

      # avoidance is performed for the object type with true\ntarget_object:\n...\nlateral_margin:\nsoft_margin: 0.3                            # [m]\nhard_margin: 0.2                            # [m]\nhard_margin_for_parked_vehicle: 0.7         # [m]\n...\n# For target object filtering\ntarget_filtering:\n...\n# detection area generation parameters\ndetection_area:\nstatic: false                                 # [-]\nmin_forward_distance: 50.0                    # [m]\nmax_forward_distance: 150.0                   # [m]\nbackward_distance: 10.0                       # [m]\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#width-of-detection-area","title":"Width of detection area","text":"
  1. Get the largest lateral margin of all classes (Car, Truck, ...). The margin is the sum of soft_margin and hard_margin_for_parked_vehicle.
  2. The detection area width is the sum of ego vehicle width and the largest lateral margin.
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#longitudinal-distance-of-detection-area","title":"Longitudinal distance of detection area","text":"

If the parameter detection_area.static is set to true, the module creates detection area whose longitudinal distance is max_forward_distance.

If the parameter detection_area.static is set to false, the module creates a detection area so that the ego can avoid objects with minimum lateral jerk value. Thus, the longitudinal distance depends on maximum lateral shift length, lateral jerk constraints and current ego speed. Additionally, it has to consider the distance used for the preparation phase.

...\n// Maximum possible lateral shift length (in either direction)\nconst auto max_shift_length = std::max(\nstd::abs(parameters_->max_right_shift_length), std::abs(parameters_->max_left_shift_length));\n// Longitudinal distance needed to achieve that shift under the minimum lateral jerk limit\nconst auto dynamic_distance =\nPathShifter::calcLongitudinalDistFromJerk(max_shift_length, getLateralMinJerkLimit(), speed);\n\n// Add the preparation distance and clamp to the configured min/max forward distances\nreturn std::clamp(\n1.5 * dynamic_distance + getNominalPrepareDistance(),\nparameters_->object_check_min_forward_distance,\nparameters_->object_check_max_forward_distance);\n

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-non-vehicle-type-objects","title":"Conditions for non-vehicle type objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#for-crosswalk-users","title":"For crosswalk users","text":"

If a Pedestrian or Bicycle is closer to a crosswalk than the threshold of 2.0 m (hard-coded for now), the module judges that they are crossing the road and never avoids them.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-vehicle-type-objects","title":"Conditions for vehicle type objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#judge-vehicle-behavior","title":"Judge vehicle behavior","text":"

The module classifies vehicles into the following three behaviors based on yaw angle and offset direction.

# params for filtering objects that are in intersection\nintersection:\nyaw_deviation: 0.349 # [rad] (default 20.0deg)\n
Behavior Details Figure NONE If the object's relative yaw angle to lane is less than threshold yaw_deviation, it is classified into NONE. MERGING See following flowchart. DEVIATING See following flowchart.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#judge-if-its-a-parked-vehicle","title":"Judge if it's a parked vehicle","text":"

Not only the length from the centerline but also the length from the road shoulder is calculated and used for the filtering process. In this logic, the module calculates the ratio of the actual shift length L_a to the shiftable shift length L_d, where W_lane is the lane width and W_obj is the object width, as follows. If the result is larger than the threshold th_shiftable_ratio, the module judges that the vehicle is a parked vehicle.

\\[ L_{d} = \\frac{W_{lane} - W_{obj}}{2}, \\\\ ratio = \\frac{L_{a}}{L_{d}} \\]

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#target-object-filtering_1","title":"Target object filtering","text":"Situation Details Ego behavior Vehicle is within intersection area defined in HDMap. The module ignores vehicles following a lane or merging into ego lane. Never avoid it. Vehicle is on ego lane. There are adjacent lanes for both sides. Never avoid it. Vehicle is merging into other lane from ego lane. Most of its footprint is on ego lane. Never avoid it. Vehicle is merging into ego lane from other lane. Most of its footprint is on ego lane. Never avoid it. Vehicle does not appear to be parked and is stopped in front of a crosswalk or traffic light. Never avoid it. Vehicle stops on ego lane while pulling over to the side of the road. Avoid it immediately. Vehicle stops on adjacent lane. Avoid it immediately. Vehicle stops on ego lane without pulling over to the side of the road. Set the parameter avoidance_for_ambiguous_vehicle.enable to true, the module avoids ambiguous vehicle. Vehicle is merging into ego lane from other lane. Set the parameter avoidance_for_ambiguous_vehicle.enable to true, the module avoids ambiguous vehicle. Vehicle is merging into other lane from ego lane. Set the parameter avoidance_for_ambiguous_vehicle.enable to true, the module avoids ambiguous vehicle."},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#flowchart","title":"Flowchart","text":"

There are three main filtering functions isSatisfiedWithCommonCondition(), isSatisfiedWithVehicleCondition() and isSatisfiedWithNonVehicleCondition(). The filtering process is executed according to the following flowchart. Additionally, the module checks avoidance necessity in isNoNeedAvoidanceBehavior() based on the object pose, ego path and lateral margin in the config file.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#common-conditions_1","title":"Common conditions","text":"

First, the function isSatisfiedWithCommonCondition() checks conditions that are used for all object classes.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-vehicle","title":"Conditions for vehicle","text":"

Target class:

As a next step, the object is filtered by a condition specialized for its class.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#conditions-for-non-vehicle-objects","title":"Conditions for non-vehicle objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#when-target-object-has-gone","title":"When target object has gone","text":"

The user can select the ego behavior for when the target object has gone.

cancel:\nenable: true # [-]\n

If the above parameter is true, this module reverts the avoidance path when the following conditions are met.

If the parameter is false, this module keeps running even after the target object has gone.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#path-generation","title":"Path generation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-to-prevent-shift-line-chattering-that-is-caused-by-perception-noise","title":"How to prevent shift line chattering that is caused by perception noise","text":"

Since the object recognition result contains noise related to position, orientation and polygon shape, if the module uses the raw object recognition results in path generation, the output path will be directly affected by the noise. Therefore, in order to reduce the influence of the noise, this module generates a polygon for each target object, and the output path is generated based on that.

The envelope polygon is a rectangle box, whose size depends on the object's polygon and buffer parameter envelope_buffer_margin. Additionally, it is always parallel to the reference path. When the module finds a target object for the first time, it initializes the polygon.

        car:\n...\nenvelope_buffer_margin: 0.5                   # [m] FOR DEVELOPER\n

The module creates a one-shot envelope polygon by using the latest object pose and raw polygon in every planning cycle. On the other hand, the module uses the envelope polygon information created in the last planning cycle in order to update the envelope polygon with consideration of the pose covariance.

If the pose covariance is smaller than the threshold, the envelope polygon would be updated according to the following logic. If the one-shot envelope polygon is not within the previous envelope polygon, the module creates a new envelope polygon. Otherwise, it keeps the previous envelope polygon.

When the pose covariance is larger than the threshold, it is compared with the maximum pose covariance of each object. If the value is smaller, the one-shot envelope polygon is used directly as the envelope polygon. Otherwise, it keeps the previous envelope polygon.

By doing this process, the envelope polygon size and pose will converge even if perception output includes noise in object pose or shape.
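
The covariance-gated update described above could be sketched as follows (hypothetical types and helper functions; the actual implementation details may differ):

struct Polygon { /* envelope corner points */ };\n\n// Hypothetical helpers.\nbool is_within(const Polygon & /*inner*/, const Polygon & /*outer*/) { return true; }\nPolygon create_new_envelope(const Polygon & /*previous*/, const Polygon & one_shot) { return one_shot; }\n\nPolygon update_envelope(const Polygon & previous, const Polygon & one_shot, const double pose_covariance, const double covariance_threshold, const double max_covariance_so_far)\n{\n  if (pose_covariance < covariance_threshold) {\n    // Reliable pose: keep the previous envelope unless the one-shot polygon escapes it.\n    return is_within(one_shot, previous) ? previous : create_new_envelope(previous, one_shot);\n  }\n  // Unreliable pose: adopt the one-shot polygon only if this observation is no noisier\n  // than the noisiest one recorded so far for this object.\n  return (pose_covariance < max_covariance_so_far) ? one_shot : previous;\n}\n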

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#relationship-between-envelope-polygon-and-avoidance-path","title":"Relationship between envelope polygon and avoidance path","text":"

The avoidance path has two shift sections, whose start or end point position depends on the envelope polygon. The end point of the avoidance shift section and start point of the return shift section are fixed based on the envelope polygon and the other side edges are dynamically changed based on ego speed, shift length, lateral jerk constraints, etc.

The lateral positions of the two points are decided so that there is enough space (=lateral margin) between the ego body and the most overhanging point of the envelope polygon. The user can adjust the lateral margin with the following parameters.

        car:\n...\nlateral_margin:\nsoft_margin: 0.3                            # [m]\nhard_margin: 0.2                            # [m]\nhard_margin_for_parked_vehicle: 0.7         # [m]\n

The longitudinal positions depend on the envelope polygon, the ego vehicle specification, and the following parameters. The longitudinal distance between the avoidance shift section end point and the envelope polygon (=front longitudinal buffer) is the sum of front_overhang defined in vehicle_info.param.yaml and longitudinal_margin if the parameter consider_front_overhang is true. If consider_front_overhang is false, only longitudinal_margin is considered. Similarly, the distance between the return shift section start point and the envelope polygon (=rear longitudinal buffer) is the sum of rear_overhang and longitudinal_margin.

      target_object:\ncar:\n...\nlongitudinal_margin: 0.0                      # [m]\n\n...\navoidance:\n...\nlongitudinal:\n...\nconsider_front_overhang: true                 # [-]\nconsider_rear_overhang: true                  # [-]\n

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#lateral-margin","title":"Lateral margin","text":"

As mentioned above, user can adjust lateral margin by changing the following two types of parameters. The soft_margin is a soft constraint parameter for lateral margin. The hard_margin and hard_margin_for_parked_vehicle are hard constraint parameters.

        car:\n...\nlateral_margin:\nsoft_margin: 0.3                            # [m]\nhard_margin: 0.2                            # [m]\nhard_margin_for_parked_vehicle: 0.7         # [m]\n

Basically, this module tries to generate an avoidance path in order to keep lateral distance, which is the sum of soft_margin and hard_margin/hard_margin_for_parked_vehicle, from the avoidance target object.

But if there isn't enough space to keep soft_margin distance, this module shortens soft constraint lateral margin. The parameter soft_margin is a maximum value of soft constraint, and actual soft margin can be a value between 0.0 and soft_margin. On the other hand, this module definitely keeps hard_margin or hard_margin_for_parked_vehicle depending on the situation. Thus, the minimum value of total lateral margin is hard_margin/hard_margin_for_parked_vehicle, and the maximum value is the sum of hard_margin/hard_margin_for_parked_vehicle and soft_margin.
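
The resulting margin behavior can be summarized with the following sketch (hypothetical names; available_space stands for the lateral space actually available for the maneuver):

#include <algorithm>\n\n// The hard margin is always kept; the soft margin shrinks toward zero when space is limited.\ndouble total_lateral_margin(const double available_space, const double soft_margin, const double hard_margin)\n{\n  const double soft = std::clamp(available_space - hard_margin, 0.0, soft_margin);\n  return hard_margin + soft;  // in [hard_margin, hard_margin + soft_margin]\n}\n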

The following figure shows a situation where this module shortens the lateral soft constraint so as not to drive in the opposite lane when the user sets the parameter use_lane_type to same_direction_lane.

This module avoids not only parked vehicles but also non-parked vehicles that stop temporarily for some reason (e.g., waiting for a traffic light to change from red to green). Additionally, this module has two types of hard margin parameters, hard_margin and hard_margin_for_parked_vehicle, and judges whether each vehicle is a parked vehicle, because it takes into consideration the risk of vehicle doors opening suddenly and people getting out of parked vehicles.

Users should set hard_margin_for_parked_vehicle larger than hard_margin to prevent collisions with doors or people who suddenly exit a vehicle.

This module has only one parameter, soft_margin, for the soft lateral margin constraint.

As the hard margin parameters define the distance the user definitely wants to maintain, they are also used in the logic that checks whether the ego can pass the side of the target object without executing an avoidance maneuver.

If the lateral distance is less than hard_margin/hard_margin_for_parked_vehicle when assuming that the ego follows the current lane without an avoidance maneuver, this module judges that the ego cannot pass the side of the object safely and must avoid it. In this case, this module inserts a stop point until the avoidance maneuver is allowed to execute so that the ego can avoid the object after approval. (For example, the ego keeps stopping in front of such an object until the operator approves the avoidance maneuver if the module is in MANUAL mode.)

On the other hand, if the lateral distance is larger than hard_margin/hard_margin_for_parked_vehicle, this module doesn't insert a stop point even while waiting for approval, because it judges that the ego can pass the side of the object safely.
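
The decision described in the last two paragraphs reduces to a simple comparison; the following is a hypothetical sketch, not the module's actual implementation:

// Hypothetical sketch: if the ego cannot keep the hard margin without avoidance,\n// it must stop and wait for approval; otherwise no stop point is needed.\nbool needsStopPointWhileWaitingApproval(const double lateral_distance, const double hard_margin)\n{\nreturn lateral_distance < hard_margin;\n}\n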

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#when-there-is-not-enough-space","title":"When there is not enough space","text":"

This module inserts a stop point only when the ego can potentially avoid the object. So, if it is not able to keep a distance greater than hard_margin/hard_margin_for_parked_vehicle, this module does nothing. The following figure shows the situation where this module is not able to keep enough lateral distance when the user sets the parameter use_lane_type to same_direction_lane.

Info

In this situation, the obstacle stop feature in obstacle_cruise_planner is responsible for ego vehicle safety.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#shift-length-calculation","title":"Shift length calculation","text":"

The lateral shift length is the sum of overhang_distance, the lateral margin, whose value is set in the config file, and half of the ego vehicle width defined in vehicle_info.param.yaml. On the other hand, the module limits the shift length depending on the space the module can use for an avoidance maneuver and the parameters soft_drivable_bound_margin and hard_drivable_bound_margin. Basically, the shift length is limited so that the ego doesn't get closer than soft_drivable_bound_margin to the drivable boundary. But the module allows the threshold to be relaxed from soft_drivable_bound_margin to hard_drivable_bound_margin when the road is narrow.
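
A simplified sketch of this computation, assuming hypothetical names and that the lateral space to the drivable boundary is known:

#include <algorithm>\n\n// Hypothetical sketch following the description above.\ndouble calcLimitedShiftLength(\nconst double overhang_distance, const double lateral_margin, const double vehicle_width,\nconst double space_to_bound, const double soft_bound_margin, const double hard_bound_margin)\n{\nconst double desired = overhang_distance + lateral_margin + 0.5 * vehicle_width;\nconst double soft_limit = space_to_bound - soft_bound_margin;\nif (desired <= soft_limit) {\nreturn desired;\n}\n// Narrow road: relax the threshold from the soft margin down to the hard margin.\nreturn std::min(desired, space_to_bound - hard_bound_margin);\n}\n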

Usable lanes for the avoidance module can be selected using the config file.

      ...\n# drivable lane setting. This module is able to use not only current lane but also right/left lane\n# if the current lane(=lanelet::Lanelet) and the right/left lane share the boundary(=lanelet::Linestring) in HDMap.\n# \"current_lane\"           : use only current lane. This module doesn't use adjacent lane to avoid object.\n# \"same_direction_lane\"    : this module uses same direction lane to avoid object if needed.\n# \"opposite_direction_lane\": this module uses both same direction and opposite direction lanes.\nuse_lane_type: \"opposite_direction_lane\"\n

When the user sets the parameter use_lane_type to opposite_direction_lane, the module is allowed to use the opposite lane.

When the user sets the parameter use_lane_type to same_direction_lane, the module doesn't create a path that overlaps the opposite lane.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#shift-line-generation","title":"Shift line generation","text":"

As mentioned above, the end point of the avoidance shift path and the start point of the return shift path, which are FIXED points, are calculated from the envelope polygon. As a next step, the module adjusts the other side points depending on the shift length, the current ego speed, and the lateral jerk constraint parameters defined in the config file.

Since the two points are always on the centerline of the ego lane, the module only calculates the longitudinal distance between the shift start and end points based on the following function. This function is defined in the path shifter library. See this page as well.

double PathShifter::calcLongitudinalDistFromJerk(\nconst double lateral, const double jerk, const double velocity)\n{\nconst double j = std::abs(jerk);\nconst double l = std::abs(lateral);\nconst double v = std::abs(velocity);\nif (j < 1.0e-8) {\nreturn 1.0e10;\n}\nreturn 4.0 * std::pow(0.5 * l / j, 1.0 / 3.0) * v;\n}\n
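
One way to verify this formula: with a constant jerk magnitude $j$ applied in four equal phases ($+j$, $-j$, $-j$, $+j$) of duration $T/4$ each, the resulting lateral displacement is $l = 2j(T/4)^3$. Solving for the total shift time and multiplying by the (assumed constant) longitudinal speed $v$ gives

$$ T = 4\left(\frac{0.5\,l}{j}\right)^{1/3}, \qquad D = v\,T = 4\,v\left(\frac{0.5\,l}{j}\right)^{1/3}, $$

which is exactly the value returned above.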

We call the line that connects the shift start and end points the shift_line; the avoidance path is generated from it with spline interpolation.

The start point of avoidance has another longitudinal constraint. In order to keep the blinker turned on for a few seconds before starting the avoidance maneuver, the avoidance start point must be at least a speed-dependent distance (which we call prepare_length) ahead of the ego position.

longitudinal:\nmin_prepare_time: 1.0 # [s]\nmax_prepare_time: 2.0 # [s]\nmin_prepare_distance: 1.0 # [m]\n

The prepare_length is calculated as the product of ego speed and max_prepare_time. (When the ego speed is zero, min_prepare_distance is used.)
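
In code form, this relation could be sketched as follows (hypothetical function name):

#include <algorithm>\n\n// Hypothetical sketch: the prepare distance grows with ego speed but never\n// shrinks below min_prepare_distance (used e.g. when the ego speed is zero).\ndouble calcPrepareLength(\nconst double ego_speed, const double max_prepare_time, const double min_prepare_distance)\n{\nreturn std::max(ego_speed * max_prepare_time, min_prepare_distance);\n}\n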

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#planning-at-red-traffic-light","title":"Planning at RED traffic light","text":"

This module takes traffic light information into account so that the ego can behave properly. During an avoidance maneuver the ego sometimes straddles the lane boundary, and we want to prevent the ego from stopping in front of a red traffic signal in such a situation, because the ego would block adjacent lanes and inconvenience other vehicles.

So, this module controls the shift length and the shift start/end points in order to prevent the above situation.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#control-shift-length","title":"Control shift length","text":"

At first, if the ego hasn't initiated an avoidance maneuver yet, this module limits the maximum shift length and uses ONLY the current lane during a red traffic signal. This prevents the ego from blocking other vehicles even if this module executes the avoidance maneuver and the ego is caught by a red traffic signal.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#control-avoidance-shift-start-point","title":"Control avoidance shift start point","text":"

Additionally, if the target object is farther away than the stop line of the traffic light, this module sets the avoidance shift start point on the stop line in order to prevent the ego from stopping at a red traffic signal in the middle of an avoidance maneuver.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#control-return-shift-end-point","title":"Control return shift end point","text":"

If the ego has already initiated an avoidance maneuver, this module tries to set the return shift end point on the stop line.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#safety-check","title":"Safety check","text":"

This feature can be enabled by setting the following parameter to true.

      safety_check:\n...\nenable: true                                    # [-]\n

This module pays attention not only to avoidance target objects but also to non-target objects near the avoidance path, and if the avoidance path is unsafe due to surrounding objects, it reverts the avoidance maneuver and yields the lane to them.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#yield-maneuver","title":"Yield maneuver","text":"

Additionally, during a yield maneuver this module basically inserts a stop point in front of the avoidance target in order to keep enough distance to avoid it once it becomes safe to do so. If the shift side lane is congested, the ego stops at that point and waits.

This feature can be enabled by setting the following parameter to true.

yield:\nenable: true # [-]\n

But if the lateral margin is larger than hard_margin (or hard_margin_for_parked_vehicle), this module doesn't insert a stop point because the ego can pass the side of the object safely without an avoidance maneuver.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#safety-check-target-lane","title":"Safety check target lane","text":"

Users can select the safety check area with the following parameters. Basically, we recommend the following configuration, which checks only the shift side lane. If users want to confirm safety more strictly, please set check_current_lane and/or check_other_side_lane to true.

      safety_check:\n...\ncheck_current_lane: false                       # [-]\ncheck_shift_side_lane: true                     # [-]\ncheck_other_side_lane: false                    # [-]\n

In the avoidance module, the function path_safety_checker::isCentroidWithinLanelet is used for filtering objects by lane.

bool isCentroidWithinLanelet(const PredictedObject & object, const lanelet::ConstLanelet & lanelet)\n{\nconst auto & object_pos = object.kinematics.initial_pose_with_covariance.pose.position;\nlanelet::BasicPoint2d object_centroid(object_pos.x, object_pos.y);\nreturn boost::geometry::within(object_centroid, lanelet.polygon2d().basicPolygon());\n}\n
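
As a hypothetical usage sketch (not taken from the module's source), the function could serve as a predicate when filtering the detected objects against the lanes selected by the check_* flags:

#include <vector>\n\n// Hypothetical filtering loop, assuming PredictedObject and lanelet::ConstLanelets\n// (a vector of lanelets) are available as in the snippet above.\nstd::vector<PredictedObject> filterByLane(\nconst std::vector<PredictedObject> & objects, const lanelet::ConstLanelets & lanelets)\n{\nstd::vector<PredictedObject> filtered;\nfor (const auto & object : objects) {\nfor (const auto & lanelet : lanelets) {\nif (isCentroidWithinLanelet(object, lanelet)) {\nfiltered.push_back(object);\nbreak;\n}\n}\n}\nreturn filtered;\n}\n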

Info

If check_current_lane and/or check_other_side_lane are set to true, the possibility of false positives and unnecessary yield maneuvers increases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#safety-check-algorithm","title":"Safety check algorithm","text":"

This module uses common safety check logic implemented in path_safety_checker library. See this page.
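
While the details are on the linked page, the core longitudinal criterion is an RSS-style safe distance: assuming the front vehicle brakes suddenly, the rear vehicle must still be able to stop. Using the parameters from the appendix table (expected_front_deceleration, expected_rear_deceleration, rear_vehicle_reaction_time, rear_vehicle_safety_time_margin), the minimum gap takes roughly the following standard form (a sketch of the general RSS formulation, not a quote of the implementation):

$$ d_{\min} = v_{\mathrm{rear}}\,(t_{\mathrm{reaction}} + t_{\mathrm{margin}}) + \frac{v_{\mathrm{rear}}^{2}}{2\,|a_{\mathrm{rear}}|} - \frac{v_{\mathrm{front}}^{2}}{2\,|a_{\mathrm{front}}|} $$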

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitation-1","title":"Limitation-1","text":"

The current behavior when the module judges the situation unsafe is very conservative, because it is difficult to achieve aggressive maneuvers (e.g. increasing speed to increase the distance from a rear vehicle) with the current planning architecture.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#limitation-2","title":"Limitation-2","text":"

The yield maneuver is executed ONLY when the vehicle has NOT initiated the avoidance maneuver yet. (This module checks objects in the opposite lane, but it would be necessary to detect very distant objects to judge whether the situation is unsafe before the avoidance maneuver.) If it detects a vehicle approaching the ego during an avoidance maneuver, this module doesn't revert the path or insert a stop point. For now, there is no feature to deal with this situation in this module. Thus, a new module is needed to adjust the path to avoid moving objects, or an operator must override.

Info

This module has a threshold parameter th_avoid_execution for the shift length, and judges that the vehicle is initiating avoidance if the vehicle's current shift exceeds this value.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#other-features","title":"Other features","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#compensation-for-detection-lost","title":"Compensation for detection lost","text":"

In order to prevent chattering of recognition results, once an obstacle is targeted, it is held as a target for a while even if it disappears. This is effective when recognition is unstable. However, since it results in over-detection (it increases the number of false positives), it is necessary to adjust the parameter according to the recognition accuracy (if object_last_seen_threshold = 0.0, the recognition result is trusted 100%).
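
A sketch of this holding logic (hypothetical names; object_last_seen_threshold is the holding time):

// Hypothetical sketch: keep an object as an avoidance target while the time\n// since it was last detected is within the threshold.\nbool shouldKeepAsTarget(const double time_since_last_seen, const double object_last_seen_threshold)\n{\nreturn time_since_last_seen < object_last_seen_threshold;\n}\n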

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#drivable-area-expansion","title":"Drivable area expansion","text":"

This module supports drivable area expansion for the following polygons defined in the HDMap.

Please set the corresponding flags to true when you want to allow the module to use those areas in avoidance maneuvers.

# drivable lane setting. This module is able to use not only current lane but also right/left lane\n# if the current lane(=lanelet::Lanelet) and the right/left lane share the boundary(=lanelet::Linestring) in HDMap.\n# \"current_lane\"           : use only current lane. This module doesn't use adjacent lane to avoid object.\n# \"same_direction_lane\"    : this module uses the same direction lane to avoid object if needed.\n# \"opposite_direction_lane\": this module uses both same direction and opposite direction lanes.\nuse_lane_type: \"opposite_direction_lane\"\n# drivable area setting\nuse_intersection_areas: true\nuse_hatched_road_markings: true\nuse_freespace_areas: true\n
(Figures: use_lane_type: same_direction_lane / use_lane_type: opposite_direction_lane) intersection area: the intersection area is defined on the Lanelet map (see here). hatched road markings: the hatched road marking is defined on the Lanelet map (see here). freespace area: the freespace area is defined on the Lanelet map. (unstable)"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#future-extensionsunimplemented-parts","title":"Future extensions/Unimplemented parts","text":" "},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#debug","title":"Debug","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#show-rclcpp_debug-on-console","title":"Show RCLCPP_DEBUG on console","text":"

All debug messages are logged in the following namespaces.

Users can see debug information with the following command.

ros2 service call /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/config_logger logging_demo/srv/ConfigLogger \"{logger_name: 'planning.scenario_planning.lane_driving.behavior_planning.behavior_path_planner.static_obstacle_avoidance', level: DEBUG}\"\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#visualize-debug-markers","title":"Visualize debug markers","text":"

Users can enable publishing of debug markers with the following parameters.

debug:\nenable_other_objects_marker: false\nenable_other_objects_info: false\nenable_detection_area_marker: false\nenable_drivable_bound_marker: false\nenable_safety_check_marker: false\nenable_shift_line_marker: false\nenable_lane_marker: false\nenable_misc_marker: false\n

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#echoing-debug-message-to-find-out-why-the-objects-were-ignored","title":"Echoing debug message to find out why the objects were ignored","text":"

If, for some reason, no shift point is generated for your object, you can check the failure reason via ros2 topic echo.

To print the debug message, run the following command:

ros2 topic echo /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance_debug_message_array\n
"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#frequently-asked-questions","title":"Frequently asked questions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#target-objects","title":"Target objects","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#does-it-avoid-static-objects-and-dynamic-objects","title":"Does it avoid static objects and dynamic objects?","text":"

This module avoids static (stopped) objects and does not support avoidance of dynamic (moving) objects. Dynamic objects are handled by the dynamic obstacle avoidance module.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-type-class-of-object-would-it-avoid","title":"What type (class) of object would it avoid?","text":"

It avoids cars, trucks, buses, trailers, bicycles, motorcycles, pedestrians, and unknown objects by default. Details are in the Target object filtering section. The above objects are divided into vehicle type objects and non-vehicle type objects; the target object filtering differs between the two.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-does-it-judge-if-it-is-a-target-object-or-not","title":"How does it judge if it is a target object or not?","text":"

The conditions for vehicle type objects and non-vehicle type objects are different. However, the main idea is that static objects on road shoulders within the planned path are avoided. Below are some examples in which an avoidance path is generated for vehicle type objects.

For more details refer to vehicle type object and non-vehicle object.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-is-an-ambiguous-target","title":"What is an ambiguous target?","text":"

An ambiguous target refers to an object that may not be clearly identifiable as an avoidance target due to limitations of current Autoware (e.g. a parked vehicle in the center of a lane). This module avoids clearly defined static objects automatically, whereas ambiguous targets need some operator intervention.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-can-i-visualize-the-target-object","title":"How can I visualize the target object?","text":"

Target objects can be visualized using RViz, where the module's outputs, such as detected obstacles and planned avoidance paths, are displayed. For further information refer to the debug marker section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-can-i-check-the-lateral-distance-to-an-obstacle","title":"How can I check the lateral distance to an obstacle?","text":"

Currently, there isn't any topic that outputs the relative position between the ego vehicle and the target object. Visual confirmation in RViz is the only solution for now.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#does-it-avoid-multiple-objects-at-once","title":"Does it avoid multiple objects at once?","text":"

Yes, the module is capable of avoiding multiple static objects simultaneously. It generates multiple shift lines and calculates an avoidance path that navigates around each object. Details are explained in the How to decide path shape section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#area-used-when-avoiding","title":"Area used when avoiding","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#which-lanes-are-used-to-avoid-objects","title":"Which lanes are used to avoid objects?","text":"

This module is able to use not only the current lane but also adjacent and opposite lanes. Usable lanes can be selected in the configuration file, as noted in the shift length calculation section. It is assumed that there are no parked vehicles in the central lane when there are lanes on the left and right.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#would-it-avoid-objects-inside-intersections","title":"Would it avoid objects inside intersections?","text":"

Basically, the module assumes that there are no parked vehicles within intersections. Vehicles that follow the lane or merge into the ego lane are non-target objects. Vehicles waiting to make a right/left turn within an intersection can be avoided by expanding the drivable area in the configuration file, as noted in the drivable area expansion section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#does-it-generate-avoidance-paths-for-any-road-type","title":"Does it generate avoidance paths for any road type?","text":"

Drivable area can be expanded in the configuration file, as noted in the drivable area expansion section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#path-generation_1","title":"Path generation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-is-the-avoidance-path-generated","title":"How is the avoidance path generated?","text":"

The avoidance path is generated by modifying the current reference path to navigate around detected static objects. This is done using a rule-based shift line approach that ensures the vehicle remains within safe boundaries and follows the road while avoiding obstacles. Details are explained in the appendix.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#which-way-right-or-left-is-the-avoidance-path-generated","title":"Which way (right or left) is the avoidance path generated?","text":"

The avoidance direction depends on the target object's center of gravity. If the target object is on the left side of the ego lane, the avoidance path is generated on the right side. Currently, avoiding left-shifted obstacles from the left side is not supported (and likewise for right-shifted objects).

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#why-is-an-envelope-polygon-used-for-the-target-object","title":"Why is an envelope polygon used for the target object?","text":"

It is employed to reduce the influence of perception/tracking noise for each target object. The envelope polygon is a rectangle whose size depends on the object's polygon and a buffer parameter, and it is always parallel to the reference path. The envelope polygon is created from the latest one-shot envelope polygon and the previous envelope polygon. Details are explained in the How to prevent shift line chattering that is caused by perception noise section.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-happens-if-the-module-cannot-find-a-safe-avoidance-path","title":"What happens if the module cannot find a safe avoidance path?","text":"

If the module cannot find a safe avoidance path, the vehicle may stop or continue along its current path without performing an avoidance maneuver. If there is a target object and there is enough space to avoid it, the ego vehicle stops at a position where an avoidance path could be generated; this is called the yield maneuver. On the other hand, when there is not enough space, this module does nothing and the obstacle cruise planner is in charge of the object.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#there-seems-to-be-an-avoidance-path-but-the-vehicle-stops-what-is-happening","title":"There seems to be an avoidance path, but the vehicle stops. What is happening?","text":"

This situation occurs when the module is operating in AUTO mode and the target object is ambiguous, or when operating in MANUAL mode. The generated avoidance path is presented as a candidate and requires operator approval before execution. If the operator does not approve the path, the ego vehicle stops where it is possible to generate an avoidance path.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#operation","title":"Operation","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#what-are-the-benefits-of-using-manual-mode-over-auto-mode","title":"What are the benefits of using MANUAL mode over AUTO mode?","text":"

MANUAL mode allows the operator to have direct control over the approval of avoidance paths, which is particularly useful in situations where sensor data may be unreliable or ambiguous. This mode helps prevent unnecessary or incorrect avoidance maneuvers by requiring human validation before execution. It is recommended for environments where false positives are likely or when the sensor/perception system's performance is limited.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#can-this-module-be-customized-for-specific-vehicle-types-or-environments","title":"Can this module be customized for specific vehicle types or environments?","text":"

The module can be customized by adjusting the rules and parameters that define how it identifies and avoids obstacles. The avoidance maneuver itself does not change for specific vehicle types.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#appendix-shift-line-generation-pipeline","title":"Appendix: Shift line generation pipeline","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#flow-chart-of-the-process","title":"Flow chart of the process","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#how-to-decide-the-path-shape","title":"How to decide the path shape","text":"

Generate shift points for obstacles with a given lateral jerk. These points are integrated to generate an avoidance path. The detailed process flow for each case of obstacle placement is described below. The actual implementation is not separated for each case; the function corresponding to the multiple obstacle case (both directions) is always running.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#one-obstacle-case","title":"One obstacle case","text":"

The lateral shift distance to the obstacle is calculated, and then the shift point is generated from the ego vehicle speed and the given lateral jerk as shown in the figure below. A smooth avoidance path is then calculated based on the shift point.

Additionally, the following processes are executed in special cases.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#lateral-jerk-relaxation-conditions","title":"Lateral jerk relaxation conditions","text":""},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#minimum-velocity-relaxation-conditions","title":"Minimum velocity relaxation conditions","text":"

There is a problem in that the actual speed during avoidance cannot be known in advance. This is especially critical when the ego vehicle speed is 0. To solve this, the module provides a parameter for the minimum avoidance speed, which is used for the lateral jerk calculation when the vehicle speed is low.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#multiple-obstacle-case-one-direction","title":"Multiple obstacle case (one direction)","text":"

Generate shift points for multiple obstacles. All of them are merged to generate new shift points along the reference path. The new points are filtered (e.g. remove small-impact shift points), and the avoidance path is computed for the filtered shift points.

Merge process of raw shift points: check the shift length at each path point. If shift points overlap, the maximum shift value is selected for the same direction.

For the details of the shift point filtering, see filtering for shift points.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#multiple-obstacle-case-both-direction","title":"Multiple obstacle case (both direction)","text":"

Generate shift points for multiple obstacles. All of them are merged to generate new shift points. If there are areas where the desired shifts conflict in different directions, the sum of the maximum shift amounts of these areas is used as the final shift amount. The rest of the process is the same as in the case of one direction.
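
A sketch of the merge rule for both the one-direction and both-direction cases described above (hypothetical names; shifts are signed, with left as positive):

#include <algorithm>\n#include <vector>\n\n// Hypothetical sketch: at each path point, take the largest shift per direction,\n// then sum them so that conflicting directions partially cancel out.\ndouble mergeShiftsAtPoint(const std::vector<double> & shifts_at_point)\n{\ndouble max_left = 0.0;   // largest positive (left) shift\ndouble max_right = 0.0;  // largest-magnitude negative (right) shift\nfor (const double s : shifts_at_point) {\nmax_left = std::max(max_left, s);\nmax_right = std::min(max_right, s);\n}\nreturn max_left + max_right;\n}\n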

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#filtering-for-shift-points","title":"Filtering for shift points","text":"

The shift points are modified by a filtering process in order to get the expected shape of the avoidance path. It contains the following filters.

"},{"location":"planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/#appendix-all-parameters","title":"Appendix: All parameters","text":"

Location of the avoidance specific parameter configuration file: src/autoware/launcher/planning_launch/config/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/autoware_behavior_path_static_obstacle_avoidance_module/static_obstacle_avoidance.param.yaml.

Name Type Description Default Range resample_interval_for_planning float Path resample interval for avoidance planning path. 0.3 N/A resample_interval_for_output float Path resample interval for output path. A too-short interval increases the computational cost for subsequent modules. 4.0 N/A path_generation_method string Path generation method. shift_line_base ['shift_line_base', 'optimization_base', 'both'] use_lane_type string Drivable lane configuration. opposite_direction_lane ['current_lane', 'same_direction_lane', 'opposite_direction_lane'] use_hatched_road_markings boolean Extend the drivable area to the hatched road marking area. true N/A use_intersection_areas boolean Extend the drivable area to the intersection area. true N/A use_freespace_areas boolean Extend the drivable area to the freespace area. true N/A car.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 car.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A car.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. 0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. 0.7 N/A car.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A car.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A car.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A truck.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 truck.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A truck.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. 0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. 0.7 N/A truck.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A truck.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A truck.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A bus.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 bus.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A bus.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. 
0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. 0.7 N/A bus.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A bus.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A bus.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A trailer.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 trailer.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A trailer.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. 0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. 0.7 N/A trailer.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A trailer.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A trailer.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A unknown.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 unknown.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A unknown.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. -0.2 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. -0.2 N/A unknown.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.1 N/A unknown.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A unknown.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A motorcycle.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 motorcycle.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A motorcycle.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. 0.5 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. 
0.5 N/A motorcycle.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A motorcycle.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A motorcycle.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A bicycle.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 bicycle.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A bicycle.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. 0.3 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. 0.3 N/A bicycle.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A bicycle.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A bicycle.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A pedestrian.th_moving_speed float Objects with speed greater than this will be judged as moving ones. 1.0 \u22650.0 pedestrian.th_moving_time float Objects that keep moving for longer than this duration will be excluded from the avoidance targets. 1.0 N/A pedestrian.longitudinal_margin float Creates an additional longitudinal gap that will prevent the vehicle from getting too near to the obstacle. 0.0 N/A lateral_margin.soft_margin float Lateral distance between ego and avoidance targets. 0.7 N/A lateral_margin.hard_margin float Lateral distance between ego and avoidance targets. 0.5 N/A lateral_margin.hard_margin_for_parked_vehicle float Lateral distance between ego and avoidance targets. 0.5 N/A pedestrian.envelope_buffer_margin float The buffer between the raw bounding box of detected objects and the envelope polygon that is used for avoidance path generation. 0.5 N/A pedestrian.max_expand_ratio float This value will be applied to envelope_buffer_margin according to the distance between the ego and the object. 0.0 N/A pedestrian.th_error_eclipse_long_radius float This value is used to judge whether the error ellipse is too large. 0.6 N/A target_object.lower_distance_for_polygon_expansion float If the distance between the ego and the object is less than this, the expand ratio will be zero. 30.0 N/A target_object.upper_distance_for_polygon_expansion float If the distance between the ego and the object is larger than this, the expand ratio will be max_expand_ratio. 100.0 N/A target_type.car boolean Enable avoidance maneuver for CAR. true N/A target_type.truck boolean Enable avoidance maneuver for TRUCK. true N/A target_type.bus boolean Enable avoidance maneuver for BUS. true N/A target_type.trailer boolean Enable avoidance maneuver for TRAILER. true N/A target_type.unknown boolean Enable avoidance maneuver for UNKNOWN. true N/A target_type.bicycle boolean Enable avoidance maneuver for BICYCLE. true N/A target_type.motorcycle boolean Enable avoidance maneuver for MOTORCYCLE. 
true N/A target_type.pedestrian boolean Enable avoidance maneuver for PEDESTRIAN. true N/A target_filtering.object_check_goal_distance float If the distance between the object and the goal position is less than this parameter, the module does not return to the center line. 20.0 N/A target_filtering.object_check_return_pose_distance float If the distance between the object and the return position is less than this parameter, the module does not return to the center line. 20.0 N/A target_filtering.max_compensation_time float For compensation of detection loss. The object is registered once it is observed as an avoidance target. When the detection is lost, a timer starts, and the object is un-registered when the time exceeds this limit. 2.0 N/A detection_area.static boolean If true, the detection area longitudinal range is calculated based on current ego speed. false N/A detection_area.min_forward_distance float Minimum forward distance to search the avoidance target. 50.0 N/A detection_area.max_forward_distance float Maximum forward distance to search the avoidance target. 150.0 N/A detection_area.backward_distance float Backward distance to search the avoidance target. 10.0 N/A merging_vehicle.th_overhang_distance float Distance threshold to ignore a merging/deviating vehicle to/from the ego driving lane. The distance represents how much the object polygon overlaps the ego lane, and it is calculated from the polygon overhang point and the lane centerline. If the distance is more than this param, the module never avoids the object. (Basically, the ego stops behind it.) 0.5 N/A parked_vehicle.th_offset_from_centerline float Vehicles around the center line within this distance will be excluded from the avoidance targets. 1.0 N/A parked_vehicle.th_shiftable_ratio float Vehicles around the center line within this distance will be excluded from the avoidance targets. 0.8 \u22650.0\u22641.0 parked_vehicle.min_road_shoulder_width float Width considered as a road shoulder if the lane does not have a road shoulder target. 0.5 N/A avoidance_for_ambiguous_vehicle.policy string Ego behavior policy for an ambiguous vehicle. true ['auto', 'manual', 'ignore'] avoidance_for_ambiguous_vehicle.closest_distance_to_wait_and_see float Start the avoidance maneuver after the distance to the ambiguous vehicle becomes less than this param. 10.0 N/A condition.th_stopped_time float Never avoid an object whose stopped time is less than this param. 3.0 N/A condition.th_moving_distance float Never avoid an object that moves more than this param. 1.0 N/A traffic_light.front_distance float If the distance between the traffic light and the vehicle is less than this parameter, this module will ignore it. 100.0 N/A crosswalk.front_distance float If the front distance between the crosswalk and the vehicle is less than this parameter, this module will ignore it. 30.0 N/A crosswalk.behind_distance float If the back distance between the crosswalk and the vehicle is less than this parameter, this module will ignore it. 30.0 N/A wait_and_see.target_behaviors array This module doesn't avoid vehicles with these behaviors until the ego gets closer than the threshold. ['MERGING', 'DEVIATING'] N/A wait_and_see.th_closest_distance float Threshold to check whether the ego gets close enough to the ambiguous vehicle. 10.0 N/A intersection.yaw_deviation float Yaw deviation threshold param to judge if the object is not a merging or deviating vehicle. 0.349 N/A condition.th_stopped_time float This module delays the avoidance maneuver to observe vehicle behavior in freespace. 5.0 N/A target_type.car boolean Enable safety_check for CAR. 
true N/A target_type.truck boolean Enable safety_check for TRUCK. true N/A target_type.bus boolean Enable safety_check for BUS. true N/A target_type.trailer boolean Enable safety_check for TRAILER. true N/A target_type.unknown boolean Enable safety_check for UNKNOWN. false N/A target_type.bicycle boolean Enable safety_check for BICYCLE. true N/A target_type.motorcycle boolean Enable safety_check for MOTORCYCLE. true N/A target_type.pedestrian boolean Enable safety_check for PEDESTRIAN. true N/A safety_check.enable boolean Enable the safety check feature. true N/A safety_check.check_current_lane boolean Check objects on the current driving lane. true N/A safety_check.check_shift_side_lane boolean Check objects on the shift side lane. true N/A safety_check.check_other_side_lane boolean Check objects on the other side lane. true N/A safety_check.check_unavoidable_object boolean Check collision between ego and unavoidable objects. true N/A safety_check.check_other_object boolean Check collision between ego and non-avoidance-target objects. true N/A safety_check.check_all_predicted_path boolean Check all predicted paths of safety check target objects. true N/A safety_check.safety_check_backward_distance float Backward distance to search for dynamic objects. 100.0 N/A safety_check.hysteresis_factor_expand_rate float Hysteresis factor that is used for chattering prevention. 2.0 N/A safety_check.hysteresis_factor_safe_count integer Hysteresis count that is used for chattering prevention. 10 N/A safety_check.collision_check_yaw_diff_threshold float Max yaw difference between ego and object when doing the collision check. 3.1416 N/A safety_check.min_velocity float Minimum velocity of the ego vehicle's predicted path. 1.38 N/A safety_check.max_velocity float Maximum velocity of the ego vehicle's predicted path. 50.0 N/A safety_check.time_resolution float Time resolution for the ego vehicle's predicted path. 0.5 N/A safety_check.time_horizon_for_front_object float Time horizon for predicting front objects. 3.0 N/A safety_check.time_horizon_for_rear_object float Time horizon for predicting rear objects. 10.0 N/A safety_check.delay_until_departure float Delay until the ego vehicle departs. 1.0 N/A safety_check.extended_polygon_policy string See https://github.com/autowarefoundation/autoware.universe/pull/6336. along_path ['rectangle', 'along_path'] safety_check.expected_front_deceleration float The front object's maximum deceleration when the front vehicle performs sudden braking. -1.0 N/A safety_check.expected_rear_deceleration float The rear object's maximum deceleration when the rear vehicle performs sudden braking. -1.0 N/A safety_check.rear_vehicle_reaction_time float The reaction time of the rear vehicle driver, which starts from the driver noticing the sudden braking of the front vehicle until the driver steps on the brake. 2.0 N/A safety_check.rear_vehicle_safety_time_margin float The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 1.0 N/A safety_check.lateral_distance_max_threshold float The lateral distance threshold that is used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 2.0 N/A safety_check.longitudinal_distance_min_threshold float The longitudinal distance threshold that is used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 
3.0 N/A safety_check.longitudinal_velocity_delta_time float The time multiplier that is used to compute the actual gap between vehicles at each predicted point (not the RSS distance). 0.0 N/A lateral.th_avoid_execution float The lateral distance deviation threshold between the current path and the suggested avoidance point to execute avoidance. 0.09 N/A lateral.th_small_shift_length float Shift lines whose lateral offset is less than this will be applied together with other ones. 0.101 N/A lateral.soft_drivable_bound_margin float Keep distance to the drivable bound. 0.3 N/A lateral.hard_drivable_bound_margin float Keep distance to the drivable bound. 0.3 N/A lateral.max_right_shift_length float Maximum shift length for the right direction. 5.0 N/A lateral.max_left_shift_length float Maximum shift length for the left direction. 5.0 N/A lateral.max_deviation_from_lane float Used in the validation phase to check if the avoidance path is in the drivable area. 0.2 N/A lateral.ratio_for_return_shift_approval float This parameter is added to allow waiting for the return shift approval until the occlusion behind the avoidance target is clear. 0.5 \u22650.0\u22641.0 longitudinal.min_prepare_time float The avoidance shift starts from a point at least this time × ego speed ahead. 1.0 N/A longitudinal.max_prepare_time float The avoidance shift starts from a point this time × ego speed ahead if possible. 2.0 N/A longitudinal.min_prepare_distance float Minimum prepare distance. 1.0 N/A longitudinal.min_slow_down_speed float Minimum slow-down speed for the avoidance prepare section. 1.38 N/A longitudinal.buf_slow_down_speed float Buffer for controller tracking error. Basically, the vehicle cannot always follow the velocity profile precisely. Therefore, the module inserts a speed lower than the target speed that satisfies the conditions to avoid the object within accel/jerk constraints, so that the avoidance path can always be output even if the current speed is a little higher than the target speed. 0.56 N/A longitudinal.nominal_avoidance_speed float Nominal avoidance speed. 8.33 N/A longitudinal.consider_front_overhang boolean Flag to consider the vehicle front overhang in the shift line generation logic. True N/A longitudinal.consider_rear_overhang boolean Flag to consider the vehicle rear overhang in the shift line generation logic. True N/A goal.enable boolean Insert a stop point in order to return to the original lane before reaching the goal. true N/A goal.buffer float Buffer distance to return to the original lane before reaching the goal. 3.0 N/A traffic_light.enable boolean Insert a stop point in order to return to the original lane before reaching a traffic light. true N/A traffic_light.buffer float Buffer distance to return to the original lane before reaching a traffic light. 3.0 N/A stop.max_distance float Maximum stop distance in the situation where the avoidance maneuver is not approved or in the yield maneuver. 20.0 N/A stop.stop_buffer float Buffer distance in the situation where the avoidance maneuver is not approved or in the yield maneuver. 1.0 N/A yield.enable boolean Flag to enable the yield maneuver. true N/A yield.enable_during_shifting boolean Flag to enable the yield maneuver during shifting. false N/A cancel.enable boolean Flag to enable the cancel maneuver. true N/A force.duration_time float Force deactivation duration time. 2.0 N/A policy.make_approval_request string Policy for RTC requests. Select per_shift_line or per_avoidance_maneuver. per_shift_line: request approval for each shift line. per_avoidance_maneuver: request approval for the avoidance maneuver (avoid + return). 
per_shift_line ['per_shift_line', 'per_avoidance_maneuver'] policy.deceleration string Policy for vehicle slow-down behavior. Select best_effort or reliable. best_effort: slow-down deceleration & jerk are limited by constraints, but there is a possibility that the vehicle can't stop in front of the front vehicle. reliable: insert a stop or slow-down point while ignoring decel/jerk constraints; this increases the chance to avoid, but uncomfortable deceleration may happen. best_effort ['reliable', 'best_effort'] policy.lateral_margin string Policy for the avoidance lateral margin. Select best_effort or reliable. best_effort: output the avoidance path with a shortened lateral margin when there is not enough longitudinal margin to avoid. reliable: the module outputs the avoidance path with a safe (RTC cooperate) state only when the vehicle can avoid with the expected lateral margin. best_effort ['reliable', 'best_effort'] policy.use_shorten_margin_immediately boolean If true, the module doesn't wait for deceleration and outputs the avoidance path with the best-effort margin. true N/A lateral.velocity array Velocity array to decide the current lateral accel/jerk limit. [1.0, 1.38, 11.1] N/A lateral.max_accel_values array The avoidance path gets sharp up to this accel limit when there is not enough distance from the ego. [0.5, 0.5, 0.5] N/A lateral.min_jerk_values array The avoidance path is generated with this jerk when there is enough distance from the ego. [0.2, 0.2, 0.2] N/A lateral.max_jerk_values array The avoidance path gets sharp up to this jerk limit when there is not enough distance from the ego. [1.0, 1.0, 1.0] N/A longitudinal.nominal_deceleration float Nominal deceleration limit. -1.0 N/A longitudinal.nominal_jerk float Nominal jerk limit. 0.5 N/A longitudinal.max_deceleration float Max deceleration limit. -1.5 N/A longitudinal.max_jerk float Max jerk limit. 1.0 N/A longitudinal.max_acceleration float Maximum acceleration during avoidance. 0.5 N/A longitudinal.min_velocity_to_limit_max_acceleration float If the ego speed is faster than this param, the module applies the acceleration limit max_acceleration. 2.78 N/A trim.quantize_size float Lateral shift length quantize size. 0.1 N/A trim.th_similar_grad_1 float Lateral shift length threshold to merge similar gradient shift lines. 0.1 N/A trim.th_similar_grad_2 float Lateral shift length threshold to merge similar gradient shift lines. 0.2 N/A trim.th_similar_grad_3 float Lateral shift length threshold to merge similar gradient shift lines. 0.5 N/A debug.enable_other_objects_marker boolean Publish other objects marker. false N/A debug.enable_other_objects_info boolean Publish other objects detail information. false N/A debug.enable_detection_area_marker boolean Publish detection area. false N/A debug.enable_drivable_bound_marker boolean Publish drivable area boundary. false N/A debug.enable_safety_check_marker boolean Publish safety check information. false N/A debug.enable_shift_line_marker boolean Publish shift line information. false N/A debug.enable_lane_marker boolean Publish lane information. false N/A debug.enable_misc_marker boolean Publish misc markers. false N/A"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#blind-spot","title":"Blind Spot","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#role","title":"Role","text":"

The blind spot module checks for possible collisions with bicycles and pedestrians running on its left/right side while turning left/right before junctions.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#activation-timing","title":"Activation Timing","text":"

This function is activated when the lane id of the target path has an intersection label (i.e. the turn_direction attribute is left or right).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Sets a stop line, a pass judge line, a detection area, and a conflict area based on map information and the ego position.

Stop/Go state: When both conditions are met for any object, this module transitions to the \"stop\" state and inserts zero velocity to stop the vehicle.

In order to avoid a rapid stop, the \"stop\" judgment is not executed after the pass judge line is passed.

Once a \"stop\" is judged, it will not transit to the \"go\" state until the \"go\" judgment continues for a certain period in order to prevent chattering of the state (e.g. 2 seconds).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_line_margin double [m] a margin that the vehicle tries to stop before stop_line backward_length double [m] distance from closest path point to the edge of beginning point. ignore_width_from_center_line double [m] ignore threshold that vehicle behind is collide with ego vehicle or not max_future_movement_time double [s] maximum time for considering future movement of object adjacent_extend_width double [m] if adjacent lane e.g. bicycle only lane exists, blind_spot area is expanded by this length"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_blind_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#crosswalk","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#role","title":"Role","text":"

This module judges whether the ego should stop in front of the crosswalk in order to provide safe passage for crosswalk users, such as pedestrians and bicycles, based on the objects' behavior and the surrounding traffic.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#features","title":"Features","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#yield-the-way-to-the-pedestrians","title":"Yield the Way to the Pedestrians","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#target-object","title":"Target Object","text":"

The crosswalk module handles objects of the types defined by the following parameters in the object_filtering.target_object namespace.

Parameter Unit Type Description unknown [-] bool whether to consider and stop for UNKNOWN objects pedestrian [-] bool whether to consider and stop for PEDESTRIAN objects bicycle [-] bool whether to consider and stop for BICYCLE objects motorcycle [-] bool whether to consider and stop for MOTORCYCLE objects
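This per-class switch amounts to a simple lookup. The sketch below is illustrative only: the label constants are defined locally (the real module uses autoware_perception_msgs::msg::ObjectClassification labels) and the parameter values shown are placeholders, not the module's defaults.

# Hedged sketch of the object_filtering.target_object switch.
UNKNOWN, PEDESTRIAN, BICYCLE, MOTORCYCLE = range(4)  # placeholder label constants

# Placeholder parameter values, not the module's defaults.
target_object = {UNKNOWN: True, PEDESTRIAN: True, BICYCLE: True, MOTORCYCLE: False}

def is_target_object(label):
    # Classes not listed (e.g. cars) are never crosswalk yield targets here.
    return target_object.get(label, False)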

In order to handle the crosswalk users crossing the neighborhood but outside the crosswalk, the crosswalk module creates an attention area around the crosswalk, shown as the yellow polygon in the figure. If the object's predicted path collides with the attention area, the object will be targeted for yield.

The neighborhood is defined by the following parameter in the object_filtering.target_object namespace.

Parameter Unit Type Description crosswalk_attention_range [m] double the detection area is defined as -X meters before the crosswalk to +X meters behind the crosswalk"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#stop-position","title":"Stop Position","text":"

First of all, a distance of at least stop_distance_from_object [m] is always kept between the ego and the target object for safety.

When the stop line exists in the lanelet map, the stop position is calculated based on the line. When the stop line does NOT exist in the lanelet map, the stop position is calculated by keeping stop_distance_from_crosswalk [m] between the ego and the crosswalk.

As an exceptional case, if a pedestrian (or bicycle) is crossing a wide crosswalk such as those seen at scramble intersections, and the pedestrian position is more than far_object_threshold meters away from the stop line, the actual stop position is determined by stop_distance_from_object and the pedestrian position, not by the stop line.

In the stop_position namespace, the following parameters are defined.

Parameter Type Description stop_position_threshold [m] double If the ego vehicle has stopped closer to the stop line than this value, this module assumes itself to have achieved yielding. stop_distance_from_crosswalk [m] double distance of the generated stop line from the crosswalk, for Lanelet2 maps with no explicit stop lines far_object_threshold [m] double If objects cross more than this many meters behind the stop line, the stop position is determined according to the object position (stop_distance_from_object meters before the object); this handles the case where the crosswalk width is very wide stop_distance_from_object [m] double the vehicle decelerates to be able to stop in front of the object with this margin"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#yield-decision","title":"Yield decision","text":"

The module makes a decision to yield only when the pedestrian traffic light is GREEN or UNKNOWN. The decision is based on the following variables, along with the calculation of the collision point.

We classify ego behavior at crosswalks into three categories according to the relative relationship between TTC and TTV [1].

The boundary of A and B is interpolated from ego_pass_later_margin_x and ego_pass_later_margin_y. In the case of the upper figure, ego_pass_later_margin_x is {0, 1, 2} and ego_pass_later_margin_y is {1, 4, 6}. In the same way, the boundary of B and C is calculated from ego_pass_first_margin_x and ego_pass_first_margin_y. In the case of the upper figure, ego_pass_first_margin_x is {3, 5} and ego_pass_first_margin_y is {0, 1}.
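As a concrete illustration, the boundary interpolation and the resulting A/B/C classification can be sketched as below. This is a minimal sketch assuming piecewise-linear interpolation of the margin parameters; the function and variable names are illustrative, not the module's actual implementation.

# Minimal sketch of the A/B/C classification from TTC and TTV, assuming
# piecewise-linear interpolation (np.interp clamps at the vector ends).
import numpy as np

EGO_PASS_LATER, YIELD, EGO_PASS_FIRST = range(3)  # regions A, B, C

def classify(ttc, ttv,
             ego_pass_later_margin_x=(0.0, 1.0, 2.0),
             ego_pass_later_margin_y=(1.0, 4.0, 6.0),
             ego_pass_first_margin_x=(3.0, 5.0),
             ego_pass_first_margin_y=(0.0, 1.0)):
    later_margin = np.interp(ttv, ego_pass_later_margin_x, ego_pass_later_margin_y)
    first_margin = np.interp(ttc, ego_pass_first_margin_x, ego_pass_first_margin_y)
    if ttv + later_margin < ttc:   # object passes first; ego need not stop (A)
        return EGO_PASS_LATER
    if ttc + first_margin < ttv:   # ego passes first; ego need not stop (C)
        return EGO_PASS_FIRST
    return YIELD                   # otherwise yield and insert a stop point (B)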

If the red signal is indicated for the corresponding crosswalk, the ego does not yield to the pedestrians.

In the pass_judge namespace, the following parameters are defined.

Parameter Type Description ego_pass_first_margin_x [[s]] double time to collision margin vector for the ego-pass-first situation (the module judges that ego doesn't have to stop under the TTC + MARGIN < TTV condition) ego_pass_first_margin_y [[s]] double time to vehicle margin vector for the ego-pass-first situation (the module judges that ego doesn't have to stop under the TTC + MARGIN < TTV condition) ego_pass_first_additional_margin [s] double additional time margin for the ego-pass-first situation to suppress chattering ego_pass_later_margin_x [[s]] double time to vehicle margin vector for the object-pass-first situation (the module judges that ego doesn't have to stop under the TTV + MARGIN < TTC condition) ego_pass_later_margin_y [[s]] double time to collision margin vector for the object-pass-first situation (the module judges that ego doesn't have to stop under the TTV + MARGIN < TTC condition) ego_pass_later_additional_margin [s] double additional time margin for the object-pass-first situation to suppress chattering"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#smooth-yield-decision","title":"Smooth Yield Decision","text":"

If the object is stopped near the crosswalk but has no intention of walking, a situation can arise in which the ego continues to yield the right-of-way to the object. To prevent such a deadlock situation, the ego will cancel yielding depending on the situation.

For an object that is stopped around the crosswalk but has no intention to walk (*1), after the ego has kept stopping to yield for a specific time (*2), the ego cancels the yield and starts driving.

*1: The timeout is calculated by interpolating over the distance between the object and the crosswalk, using distance_set_for_no_intention_to_walk as keys and timeout_set_for_no_intention_to_walk as values.

In the pass_judge namespace, the following parameters are defined.

Parameter Type Description distance_set_for_no_intention_to_walk [[m]] double distance keys used to interpolate the timeout for the no-intention-to-walk decision timeout_set_for_no_intention_to_walk [[s]] double timeout values used to interpolate the timeout for the no-intention-to-walk decision
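The interpolation in (*1) reduces to a key/value lookup. The sketch below assumes piecewise-linear interpolation with flat extrapolation outside the key range; the numeric values are placeholders, not the module's defaults.

# Hedged sketch of (*1): the yield timeout shrinks as the object gets
# farther from the crosswalk.
import numpy as np

def no_intention_timeout(distance_to_crosswalk,
                         distance_set_for_no_intention_to_walk=(1.0, 5.0),
                         timeout_set_for_no_intention_to_walk=(3.0, 0.0)):
    return float(np.interp(distance_to_crosswalk,
                           distance_set_for_no_intention_to_walk,
                           timeout_set_for_no_intention_to_walk))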

*2: In the pass_judge namespace, the following parameters are defined.

Parameter Type Description timeout_ego_stop_for_yield [s] double If the ego maintains the stop for this amount of time, then the ego proceeds, assuming it has stopped for long enough."},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#new-object-handling","title":"New Object Handling","text":"

Due to limited perception performance, where a tree or pole is recognized as a pedestrian, or due to tracking failures in crowds or occlusion, a new pedestrian (= a pedestrian with a new ID) may suddenly appear unexpectedly even if the surrounding environment does not change. If this happens while the ego is about to pass the crosswalk, the ego will stop suddenly.

To deal with this issue, the option disable_yield_for_new_stopped_object is provided. If it is set to true, the yield decisions around a crosswalk with a traffic light will ignore newly detected stopped objects.

In the pass_judge namespace, the following parameters are defined.

Parameter Type Description disable_yield_for_new_stopped_object [-] bool If set to true, the new stopped object will be ignored around the crosswalk with a traffic light"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#stuck-prevention-on-the-crosswalk","title":"Stuck Prevention on the Crosswalk","text":"

This feature prevents the ego from stopping on the crosswalk. When there is a low-speed or stopped vehicle ahead of the crosswalk, and there is not enough space between the crosswalk and the vehicle, the crosswalk module plans a stop before the crosswalk even if there are no pedestrians or bicycles.

The stop position is inserted only where the deceleration and jerk constraints min_acc, min_jerk, and max_jerk are met. If the ego cannot stop before the crosswalk under these constraints, the stop position will move forward.

In the stuck_vehicle namespace, the following parameters are defined.

Parameter Unit Type Description stuck_vehicle_velocity [m/s] double velocity threshold below which the target vehicle is considered stopped max_stuck_vehicle_lateral_offset [m] double maximum lateral offset of the target vehicle position required_clearance [m] double clearance to be secured between the ego and the vehicle ahead min_acc [m/ss] double minimum acceleration to stop min_jerk [m/sss] double minimum jerk to stop max_jerk [m/sss] double maximum jerk to stop"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#safety-slow-down-behavior","title":"Safety Slow Down Behavior","text":"

In the current Autoware implementation, if no target object is detected around a crosswalk, the ego vehicle will not slow down for the crosswalk. However, it may be desirable to slow down in situations where, for example, there are blind spots. Such a situation can be handled by setting some tags on the related crosswalk as instructed in the lanelet2_format_extension.md document.

Parameter Type Description slow_velocity [m/s] double target vehicle velocity when the module receives a slow down command from FOA max_slow_down_jerk [m/sss] double minimum jerk deceleration for safe braking max_slow_down_accel [m/ss] double minimum accel deceleration for safe braking no_relax_velocity [m/s] double if the current velocity is less than this value, ego always stops at the stop position (without relaxing the deceleration constraints)"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#occlusion","title":"Occlusion","text":"

This feature makes ego slow down for a crosswalk that is occluded.

Occlusion of the crosswalk is determined using the occupancy grid. An occlusion is a square area of occluded cells (i.e., cells whose values are between free_space_max and occupied_min) with a side length of min_size. If an occlusion is found within range of the crosswalk, then the velocity limit at the crosswalk is set to slow_down_velocity (or higher, so as not to violate the limits set by max_slow_down_jerk and max_slow_down_accel). The range is calculated from the intersection between the ego path and the crosswalk, and is equal to the time taken by ego to reach the crosswalk multiplied by occluded_object_velocity. This range is meant to be large when ego is far from the crosswalk and small when ego is close.
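The range computation can be written down directly. The helper below is a simplified sketch of the relation just described; the function and variable names are illustrative assumptions.

# Illustrative sketch: the occlusion search range grows with the time ego
# needs to reach the crosswalk, scaled by the assumed hidden-object velocity.
def occlusion_detection_range(dist_ego_to_crosswalk_m, ego_velocity_mps,
                              occluded_object_velocity_mps):
    time_to_crosswalk_s = dist_ego_to_crosswalk_m / max(ego_velocity_mps, 0.1)
    return time_to_crosswalk_s * occluded_object_velocity_mps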

In order to avoid flickering decisions, a time buffer can be used such that the decision to add (or remove) the slow down is only taken after an occlusion is detected (or not detected) for a consecutive time defined by the time_buffer parameter.

To ignore occlusions when the crosswalk has a traffic light, ignore_with_traffic_light should be set to true.

To ignore temporary occlusions caused by moving objects, ignore_behind_predicted_objects should be set to true. By default, occlusions behind an object with velocity higher than ignore_velocity_thresholds.default are ignored. This velocity threshold can be specified depending on the object type by specifying the object class label and velocity threshold in the parameter lists ignore_velocity_thresholds.custom_labels and ignore_velocity_thresholds.custom_thresholds. To inflate the masking behind objects, their footprint can be made bigger using extra_predicted_objects_size.
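The per-class threshold lookup described above can be sketched as follows. This is a hedged illustration: the CAR constant and the numeric thresholds are placeholders, not the module's defaults.

# Sketch of ignore_velocity_thresholds: an occlusion behind an object is
# ignored when the object moves at least as fast as its class threshold.
CAR = 1  # placeholder label constant for illustration

def ignore_occlusion_behind(label, velocity, default_threshold=0.5,
                            custom_thresholds=None):
    # custom_labels/custom_thresholds from the parameter lists, merged into
    # one dict here; the values are placeholders.
    thresholds = custom_thresholds or {CAR: 1.0}
    return velocity >= thresholds.get(label, default_threshold)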

Parameter Unit Type Description enable [-] bool if true, ego will slow down around crosswalks that are occluded occluded_object_velocity [m/s] double assumed velocity of objects that may come out of the occluded space slow_down_velocity [m/s] double slow down velocity time_buffer [s] double consecutive time with/without an occlusion to add/remove the slowdown min_size [m] double minimum size of an occlusion (square side size) free_space_max [-] double maximum value of a free space cell in the occupancy grid occupied_min [-] double minimum value of an occupied cell in the occupancy grid ignore_with_traffic_light [-] bool if true, occlusions at crosswalks with traffic lights are ignored ignore_behind_predicted_objects [-] bool if true, occlusions behind predicted objects are ignored ignore_velocity_thresholds.default [m/s] double occlusions are only ignored behind objects with a higher or equal velocity ignore_velocity_thresholds.custom_labels [-] string list labels for which to define a non-default velocity threshold (see autoware_perception_msgs::msg::ObjectClassification for all the labels) ignore_velocity_thresholds.custom_thresholds [-] double list velocities of the custom labels extra_predicted_objects_size [m] double extra size added to the objects for masking the occlusions"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#others","title":"Others","text":"

In the common namespace, the following parameters are defined.

Parameter Unit Type Description show_processing_time [-] bool whether to show the processing time traffic_light_state_timeout [s] double timeout threshold for the traffic light signal enable_rtc [-] bool if true, the scene modules must be approved by the RTC (request to cooperate) function. if false, the module can run without approval from RTC."},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#known-issues","title":"Known Issues","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#debugging","title":"Debugging","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#visualization-of-debug-markers","title":"Visualization of debug markers","text":"

/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/crosswalk shows the following markers.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#visualization-of-time-to-collision","title":"Visualization of Time-To-Collision","text":"
ros2 run autoware_behavior_velocity_crosswalk_module time_to_collision_plotter.py\n

enables you to visualize the time to collision of the ego and each pedestrian as in the following figure. The label of each plot is <crosswalk module id>-<pedestrian uuid>.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#trouble-shooting","title":"Trouble Shooting","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#behavior","title":"Behavior","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#parameter-tuning","title":"Parameter Tuning","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_crosswalk_module/#referencesexternal-links","title":"References/External links","text":"

[1] Minami Sato, Shoichi Hayasaka, Masayuki Shimizu, Takahiko Murano, "Modeling of Driver Risk-Avoidance Behavior toward Crossing Pedestrians", Transactions of Society of Automotive Engineers of Japan, 2013, Vol. 44, No. 3, pp. 931-936.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#detection-area","title":"Detection Area","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#role","title":"Role","text":"

If a pointcloud is detected in a detection area defined on the map, stop planning is executed at the predetermined point.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a detection area on the target lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description use_dead_line bool [-] weather to use dead line or not use_pass_judge_line bool [-] weather to use pass judge line or not state_clear_time double [s] when the vehicle is stopping for certain time without incoming obstacle, move to STOPPED state stop_margin double [m] a margin that the vehicle tries to stop before stop_line dead_line_margin double [m] ignore threshold that vehicle behind is collide with ego vehicle or not hold_stop_margin_distance double [m] parameter for restart prevention (See Algorithm section) distance_to_judge_over_stop_line double [m] parameter for judging that the stop line has been crossed suppress_pass_judge_when_stopping bool [m] parameter for suppressing pass judge when stopping"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#inner-workings-algorithm","title":"Inner-workings / Algorithm","text":"
  1. Gets a detection area and stop line from map information and confirms if there is pointcloud in the detection area
  2. Inserts stop point l[m] in front of the stop line
  3. Inserts a pass judge point to a point where the vehicle can stop with a max deceleration
  4. Sets velocity as zero behind the stop line when the ego-vehicle is in front of the pass judge point
  5. If the ego vehicle has already passed the pass judge point, it does not stop and passes through.
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_detection_area_module/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop again once it starts moving (due to poor vehicle control performance), it may overrun the stopping position, which should be strictly observed, when it restarts in order to approach a nearby stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters of the module's stop point (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for its stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.
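The rule reduces to a single comparison. A minimal sketch, with illustrative names and a placeholder default value:

# Hedged sketch of restart prevention: keep the current stop position when
# ego already stopped within hold_stop_margin_distance of the module's stop point.
def hold_current_stop(front_to_stop_line_m, is_stopped,
                      hold_stop_margin_distance=0.5):
    # 0.5 m is a placeholder, not the module's default.
    return is_stopped and front_to_stop_line_m < hold_stop_margin_distance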

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/","title":"Intersection","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersection","title":"Intersection","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#role","title":"Role","text":"

The intersection module is responsible for safely passing urban intersections by:

  1. checking collisions with upcoming vehicles
  2. recognizing the occluded area in the intersection
  3. reacting to each color/shape of associated traffic lights

This module is designed to be agnostic to left-hand/right-hand traffic rules and to work for crossroads, T-shaped junctions, etc. Roundabouts are not formally supported in this module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#activation-condition","title":"Activation condition","text":"

This module is activated when the path contains the lanes with turn_direction tag. More precisely, if the lane_ids of the path contain the ids of those lanes, corresponding instances of intersection module are activated on each lane respectively.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#requirementslimitations","title":"Requirements/Limitations","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#attention-area","title":"Attention area","text":"

The attention area in the intersection is defined as the set of lanes that conflict with the ego path plus their preceding lanes, up to common.attention_area_length meters. By default the RightOfWay tag is not set, so the attention area covers all the conflicting lanes and their preceding lanes, as shown in the first row. The RightOfWay tag is used to rule out the lanes over which the ego lane has priority, given the traffic light relation and the turn_direction priority. In the second row, the purple lanes are set as the yield_lane of the ego_lane in the RightOfWay tag.

intersection_area, which is supposed to be defined on the HDMap, is an area covering the entire intersection.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#in-phaseanti-phase-signal-group","title":"In-phase/Anti-phase signal group","text":"

The terms \"in-phase signal group\" and \"anti-phase signal group\" are introduced to distinguish the lanes by the timing of traffic light regulation as shown in below figure.

The set of intersection lanes whose color is in sync with lane L1 is called the in-phase signal group of L1, and the set of remaining lanes is called the anti-phase signal group.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#how-towhy-set-rightofway-tag","title":"How-to/Why set RightOfWay tag","text":"

Ideally, the RightOfWay tag would be unnecessary if ego had perfect knowledge of all traffic signal information, because:

That would allow ego to generate the attention area dynamically using real-time traffic signal information. However, this ideal condition rarely holds unless the traffic signal information is provided through the infrastructure. Also, there may be very complicated/bad intersection maps where multiple lanes overlap in a complex manner.

To help the intersection module consider only a limited set of lanes, the RightOfWay tag needs to be properly set.

The following table shows an example of how to set yield_lanes for each lane in an intersection with/without traffic lights. Since it is not apparent how to uniquely determine the signal phase group for a set of intersection lanes in a geometric/topological manner, yield_lane needs to be set manually. Straight lanes with traffic lights are exceptionally handled to detect no lanes, because they commonly have priority over all the other lanes, so no RightOfWay setting is required.

turn direction of right_of_way yield_lane(with traffic light) yield_lane(without traffic light) straight no need to set yield_lane (this case is special) left/right conflicting lanes of in-phase group left(Left hand traffic) all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group right conflicting lanes of in-phase group right(Left hand traffic) all conflicting lanes of the anti-phase group no yield_lane left(Right hand traffic) all conflicting lanes of the anti-phase group no yield_lane right(Right hand traffic) all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group left conflicting lanes of in-phase group

This setting gives the following attention_area configurations.

For a complex/bad intersection map like the one illustrated below, an additional RightOfWay setting may be necessary.

The bad points are:

  1. The ego lane overlaps an adjacent lane of the in-phase group. In this case you need to set this lane as yield_lane additionally, because otherwise the attention area is generated for its preceding lanes as well, which may cause an unwanted stop.
  2. The ego lane overlaps an unrelated lane. In this case the lane is right-turn only and there is no chance of collision in theory, but you need to set this lane as yield_lane additionally for the same reason as (1).
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#possible-stop-lines","title":"Possible stop lines","text":"

Following figure illustrates important positions used in the intersection module. Note that each solid line represents ego front line position and the corresponding dot represents the actual inserted stop point position for the vehicle frame, namely the center of the rear wheel.

To precisely calculate stop positions, the path is interpolated at a fixed interval of common.path_interpolation_ds.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#target-objects","title":"Target objects","text":"

For stuck vehicle detection and collision detection, this module checks car, bus, truck, trailer, motorcycle, and bicycle type objects.

Objects that satisfy all of the following conditions are considered as target objects (possible collision objects):

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#overview-of-decision-process","title":"Overview of decision process","text":"

There are several behaviors depending on the scene.

behavior scene action Safe Ego detected neither occlusion nor collision Ego passes the intersection StuckStop The exit of the intersection is blocked by a traffic jam Ego stops before the intersection or the boundary of the attention area YieldStuck Another vehicle stops to yield to ego Ego stops before the intersection or the boundary of the attention area NonOccludedCollisionStop Ego detects no occlusion but detects a collision Ego stops at default_stopline FirstWaitBeforeOcclusion Ego detected occlusion when entering the intersection Ego stops at default_stopline at first PeekingTowardOcclusion Ego detected occlusion but no collision within the FOV (after FirstWaitBeforeOcclusion) Ego approaches the boundary of the attention area slowly OccludedCollisionStop Ego detected both occlusion and collision (after FirstWaitBeforeOcclusion) Ego stops immediately FullyPrioritized Ego is fully prioritized by the RED/Arrow signal Ego only considers vehicles still moving inside the intersection; occlusion is ignored OverPassJudgeLine Ego is already inside the attention area and/or cannot stop before the boundary of the attention area Ego no longer detects collision/occlusion and passes the intersection

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"

If there is any object on the path inside the intersection or on the intersection exit lane (up to stuck_vehicle.stuck_vehicle_detect_dist) whose velocity is less than the threshold (stuck_vehicle.stuck_vehicle_velocity_threshold), the object is regarded as a stuck vehicle. If stuck vehicles exist, this module inserts a stopline a certain distance (= default_stopline_margin) before the region overlapped with other lanes. The stuck vehicle detection area is generated based on the planned path, so the stuck vehicle stopline is not inserted if the upstream module generated an avoidance path.
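Reduced to its essentials, this is a distance/velocity test. A hedged sketch with illustrative names and a placeholder threshold:

# Sketch of stuck-vehicle detection: an object inside the intersection/exit
# detection area is "stuck" when slower than the velocity threshold.
def is_stuck_vehicle(inside_detection_area, object_velocity_mps,
                     stuck_vehicle_velocity_threshold=0.8):
    # 0.8 m/s is a placeholder, not the module's default.
    return inside_detection_area and object_velocity_mps < stuck_vehicle_velocity_threshold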

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#yield-stuck-vehicle-detection","title":"Yield stuck vehicle detection","text":"

If there is any stopped object on an attention lanelet within yield_stuck.distance_threshold before the point where that lanelet intersects the ego path, the object is regarded as yielding to the ego vehicle. In this case ego is given the right-of-way by the yielding object, but this module inserts a stopline to prevent entry into the intersection. This scene happens when the object is yielding to ego or when the object is waiting before a crosswalk around the exit of the intersection.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#collision-detection","title":"Collision detection","text":"

The following process is performed for the target objects to determine whether ego can pass the intersection safely. If it is judged that ego cannot pass the intersection with enough margin, this module inserts a stopline on the path.

  1. predict the time \(t\) when the object intersects with the ego path for the first time from the predicted path time steps. Only predicted paths whose confidence is greater than collision_detection.min_predicted_path_confidence are used.
  2. detect collision between the predicted path and ego's predicted path in the following process
    1. calculate the collision interval of [\\(t\\) - collision_detection.collision_start_margin_time, \\(t\\) + collision_detection.collision_end_margin_time]
    2. calculate the passing area of ego during the collision interval from the array of (time, distance) obtained by smoothed velocity profile
    3. check if the ego passing area and the object predicted path interval collide
  3. if collision is detected, the module inserts a stopline
  4. if ego is over the pass_judge_line, collision checking is skipped to avoid sudden braking and/or unnecessary stop in the middle of the intersection

The parameters collision_detection.collision_start_margin_time and collision_detection.collision_end_margin_time can be interpreted as follows:
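In effect, the check is an interval-overlap test between the object's collision interval around \(t\) and ego's passage interval over the conflict area. Below is a minimal sketch under that reading; all names are illustrative assumptions.

# Hedged sketch of steps 1-2: flag a collision when the object interval
# [t - start_margin, t + end_margin] overlaps ego's passing interval
# [ego_enter_t, ego_exit_t] derived from the smoothed (time, distance) profile.
def collision_detected(t, ego_enter_t, ego_exit_t,
                       collision_start_margin_time, collision_end_margin_time):
    obj_start = t - collision_start_margin_time
    obj_end = t + collision_end_margin_time
    return ego_enter_t < obj_end and obj_start < ego_exit_t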

If collision is detected, the state transits to \"STOP\" immediately. On the other hand, the state does not transit to \"GO\" unless safe judgement continues for a certain period collision_detection.collision_detection_hold_time to prevent the chattering of decisions.

Currently, the intersection module uses the motion_velocity_smoother feature to precisely calculate the ego velocity profile along the intersection lane under longitudinal/lateral constraints. If the flag collision_detection.velocity_profile.use_upstream is true, the target velocity profile of the original path is used. Otherwise the target velocity is set to collision_detection.velocity_profile.default_velocity. In the trajectory smoothing process, the target velocity at/before the ego trajectory point is set to the current ego velocity. The smoothed trajectory is then converted to an array of (time, distance) pairs that indicate the arrival time at each trajectory point on the path from the current ego position. You can visualize this array by adding the lane id to debug.ttc and running

ros2 run behavior_velocity_intersection_module ttc.py --lane_id <lane_id>\n

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#about-use_upstream_velocity-flag","title":"about use_upstream_velocity flag","text":"

There are some use cases where ego should check collision before entering the intersection considering the temporal stop by walkway/crosswalk module around the exit of the intersection, because their stop position can be inside the intersection and it could bother upcoming vehicles. By setting the flag collision_detection.velocity_profile.use_upstream to true and running the walkway/crosswalk module prior to this module, ego velocity profile is calculated considering their velocity and stop positions.

As illustrated in the figure below, if an upstream module inserted a stopline, the ego position profile will remain there for an infinite time, which leads to the judgement that ego cannot exit the intersection during the interval [\(t\) - collision_detection.collision_start_margin_time, \(t\) + collision_detection.collision_end_margin_time]. In this way, this feature considers possible collisions over an infinite horizon if stoplines exist ahead of the ego position (practically the prediction horizon is limited, so the collision check horizon is bounded).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-detection","title":"Occlusion detection","text":"

If the flag occlusion.enable is true this module checks if there is sufficient field of view (FOV) on the attention area up to occlusion.occlusion_attention_area_length. If FOV is not clear enough ego first makes a brief stop at default_stopline for occlusion.temporal_stop_time_before_peeking, and then slowly creeps toward occlusion_peeking_stopline. If occlusion.creep_during_peeking.enable is true occlusion.creep_during_peeking.creep_velocity is inserted up to occlusion_peeking_stopline. Otherwise only stop line is inserted.

During creeping, if a collision is detected this module immediately inserts a stop line in front of ego, and if the FOV becomes sufficiently clear the intersection_occlusion wall disappears. If the occlusion is cleared and no collision is detected, ego passes the intersection.

The occlusion is detected as the common area of the occlusion attention area (which is partially the same as the normal attention area) and the unknown cells of the occupancy grid map. The occupancy grid map is denoised using morphology with a window size of occlusion.denoise_kernel. The occlusion attention area lanes are discretized into line strings, and they are used to generate a grid in which each cell represents the distance from the ego path along the lane, as shown below.

If the nearest occlusion cell value is below the threshold occlusion.occlusion_required_clearance_distance, it means that the FOV of ego is not clear. It is expected that the occlusion gets cleared as the vehicle approaches the occlusion peeking stop line.
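The clearance test itself is simple; the following is a hedged sketch in which the grid encoding and the names are assumptions for illustration.

# Sketch of the FOV clearance check: the FOV is "not clear" when the nearest
# occluded cell lies closer than occlusion.occlusion_required_clearance_distance.
def fov_is_clear(occluded_cell_distances_m, occlusion_required_clearance_distance):
    if not occluded_cell_distances_m:  # no occluded cells found
        return True
    return min(occluded_cell_distances_m) >= occlusion_required_clearance_distance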

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-source-estimation-at-intersection-with-traffic-light","title":"Occlusion source estimation at intersection with traffic light","text":"

At an intersection with a traffic light, the whereabouts of the occlusion are estimated by checking whether there are any objects between ego and the nearest occlusion cell. While the occlusion is estimated to be caused by some object (DYNAMICALLY occluded), the intersection_wall appears at all times. If no objects are found between ego and the nearest occlusion cell (STATICALLY occluded), then after ego has stopped for the duration of occlusion.static_occlusion_with_traffic_light_timeout plus occlusion.occlusion_detection_hold_time, the occlusion is intentionally ignored to avoid getting stuck.

The remaining time is visualized on the intersection_occlusion virtual wall.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-handling-at-intersection-without-traffic-light","title":"Occlusion handling at intersection without traffic light","text":"

At an intersection without a traffic light, if occlusion is detected, ego makes a brief stop at default_stopline and at first_attention_stopline respectively. After stopping at first_attention_stopline, this module inserts the creep velocity occlusion.absence_traffic_light.creep_velocity between ego and occlusion_wo_tl_pass_judge_line while the occlusion is not cleared. If a collision is detected, ego immediately stops. Once the occlusion is cleared or ego has passed occlusion_wo_tl_pass_judge_line, this module no longer detects collision and occlusion because the ego footprint is already inside the intersection.

While ego is creeping, a yellow intersection_wall appears in front of ego.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#traffic-signal-specific-behavior","title":"Traffic signal specific behavior","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#collision-detection_1","title":"Collision detection","text":"

TTC parameter varies depending on the traffic light color/shape as follows.

traffic light color ttc(start) ttc(end) GREEN collision_detection.not_prioritized.collision_start_margin collision_detection.not_prioritized.collision_end_margin AMBER collision_detection.partially_prioritized.collision_start_end_margin collision_detection.partially_prioritized.collision_start_end_margin RED / Arrow collision_detection.fully_prioritized.collision_start_end_margin collision_detection.fully_prioritized.collision_start_end_margin"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#yield-on-green","title":"yield on GREEN","text":"

If the traffic light color changed to GREEN and ego approached the entry of the intersection lane within the distance collision_detection.yield_on_green_traffic_light.distance_to_assigned_lanelet_start and there is any object whose distance to its stopline is less than collision_detection.yield_on_green_traffic_light.object_dist_to_stopline, this module commands to stop for the duration of collision_detection.yield_on_green_traffic_light.duration at default_stopline.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#skip-on-amber","title":"skip on AMBER","text":"

If the traffic light color is AMBER but the object is expected to stop before its stopline under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration, collision checking is skipped.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#skip-on-red","title":"skip on RED","text":"

If the traffic light color is RED or an Arrow signal is turned on, the attention lanes that do not conflict with the ego lane are not used for detection. Even if the object stops with a certain overshoot from its stopline, it is ignored if its expected stop position under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration is more than collision_detection.ignore_on_red_traffic_light.object_margin_to_path away from the collision point.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-detection_1","title":"Occlusion detection","text":"

When the traffic light color/shape is RED/Arrow, occlusion detection is skipped.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#pass-judge-line","title":"Pass Judge Line","text":"

Generally it is not tolerable for vehicles that have lower traffic priority to stop in the middle of the unprotected area in intersections, and they need to stop at the stop line beforehand if there will be any risk of collision, which introduces two requirements:

  1. The vehicle must start braking before the boundary of the unprotected area at least by the braking distance if it is supposed to stop
  2. The vehicle must recognize upcoming vehicles and check safety beforehand with enough braking distance margin if it is supposed to go
    1. And the SAFE decision must be absolutely certain and remain to be valid for the future horizon so that the safety condition will be always satisfied while ego is driving inside the unprotected area.
  3. (TODO): Since it is almost impossible to make a perfectly safe decision beforehand given the limited detection range/velocity tracking performance, the intersection module should plan a risk-evasive acceleration velocity profile AND/OR relax the lateral acceleration limit while ego is driving inside the unprotected area, if the safety decision is \"betrayed\" later due to the following reasons:
    1. The situation turned out to be dangerous later, mainly because velocity tracking was underestimated or the object accelerated beyond TTC margin
    2. The situation turned dangerous later, mainly because the object is suddenly detected out of nowhere

The position that is before the boundary of the unprotected area by the braking distance, which is obtained from

\\[ \\dfrac{v_{\\mathrm{ego}}^{2}}{2a_{\\mathrm{max}}} + v_{\\mathrm{ego}} * t_{\\mathrm{delay}} \\]

is called the pass_judge_line, and the safety decision must be made before ego passes this position because ego will not stop beyond it.
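As a worked example of the formula above (assumed values, not module defaults): with \(v_{\mathrm{ego}}\) = 8.0 m/s, \(a_{\mathrm{max}}\) = 2.5 m/ss and \(t_{\mathrm{delay}}\) = 0.5 s, the braking distance is 8.0^2 / (2 x 2.5) + 8.0 x 0.5 = 16.8 m:

# Worked example of the pass-judge braking distance; the inputs are
# illustrative values, not the module's default parameters.
def braking_distance(v_ego, a_max, t_delay):
    return v_ego**2 / (2.0 * a_max) + v_ego * t_delay

print(braking_distance(8.0, 2.5, 0.5))  # 16.8 [m]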

1st_pass_judge_line is before the first upcoming lane, and at intersections with multiple upcoming lanes, 2nd_pass_judge_line is defined as the position which is before the centerline of the first attention lane by the braking distance. 1st/2nd_pass_judge_line are illustrated in the following figure.

Intersection module will command to GO if

because it is expected to stop or continue stop decision if

  1. ego is before default_stopline && common.enable_pass_judge_before_default_stopline is false OR
    1. reason: default_stopline is defined on the map and should be respected
  2. ego is before 1st_pass_judge_line OR
    1. reason: it has enough braking distance margin
  3. ego judged UNSAFE previously
    1. reason: ego is now trying to stop and should continue stop decision if collision is detected in later calculation
  4. (ego is between 1st and 2nd pass_judge_line and the most probable collision is expected to happen in the 2nd attention lane)

For the 3rd condition, it is possible that ego stops with some overshoot to the unprotected area while it is trying to stop for collision detection, because ego should keep stop decision while UNSAFE decision is made even if it passed 1st_pass_judge_line during deceleration.

For the 4th condition, at intersections with 2nd attention lane, even if ego is over the 1st pass_judge_line, still intersection module commands to stop if the most probable collision is expected to happen in the 2nd attention lane.

Also, if occlusion.enable is true, the position of the 1st_pass_judge_line changes to occlusion_peeking_stopline if ego passed the original 1st_pass_judge_line position while peeking. Otherwise ego could inadvertently judge that it passed the 1st_pass_judge_line during peeking and then abort peeking.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#data-structure","title":"Data Structure","text":"

Each data structure is defined in util_type.hpp.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersectionlanelets","title":"IntersectionLanelets","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersectionstoplines","title":"IntersectionStopLines","text":"

Each stop line is generated from interpolated path points to obtain a precise position.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#targetobject","title":"TargetObject","text":"

TargetObject holds the object, the lane it belongs to, and the corresponding stopline information.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#module-parameters","title":"Module Parameters","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#common","title":"common","text":"Parameter Type Description .attention_area_length double [m] range for object detection .attention_area_margin double [m] margin for expanding attention area width .attention_area_angle_threshold double [rad] threshold of angle difference between the detected object and lane .use_intersection_area bool [-] flag to use intersection_area for collision detection .default_stopline_margin double [m] margin before_stop_line .stopline_overshoot_margin double [m] margin for the overshoot from stopline .max_accel double [m/ss] max acceleration for stop .max_jerk double [m/sss] max jerk for stop .delay_response_time double [s] action delay before stop .enable_pass_judge_before_default_stopline bool [-] flag not to stop before default_stopline even if ego is over pass_judge_line"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#stuck_vehicleyield_stuck","title":"stuck_vehicle/yield_stuck","text":"Parameter Type Description stuck_vehicle.turn_direction - [-] turn_direction specifier for stuck vehicle detection stuck_vehicle.stuck_vehicle_detect_dist double [m] length toward from the exit of intersection for stuck vehicle detection stuck_vehicle.stuck_vehicle_velocity_threshold double [m/s] velocity threshold for stuck vehicle detection yield_stuck.distance_threshold double [m/s] distance threshold of yield stuck vehicle from ego path along the lane"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#collision_detection","title":"collision_detection","text":"Parameter Type Description .consider_wrong_direction_vehicle bool [-] flag to detect objects in the wrong direction .collision_detection_hold_time double [s] hold time of collision detection .min_predicted_path_confidence double [-] minimum confidence value of predicted path to use for collision detection .keep_detection_velocity_threshold double [s] ego velocity threshold for continuing collision detection before pass judge line .velocity_profile.use_upstream bool [-] flag to use velocity profile planned by upstream modules .velocity_profile.minimum_upstream_velocity double [m/s] minimum velocity of upstream velocity profile to avoid zero division .velocity_profile.default_velocity double [m/s] constant velocity profile when use_upstream is false .velocity_profile.minimum_default_velocity double [m/s] minimum velocity of default velocity profile to avoid zero division .yield_on_green_traffic_light - [-] description .ignore_amber_traffic_light - [-] description .ignore_on_red_traffic_light - [-] description"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion","title":"occlusion","text":"Parameter Type Description .enable bool [-] flag to calculate occlusion detection .occlusion_attention_area_length double [m] the length of attention are for occlusion detection .free_space_max int [-] maximum value of occupancy grid cell to treat at occluded .occupied_min int [-] minimum value of occupancy grid cell to treat at occluded .denoise_kernel double [m] morphology window size for preprocessing raw occupancy grid .attention_lane_crop_curvature_threshold double [m] curvature threshold for trimming curved part of the lane 
.attention_lane_crop_curvature_ds double [m] discretization interval of centerline for lane curvature calculation .creep_during_peeking.enable bool [-] flag to insert creep_velocity while peeking to intersection occlusion stopline .creep_during_peeking.creep_velocity double [m/s] the command velocity while peeking to intersection occlusion stopline .peeking_offset double [m] the offset of the front of the vehicle into the attention area for peeking to occlusion .occlusion_required_clearance_distance double [m] threshold for the distance to nearest occlusion cell from ego path .possible_object_bbox [double] [m] minimum bounding box size for checking if occlusion polygon is small enough .ignore_parked_vehicle_speed_threshold double [m/s] velocity threshold for checking parked vehicle .occlusion_detection_hold_time double [s] hold time of occlusion detection .temporal_stop_time_before_peeking double [s] temporal stop duration at default_stopline before starting peeking .temporal_stop_before_attention_area bool [-] flag to temporarily stop at first_attention_stopline before peeking into attention_area .creep_velocity_without_traffic_light double [m/s] creep velocity to occlusion_wo_tl_pass_judge_line .static_occlusion_with_traffic_light_timeout double [s] the timeout duration for ignoring static occlusion at intersection with traffic light"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#trouble-shooting","title":"Trouble shooting","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#intersection-module-stops-against-unrelated-vehicles","title":"Intersection module stops against unrelated vehicles","text":"

In this case, first visualize the /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection topic and check the attention_area polygon. The intersection module performs collision checking for vehicles running on this polygon, so if it extends to unintended lanes, the RightOfWay tag needs to be set.

By lowering common.attention_area_length you can check which lanes are conflicting with the intersection lane. Then set part of the conflicting lanes as the yield_lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#the-stop-line-of-intersection-is-chattering","title":"The stop line of intersection is chattering","text":"

The parameter collision_detection.collision_detection_hold_time suppresses the chattering by keeping the UNSAFE decision for this duration until a SAFE decision is finally made. The role of this parameter is to account for unstable detection/tracking of objects. Increasing this value suppresses the chattering, but it could prolong the stopping duration excessively.

If the chattering arises from the acceleration/deceleration of target vehicles, increase collision_detection.collision_start_margin_time and/or collision_detection.collision_end_margin_time.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#the-stop-line-is-released-too-fastslow","title":"The stop line is released too fast/slow","text":"

If the intersection wall appears too fast, or ego tends to stop too conservatively for upcoming vehicles, lower the parameter collision_detection.collision_start_margin_time. If it lasts too long after the target vehicle has passed, lower the parameter collision_detection.collision_end_margin_time.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#ego-suddenly-stops-at-intersection-with-traffic-light","title":"Ego suddenly stops at intersection with traffic light","text":"

If the traffic light color changed from AMBER/RED to UNKNOWN, the intersection module works in the GREEN color mode. So collision and occlusion are likely to be detected again.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occlusion-is-detected-overly","title":"Occlusion is detected overly","text":"

You can check which areas are detected as occlusion by visualizing /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection/occlusion_polygons.

If you do not want to detect occlusions far from ego, or want to lower the computational cost of occlusion detection, set occlusion.occlusion_attention_area_length to a lower value.

If you want to treat occlusions near ego more cautiously, set occlusion.occlusion_required_clearance_distance to a larger value. Then ego will approach occlusion_peeking_stopline more closely to ensure a clearer FOV.

occlusion.possible_object_bbox is used for checking whether the detected occlusion area is small enough that no vehicle larger than this size can exist inside it. By decreasing this size, ego will ignore small occluded areas.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#occupancy-grid-map-tuning","title":"occupancy grid map tuning","text":"

Refer to the document of autoware_probabilistic_occupancy_grid_map for details. If occlusion tends to be detected at apparently free space, increase occlusion.free_space_max to ignore them.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#in-simple_planning_simulator","title":"in simple_planning_simulator","text":"

The intersection_occlusion feature is not recommended for use in planning_simulator, because the laserscan_based_occupancy_grid_map generates unnatural UNKNOWN cells in a 2D manner:

Also, many users do not provide traffic light information frequently, although it is very critical for intersection_occlusion (and in real traffic environments too).

For these reasons, occlusion.enable is false by default.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#on-real-vehicle-in-end-to-end-simulator","title":"on real vehicle / in end-to-end simulator","text":"

On real vehicle or in end-to-end simulator like AWSIM the following pointcloud_based_occupancy_grid_map configuration is highly recommended:

scan_origin_frame: \"velodyne_top\"\n\ngrid_map_type: \"OccupancyGridMapProjectiveBlindSpot\"\nOccupancyGridMapProjectiveBlindSpot:\nprojection_dz_threshold: 0.01 # [m] for avoiding null division\nobstacle_separation_threshold: 1.0 # [m] fill the interval between obstacles with unknown for this length\n

You should set the top lidar link as the scan_origin_frame. In the example it is velodyne_top. The method OccupancyGridMapProjectiveBlindSpot estimates the FOV by running projective ray-tracing from scan_origin to obstacle or up to the ground and filling the cells on the \"shadow\" of the object as UNKNOWN.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#flowchart","title":"Flowchart","text":"

WIP

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#merge-from-private","title":"Merge From Private","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#role_1","title":"Role","text":"

When the ego vehicle enters a public road from a private road (e.g. a parking lot), it needs to stop before entering the public road and confirm that it is safe to enter.

This module is activated when there is an intersection at the private area from which the vehicle enters the public road. The stop line is generated both when the goal is in the intersection lane and when the path goes beyond the intersection lane. The basic behavior is the same as the intersection module, but ego must stop once at the stop line.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when the following conditions are met:

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#module-parameters_1","title":"Module Parameters","text":"Parameter Type Description merge_from_private_road/stop_duration_sec double [m] time margin to change state"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#known-issue","title":"Known Issue","text":"

If ego goes over the stop line by a certain distance, it will not transition from the STOP state.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_intersection_module/#test-maps","title":"Test Maps","text":"

The intersections lanelet map consists of a variety of intersections, including:

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#no-drivable-lane","title":"No Drivable Lane","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#role","title":"Role","text":"

This module plans the velocity of the related part of the path when the path refers to a no drivable lane.

A no drivable lane is one or more lanelets that are outside the operational design domain (ODD), i.e., the vehicle must not drive autonomously in such a lanelet. A lanelet can be no drivable (out of ODD) for many reasons: technical limitations of the SW and/or HW, business requirements, safety considerations, etc., or even a combination of those.

Some examples of No Drivable Lanes

A lanelet becomes invalid by adding a new tag under the relevant lanelet in the map file <tag k=\"no_drivable_lane\" v=\"yes\"/>.

The target of this module is to stop the vehicle before entering the no drivable lane (with a configurable stop margin), or to keep the vehicle stationary if autonomous mode started inside a no drivable lane, and then ask the human driver to take responsibility for the driving task (Takeover Request / Request to Intervene).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#activation-timing","title":"Activation Timing","text":"

This function is activated when a lane id of the target path has a no drivable lane label (i.e. the no_drivable_lane attribute is yes).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double [m] margin for ego vehicle to stop before speed_bump print_debug_info bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_drivable_lane_module/#future-work","title":"Future Work","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#no-stopping-area","title":"No Stopping Area","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#role","title":"Role","text":"

This module plans the velocity so that the vehicle avoids stopping inside a no_stopping_area.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#limitation","title":"Limitation","text":"

This module allows developers to design vehicle velocity for the no_stopping_area module using specific rules. Once the ego vehicle goes past the pass-through point, it neither inserts a stop velocity nor changes its decision from GO. Also, this module only considers dynamic objects in order to avoid unnecessary stops.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#modelparameter","title":"ModelParameter","text":"Parameter Type Description state_clear_time double [s] time to clear stop state stuck_vehicle_vel_thr double [m/s] vehicles below this velocity are considered as stuck vehicle. stop_margin double [m] margin to stop line at no stopping area dead_line_margin double [m] if ego pass this position GO stop_line_margin double [m] margin to auto-gen stop line at no stopping area detection_area_length double [m] length of searching polygon stuck_vehicle_front_margin double [m] obstacle stop max distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_no_stopping_area_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#occlusion-spot","title":"Occlusion Spot","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#role","title":"Role","text":"

This module plans a safe velocity to slow down before reaching a collision point where a hidden object may dart out from an occlusion spot that the driver cannot see clearly because of obstacles.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_occlusion_spot is set to true. Creating a pedestrian-first zone map tag is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#limitation-and-todos","title":"Limitation and TODOs","text":"

This module is a prototype implementation for handling occlusion spots. To mitigate excessive deceleration caused by perception false positives, the detection method logic is selectable. This point has not been discussed in detail and needs to be improved.

TODOs are noted in each Inner-workings / Algorithms subsection (see the descriptions below).

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#logics-working","title":"Logics Working","text":"

There are several types of occlusion, such as \"occlusion generated by parked vehicles\" and \"occlusion caused by obstructions\". In situations such as driving on a road with obstacles, where people frequently run out, all possible occlusion spots must be taken into account. This module considers all occlusion spots calculated from the occupancy grid, but it is not reasonable to take every occlusion spot into account, for example people running out from behind a guardrail or behind a cruising vehicle. Therefore the detection area is currently limited using predicted object information.

Note that this decision logic is still under development and needs to be improved.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#detectionarea-polygon","title":"DetectionArea Polygon","text":"

This module considers the TTV (time to vehicle path) computed from the pedestrian velocity and the lateral distance to the occlusion spot. The TTC (time to collision) is calculated from the ego velocity and acceleration and the longitudinal distance to the collision point using the motion velocity smoother. To keep computation fast, this module only considers occlusion spots whose TTV is less than the TTC, and only within the \"max lateral distance\" (see the sketch below).
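
A minimal sketch of this filtering step, not the actual implementation; the OcclusionSpot struct, the function name, and the constant-acceleration TTC formula are illustrative assumptions.

// Minimal sketch of the TTV/TTC filter (illustrative, not the module's actual code).\n// An occlusion spot is kept only if a hidden pedestrian could reach the collision point\n// before ego passes it (TTV < TTC) and it lies within the maximum lateral distance.\n#include <algorithm>\n#include <cmath>\n\nstruct OcclusionSpot\n{\n  double lateral_distance;       // [m] distance from the spot to the ego path\n  double longitudinal_distance;  // [m] distance from ego to the collision point\n};\n\nbool is_candidate(\n  const OcclusionSpot & spot, double pedestrian_vel, double ego_vel, double ego_accel,\n  double max_lateral_distance)\n{\n  if (spot.lateral_distance > max_lateral_distance) return false;\n  const double ttv = spot.lateral_distance / pedestrian_vel;\n  // TTC under constant acceleration: solve d = v t + 0.5 a t^2 for t\n  // (the discriminant is clamped; with strong deceleration ego may never arrive).\n  const double v = ego_vel, a = ego_accel, d = spot.longitudinal_distance;\n  const double ttc = (std::abs(a) < 1e-3)\n                       ? d / std::max(v, 0.1)\n                       : (-v + std::sqrt(std::max(v * v + 2.0 * a * d, 0.0))) / a;\n  return ttv < ttc;\n}\n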

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#occlusion-spot-occupancy-grid-base","title":"Occlusion Spot Occupancy Grid Base","text":"

This module considers any occlusion spot around the ego path computed from the occupancy grid. Due to computational cost, the occupancy grid is not high resolution, which makes it noisy, so this module adds extra occupancy information to the occupancy grid map.

TODO: consider the height of the obstacle point cloud when generating the occupancy grid.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#collision-free-judgement","title":"Collision Free Judgement","text":"

An obstacle that can run out from an occlusion spot must have free space between the occlusion spot and the intersection point with the ego path.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#partition-lanelet","title":"Partition Lanelet","text":"

By using lanelet information with the \"guard_rail\", \"fence\", and \"wall\" tags, it is possible to remove unwanted occlusion spots.

By using static object information, it is possible to make the occupancy grid more accurate.

Creating an occupancy grid dedicated to planning is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#possible-collision","title":"Possible Collision","text":"

An obstacle that can run out from the occlusion spot is interrupted by a moving vehicle.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#about-safe-motion","title":"About safe motion","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#the-concept-of-safe-velocity-and-margin","title":"The Concept of Safe Velocity and Margin","text":"

The safe slowdown velocity is calculated from the ego emergency braking system parameters below and the time to collision. The calculation below is included, but dynamically changing the velocity is not recommended for the planner.

This module defines a safe margin to geometrically account for the ego stopping distance relative to the collision path point. While ego cruises from the safe margin to the collision path point, the ego vehicle keeps the occlusion spot safe velocity (one plausible formulation is sketched below).
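
One plausible formulation of the safe velocity, assuming the emergency braking system provides a constant deceleration \\(a_{\\mathrm{ebs}}\\) and ego must be able to stop within the distance \\(d\\) to the collision path point minus the safe margin \\(d_{\\mathrm{margin}}\\) (symbols are illustrative, not quoted from the implementation):

\\[ \\begin{align} v_{\\mathrm{safe}} = \\sqrt{2 \\, a_{\\mathrm{ebs}} \\, (d - d_{\\mathrm{margin}})} \\end{align} \\]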

Note: this logic assumes high-precision vehicle speed tracking, the margin for the deceleration point might not be the best solution, and a manual driver override is expected if a pedestrian actually runs out from the occlusion spot.

TODO: choose the best of the following options

  1. stop in front of the occlusion spot
  2. insert a 1 km/h velocity limit in front of the occlusion spot
  3. slow down as currently implemented
  4. etc.
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#maximum-slowdown-velocity","title":"Maximum Slowdown Velocity","text":"

The maximum slowdown velocity is calculated from the ego current velocity and acceleration together with the maximum slowdown jerk and maximum slowdown acceleration parameters, in order not to slow down too much (a plausible jerk-limited formulation is sketched below).
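
As a sketch, one plausible jerk-limited formulation: starting from the current velocity \\(v_0\\) and acceleration \\(a_0\\), ramping at the maximum slowdown jerk \\(j_{\\mathrm{max}}\\) until the maximum slowdown acceleration \\(a_{\\mathrm{max}}\\) is reached (both negative for deceleration) bounds the planned velocity from below; the actual implementation may differ:

\\[ \\begin{align} t_j = \\frac{a_{\\mathrm{max}} - a_0}{j_{\\mathrm{max}}}, \\qquad v_{\\mathrm{min}} = v_0 + a_0 t_j + \\frac{1}{2} j_{\\mathrm{max}} t_j^2 \\end{align} \\]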

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description pedestrian_vel double [m/s] maximum velocity assumed pedestrian coming out from occlusion point. pedestrian_radius double [m] assumed pedestrian radius which fits in occlusion spot. Parameter Type Description use_object_info bool [-] whether to reflect object info to occupancy grid map or not. use_partition_lanelet bool [-] whether to use partition lanelet map data. Parameter /debug Type Description is_show_occlusion bool [-] whether to show occlusion point markers. is_show_cv_window bool [-] whether to show open_cv debug window. is_show_processing_time bool [-] whether to show processing time. Parameter /threshold Type Description detection_area_length double [m] the length of path to consider occlusion spot stuck_vehicle_vel double [m/s] velocity below this value is assumed to stop lateral_distance double [m] maximum lateral distance to consider hidden collision Parameter /motion Type Description safety_ratio double [-] safety ratio for jerk and acceleration max_slow_down_jerk double [m/s^3] jerk for safe brake max_slow_down_accel double [m/s^2] deceleration for safe brake non_effective_jerk double [m/s^3] weak jerk for velocity planning. non_effective_acceleration double [m/s^2] weak deceleration for velocity planning. min_allowed_velocity double [m/s] minimum velocity allowed safe_margin double [m] maximum error to stop with emergency braking system. Parameter /detection_area Type Description min_occlusion_spot_size double [m] the length of path to consider occlusion spot slice_length double [m] the distance of divided detection area max_lateral_distance double [m] buffer around the ego path used to build the detection_area area. Parameter /grid Type Description free_space_max double [-] maximum value of a free space cell in the occupancy grid occupied_min double [-] buffer around the ego path used to build the detection_area area."},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#rough-overview-of-the-whole-process","title":"Rough overview of the whole process","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#detail-process-for-predicted-objectnot-updated","title":"Detail process for predicted object(not updated)","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_occlusion_spot_module/#detail-process-for-occupancy-grid-base","title":"Detail process for Occupancy grid base","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#behavior-velocity-planner","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#overview","title":"Overview","text":"

behavior_velocity_planner is a planner that adjusts velocity based on traffic rules. It loads modules as plugins. Please refer to the links listed below for details on each module.

Each module plans velocity based on the base_link (center of the rear-wheel axle) pose. For example, in order to stop at a stop line with the vehicle's front on the stop line, the module calculates the base_link position from the base_link-to-front distance and modifies the path velocity from that base_link position (see the sketch below).
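
A minimal sketch of this base_link offset, assuming an arc-length-parameterized path; the PathPoint struct and function name are illustrative, not the planner's actual API.

#include <vector>\n\nstruct PathPoint\n{\n  double arc_length;  // [m] distance along the path\n  double velocity;    // [m/s] planned longitudinal velocity\n};\n\n// Shift the stop position back by the base_link-to-front distance so that the\n// vehicle front, not the base_link, ends up on the stop line.\nvoid insert_stop_at_line(\n  std::vector<PathPoint> & path, double stop_line_arc_length, double base_link_to_front)\n{\n  const double stop_arc_length = stop_line_arc_length - base_link_to_front;\n  for (auto & p : path) {\n    if (p.arc_length >= stop_arc_length) {\n      p.velocity = 0.0;  // zero out velocity from the stop pose onward\n    }\n  }\n}\n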

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/path_with_lane_id tier4_planning_msgs::msg::PathWithLaneId path with lane_id ~input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~input/vehicle_odometry nav_msgs::msg::Odometry vehicle velocity ~input/dynamic_objects autoware_perception_msgs::msg::PredictedObjects dynamic objects ~input/no_ground_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/compare_map_filtered_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud filtered by compare map. Note that this is used only when the detection method of run out module is Points. ~input/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray traffic light states"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/path autoware_planning_msgs::msg::Path path to be followed ~output/stop_reasons tier4_planning_msgs::msg::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_modules vector<string> module names to launch forward_path_length double forward path length backward_path_length double backward path length max_accel double (to be a global parameter) max acceleration of the vehicle system_delay double (to be a global parameter) delay time until output control command delay_response_time double (to be a global parameter) delay time of the vehicle's response to control commands"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner/#traffic-light-handling-in-simreal","title":"Traffic Light Handling in sim/real","text":"

The handling of traffic light information varies depending on the usage. In the table below, the traffic signal topic element for the corresponding lane is denoted as info; if info is not available, it is denoted as null.

module \\ case info is null info is not null intersection_occlusion(is_simulation = *) GO(occlusion is ignored) intersection_occlusion uses the latest non UNKNOWN observation in the queue up to present. traffic_light(sim, is_simulation = true) GO traffic_light uses the perceived traffic light information at present directly. traffic_light(real, is_simulation = false) STOP \u2060 crosswalk with Traffic Light(is_simulation = *) default map_based_prediction(is_simulation = *) default If a pedestrian traffic light is"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner_common/","title":"Behavior Velocity Planner Common","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_planner_common/#behavior-velocity-planner-common","title":"Behavior Velocity Planner Common","text":"

This package provides a behavior velocity interface without RTC, and common functions as a library, which are used in the behavior_velocity_planner node and modules.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_rtc_interface/","title":"Behavior Velocity RTC Interface","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_rtc_interface/#behavior-velocity-rtc-interface","title":"Behavior Velocity RTC Interface","text":"

This package provides a behavior velocity interface with RTC, which is used in the behavior_velocity_planner node and modules.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#run-out","title":"Run Out","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#role","title":"Role","text":"

run_out is the module that decelerates and stops for dynamic obstacles such as pedestrians, bicycles and motorcycles.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_run_out is set to true.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#flow-chart","title":"Flow chart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#preprocess-path","title":"Preprocess path","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#calculate-the-expected-target-velocity-for-ego-vehicle","title":"Calculate the expected target velocity for ego vehicle","text":"

Calculate the expected target velocity for the ego vehicle path to compute the time to collision with obstacles more precisely. The expected target velocity is calculated with the autoware velocity smoother module, using the current velocity, current acceleration, and the velocity limits defined by the map and external API.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#extend-the-path","title":"Extend the path","text":"

The path is extended by the base-link-to-front length to consider obstacles beyond the goal.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#trim-path-from-ego-position","title":"Trim path from ego position","text":"

The path is trimmed from the ego position up to a certain distance to reduce calculation time. The trimmed distance is specified by the detection_distance parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#preprocess-obstacles","title":"Preprocess obstacles","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#create-data-of-abstracted-dynamic-obstacle","title":"Create data of abstracted dynamic obstacle","text":"

This module can handle multiple types of obstacles by creating an abstracted dynamic obstacle data layer. Currently we have 3 detection methods (Object, ObjectWithoutPath, Points) to create abstracted obstacle data.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#abstracted-dynamic-obstacle","title":"Abstracted dynamic obstacle","text":"

Abstracted obstacle data has the following information.

Name Type Description pose geometry_msgs::msg::Pose pose of the obstacle classifications std::vector<autoware_perception_msgs::msg::ObjectClassification> classifications with probability shape autoware_perception_msgs::msg::Shape shape of the obstacle predicted_paths std::vector<DynamicObstacle::PredictedPath> predicted paths with confidence. this data doesn't have time step because we use minimum and maximum velocity instead. min_velocity_mps float minimum velocity of the obstacle. specified by parameter of dynamic_obstacle.min_vel_kmph max_velocity_mps float maximum velocity of the obstacle. specified by parameter of dynamic_obstacle.max_vel_kmph

Enter the maximum/minimum velocity of the object as a parameter, adding enough margin to the expected velocity. This parameter is used to create polygons for collision detection.

Future work: Determine the maximum/minimum velocity from the estimated velocity with covariance of the object

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#3-types-of-detection-method","title":"3 types of detection method","text":"

We have 3 detection methods to meet different safety and availability requirements; their characteristics are shown in the table below. The Object method has high availability (fewer false positives) because it detects only objects whose predicted path is on the lane. However, it is sometimes unsafe because perception may fail to detect obstacles or may generate incorrect predicted paths. On the other hand, the Points method has high safety (fewer false negatives) because it uses a pointcloud as input. Since points don't have a predicted path, the path moving in the direction normal to the ego vehicle's path is considered to be the predicted path of the abstracted dynamic obstacle data. However, without proper adjustment of the point filter, it may detect many points and result in very low availability. The ObjectWithoutPath method has characteristics intermediate between Object and Points.

Method Description Object use an object with the predicted path for collision detection. ObjectWithoutPath use an object but does not use the predicted path for collision detection. replace the path assuming that an object jumps out to the lane at a specified velocity. Points use filtered points for collision detection. the path is created assuming that points jump out to the lane. points are regarded as a small circular obstacle.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-outside-of-partition","title":"Exclude obstacles outside of partition","text":"

This module can exclude obstacles outside of partitions such as guardrails, fences, and walls. A lanelet map containing partition information is required to use this feature. It reduces unnecessary deceleration caused by obstacles that are unlikely to jump out onto the lane. You can choose whether to use this feature via the use_partition_lanelet parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-by-label","title":"Exclude obstacles by label","text":"

This module only acts on target obstacles defined by the target_obstacle_types parameter. If an obstacle of a type not specified by said parameter is detected, it will be ignored by this module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-that-are-already-on-the-ego-vehicles-path","title":"Exclude obstacles that are already on the ego vehicle's path","text":"

In cases where an obstacle is already on the ego vehicle's path, it can no longer \"cut in\" to the ego's path (which is the situation this module tries to handle), so it might be useful to exclude obstacles already on the vehicle's footprint path. By setting the parameter exclude_obstacles_already_in_path to true, this module will exclude obstacles that are considered to have been on the ego vehicle's path for more than keep_obstacle_on_path_time_threshold. The module considers the ego vehicle's closest path point to each obstacle's current position and determines the lateral distance between the obstacle and the right and left sides of the ego vehicle. If the obstacle is located within the left and right extremes of the vehicle for that pose, it is considered to be inside the ego vehicle's footprint path and is excluded. The virtual width of the vehicle used to detect whether an obstacle is within the path can be adjusted with the ego_footprint_extra_margin parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#exclude-obstacles-that-cross-the-ego-vehicles-cut-line","title":"Exclude obstacles that cross the ego vehicle's \"cut line\"","text":"

This module can exclude obstacles that have predicted paths that will cross the back side of the ego vehicle. It excludes obstacles if their predicted path crosses the ego's \"cut line\". The \"cut line\" is a virtual line segment that is perpendicular to the ego vehicle and that passes through the ego's base link.

You can choose whether to use this feature by setting the parameter use_ego_cut_line to true or false. The width of the line can be tuned with the parameter ego_cut_line_length.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#collision-detection","title":"Collision detection","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#detect-collision-with-dynamic-obstacles","title":"Detect collision with dynamic obstacles","text":"

Along the ego vehicle path, the points where collision detection is performed are determined at intervals of detection_span.

The travel time to each point is calculated from the expected target velocity.

For each point, collision detection is performed using the footprint polygon of the ego vehicle and the polygon of the predicted obstacle location. The predicted obstacle location is described as a rectangle or polygon whose range is calculated from the minimum velocity, the maximum velocity, and the ego vehicle's travel time to the point. If the input type of the dynamic obstacle is Points, the obstacle shape is defined as a small cylinder.
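
A minimal sketch of the obstacle range computation described above; the Range struct and function name are illustrative assumptions.

// Range of positions an obstacle may occupy along its predicted path after travel_time,\n// bounded by its assumed minimum and maximum velocities (illustrative names).\nstruct Range\n{\n  double min_dist;  // [m]\n  double max_dist;  // [m]\n};\n\nRange predicted_obstacle_range(double travel_time, double min_velocity_mps, double max_velocity_mps)\n{\n  return Range{min_velocity_mps * travel_time, max_velocity_mps * travel_time};\n}\n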

Multiple collision points are detected because the collision check is performed between two polygons, so we select as the collision point the one that is on the same side as the obstacle and closest to the ego vehicle.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#insert-velocity","title":"Insert velocity","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#insert-velocity-to-decelerate-for-obstacles","title":"Insert velocity to decelerate for obstacles","text":"

If a collision is detected, a stop point is inserted at a distance of base link to front + stop margin before the selected collision point. Here, base link to front means the distance between the base_link (center of the rear-wheel axle) and the front of the car. The stop margin is determined by the stop_margin parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#insert-velocity-to-approach-the-obstacles","title":"Insert velocity to approach the obstacles","text":"

If you select the Points or ObjectWithoutPath method, ego sometimes keeps stopping in front of the obstacle. To avoid this problem, this feature has an option to approach the obstacle at a slow velocity after stopping. If the approaching.enable parameter is set to true, ego approaches the obstacle after it has stopped for state.stop_time_thresh seconds. The maximum approach velocity can be specified by the approaching.limit_vel_kmph parameter. The decision to approach the obstacle is determined by a simple state transition, as shown in the following image.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#limit-velocity-with-specified-jerk-and-acc-limit","title":"Limit velocity with specified jerk and acc limit","text":"

The maximum slowdown velocity is calculated so as not to slow down too much. See the Occlusion Spot document for more details. You can choose whether to use this feature via the slow_down_limit.enable parameter.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description detection_method string [-] candidate: Object, ObjectWithoutPath, Points target_obstacle_types vector of string [-] specifies which obstacle types will be considered by the module, if the obstacles classification type is not written here, it will be ignored. candidate: [\"PEDESTRIAN\", \"BICYCLE\",\"MOTORCYCLE\"] use_partition_lanelet bool [-] whether to use partition lanelet map data specify_decel_jerk bool [-] whether to specify jerk when ego decelerates stop_margin double [m] the vehicle decelerates to be able to stop with this margin passing_margin double [m] the vehicle begins to accelerate if the vehicle's front in predicted position is ahead of the obstacle + this margin deceleration_jerk double [m/s^3] ego decelerates with this jerk when stopping for obstacles detection_distance double [m] ahead distance from ego to detect the obstacles detection_span double [m] calculate collision with this span to reduce calculation time min_vel_ego_kmph double [km/h] min velocity to calculate time to collision Parameter /detection_area Type Description margin_ahead double [m] ahead margin for detection area polygon margin_behind double [m] behind margin for detection area polygon Parameter /dynamic_obstacle Type Description use_mandatory_area double [-] whether to use mandatory detection area assume_fixed_velocity.enable double [-] If enabled, the obstacle's velocity is assumed to be within the minimum and maximum velocity values specified below assume_fixed_velocity.min_vel_kmph double [km/h] minimum velocity for dynamic obstacles assume_fixed_velocity.max_vel_kmph double [km/h] maximum velocity for dynamic obstacles diameter double [m] diameter of obstacles. used for creating dynamic obstacles from points height double [m] height of obstacles. used for creating dynamic obstacles from points max_prediction_time double [sec] create predicted path until this time time_step double [sec] time step for each path step. used for creating dynamic obstacles from points or objects without path points_interval double [m] divide obstacle points into groups with this interval, and detect only lateral nearest point. used only for Points method Parameter /approaching Type Description enable bool [-] whether to enable approaching after stopping margin double [m] distance on how close ego approaches the obstacle limit_vel_kmph double [km/h] limit velocity for approaching after stopping Parameter /state Type Description stop_thresh double [m/s] threshold to decide if ego is stopping stop_time_thresh double [sec] threshold for stopping time to transit to approaching state disable_approach_dist double [m] end the approaching state if distance to the obstacle is longer than this value keep_approach_duration double [sec] keep approach state for this duration to avoid chattering of state transition Parameter /slow_down_limit Type Description enable bool [-] whether to enable to limit velocity with max jerk and acc max_jerk double [m/s^3] minimum jerk deceleration for safe brake. max_acc double [m/s^2] minimum accel deceleration for safe brake. 
Parameter /ignore_momentary_detection Type Description enable bool [-] whether to ignore momentary detection time_threshold double [sec] ignores detections that persist for less than this duration"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_run_out_module/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#speed-bump","title":"Speed Bump","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#role","title":"Role","text":"

This module plans the velocity of the related part of the path when a speed bump regulatory element refers to it.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#activation-timing","title":"Activation Timing","text":"

The manager launches the speed bump scene module when a speed bump regulatory element refers to the reference path.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description slow_start_margin double [m] margin for ego vehicle to slow down before speed_bump slow_end_margin double [m] margin for ego vehicle to accelerate after speed_bump print_debug_info bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#speed-calculation","title":"Speed Calculation","text":" Parameter Type Description min_height double [m] minimum height assumption of the speed bump max_height double [m] maximum height assumption of the speed bump min_speed double [m/s] minimum speed assumption of slow down speed max_speed double [m/s] maximum speed assumption of slow down speed"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Note: if the slow_down_speed tag is used in the speed bump annotation, the speed calculated from the speed bump height is ignored. In that case, the specified slow_down_speed value in [kph] is used.
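
A hedged sketch of the height-based speed calculation, assuming a linear interpolation between (min_height, max_speed) and (max_height, min_speed); the clamping and function name are assumptions, not the module's confirmed behavior.

#include <algorithm>\n\n// Map the speed bump height to a slow-down speed (assumed linear interpolation):\n// higher bumps map to lower speeds, clamped to [min_speed, max_speed].\ndouble calc_slow_down_speed(\n  double bump_height, double min_height, double max_height, double min_speed, double max_speed)\n{\n  const double ratio = std::clamp((bump_height - min_height) / (max_height - min_height), 0.0, 1.0);\n  return max_speed + ratio * (min_speed - max_speed);\n}\n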

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_speed_bump_module/#future-work","title":"Future Work","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/","title":"Stop Line","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#stop-line","title":"Stop Line","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#role","title":"Role","text":"

This module plans the vehicle's velocity to ensure it stops just before stop lines and can resume movement after stopping.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a stop line in a target lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double Margin that the vehicle tries to stop in before stop_line stop_duration_sec double [s] Time parameter for the ego vehicle to stop before stop line hold_stop_margin_distance double [m] Parameter for restart prevention (See Algorithm section). Also, when the ego vehicle is within this distance from a stop line, the ego state becomes STOPPED from APPROACHING use_initialization_stop_state bool Flag to determine whether to return to the approaching state when the vehicle moves away from a stop line."},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#flowchart","title":"Flowchart","text":"

This algorithm is based on segments. A segment consists of two node points. This is useful for removing boundary conditions, because if segment(i) exists we can assume that node(i) and node(i+1) exist.

First, this algorithm finds a collision between the reference path and the stop line. From this we get the collision segment and the collision point.

Next, based on the collision point, it finds the offset segment by iterating backward over points up to a specific offset length. The offset length is stop_margin (parameter) + base link to front (to align the vehicle head with the stop line). From this we get the offset segment and the offset from the segment start.

After that, we calculate the offset point from the offset segment and the offset. This becomes the stop_pose (see the sketch below).
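
A minimal sketch of this backward walk, assuming 2D nodes; types and names are illustrative, not the module's actual code.

#include <cmath>\n#include <vector>\n\nstruct Node { double x, y; };\n\n// Walk backward from the collision point by offset_length and interpolate the stop pose.\nNode find_offset_point(\n  const std::vector<Node> & path, int collision_segment, const Node & collision_point,\n  double offset_length)\n{\n  auto dist = [](const Node & a, const Node & b) { return std::hypot(b.x - a.x, b.y - a.y); };\n  double remaining = offset_length;\n  Node end = collision_point;\n  for (int i = collision_segment; i >= 0; --i) {\n    const Node & start = path[i];  // segment(i) guarantees node(i) and node(i+1) exist\n    const double len = dist(start, end);\n    if (remaining <= len) {\n      const double r = (len - remaining) / len;  // interpolate from the segment start\n      return Node{start.x + r * (end.x - start.x), start.y + r * (end.y - start.y)};\n    }\n    remaining -= len;\n    end = start;\n  }\n  return path.front();  // offset exceeds the path length\n}\n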

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_stop_line_module/#restart-prevention","title":"Restart Prevention","text":"

If the vehicle requires X meters (e.g. 0.5 meters) to stop once it starts moving due to poor vehicle control performance, it may overshoot the stopping position, which must be strictly observed. This happens when the vehicle begins to move in order to approach a nearby stop point (e.g. 0.3 meters away).

This module includes the parameter hold_stop_margin_distance to prevent redundant restarts in such a situation. If the vehicle is stopped within hold_stop_margin_distance meters of the stop point (_front_to_stop_line < hold_stop_margin_distance), the module determines that the vehicle has already stopped at the stop point and will maintain the current stopping position, even if the vehicle has come to a stop due to other factors.
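
The keep-stopped condition above, in sketch form; variable names mirror the prose, and the surrounding state machine is omitted.

// Keep the current stop position if ego has already stopped close enough to the stop point\n// (direct transcription of the condition described above).\nbool keep_current_stop_position(\n  double front_to_stop_line, double hold_stop_margin_distance, bool vehicle_is_stopped)\n{\n  return vehicle_is_stopped && front_to_stop_line < hold_stop_margin_distance;\n}\n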

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#template","title":"Template","text":"

A template for behavior velocity modules based on the autoware_behavior_velocity_speed_bump_module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#autoware-behavior-velocity-module-template","title":"Autoware Behavior Velocity Module Template","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#scene","title":"Scene","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemodule-class","title":"TemplateModule Class","text":"

The TemplateModule class serves as a foundation for creating a scene module within the Autoware behavior velocity planner. It defines the core methods and functionality needed for the module's behavior. You should replace the placeholder code with actual implementations tailored to your specific behavior velocity module.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#constructor","title":"Constructor","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#modifypathvelocity-method","title":"modifyPathVelocity Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#createdebugmarkerarray-method","title":"createDebugMarkerArray Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#createvirtualwalls-method","title":"createVirtualWalls Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#manager","title":"Manager","text":"

The management of your modules is defined in manager.hpp and manager.cpp, and is handled by two classes:

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemodulemanager-class","title":"TemplateModuleManager Class","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#constructor-templatemodulemanager","title":"Constructor TemplateModuleManager","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#getmodulename-method","title":"getModuleName() Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#launchnewmodules-method","title":"launchNewModules() Method","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#getmoduleexpiredfunction-method","title":"getModuleExpiredFunction() Method","text":"

Please note that the specific functionality of the methods launchNewModules() and getModuleExpiredFunction() would depend on the details of your behavior velocity modules and how they are intended to be managed within the Autoware system. You would need to implement these methods according to your module's requirements.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemoduleplugin-class","title":"TemplateModulePlugin Class","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#templatemoduleplugin-class_1","title":"TemplateModulePlugin Class","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_template_module/#example-usage","title":"Example Usage","text":"

In the following example, we take each point of the path and multiply its longitudinal velocity by 2, essentially doubling the speed. Note that the velocity smoother will further modify the path speed after all the behavior velocity modules are executed.

bool TemplateModule::modifyPathVelocity(\n  [[maybe_unused]] PathWithLaneId * path, [[maybe_unused]] StopReason * stop_reason)\n{\n  for (auto & p : path->points) {\n    p.point.longitudinal_velocity_mps *= 2.0;\n  }\n\n  return false;\n}\n
"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#traffic-light","title":"Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#role","title":"Role","text":"

This module judges whether the vehicle can enter the intersection based on the traffic light status and plans a stop velocity if necessary. It is designed for rule-based velocity decisions that are easy for developers to design. It generates a proper velocity for the traffic light scene.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#limitations","title":"Limitations","text":"

This module allows developers to design STOP/GO behavior at traffic lights using specific rules. Due to the nature of rule-based planning, the algorithm greatly depends on object detection and perception accuracy with respect to traffic lights. Also, this module only handles STOP/GO at traffic light scenes, so rushing or quick decisions according to traffic conditions are future work.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a traffic light in the ego lane.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#algorithm","title":"Algorithm","text":"
  1. Obtains the traffic light mapped to the route and the stop line corresponding to the traffic light from the map information.

  2. Uses the traffic light recognition result with the highest reliability and, if its color is not green or a corresponding arrow signal, generates a stop point.

  3. When vehicle current velocity is

  4. When it is judged that the vehicle cannot stop before the stop line, Autoware chooses one of the following behaviors (a minimal form of this stop-possibility check is sketched below)
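
A minimal form of the stop-possibility check, assuming a constant deceleration \\(a_{\\mathrm{dec}}\\) and a response delay \\(t_{\\mathrm{delay}}\\) (symbols are illustrative, not quoted from the implementation); the vehicle is judged unable to stop if its distance to the stop line is shorter than:

\\[ \\begin{align} d_{\\mathrm{stop}} = v \\, t_{\\mathrm{delay}} + \\frac{v^2}{2 a_{\\mathrm{dec}}} \\end{align} \\]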

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#dilemma-zone","title":"Dilemma Zone","text":" "},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double [m] margin before stop point tl_state_timeout double [s] time out for detected traffic light result. stop_time_hysteresis double [s] time threshold to decide stop planning for chattering prevention yellow_lamp_period double [s] time for yellow lamp enable_pass_judge bool [-] whether to use pass judge"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_traffic_light_module/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#virtual-traffic-light","title":"Virtual Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#role","title":"Role","text":"

Autonomous vehicles have to cooperate with infrastructure such as:

The following items are example cases:

  1. Traffic control by traffic lights with V2X support

  2. Intersection coordination of multiple vehicles by FMS.

It is possible to implement each function individually; however, the use cases can be generalized with these three elements.

  1. start: Start a cooperation procedure after the vehicle enters a certain zone.
  2. stop: Stop at a defined stop line according to the status received from infrastructures.
  3. end: Finalize the cooperation procedure after the vehicle reaches the exit zone. This should be done within the range of stable communication.

This module sends status to and receives status from infrastructure and plans the velocity according to the cooperation result.

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#system-configuration-diagram","title":"System Configuration Diagram","text":"

Planner and each infrastructure communicate with each other using common abstracted messages.

FMS: Intersection coordination when multiple vehicles are in operation and the relevant lane is occupied

Support different communication methods for different infrastructures

Have different meta-information for each geographic location

FMS: Fleet Management System

"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description max_delay_sec double [s] maximum allowed delay for command near_line_distance double [m] threshold distance to stop line to check ego stop. dead_line_margin double [m] threshold distance that this module continue to insert stop line. hold_stop_margin_distance double [m] parameter for restart prevention (See following section) check_timeout_after_stop_line bool [-] check timeout to stop when linkage is disconnected"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it may overshoot the stopping position that must be strictly observed; this happens when the vehicle starts moving in order to approach a nearby stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance in order to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters from the module's stop point (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped at the module's stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#map-format","title":"Map Format","text":" \\[ \\begin{align} l_{\\mathrm{min}} = -\\frac{v_0^2}{2 a_{\\mathrm{min}}} \\end{align} \\]"},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_virtual_traffic_light_module/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_walkway_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_walkway_module/#walkway","title":"Walkway","text":""},{"location":"planning/behavior_velocity_planner/autoware_behavior_velocity_walkway_module/#role","title":"Role","text":"

This module decides to stop before the ego vehicle crosses a walkway, including a crosswalk, to enter or exit a private area.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/","title":"Index","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#dynamic-obstacle-stop","title":"Dynamic Obstacle Stop","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#role","title":"Role","text":"

dynamic_obstacle_stop is a module that stops the ego vehicle from entering the immediate path of a dynamic object.

The immediate path of an object is the area that the object would traverse during a given time horizon, assuming constant velocity and heading.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#activation-timing","title":"Activation Timing","text":"

This module is activated if the launch parameter launch_dynamic_obstacle_stop_module is set to true in the motion planning launch file.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The module inserts a stop point where the ego trajectory collides with the immediate path of an object. The overall module flow can be summarized in the following 4 steps.

  1. Filter dynamic objects.
  2. Calculate immediate path rectangles of the dynamic objects.
  3. Find earliest collision where ego collides with an immediate path rectangle.
  4. Insert stop point before the collision.

In addition to these 4 steps, 2 mechanisms are in place to make the stop point of this module more stable: a hysteresis and a decision duration buffer.

The hysteresis parameter is used when a stop point was already being inserted in the previous iteration and it increases the range where dynamic objects are considered close enough to the ego trajectory to be used by the module.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#filter-dynamic-objects","title":"Filter dynamic objects","text":"

An object is considered by the module only if it meets all of the following conditions:

An object is considered unavoidable if it is heading towards the ego vehicle such that even if ego stops, a collision would still occur (assuming the object keeps driving in a straight line).

For the last condition, the object is considered close enough if its lateral distance from the ego trajectory is less than the threshold parameter minimum_object_distance_from_ego_trajectory plus half the width of ego and of the object (including the extra_object_width parameter). In addition, the value of the hysteresis parameter is added to the minimum distance if a stop point was inserted in the previous iteration.
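
A sketch of this lateral filter; parameter names follow the module parameter table below, while the function name and the exact accounting of extra_object_width are assumptions.

// Keep an object only if it is laterally close enough to the ego trajectory.\n// How extra_object_width enters the half-width term is an assumption of this sketch.\nbool is_object_close_enough(\n  double lateral_distance_to_trajectory, double ego_width, double object_width,\n  double minimum_object_distance_from_ego_trajectory, double extra_object_width,\n  double hysteresis, bool stop_inserted_previously)\n{\n  double threshold = minimum_object_distance_from_ego_trajectory + ego_width / 2.0 +\n                     (object_width + extra_object_width) / 2.0;\n  if (stop_inserted_previously) {\n    threshold += hysteresis;  // widen the range to keep the stop decision stable\n  }\n  return lateral_distance_to_trajectory < threshold;\n}\n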

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#calculate-immediate-path-rectangles","title":"Calculate immediate path rectangles","text":"

For each considered object, a rectangle is created representing its immediate path. The rectangle has the width of the object plus the extra_object_width parameter and its length is the current speed of the object multiplied by the time_horizon.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#find-earliest-collision","title":"Find earliest collision","text":"

We build the ego trajectory footprints as the set of ego footprint polygons projected on each trajectory point. We then calculate the intersections between these ego trajectory footprints and the previously calculated immediate path rectangles. An intersection is ignored if the object is not driving toward ego, i.e., the absolute angle between the object and the trajectory point is larger than \\(\\frac{3 \\pi}{4}\\).

The collision point with the lowest arc length when projected on the ego trajectory will be used to calculate the final stop point.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#insert-stop-point","title":"Insert stop point","text":"

Before inserting a stop point, we calculate the range of trajectory arc lengths where it can be inserted. The minimum is calculated to satisfy the acceleration and jerk constraints of the vehicle. If a stop point was inserted in the previous iteration of the module, its arc length is used as the maximum. Finally, the stop point arc length is calculated to be the arc length of the previously found collision point minus the stop_distance_buffer and the ego vehicle longitudinal offset, clamped between the minimum and maximum values.
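
The clamping described above, as a one-function sketch with illustrative names.

#include <algorithm>\n\n// Clamp the candidate stop arc length between the deceleration/jerk-feasible minimum\n// and the previous stop point (if any).\ndouble calc_stop_arc_length(\n  double collision_arc_length, double stop_distance_buffer, double longitudinal_offset,\n  double min_arc_length_for_decel_limits, double previous_stop_arc_length)\n{\n  const double candidate = collision_arc_length - stop_distance_buffer - longitudinal_offset;\n  return std::clamp(candidate, min_arc_length_for_decel_limits, previous_stop_arc_length);\n}\n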

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#duration-buffer","title":"Duration buffer","text":"

To prevent chatter caused by noisy perception, two duration parameters are used.

Timers and collision points are tracked for each dynamic object independently.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_dynamic_obstacle_stop_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description extra_object_width double [m] extra width around detected objects minimum_object_velocity double [m/s] objects with a velocity bellow this value are ignored stop_distance_buffer double [m] extra distance to add between the stop point and the collision point time_horizon double [s] time horizon used for collision checks hysteresis double [m] once a collision has been detected, this hysteresis is used on the collision detection add_stop_duration_buffer double [s] duration where a collision must be continuously detected before a stop decision is added remove_stop_duration_buffer double [s] duration between no collision being detected and the stop decision being remove minimum_object_distance_from_ego_trajectory double [m] minimum distance between the footprints of ego and an object to consider for collision ignore_unavoidable_collisions bool [-] if true, ignore collisions that cannot be avoided by stopping (assuming the obstacle continues going straight)"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/","title":"Obstacle Velocity Limiter","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacle-velocity-limiter","title":"Obstacle Velocity Limiter","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#purpose","title":"Purpose","text":"

This node limits the velocity when driving in the direction of an obstacle. For example, it allows reducing the velocity when driving close to a guard rail in a curve.

Without this node With this node"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Using the min_ttc (minimum time to collision) parameter, the node sets velocity limits such that no collision with an obstacle would occur, even without new control inputs for a duration of min_ttc.

To achieve this, the motion of the ego vehicle is simulated forward in time at each point of the trajectory to create a corresponding footprint. If the footprint collides with an obstacle, the velocity at the trajectory point is reduced such that the new simulated footprint does not have any collision (a minimal sketch of this adjustment follows).
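
At its core, the adjustment reduces the velocity so that the collision-free travel duration is at least min_ttc; a minimal sketch with illustrative names.

#include <algorithm>\n\n// If the footprint simulated over min_ttc seconds hits an obstacle at\n// distance_to_collision, cap the velocity so the collision lies beyond the horizon.\ndouble limit_velocity(double current_velocity, double distance_to_collision, double min_ttc)\n{\n  const double max_safe_velocity = distance_to_collision / min_ttc;\n  return std::min(current_velocity, max_safe_velocity);\n}\n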

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#simulated-motion-footprint-and-collision-distance","title":"Simulated Motion, Footprint, and Collision Distance","text":"

The motion of the ego vehicle is simulated at each trajectory point using the heading, velocity, and steering defined at the point. Footprints are then constructed from these simulations and checked for collision. If a collision is found, the distance from the trajectory point is used to calculate the adjusted velocity that would produce a collision-free footprint. The simulation.distance_method parameter allows switching between an exact distance calculation and a less expensive approximation using a simple Euclidean distance.

Two models can be selected with parameter simulation.model for simulating the motion of the vehicle: a simple particle model and a more complicated bicycle model.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#particle-model","title":"Particle Model","text":"

The particle model uses the constant heading and velocity of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to a straight line and the footprint to a rectangle.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#footprint","title":"Footprint","text":"

The rectangle footprint is built from 2 lines parallel to the simulated forward motion and at a distance of half the vehicle width.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#distance","title":"Distance","text":"

When a collision point is found within the footprint, the distance is calculated as described in the following figure.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#bicycle-model","title":"Bicycle Model","text":"

The bicycle model uses the constant heading, velocity, and steering of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to an arc around the circle of curvature associated with the steering. Uncertainty in the steering can be introduced with the simulation.steering_offset parameter which will generate a range of motion from a left-most to a right-most steering. This results in 3 curved lines starting from the same trajectory point. A parameter simulation.nb_points is used to adjust the precision of these lines, with a minimum of 2 resulting in straight lines and higher values increasing the precision of the curves.
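
For reference, the standard kinematic bicycle relations behind this kind of simulation, with wheelbase \\(L\\), steering angle \\(\\delta\\), and velocity \\(v\\) (a textbook relation, not quoted from the implementation):

\\[ \\begin{align} R = \\frac{L}{\\tan(\\delta)}, \\qquad \\theta(t) = \\frac{v \\, t}{R} \\end{align} \\]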

By default, the steering values contained in the trajectory message are used. Parameter trajectory_preprocessing.calculate_steering_angles allows recalculating these values when set to true.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#footprint_1","title":"Footprint","text":"

The footprint of the bicycle model is created from lines parallel to the left and right simulated motion at a distance of half the vehicle width. In addition, the two points on the left and right of the end point of the central simulated motion are used to complete the polygon.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#distance_1","title":"Distance","text":"

The distance to a collision point is calculated by finding the curvature circle passing through the trajectory point and the collision point.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacle-detection","title":"Obstacle Detection","text":"

Obstacles are represented as points or linestrings (i.e., sequences of points) around the obstacles and are constructed from an occupancy grid, a pointcloud, or the lanelet map. The lanelet map is always checked for obstacles, while the dynamic source is selected using parameter obstacles.dynamic_source.

To efficiently find obstacles intersecting with a footprint, they are stored in an R-tree. Two trees are used: one for the obstacle points, and one for the obstacle linestrings (which are decomposed into segments to simplify the R-tree).
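A minimal sketch of this two-tree layout using Boost.Geometry, which provides an R-tree implementation; the value types are simplified compared to the real code:

```cpp
// Sketch of the two R-trees (points and linestring segments).
#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <iterator>
#include <vector>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

using point_t = bg::model::point<double, 2, bg::cs::cartesian>;
using segment_t = bg::model::segment<point_t>;
using polygon_t = bg::model::polygon<point_t>;

int main()
{
  bgi::rtree<point_t, bgi::rstar<16>> point_tree;
  bgi::rtree<segment_t, bgi::rstar<16>> segment_tree;

  // Linestrings are decomposed into segments before insertion.
  segment_tree.insert(segment_t{{0.0, 0.0}, {1.0, 0.0}});
  point_tree.insert(point_t{2.0, 1.0});

  // Query all obstacle segments intersecting a footprint polygon.
  polygon_t footprint;
  bg::append(footprint.outer(), point_t{-1.0, -1.0});
  bg::append(footprint.outer(), point_t{3.0, -1.0});
  bg::append(footprint.outer(), point_t{3.0, 2.0});
  bg::append(footprint.outer(), point_t{-1.0, 2.0});
  bg::append(footprint.outer(), point_t{-1.0, -1.0});

  std::vector<segment_t> hits;
  segment_tree.query(bgi::intersects(footprint), std::back_inserter(hits));
  return static_cast<int>(hits.size());
}
```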

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacle-masks","title":"Obstacle masks","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#dynamic-obstacles","title":"Dynamic obstacles","text":"

Moving obstacles such as other cars should not be considered by this module. These obstacles are detected by the perception modules and represented as polygons. Obstacles inside these polygons are ignored.

Only dynamic obstacles with a velocity above parameter obstacles.dynamic_obstacles_min_vel are removed.

To deal with delays and precision errors, the polygons can be enlarged with parameter obstacles.dynamic_obstacles_buffer.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacles-outside-of-the-safety-envelope","title":"Obstacles outside of the safety envelope","text":"

Obstacles that are not inside any forward simulated footprint are ignored if parameter obstacles.filter_envelope is set to true. The safety envelope polygon is built from all the footprints and used as a positive mask on the occupancy grid or pointcloud.

This option can reduce the total number of obstacles, which reduces the cost of collision detection. However, the cost of masking the envelope usually outweighs the gain.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#obstacles-on-the-ego-path","title":"Obstacles on the ego path","text":"

If parameter obstacles.ignore_obstacles_on_path is set to true, a polygon mask is built from the trajectory and the vehicle dimensions. Any obstacle in this polygon is ignored.

The size of the polygon can be increased using parameter obstacles.ignore_extra_distance which is added to the vehicle lateral offset.

This option is somewhat expensive and should only be used when noisy dynamic obstacles are wrongly detected on the ego path, causing unwanted velocity limits.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#lanelet-map","title":"Lanelet Map","text":"

Information about static obstacles can be stored in the lanelet map using the value of the type tag of linestrings. If any linestring has a type matching one of the values of parameter obstacles.static_map_tags, it will be used as an obstacle.

Obstacles from the lanelet map are not impacted by the masks.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#occupancy-grid","title":"Occupancy Grid","text":"

Masking is performed by iterating through the cells inside each polygon mask using the autoware::grid_map_utils::PolygonIterator function. A threshold is then applied to only keep cells with an occupancy value above parameter obstacles.occupancy_grid_threshold. Finally, the grid is converted to an image and obstacle linestrings are extracted using the OpenCV function findContours.
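A minimal OpenCV sketch of the threshold-and-extract step described above (the node's actual image handling may differ):

```cpp
// Sketch: threshold an occupancy image and extract obstacle contours.
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> extractObstacleContours(
  const cv::Mat & occupancy /* CV_8UC1, grid values 0-100 */, int threshold)
{
  cv::Mat obstacle_mask;
  // Keep only cells with an occupancy value above the threshold.
  cv::threshold(occupancy, obstacle_mask, threshold, 255, cv::THRESH_BINARY);
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(obstacle_mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  return contours;  // each contour becomes an obstacle linestring
}
```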

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#pointcloud","title":"Pointcloud","text":"

Masking is performed using the pcl::CropHull function. Points from the pointcloud are then directly used as obstacles.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#velocity-adjustment","title":"Velocity Adjustment","text":"

If a collision is found, the velocity at the trajectory point is adjusted such that the resulting footprint would no longer collide with an obstacle: \\(velocity = \\frac{dist\\_to\\_collision}{min\\_ttc}\\)

To prevent sudden deceleration of the ego vehicle, the parameter max_deceleration limits the deceleration relative to the current ego velocity. For a trajectory point occurring at a duration t in the future (calculated from the original velocity profile), the adjusted velocity cannot be set lower than \\(v_{current} - t * max\\_deceleration\\).

Furthermore, a parameter min_adjusted_velocity provides a lower bound on the modified velocity.
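Putting these rules together, a sketch of the resulting velocity computation (illustrative only; the original velocity is never exceeded since the node only slows the vehicle down):

```cpp
// Sketch combining the collision-free velocity with its two lower bounds.
#include <algorithm>

double adjustVelocity(
  double original_velocity, double dist_to_collision, double min_ttc,
  double current_velocity, double t /* time until this trajectory point [s] */,
  double max_deceleration, double min_adjusted_velocity)
{
  const double collision_free_velocity = dist_to_collision / min_ttc;
  const double deceleration_bound = current_velocity - t * max_deceleration;
  // The adjusted velocity cannot go below the deceleration bound nor below
  // the configured minimum, and the node never increases the velocity.
  return std::min(
    original_velocity,
    std::max({collision_free_velocity, deceleration_bound, min_adjusted_velocity}));
}
```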

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#trajectory-preprocessing","title":"Trajectory preprocessing","text":"

The node only modifies part of the input trajectory, starting from the current ego position. Parameter trajectory_preprocessing.start_distance is used to adjust how far ahead of the ego position the velocities will start being modified. Parameters trajectory_preprocessing.max_length and trajectory_preprocessing.max_duration are used to control how much of the trajectory will see its velocity adjusted.

To reduce computation cost at the expense of precision, the trajectory can be downsampled using parameter trajectory_preprocessing.downsample_factor. For example, a value of 1 means all trajectory points will be evaluated, while a value of 10 means only 1/10th of the points will be evaluated.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#parameters","title":"Parameters","text":"Name Type Description min_ttc float [s] required minimum time with no collision at each point of the trajectory assuming constant heading and velocity. distance_buffer float [m] required distance buffer with the obstacles. min_adjusted_velocity float [m/s] minimum adjusted velocity this node can set. max_deceleration float [m/s\u00b2] maximum deceleration an adjusted velocity can cause. trajectory_preprocessing.start_distance float [m] controls from which part of the trajectory (relative to the current ego pose) the velocity is adjusted. trajectory_preprocessing.max_length float [m] controls the maximum length (starting from the start_distance) where the velocity is adjusted. trajectory_preprocessing.max_distance float [s] controls the maximum duration (measured from the start_distance) where the velocity is adjusted. trajectory_preprocessing.downsample_factor int trajectory downsampling factor to allow tradeoff between precision and performance. trajectory_preprocessing.calculate_steering_angle bool if true, the steering angles of the trajectory message are not used but are recalculated. simulation.model string model to use for forward simulation. Either \"particle\" or \"bicycle\". simulation.distance_method string method to use for calculating distance to collision. Either \"exact\" or \"approximation\". simulation.steering_offset float offset around the steering used by the bicycle model. simulation.nb_points int number of points used to simulate motion with the bicycle model. obstacles.dynamic_source string source of dynamic obstacle used for collision checking. Can be \"occupancy_grid\", \"point_cloud\", or \"static_only\" (no dynamic obstacle). obstacles.occupancy_grid_threshold int value in the occupancy grid above which a cell is considered an obstacle. obstacles.dynamic_obstacles_buffer float buffer around dynamic obstacles used when masking an obstacle in order to prevent noise. obstacles.dynamic_obstacles_min_vel float velocity above which to mask a dynamic obstacle. obstacles.static_map_tags string list linestring of the lanelet map with this tags are used as obstacles. obstacles.filter_envelope bool wether to use the safety envelope to filter the dynamic obstacles source."},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The velocity profile produced by this node is not meant to be a realistic velocity profile and can contain sudden jumps of velocity with no regard for acceleration and jerk. This velocity profile is meant to be used as an upper bound on the actual velocity of the vehicle.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":"

The critical case for this node is when an obstacle is falsely detected very close to the trajectory, making the corresponding velocity limit suddenly very low. This can cause sudden braking, and two mechanisms can be used to mitigate these errors.

Parameter min_adjusted_velocity allows setting a minimum adjusted velocity, preventing the node from slowing down the vehicle too much. Parameter max_deceleration allows setting a maximum deceleration (relative to the current ego velocity) that the adjusted velocity can incur.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_obstacle_velocity_limiter_module/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/","title":"Out of Lane","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#out-of-lane","title":"Out of Lane","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#role","title":"Role","text":"

There are cases where the ego vehicle footprint goes out of the driving lane, for example when taking a narrow turn with a large vehicle. The out_of_lane module adds deceleration and stop points to the ego trajectory in order to prevent collisions from occurring in these out of lane cases.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#activation","title":"Activation","text":"

This module is activated if the launch parameter launch_out_of_lane_module is set to true.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This module calculates whether out of lane collisions occur and inserts stop points before the collisions if necessary.

The algorithm assumes the input ego trajectory contains accurate time_from_start values in order to calculate accurate times to collision with the predicted objects.

Next, we explain the inner-workings of the module in more detail.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#1-ego-trajectory-footprints","title":"1. Ego trajectory footprints","text":"

In this first step, the ego footprint is projected at each trajectory point and its size is modified based on the ego.extra_..._offset parameters.

The length of the trajectory used for generating the footprints is limited by the max_arc_length parameter.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#2-other-lanelets","title":"2. Other lanelets","text":"

In the second step, we calculate the lanelets where collisions should be avoided. We consider all lanelets around the ego vehicle that are neither crossed by the trajectory linestring (the sequence of trajectory points) nor preceding lanelets of a crossed lanelet.

In the debug visualization, these other lanelets are shown as blue polygons.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#3-out-of-lane-areas","title":"3. Out of lane areas","text":"

Next, for each trajectory point, we create the corresponding out of lane areas by intersecting the other lanelets (from step 2) with the trajectory point footprint (from step 1). Each area is associated with the lanelets overlapped by the area and with the corresponding ego trajectory point.

In the debug visualization, the out of lane area polygon is connected to the corresponding trajectory point by a line.
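A sketch of the intersection operation at the core of this step, using Boost.Geometry polygons as simplified stand-ins for the lanelet and footprint types:

```cpp
// Sketch: out-of-lane areas as footprint-lanelet polygon intersections.
#include <boost/geometry.hpp>
#include <deque>
#include <vector>

namespace bg = boost::geometry;
using point_t = bg::model::point<double, 2, bg::cs::cartesian>;
using polygon_t = bg::model::polygon<point_t>;

std::deque<polygon_t> calculateOutOfLaneAreas(
  const polygon_t & footprint, const std::vector<polygon_t> & other_lanelets)
{
  std::deque<polygon_t> out_of_lane_areas;
  for (const auto & lanelet : other_lanelets) {
    std::deque<polygon_t> overlap;
    // Each overlap polygon is one out-of-lane area for this footprint.
    bg::intersection(footprint, lanelet, overlap);
    out_of_lane_areas.insert(out_of_lane_areas.end(), overlap.begin(), overlap.end());
  }
  return out_of_lane_areas;
}
```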

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#4-predicted-objects-filtering","title":"4. Predicted objects filtering","text":"

We filter objects and their predicted paths with the following conditions:

cut_predicted_paths_beyond_red_lights = false cut_predicted_paths_beyond_red_lights = true

In the debug visualization, the filtered predicted paths are shown in green and the stop lines of red traffic lights are shown in red.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#5-time-to-collisions","title":"5. Time to collisions","text":"

For each out of lane area, we calculate the times when a dynamic object will overlap the area based on its filtered predicted paths.

In the case where parameter mode is set to threshold and the calculated time is less than the threshold.time_threshold parameter, we decide to avoid the out of lane area.

In the case where parameter mode is set to ttc, we calculate the time to collision by comparing the predicted time of the object with the time_from_start field contained in the trajectory point. If the time to collision is below the ttc.threshold parameter value, we decide to avoid the out of lane area.

In the debug visualization, the ttc (in seconds) is displayed on top of its corresponding trajectory point. The color of the text is red if the collision should be avoided and green otherwise.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#6-calculate-the-stop-or-slowdown-point","title":"6. Calculate the stop or slowdown point","text":"

First, the minimum stopping distance of the ego vehicle is calculated based on the jerk and deceleration constraints set by the velocity smoother parameters.

We then search for the furthest pose along the trajectory where the ego footprint stays inside the ego lane (calculated in step 2), and constrain the search to lie between the minimum stopping distance and the first trajectory point with a collision to avoid (as determined in the previous step). The search is done by moving backward along the trajectory with a distance step set by the action.precision parameter.

We first do this search for a footprint expanded with the ego.extra_..._offset, action.longitudinal_distance_buffer, and action.lateral_distance_buffer parameters. If no valid pose is found, we search again while only considering the extra offsets, without the distance buffers. If still no valid pose is found, we use the base ego footprint without any offset. If no pose is found at all, we fall back to using the pose before the detected collision, regardless of whether it is out of lane.
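The backward search described above can be sketched as follows; interpolateTrajectory and footprintStaysInLane are hypothetical helpers standing in for the module's internals:

```cpp
// Sketch of the backward search for a valid stop/slowdown pose.
#include <optional>

struct Pose { double x, y, yaw; };

// Hypothetical helpers (not the module's actual API).
Pose interpolateTrajectory(double arc_length);  // pose at the given arc length
bool footprintStaysInLane(const Pose & p, double extra_offset, double dist_buffer);

std::optional<Pose> findStopPose(
  double min_stop_arc_length, double collision_arc_length,
  double precision, double extra_offset, double dist_buffer)
{
  // Move backward from the collision toward the minimum stopping distance.
  for (double s = collision_arc_length; s >= min_stop_arc_length; s -= precision) {
    const auto pose = interpolateTrajectory(s);
    if (footprintStaysInLane(pose, extra_offset, dist_buffer)) return pose;
  }
  // Caller retries with smaller offsets, then falls back to the pose
  // before the detected collision.
  return std::nullopt;
}
```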

Whether to slow down or stop is determined by the distance between the ego vehicle and the trajectory point to avoid. If this distance is below action.slowdown.distance_threshold, a velocity of action.slowdown.velocity will be used. If the distance is below action.stop.distance_threshold, a velocity of 0 m/s will be used.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#about-stability-of-the-stopslowdown-pose","title":"About stability of the stop/slowdown pose","text":"

As the input trajectory can change significantly between iterations, it is expected that the decisions of this module will also change. To make the decisions more stable, a stop or slowdown pose is used for a minimum duration set by the action.min_duration parameter. If during that time a new pose closer to the ego vehicle is generated, it replaces the previous one. Otherwise, the stop or slowdown pose is only discarded after no out of lane collision has been detected for the set duration.

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_out_of_lane_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Description mode string [-] mode used to consider a dynamic object. Candidates: threshold, intervals, ttc skip_if_already_overlapping bool [-] if true, do not run this module when ego already overlaps another lane max_arc_length double [m] maximum trajectory arc length that is checked for out_of_lane collisions Parameter /threshold Type Description time_threshold double [s] consider objects that will reach an overlap within this time Parameter /ttc Type Description threshold double [s] consider objects with an estimated time to collision bellow this value while ego is on the overlap Parameter /objects Type Description minimum_velocity double [m/s] ignore objects with a velocity lower than this value predicted_path_min_confidence double [-] minimum confidence required for a predicted path to be considered cut_predicted_paths_beyond_red_lights bool [-] if true, predicted paths are cut beyond the stop line of red traffic lights ignore_behind_ego bool [-] if true, objects behind the ego vehicle are ignored Parameter /action Type Description precision double [m] precision when inserting a stop pose in the trajectory longitudinal_distance_buffer double [m] safety distance buffer to keep in front of the ego vehicle lateral_distance_buffer double [m] safety distance buffer to keep on the side of the ego vehicle min_duration double [s] minimum duration needed before a decision can be canceled slowdown.distance_threshold double [m] insert a slow down when closer than this distance from an overlap slowdown.velocity double [m] slow down velocity stop.distance_threshold double [m] insert a stop when closer than this distance from an overlap Parameter /ego Type Description extra_front_offset double [m] extra front distance to add to the ego footprint extra_rear_offset double [m] extra rear distance to add to the ego footprint extra_left_offset double [m] extra left distance to add to the ego footprint extra_right_offset double [m] extra right distance to add to the ego footprint"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/","title":"Motion Velocity Planner","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#motion-velocity-planner","title":"Motion Velocity Planner","text":""},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#overview","title":"Overview","text":"

motion_velocity_planner is a planner that adjusts the trajectory velocity based on the obstacles around the vehicle. It loads modules as plugins. Please refer to the links listed below for details on each module.

Each module calculates stop and slow down points to be inserted in the ego trajectory. These points are assumed to correspond to the base_link frame of the ego vehicle as it follows the trajectory. This means that to stop before a wall, a stop point is inserted in the trajectory at a distance ahead of the wall equal to the vehicle front offset (wheelbase + front overhang; see the vehicle dimensions).

"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#input-topics","title":"Input topics","text":"Name Type Description ~/input/trajectory autoware_planning_msgs::msg::Trajectory input trajectory ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map ~/input/vehicle_odometry nav_msgs::msg::Odometry vehicle position and velocity ~/input/accel geometry_msgs::msg::AccelWithCovarianceStamped vehicle acceleration ~/input/dynamic_objects autoware_perception_msgs::msg::PredictedObjects dynamic objects ~/input/no_ground_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray traffic light states ~/input/occupancy_grid nav_msgs::msg::OccupancyGrid occupancy grid"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#output-topics","title":"Output topics","text":"Name Type Description ~/output/trajectory autoware_planning_msgs::msg::Trajectory Ego trajectory with updated velocity profile ~/output/planning_factors/<MODULE_NAME> tier4_planning_msgs::msg::PlanningFactorsArray factors causing change in the ego velocity profile"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#services","title":"Services","text":"Name Type Description ~/service/load_plugin autoware_motion_velocity_planner_node::srv::LoadPlugin To request loading a plugin ~/service/unload_plugin autoware_motion_velocity_planner_node::srv::UnloadPlugin To request unloaded a plugin"},{"location":"planning/motion_velocity_planner/autoware_motion_velocity_planner_node/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_modules vector\\<string> module names to launch

In addition, the following parameters should be provided to the node:

"},{"location":"planning/sampling_based_planner/autoware_bezier_sampler/","title":"B\u00e9zier sampler","text":""},{"location":"planning/sampling_based_planner/autoware_bezier_sampler/#bezier-sampler","title":"B\u00e9zier sampler","text":"

Implementation of B\u00e9zier curves and their generation following the sampling strategy from https://ieeexplore.ieee.org/document/8932495

"},{"location":"planning/sampling_based_planner/autoware_frenet_planner/","title":"Frenet planner","text":""},{"location":"planning/sampling_based_planner/autoware_frenet_planner/#frenet-planner","title":"Frenet planner","text":"

Trajectory generation in Frenet frame.

"},{"location":"planning/sampling_based_planner/autoware_frenet_planner/#description","title":"Description","text":"

Original paper

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#path-sampler","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#purpose","title":"Purpose","text":"

This package implements a node that uses sampling-based planning to generate a drivable trajectory.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#feature","title":"Feature","text":"

This package is able to:

Note that the velocity is just taken over from the input path.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#input","title":"input","text":"Name Type Description ~/input/path autoware_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry nav_msgs/msg/Odometry Current state of the ego vehicle ~/input/objects autoware_perception_msgs/msg/PredictedObjects objects to avoid"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#output","title":"output","text":"Name Type Description ~/output/trajectory autoware_planning_msgs/msg/Trajectory generated trajectory that is feasible to drive and collision-free"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#algorithm","title":"Algorithm","text":"

Sampling based planning is decomposed into 3 successive steps:

  1. Sampling: candidate trajectories are generated.
  2. Pruning: invalid candidates are discarded.
  3. Selection: the best remaining valid candidate is selected.
"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#sampling","title":"Sampling","text":"

Candidate trajectories are generated based on the current ego state and some target state. Two sampling algorithms are currently implemented: sampling with B\u00e9zier curves or with polynomials in the Frenet frame.
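As a sketch of the polynomial variant, a quintic lateral polynomial can be fitted between the current state and a sampled target state by solving the boundary conditions (Eigen-based, illustrative; the package's actual formulation may differ):

```cpp
// Sketch: fit d(s) = c0 + c1 s + c2 s^2 + c3 s^3 + c4 s^4 + c5 s^5
// from boundary conditions at s = 0 and s = S in the Frenet frame.
#include <Eigen/Dense>

Eigen::Matrix<double, 6, 1> fitQuintic(
  double d0, double dd0, double ddd0,  // initial lateral pos., 1st, 2nd derivative
  double dT, double ddT, double dddT,  // sampled target state at arc length S
  double S)
{
  Eigen::Matrix<double, 6, 6> A;
  A << 1, 0, 0, 0, 0, 0,                                 // d(0)
       0, 1, 0, 0, 0, 0,                                 // d'(0)
       0, 0, 2, 0, 0, 0,                                 // d''(0)
       1, S, S*S, S*S*S, S*S*S*S, S*S*S*S*S,             // d(S)
       0, 1, 2*S, 3*S*S, 4*S*S*S, 5*S*S*S*S,             // d'(S)
       0, 0, 2, 6*S, 12*S*S, 20*S*S*S;                   // d''(S)
  Eigen::Matrix<double, 6, 1> b;
  b << d0, dd0, ddd0, dT, ddT, dddT;
  return A.colPivHouseholderQr().solve(b);  // polynomial coefficients
}
```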

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#pruning","title":"Pruning","text":"

The validity of each candidate trajectory is checked using a set of hard constraints.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#selection","title":"Selection","text":"

Among the valid candidate trajectories, the best one is determined using a set of soft constraints (i.e., objective functions).

Each soft constraint is associated with a weight to allow tuning of the preferences.
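A minimal sketch of this weighted selection (names are illustrative):

```cpp
// Sketch: pick the valid candidate minimizing a weighted sum of costs.
#include <functional>
#include <limits>
#include <vector>

struct Trajectory { /* ... */ };
using SoftConstraint = std::function<double(const Trajectory &)>;

const Trajectory * selectBest(
  const std::vector<Trajectory> & valid_candidates,
  const std::vector<SoftConstraint> & constraints,
  const std::vector<double> & weights)  // one weight per soft constraint
{
  const Trajectory * best = nullptr;
  double best_cost = std::numeric_limits<double>::max();
  for (const auto & candidate : valid_candidates) {
    double cost = 0.0;
    for (size_t i = 0; i < constraints.size(); ++i) {
      cost += weights[i] * constraints[i](candidate);
    }
    if (cost < best_cost) { best_cost = cost; best = &candidate; }
  }
  return best;  // nullptr if there is no valid candidate
}
```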

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#limitations","title":"Limitations","text":"

The quality of the candidates generated with polynomials in the Frenet frame greatly depends on the reference path. If the reference path is not smooth, the resulting candidates will probably be undrivable.

Failure to find a valid trajectory currently results in a suddenly stopping trajectory.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#comparison-with-the-autoware_path_optimizer","title":"Comparison with the autoware_path_optimizer","text":"

The autoware_path_optimizer uses an optimization-based approach, finding the optimal solution of a mathematical problem if it exists. When no solution can be found, it is often hard to identify the issue due to the intermediate mathematical representation of the problem.

In comparison, the sampling-based approach cannot guarantee an optimal solution but is much more straightforward, making it easier to debug and tune.

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#how-to-tune-parameters","title":"How to Tune Parameters","text":"

The sampling-based planner mostly offers a trade-off between the consistent quality of the trajectory and the computation time. Guaranteeing that a good trajectory is found requires generating many candidates, which linearly increases the computation time.

TODO

"},{"location":"planning/sampling_based_planner/autoware_path_sampler/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#computation-time","title":"Computation time","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#robustness","title":"Robustness","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#other-options","title":"Other options","text":""},{"location":"planning/sampling_based_planner/autoware_path_sampler/#how-to-debug","title":"How To Debug","text":"

TODO

"},{"location":"planning/sampling_based_planner/autoware_sampler_common/","title":"Sampler Common","text":""},{"location":"planning/sampling_based_planner/autoware_sampler_common/#sampler-common","title":"Sampler Common","text":"

Common functions for sampling-based planners. This includes classes for representing paths and trajectories, hard and soft constraints, conversions between Cartesian and Frenet frames, etc.

"},{"location":"sensing/autoware_cuda_utils/","title":"autoware_cuda_utils","text":""},{"location":"sensing/autoware_cuda_utils/#autoware_cuda_utils","title":"autoware_cuda_utils","text":""},{"location":"sensing/autoware_cuda_utils/#purpose","title":"Purpose","text":"

This package contains a library of common functions related to CUDA.

"},{"location":"sensing/autoware_gnss_poser/","title":"gnss_poser","text":""},{"location":"sensing/autoware_gnss_poser/#gnss_poser","title":"gnss_poser","text":""},{"location":"sensing/autoware_gnss_poser/#purpose","title":"Purpose","text":"

The gnss_poser is a node that subscribes to GNSS sensing messages and calculates the vehicle pose with covariance.

This node subscribes to NavSatFix to publish the pose of base_link. The data in NavSatFix represents the antenna's position. Therefore, it performs a coordinate transformation using the tf from base_link to the antenna's position. The frame_id of the antenna's position refers to NavSatFix's header.frame_id. (Note that header.frame_id in NavSatFix indicates the antenna's frame_id, not the Earth or reference ellipsoid. See also NavSatFix definition.)

If the transformation from base_link to the antenna cannot be obtained, it outputs the pose of the antenna position without performing coordinate transformation.
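A rough sketch of this transformation and its fallback using tf2; the node's actual implementation is more involved (covariance handling, orientation sources), so treat this as illustrative only:

```cpp
// Sketch: compose the antenna pose (in map) with the antenna->base_link tf,
// falling back to the raw antenna pose when the tf is unavailable.
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <string>
#include <tf2/LinearMath/Transform.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.hpp>
#include <tf2_ros/buffer.h>

geometry_msgs::msg::PoseStamped toBaseLinkPose(
  const geometry_msgs::msg::PoseStamped & antenna_pose_in_map,
  const tf2_ros::Buffer & tf_buffer,
  const std::string & antenna_frame)  // NavSatFix header.frame_id
{
  try {
    const auto tf_msg =
      tf_buffer.lookupTransform(antenna_frame, "base_link", tf2::TimePointZero);
    tf2::Transform t_map_antenna, t_antenna_base;
    tf2::fromMsg(antenna_pose_in_map.pose, t_map_antenna);
    tf2::fromMsg(tf_msg.transform, t_antenna_base);
    geometry_msgs::msg::PoseStamped out = antenna_pose_in_map;
    // pose(map->base_link) = pose(map->antenna) * tf(antenna->base_link)
    tf2::toMsg(t_map_antenna * t_antenna_base, out.pose);
    return out;
  } catch (const tf2::TransformException &) {
    return antenna_pose_in_map;  // no tf: output the antenna pose as-is
  }
}
```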

"},{"location":"sensing/autoware_gnss_poser/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_gnss_poser/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_gnss_poser/#input","title":"Input","text":"Name Type Description /map/map_projector_info autoware_map_msgs::msg::MapProjectorInfo map projection info ~/input/fix sensor_msgs::msg::NavSatFix gnss status message ~/input/autoware_orientation autoware_sensing_msgs::msg::GnssInsOrientationStamped orientation click here for more details"},{"location":"sensing/autoware_gnss_poser/#output","title":"Output","text":"Name Type Description ~/output/pose geometry_msgs::msg::PoseStamped vehicle pose calculated from gnss sensing data ~/output/gnss_pose_cov geometry_msgs::msg::PoseWithCovarianceStamped vehicle pose with covariance calculated from gnss sensing data ~/output/gnss_fixed tier4_debug_msgs::msg::BoolStamped gnss fix status"},{"location":"sensing/autoware_gnss_poser/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_gnss_poser/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range base_frame string frame id for base_frame base_link N/A gnss_base_frame string frame id for gnss_base_frame gnss_base_link N/A map_frame string frame id for map_frame map N/A use_gnss_ins_orientation boolean use Gnss-Ins orientation true N/A gnss_pose_pub_method integer 0: Instant Value 1: Average Value 2: Median Value. If 0 is chosen buffer_epoch parameter loses affect. 0 \u22650\u22642 buff_epoch integer Buffer epoch 1 \u22650"},{"location":"sensing/autoware_gnss_poser/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_gnss_poser/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_gnss_poser/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_gnss_poser/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_gnss_poser/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_image_diagnostics/","title":"image_diagnostics","text":""},{"location":"sensing/autoware_image_diagnostics/#image_diagnostics","title":"image_diagnostics","text":""},{"location":"sensing/autoware_image_diagnostics/#purpose","title":"Purpose","text":"

The image_diagnostics is a node that checks the status of the input raw image.

"},{"location":"sensing/autoware_image_diagnostics/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below shows the flowchart of the image diagnostics node. Each image is divided into small blocks for block state assessment.

The state of each small image block is assessed as shown in the figure below.

After the states of all image blocks are evaluated, the whole image status is summarized as below.

"},{"location":"sensing/autoware_image_diagnostics/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_image_diagnostics/#input","title":"Input","text":"Name Type Description input/raw_image sensor_msgs::msg::Image raw image"},{"location":"sensing/autoware_image_diagnostics/#output","title":"Output","text":"Name Type Description image_diag/debug/gray_image sensor_msgs::msg::Image gray image image_diag/debug/dft_image sensor_msgs::msg::Image discrete Fourier transformation image image_diag/debug/diag_block_image sensor_msgs::msg::Image each block state colorization image_diag/image_state_diag autoware_internal_debug_msgs::msg::Int32Stamped image diagnostics status value /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics"},{"location":"sensing/autoware_image_diagnostics/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_image_diagnostics/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_image_diagnostics/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":" "},{"location":"sensing/autoware_image_transport_decompressor/","title":"image_transport_decompressor","text":""},{"location":"sensing/autoware_image_transport_decompressor/#image_transport_decompressor","title":"image_transport_decompressor","text":""},{"location":"sensing/autoware_image_transport_decompressor/#purpose","title":"Purpose","text":"

The image_transport_decompressor is a node that decompresses images.

"},{"location":"sensing/autoware_image_transport_decompressor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_image_transport_decompressor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_image_transport_decompressor/#input","title":"Input","text":"Name Type Description ~/input/compressed_image sensor_msgs::msg::CompressedImage compressed image"},{"location":"sensing/autoware_image_transport_decompressor/#output","title":"Output","text":"Name Type Description ~/output/raw_image sensor_msgs::msg::Image decompressed image"},{"location":"sensing/autoware_image_transport_decompressor/#parameters","title":"Parameters","text":"Name Type Description Default Range encoding string The image encoding to use for the decompressed image default N/A"},{"location":"sensing/autoware_image_transport_decompressor/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_image_transport_decompressor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_imu_corrector/","title":"autoware_imu_corrector","text":""},{"location":"sensing/autoware_imu_corrector/#autoware_imu_corrector","title":"autoware_imu_corrector","text":""},{"location":"sensing/autoware_imu_corrector/#imu_corrector","title":"imu_corrector","text":"

imu_corrector_node is a node that corrects IMU data.

  1. Correct yaw rate offset \\(b\\) by reading the parameter.
  2. Correct yaw rate standard deviation \\(\\sigma\\) by reading the parameter.

Mathematically, we assume the following equation:

\\[ \\tilde{\\omega}(t) = \\omega(t) + b(t) + n(t) \\]

where \\(\\tilde{\\omega}\\) denotes observed angular velocity, \\(\\omega\\) denotes true angular velocity, \\(b\\) denotes an offset, and \\(n\\) denotes a gaussian noise. We also assume that \\(n\\sim\\mathcal{N}(0, \\sigma^2)\\).
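Under this model, the correction amounts to subtracting the configured offset and overwriting the covariance diagonal with the configured standard deviations; a minimal sketch (illustrative, not the node's exact code):

```cpp
// Sketch: apply the configured offsets and standard deviations to an Imu msg.
#include <sensor_msgs/msg/imu.hpp>

void correctImu(
  sensor_msgs::msg::Imu & imu,
  double offset_x, double offset_y, double offset_z,   // [rad/s]
  double stddev_x, double stddev_y, double stddev_z)   // [rad/s]
{
  // Remove the configured angular velocity offset b.
  imu.angular_velocity.x -= offset_x;
  imu.angular_velocity.y -= offset_y;
  imu.angular_velocity.z -= offset_z;
  // Overwrite the covariance diagonal with sigma^2.
  imu.angular_velocity_covariance[0] = stddev_x * stddev_x;  // xx
  imu.angular_velocity_covariance[4] = stddev_y * stddev_y;  // yy
  imu.angular_velocity_covariance[8] = stddev_z * stddev_z;  // zz
}
```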

"},{"location":"sensing/autoware_imu_corrector/#input","title":"Input","text":"Name Type Description ~input sensor_msgs::msg::Imu raw imu data"},{"location":"sensing/autoware_imu_corrector/#output","title":"Output","text":"Name Type Description ~output sensor_msgs::msg::Imu corrected imu data"},{"location":"sensing/autoware_imu_corrector/#parameters","title":"Parameters","text":"Name Type Description angular_velocity_offset_x double roll rate offset in imu_link [rad/s] angular_velocity_offset_y double pitch rate offset imu_link [rad/s] angular_velocity_offset_z double yaw rate offset imu_link [rad/s] angular_velocity_stddev_xx double roll rate standard deviation imu_link [rad/s] angular_velocity_stddev_yy double pitch rate standard deviation imu_link [rad/s] angular_velocity_stddev_zz double yaw rate standard deviation imu_link [rad/s] acceleration_stddev double acceleration standard deviation imu_link [m/s^2]"},{"location":"sensing/autoware_imu_corrector/#gyro_bias_estimator","title":"gyro_bias_estimator","text":"

gyro_bias_estimator is a node that validates the bias of the gyroscope. It subscribes to the sensor_msgs::msg::Imu topic and validates whether the bias of the gyroscope is within the specified range.

Note that the node calculates bias from the gyroscope data by averaging the data only when the vehicle is stopped.
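A sketch of this stopped-state averaging; the velocity threshold value is illustrative:

```cpp
// Sketch: estimate the gyro bias by averaging samples taken while stopped.
#include <cmath>
#include <cstddef>

class GyroBiasAccumulator
{
public:
  void add(double gyro_z, double vehicle_velocity)
  {
    // Only accumulate while the vehicle is (near) stationary.
    if (std::abs(vehicle_velocity) > stopped_velocity_threshold_) return;
    sum_ += gyro_z;
    ++count_;
  }
  bool hasEstimate() const { return count_ > 0; }
  double bias() const { return sum_ / static_cast<double>(count_); }

private:
  double stopped_velocity_threshold_{0.01};  // [m/s], illustrative value
  double sum_{0.0};
  std::size_t count_{0};
};
```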

"},{"location":"sensing/autoware_imu_corrector/#input_1","title":"Input","text":"Name Type Description ~/input/imu_raw sensor_msgs::msg::Imu raw imu data ~/input/pose geometry_msgs::msg::PoseWithCovarianceStamped ndt pose

Note that the input pose is assumed to be accurate enough. For example, when using NDT, we assume that the NDT has appropriately converged.

Currently, it is possible to use methods other than NDT as a pose_source for Autoware, but less accurate methods are not suitable for IMU bias estimation.

In the future, with careful implementation for pose errors, the IMU bias estimated by NDT could potentially be used not only for validation but also for online calibration.

"},{"location":"sensing/autoware_imu_corrector/#output_1","title":"Output","text":"Name Type Description ~/output/gyro_bias geometry_msgs::msg::Vector3Stamped bias of the gyroscope [rad/s]"},{"location":"sensing/autoware_imu_corrector/#parameters_1","title":"Parameters","text":"

Note that this node also uses angular_velocity_offset_x, angular_velocity_offset_y, angular_velocity_offset_z parameters from imu_corrector.param.yaml.

Name Type Description gyro_bias_threshold double threshold of the bias of the gyroscope [rad/s] timer_callback_interval_sec double interval of the timer callback function [sec] diagnostics_updater_interval_sec double period of the diagnostics updater [sec] straight_motion_ang_vel_upper_limit double upper limit of yaw angular velocity, beyond which motion is not considered straight [rad/s]"},{"location":"sensing/autoware_pcl_extensions/","title":"autoware_pcl_extensions","text":""},{"location":"sensing/autoware_pcl_extensions/#autoware_pcl_extensions","title":"autoware_pcl_extensions","text":""},{"location":"sensing/autoware_pcl_extensions/#purpose","title":"Purpose","text":"

The autoware_pcl_extensions is a PCL extension library. The voxel grid filter in this package uses a different algorithm than the original one.

"},{"location":"sensing/autoware_pcl_extensions/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pcl_extensions/#original-algorithm-1","title":"Original Algorithm [1]","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with their centroid
"},{"location":"sensing/autoware_pcl_extensions/#extended-algorithm","title":"Extended Algorithm","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with the closest point to their centroid (see the sketch below)
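A self-contained sketch of the extended algorithm (simplified point type, not the PCL-based implementation):

```cpp
// Sketch: voxel grid filter that keeps, per voxel, the real input point
// closest to the voxel centroid (instead of the centroid itself).
#include <cmath>
#include <limits>
#include <map>
#include <tuple>
#include <vector>

struct Point { float x, y, z; };

std::vector<Point> voxelGridClosestPoint(const std::vector<Point> & cloud, float leaf)
{
  // 1. Assign every point to its voxel.
  std::map<std::tuple<int, int, int>, std::vector<Point>> voxels;
  for (const auto & p : cloud) {
    voxels[{static_cast<int>(std::floor(p.x / leaf)),
            static_cast<int>(std::floor(p.y / leaf)),
            static_cast<int>(std::floor(p.z / leaf))}].push_back(p);
  }
  std::vector<Point> output;
  for (const auto & [key, points] : voxels) {
    // 2. Compute the centroid of each voxel.
    Point c{0, 0, 0};
    for (const auto & p : points) { c.x += p.x; c.y += p.y; c.z += p.z; }
    c.x /= points.size(); c.y /= points.size(); c.z /= points.size();
    // 3. Keep the input point closest to the centroid.
    const Point * closest = nullptr;
    float best = std::numeric_limits<float>::max();
    for (const auto & p : points) {
      const float d = (p.x - c.x) * (p.x - c.x) + (p.y - c.y) * (p.y - c.y) +
                      (p.z - c.z) * (p.z - c.z);
      if (d < best) { best = d; closest = &p; }
    }
    output.push_back(*closest);
  }
  return output;
}
```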
"},{"location":"sensing/autoware_pcl_extensions/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pcl_extensions/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pcl_extensions/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pcl_extensions/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pcl_extensions/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pcl_extensions/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://pointclouds.org/documentation/tutorials/voxel_grid.html

"},{"location":"sensing/autoware_pcl_extensions/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/","title":"autoware_pointcloud_preprocessor","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#autoware_pointcloud_preprocessor","title":"autoware_pointcloud_preprocessor","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#purpose","title":"Purpose","text":"

The autoware_pointcloud_preprocessor is a package that includes the following filters:

"},{"location":"sensing/autoware_pointcloud_preprocessor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A detailed description of each filter's algorithm can be found via the following links.

Filter Name Description Detail concatenate_data subscribe multiple pointclouds and concatenate them into a pointcloud link crop_box_filter remove points within a given box link distortion_corrector compensate pointcloud distortion caused by the ego vehicle's movement during one scan link downsample_filter downsample the input pointcloud link outlier_filter remove noise points caused by hardware problems, raindrops, and small insects link passthrough_filter remove points outside of a given range in a given field (e.g. x, y, z, intensity) link pointcloud_accumulator accumulate pointclouds for a given amount of time link vector_map_filter remove points outside of lanes by using the vector map link vector_map_inside_area_filter remove points inside vector map areas of a type given by parameter link"},{"location":"sensing/autoware_pointcloud_preprocessor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/autoware_pointcloud_preprocessor/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use approximate sync option"},{"location":"sensing/autoware_pointcloud_preprocessor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

autoware::pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"sensing/autoware_pointcloud_preprocessor/#measuring-the-performance","title":"Measuring the performance","text":"

In Autoware, point cloud data from each LiDAR sensor undergoes preprocessing in the sensing pipeline before being input into the perception pipeline. The preprocessing stages are illustrated in the diagram below:

Each stage in the pipeline incurs a processing delay. Mostly, we have used ros2 topic delay /topic_name to measure the time between the message header and the current time. This approach works well for small messages. However, when dealing with large point cloud messages, this method introduces an additional delay, primarily because accessing these large point cloud messages externally impacts the pipeline's performance.

Our sensing/perception nodes are designed to run within composable node containers, leveraging intra-process communication. External subscriptions to these messages (e.g., using ros2 topic delay or rviz2) impose extra delays and can even slow down the pipeline. Therefore, such measurements will not be accurate.

To mitigate this issue, we've adopted a method where each node in the pipeline reports its pipeline latency time. This approach ensures the integrity of intra-process communication and provides a more accurate measure of delays in the pipeline.
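Conceptually, each node computes the latency as the difference between its clock and the original sensor stamp at publication time; a minimal sketch (assumes the message stamp and the clock share a time source):

```cpp
// Sketch: in-node pipeline latency, measured without external subscriptions
// so that intra-process communication is left undisturbed.
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

double pipelineLatencyMs(rclcpp::Clock & clock, const sensor_msgs::msg::PointCloud2 & msg)
{
  // Construct the stamp with the same clock type to allow subtraction.
  const rclcpp::Time stamp(msg.header.stamp, clock.get_clock_type());
  return (clock.now() - stamp).seconds() * 1000.0;
}
```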

"},{"location":"sensing/autoware_pointcloud_preprocessor/#benchmarking-the-pipeline","title":"Benchmarking The Pipeline","text":"

The nodes within the pipeline report the pipeline latency time, indicating the duration from the sensor driver's pointcloud output to the node's output. This data is crucial for assessing the pipeline's health and efficiency.

When running Autoware, you can monitor the pipeline latency times for each node in the pipeline by subscribing to the following ROS 2 topics:

These topics provide the pipeline latency times, giving insights into the delays at various stages of the pipeline from the sensor output of LidarX to each subsequent node.

"},{"location":"sensing/autoware_pointcloud_preprocessor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"sensing/autoware_pointcloud_preprocessor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/","title":"blockage_diag","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#blockage_diag","title":"blockage_diag","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#purpose","title":"Purpose","text":"

To ensure the performance of the LiDAR and safety for autonomous driving, an abnormal-condition diagnostics feature is needed. LiDAR blockage is an abnormal condition of the LiDAR in which unwanted objects stick to the sensor and block the light pulses and return signals. This node's purpose is to detect the existence of blockage on the LiDAR as well as its size and location.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#inner-workings-algorithmsblockage-detection","title":"Inner-workings / Algorithms(Blockage detection)","text":"

This node uses the no-return region and its location to decide whether there is a blockage.

The logic is shown below.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#inner-workings-algorithmsdust-detection","title":"Inner-workings /Algorithms(Dust detection)","text":"

For dust detection, morphological processing is used. If the LiDAR's rays cannot return due to dust in the region where the point cloud is expected to return from the ground, black pixels appear as noise in the depth image. The noise area is found by eroding and dilating these black pixels.
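A sketch of such morphological processing with OpenCV; the node's actual sequence of operations may differ:

```cpp
// Sketch: isolate small dust-like noise blobs via morphological opening.
#include <opencv2/imgproc.hpp>

cv::Mat isolateDustNoise(const cv::Mat & no_return_mask, int kernel_size)
{
  const auto kernel = cv::getStructuringElement(
    cv::MORPH_RECT, cv::Size(2 * kernel_size + 1, 2 * kernel_size + 1));
  cv::Mat opened, dust;
  // Erosion removes small noise blobs; dilation restores the larger regions.
  cv::erode(no_return_mask, opened, kernel);
  cv::dilate(opened, opened, kernel);
  // The difference keeps only the small blobs, i.e. the dust-like noise.
  cv::subtract(no_return_mask, opened, dust);
  return dust;
}
```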

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#input","title":"Input","text":"Name Type Description ~/input/pointcloud_raw_ex sensor_msgs::msg::PointCloud2 The raw point cloud data is used to detect the no-return region"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#output","title":"Output","text":"Name Type Description ~/output/blockage_diag/debug/blockage_mask_image sensor_msgs::msg::Image The mask image of detected blockage ~/output/blockage_diag/debug/ground_blockage_ratio autoware_internal_debug_msgs::msg::Float32Stamped The area ratio of blockage region in ground region ~/output/blockage_diag/debug/sky_blockage_ratio autoware_internal_debug_msgs::msg::Float32Stamped The area ratio of blockage region in sky region ~/output/blockage_diag/debug/lidar_depth_map sensor_msgs::msg::Image The depth map image of input point cloud ~/output/blockage_diag/debug/single_frame_dust_mask sensor_msgs::msg::Image The mask image of detected dusty area in latest single frame ~/output/blockage_diag/debug/multi_frame_dust_mask sensor_msgs::msg::Image The mask image of continuous detected dusty area ~/output/blockage_diag/debug/blockage_dust_merged_image sensor_msgs::msg::Image The merged image of blockage detection(red) and multi frame dusty area detection(yellow) results ~/output/blockage_diag/debug/ground_dust_ratio autoware_internal_debug_msgs::msg::Float32Stamped The ratio of dusty area divided by area where ray usually returns from the ground."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#parameters","title":"Parameters","text":"Name Type Description Default Range blockage_ratio_threshold float The threshold of blockage area ratio. If the blockage value exceeds this threshold, the diagnostic state will be set to ERROR. 0.1 \u22650 blockage_count_threshold float The threshold of number continuous blockage frames 50 \u22650 blockage_buffering_frames integer The number of buffering about blockage detection [range:1-200] 2 \u22651\u2264200 blockage_buffering_interval integer The interval of buffering about blockage detection 1 \u22650 enable_dust_diag boolean enable dust diagnostic false N/A publish_debug_image boolean publish debug image false N/A dust_ratio_threshold float The threshold of dusty area ratio 0.2 \u22650 dust_count_threshold integer The threshold of number continuous frames include dusty area 10 \u22650 dust_kernel_size integer The kernel size of morphology processing in dusty area detection 2 \u22650 dust_buffering_frames integer The number of buffering about dusty area detection [range:1-200] 10 \u22651\u2264200 dust_buffering_interval integer The interval of buffering about dusty area detection 1 \u22650 max_distance_range float Maximum view range for the LiDAR 200.0 \u22650 horizontal_resolution float The horizontal resolution of depth map image [deg/pixel] 0.4 \u22650 blockage_kernel integer The kernel size of morphology processing the detected blockage area 10 \u22650 angle_range array The effective range of LiDAR [0.0, 360.0] N/A vertical_bins integer The LiDAR channel 40 \u22650 is_channel_order_top2down boolean If the lidar channels are indexed from top to down true N/A horizontal_ring_id integer The id of horizontal ring of the LiDAR 18 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Only the Hesai Pandar40P and Hesai PandarQT were tested. For a new LiDAR, it is necessary to manually check the order of the channel ids in the vertical distribution and modify the code.
  2. Regarding dusty area detection, false positives occur when there are water puddles on the road surface due to rain, etc. Also, the area of rays toward the sky is currently undetectable.
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/blockage-diag/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/","title":"concatenate_data","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#concatenate_data","title":"concatenate_data","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#purpose","title":"Purpose","text":"

Many self-driving cars combine multiple LiDARs to expand the sensing range. Therefore, a function to combine multiple point clouds is required.

To combine multiple sensor data with similar timestamps, message_filters is often used in ROS-based systems, but this requires the assumption that all inputs can be received. Since safety must be strongly considered in autonomous driving, the point cloud concatenation node must be designed so that even if one sensor fails, the remaining sensor information can still be output.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below shows the reception time of each sensor's data and how the data are combined in this case.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#input","title":"Input","text":"Name Type Description ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped The vehicle odometry is used to interpolate the timestamp of each sensor data"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::Pointcloud2 concatenated point clouds"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#parameters","title":"Parameters","text":"Name Type Default Value Description input/points vector of string [] input topic names that type must be sensor_msgs::msg::Pointcloud2 input_frame string \"\" input frame id output_frame string \"\" output frame id has_static_tf_only bool false flag to listen TF only once max_queue_size int 5 max queue size of input/output topics"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timeout_sec double 0.1 tolerance of time to publish next pointcloud [s]When this time limit is exceeded, the filter concatenates and publishes pointcloud, even if not all the point clouds are subscribed. input_offset vector of double [] This parameter can control waiting time for each input sensor pointcloud [s]. You must to set the same length of offsets with input pointclouds numbers. For its tuning, please see actual usage page. publish_synchronized_pointcloud bool false If true, publish the time synchronized pointclouds. All input pointclouds are transformed and then re-published as message named <original_msg_name>_synchronized. input_twist_topic_type std::string twist Topic type for twist. Currently support twist or odom."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#actual-usage","title":"Actual Usage","text":"

For an example of the actual usage of this node, please refer to the preprocessor.launch.py file.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#how-to-tuning-timeout_sec-and-input_offset","title":"How to tuning timeout_sec and input_offset","text":"

The values of timeout_sec and input_offset are used in the timer_callback to control the concatenation timing.

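As a rough illustration of that timing logic, the sketch below computes the effective per-topic waiting time. The parameter names follow the tuning table below, while the code itself is a hypothetical sketch, not the node's implementation.

# Hypothetical sketch: effective waiting time per input topic.\n# Each topic is waited on for at most (timeout_sec - input_offset[i]).\ntimeout_sec = 0.1\ninput_offset = [0.05, 0.02, 0.0]  # one entry per input pointcloud topic\n\nfor i, offset in enumerate(input_offset):\n    waiting_time = timeout_sec - offset\n    print(f\"topic {i}: concatenation is forced after {waiting_time:.3f} s\")\n
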
Name Description How to tune timeout_sec timeout [s] for the default timer To avoid mis-concatenation, this value must at least be shorter than the sampling time. input_offset timeout extension when a pointcloud arrives in the buffer. The amount of waiting time will be timeout_sec - input_offset. Therefore, you need to set a larger value for the last-arriving pointcloud and a smaller value for earlier-arriving ones."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#node-separation-options-for-future","title":"Node separation options for future","text":"

Since the pointcloud concatenation has two processes, \"time synchronization\" and \"pointcloud concatenation\", it is possible to separate these processes.

In the future, the nodes will be completely separated in order to achieve a loosely coupled design, but currently both nodes can be selected for backward compatibility (see this PR).

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/concatenate-data/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The node assumes that the vehicle odometry is available, that the sensor data and odometry timestamps are correct, and that the TF from base_link to sensor_frame is also correct.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/","title":"crop_box_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#crop_box_filter","title":"crop_box_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#purpose","title":"Purpose","text":"

The crop_box_filter is a node that removes points within a given box region. This filter is used to remove the points that hit the vehicle itself.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

pcl::CropBox is used, which filters all points inside a given box.

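For intuition, here is a minimal numpy sketch of the crop-box logic; the actual node uses pcl::CropBox in C++, and the bounds follow the core parameters below.

import numpy as np\n\ndef crop_box(points, min_xyz=(-1.0, -1.0, -1.0), max_xyz=(1.0, 1.0, 1.0), negative=False):\n    # points: (N, 3) numpy array; the bounds follow the min_*/max_* core parameters\n    inside = np.all((points >= min_xyz) & (points <= max_xyz), axis=1)\n    # negative=True removes the points inside the box (e.g. the vehicle body)\n    return points[~inside] if negative else points[inside]\n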
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range min_x float minimum x-coordinate value for crop range in meters -1.0 N/A min_y float minimum y-coordinate value for crop range in meters -1.0 N/A min_z float minimum z-coordinate value for crop range in meters -1.0 N/A max_x float maximum x-coordinate value for crop range in meters 1.0 N/A max_y float maximum y-coordinate value for crop range in meters 1.0 N/A max_z float maximum z-coordinate value for crop range in meters 1.0 N/A negative boolean if true, remove points within the box from the pointcloud; otherwise, remove points outside the box. false N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/crop-box-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/","title":"distortion_corrector","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#distortion_corrector","title":"distortion_corrector","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#purpose","title":"Purpose","text":"

The distortion_corrector is a node that compensates for pointcloud distortion caused by the ego-vehicle's movement during one scan.

Since the LiDAR sensor scans by rotating an internal laser, the resulting point cloud will be distorted if the ego-vehicle moves during a single scan (as shown by the figure below). The node corrects this by interpolating sensor data using the odometry of the ego-vehicle.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The node uses twist information (linear and angular velocity) from the ~/input/twist topic to correct each point in the point cloud. If the user sets use_imu to true, the node will replace the twist's angular velocity with the angular velocity from the IMU.

The node supports two different modes of distortion correction: 2D distortion correction and 3D distortion correction. The main difference is that the 2D distortion corrector only utilizes the x-axis of linear velocity and the z-axis of angular velocity to correct the point positions. On the other hand, the 3D distortion corrector utilizes all linear and angular velocity components to correct the point positions.

Please note that the processing time difference between the two distortion methods is significant: the 3D corrector takes 50% more time than the 2D corrector. Therefore, it is recommended that users set use_3d_distortion_correction to false in general cases. However, in scenarios such as a vehicle going over speed bumps, using the 3D corrector can be beneficial.

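As a simplified sketch of the 2D mode (illustrative only, not the node's actual implementation), each point is moved into the scan-start frame using the yaw and translation that the ego vehicle accumulated up to that point's timestamp:

import numpy as np\n\ndef undistort_2d(points, point_times, vx, wz):\n    # points: (N, 2) xy coordinates; point_times: per-point time since scan start [s]\n    # 2D mode uses only the x-axis linear velocity vx and the z-axis angular velocity wz\n    out = np.empty_like(points)\n    for i, (p, dt) in enumerate(zip(points, point_times)):\n        theta = wz * dt                    # yaw accumulated by the ego vehicle\n        c, s = np.cos(theta), np.sin(theta)\n        tx, ty = vx * dt * c, vx * dt * s  # approximate ego translation\n        # express the point in the frame of the scan start time\n        out[i] = (tx + c * p[0] - s * p[1], ty + s * p[0] + c * p[1])\n    return out\n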
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 Topic of the distorted pointcloud. ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped Topic of the twist information. ~/input/imu sensor_msgs::msg::Imu Topic of the IMU data."},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#output","title":"Output","text":"Name Type Description ~/output/pointcloud sensor_msgs::msg::PointCloud2 Topic of the undistorted pointcloud"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range base_frame string The undistortion algorithm is based on a base frame, which must be the same as the twist frame. base_link N/A use_imu boolean Use IMU angular velocity, otherwise, use twist angular velocity. true N/A use_3d_distortion_correction boolean Use 3d distortion correction algorithm, otherwise, use 2d distortion correction algorithm. false N/A update_azimuth_and_distance boolean Flag to update the azimuth and distance values of each point after undistortion. If set to false, the azimuth and distance values will remain unchanged after undistortion, resulting in a mismatch with the updated x, y, z coordinates. false N/A has_static_tf_only boolean Flag to indicate if only static TF is used. false N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#launch","title":"Launch","text":"
ros2 launch autoware_pointcloud_preprocessor distortion_corrector.launch.xml\n
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":" Velodyne azimuth coordinate Hesai azimuth coordinate"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/distortion-corrector/#referencesexternal-links","title":"References/External links","text":"

https://docs.opencv.org/3.4/db/de0/group__core__utils.html#ga7b356498dd314380a0c386b059852270

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/","title":"downsample_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#downsample_filter","title":"downsample_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#purpose","title":"Purpose","text":"

The downsample_filter is a node that reduces the number of points.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter","title":"Approximate Downsample Filter","text":"

pcl::VoxelGridNearestCentroid is used. The algorithm is described in autoware_pcl_extensions.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter","title":"Random Downsample Filter","text":"

pcl::RandomSample is used, in which points are sampled with uniform probability.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter","title":"Voxel Grid Downsample Filter","text":"

pcl::VoxelGrid is used, in which the points in each voxel are approximated by their centroid.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#pickup-based-voxel-grid-downsample-filter","title":"Pickup Based Voxel Grid Downsample Filter","text":"

This algorithm samples a single actual point existing within the voxel, not the centroid. The computation cost is low compared to the Centroid Based Voxel Grid Filter.

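The difference between the centroid-based and pickup-based variants can be sketched with a plain voxel hash (the real filters use pcl::VoxelGrid and a robin-hood hash map; this Python version is for illustration only):

import numpy as np\nfrom collections import defaultdict\n\ndef voxel_downsample(points, voxel_size=0.3, mode=\"centroid\"):\n    buckets = defaultdict(list)\n    for p in points:\n        buckets[tuple(np.floor(p / voxel_size).astype(int))].append(p)\n    if mode == \"centroid\":  # Voxel Grid Downsample Filter: average the points per voxel\n        return np.array([np.mean(ps, axis=0) for ps in buckets.values()])\n    # pickup: return one real point per voxel, which is cheaper to compute\n    return np.array([ps[0] for ps in buckets.values()])\n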
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

These implementations inherit the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#note-parameters","title":"Note Parameters","text":"

These implementations inherit the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#core-parameters","title":"Core Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter_1","title":"Approximate Downsample Filter","text":"Name Type Description Default Range voxel_size_x float voxel size along the x-axis [m] 0.3 \u22650.0 voxel_size_y float voxel size along the y-axis [m] 0.3 \u22650.0 voxel_size_z float voxel size along the z-axis [m] 0.1 \u22650.0"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter_1","title":"Random Downsample Filter","text":"Name Type Description Default Range sample_num integer number of indices to be sampled 1500 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter_1","title":"Voxel Grid Downsample Filter","text":"Name Type Description Default Range voxel_size_x float the voxel size along x-axis [m] 0.3 \u22650 voxel_size_y float the voxel size along y-axis [m] 0.3 \u22650 voxel_size_z float the voxel size along z-axis [m] 0.1 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#pickup-based-voxel-grid-downsample-filter_1","title":"Pickup Based Voxel Grid Downsample Filter","text":"Name Type Description Default Range voxel_size_x float voxel size along the x-axis [m] 1 \u22650 voxel_size_y float voxel size along the y-axis [m] 1 \u22650 voxel_size_z float voxel size along the z-axis [m] 1 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This implementation uses the robin_hood.h hashing library by martinus, available under the MIT License at martinus/robin-hood-hashing on GitHub. Special thanks to martinus for this contribution.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/downsample-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/","title":"dual_return_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#dual_return_outlier_filter","title":"dual_return_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as fog and rain and publish visibility as a diagnostic topic.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node can remove rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. The node is named dual_return_outlier_filter because it removes noise using data that contains two types of return values separated by the attenuation factor, as shown in the figure below.

Therefore, in order to use this node, the sensor driver must publish custom data including return_type. Please refer to the PointXYZIRCAEDT data structure.

Another feature of this node is that it publishes visibility as a diagnostic topic. With this function, for example in heavy rain, the sensing module can report that its processing performance has reached its limit, which helps ensure the safety of the vehicle.

In some complicated road scenes, normal objects such as plants, leaves, and plastic nets also reflect the light in two stages, so the visibility can drop even in fine weather conditions. To deal with this, optional region of interest (ROI) settings are provided.

  1. Fixed_xyz_ROI mode: Visibility estimation based on the weak points in a fixed cuboid region surrounding the ego vehicle, defined by x, y, z in the base_link frame.
  2. Fixed_azimuth_ROI mode: Visibility estimation based on the weak points in a fixed region surrounding the ego vehicle, defined by azimuth and distance in the LiDAR frame.

When either of the two fixed ROI modes is selected, the range of weak points shrinks and the sensitivity of the visibility estimate decreases, so a trade-off between weak_first_local_noise_threshold and visibility_threshold is needed.

The figure below describes how the node works.

The picture below shows the ROI options.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#output","title":"Output","text":"Name Type Description /dual_return_outlier_filter/frequency_image sensor_msgs::msg::Image The histogram image that represent visibility /dual_return_outlier_filter/visibility autoware_internal_debug_msgs::msg::Float32Stamped A representation of visibility with a value from 0 to 1 /dual_return_outlier_filter/pointcloud_noise sensor_msgs::msg::Pointcloud2 The pointcloud removed as noise"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range x_max float Maximum of x in meters for Fixed_xyz_ROI mode 18.0 N/A x_min float Minimum of x in meters for Fixed_xyz_ROI mode -12.0 N/A y_max float Maximum of y in meters for Fixed_xyz_ROI mode 2.0 N/A y_min float Minimum of y in meters for Fixed_xyz_ROI mode -2.0 N/A z_max float Maximum of z in meters for Fixed_xyz_ROI mode 10.0 N/A z_min float Minimum of z in meters for Fixed_xyz_ROI mode 0.0 N/A min_azimuth_deg float The left limit of azimuth in degrees for Fixed_azimuth_ROI mode 135.0 \u22650\u2264360 max_azimuth_deg float The right limit of azimuth in degrees for Fixed_azimuth_ROI mode 225.0 \u22650\u2264360 max_distance float The limit distance in meters for Fixed_azimuth_ROI mode 12.0 \u22650.0 vertical_bins integer The number of vertical bins for the visibility histogram 128 \u22651 max_azimuth_diff float The azimuth difference threshold in degrees for ring_outlier_filter 50.0 \u22650.0 weak_first_distance_ratio float The maximum allowed ratio between consecutive weak point distances 1.004 \u22650.0 general_distance_ratio float The maximum allowed ratio between consecutive normal point distances 1.03 \u22650.0 weak_first_local_noise_threshold integer If the number of outliers among weak points is less than the weak_first_local_noise_threshold in the (max_azimuth - min_azimuth) / horizontal_bins interval, all points within the interval will not be filtered out. 2 \u22650 roi_mode string roi mode Fixed_xyz_ROI ['Fixed_xyz_ROI', 'No_ROI', 'Fixed_azimuth_ROI'] visibility_error_threshold float When the proportion of white pixels in the binary histogram falls below this parameter the diagnostic status becomes ERR 0.5 \u22650.0\u22641.0 visibility_warn_threshold float When the proportion of white pixels in the binary histogram falls below this parameter the diagnostic status becomes WARN 0.7 \u22650.0\u22641.0"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This node is not yet recommended for use, as it is still under development. Input data must be of the PointXYZIRCAEDT type and include return_type.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/","title":"outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#outlier_filter","title":"outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#purpose","title":"Purpose","text":"

The outlier_filter is a package for filtering outlier points.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Filter Name Description Detail radius search 2d outlier filter A method of removing point cloud noise based on the number of points existing within a certain radius link ring outlier filter A method of operating scan in chronological order and removing noise based on the rate of change in the distance between points link voxel grid outlier filter A method of removing point cloud noise based on the number of points existing within a voxel link dual return outlier filter (under development) A method of removing rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. link"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/","title":"passthrough_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#passthrough_filter","title":"passthrough_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#purpose","title":"Purpose","text":"

The passthrough_filter is a node that removes points outside of a given range in a given field (e.g. x, y, z, intensity, ring, etc.).

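Conceptually, the node keeps only the points whose selected field lies within [filter_limit_min, filter_limit_max]. A minimal sketch, with points represented as dicts of fields (the helper itself is hypothetical):

def passthrough(points, field=\"channel\", lo=0, hi=127, negative=False):\n    # points: one dict per point, holding the pointcloud fields (x, y, z, ring, ...)\n    inside = [p for p in points if lo <= p[field] <= hi]\n    outside = [p for p in points if not (lo <= p[field] <= hi)]\n    # negative=True returns the points outside the range instead\n    return outside if negative else inside\n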
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range filter_limit_min integer minimum allowed field value 0 \u22650 filter_limit_max integer maximum allowed field value 127 \u22650 filter_field_name string filtering field name channel N/A keep_organized boolean flag to keep indices structure false N/A filter_limit_negative boolean flag to return whether the data is inside limit or not false N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/passthrough-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/","title":"pointcloud_accumulator","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#pointcloud_accumulator","title":"pointcloud_accumulator","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#purpose","title":"Purpose","text":"

The pointcloud_accumulator is a node that accumulates pointclouds for a given amount of time.

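A minimal sketch of the idea, assuming a simple time-stamped ring buffer (the class is hypothetical; the parameter names follow the core parameters below):

from collections import deque\n\nclass PointcloudAccumulator:\n    def __init__(self, accumulation_time_sec=2.0, pointcloud_buffer_size=50):\n        self.window = accumulation_time_sec\n        self.buffer = deque(maxlen=pointcloud_buffer_size)  # bounded ring buffer\n\n    def add(self, stamp, cloud):\n        self.buffer.append((stamp, cloud))\n\n    def get(self, now):\n        # return every cloud received within the accumulation window\n        return [cloud for stamp, cloud in self.buffer if now - stamp <= self.window]\n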
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range accumulation_time_sec float accumulation period [s] 2 \u22650 pointcloud_buffer_size integer buffer size 50 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#radius_search_2d_outlier_filter","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

RadiusOutlierRemoval filter which removes all indices in its input cloud that don\u2019t have at least some number of neighbors within a certain range.

The description above is quoted from [1]. pcl::search::KdTree [2] is used to implement this package.

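A brute-force sketch of that idea in 2D (the node itself uses pcl::search::KdTree for the neighbor search; this version is for illustration only):

import numpy as np\n\ndef radius_search_2d_outlier_filter(points, search_radius=0.2, min_neighbors=5):\n    xy = points[:, :2]  # 2D: ignore z, i.e. count neighbors within a vertical cylinder\n    keep = []\n    for i, p in enumerate(xy):\n        dists = np.linalg.norm(xy - p, axis=1)\n        neighbors = np.count_nonzero(dists <= search_radius) - 1  # exclude the point itself\n        if neighbors >= min_neighbors:\n            keep.append(i)\n    return points[keep]\n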
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range min_neighbors integer If points in the circle centered on reference point is less than min_neighbors, a reference point is judged as outlier 5 \u22650 search_radius float Searching number of points included in search_radius 0.2 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since the method counts the number of points contained in a cylinder whose axis is aligned with the direction of gravity, it is a prerequisite that the ground has already been removed.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#referencesexternal-links","title":"References/External links","text":"

[1] https://pcl.readthedocs.io/projects/tutorials/en/latest/remove_outliers.html

[2] https://pcl.readthedocs.io/projects/tutorials/en/latest/kdtree_search.html#kdtree-search

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/","title":"ring_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#ring_outlier_filter","title":"ring_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A method that processes the scan in chronological order and removes noise based on the rate of change in the distance between points.

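A hedged sketch of that idea for a single ring (the actual C++ implementation walks the packed point layout directly; the parameter names follow the core parameters below):

def ring_noise_indices(ranges, distance_ratio=1.03, num_points_threshold=4):\n    # ranges: point distances along one ring, in chronological (scan) order\n    noise, cluster = [], [0]\n    for i in range(1, len(ranges)):\n        prev, curr = ranges[i - 1], ranges[i]\n        if max(prev, curr) / min(prev, curr) <= distance_ratio:\n            cluster.append(i)          # distance changes smoothly: same object\n        else:\n            if len(cluster) < num_points_threshold:\n                noise.extend(cluster)  # tiny isolated cluster: treat as noise\n            cluster = [i]\n    if len(cluster) < num_points_threshold:\n        noise.extend(cluster)\n    return noise\n
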
Another feature of this node is that it calculates a visibility score based on the outlier pointcloud and publishes the score as a topic.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#visibility-score-calculation-algorithm","title":"visibility score calculation algorithm","text":"

The pointcloud is divided into vertical bins (rings) and horizontal bins (azimuth divisions). The algorithm proceeds as follows:

  1. Split the input point cloud into separate rings based on the ring value of each point.
  2. For each ring, iterate through the points and calculate the frequency of points within each horizontal bin by incrementing a counter for the corresponding bin based on the point's azimuth value. The frequency values are stored in a frequency image matrix, where each cell represents a specific ring and azimuth bin.
  3. Apply a noise threshold to the frequency image to create a binary image: points with frequency values above the noise threshold are considered valid, while points below the threshold are considered noise.
  4. Calculate the visibility score by counting the number of non-zero pixels in the frequency image and dividing it by the total number of pixels (vertical bins multiplied by horizontal bins), as sketched below.

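Following the steps above, the score computation can be sketched as a simplified numpy version of the histogram logic:

import numpy as np\n\ndef visibility_score(rings, azimuths, vertical_bins=128, horizontal_bins=36, noise_threshold=2):\n    # rings, azimuths: one entry per outlier point; azimuth in [0, 360) degrees\n    freq = np.zeros((vertical_bins, horizontal_bins), dtype=int)\n    for ring, azimuth in zip(rings, azimuths):\n        freq[ring, int(azimuth / 360.0 * horizontal_bins)] += 1\n    freq[freq <= noise_threshold] = 0          # apply the noise threshold (binary image step)\n    return np.count_nonzero(freq) / freq.size  # non-zero pixels over total pixels\n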
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range distance_ratio float distance_ratio 1.03 \u22650.0 object_length_threshold float object_length_threshold 0.1 \u22650.0 num_points_threshold integer num_points_threshold 4 \u22650 max_rings_num integer max_rings_num 128 \u22651 max_points_num_per_ring integer Set this value large enough such that HFoV / resolution < max_points_num_per_ring 4000 \u22650 publish_outlier_pointcloud boolean Flag to publish outlier pointcloud and visibility score. Due to performance concerns, please set to false during experiments. false N/A min_azimuth_deg float The left limit of azimuth for visibility score calculation 0.0 \u22650.0 max_azimuth_deg float The right limit of azimuth for visibility score calculation 360.0 \u22650.0\u2264360.0 max_distance float The limit distance for visibility score calculation 12.0 \u22650.0 vertical_bins integer The number of vertical bin for visibility histogram 128 \u22651 horizontal_bins integer The number of horizontal bin for visibility histogram 36 \u22651 noise_threshold integer The threshold value for distinguishing noise from valid points in the frequency image 2 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This node requires that the points of the input point cloud are in chronological order and that individual points follow the memory layout specified by PointXYZIRCAEDT.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/ring-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/","title":"vector_map_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#vector_map_filter","title":"vector_map_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#purpose","title":"Purpose","text":"

The vector_map_filter is a node that removes points outside of the lanes by using the vector map.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range voxel_size_x float voxel size along x-axis [m] 0.04 \u22650 voxel_size_y float voxel size along y-axis [m] 0.04 \u22650"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#vector_map_inside_area_filter","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#purpose","title":"Purpose","text":"

The vector_map_inside_area_filter is a node that removes points inside vector map areas whose type is given by a parameter.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class, so please also see the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 input points ~/input/vector_map autoware_map_msgs::msg::LaneletMapBin vector map used for filtering points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range polygon_type string polygon type to be filtered no_obstacle_segmentation_area N/A use_z_filter boolean use z value for filtering false N/A z_threshold float z threshold for filtering 0.0 N/A"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/vector-map-inside-area-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#voxel_grid_outlier_filter","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This filter removes point cloud noise based on the number of points existing within each voxel. The radius_search_2d_outlier_filter is more accurate, but this method has the advantage of a low calculation cost.

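A minimal sketch of the occupancy-count idea (illustrative only; the node builds on PCL voxel grids):

import numpy as np\nfrom collections import Counter\n\ndef voxel_grid_outlier_filter(points, voxel_size=(0.3, 0.3, 0.1), voxel_points_threshold=2):\n    keys = [tuple(np.floor(p / voxel_size).astype(int)) for p in points]\n    counts = Counter(keys)\n    # keep a point only if its voxel holds at least voxel_points_threshold points\n    return np.array([p for p, k in zip(points, keys) if counts[k] >= voxel_points_threshold])\n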
"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the autoware::pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range voxel_size_x float the voxel size along x-axis [m] 0.3 \u22650 voxel_size_y float the voxel size along y-axis [m] 0.3 \u22650 voxel_size_z float the voxel size along z-axis [m] 0.1 \u22650 voxel_points_threshold integer the minimum number of points in each voxel 2 \u22651"},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/autoware_pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2_node","title":"radar_scan_to_pointcloud2_node","text":""},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan"},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#output-topics","title":"Output topics","text":"Name Type Description output/amplitude_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is amplitude. output/doppler_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is doppler velocity."},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#parameters","title":"Parameters","text":"Name Type Description publish_amplitude_pointcloud bool Whether publish radar pointcloud whose intensity is amplitude. Default is true. publish_doppler_pointcloud bool Whether publish radar pointcloud whose intensity is doppler velocity. Default is false."},{"location":"sensing/autoware_radar_scan_to_pointcloud2/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_scan_to_pointcloud2 radar_scan_to_pointcloud2.launch.xml\n
"},{"location":"sensing/autoware_radar_static_pointcloud_filter/","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/autoware_radar_static_pointcloud_filter/#radar_static_pointcloud_filter","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/autoware_radar_static_pointcloud_filter/#radar_static_pointcloud_filter_node","title":"radar_static_pointcloud_filter_node","text":"

This node extracts static/dynamic radar pointclouds by using doppler velocity and ego motion. The calculation cost is O(n), where n is the number of points in the radar pointcloud.

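A hedged sketch of the idea: compare each return's doppler velocity against the doppler that a static target would produce given the ego motion (the function and field names are illustrative, not the node's API):

import numpy as np\n\ndef split_static_dynamic(returns, ego_vx, doppler_velocity_sd, n_sigma=2.0):\n    # returns: iterable of (azimuth [rad], doppler_velocity [m/s]) radar returns\n    static, dynamic = [], []\n    for azimuth, doppler in returns:\n        expected = -ego_vx * np.cos(azimuth)  # doppler a static target would show\n        bucket = static if abs(doppler - expected) <= n_sigma * doppler_velocity_sd else dynamic\n        bucket.append((azimuth, doppler))\n    return static, dynamic  # O(n) in the number of returns\n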
"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan input/odometry nav_msgs::msg::Odometry Ego vehicle odometry topic"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#output-topics","title":"Output topics","text":"Name Type Description output/static_radar_scan radar_msgs::msg::RadarScan static radar pointcloud output/dynamic_radar_scan radar_msgs::msg::RadarScan dynamic radar pointcloud"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#parameters","title":"Parameters","text":"Name Type Description doppler_velocity_sd double Standard deviation for radar doppler velocity. [m/s]"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_static_pointcloud_filter radar_static_pointcloud_filter.launch.xml\n
"},{"location":"sensing/autoware_radar_static_pointcloud_filter/#algorithm","title":"Algorithm","text":""},{"location":"sensing/autoware_radar_threshold_filter/","title":"radar_threshold_filter","text":""},{"location":"sensing/autoware_radar_threshold_filter/#radar_threshold_filter","title":"radar_threshold_filter","text":""},{"location":"sensing/autoware_radar_threshold_filter/#radar_threshold_filter_node","title":"radar_threshold_filter_node","text":"

This node removes noise from radar returns by thresholding.

The calculation cost is O(n), where n is the number of radar returns.

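Each enabled filter simply keeps the returns that fall strictly between its min and max; a minimal sketch with returns represented as dicts (names illustrative):

def radar_threshold_filter(returns, amplitude=None, rng=None, azimuth=None, z=None):\n    # each argument is either None (filter disabled) or a (min, max) pair\n    def ok(value, limits):\n        return limits is None or limits[0] < value < limits[1]\n    return [r for r in returns\n            if ok(r[\"amplitude\"], amplitude) and ok(r[\"range\"], rng)\n            and ok(r[\"azimuth\"], azimuth) and ok(r[\"z\"], z)]\n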
"},{"location":"sensing/autoware_radar_threshold_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs/msg/RadarScan.msg Radar pointcloud data"},{"location":"sensing/autoware_radar_threshold_filter/#output-topics","title":"Output topics","text":"Name Type Description output/radar radar_msgs/msg/RadarScan.msg Filtered radar pointcloud"},{"location":"sensing/autoware_radar_threshold_filter/#parameters","title":"Parameters","text":" Name Type Description is_amplitude_filter bool if this parameter is true, apply amplitude filter (publish amplitude_min < amplitude < amplitude_max) amplitude_min double [dBm^2] amplitude_max double [dBm^2] is_range_filter bool if this parameter is true, apply range filter (publish range_min < range < range_max) range_min double [m] range_max double [m] is_azimuth_filter bool if this parameter is true, apply angle filter (publish azimuth_min < range < azimuth_max) azimuth_min double [rad] azimuth_max double [rad] is_z_filter bool if this parameter is true, apply z position filter (publish z_min < z < z_max) z_min double [m] z_max double [m]"},{"location":"sensing/autoware_radar_threshold_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch autoware_radar_threshold_filter radar_threshold_filter.launch.xml\n
"},{"location":"sensing/autoware_radar_tracks_noise_filter/","title":"autoware_radar_tracks_noise_filter","text":""},{"location":"sensing/autoware_radar_tracks_noise_filter/#autoware_radar_tracks_noise_filter","title":"autoware_radar_tracks_noise_filter","text":"

This package contains a radar object filter module for radar_msgs/msg/RadarTrack; it can filter noise objects in RadarTracks.

"},{"location":"sensing/autoware_radar_tracks_noise_filter/#algorithm","title":"Algorithm","text":"

The core algorithm of this package is the RadarTrackCrossingNoiseFilterNode::isNoise() function. See the function and its parameters for details.

Radar can detect x-axis velocity as doppler velocity, but cannot detect y-axis velocity. Some radars can estimate y-axis velocity inside the device, but it sometimes lacks precision. In the y-axis threshold filter, if the y-axis velocity of a RadarTrack is greater than velocity_y_threshold, the track is treated as a noise object.

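The y-axis check itself reduces to a single comparison, sketched here (the real isNoise() is a C++ member function; this only mirrors the described logic):

def is_noise(track_velocity_y, velocity_y_threshold=7.0):\n    # a lateral velocity larger than the threshold is considered unreliable\n    return abs(track_velocity_y) > velocity_y_threshold\n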
"},{"location":"sensing/autoware_radar_tracks_noise_filter/#input","title":"Input","text":"Name Type Description ~/input/tracks radar_msgs/msg/RadarTracks.msg 3D detected tracks."},{"location":"sensing/autoware_radar_tracks_noise_filter/#output","title":"Output","text":"Name Type Description ~/output/noise_tracks radar_msgs/msg/RadarTracks.msg Noise objects ~/output/filtered_tracks radar_msgs/msg/RadarTracks.msg Filtered objects"},{"location":"sensing/autoware_radar_tracks_noise_filter/#parameters","title":"Parameters","text":"Name Type Description Default value velocity_y_threshold double Y-axis velocity threshold [m/s]. If y-axis velocity of RadarTrack is more than velocity_y_threshold, it treats as noise objects. 7.0"},{"location":"sensing/autoware_vehicle_velocity_converter/","title":"autoware_vehicle_velocity_converter","text":""},{"location":"sensing/autoware_vehicle_velocity_converter/#autoware_vehicle_velocity_converter","title":"autoware_vehicle_velocity_converter","text":""},{"location":"sensing/autoware_vehicle_velocity_converter/#purpose","title":"Purpose","text":"

This package converts the autoware_vehicle_msgs::msg::VelocityReport message to geometry_msgs::msg::TwistWithCovarianceStamped for the gyro odometer node.

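A hedged sketch of the conversion, assuming the VelocityReport fields longitudinal_velocity and heading_rate (the dict stands in for the actual ROS message):

def to_twist_with_covariance(longitudinal_velocity, heading_rate,\n                             speed_scale_factor=1.0,\n                             velocity_stddev_xx=0.1, angular_velocity_stddev_zz=0.1):\n    twist = {\n        \"linear_x\": longitudinal_velocity * speed_scale_factor,\n        \"angular_z\": heading_rate,\n    }\n    covariance = [0.0] * 36\n    covariance[0] = velocity_stddev_xx ** 2           # variance of vx\n    covariance[35] = angular_velocity_stddev_zz ** 2  # variance of the yaw rate\n    return twist, covariance\n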
"},{"location":"sensing/autoware_vehicle_velocity_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/autoware_vehicle_velocity_converter/#input","title":"Input","text":"Name Type Description velocity_status autoware_vehicle_msgs::msg::VehicleReport vehicle velocity"},{"location":"sensing/autoware_vehicle_velocity_converter/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance converted from VehicleReport"},{"location":"sensing/autoware_vehicle_velocity_converter/#parameters","title":"Parameters","text":"Name Type Description speed_scale_factor double speed scale factor (ideal value is 1.0) frame_id string frame id for output message velocity_stddev_xx double standard deviation for vx angular_velocity_stddev_zz double standard deviation for yaw rate"},{"location":"sensing/livox/autoware_livox_tag_filter/","title":"livox_tag_filter","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#livox_tag_filter","title":"livox_tag_filter","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#purpose","title":"Purpose","text":"

The livox_tag_filter is a node that removes noise from the pointcloud by using the following tags:

"},{"location":"sensing/livox/autoware_livox_tag_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/livox/autoware_livox_tag_filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/livox/autoware_livox_tag_filter/#parameters","title":"Parameters","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#node-parameters","title":"Node Parameters","text":"Name Type Description ignore_tags vector ignored tags (See the following table)"},{"location":"sensing/livox/autoware_livox_tag_filter/#tag-parameters","title":"Tag Parameters","text":"Bit Description Options 0~1 Point property based on spatial position 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Low confidence level of the noise 2~3 Point property based on intensity 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Reserved 4~5 Return number 00: return 0 01: return 1 10: return 2 11: return 3 6~7 Reserved

You can download a more detailed description of the Livox tags from external link [1].

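A minimal sketch of the tag-based filtering, with points represented as dicts carrying an 8-bit tag field (the list of tags to drop corresponds to the ignore_tags parameter):

def livox_tag_filter(points, ignore_tags):\n    # bits 0-1, 2-3 and 4-5 of each 8-bit tag encode the properties in the table above\n    return [p for p in points if p[\"tag\"] not in ignore_tags]\n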
"},{"location":"sensing/livox/autoware_livox_tag_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://www.livoxtech.com/downloads

"},{"location":"sensing/livox/autoware_livox_tag_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"simulator/autoware_carla_interface/","title":"autoware_carla_interface","text":""},{"location":"simulator/autoware_carla_interface/#autoware_carla_interface","title":"autoware_carla_interface","text":""},{"location":"simulator/autoware_carla_interface/#ros-2autowareuniverse-bridge-for-carla-simulator","title":"ROS 2/Autoware.universe bridge for CARLA simulator","text":"

Thanks to https://github.com/gezp for the ROS 2 Humble support for CARLA communication. This ROS package enables communication between Autoware and CARLA for autonomous driving simulation.

"},{"location":"simulator/autoware_carla_interface/#supported-environment","title":"Supported Environment","text":"ubuntu ros carla autoware 22.04 humble 0.9.15 Main"},{"location":"simulator/autoware_carla_interface/#setup","title":"Setup","text":""},{"location":"simulator/autoware_carla_interface/#install","title":"Install","text":"
  1. Download maps (y-axis inverted version) to an arbitrary location
  2. Rename the files and create the map folder (example: Town01) inside autoware_map. (point_cloud/Town01.pcd -> autoware_map/Town01/pointcloud_map.pcd, vector_maps/lanelet2/Town01.osm -> autoware_map/Town01/lanelet2_map.osm)
  3. Create map_projector_info.yaml in the folder and add projector_type: Local on the first line.
"},{"location":"simulator/autoware_carla_interface/#build","title":"Build","text":"
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
"},{"location":"simulator/autoware_carla_interface/#run","title":"Run","text":"
  1. Run CARLA, change the map, and spawn objects if needed

    cd CARLA\n./CarlaUE4.sh -prefernvidia -quality-level=Low -RenderOffScreen\n
  2. Run the ROS nodes

    ros2 launch autoware_launch e2e_simulator.launch.xml map_path:=$HOME/autoware_map/Town01 vehicle_model:=sample_vehicle sensor_model:=awsim_sensor_kit simulator_type:=carla carla_map:=Town01\n
  3. Set initial pose (Init by GNSS)

  4. Set goal position
  5. Wait for planning
  6. Engage
"},{"location":"simulator/autoware_carla_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The InitializeInterface class is key to setting up both the CARLA world and the ego vehicle. It fetches configuration parameters through the autoware_carla_interface.launch.xml.

The main simulation loop runs within the carla_ros2_interface class. This loop ticks the simulation time inside the CARLA simulator at fixed_delta_seconds intervals, where data is received and published as ROS 2 messages at the frequencies defined in self.sensor_frequencies.

Ego vehicle commands from Autoware are processed through the autoware_raw_vehicle_cmd_converter, which calibrates these commands for CARLA. The calibrated commands are then fed directly into CARLA control via CarlaDataProvider.

"},{"location":"simulator/autoware_carla_interface/#configurable-parameters-for-world-loading","title":"Configurable Parameters for World Loading","text":"

All the key parameters can be configured in autoware_carla_interface.launch.xml.

Name Type Default Value Description host string \"localhost\" Hostname for the CARLA server port int \"2000\" Port number for the CARLA server timeout int 20 Timeout for the CARLA client ego_vehicle_role_name string \"ego_vehicle\" Role name for the ego vehicle vehicle_type string \"vehicle.toyota.prius\" Blueprint ID of the vehicle to spawn. The Blueprint ID of vehicles can be found in CARLA Blueprint ID spawn_point string None Coordinates for spawning the ego vehicle (None is random). Format = [x, y, z, roll, pitch, yaw] carla_map string \"Town01\" Name of the map to load in CARLA sync_mode bool True Boolean flag to set synchronous mode in CARLA fixed_delta_seconds double 0.05 Time step for the simulation (related to client FPS) objects_definition_file string \"$(find-pkg-share autoware_carla_interface)/objects.json\" Sensor parameter file used for spawning sensors in CARLA use_traffic_manager bool True Boolean flag to set the traffic manager in CARLA max_real_delta_seconds double 0.05 Parameter to limit the simulation speed below fixed_delta_seconds config_file string \"$(find-pkg-share autoware_carla_interface)/raw_vehicle_cmd_converter.param.yaml\" Control mapping file to be used in autoware_raw_vehicle_cmd_converter. The current control mapping is calibrated for the vehicle.toyota.prius blueprint ID in CARLA; changing the vehicle type may require recalibration."},{"location":"simulator/autoware_carla_interface/#configurable-parameters-for-sensors","title":"Configurable Parameters for Sensors","text":"

The parameters below can be configured in carla_ros.py.

Name Type Default Value Description self.sensor_frequencies dict {\"top\": 11, \"left\": 11, \"right\": 11, \"camera\": 11, \"imu\": 50, \"status\": 50, \"pose\": 2} (line 67) Calculates the time interval since the last publication and checks whether this interval meets the minimum required to not exceed the desired frequency. It only affects the ROS publishing frequency, not the CARLA sensor tick. "},{"location":"simulator/autoware_carla_interface/#world-loading","title":"World Loading","text":"

The script carla_ros.py sets up the CARLA world:

  1. Client Connection:

    client = carla.Client(self.local_host, self.port)\nclient.set_timeout(self.timeout)\n
  2. Load the Map:

    The map is loaded into the CARLA world according to the carla_map parameter.

    client.load_world(self.map_name)\nself.world = client.get_world()\n
  3. Spawn Ego Vehicle:

    The vehicle is spawned according to the vehicle_type, spawn_point, and agent_role_name parameters.

    spawn_point = carla.Transform()\npoint_items = self.spawn_point.split(\",\")\nif len(point_items) == 6:\n   spawn_point.location.x = float(point_items[0])\n   spawn_point.location.y = float(point_items[1])\n   spawn_point.location.z = float(point_items[2]) + 2\n   spawn_point.rotation.roll = float(point_items[3])\n   spawn_point.rotation.pitch = float(point_items[4])\n   spawn_point.rotation.yaw = float(point_items[5])\nCarlaDataProvider.request_new_actor(self.vehicle_type, spawn_point, self.agent_role_name)\n
"},{"location":"simulator/autoware_carla_interface/#traffic-light-recognition","title":"Traffic Light Recognition","text":"

The maps provided by the Carla Simulator (Carla Lanelet2 Maps) currently lack proper traffic light components for Autoware and have different latitude and longitude coordinates compared to the pointcloud map. To enable traffic light recognition, follow the steps below to modify the maps.

"},{"location":"simulator/autoware_carla_interface/#tips","title":"Tips","text":""},{"location":"simulator/autoware_carla_interface/#known-issues-and-future-works","title":"Known Issues and Future Works","text":""},{"location":"simulator/dummy_perception_publisher/","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#dummy_perception_publisher","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#purpose","title":"Purpose","text":"

This node publishes dummy detection results using the perception message types.

"},{"location":"simulator/dummy_perception_publisher/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/dummy_perception_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/dummy_perception_publisher/#input","title":"Input","text":"Name Type Description /tf tf2_msgs/TFMessage TF (self-pose) input/object tier4_simulation_msgs::msg::DummyObject dummy detection objects"},{"location":"simulator/dummy_perception_publisher/#output","title":"Output","text":"Name Type Description output/dynamic_object tier4_perception_msgs::msg::DetectedObjectsWithFeature dummy detection objects output/points_raw sensor_msgs::msg::PointCloud2 point cloud of objects output/debug/ground_truth_objects autoware_perception_msgs::msg::TrackedObjects ground truth objects"},{"location":"simulator/dummy_perception_publisher/#parameters","title":"Parameters","text":"Name Type Default Value Explanation visible_range double 100.0 sensor visible range [m] detection_successful_rate double 0.8 sensor detection rate. (min) 0.0 - 1.0(max) enable_ray_tracing bool true if True, use ray tracking use_object_recognition bool true if True, publish objects topic use_base_link_z bool true if True, node uses z coordinate of ego base_link publish_ground_truth bool false if True, publish ground truth objects use_fixed_random_seed bool false if True, use fixed random seed random_seed int 0 random seed"},{"location":"simulator/dummy_perception_publisher/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/fault_injection/","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#fault_injection","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#purpose","title":"Purpose","text":"

This package converts pseudo system faults from PSim into diagnostics and notifies Autoware. The component diagram is as follows:

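A rough rclpy sketch of this conversion idea (the relay class and its on_fault method are hypothetical; only the /diagnostics output topic comes from this README):

import rclpy\nfrom rclpy.node import Node\nfrom diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus\n\nclass FaultRelaySketch(Node):\n    def __init__(self):\n        super().__init__('fault_relay_sketch')\n        self.pub = self.create_publisher(DiagnosticArray, '/diagnostics', 10)\n\n    def on_fault(self, name: str, level: int) -> None:\n        # Publish one dummy diagnostic per injected fault\n        status = DiagnosticStatus(name=name, level=bytes([level]))\n        msg = DiagnosticArray(status=[status])\n        msg.header.stamp = self.get_clock().now().to_msg()\n        self.pub.publish(msg)\n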
"},{"location":"simulator/fault_injection/#test","title":"Test","text":"
source install/setup.bash\ncd fault_injection\nlaunch_test test/test_fault_injection_node.test.py\n
"},{"location":"simulator/fault_injection/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/fault_injection/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/fault_injection/#input","title":"Input","text":"Name Type Description ~/input/simulation_events tier4_simulation_msgs::msg::SimulationEvents simulation events"},{"location":"simulator/fault_injection/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs::msg::DiagnosticArray Dummy diagnostics"},{"location":"simulator/fault_injection/#parameters","title":"Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/learning_based_vehicle_model/","title":"Learned Model","text":""},{"location":"simulator/learning_based_vehicle_model/#learned-model","title":"Learned Model","text":"

This is the design document for the Python learned model used in the simple_planning_simulator package.

"},{"location":"simulator/learning_based_vehicle_model/#purpose-use-cases","title":"Purpose / Use cases","text":"

This library creates an interface between models in Python and PSIM (C++). It is used to quickly deploy learned Python models in PSIM without a need for complex C++ implementation.

"},{"location":"simulator/learning_based_vehicle_model/#design","title":"Design","text":"

The idea behind this package is that the model we want to use for simulation consists of multiple sub-models (e.g., steering model, drive model, vehicle kinematics, etc.). These sub-models are implemented in Python and can be trainable. Each sub-model has string names for all of its inputs/outputs, which are used to create model interconnections automatically (see image below). This allows us to easily switch sub-models for better customization of the simulator.

"},{"location":"simulator/learning_based_vehicle_model/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To use this package, python3 and pybind11 need to be installed. The only assumption on Python sub-models is their interface.

class PythonSubmodelInterface:\n\n    def forward(self, action, state):  # Required\n        \"\"\"\n        Calculate the forward pass through the model and return next_state.\n        \"\"\"\n        return list()\n\n    def get_state_names(self):  # Required\n        \"\"\"\n        Return a list of string names of the model states (outputs).\n        \"\"\"\n        return list()\n\n    def get_action_names(self):  # Required\n        \"\"\"\n        Return a list of string names of the model actions (inputs).\n        \"\"\"\n        return list()\n\n    def reset(self):  # Required\n        \"\"\"\n        Reset the model. This function is called after load_params().\n        \"\"\"\n        pass\n\n    def load_params(self, path):  # Required\n        \"\"\"\n        Load the parameters of the model.\n        Inputs:\n            - path: Path to a parameter file to load by the model.\n        \"\"\"\n        pass\n\n    def dtSet(self, dt):  # Required\n        \"\"\"\n        Set dt of the model.\n        Inputs:\n            - dt: time step\n        \"\"\"\n        pass\n
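
For illustration, a minimal sub-model satisfying this interface could look as follows (the class and its state/action names are invented for this sketch):

class ConstantVelocityModel:\n    def __init__(self):\n        self.dt = 0.1\n\n    def forward(self, action, state):\n        # next position = position + velocity * dt\n        return [state[0] + action[0] * self.dt]\n\n    def get_state_names(self):\n        return ['POS_X']\n\n    def get_action_names(self):\n        return ['VEL_X']\n\n    def reset(self):\n        pass\n\n    def load_params(self, path):\n        pass  # this toy model has no parameter file\n\n    def dtSet(self, dt):\n        self.dt = dt\n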
"},{"location":"simulator/learning_based_vehicle_model/#api","title":"API","text":"

To successfully create a vehicle model, an InterconnectedModel class needs to be set up correctly.

"},{"location":"simulator/learning_based_vehicle_model/#interconnectedmodel-class","title":"InterconnectedModel class","text":""},{"location":"simulator/learning_based_vehicle_model/#constructor","title":"Constructor","text":"

The constructor takes no arguments.

"},{"location":"simulator/learning_based_vehicle_model/#void-addsubmodelstdtuplestdstring-stdstring-stdstring-model_descriptor","title":"void addSubmodel(std::tuple<std::string, std::string, std::string> model_descriptor)","text":"

Add a new sub-model to the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#void-generateconnectionsstdvectorchar-in_names-stdvectorchar-out_names","title":"void generateConnections(std::vector<char *> in_names, std::vector<char*> out_names)","text":"

Generate connections between sub-models and inputs/outputs of the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#void-initstatestdvectordouble-new_state","title":"void initState(std::vector<double> new_state)","text":"

Set the initial state of the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#stdvectordouble-updatepymodelstdvectordouble-psim_input","title":"std::vector<double> updatePyModel(std::vector<double> psim_input)","text":"

Calculate the next state of the model by calculating the next state of all of the sub-models.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#dtsetdouble-dt","title":"dtSet(double dt)","text":"

Set the time step of the model.

Inputs:

Outputs:

"},{"location":"simulator/learning_based_vehicle_model/#example","title":"Example","text":"

First, we need to set up the model.

InterconnectedModel vehicle;\n\n// Example of model descriptors\nstd::tuple<char*, char*, char*> model_descriptor_1 = {\n(char*)\"path_to_python_module_with_model_class_1\",\n(char*)nullptr,  // If no param file is needed you can pass 'nullptr'\n(char*)\"ModelClass1\"\n};\n\nstd::tuple<char*, char*, char*> model_descriptor_2 =   {\n(char*)\"path_to_python_module_with_model_class_2\",\n(char*)\"/path_to/param_file\",\n(char*)\"ModelClass2\"  // Name of the python class. Needs to use the interface from 'Assumptions'\n};\n\n// Create sub-models based on descriptors\nvehicle.addSubmodel(model_descriptor_1);\nvehicle.addSubmodel(model_descriptor_2);\n\n// Define STATE and INPUT names of the system\nstd::vector<char*> state_names = {(char*)\"STATE_NAME_1\", (char*)\"STATE_NAME_2\"};\nstd::vector<char*> input_names = {(char*)\"INPUT_NAME_1\", (char*)\"INPUT_NAME_2\"};\n\n// Automatically connect sub-systems with model input\nvehicle.generateConnections(input_names, state_names);\n\n// Set the time step of the model\nvehicle.dtSet(dt);\n

After the model is correctly set up, we can use it the following way.

// Example of a model input\nstd::vector<double> vehicle_input = {0.0, 1.0}; // INPUT_NAME_1, INPUT_NAME_2\n\n// Example of a model state\nstd::vector<double> current_state = {0.2, 0.5}; // STATE_NAME_1, STATE_NAME_2\n\n// Set the model state\nvehicle.initState(current_state);\n\n// Calculate the next state of the model\nstd::vector<double> next_state = vehicle.updatePyModel(vehicle_input);\n
"},{"location":"simulator/learning_based_vehicle_model/#references-external-links","title":"References / External links","text":""},{"location":"simulator/learning_based_vehicle_model/#related-issues","title":"Related issues","text":""},{"location":"simulator/simple_planning_simulator/","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#simple_planning_simulator","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node simulates the vehicle motion for a vehicle command in 2D using a simple vehicle model.

"},{"location":"simulator/simple_planning_simulator/#design","title":"Design","text":"

The purpose of this simulator is the integration testing of the planning and control modules. It does not simulate sensing or perception; it is implemented in pure C++ and works without a GPU.

"},{"location":"simulator/simple_planning_simulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"simulator/simple_planning_simulator/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"simulator/simple_planning_simulator/#input","title":"input","text":""},{"location":"simulator/simple_planning_simulator/#output","title":"output","text":""},{"location":"simulator/simple_planning_simulator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/simple_planning_simulator/#common-parameters","title":"Common Parameters","text":"Name Type Description Default value simulated_frame_id string set to the child_frame_id in output tf \"base_link\" origin_frame_id string set to the frame_id in output tf \"odom\" initialize_source string If \"ORIGIN\", the initial pose is set at (0,0,0). If \"INITIAL_POSE_TOPIC\", node will wait until the input/initialpose topic is published. \"INITIAL_POSE_TOPIC\" add_measurement_noise bool If true, the Gaussian noise is added to the simulated results. true pos_noise_stddev double Standard deviation for position noise 0.01 rpy_noise_stddev double Standard deviation for Euler angle noise 0.0001 vel_noise_stddev double Standard deviation for longitudinal velocity noise 0.0 angvel_noise_stddev double Standard deviation for angular velocity noise 0.0 steer_noise_stddev double Standard deviation for steering angle noise 0.0001"},{"location":"simulator/simple_planning_simulator/#vehicle-model-parameters","title":"Vehicle Model Parameters","text":""},{"location":"simulator/simple_planning_simulator/#vehicle_model_type-options","title":"vehicle_model_type options","text":"

The following models receive ACTUATION_CMD generated from raw_vehicle_cmd_converter. Therefore, when these models are selected, the raw_vehicle_cmd_converter is also launched.

The IDEAL model moves ideally as commanded, while the DELAY model moves based on a first-order model with time delay. STEER means the model receives a steering command. VEL means the model receives a target velocity command, while ACC means it receives a target acceleration command. The GEARED suffix means that the motion considers the gear command: the vehicle moves in only one direction, following the gear.

The table below shows which models correspond to what parameters. The model names are written in abbreviated form (e.g. IDEAL_STEER_VEL = I_ST_V).

Name Type Description I_ST_V I_ST_A I_ST_A_G D_ST_V D_ST_A D_ST_A_G D_ST_A_G_WO_FG D_ST_M_ACC_G L_S_V A_C Default value unit acc_time_delay double dead time for the acceleration input x x x x o o o o x x 0.1 [s] steer_time_delay double dead time for the steering input x x x o o o o o x o 0.24 [s] vel_time_delay double dead time for the velocity input x x x o x x x x x x 0.25 [s] acc_time_constant double time constant of the 1st-order acceleration dynamics x x x x o o o o x x 0.1 [s] steer_time_constant double time constant of the 1st-order steering dynamics x x x o o o o o x o 0.27 [s] steer_dead_band double dead band for steering angle x x x o o o o x x o 0.0 [rad] vel_time_constant double time constant of the 1st-order velocity dynamics x x x o x x x x x x 0.5 [s] vel_lim double limit of velocity x x x o o o o o x x 50.0 [m/s] vel_rate_lim double limit of acceleration x x x o o o o o x x 7.0 [m/ss] steer_lim double limit of steering angle x x x o o o o o x o 1.0 [rad] steer_rate_lim double limit of steering angle change rate x x x o o o o o x o 5.0 [rad/s] steer_bias double bias for steering angle x x x o o o o o x o 0.0 [rad] debug_acc_scaling_factor double scaling factor for accel command x x x x o o o x x x 1.0 [-] debug_steer_scaling_factor double scaling factor for steer command x x x x o o o x x x 1.0 [-] acceleration_map_path string path to csv file for acceleration map which converts velocity and ideal acceleration to actual acceleration x x x x x x x o x x - [-] model_module_paths string path to a python module where the model is implemented x x x x x x x x o x - [-] model_param_paths string path to the file where model parameters are stored (can be empty string if no parameter file is required) x x x x x x x x o x - [-] model_class_names string name of the class that implements the model x x x x x x x x o x - [-]

Note: Parameters model_module_paths, model_param_paths, and model_class_names need to have the same length.

The acceleration_map is used only for DELAY_STEER_MAP_ACC_GEARED. It maps the acceleration command (vertical axis) and the current velocity (horizontal axis) to the converted acceleration command that is actually used in the simulator's motion calculation; values in between are linearly interpolated.

Example of acceleration_map.csv

default,  0.00,  1.39,  2.78,  4.17,  5.56,  6.94,  8.33,  9.72, 11.11, 12.50, 13.89, 15.28, 16.67\n-4.0,    -4.40, -4.36, -4.38, -4.12, -4.20, -3.94, -3.98, -3.80, -3.77, -3.76, -3.59, -3.50, -3.40\n-3.5,    -4.00, -3.91, -3.85, -3.64, -3.68, -3.55, -3.42, -3.24, -3.25, -3.00, -3.04, -2.93, -2.80\n-3.0,    -3.40, -3.37, -3.33, -3.00, -3.00, -2.90, -2.88, -2.65, -2.43, -2.44, -2.43, -2.39, -2.30\n-2.5,    -2.80, -2.72, -2.72, -2.62, -2.41, -2.43, -2.26, -2.18, -2.11, -2.03, -1.96, -1.91, -1.85\n-2.0,    -2.30, -2.24, -2.12, -2.02, -1.92, -1.81, -1.67, -1.58, -1.51, -1.49, -1.40, -1.35, -1.30\n-1.5,    -1.70, -1.61, -1.47, -1.46, -1.40, -1.37, -1.29, -1.24, -1.10, -0.99, -0.83, -0.80, -0.78\n-1.0,    -1.30, -1.28, -1.10, -1.09, -1.04, -1.02, -0.98, -0.89, -0.82, -0.61, -0.52, -0.54, -0.56\n-0.8,    -0.96, -0.90, -0.82, -0.74, -0.70, -0.65, -0.63, -0.59, -0.55, -0.44, -0.39, -0.39, -0.35\n-0.6,    -0.77, -0.71, -0.67, -0.65, -0.58, -0.52, -0.51, -0.50, -0.40, -0.33, -0.30, -0.31, -0.30\n-0.4,    -0.45, -0.40, -0.45, -0.44, -0.38, -0.35, -0.31, -0.30, -0.26, -0.30, -0.29, -0.31, -0.25\n-0.2,    -0.24, -0.24, -0.25, -0.22, -0.23, -0.25, -0.27, -0.29, -0.24, -0.22, -0.17, -0.18, -0.12\n 0.0,     0.00,  0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08, -0.08, -0.10, -0.10, -0.10\n 0.2,     0.16,  0.12,  0.02,  0.02,  0.00,  0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08\n 0.4,     0.38,  0.30,  0.22,  0.25,  0.24,  0.23,  0.20,  0.16,  0.16,  0.14,  0.10,  0.05,  0.05\n 0.6,     0.52,  0.52,  0.51,  0.49,  0.43,  0.40,  0.35,  0.33,  0.33,  0.33,  0.32,  0.34,  0.34\n 0.8,     0.82,  0.81,  0.78,  0.68,  0.63,  0.56,  0.53,  0.48,  0.43,  0.41,  0.37,  0.38,  0.40\n 1.0,     1.00,  1.08,  1.01,  0.88,  0.76,  0.69,  0.66,  0.58,  0.54,  0.49,  0.45,  0.40,  0.40\n 1.5,     1.52,  1.50,  1.38,  1.26,  1.14,  1.03,  0.91,  0.82,  0.67,  0.61,  0.51,  0.41,  0.41\n 2.0,     1.80,  1.80,  1.64,  1.43,  1.25,  1.11,  0.96,  0.81,  0.70,  0.59,  0.51,  0.42,  0.42\n
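
The lookup this map implies can be sketched as a clamped bilinear interpolation (illustrative only, not the simulator's actual implementation):

import numpy as np\n\ndef lookup_acceleration(vel_axis, acc_axis, table, velocity, acc_cmd):\n    # Clamp queries to the table range, then interpolate along velocity\n    # within each row and finally across the acceleration-command rows\n    velocity = float(np.clip(velocity, vel_axis[0], vel_axis[-1]))\n    acc_cmd = float(np.clip(acc_cmd, acc_axis[0], acc_axis[-1]))\n    row_values = [np.interp(velocity, vel_axis, row) for row in table]\n    return float(np.interp(acc_cmd, acc_axis, row_values))\n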

"},{"location":"simulator/simple_planning_simulator/#actuation_cmd-model","title":"ACTUATION_CMD model","text":"

The simple_planning_simulator usually operates by receiving Control commands, but when the ACTUATION_CMD* model is selected, it receives Actuation commands instead of Control commands. This model can simulate the motion using the vehicle command that is actually sent to the real vehicle. Therefore, when this model is selected, the raw_vehicle_cmd_converter is also launched. Please refer to the actuation_cmd_sim.md for more details.

Note: The steering/velocity/acceleration dynamics are modeled by a first-order system with dead time in the delay models. The time constant is defined as the time it takes for the step response to rise to 63% of its final value. The dead time is a delay in the response to a control input.

"},{"location":"simulator/simple_planning_simulator/#example-of-learned_steer_vel-model","title":"Example of LEARNED_STEER_VEL model","text":"

We created a few basic models to showcase how LEARNED_STEER_VEL works.

  1. Install a library that contains basic Python models. (branch: v0.1_autoware)

  2. In the file src/vehicle/sample_vehicle_launch/sample_vehicle_description/config/simulator_model.param.yaml, set vehicle_model_type to LEARNED_STEER_VEL and set the following parameters in the same file. These models are for testing and do not require any parameter file.

model_module_paths:\n[\n\"control_analysis_pipeline.autoware_models.vehicle.kinematic\",\n\"control_analysis_pipeline.autoware_models.steering.steer_example\",\n\"control_analysis_pipeline.autoware_models.drive.drive_example\",\n]\nmodel_param_paths: [\"\", \"\", \"\"]\nmodel_class_names: [\"KinematicModel\", \"SteerExample\", \"DriveExample\"]\n
"},{"location":"simulator/simple_planning_simulator/#default-tf-configuration","title":"Default TF configuration","text":"

Since the vehicle outputs the odom->base_link tf, this simulator outputs the tf with the same frame_id configuration. In simple_planning_simulator.launch.py, the node that outputs the map->odom tf, which is usually estimated by the localization module (e.g. NDT), is launched as well. Since the tf output by this simulator module is an ideal value, odom->map will always be 0.

"},{"location":"simulator/simple_planning_simulator/#caveat-pitch-calculation","title":"(Caveat) Pitch calculation","text":"

Ego vehicle pitch angle is calculated in the following manner.

NOTE: driving against the line direction (as depicted in image's bottom row) is not supported and only shown for illustration purposes.

"},{"location":"simulator/simple_planning_simulator/#error-detection-and-handling","title":"Error detection and handling","text":"

The only input validation performed is checking for a valid vehicle model type.

"},{"location":"simulator/simple_planning_simulator/#security-considerations","title":"Security considerations","text":""},{"location":"simulator/simple_planning_simulator/#references-external-links","title":"References / External links","text":"

This was originally developed in Autoware.AI; see the link below.

https://github.com/Autoware-AI/simulation/tree/master/wf_simulator

"},{"location":"simulator/simple_planning_simulator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/","title":"ACTUATION_CMD model","text":""},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd-model","title":"ACTUATION_CMD model","text":"

The simple_planning_simulator usually operates by receiving Control commands, but when the ACTUATION_CMD* model is selected, it receives Actuation commands instead of Control commands. This model can simulate the motion using the vehicle command that is actually sent to the real vehicle. Therefore, when this model is selected, the raw_vehicle_cmd_converter is also launched.

"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd","title":"ACTUATION_CMD","text":"

This model receives the accel/brake commands and converts them using the map to calculate the motion of the model. The steer command is used as-is. Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: false\n
"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd_steer_map","title":"ACTUATION_CMD_STEER_MAP","text":"

This model inherits from ACTUATION_CMD and receives an arbitrary steering value as the actuation command, for example an EPS (Electric Power Steering) voltage. The value is converted to the steering tire rate to calculate the motion of the model.

Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: true\n
"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd_vgr","title":"ACTUATION_CMD_VGR","text":"

This model inherits from ACTUATION_CMD; the steering wheel angle is sent as the actuation command. The value is converted to the steering tire angle to calculate the motion of the model.

Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: true\n

"},{"location":"simulator/simple_planning_simulator/docs/actuation_cmd_sim/#actuation_cmd_mechanical","title":"ACTUATION_CMD_MECHANICAL","text":"

This model inherits from ACTUATION_CMD_VGR and adds mechanical dynamics and a controller for them, to simulate the mechanical structure and software of the real vehicle.

Please make sure that the raw_vehicle_cmd_converter is configured as follows.

convert_accel_cmd: true\nconvert_brake_cmd: true\nconvert_steer_cmd: true\n

The mechanical structure of the vehicle is as follows.

The vehicle-side software is assumed to include limiters, PID controllers, power steering, etc. for its input. The power steering conversion is approximated by a polynomial. Steering Dynamics is a model that represents the motion of the tire angle when steering torque is input. It is represented by the following formula.

\\begin{align}\n\\dot{\\theta} &= \\omega \\\\\n\\dot{\\omega} &= \\frac{1}{I} (T_{\\text{input}} - D \\omega - K \\theta - \\text{sign}(\\omega) F_{\\text{friction}} ) \\\\\n\\end{align}\n

In this case, theta is the steering tire angle, omega is its angular velocity, T_input is the input steering torque, I is the moment of inertia, D is the damping coefficient, K is the stiffness, and F_friction is the friction torque.

This dynamics also has a dead zone: when the steering rotation direction differs from the steering torque input direction and the steering torque input is below the dead zone threshold, the system enters the dead zone. Once in the dead zone, it remains there until a steering input above the dead zone threshold is applied. While in the dead zone, the steering tire angle does not move.

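A compact sketch of one explicit-Euler step of these dynamics, including the dead zone behavior (the parameter values are placeholders, not the system-identified ones):

def steering_step(theta, omega, torque, in_dead_zone, dt=0.01,\n                  I=0.1, D=0.3, K=1.0, F=0.05, dead_zone_threshold=0.2):\n    # Enter (or stay in) the dead zone: small torque opposing the rotation\n    if abs(torque) < dead_zone_threshold and (in_dead_zone or torque * omega < 0):\n        return theta, 0.0, True  # tire angle does not move in the dead zone\n    sign_w = (omega > 0) - (omega < 0)\n    omega_dot = (torque - D * omega - K * theta - sign_w * F) / I\n    omega = omega + omega_dot * dt\n    theta = theta + omega * dt\n    return theta, omega, False\n
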
Please refer to the following file for the values of the parameters that have been system-identified using the actual vehicle's driving data: mechanical_sample_param. In the plots, the blue line is the control input, the green line is the actual vehicle's tire angle output, and the red line is the simulator's tire angle output.

This model has a smaller sum of errors against the observed values of the actual vehicle than a conventionally tuned first-order lag model. For details, please refer to #9252.

The parameters used in the ACTUATION_CMD are as follows.

Name Type Description unit accel_time_delay double dead time for the acceleration input [s] accel_time_constant double time constant of the 1st-order acceleration dynamics [s] brake_time_delay double dead time for the brake input [s] brake_time_constant double time constant of the 1st-order brake dynamics [s] convert_accel_cmd bool If true, it is assumed that the command is received converted to an accel actuation value, and it is converted back to acceleration value inside the simulator. [-] convert_brake_cmd bool If true, it is assumed that the command is received converted to a brake actuation value, and it is converted back to acceleration value inside the simulator. [-] vgr_coef_a double the value of the coefficient a of the variable gear ratio [-] vgr_coef_b double the value of the coefficient b of the variable gear ratio [-] vgr_coef_c double the value of the coefficient c of the variable gear ratio [-] enable_pub_steer bool whether to publish the steering tire angle. if false, it is expected to be converted and published from actuation_status in other nodes (e.g. raw_vehicle_cmd_converter) [-]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/","title":"tier4_dummy_object_rviz_plugin","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#tier4_dummy_object_rviz_plugin","title":"tier4_dummy_object_rviz_plugin","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin is used to generate dummy pedestrians, cars, and obstacles in the planning simulator.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#overview","title":"Overview","text":"

The CarInitialPoseTool publishes a topic for generating a dummy car. The PedestrianInitialPoseTool publishes a topic for generating a dummy pedestrian. The UnknownInitialPoseTool publishes a topic for generating a dummy obstacle. The DeleteAllObjectsTool deletes the dummy cars, pedestrians, and obstacles displayed by the above three tools.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#output","title":"Output","text":"Name Type Description /simulation/dummy_perception_publisher/object_info tier4_simulation_msgs::msg::DummyObject The topic on which to publish dummy object info"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"simulator/tier4_dummy_object_rviz_plugin/#carpose","title":"CarPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 4.0 X standard deviation for initial pose [m] width_ float 1.8 Y standard deviation for initial pose [m] height_ float 2.0 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#buspose","title":"BusPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 10.5 X standard deviation for initial pose [m] width_ float 2.5 Y standard deviation for initial pose [m] height_ float 3.5 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#pedestrianpose","title":"PedestrianPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#unknownpose","title":"UnknownPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#deleteallobjects","title":"DeleteAllObjects","text":"Name Type Default 
Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This plugin is intended for use with the planning simulator.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select + on the tool tab.
  2. Select one of the tools from tier4_dummy_object_rviz_plugin and press OK.
  3. Select the new item in the tool tab (2D Dummy Car in the example) and click on it in rviz.
"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#interactive-manipulation","title":"Interactive manipulation","text":"

You can interactively manipulate the object.

  1. Select \"Tool Properties\" in rviz.
  2. Select the corresponding object tab in the Tool Properties.
  3. Turn the \"Interactive\" checkbox on.
  4. Select the item in the tool tab that you haven't chosen yet.
  5. Key commands are as follows.
action key command ADD Shift + Click Right Button MOVE Hold down Right Button + Drag and Drop DELETE Alt + Click Right Button"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"simulator/tier4_dummy_object_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"simulator/vehicle_door_simulator/","title":"vehicle_door_simulator","text":""},{"location":"simulator/vehicle_door_simulator/#vehicle_door_simulator","title":"vehicle_door_simulator","text":"

This package is for testing operations on vehicle devices such as doors.

"},{"location":"system/autoware_component_monitor/","title":"autoware_component_monitor","text":""},{"location":"system/autoware_component_monitor/#autoware_component_monitor","title":"autoware_component_monitor","text":"

The autoware_component_monitor package allows monitoring system usage of component containers. The composable node inside the package is attached to a component container, and it publishes CPU and memory usage of the container.

"},{"location":"system/autoware_component_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/autoware_component_monitor/#input","title":"Input","text":"

None.

"},{"location":"system/autoware_component_monitor/#output","title":"Output","text":"Name Type Description ~/component_system_usage autoware_internal_msgs::msg::ResourceUsageReport CPU, Memory usage etc."},{"location":"system/autoware_component_monitor/#parameters","title":"Parameters","text":""},{"location":"system/autoware_component_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range publish_rate float Publish rate in Hz 5 N/A"},{"location":"system/autoware_component_monitor/#how-to-use","title":"How to use","text":"

Add it as a composable node in your launch file:

<launch>\n<group>\n<push-ros-namespace namespace=\"your_namespace\"/>\n...\n\n    <load_composable_node target=\"$(var container_name)\">\n<composable_node pkg=\"autoware_component_monitor\"\nplugin=\"autoware::component_monitor::ComponentMonitor\"\nname=\"component_monitor\">\n<param from=\"$(find-pkg-share autoware_component_monitor)/config/component_monitor.param.yaml\"/>\n</composable_node>\n</load_composable_node>\n\n...\n  </group>\n</launch>\n
"},{"location":"system/autoware_component_monitor/#quick-testing","title":"Quick testing","text":"

You can test the package by running the following command:

ros2 component load <container_name> autoware_component_monitor autoware::component_monitor::ComponentMonitor -p publish_rate:=10.0 --node-namespace <namespace>\n\n# Example usage\nros2 component load /pointcloud_container autoware_component_monitor autoware::component_monitor::ComponentMonitor -p publish_rate:=10.0 --node-namespace /pointcloud_container\n
"},{"location":"system/autoware_component_monitor/#how-it-works","title":"How it works","text":"

The package uses the top command under the hood. The command top -b -n 1 -E k -p PID is run at 10 Hz to get the system usage of the process.

Here is a sample output:

top - 13:57:26 up  3:14,  1 user,  load average: 1,09, 1,10, 1,04\nTasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie\n%Cpu(s):  0,0 us,  0,8 sy,  0,0 ni, 99,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st\nKiB Mem : 65532208 total, 35117428 free, 17669824 used, 12744956 buff/cache\nKiB Swap: 39062524 total, 39062524 free,        0 used. 45520816 avail Mem\n\n    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND\n   3352 meb       20   0 2905940   1,2g  39292 S   0,0   2,0  23:24.01 awesome\n

We take the 5th and 8th fields (0-indexed) of the last line, which are RES and %CPU, respectively.

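A sketch of that extraction (the field indices follow the 0-indexed description above; the package itself is C++, so this Python is only illustrative):

import subprocess\n\ndef get_usage(pid: int):\n    out = subprocess.run(['top', '-b', '-n', '1', '-E', 'k', '-p', str(pid)],\n                         capture_output=True, text=True, check=True).stdout\n    fields = out.strip().splitlines()[-1].split()\n    return fields[5], fields[8]  # RES, %CPU of the process line\n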
"},{"location":"system/autoware_default_adapi/","title":"autoware_default_adapi","text":""},{"location":"system/autoware_default_adapi/#autoware_default_adapi","title":"autoware_default_adapi","text":""},{"location":"system/autoware_default_adapi/#notes","title":"Notes","text":"

Components that relay services must be executed by the Multi-Threaded Executor.

"},{"location":"system/autoware_default_adapi/#features","title":"Features","text":"

This package is a default implementation of the AD API.

"},{"location":"system/autoware_default_adapi/#web-server-script","title":"Web server script","text":"

This is a sample that calls the API using HTTP.

"},{"location":"system/autoware_default_adapi/#guide-message-script","title":"Guide message script","text":"

This is a debug script to check the conditions for transition to autonomous mode.

$ ros2 run autoware_default_adapi guide.py\n\nThe vehicle pose is not estimated. Please set an initial pose or check GNSS.\nThe route is not set. Please set a goal pose.\nThe topic rate error is detected. Please check [control,planning] components.\nThe vehicle is ready. Please change the operation mode to autonomous.\nThe vehicle is driving autonomously.\nThe vehicle has reached the goal of the route. Please reset a route.\n
"},{"location":"system/autoware_default_adapi/document/autoware-state/","title":"Autoware state compatibility","text":""},{"location":"system/autoware_default_adapi/document/autoware-state/#autoware-state-compatibility","title":"Autoware state compatibility","text":""},{"location":"system/autoware_default_adapi/document/autoware-state/#overview","title":"Overview","text":"

Since /autoware/state was so widely used, this package creates it from the AD API states for backward compatibility. The diagnostic checks that ad_service_state_monitor used to perform have been replaced by component_state_monitor. The service /autoware/shutdown, which changes the autoware state to finalizing, is also supported for compatibility.

"},{"location":"system/autoware_default_adapi/document/autoware-state/#conversion","title":"Conversion","text":"

This is the correspondence between AD API states and autoware states. The launch state is the data that this node holds internally.

"},{"location":"system/autoware_default_adapi/document/fail-safe/","title":"Fail-safe API","text":""},{"location":"system/autoware_default_adapi/document/fail-safe/#fail-safe-api","title":"Fail-safe API","text":""},{"location":"system/autoware_default_adapi/document/fail-safe/#overview","title":"Overview","text":"

The fail-safe API simply relays the MRM state. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/interface/","title":"Interface API","text":""},{"location":"system/autoware_default_adapi/document/interface/#interface-api","title":"Interface API","text":""},{"location":"system/autoware_default_adapi/document/interface/#overview","title":"Overview","text":"

The interface API simply returns a version number. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/localization/","title":"Localization API","text":""},{"location":"system/autoware_default_adapi/document/localization/#localization-api","title":"Localization API","text":""},{"location":"system/autoware_default_adapi/document/localization/#overview","title":"Overview","text":"

Unify the localization initialization method to the service. The topic /initialpose from rviz is now subscribed to only by the adapter node and converted to an API call. This API call is forwarded to the pose initializer node so that it can centralize the state of pose initialization. For other nodes that require an initial pose, the pose initializer node publishes it as /initialpose3d. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/motion/","title":"Motion API","text":""},{"location":"system/autoware_default_adapi/document/motion/#motion-api","title":"Motion API","text":""},{"location":"system/autoware_default_adapi/document/motion/#overview","title":"Overview","text":"

Provides a hook for when the vehicle starts. It is typically used for announcements that call attention to the surroundings. A pause function is added to the vehicle_cmd_gate, and the API controls it based on whether the vehicle is stopped and whether a start has been requested. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/motion/#states","title":"States","text":"

The implementation has more detailed state transitions to manage pause state synchronization. The correspondence with the AD API state is as follows.

"},{"location":"system/autoware_default_adapi/document/operation-mode/","title":"Operation mode API","text":""},{"location":"system/autoware_default_adapi/document/operation-mode/#operation-mode-api","title":"Operation mode API","text":""},{"location":"system/autoware_default_adapi/document/operation-mode/#overview","title":"Overview","text":"

Introduce the operation mode. It handles autoware engage, gate_mode, external_cmd_selector, and control_mode abstractly. When the mode is changed, it enters an in-transition state, and if the transition completion condition for that mode is not satisfied, it is returned to the previous mode. Also, currently the only condition for a mode change is WaitingForEngage in /autoware/state, and the engage state is shared between modes. After introducing the operation mode, each mode will have its own transition-available flag. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_default_adapi/document/operation-mode/#states","title":"States","text":"

The operation mode has the following state transitions. Disabling autoware control, or changing the operation mode while autoware control is disabled, takes effect immediately. Otherwise, enabling autoware control, or changing the operation mode while autoware control is enabled, puts the state into a transition state. If the mode change completion condition is not satisfied within the timeout while in the transition state, the mode returns to the previous one.

"},{"location":"system/autoware_default_adapi/document/operation-mode/#compatibility","title":"Compatibility","text":"

Ideally, vehicle_cmd_gate and external_cmd_selector should be merged so that the operation mode can be handled directly. However, currently the operation mode transition manager performs the following conversions to match the implementation. The transition manager monitors each topic in the previous interface and synchronizes the operation mode when it changes. When the operation mode is changed with the new interface, the transition manager disables synchronization and changes the operation mode using the previous interface.

"},{"location":"system/autoware_default_adapi/document/routing/","title":"Routing API","text":""},{"location":"system/autoware_default_adapi/document/routing/#routing-api","title":"Routing API","text":""},{"location":"system/autoware_default_adapi/document/routing/#overview","title":"Overview","text":"

Unify the route setting method to the service. This API supports two waypoint formats: poses and lanelet segments. The goal and checkpoint topics from rviz are subscribed to only by the adapter node and converted to API calls. These API calls are forwarded to the mission planner node so that it can centralize the state of routing. For other nodes that require the route, the mission planner node publishes it as /planning/mission_planning/route. See the autoware-documentation for AD API specifications.

"},{"location":"system/autoware_processing_time_checker/","title":"Processing Time Checker","text":""},{"location":"system/autoware_processing_time_checker/#macro-rendering-error","title":"Macro Rendering Error","text":"

File: system/autoware_processing_time_checker/README.md

KeyError: 'default'

"},{"location":"system/bluetooth_monitor/","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#bluetooth_monitor","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#description","title":"Description","text":"

This node monitors a Bluetooth connection to a wireless device by using L2ping. L2ping generates a PING echo command on the Bluetooth L2CAP layer, and it receives and checks the echo response from the wireless device.

"},{"location":"system/bluetooth_monitor/#block-diagram","title":"Block diagram","text":"

L2ping is only allowed for root by default, so this package provides the following approach to minimize security risks as much as possible:

"},{"location":"system/bluetooth_monitor/#output","title":"Output","text":""},{"location":"system/bluetooth_monitor/#bluetooth_monitor-bluetooth_connection","title":"bluetooth_monitor: bluetooth_connection","text":"

[summary]

level message OK OK WARN RTT warning ERROR Lost Function error

[values]

key value (example) Device [0-9]: Status OK / RTT warning / Verify error / Lost / Ping rejected / Function error Device [0-9]: Name Wireless Controller Device [0-9]: Manufacturer MediaTek, Inc. Device [0-9]: Address AA:BB:CC:DD:EE:FF Device [0-9]: RTT 0.00ms key (example) value (example) Device [0-9]: connect No such file or directory"},{"location":"system/bluetooth_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range addresses array Bluetooth addresses of the device to monitor ['4C:B9:9B:6E:7F:9A'] N/A port integer Port number to connect to L2ping service on the host 7640 N/A timeout integer Time in seconds to wait for a response from the device 5 N/A rtt_warn float Time in seconds to warn if the round trip time is greater than this value 0.1 N/A "},{"location":"system/bluetooth_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Assign capability to l2ping_service since L2ping requires cap_net_raw+eip capability.

    sudo setcap 'cap_net_raw+eip' ./build/bluetooth_monitor/l2ping_service\n
  2. Run l2ping_service and bluetooth_monitor.

    ./build/bluetooth_monitor/l2ping_service\nros2 launch bluetooth_monitor bluetooth_monitor.launch.xml\n
"},{"location":"system/bluetooth_monitor/#known-limitations-and-issues","title":"Known limitations and issues","text":"

None.

"},{"location":"system/component_state_monitor/","title":"component_state_monitor","text":""},{"location":"system/component_state_monitor/#component_state_monitor","title":"component_state_monitor","text":"

The component state monitor checks the state of each component using the topic state monitor. This is an implementation for backward compatibility with the AD service state monitor. It will be replaced in the future by a diagnostics tree.

"},{"location":"system/default_ad_api_helpers/ad_api_adaptors/","title":"ad_api_adaptors","text":""},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#macro-rendering-error","title":"Macro Rendering Error","text":"

File: system/default_ad_api_helpers/ad_api_adaptors/README.md

FileNotFoundError: [Errno 2] No such file or directory: '/system/default_ad_api_helpers/ad_api_adaptors/schema/ad_api_adaptors.schema.json'

"},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer_1","title":"automatic_pose_initializer","text":"

This node calls the localization initialize API when the localization initialization state is uninitialized. Since the API uses the GNSS pose when no pose is specified, initialization using GNSS can be performed automatically.

Interface Local Name Global Name Description Subscription - /api/localization/initialization_state The localization initialization state API. Client - /api/localization/initialize The localization initialize API."},{"location":"system/diagnostic_graph_aggregator/","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#diagnostic_graph_aggregator","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#overview","title":"Overview","text":"

The diagnostic graph aggregator node subscribes to the diagnostic array and publishes an aggregated diagnostic graph. As shown in the diagram below, this node introduces extra diagnostic statuses for intermediate functional units.

"},{"location":"system/diagnostic_graph_aggregator/#diagnostic-graph-structures","title":"Diagnostic graph structures","text":"

The diagnostic graph is actually a set of fault tree analyses (FTA) for each operation mode of Autoware. Since the status of the same node may be referenced by multiple nodes, the overall structure is a directed acyclic graph (DAG). Each node in the diagnostic graph represents the diagnostic status of a specific functional unit, including the input diagnostics. So we define this as a \"unit\", and call a unit corresponding to an input diagnostic a \"diag unit\" and the others \"node units\".

Every unit has an error level that is the same as DiagnosticStatus, a unit type, and optionally a unit path. In addition, every diag unit has a message, a hardware_id, and values that are the same as DiagnosticStatus. The unit type represents how the unit status is calculated, such as AND or OR. The unit path is any unique string that represents the functionality of the unit.

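As a hedged sketch of one plausible reading of AND/OR aggregation (an AND unit is healthy only if all of its children are, an OR unit if any child is; see the package source for the actual rules):

OK, WARN, ERROR = 0, 1, 2  # mirrors DiagnosticStatus levels\n\ndef aggregate(unit_type, child_levels):\n    if unit_type == 'and':\n        return max(child_levels, default=OK)  # worst child dominates\n    if unit_type == 'or':\n        return min(child_levels, default=OK)  # best child suffices\n    raise ValueError(f'unknown unit type: {unit_type}')\n
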
NOTE: This feature is currently under development. The diagnostic graph also supports \"links\" because there are cases where connections between units have additional status. For example, it is natural for many functional units to have an error status until initialization is complete.

"},{"location":"system/diagnostic_graph_aggregator/#operation-mode-availability","title":"Operation mode availability","text":"

For MRM, this node publishes the status of the top-level functional units in the dedicated message. Therefore, the diagnostic graph must contain functional units with the following names. This feature breaks the generality of the graph and may be changed to a plugin or another node in the future.

"},{"location":"system/diagnostic_graph_aggregator/#interfaces","title":"Interfaces","text":"Interface Type Interface Name Data Type Description subscription /diagnostics diagnostic_msgs/msg/DiagnosticArray Diagnostics input. publisher /diagnostics_graph/unknowns diagnostic_msgs/msg/DiagnosticArray Diagnostics not included in graph. publisher /diagnostics_graph/struct tier4_system_msgs/msg/DiagGraphStruct Diagnostic graph (static part). publisher /diagnostics_graph/status tier4_system_msgs/msg/DiagGraphStatus Diagnostic graph (dynamic part). publisher /system/operation_mode/availability tier4_system_msgs/msg/OperationModeAvailability Operation mode availability."},{"location":"system/diagnostic_graph_aggregator/#parameters","title":"Parameters","text":"Parameter Name Data Type Description graph_file string Path of the config file. rate double Rate of aggregation and topic publication. input_qos_depth uint QoS depth of input array topic. graph_qos_depth uint QoS depth of output graph topic. use_operation_mode_availability bool Use operation mode availability publisher."},{"location":"system/diagnostic_graph_aggregator/#examples","title":"Examples","text":"

This is an example of a diagnostic graph configuration. The configuration can be split into multiple files.
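
As a minimal sketch of the overall structure (the file and package names here are hypothetical), a top-level graph file can import subgraphs via files and declare its own units:

files:\n  - { path: $(dirname)/sensors.yaml } # hypothetical subgraph in the same directory\n  - { path: $(find-pkg-share my_vehicle_launch)/config/diagnostics.yaml } # hypothetical package\nunits:\n  - { path: /autoware/modes/stop, type: ok } # constant unit, as in the bundled example graph\n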

ros2 launch diagnostic_graph_aggregator example-main.launch.xml\n

You can reuse a graph by making partial edits, for example to disable hardware checks for simulation.
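
As an illustration, such an edit file only needs an edits list that removes the target units; the unit path below comes from the bundled example graph:

edits:\n  - { type: remove, path: /sensing/lidars/front } # drop a hardware check when simulating\n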

ros2 launch diagnostic_graph_aggregator example-edit.launch.xml\n
"},{"location":"system/diagnostic_graph_aggregator/#debug-tools","title":"Debug tools","text":""},{"location":"system/diagnostic_graph_aggregator/#graph-file-format","title":"Graph file format","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/","title":"Edit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/#edit","title":"Edit","text":"

The edit is a base object that modifies the existing diagnostic graph. Any derived object can be used where an edit object is required.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/#format","title":"Format","text":"Name Type Required Description type string yes The string indicating the type of derived object."},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/#derived-objects","title":"Derived objects","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/graph/","title":"Graph","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/graph/#graph","title":"Graph","text":"

The graph object is the top-level structure that makes up the configuration file.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/graph/#format","title":"Format","text":"Name Type Required Description files list no List of path objects for importing subgraphs. units list no List of unit objects that make up the graph. edits list no List of edit objects to partially edit the graph."},{"location":"system/diagnostic_graph_aggregator/doc/format/path/","title":"Path","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#path","title":"Path","text":"

The path object specifies the file path of the subgraph to be imported. The structure of the subgraph file should be a graph object.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#format","title":"Format","text":"Name Type Required Description path string yes The file path of the subgraph."},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#substitutions","title":"Substitutions","text":"

File paths can contain substitutions, as in ROS 2 launch. The supported substitutions are as follows.

Substitution Description $(dirname) The path of this file directory. $(find-pkg-share <package>) The path of the package."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/","title":"Unit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#unit","title":"Unit","text":"

The unit is a base object that makes up the diagnostic graph. Any derived object can be used where a unit object is required.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#format","title":"Format","text":"Name Type Required Description type string yes The string indicating the type of derived object. path string no Any string to reference from other units."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#derived-objects","title":"Derived objects","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/remove/","title":"Remove","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/remove/#remove","title":"Remove","text":"

The remove object is an edit that removes other units.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/edit/remove/#format","title":"Format","text":"Name Type Required Description type string yes Specify remove when using this object. path string yes The path of the unit to remove."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/","title":"And","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/#and","title":"And","text":"

The and object is a unit that is evaluated as the maximum error level of the input units. Note that error level stale is treated as error.
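
A minimal sketch of an and unit; the aggregate path is hypothetical, while the linked paths come from the bundled example graph:

- path: /functions/obstacle_detection_all # hypothetical aggregate unit\n  type: and\n  list:\n    - { type: link, link: /sensing/lidars/front }\n    - { type: link, link: /sensing/radars/front }\n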

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/#format","title":"Format","text":"Name Type Required Description type string yes Specify and or short-circuit-and when using this object. list list[unit] yes List of input unit objects."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/and/#short-circuit-evaluation","title":"Short-circuit evaluation","text":"

Warning

The short-circuit-and is work in progress (WIP).

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/","title":"Constant","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/#constant","title":"Constant","text":"

The constant object is a unit with a fixed error level.
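
For example, the isolated units of the bundled example graph are constant units declared roughly like this:

- { path: /autoware/modes/stop, type: ok }\n- { path: /autoware/modes/emergency_stop, type: ok }\n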

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/#format","title":"Format","text":"Name Type Required Description type string yes Specify error level when using this object."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/const/#error-levels","title":"Error levels","text":"

The supported error levels are as follows.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/diag/","title":"Diag","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/diag/#diag","title":"Diag","text":"

The diag object is a unit that refers to a specific status within the source diagnostics.
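
A minimal sketch of a diag unit; the diag value is a hypothetical status name that must match the name field of a status in the source diagnostic array:

- path: /sensing/lidars/top\n  type: diag\n  diag: \"velodyne_driver: connection\" # hypothetical diagnostic status name\n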

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/diag/#format","title":"Format","text":"Name Type Required Description type string yes Specify diag when using this object. diag string yes The name of the diagnostic status."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/link/","title":"Link","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/link/#link","title":"Link","text":"

The link object is a unit that refers to another unit.
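
For instance, in the bundled example graph several operation modes share /functions/obstacle_detection; a sketch of one of them using a link:

- path: /autoware/modes/comfortable_stop\n  type: and\n  list:\n    - { type: link, link: /functions/obstacle_detection }\n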

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/link/#format","title":"Format","text":"Name Type Required Description type string yes Specify link when using this object. link string yes The path of the unit to reference."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/or/","title":"Or","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/or/#or","title":"Or","text":"

The or object is a unit that is evaluated as the minimum error level of the input units. Note that error level stale is treated as error.

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/or/#format","title":"Format","text":"Name Type Required Description type string yes Specify or when using this object. list list[unit] yes List of input unit objects."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/","title":"Constant","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/#constant","title":"Constant","text":"

Warning

This object is under development. It may be removed in the future.

The remapping object is a unit that converts error levels.
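
A minimal sketch, assuming a warn-to-ok remapping type exists; both the type name and the unit path here are assumptions, since the supported type list is still being finalized:

- path: /ignored/warnings # hypothetical unit path\n  type: warn-to-ok # assumed remapping type name\n  item: { type: link, link: /sensing/lidars/front }\n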

"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/#format","title":"Format","text":"Name Type Required Description type string yes Specify remapping type when using this object. item unit yes Input unit object."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/remap/#remapping-types","title":"Remapping types","text":"

The supported remapping types are as follows.

"},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/","title":"Tree tool","text":""},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/#tree-tool","title":"Tree tool","text":"

This tool displays the graph structure of the configuration file in tree format.

"},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/#usage","title":"Usage","text":"
ros2 run diagnostic_graph_aggregator tree <graph-config-path>\n
"},{"location":"system/diagnostic_graph_aggregator/doc/tool/tree/#examples","title":"Examples","text":"
ros2 run diagnostic_graph_aggregator tree '$(find-pkg-share diagnostic_graph_aggregator)/example/graph/main.yaml'\n
===== Top-level trees ============================\n- /autoware/modes/autonomous (and)\n    - /functions/pose_estimation (and)\n    - /functions/obstacle_detection (or)\n- /autoware/modes/local (and)\n    - /external/joystick_command (diag)\n- /autoware/modes/remote (and)\n    - /external/remote_command (diag)\n- /autoware/modes/comfortable_stop (and)\n    - /functions/obstacle_detection (or)\n- /autoware/modes/pull_over (and)\n    - /functions/pose_estimation (and)\n    - /functions/obstacle_detection (or)\n===== Subtrees ===================================\n- /functions/pose_estimation (and)\n    - /sensing/lidars/top (diag)\n- /functions/obstacle_detection (or)\n    - /sensing/lidars/front (diag)\n    - /sensing/radars/front (diag)\n===== Isolated units =============================\n- /autoware/modes/stop (ok)\n- /autoware/modes/emergency_stop (ok)\n
"},{"location":"system/diagnostic_graph_utils/","title":"diagnostic_graph_utils","text":""},{"location":"system/diagnostic_graph_utils/#diagnostic_graph_utils","title":"diagnostic_graph_utils","text":"

This package provides utilities for the diagnostic graph published by diagnostic_graph_aggregator.

"},{"location":"system/diagnostic_graph_utils/#ros-node","title":"ROS node","text":""},{"location":"system/diagnostic_graph_utils/#c-library","title":"C++ library","text":""},{"location":"system/diagnostic_graph_utils/doc/node/converter/","title":"Converter tool","text":""},{"location":"system/diagnostic_graph_utils/doc/node/converter/#converter-tool","title":"Converter tool","text":"

This tool converts /diagnostics_graph to /diagnostics_array so it can be read by tools such as rqt_runtime_monitor.

"},{"location":"system/diagnostic_graph_utils/doc/node/converter/#usage","title":"Usage","text":"
ros2 run diagnostic_graph_utils converter_node\n
"},{"location":"system/diagnostic_graph_utils/doc/node/converter/#examples","title":"Examples","text":"

Terminal 1:

ros2 launch diagnostic_graph_aggregator example-main.launch.xml\n

Terminal 2:

ros2 run diagnostic_graph_utils converter_node\n

Terminal 3:

ros2 run rqt_runtime_monitor rqt_runtime_monitor --ros-args -r diagnostics:=diagnostics_array\n

"},{"location":"system/diagnostic_graph_utils/doc/node/dump/","title":"Dump tool","text":""},{"location":"system/diagnostic_graph_utils/doc/node/dump/#dump-tool","title":"Dump tool","text":"

This tool displays /diagnostics_graph in table format.

"},{"location":"system/diagnostic_graph_utils/doc/node/dump/#usage","title":"Usage","text":"
ros2 run diagnostic_graph_utils dump_node\n
"},{"location":"system/diagnostic_graph_utils/doc/node/dump/#examples","title":"Examples","text":"

Terminal 1:

ros2 launch diagnostic_graph_aggregator example-main.launch.xml\n

Terminal 2:

ros2 run diagnostic_graph_utils dump_node\n

Output:

|----|-------|----------------------------------|------|----------|\n| No | Level | Path                             | Type | Children |\n|----|-------|----------------------------------|------|----------|\n|  1 | OK    | /autoware/modes/stop             | ok   |          |\n|  2 | ERROR | /autoware/modes/autonomous       | and  | 8 9      |\n|  3 | OK    | /autoware/modes/local            | and  | 13       |\n|  4 | OK    | /autoware/modes/remote           | and  | 14       |\n|  5 | OK    | /autoware/modes/emergency_stop   | ok   |          |\n|  6 | OK    | /autoware/modes/comfortable_stop | and  | 9        |\n|  7 | ERROR | /autoware/modes/pull_over        | and  | 8 9      |\n|  8 | ERROR | /functions/pose_estimation       | and  | 10       |\n|  9 | OK    | /functions/obstacle_detection    | or   | 11 12    |\n| 10 | ERROR | /sensing/lidars/top              | diag |          |\n| 11 | OK    | /sensing/lidars/front            | diag |          |\n| 12 | OK    | /sensing/radars/front            | diag |          |\n| 13 | OK    | /external/joystick_command       | diag |          |\n| 14 | OK    | /external/remote_command         | diag |          |\n
"},{"location":"system/dummy_diag_publisher/","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#dummy_diag_publisher","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#purpose","title":"Purpose","text":"

This package outputs dummy diagnostic data for debugging and development.

"},{"location":"system/dummy_diag_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_diag_publisher/#outputs","title":"Outputs","text":"Name Type Description /diagnostics diagnostic_msgs::msgs::DiagnosticArray Diagnostics outputs"},{"location":"system/dummy_diag_publisher/#parameters","title":"Parameters","text":""},{"location":"system/dummy_diag_publisher/#node-parameters","title":"Node Parameters","text":"

The parameter DIAGNOSTIC_NAME must be a name that exists in the parameter YAML file. If the parameter status is given from the command line, the parameter is_active is automatically set to true.

Name Type Default Value Explanation Reconfigurable update_rate int 10 Timer callback frequency [Hz] false DIAGNOSTIC_NAME.is_active bool true Force update or not true DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher true"},{"location":"system/dummy_diag_publisher/#yaml-format-for-dummy_diag_publisher","title":"YAML format for dummy_diag_publisher","text":"

If the value is set to default, the default value will be used.
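
For illustration, here is a sketch of such a file, assuming the usual ROS 2 parameter file layout and reusing the velodyne_connection name from the reconfigure example below:

/**:\n  ros__parameters:\n    required_diags:\n      velodyne_connection:\n        is_active: true\n        status: \"OK\"\n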

Key Type Default Value Explanation required_diags.DIAGNOSTIC_NAME.is_active bool true Force update or not required_diags.DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher"},{"location":"system/dummy_diag_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/dummy_diag_publisher/#usage","title":"Usage","text":""},{"location":"system/dummy_diag_publisher/#launch","title":"launch","text":"
ros2 launch dummy_diag_publisher dummy_diag_publisher.launch.xml\n
"},{"location":"system/dummy_diag_publisher/#reconfigure","title":"reconfigure","text":"
ros2 param set /dummy_diag_publisher velodyne_connection.status \"Warn\"\nros2 param set /dummy_diag_publisher velodyne_connection.is_active true\n
"},{"location":"system/dummy_infrastructure/","title":"dummy_infrastructure","text":""},{"location":"system/dummy_infrastructure/#dummy_infrastructure","title":"dummy_infrastructure","text":"

This is a debug node for infrastructure communication.

"},{"location":"system/dummy_infrastructure/#usage","title":"Usage","text":"
ros2 launch dummy_infrastructure dummy_infrastructure.launch.xml\nros2 run rqt_reconfigure rqt_reconfigure\n
"},{"location":"system/dummy_infrastructure/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_infrastructure/#inputs","title":"Inputs","text":"Name Type Description ~/input/command_array tier4_v2x_msgs::msg::InfrastructureCommandArray Infrastructure command"},{"location":"system/dummy_infrastructure/#outputs","title":"Outputs","text":"Name Type Description ~/output/state_array tier4_v2x_msgs::msg::VirtualTrafficLightStateArray Virtual traffic light array"},{"location":"system/dummy_infrastructure/#parameters","title":"Parameters","text":""},{"location":"system/dummy_infrastructure/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period [Hz] use_first_command bool true Consider instrument id or not use_command_state bool false Consider command state or not instrument_id string `` Used as command id approval bool false set approval filed to ros param is_finalized bool false Stop at stop_line if finalization isn't completed"},{"location":"system/dummy_infrastructure/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/duplicated_node_checker/","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#duplicated-node-checker","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#purpose","title":"Purpose","text":"

This node monitors the ROS 2 environment and detects duplicated node names. The result is published as diagnostics.

"},{"location":"system/duplicated_node_checker/#standalone-startup","title":"Standalone Startup","text":"
ros2 launch duplicated_node_checker duplicated_node_checker.launch.xml\n
"},{"location":"system/duplicated_node_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The duplication statuses and the corresponding diagnostic statuses are as follows.

Duplication status Diagnostic status Description OK OK No duplication is detected Duplicated Detected ERROR Duplication is detected"},{"location":"system/duplicated_node_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/duplicated_node_checker/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/duplicated_node_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float The scanning and update frequency of the checker. 10 >2"},{"location":"system/duplicated_node_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_comfortable_stop_operator/","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#mrm_comfortable_stop_operator","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#purpose","title":"Purpose","text":"

MRM comfortable stop operator is a node that generates comfortable stop commands according to the comfortable stop MRM order.

"},{"location":"system/mrm_comfortable_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_comfortable_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_comfortable_stop_operator/#input","title":"Input","text":"Name Type Description ~/input/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order"},{"location":"system/mrm_comfortable_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/velocity_limit tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/velocity_limit/clear tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command"},{"location":"system/mrm_comfortable_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_comfortable_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 10 Timer callback frequency [Hz]"},{"location":"system/mrm_comfortable_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation min_acceleration double -1.0 Minimum acceleration for comfortable stop [m/s^2] max_jerk double 0.3 Maximum jerk for comfortable stop [m/s^3] min_jerk double -0.3 Minimum jerk for comfortable stop [m/s^3]"},{"location":"system/mrm_comfortable_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_emergency_stop_operator/","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#mrm_emergency_stop_operator","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#purpose","title":"Purpose","text":"

MRM emergency stop operator is a node that generates emergency stop commands according to the emergency stop MRM order.

"},{"location":"system/mrm_emergency_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_emergency_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_emergency_stop_operator/#input","title":"Input","text":"Name Type Description ~/input/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order ~/input/control/control_cmd autoware_control_msgs::msg::Control Control command output from the last node of the control component. Used for the initial value of the emergency stop command."},{"location":"system/mrm_emergency_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/mrm/emergency_stop/control_cmd autoware_control_msgs::msg::Control Emergency stop command"},{"location":"system/mrm_emergency_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_emergency_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 30 Timer callback frequency [Hz]"},{"location":"system/mrm_emergency_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation target_acceleration double -2.5 Target acceleration for emergency stop [m/s^2] target_jerk double -1.5 Target jerk for emergency stop [m/s^3]"},{"location":"system/mrm_emergency_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_handler/","title":"mrm_handler","text":""},{"location":"system/mrm_handler/#mrm_handler","title":"mrm_handler","text":""},{"location":"system/mrm_handler/#purpose","title":"Purpose","text":"

MRM Handler is a node that selects a proper MRM based on the system failure state contained in OperationModeAvailability.

"},{"location":"system/mrm_handler/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_handler/#state-transitions","title":"State Transitions","text":""},{"location":"system/mrm_handler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_handler/#input","title":"Input","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry Used to decide whether vehicle is stopped or not /system/operation_mode/availability tier4_system_msgs::msg::OperationModeAvailability Used to select proper MRM from system available mrm behavior contained in operationModeAvailability /vehicle/status/control_mode autoware_vehicle_msgs::msg::ControlModeReport Used to check vehicle mode: autonomous or manual /system/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM emergency stop operation is available /system/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM comfortable stop operation is available /system/mrm/pull_over_manager/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM pull over operation is available /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState Used to check whether the current operation mode is AUTO or STOP."},{"location":"system/mrm_handler/#output","title":"Output","text":"Name Type Description /system/emergency/gear_cmd autoware_vehicle_msgs::msg::GearCommand Required to execute proper MRM (send gear cmd) /system/emergency/hazard_lights_cmd autoware_vehicle_msgs::msg::HazardLightsCommand Required to execute proper MRM (send turn signal cmd) /system/fail_safe/mrm_state autoware_adapi_v1_msgs::msg::MrmState Inform MRM execution state and selected MRM behavior /system/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM emergency stop /system/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM comfortable stop /system/mrm/pull_over_manager/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM pull over"},{"location":"system/mrm_handler/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate integer Timer callback period. 10 N/A timeout_operation_mode_availability float If the input operation_mode_availability topic cannot be received for more than timeout_operation_mode_availability, vehicle will make an emergency stop. 0.5 N/A timeout_call_mrm_behavior float If a service is requested to the mrm operator and the response is not returned by timeout_call_mrm_behavior, the timeout occurs. 0.01 N/A timeout_cancel_mrm_behavior float If a service is requested to the mrm operator and the response is not returned by timeout_cancel_mrm_behavior, the timeout occurs. 0.01 N/A use_emergency_holding boolean If this parameter is true, the handler does not recover to the NORMAL state. False N/A timeout_emergency_recovery float If the duration of the EMERGENCY state is longer than timeout_emergency_recovery, it does not recover to the NORMAL state. 5.0 N/A use_parking_after_stopped boolean If this parameter is true, it will publish PARKING shift command. false N/A use_pull_over boolean If this parameter is true, operate pull over when latent faults occur. false N/A use_comfortable_stop boolean If this parameter is true, operate comfortable stop when latent faults occur. 
false N/A turning_hazard_on.emergency boolean If this parameter is true, hazard lamps will be turned on during the emergency state. true N/A"},{"location":"system/mrm_handler/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/system_diagnostic_monitor/","title":"system_diagnostic_monitor","text":""},{"location":"system/system_diagnostic_monitor/#system_diagnostic_monitor","title":"system_diagnostic_monitor","text":"

This package contains default diagnostic graph configurations and scripts for system integration.

"},{"location":"system/system_diagnostic_monitor/#configs","title":"Configs","text":"Name Description autoware-main.yaml Diagnostic graphs for basic monitoring of Autoware. autoware-psim.yaml Diagnostic graph with some units disabled for the simulator."},{"location":"system/system_diagnostic_monitor/#scripts","title":"Scripts","text":"Name Description component_state_diagnostics Node that converts component states to diagnostics."},{"location":"system/system_monitor/","title":"System Monitor for Autoware","text":""},{"location":"system/system_monitor/#system-monitor-for-autoware","title":"System Monitor for Autoware","text":"

Further improvement of system monitor functionality for Autoware.

"},{"location":"system/system_monitor/#description","title":"Description","text":"

This package provides the following nodes for system monitoring:

"},{"location":"system/system_monitor/#supported-architecture","title":"Supported architecture","text":""},{"location":"system/system_monitor/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/#how-to-use","title":"How to use","text":"

Use colcon build and launch in the same way as other packages.

colcon build\nsource install/setup.bash\nros2 launch system_monitor system_monitor.launch.xml\n

The CPU and GPU monitoring methods differ depending on the platform. CMake automatically chooses the sources to build according to the build environment. For example, if you build this package on an Intel platform, the CPU monitor and GPU monitor for the Intel platform are built.

"},{"location":"system/system_monitor/#ros-topics-published-by-system-monitor","title":"ROS topics published by system monitor","text":"

Every topic is published at a 1-minute interval.

[Usage] \u2713: Supported, -: Not supported

Node Message Intel arm64(tegra) arm64(raspi) Notes CPU Monitor CPU Temperature \u2713 \u2713 \u2713 CPU Usage \u2713 \u2713 \u2713 CPU Load Average \u2713 \u2713 \u2713 CPU Thermal Throttling \u2713 - \u2713 CPU Frequency \u2713 \u2713 \u2713 Notification of frequency only, normally error not generated. HDD Monitor HDD Temperature \u2713 \u2713 \u2713 HDD PowerOnHours \u2713 \u2713 \u2713 HDD TotalDataWritten \u2713 \u2713 \u2713 HDD RecoveredError \u2713 \u2713 \u2713 HDD Usage \u2713 \u2713 \u2713 HDD ReadDataRate \u2713 \u2713 \u2713 HDD WriteDataRate \u2713 \u2713 \u2713 HDD ReadIOPS \u2713 \u2713 \u2713 HDD WriteIOPS \u2713 \u2713 \u2713 HDD Connection \u2713 \u2713 \u2713 Memory Monitor Memory Usage \u2713 \u2713 \u2713 Net Monitor Network Connection \u2713 \u2713 \u2713 Network Usage \u2713 \u2713 \u2713 Notification of usage only, normally error not generated. Network CRC Error \u2713 \u2713 \u2713 Warning occurs when the number of CRC errors in the period reaches the threshold value. The number of CRC errors that occur is the same as the value that can be confirmed with the ip command. IP Packet Reassembles Failed \u2713 \u2713 \u2713 UDP Buf Errors \u2713 \u2713 \u2713 NTP Monitor NTP Offset \u2713 \u2713 \u2713 Process Monitor Tasks Summary \u2713 \u2713 \u2713 High-load Proc[0-9] \u2713 \u2713 \u2713 High-mem Proc[0-9] \u2713 \u2713 \u2713 GPU Monitor GPU Temperature \u2713 \u2713 - GPU Usage \u2713 \u2713 - GPU Memory Usage \u2713 - - GPU Thermal Throttling \u2713 - - GPU Frequency \u2713 \u2713 - For Intel platform, monitor whether current GPU clock is supported by the GPU. Voltage Monitor CMOS Battery Status \u2713 - - Battery Health for RTC and BIOS -"},{"location":"system/system_monitor/#ros-parameters","title":"ROS parameters","text":"

See ROS parameters.

"},{"location":"system/system_monitor/#notes","title":"Notes","text":""},{"location":"system/system_monitor/#cpu-monitor-for-intel-platform","title":"CPU monitor for intel platform","text":"

Thermal throttling events can be monitored by reading the contents of the MSR (Model Specific Register). Since access to the MSR is allowed only for root by default, this package provides the following approach to minimize security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Create a user to run 'msr_reader'.

    sudo adduser <username>\n
  2. Load kernel module 'msr' into your target system. The path '/dev/cpu/CPUNUM/msr' appears.

    sudo modprobe msr\n
  3. Allow user to access MSR with read-only privilege using the Access Control List (ACL).

    sudo setfacl -m u:<username>:r /dev/cpu/*/msr\n
  4. Assign capability to 'msr_reader' since the msr kernel module requires the rawio capability.

    sudo setcap cap_sys_rawio=ep install/system_monitor/lib/system_monitor/msr_reader\n
  5. Run 'msr_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/msr_reader\n
"},{"location":"system/system_monitor/#see-also","title":"See also","text":"

msr_reader

"},{"location":"system/system_monitor/#hdd-monitor","title":"HDD Monitor","text":"

Generally, S.M.A.R.T. information is used to monitor the HDD temperature and lifetime, and access to the disk device node is normally allowed only for the root user or the disk group. As with the CPU monitor, this package provides an approach to minimize security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting_1","title":"Instructions before starting","text":"
  1. Create a user to run 'hdd_reader'.

    sudo adduser <username>\n
  2. Add user to the disk group.

    sudo usermod -a -G disk <username>\n
  3. Assign capabilities to 'hdd_reader' since the SCSI kernel module requires the rawio capability to send the ATA PASS-THROUGH (12) command and the NVMe kernel module requires the admin capability to send an Admin Command.

    sudo setcap 'cap_sys_rawio=ep cap_sys_admin=ep' install/system_monitor/lib/system_monitor/hdd_reader\n
  4. Run 'hdd_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/hdd_reader\n
"},{"location":"system/system_monitor/#see-also_1","title":"See also","text":"

hdd_reader

"},{"location":"system/system_monitor/#gpu-monitor-for-intel-platform","title":"GPU Monitor for intel platform","text":"

Currently, the GPU monitor for the Intel platform supports only NVIDIA GPUs whose information can be accessed through the NVML API.

You also need to install the CUDA libraries. For installation instructions for CUDA 10.0, see the NVIDIA CUDA Installation Guide for Linux.

"},{"location":"system/system_monitor/#voltage-monitor-for-cmos-battery","title":"Voltage monitor for CMOS Battery","text":"

Some platforms have built-in batteries for the RTC and CMOS. This node determines the battery status from the result of executing cat /proc/driver/rtc. If lm-sensors is installed, its output can also be used; however, the output of sensors varies depending on the chipset, so it is necessary to set a string for extracting the corresponding voltage, as well as the voltage thresholds for warning and error. For example, suppose you want a warning when the voltage is less than 2.9 V and an error when it is less than 2.7 V. The output of sensors on the nct6106 chipset is as follows, where \"in7:\" is the voltage of the CMOS battery.

$ sensors\npch_cannonlake-virtual-0\nAdapter: Virtual device\ntemp1:        +42.0\u00b0C\n\nnct6106-isa-0a10\nAdapter: ISA adapter\nin0:           728.00 mV (min =  +0.00 V, max =  +1.74 V)\nin1:             1.01 V  (min =  +0.00 V, max =  +2.04 V)\nin2:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin3:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin4:             1.07 V  (min =  +0.00 V, max =  +2.04 V)\nin5:             1.05 V  (min =  +0.00 V, max =  +2.04 V)\nin6:             1.67 V  (min =  +0.00 V, max =  +2.04 V)\nin7:             3.06 V  (min =  +0.00 V, max =  +4.08 V)\nin8:             2.10 V  (min =  +0.00 V, max =  +4.08 V)\nfan1:          2789 RPM  (min =    0 RPM)\nfan2:             0 RPM  (min =    0 RPM)\n

The corresponding settings in voltage_monitor.param.yaml are as follows.

/**:\nros__parameters:\ncmos_battery_warn: 2.90\ncmos_battery_error: 2.70\ncmos_battery_label: \"in7:\"\n

The above values of 2.9 V and 2.7 V are examples; the appropriate values may vary depending on the motherboard and chipset. However, if the voltage of the lithium battery drops below 2.7 V, replacing it is recommended. In the above example, the messages output to the /diagnostics topic are as follows. If the voltage is below 2.9 V:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Low Battery\n

If the voltage is below 2.7 V:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Battery Died\n

Otherwise:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: OK\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: OK\n

If the CMOS battery voltage drops below cmos_battery_warn or cmos_battery_error, a warning is issued. If the battery dies, the RTC will stop working when the power is turned off; however, since the vehicle can still run, this is not treated as an error. An error would cause the vehicle to stop, but there is no need to stop immediately in this case; the actual battery state can be distinguished by the value \"Low Battery\" or \"Battery Died\".

"},{"location":"system/system_monitor/#uml-diagrams","title":"UML diagrams","text":"

See Class diagrams. See Sequence diagrams.

"},{"location":"system/system_monitor/docs/class_diagrams/","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#class-diagrams","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/hdd_reader/","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#hdd_reader","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#name","title":"Name","text":"

hdd_reader - Read S.M.A.R.T. information for monitoring HDD temperature and lifetime

"},{"location":"system/system_monitor/docs/hdd_reader/#synopsis","title":"Synopsis","text":"

hdd_reader [OPTION]

"},{"location":"system/system_monitor/docs/hdd_reader/#description","title":"Description","text":"

Read S.M.A.R.T. information for monitoring the HDD temperature and lifetime. This runs as a daemon process and listens on a TCP/IP port (7635 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen on

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/hdd_reader/#notes","title":"Notes","text":"

The 'hdd_reader' accesses only the minimal data needed to get the model number, serial number, HDD temperature, and HDD lifetime. This approach limits its functionality; however, it can be expanded for further improvements and considerations in the future.

"},{"location":"system/system_monitor/docs/hdd_reader/#ata","title":"[ATA]","text":"Purpose Name Length Model number, Serial number IDENTIFY DEVICE data 256 words(512 bytes) HDD temperature, life of HDD SMART READ DATA 256 words(512 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#nvme","title":"[NVMe]","text":"Purpose Name Length Model number, Serial number Identify Controller data structure 4096 bytes HDD temperature, life of HDD SMART / Health Information 36 Dword(144 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#operation-confirmed-drives","title":"Operation confirmed drives","text":""},{"location":"system/system_monitor/docs/msr_reader/","title":"msr_reader","text":""},{"location":"system/system_monitor/docs/msr_reader/#msr_reader","title":"msr_reader","text":""},{"location":"system/system_monitor/docs/msr_reader/#name","title":"Name","text":"

msr_reader - Read MSR register for monitoring thermal throttling events

"},{"location":"system/system_monitor/docs/msr_reader/#synopsis","title":"Synopsis","text":"

msr_reader [OPTION]

"},{"location":"system/system_monitor/docs/msr_reader/#description","title":"Description","text":"

Read the MSR register for monitoring thermal throttling events. This runs as a daemon process and listens on a TCP/IP port (7634 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen on

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/msr_reader/#notes","title":"Notes","text":"

The 'msr_reader' accesses only the minimal data needed to get the thermal throttling status. This approach limits its functionality; however, it can be expanded for further improvements and considerations in the future.

Register Address Name Length 1B1H IA32_PACKAGE_THERM_STATUS 64bit

For details please see the documents below.

"},{"location":"system/system_monitor/docs/msr_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/docs/ros_parameters/","title":"ROS parameters","text":""},{"location":"system/system_monitor/docs/ros_parameters/#ros-parameters","title":"ROS parameters","text":""},{"location":"system/system_monitor/docs/ros_parameters/#cpu-monitor","title":"CPU Monitor","text":"

cpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when CPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when CPU temperature reaches a specified value or higher. usage_warn float %(1e-2) 0.90 Generates warning when CPU usage reaches a specified value or higher and lasts for usage_warn_count counts. usage_error float %(1e-2) 1.00 Generates error when CPU usage reaches a specified value or higher and lasts for usage_error_count counts. usage_warn_count int n/a 2 Generates warning when CPU usage reaches usage_warn value or higher and lasts for the specified number of counts. usage_error_count int n/a 2 Generates error when CPU usage reaches usage_error value or higher and lasts for the specified number of counts. load1_warn float %(1e-2) 0.90 Generates warning when load average 1min reaches a specified value or higher. load5_warn float %(1e-2) 0.80 Generates warning when load average 5min reaches a specified value or higher. msr_reader_port int n/a 7634 Port number to connect to msr_reader."},{"location":"system/system_monitor/docs/ros_parameters/#hdd-monitor","title":"HDD Monitor","text":"

hdd_monitor:

\u00a0\u00a0disks:

Name Type Unit Default Notes name string n/a none The disk name to monitor temperature. (e.g. /dev/sda) temp_attribute_id int n/a 0xC2 S.M.A.R.T attribute ID of temperature. temp_warn float DegC 55.0 Generates warning when HDD temperature reaches a specified value or higher. temp_error float DegC 70.0 Generates error when HDD temperature reaches a specified value or higher. power_on_hours_attribute_id int n/a 0x09 S.M.A.R.T attribute ID of power-on hours. power_on_hours_warn int Hour 3000000 Generates warning when HDD power-on hours reaches a specified value or higher. total_data_written_attribute_id int n/a 0xF1 S.M.A.R.T attribute ID of total data written. total_data_written_warn int depends on device 4915200 Generates warning when HDD total data written reaches a specified value or higher. total_data_written_safety_factor int %(1e-2) 0.05 Safety factor of HDD total data written. recovered_error_attribute_id int n/a 0xC3 S.M.A.R.T attribute ID of recovered error. recovered_error_warn int n/a 1 Generates warning when HDD recovered error reaches a specified value or higher. read_data_rate_warn float MB/s 360.0 Generates warning when HDD read data rate reaches a specified value or higher. write_data_rate_warn float MB/s 103.5 Generates warning when HDD write data rate reaches a specified value or higher. read_iops_warn float IOPS 63360.0 Generates warning when HDD read IOPS reaches a specified value or higher. write_iops_warn float IOPS 24120.0 Generates warning when HDD write IOPS reaches a specified value or higher.

hdd_monitor:

Name Type Unit Default Notes hdd_reader_port int n/a 7635 Port number to connect to hdd_reader. usage_warn float %(1e-2) 0.95 Generates warning when disk usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when disk usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#memory-monitor","title":"Memory Monitor","text":"

mem_monitor:

Name Type Unit Default Notes usage_warn float %(1e-2) 0.95 Generates warning when physical memory usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when physical memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#net-monitor","title":"Net Monitor","text":"

net_monitor:

Name Type Unit Default Notes devices list[string] n/a none The name of the network interface to monitor. (e.g. eth0, * for all network interfaces) monitor_program string n/a greengrass Program name to be monitored by nethogs. crc_error_check_duration int sec 1 CRC error check duration. crc_error_count_threshold int n/a 1 Generates warning when the count of CRC errors during the CRC error check duration reaches a specified value or higher. reassembles_failed_check_duration int sec 1 IP packet reassembles failed check duration. reassembles_failed_check_count int n/a 1 Generates warning when the count of IP packet reassembles failed during the IP packet reassembles failed check duration reaches a specified value or higher. udp_buf_errors_check_duration int sec 1 UDP buf errors check duration. udp_buf_errors_check_count int n/a 1 Generates warning when the count of UDP buf errors during udp_buf_errors_check_duration reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#ntp-monitor","title":"NTP Monitor","text":"

ntp_monitor:

Name Type Unit Default Notes server string n/a ntp.ubuntu.com The name of the NTP server to synchronize date and time. (e.g. ntp.nict.jp for Japan) offset_warn float sec 0.1 Generates warning when NTP offset reaches a specified value or higher. (default is 100ms) offset_error float sec 5.0 Generates error when NTP offset reaches a specified value or higher. (default is 5sec)"},{"location":"system/system_monitor/docs/ros_parameters/#process-monitor","title":"Process Monitor","text":"

process_monitor:

Name Type Unit Default Notes num_of_procs int n/a 5 The number of processes to generate High-load Proc[0-9] and High-mem Proc[0-9]."},{"location":"system/system_monitor/docs/ros_parameters/#gpu-monitor","title":"GPU Monitor","text":"

gpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when GPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when GPU temperature reaches a specified value or higher. gpu_usage_warn float %(1e-2) 0.90 Generates warning when GPU usage reaches a specified value or higher. gpu_usage_error float %(1e-2) 1.00 Generates error when GPU usage reaches a specified value or higher. memory_usage_warn float %(1e-2) 0.90 Generates warning when GPU memory usage reaches a specified value or higher. memory_usage_error float %(1e-2) 1.00 Generates error when GPU memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#voltage-monitor","title":"Voltage Monitor","text":"

voltage_monitor:

Name Type Unit Default Notes cmos_battery_warn float volt 2.9 Generates warning when the voltage of the CMOS battery falls below this value. cmos_battery_error float volt 2.7 Generates error when the voltage of the CMOS battery falls below this value. cmos_battery_label string n/a \"\" Voltage label string in the sensors command output. If empty, no voltage will be checked."},{"location":"system/system_monitor/docs/seq_diagrams/","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#sequence-diagrams","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#ros-topics-cpu-monitor","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-temperature","title":"CPU Temperature","text":"

/diagnostics/cpu_monitor: CPU Temperature

[summary]

level message OK OK

[values]

key (example) value (example) Package id 0, Core [0-9], thermal_zone[0-9] 50.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-usage","title":"CPU Usage","text":"

/diagnostics/cpu_monitor: CPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) CPU [all,0-9]: status OK / high load / very high load CPU [all,0-9]: usr 2.00% CPU [all,0-9]: nice 0.00% CPU [all,0-9]: sys 1.00% CPU [all,0-9]: idle 97.00%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-load-average","title":"CPU Load Average","text":"

/diagnostics/cpu_monitor: CPU Load Average

[summary]

level message OK OK WARN high load

[values]

key value (example) 1min 14.50% 5min 14.55% 15min 9.67%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-thermal-throttling","title":"CPU Thermal Throttling","text":"

Intel and raspi platform only. Tegra platform not supported.

/diagnostics/cpu_monitor: CPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values for intel platform]

key value (example) CPU [0-9]: Pkg Thermal Status OK / throttling

[values for raspi platform]

key value (example) status All clear / Currently throttled / Soft temperature limit active"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-frequency","title":"CPU Frequency","text":"

/diagnostics/cpu_monitor: CPU Frequency

[summary]

level message OK OK

[values]

key value (example) CPU [0-9]: clock 2879MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/","title":"ROS topics: GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_gpu_monitor/#ros-topics-gpu-monitor","title":"ROS topics: GPU Monitor","text":"

Intel and tegra platform only. Raspi platform not supported.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-temperature","title":"GPU Temperature","text":"

/diagnostics/gpu_monitor: GPU Temperature

[summary]

level message OK OK WARN warm ERROR hot

[values]

key (example) value (example) GeForce GTX 1650, thermal_zone[0-9] 46.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-usage","title":"GPU Usage","text":"

/diagnostics/gpu_monitor: GPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650, gpu.[0-9] GPU [0-9]: usage 19.0%

*key: gpu.[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-memory-usage","title":"GPU Memory Usage","text":"

Intel platform only. There is no separate GPU memory on tegra; both the CPU and GPU use the CPU memory.

/diagnostics/gpu_monitor: GPU Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: usage 13.0% GPU [0-9]: total 3G GPU [0-9]: used 1G GPU [0-9]: free 2G"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-thermal-throttling","title":"GPU Thermal Throttling","text":"

Intel platform only. Tegra platform not supported.

/diagnostics/gpu_monitor: GPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values]

key value (example) GPU [0-9]: status OK / throttling GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz GPU [0-9]: reasons GpuIdle / SwThermalSlowdown etc."},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-frequency","title":"GPU Frequency","text":"

/diagnostics/gpu_monitor: GPU Frequency

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#intel-platform","title":"Intel platform","text":"

[summary]

level message OK OK WARN unsupported clock

[values]

key value (example) GPU [0-9]: status OK / unsupported clock GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#tegra-platform","title":"Tegra platform","text":"

[summary]

level message OK OK

[values]

key (example) value (example) GPU 17000000.gv11b: clock 318 MHz"},{"location":"system/system_monitor/docs/topics_hdd_monitor/","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#ros-topics-hdd-monitor","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-temperature","title":"HDD Temperature","text":"

/diagnostics/hdd_monitor: HDD Temperature

[summary]

level message OK OK WARN hot ERROR critical hot

[values]

key value (example) HDD [0-9]: status OK / hot / critical hot HDD [0-9]: name /dev/nvme0 HDD [0-9]: model SAMSUNG MZVLB1T0HBLR-000L7 HDD [0-9]: serial S4EMNF0M820682 HDD [0-9]: temperature 37.0 DegC not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-poweronhours","title":"HDD PowerOnHours","text":"

/diagnostics/hdd_monitor: HDD PowerOnHours

[summary]

level message OK OK WARN lifetime limit

[values]

key value (example) HDD [0-9]: status OK / lifetime limit HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: power on hours 4834 Hours not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-totaldatawritten","title":"HDD TotalDataWritten","text":"

/diagnostics/hdd_monitor: HDD TotalDataWritten

[summary]

level message OK OK WARN warranty period

[values]

key value (example) HDD [0-9]: status OK / warranty period HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: total data written 146295330 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-recoverederror","title":"HDD RecoveredError","text":"

/diagnostics/hdd_monitor: HDD RecoveredError

[summary]

level message OK OK WARN high soft error rate

[values]

key value (example) HDD [0-9]: status OK / high soft error rate HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: recovered error 0 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-usage","title":"HDD Usage","text":"

/diagnostics/hdd_monitor: HDD Usage

[summary]

level message OK OK WARN low disk space ERROR very low disk space

[values]

key value (example) HDD [0-9]: status OK / low disk space / very low disk space HDD [0-9]: filesystem /dev/nvme0n1p4 HDD [0-9]: size 264G HDD [0-9]: used 172G HDD [0-9]: avail 749G HDD [0-9]: use 69% HDD [0-9]: mounted on /"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readdatarate","title":"HDD ReadDataRate","text":"

/diagnostics/hdd_monitor: HDD ReadDataRate

[summary]

level message OK OK WARN high data rate of read

[values]

key value (example) HDD [0-9]: status OK / high data rate of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of read 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writedatarate","title":"HDD WriteDataRate","text":"

/diagnostics/hdd_monitor: HDD WriteDataRate

[summary]

level message OK OK WARN high data rate of write

[values]

key value (example) HDD [0-9]: status OK / high data rate of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of write 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readiops","title":"HDD ReadIOPS","text":"

/diagnostics/hdd_monitor: HDD ReadIOPS

[summary]

level message OK OK WARN high IOPS of read

[values]

key value (example) HDD [0-9]: status OK / high IOPS of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of read 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writeiops","title":"HDD WriteIOPS","text":"

/diagnostics/hdd_monitor: HDD WriteIOPS

[summary]

level message OK OK WARN high IOPS of write

[values]

key value (example) HDD [0-9]: status OK / high IOPS of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of write 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-connection","title":"HDD Connection","text":"

/diagnostics/hdd_monitor: HDD Connection

[summary]

level message OK OK WARN not connected

[values]

key value (example) HDD [0-9]: status OK / not connected HDD [0-9]: name /dev/nvme0 HDD [0-9]: mount point /"},{"location":"system/system_monitor/docs/topics_mem_monitor/","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#ros-topics-memory-monitor","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#memory-usage","title":"Memory Usage","text":"

/diagnostics/mem_monitor: Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) Mem: usage 29.72% Mem: total 31.2G Mem: used 6.0G Mem: free 20.7G Mem: shared 2.9G Mem: buff/cache 4.5G Mem: available 21.9G Swap: total 2.0G Swap: used 218M Swap: free 1.8G Total: total 33.2G Total: used 6.2G Total: free 22.5G Total: used+ 9.1G"},{"location":"system/system_monitor/docs/topics_net_monitor/","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#ros-topics-net-monitor","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#network-connection","title":"Network Connection","text":"

/diagnostics/net_monitor: Network Connection

[summary]

level message OK OK WARN no such device

[values]

key value (example) Network [0-9]: status OK / no such device Network [0-9]: name wlp82s0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-usage","title":"Network Usage","text":"

/diagnostics/net_monitor: Network Usage

[summary]

level message OK OK

[values]

key value (example) Network [0-9]: status OK Network [0-9]: interface name wlp82s0 Network [0-9]: rx_usage 0.00% Network [0-9]: tx_usage 0.00% Network [0-9]: rx_traffic 0.00 MB/s Network [0-9]: tx_traffic 0.00 MB/s Network [0-9]: capacity 400.0 MB/s Network [0-9]: mtu 1500 Network [0-9]: rx_bytes 58455228 Network [0-9]: rx_errors 0 Network [0-9]: tx_bytes 11069136 Network [0-9]: tx_errors 0 Network [0-9]: collisions 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-traffic","title":"Network Traffic","text":"

/diagnostics/net_monitor: Network Traffic

[summary]

level message OK OK

[values when specified program is detected]

key value (example) nethogs [0-9]: program /lambda/greengrassSystemComponents/1384/999 nethogs [0-9]: sent (KB/Sec) 1.13574 nethogs [0-9]: received (KB/Sec) 0.261914

[values when error is occurring]

key value (example) error execve failed: No such file or directory"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-crc-error","title":"Network CRC Error","text":"

/diagnostics/net_monitor: Network CRC Error

[summary]

level message OK OK WARN CRC error

[values]

key value (example) Network [0-9]: interface name wlp82s0 Network [0-9]: total rx_crc_errors 0 Network [0-9]: rx_crc_errors per unit time 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#ip-packet-reassembles-failed","title":"IP Packet Reassembles Failed","text":"

/diagnostics/net_monitor: IP Packet Reassembles Failed

[summary]

level message OK OK WARN reassembles failed

[values]

key value (example) total packet reassembles failed 0 packet reassembles failed per unit time 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#udp-buf-errors","title":"UDP Buf Errors","text":"

/diagnostics/net_monitor: UDP Buf Errors

[summary]

level message OK OK WARN UDP buf errors

[values]

key value (example) total UDP rcv buf errors 0 UDP rcv buf errors per unit time 0 total UDP snd buf errors 0 UDP snd buf errors per unit time 0"},{"location":"system/system_monitor/docs/topics_ntp_monitor/","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ros-topics-ntp-monitor","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ntp-offset","title":"NTP Offset","text":"

/diagnostics/ntp_monitor: NTP Offset

[summary]

level message OK OK WARN high ERROR too high

[values]

key value (example) NTP Offset -0.013181 sec NTP Delay 0.053880 sec"},{"location":"system/system_monitor/docs/topics_process_monitor/","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#ros-topics-process-monitor","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#tasks-summary","title":"Tasks Summary","text":"

/diagnostics/process_monitor: Tasks Summary

[summary]

level message OK OK

[values]

key value (example) total 409 running 2 sleeping 321 stopped 0 zombie 0"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-load-proc0-9","title":"High-load Proc[0-9]","text":"

/diagnostics/process_monitor: High-load Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /usr/lib/firefox/firefox %CPU 37.5 %MEM 2.1 PID 14062 USER autoware PR 20 NI 0 VIRT 3461152 RES 669052 SHR 481208 S S TIME+ 23:57.49"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-mem-proc0-9","title":"High-mem Proc[0-9]","text":"

/diagnostics/process_monitor: High-mem Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /snap/multipass/1784/usr/bin/qemu-system-x86_64 %CPU 0 %MEM 2.5 PID 1565 USER root PR 20 NI 0 VIRT 3722320 RES 812432 SHR 20340 S S TIME+ 0:22.84"},{"location":"system/system_monitor/docs/topics_voltage_monitor/","title":"ROS topics: Voltage Monitor","text":""},{"location":"system/system_monitor/docs/topics_voltage_monitor/#ros-topics-voltage-monitor","title":"ROS topics: Voltage Monitor","text":"

\"CMOS Battery Status\" and \"CMOS battery voltage\" are exclusive. Only one or the other is generated. Which one is generated depends on the value of cmos_battery_label.

"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-status","title":"CMOS Battery Status","text":"

/diagnostics/voltage_monitor: CMOS Battery Status

[summary]

level message OK OK WARN Battery Dead

[values]

key (example) value (example) CMOS battery status OK / Battery Dead


"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-voltage","title":"CMOS Battery Voltage","text":"

/diagnostics/voltage_monitor: CMOS battery voltage

[summary]

level message OK OK WARN Low Battery WARN Battery Died

[values]

key value (example) CMOS battery voltage 3.06"},{"location":"system/system_monitor/docs/traffic_reader/","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#traffic_reader","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#name","title":"Name","text":"

traffic_reader - monitoring network traffic by process

"},{"location":"system/system_monitor/docs/traffic_reader/#synopsis","title":"Synopsis","text":"

traffic_reader [OPTION]

"},{"location":"system/system_monitor/docs/traffic_reader/#description","title":"Description","text":"

Monitors network traffic per process. This program runs as a daemon and listens on a TCP/IP port (7636 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to
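
For example, to start the daemon explicitly on the default port (a minimal sketch based on the synopsis and options above):

traffic_reader -p 7636\n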

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/traffic_reader/#notes","title":"Notes","text":"

The 'traffic_reader' requires the nethogs command.

"},{"location":"system/system_monitor/docs/traffic_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/topic_state_monitor/","title":"topic_state_monitor","text":""},{"location":"system/topic_state_monitor/#topic_state_monitor","title":"topic_state_monitor","text":""},{"location":"system/topic_state_monitor/#purpose","title":"Purpose","text":"

This node monitors an input topic for abnormalities such as timeouts and low frequency. The resulting topic status is published as diagnostics.

"},{"location":"system/topic_state_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The types of topic status and the corresponding diagnostic status are as follows.

Topic status Diagnostic status Description OK OK The topic has no abnormalities NotReceived ERROR The topic has not been received yet WarnRate WARN The frequency of the topic is dropped ErrorRate ERROR The frequency of the topic is significantly dropped Timeout ERROR The topic subscription is stopped for a certain time"},{"location":"system/topic_state_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/topic_state_monitor/#input","title":"Input","text":"Name Type Description any name any type Subscribe target topic to monitor"},{"location":"system/topic_state_monitor/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/topic_state_monitor/#parameters","title":"Parameters","text":""},{"location":"system/topic_state_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description topic string - Name of target topic topic_type string - Type of target topic (used if the topic is not transform) frame_id string - Frame ID of transform parent (used if the topic is transform) child_frame_id string - Frame ID of transform child (used if the topic is transform) transient_local bool false QoS policy of topic subscription (Transient Local/Volatile) best_effort bool false QoS policy of topic subscription (Best Effort/Reliable) diag_name string - Name used for the diagnostics to publish update_rate double 10.0 Timer callback period [Hz]"},{"location":"system/topic_state_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description warn_rate double 0.5 If the topic rate is lower than this value, the topic status becomes WarnRate error_rate double 0.1 If the topic rate is lower than this value, the topic status becomes ErrorRate timeout double 1.0 If the topic subscription is stopped for more than this time [s], the topic status becomes Timeout window_size int 10 Window size of target topic for calculating frequency"},{"location":"system/topic_state_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/velodyne_monitor/","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#velodyne_monitor","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#purpose","title":"Purpose","text":"

This node monitors the status of Velodyne LiDARs. The result of the status is published as diagnostics. Take care not to use these diagnostics to decide whether the LiDAR has an error. Please read Assumptions / Known limits for the detailed reason.

"},{"location":"system/velodyne_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The status of Velodyne LiDAR can be retrieved from http://[ip_address]/cgi/{info, settings, status, diag}.json.
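
For example, the diagnostic information can be fetched manually (a minimal sketch using the default IP address from the parameter table below):

curl http://192.168.1.201/cgi/diag.json\n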

The types of abnormal status and the corresponding diagnostic status are as follows.

Abnormal status Diagnostic status No abnormality OK Top board temperature is too cold ERROR Top board temperature is cold WARN Top board temperature is too hot ERROR Top board temperature is hot WARN Bottom board temperature is too cold ERROR Bottom board temperature is cold WARN Bottom board temperature is too hot ERROR Bottom board temperature is hot WARN Rpm(Rotations per minute) of the motor is too low ERROR Rpm(Rotations per minute) of the motor is low WARN Connection error (cannot get Velodyne LiDAR status) ERROR"},{"location":"system/velodyne_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/velodyne_monitor/#input","title":"Input","text":"

None

"},{"location":"system/velodyne_monitor/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/velodyne_monitor/#parameters","title":"Parameters","text":""},{"location":"system/velodyne_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description timeout double 0.5 Timeout for HTTP request to get Velodyne LiDAR status [s]"},{"location":"system/velodyne_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description ip_address string \"192.168.1.201\" IP address of target Velodyne LiDAR temp_cold_warn double -5.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes WARN [\u00b0C] temp_cold_error double -10.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes ERROR [\u00b0C] temp_hot_warn double 75.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes WARN [\u00b0C] temp_hot_error double 80.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes ERROR [\u00b0C] rpm_ratio_warn double 0.80 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes WARN rpm_ratio_error double 0.70 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes ERROR"},{"location":"system/velodyne_monitor/#config-files","title":"Config files","text":"

Config files for several Velodyne models are provided. The temp_*** parameters are set with reference to the operational temperature in each datasheet. The temp_hot_*** parameters of each model are set up to 20 \u00b0C above the upper operational temperature. VLP-16.param.yaml is currently used as the default argument because the VLP-16 has the lowest specs.

Model Name Config name Operational Temperature [\u2103] VLP-16 VLP-16.param.yaml -10 to 60 VLP-32C VLP-32C.param.yaml -20 to 60 VLS-128 VLS-128.param.yaml -20 to 60 Velarray M1600 Velarray_M1600.param.yaml -40 to 85 HDL-32E HDL-32E.param.yaml -10 to 60"},{"location":"system/velodyne_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This node uses the http_client and requests the results via the GET method. It takes a few seconds to get the results, or a timeout exception is generated if the GET request does not succeed. This occurs frequently, and the diagnostics aggregator then outputs STALE. Therefore, we recommend not using these results to decide whether the LiDAR has an error, and only monitoring them to confirm the LiDAR status.

"},{"location":"tools/reaction_analyzer/","title":"Reaction Analyzer","text":""},{"location":"tools/reaction_analyzer/#reaction-analyzer","title":"Reaction Analyzer","text":""},{"location":"tools/reaction_analyzer/#description","title":"Description","text":"

The main purpose of the reaction analyzer package is to measure the reaction times of various nodes within a ROS-based autonomous driving simulation environment by subscribing to pre-determined topics. This tool is particularly useful for evaluating the performance of perception, planning, and control pipelines in response to dynamic changes in the environment, such as sudden obstacles. To measure both control outputs and perception outputs, the node is divided into two running modes (running_mode): planning_control and perception_planning.

"},{"location":"tools/reaction_analyzer/#planning-control-mode","title":"Planning Control Mode","text":"

In this mode, the reaction analyzer creates a dummy publisher for the PredictedObjects and PointCloud2 topics. At the beginning of the test, it publishes the initial position of the ego vehicle and the goal position to set up the test environment. Then, it spawns a sudden obstacle in front of the ego vehicle. After the obstacle is spawned, it starts to search for reaction messages from the planning and control nodes on the pre-determined topics. When all the topics have reacted, it calculates the reaction time of the nodes and the statistics by comparing the reacted_times of each node with spawn_cmd_time, and it creates a CSV file to store the results.

"},{"location":"tools/reaction_analyzer/#perception-planning-mode","title":"Perception Planning Mode","text":"

In this mode, the reaction analyzer reads rosbag files recorded from AWSIM and creates a topic publisher for each topic inside the rosbag to replay it. It reads two rosbag files: path_bag_without_object and path_bag_with_object. First, it replays path_bag_without_object to set the initial position of the ego vehicle and the goal position. After spawn_time_after_init seconds, it replays path_bag_with_object to spawn a sudden obstacle in front of the ego vehicle. After the obstacle is spawned, it starts to search for reaction messages from the perception and planning nodes on the pre-determined topics. When all the topics have reacted, it calculates the reaction time of the nodes and the statistics by comparing the reacted_times of each node with spawn_cmd_time, and it creates a CSV file to store the results.

"},{"location":"tools/reaction_analyzer/#point-cloud-publisher-type","title":"Point Cloud Publisher Type","text":"

To better analyze the Perception & Sensing pipeline, the reaction analyzer can publish the point cloud messages in 3 different ways: async_header_sync_publish, sync_header_sync_publish, or async_publish. (T is the period of the lidar's output.)
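
For reference, a minimal sketch of how these options might be set in the parameter file (parameter names are taken from the parameter table below; the values are illustrative):

topic_publisher:\n  pointcloud_publisher:\n    pointcloud_publisher_type: async_header_sync_publish\n    pointcloud_publisher_period: 0.1\n    publish_only_pointcloud_with_object: false\n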

"},{"location":"tools/reaction_analyzer/#usage","title":"Usage","text":"

The common parameters you need to define for both running modes are output_file_path, test_iteration, and the reaction_chain list. output_file_path is the path where the results and statistics will be stored. test_iteration defines how many tests will be performed. The reaction_chain list is the list of pre-defined topics whose reaction times you want to measure.
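
A minimal sketch of these common parameters (names from the parameter table below; values are illustrative):

output_file_path: /tmp/reaction_analyzer_results\ntest_iteration: 10\n# reaction_chain: list of the nodes with their topics and message types\n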

IMPORTANT: Ensure the reaction_chain list is correctly defined.

"},{"location":"tools/reaction_analyzer/#prepared-test-environment","title":"Prepared Test Environment","text":""},{"location":"tools/reaction_analyzer/#planning-control-mode_1","title":"Planning Control Mode","text":"
ros2 launch reaction_analyzer reaction_analyzer.launch.xml running_mode:=planning_control vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit map_path:=[MAP_PATH]\n

After the command, the simple_planning_simulator and the reaction_analyzer will be launched, and the test will start automatically. After the test is completed, the results will be stored in the output_file_path you defined.

"},{"location":"tools/reaction_analyzer/#perception-planning-mode_1","title":"Perception Planning Mode","text":"
ros2 launch reaction_analyzer reaction_analyzer.launch.xml running_mode:=perception_planning vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=[MAP_PATH]\n

After the command, the e2e_simulator and the reaction_analyzer will be launched, and the test will start automatically. After the test is completed, the results will be stored in the output_file_path you defined.

"},{"location":"tools/reaction_analyzer/#prepared-test-environment_1","title":"Prepared Test Environment","text":"

Scene without object:

Scene with object:

"},{"location":"tools/reaction_analyzer/#custom-test-environment","title":"Custom Test Environment","text":"

If you want to run the reaction analyzer in your custom test environment, you need to redefine some of the parameters: initialization_pose, entity_params, goal_pose, and topic_publisher (for perception_planning mode).

ros2 launch autoware_launch e2e_simulator.launch.xml vehicle_model:=sample_vehicle sensor_model:=awsim_labs_sensor_kit map_path:=[MAP_PATH]\n

After localizing the EGO and dummy vehicle, write the positions of these entities in the map frame into reaction_analyzer.param.yaml. To achieve this:

PS: initialization_pose is only valid for planning_control mode.

ros2 bag record --all\n

NOTE: You should record the rosbags in the same environment and with the same position of the EGO vehicle. You do not need to run Autoware while recording.

"},{"location":"tools/reaction_analyzer/#results","title":"Results","text":"

The results will be stored in CSV format and written to the output_file_path you defined. The file shows each pipeline of Autoware by using the header timestamps of the messages, and it reports the Node Latency, Pipeline Latency, and Total Latency for each node.

"},{"location":"tools/reaction_analyzer/#parameters","title":"Parameters","text":"Name Type Description timer_period double [s] Period for the main processing timer. test_iteration int Number of iterations for the test. output_file_path string Directory path where test results and statistics will be stored. spawn_time_after_init double [s] Time delay after initialization before spawning objects. Only valid perception_planning mode. spawn_distance_threshold double [m] Distance threshold for spawning objects. Only valid planning_control mode. poses.initialization_pose struct Initial pose of the vehicle, containing x, y, z, roll, pitch, and yaw fields. Only valid planning_control mode. poses.entity_params struct Parameters for entities (e.g., obstacles), containing x, y, z, roll, pitch, yaw, x_dimension, y_dimension, and z_dimension. poses.goal_pose struct Goal pose of the vehicle, containing x, y, z, roll, pitch, and yaw fields. topic_publisher.path_bag_without_object string Path to the ROS bag file without objects. Only valid perception_planning mode. topic_publisher.path_bag_with_object string Path to the ROS bag file with objects. Only valid perception_planning mode. topic_publisher.spawned_pointcloud_sampling_distance double [m] Sampling distance for point clouds of spawned objects. Only valid planning_control mode. topic_publisher.dummy_perception_publisher_period double [s] Publishing period for the dummy perception data. Only valid planning_control mode. topic_publisher.pointcloud_publisher.pointcloud_publisher_type string Defines how the PointCloud2 messages are going to be published. Modes explained above. topic_publisher.pointcloud_publisher.pointcloud_publisher_period double [s] Publishing period of the PointCloud2 messages. topic_publisher.pointcloud_publisher.publish_only_pointcloud_with_object bool Default false. Publish only the point cloud messages with the object. reaction_params.first_brake_params.debug_control_commands bool Debug publish flag. reaction_params.first_brake_params.control_cmd_buffer_time_interval double [s] Time interval for buffering control commands. reaction_params.first_brake_params.min_number_descending_order_control_cmd int Minimum number of control commands in descending order for triggering brake. reaction_params.first_brake_params.min_jerk_for_brake_cmd double [m/s\u00b3] Minimum jerk value for issuing a brake command. reaction_params.search_zero_vel_params.max_looking_distance double [m] Maximum looking distance for zero velocity on trajectory reaction_params.search_entity_params.search_radius double [m] Searching radius for spawned entity. Distance between ego pose and entity pose. reaction_chain struct List of the nodes with their topics and topic's message types."},{"location":"tools/reaction_analyzer/#limitations","title":"Limitations","text":" "},{"location":"tools/reaction_analyzer/#future-improvements","title":"Future improvements","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/","title":"accel_brake_map_calibrator","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#accel_brake_map_calibrator","title":"accel_brake_map_calibrator","text":"

The role of this node is to automatically calibrate accel_map.csv / brake_map.csv used in the autoware_raw_vehicle_cmd_converter node.

The base map, which is the Lexus map by default, is updated iteratively with the loaded driving data.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#how-to-calibrate","title":"How to calibrate","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the autoware_accel_brake_map_calibrator with the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g., using a joystick).

ros2 launch autoware_accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true\n

Or if you want to use rosbag files, run the following commands.

ros2 launch autoware_accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true use_sim_time:=true\nros2 bag play <rosbag_file> --clock\n

During calibration, if the parameter progress_file_output is set to true, a log file is output to [directory of autoware_accel_brake_map_calibrator]/config/ . You can also see the accel and brake maps in [directory of autoware_accel_brake_map_calibrator]/config/accel_map.csv and [directory of autoware_accel_brake_map_calibrator]/config/brake_map.csv after calibration.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#calibration-plugin","title":"Calibration plugin","text":"

The rviz:=true option displays RViz with the calibration plugin shown below.

The current status (velocity and pedal) is shown in the plugin. The color of the current cell turns green or red depending on whether the current data is valid or invalid. Data that does not satisfy the following conditions is considered invalid and will not be used for estimation, since aggressive data (e.g., when the pedal is moving fast) causes poor calibration accuracy.

The detailed parameters are described in the parameter section.

Note: You don't need to worry about whether the current state is red or green during calibration. Just keep getting data until all the cells turn red.

The value of each cell in the map is gray at first, and it changes from blue to red as valid data accumulates in the cell. It is preferable to continue the calibration until each cell of the map becomes close to red. In particular, the performance near a stop depends strongly on the 0 ~ 6 m/s velocity range and the +0.2 ~ -0.4 pedal value range, so it is desirable to focus on those areas.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#diagnostics","title":"Diagnostics","text":"

The accel_brake_map_calibrator publishes a diagnostics message depending on the calibration status. The diagnostic type WARN indicates that the current accel/brake map is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the accel/brake map.

Status Diagnostics Type Diagnostics message Description No calibration required OK \"OK\" Calibration Required WARN \"Accel/brake map Calibration is required.\" The accuracy of current accel/brake map may be low.

This diagnostics status can also be checked on the following ROS topic.

ros2 topic echo /accel_brake_map_calibrator/output/update_suggest\n

When the diagnostic type is WARN, True is published on this topic, and an update of the accel/brake map is suggested.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#evaluation-of-the-accel-brake-map-accuracy","title":"Evaluation of the accel / brake map accuracy","text":"

The accuracy of the map is evaluated by the Root Mean Squared Error (RMSE) between the observed acceleration and the predicted acceleration.
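
Concretely, for the n collected data points, this is the standard definition:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(a_{\mathrm{observed},i}-a_{\mathrm{predicted},i}\right)^{2}} \]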

TERMS:

You can check additional error information with the following topics.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#how-to-visualize-calibration-data","title":"How to visualize calibration data","text":"

The process of calibration can be visualized as shown below. Since these scripts need the log output of the calibration, the pedal_accel_graph_output parameter must be set to true while the calibration is running for the visualization to work.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#visualize-plot-of-relation-between-acceleration-and-pedal","title":"Visualize plot of relation between acceleration and pedal","text":"

The following command shows a plot of the data used in the calibration. In each velocity-range plot, you can see the distribution of the relationship between pedal and acceleration, along with the raw data points colored according to their pitch angles.

ros2 run autoware_accel_brake_map_calibrator view_plot.py\n

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#visualize-statistics-about-accelerationvelocitypedal-data","title":"Visualize statistics about acceleration/velocity/pedal data","text":"

The following command shows the statistics of the calibration for all data in each map cell:

ros2 run autoware_accel_brake_map_calibrator view_statistics.py\n

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#how-to-save-the-calibrated-accel-brake-map-anytime-you-want","title":"How to save the calibrated accel / brake map anytime you want","text":"

You can save the accel and brake maps at any time with the following command.

ros2 service call /accel_brake_map_calibrator/update_map_dir tier4_vehicle_msgs/srv/UpdateAccelBrakeMap \"path: '<accel/brake map directory>'\"\n

You can also save the accel and brake maps in the default directory where Autoware reads accel_map.csv/brake_map.csv by using the RViz plugin (AccelBrakeMapCalibratorButtonPanel) as follows.

  1. Click the Panels tab, and select AccelBrakeMapCalibratorButtonPanel.

  2. Select the panel, and the button will appear at the bottom of RViz.

  3. Press the button, and the accel / brake map will be saved. (The button cannot be pressed in certain situations, such as when the calibrator node is not running.)

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#parameters","title":"Parameters","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#system-parameters","title":"System Parameters","text":"Name Type Description Default value update_method string you can select map calibration method. \"update_offset_each_cell\" calculates offsets for each grid cells on the map. \"update_offset_total\" calculates the total offset of the map. \"update_offset_each_cell\" get_pitch_method string \"tf\": get pitch from tf, \"none\": unable to perform pitch validation and pitch compensation \"tf\" pedal_accel_graph_output bool if true, it will output a log of the pedal accel graph. true progress_file_output bool if true, it will output a log and csv file of the update process. false default_map_dir str directory of default map [directory of autoware_raw_vehicle_cmd_converter]/data/default/ calibrated_map_dir str directory of calibrated map [directory of autoware_accel_brake_map_calibrator]/config/ update_hz double hz for update 10.0"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#algorithm-parameters","title":"Algorithm Parameters","text":"Name Type Description Default value initial_covariance double Covariance of initial acceleration map (larger covariance makes the update speed faster) 0.05 velocity_min_threshold double Speeds smaller than this are not used for updating. 0.1 velocity_diff_threshold double When the velocity data is more than this threshold away from the grid reference speed (center value), the associated data is not used for updating. 0.556 max_steer_threshold double If the steer angle is greater than this value, the associated data is not used for updating. 0.2 max_pitch_threshold double If the pitch angle is greater than this value, the associated data is not used for updating. 0.02 max_jerk_threshold double If the ego jerk calculated from ego acceleration is greater than this value, the associated data is not used for updating. 0.7 pedal_velocity_thresh double If the pedal moving speed is greater than this value, the associated data is not used for updating. 0.15 pedal_diff_threshold double If the current pedal value is more then this threshold away from the previous value, the associated data is not used for updating. 0.03 max_accel double Maximum value of acceleration calculated from velocity source. 5.0 min_accel double Minimum value of acceleration calculated from velocity source. -5.0 pedal_to_accel_delay double The delay time between actuation_cmd to acceleration, considered in the update logic. 0.3 update_suggest_thresh double threshold of RMSE ratio that update suggest flag becomes true. ( RMSE ratio: [RMSE of new map] / [RMSE of original map] ) 0.7 max_data_count int For visualization. When the data num of each grid gets this value, the grid color gets red. 100 accel_brake_value_source string Whether to use actuation_status or actuation_command as accel/brake sources. value status"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#test-utility-scripts","title":"Test utility scripts","text":""},{"location":"vehicle/autoware_accel_brake_map_calibrator/#constant-accelbrake-command-test","title":"Constant accel/brake command test","text":"

These scripts are useful for testing the accel/brake map calibration. They generate an ActuationCmd with a constant accel/brake value given interactively by the user through the CLI.

accel_tester.py and brake_tester.py receive a target accel/brake command from the CLI and send the target value to actuation_cmd_publisher.py, which generates the ActuationCmd. You can run these scripts with the following commands in different terminals, as shown in the screenshot below.

ros2 run autoware_accel_brake_map_calibrator accel_tester.py\nros2 run autoware_accel_brake_map_calibrator brake_tester.py\nros2 run autoware_accel_brake_map_calibrator actuation_cmd_publisher.py\n

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#calibration-method","title":"Calibration Method","text":"

Two algorithms are selectable for the acceleration map update: update_offset_four_cell_around and update_offset_each_cell. Please see the link for details.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#data-preprocessing","title":"Data Preprocessing","text":"

Before calibration, missing or unusable data (e.g., data with too large steering angles) must first be eliminated. The following parameters are used to determine which data to remove.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#parameters_1","title":"Parameters","text":"Name Description Default Value velocity_min_threshold Exclude minimal velocity 0.1 max_steer_threshold Exclude large steering angle 0.2 max_pitch_threshold Exclude large pitch angle 0.02 max_jerk_threshold Exclude large jerk 0.7 pedal_velocity_thresh Exclude large pedaling speed 0.15"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#update_offset_each_cell","title":"update_offset_each_cell","text":"

Updates are performed by the Recursive Least Squares (RLS) method using only data close enough to each grid cell.

Advantage: Only data close enough to each grid cell is used for calibration, allowing accurate updates at each point.

Disadvantage: Calibration is time-consuming because a large amount of data is excluded.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#parameters_2","title":"Parameters","text":"

Data selection is determined by the following thresholds.

Name Default Value velocity_diff_threshold 0.556 pedal_diff_threshold 0.03"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#update-formula","title":"Update formula","text":"\\[ \\begin{align} \\theta[n]=& \\theta[n-1]+\\frac{p[n-1]x^{(n)}}{\\lambda+p[n-1]{(x^{(n)})}^2}(y^{(n)}-\\theta[n-1]x^{(n)})\\\\ p[n]=&\\frac{p[n-1]}{\\lambda+p[n-1]{(x^{(n)})}^2} \\end{align} \\]"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#variables","title":"Variables","text":"Variable name Symbol covariance \\(p[n-1]\\) map_offset \\(\\theta[n]\\) forgettingfactor \\(\\lambda\\) phi \\(x(=1)\\) measured_acc \\(y\\)"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#update_offset_four_cell_around-1","title":"update_offset_four_cell_around [1]","text":"

The offsets are updated by RLS in the four grid cells around newly obtained data. By considering linear interpolation, the update takes appropriate weights into account, so there is no need to remove data by thresholding.

Advantage: No data is wasted because updates are performed on the four grid cells around the data with appropriate weighting. Disadvantage: Accuracy may be degraded by extreme bias in the data. For example, if data \(z(k)\) is biased near \(Z_{RR}\) in Fig. 2, updating is performed at the four surrounding points ( \(Z_{RR}\), \(Z_{RL}\), \(Z_{LR}\), and \(Z_{LL}\)), but accuracy at \(Z_{LL}\) is not expected.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#implementation","title":"Implementation","text":"

See eqs. (7)-(10) in [1] for the update formula. In addition, eqs. (17) and (18) from [1] are used for anti-windup.

"},{"location":"vehicle/autoware_accel_brake_map_calibrator/#references","title":"References","text":"

[1] Gabrielle Lochrie, Michael Doljevic, Mario Nona, Yongsoon Yoon, Anti-Windup Recursive Least Squares Method for Adaptive Lookup Tables with Application to Automotive Powertrain Control Systems, IFAC-PapersOnLine, Volume 54, Issue 20, 2021, Pages 840-845

"},{"location":"vehicle/autoware_external_cmd_converter/","title":"external_cmd_converter","text":""},{"location":"vehicle/autoware_external_cmd_converter/#external_cmd_converter","title":"external_cmd_converter","text":"

external_cmd_converter is a node that converts a desired mechanical input into acceleration and velocity by using the accel/brake map.

"},{"location":"vehicle/autoware_external_cmd_converter/#algorithm","title":"Algorithm","text":""},{"location":"vehicle/autoware_external_cmd_converter/#how-to-calculate-reference-acceleration-and-velocity","title":"How to calculate reference acceleration and velocity","text":"

A reference acceleration and velocity are derived from the throttle and brake values of external control commands.

"},{"location":"vehicle/autoware_external_cmd_converter/#reference-acceleration","title":"Reference Acceleration","text":"

A reference acceleration is calculated from accel_brake_map based on the value of desired_pedal and the current velocity:

\\[ pedal_d = throttle_d - brake_d, \\] \\[ acc_{ref} = Acc(pedal_d, v_{x,current}). \\] Parameter Description \\(throttle_d\\) throttle value of external control command (~/in/external_control_cmd.control.throttle) \\(brake_d\\) brake value of external control command (~/in/external_control_cmd.control.brake) \\(v_{x,current}\\) current longitudinal velocity (~/in/odometry.twist.twist.linear.x) Acc accel_brake_map"},{"location":"vehicle/autoware_external_cmd_converter/#reference-velocity","title":"Reference Velocity","text":"

A reference velocity is calculated based on a current velocity and a reference acceleration:

\\[ v_{ref} = v_{x,current} + k_{v_{ref}} \\cdot \\text{sign}_{gear} \\cdot acc_{ref}. \\] Parameter Description \\(acc_{ref}\\) reference acceleration \\(k_{v_{ref}}\\) reference velocity gain \\(\\text{sign}_{gear}\\) gear command (~/in/shift_cmd) (Drive/Low: 1, Reverse: -1, Other: 0)"},{"location":"vehicle/autoware_external_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/in/external_control_cmd tier4_external_api_msgs::msg::ControlCommand target throttle/brake/steering_angle/steering_angle_velocity is necessary to calculate desired control command. ~/input/shift_cmd\" autoware_vehicle_msgs::GearCommand current command of gear. ~/input/emergency_stop tier4_external_api_msgs::msg::Heartbeat emergency heart beat for external command. ~/input/current_gate_mode tier4_control_msgs::msg::GateMode topic for gate mode. ~/input/odometry navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/autoware_external_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/out/control_cmd autoware_control_msgs::msg::Control ackermann control command converted from selected external command"},{"location":"vehicle/autoware_external_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description ref_vel_gain_ double reference velocity gain timer_rate double timer's update rate wait_for_first_topic double if time out check is done after receiving first topic control_command_timeout double time out check for control command emergency_stop_timeout double time out check for emergency stop command"},{"location":"vehicle/autoware_external_cmd_converter/#limitation","title":"Limitation","text":"

tbd.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/","title":"autoware_raw_vehicle_cmd_converter","text":""},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#autoware_raw_vehicle_cmd_converter","title":"autoware_raw_vehicle_cmd_converter","text":""},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#overview","title":"Overview","text":"

The raw_vehicle_cmd_converter is a crucial node in vehicle automation systems, responsible for translating desired steering and acceleration inputs into vehicle-specific control commands. This process is achieved through a combination of a lookup table and an optional feedback control system.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#lookup-table","title":"Lookup Table","text":"

The core of the converter's functionality lies in its use of a CSV-formatted lookup table. This table encapsulates the relationship between the throttle/brake pedal (depending on your vehicle control interface) and the corresponding vehicle acceleration across various speeds. The converter utilizes this data to accurately translate target accelerations into appropriate throttle/brake values.
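
As a purely illustrative sketch (not the shipped default map; layout and numbers are made up for this example), such a CSV pairs pedal values (rows) with velocities in m/s (columns), each cell holding the resulting acceleration in m/s\u00b2:

default,0.0,1.39,2.78\n0.0,-0.30,-0.25,-0.20\n0.2,0.80,0.60,0.40\n0.4,1.60,1.20,0.90\n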

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#creation-of-reference-data","title":"Creation of Reference Data","text":"

Reference data for the lookup table is generated through the following steps:

  1. Data Collection: On a flat road, a constant value command (e.g., throttle/brake pedal) is applied to accelerate or decelerate the vehicle.
  2. Recording Data: During this phase, both the IMU acceleration and vehicle velocity data are recorded.
  3. CSV File Generation: A CSV file is created, detailing the relationship between command values, vehicle speed, and resulting acceleration.

Once the acceleration map is crafted, it should be loaded when the RawVehicleCmdConverter node is launched, with the file path defined in the launch file.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#auto-calibration-tool","title":"Auto-Calibration Tool","text":"

For ease of calibration and adjustments to the lookup table, an auto-calibration tool is available. More information and instructions for this tool can be found here.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#variable-gear-ratio-vgr","title":"Variable Gear Ratio (VGR)","text":"

This is a gear ratio for converting the tire angle to the steering angle. Generally, to improve operability, the gear ratio dynamically becomes larger as the speed increases or as the steering angle becomes smaller. For a certain vehicle, data was acquired and the gear ratio was approximated by the following formula.

\\[ a + b \\times v^2 - c \\times \\lvert \\delta \\rvert \\]

For that vehicle, the coefficients were as follows.

vgr_coef_a: 15.713\nvgr_coef_b: 0.053\nvgr_coef_c: 0.042\n

When convert_steer_cmd_method: \"vgr\" is selected, the node receives the control command from the controller as the desired tire angle and calculates the desired steering angle to output. Also, when convert_actuation_to_steering_status: true, this node receives the actuation_status topic, calculates the steer tire angle from the steer_wheel_angle, and publishes it.
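
For intuition, a worked example with the coefficients above (assuming the steering angle is the tire angle multiplied by the gear ratio): at v = 10 m/s and a tire angle of 0.1 rad,

\[ \mathrm{VGR} = 15.713 + 0.053 \times 10^{2} - 0.042 \times \lvert 0.1 \rvert \approx 21.01, \qquad \delta_{\mathrm{steering}} \approx 21.01 \times 0.1 \approx 2.10 \ \mathrm{rad} \]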

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#vehicle-adaptor","title":"Vehicle Adaptor","text":"

Under development: a feature that compensates control commands according to the dynamic characteristics of the vehicle. It works when use_vehicle_adaptor: true is set and requires control_horizon to be enabled, so you need to set enable_control_cmd_horizon_pub: true in the trajectory_follower node.

"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/input/control_cmd autoware_control_msgs::msg::Control target velocity/acceleration/steering_angle/steering_angle_velocity is necessary to calculate actuation command. ~/input/steering\" autoware_vehicle_msgs::msg::SteeringReport subscribe only when convert_actuation_to_steering_status: false. current status of steering used for steering feed back control ~/input/odometry navigation_msgs::Odometry twist topic in odometry is used. ~/input/actuation_status tier4_vehicle_msgs::msg::ActuationStatus actuation status is assumed to receive the same type of status as sent to the vehicle side. For example, if throttle/brake pedal/steer_wheel_angle is sent, the same type of status is received. In the case of steer_wheel_angle, it is used to calculate steer_tire_angle and VGR in this node.

Input topics when vehicle_adaptor is enabled

Name Type Description ~/input/accel geometry_msgs::msg::AccelWithCovarianceStamped; acceleration status ~/input/operation_mode_state autoware_adapi_v1_msgs::msg::OperationModeState operation mode status ~/input/control_horizon autoware_control_msgs::msg::ControlHorizon control horizon command"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/output/actuation_cmd tier4_vehicle_msgs::msg::ActuationCommandStamped actuation command for vehicle to apply mechanical input ~/output/steering_status autoware_vehicle_msgs::msg::SteeringReport publish only when convert_actuation_to_steering_status: true. steer tire angle is calculated from steer wheel angle and published."},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#parameters","title":"Parameters","text":"Name Type Description Default Range csv_path_accel_map string path for acceleration map csv file $(find-pkg-share autoware_raw_vehicle_cmd_converter)/data/default/accel_map.csv N/A csv_path_brake_map string path for brake map csv file $(find-pkg-share autoware_raw_vehicle_cmd_converter)/data/default/brake_map.csv N/A csv_path_steer_map string path for steer map csv file $(find-pkg-share autoware_raw_vehicle_cmd_converter)/data/default/steer_map.csv N/A convert_accel_cmd boolean use accel or not true N/A convert_brake_cmd boolean use brake or not true N/A convert_steer_cmd boolean use steer or not true N/A use_steer_ff boolean steering steer controller using steer feed forward or not true N/A use_steer_fb boolean steering steer controller using steer feed back or not true N/A is_debugging boolean debugging mode or not false N/A max_throttle float maximum value of throttle 0.4 \u22650.0 max_brake float maximum value of brake 0.8 \u22650.0 max_steer float maximum value of steer 10.0 N/A min_steer float minimum value of steer -10.0 N/A steer_pid.kp float proportional coefficient value in PID control 150.0 N/A steer_pid.ki float integral coefficient value in PID control 15.0 >0.0 steer_pid.kd float derivative coefficient value in PID control 0.0 N/A steer_pid.max float maximum value of PID 8.0 N/A steer_pid.min float minimum value of PID -8.0. N/A steer_pid.max_p float maximum value of Proportional in PID 8.0 N/A steer_pid.min_p float minimum value of Proportional in PID -8.0 N/A steer_pid.max_i float maximum value of Integral in PID 8.0 N/A steer_pid.min_i float minimum value of Integral in PID -8.0 N/A steer_pid.max_d float maximum value of Derivative in PID 0.0 N/A steer_pid.min_d float minimum value of Derivative in PID 0.0 N/A steer_pid.invalid_integration_decay float invalid integration decay value in PID control 0.97 >0.0 convert_steer_cmd_method string method for converting steer command vgr ['vgr', 'steer_map'] vgr_coef_a float coefficient a for variable gear ratio 15.713 N/A vgr_coef_b float coefficient b for variable gear ratio 0.053 N/A vgr_coef_c float coefficient c for variable gear ratio 0.042 N/A convert_actuation_to_steering_status boolean convert actuation to steering status or not. Whether to subscribe to actuation_status and calculate and publish steering_status For example, receive the steering wheel angle and calculate the steering wheel angle based on the gear ratio. If false, the vehicle interface must publish steering_status. true N/A use_vehicle_adaptor boolean flag to enable feature that compensates control commands according to vehicle dynamics. 
false N/A"},{"location":"vehicle/autoware_raw_vehicle_cmd_converter/#limitation","title":"Limitation","text":"

The current feedback implementation is applied only to steering control.

"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/","title":"steer_offset_estimator","text":""},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#steer_offset_estimator","title":"steer_offset_estimator","text":""},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#purpose","title":"Purpose","text":"

The role of this node is to automatically calibrate steer_offset used in the vehicle_interface node.

The base steer offset value, which is 0 by default, is updated iteratively with the loaded driving data. This module is supposed to be used in the straight driving situation shown below.

"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This module estimates the steering offset sequentially from a kinematic model and state observations. It calculates the yaw rate error and then estimates the steering error recursively by the least squares method; for more details, see the updateSteeringOffset() function.
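
As a rough sketch of the underlying relation (assuming a standard kinematic bicycle model with wheelbase \(L\); see the updateSteeringOffset() function for the exact formulation), the measured yaw rate \(\omega\) links the reported steering angle \(\delta\) to the offset being estimated:

\[ \omega = \frac{v \tan(\delta + \delta_{\mathrm{offset}})}{L} \approx \frac{v (\delta + \delta_{\mathrm{offset}})}{L} \]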

"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#input","title":"Input","text":"Name Type Description ~/input/twist geometry_msgs::msg::TwistStamped vehicle twist ~/input/steer autoware_vehicle_msgs::msg::SteeringReport steering"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#output","title":"Output","text":"Name Type Description ~/output/steering_offset autoware_internal_debug_msgs::msg::Float32Stamped steering offset ~/output/steering_offset_covariance autoware_internal_debug_msgs::msg::Float32Stamped covariance of steering offset"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the steer_offset_estimator with the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g., using a joystick).

ros2 launch steer_offset_estimator steer_offset_estimator.launch.xml\n

Or if you want to use rosbag files, run the following commands.

ros2 param set /use_sim_time true\nros2 bag play <rosbag_file> --clock\n
"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#parameters","title":"Parameters","text":"Name Type Description Default Range initial_covariance float steer offset is larger than tolerance 1000 N/A steer_update_hz float update hz of steer data 10 \u22650.0 forgetting_factor float weight of using previous value 0.999 \u22650.0 valid_min_velocity float velocity below this value is not used 5 \u22650.0 valid_max_steer float steer above this value is not used 0.05 N/A warn_steer_offset_deg float Warn if offset is above this value. ex. if absolute estimated offset is larger than 2.5[deg] => warning 2.5 N/A"},{"location":"vehicle/autoware_steer_offset_estimator/Readme/#diagnostics","title":"Diagnostics","text":"

The steer_offset_estimator publishes a diagnostics message depending on the calibration status. The diagnostic type WARN indicates that the current steer_offset is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the steer_offset.

Status Diagnostics Type Diagnostics message No calibration required OK \"Preparation\" Calibration Required WARN \"Steer offset is larger than tolerance\"

This diagnostics status can also be checked on the following ROS topic.

ros2 topic echo /vehicle/status/steering_offset\n
"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/","title":"autoware_mission_details_overlay_rviz_plugin","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#autoware_mission_details_overlay_rviz_plugin","title":"autoware_mission_details_overlay_rviz_plugin","text":"

This RViz plugin displays the remaining distance and time for the current mission.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#input","title":"Input","text":"Name Type Description /planning/mission_remaining_distance_time autoware_planning_msgs::msg::MissionRemainingDistanceTime The topic is for mission remaining distance and time"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#overlay-parameters","title":"Overlay Parameters","text":"Name Type Default Value Description Width int 170 Width of the overlay [px] Height int 100 Height of the overlay [px] Right int 10 Margin from the right border [px] Top int 10 Margin from the top border [px]

The mission details display is aligned with the top-right corner of the screen.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#usage","title":"Usage","text":"

Similar to autoware_overlay_rviz_plugin

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#credits","title":"Credits","text":"

Based on the jsk_visualization package.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_mission_details_overlay_rviz_plugin/#icons","title":"Icons","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/","title":"autoware_overlay_rviz_plugin","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#autoware_overlay_rviz_plugin","title":"autoware_overlay_rviz_plugin","text":"

Plugin for displaying 2D overlays over the RViz2 3D scene.

Based on the jsk_visualization package, under the 3-Clause BSD license.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin provides a visual and easy-to-understand display of the vehicle speed, turn signals, steering status, and gear.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#input","title":"Input","text":"Name Type Description /vehicle/status/velocity_status autoware_vehicle_msgs::msg::VelocityReport The topic is vehicle velocity /vehicle/status/turn_indicators_status autoware_vehicle_msgs::msg::TurnIndicatorsReport The topic is status of turn signal /vehicle/status/hazard_status autoware_vehicle_msgs::msg::HazardReport The topic is status of hazard /vehicle/status/steering_status autoware_vehicle_msgs::msg::SteeringReport The topic is status of steering /vehicle/status/gear_status autoware_vehicle_msgs::msg::GearReport The topic is status of gear /planning/scenario_planning/current_max_velocity tier4_planning_msgs::msg::VelocityLimit The topic is velocity limit /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray The topic is status of traffic light"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#signaldisplay","title":"SignalDisplay","text":"Name Type Default Value Description property_width_ int 128 Width of the plotter window [px] property_height_ int 128 Height of the plotter window [px] property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_signal_color_ QColor QColor(25, 255, 240) Turn Signal color"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_overlay_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz2 and click Add button under the Displays panel.

  2. Under the By display type tab, select autoware_overlay_rviz_plugin/SignalDisplay and press OK.

  3. Enter the names of the topics if necessary.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_string_stamped_overlay_rviz_plugin/","title":"autoware_string_stamped_rviz_plugin","text":""},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_string_stamped_overlay_rviz_plugin/#autoware_string_stamped_rviz_plugin","title":"autoware_string_stamped_rviz_plugin","text":"

Plugin for displaying 2D overlays over the RViz2 3D scene.

"},{"location":"visualization/autoware_overlay_rviz_plugin/autoware_string_stamped_overlay_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays ROS messages of type autoware_internal_debug_msgs::msg::StringStamped in RViz.
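
A message for this display can be produced with a small publisher. This is a hedged sketch: the topic name /debug/string_stamped is made up for the example, and the message is assumed to consist of a stamp and a data field.

```python
import rclpy
from rclpy.node import Node
from autoware_internal_debug_msgs.msg import StringStamped


class StringStampedDemo(Node):
    def __init__(self):
        super().__init__("string_stamped_demo")
        # Hypothetical topic name; enter the same name in the display properties.
        self.pub = self.create_publisher(StringStamped, "/debug/string_stamped", 1)
        self.timer = self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        msg = StringStamped()
        msg.stamp = self.get_clock().now().to_msg()  # assumed field layout: stamp + data
        msg.data = "hello from the overlay"
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(StringStampedDemo())


if __name__ == "__main__":
    main()
```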

"},{"location":"visualization/autoware_perception_rviz_plugin/","title":"autoware_perception_rviz_plugin","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#autoware_perception_rviz_plugin","title":"autoware_perception_rviz_plugin","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#purpose","title":"Purpose","text":"

This is an RViz plugin for visualizing the results from the perception module. This package is based on the implementation of the RViz plugin developed by Autoware.Auto.

See the Autoware.Auto design documentation for the original design philosophy [1].
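
To exercise the DetectedObjects display without running the full perception stack, a dummy detection can be published. The sketch below is a minimal example; the topic name /perception/object_recognition/detection/objects is an assumption based on common Autoware conventions, and the pose and box dimensions are placeholder values.

```python
import rclpy
from rclpy.node import Node
from autoware_perception_msgs.msg import (
    DetectedObject,
    DetectedObjects,
    ObjectClassification,
    Shape,
)


class DummyDetectionPublisher(Node):
    def __init__(self):
        super().__init__("dummy_detection_publisher")
        self.pub = self.create_publisher(
            DetectedObjects, "/perception/object_recognition/detection/objects", 1
        )
        self.timer = self.create_timer(0.1, self.on_timer)

    def on_timer(self):
        obj = DetectedObject()
        obj.existence_probability = 0.9
        cls = ObjectClassification()
        cls.label = ObjectClassification.CAR
        cls.probability = 1.0
        obj.classification.append(cls)
        # Place a 4 m x 2 m x 1.5 m bounding box 10 m ahead of the frame origin.
        obj.kinematics.pose_with_covariance.pose.position.x = 10.0
        obj.kinematics.pose_with_covariance.pose.orientation.w = 1.0
        obj.shape.type = Shape.BOUNDING_BOX
        obj.shape.dimensions.x = 4.0
        obj.shape.dimensions.y = 2.0
        obj.shape.dimensions.z = 1.5
        msg = DetectedObjects()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "map"
        msg.objects.append(obj)
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(DummyDetectionPublisher())


if __name__ == "__main__":
    main()
```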

"},{"location":"visualization/autoware_perception_rviz_plugin/#input-types-visualization-results","title":"Input Types / Visualization Results","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#detectedobjects","title":"DetectedObjects","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#input-types","title":"Input Types","text":"Name Type Description autoware_perception_msgs::msg::DetectedObjects detection result array"},{"location":"visualization/autoware_perception_rviz_plugin/#visualization-result","title":"Visualization Result","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#trackedobjects","title":"TrackedObjects","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#input-types_1","title":"Input Types","text":"Name Type Description autoware_perception_msgs::msg::TrackedObjects tracking result array"},{"location":"visualization/autoware_perception_rviz_plugin/#visualization-result_1","title":"Visualization Result","text":"

Overwrite tracking results with detection results.

"},{"location":"visualization/autoware_perception_rviz_plugin/#predictedobjects","title":"PredictedObjects","text":""},{"location":"visualization/autoware_perception_rviz_plugin/#input-types_2","title":"Input Types","text":"Name Type Description autoware_perception_msgs::msg::PredictedObjects prediction result array"},{"location":"visualization/autoware_perception_rviz_plugin/#visualization-result_2","title":"Visualization Result","text":"

Overwrite prediction results with tracking results.

"},{"location":"visualization/autoware_perception_rviz_plugin/#referencesexternal-links","title":"References/External links","text":"

[1] https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/tools/visualization/autoware_rviz_plugins

"},{"location":"visualization/autoware_perception_rviz_plugin/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"visualization/bag_time_manager_rviz_plugin/","title":"bag_time_manager_rviz_plugin","text":""},{"location":"visualization/bag_time_manager_rviz_plugin/#bag_time_manager_rviz_plugin","title":"bag_time_manager_rviz_plugin","text":""},{"location":"visualization/bag_time_manager_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin allows publishing and controlling the ROS bag time.

"},{"location":"visualization/bag_time_manager_rviz_plugin/#output","title":"Output","text":"

TBD.

"},{"location":"visualization/bag_time_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select BagTimeManagerPanel and press OK.

  3. Confirm that bag_time_manager_rviz_plugin/BagTimeManagerPanel has been added; a quick way to verify the published time is sketched below.
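
Since the Output section above is still TBD, the exact published topics are not documented here. A common assumption is that bag playback time is republished on the standard /clock topic; under that assumption, the following rclpy sketch prints the simulated time so you can watch the panel's controls take effect.

```python
import rclpy
from rclpy.node import Node
from rosgraph_msgs.msg import Clock


class ClockEcho(Node):
    """Print the simulated time, assuming the bag time is driven via /clock."""

    def __init__(self):
        super().__init__("clock_echo")
        self.sub = self.create_subscription(Clock, "/clock", self.on_clock, 10)

    def on_clock(self, msg):
        self.get_logger().info(f"bag time: {msg.clock.sec}.{msg.clock.nanosec:09d}")


def main():
    rclpy.init()
    rclpy.spin(ClockEcho())


if __name__ == "__main__":
    main()
```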

"},{"location":"visualization/tier4_adapi_rviz_plugin/","title":"tier4_adapi_rviz_plugin","text":""},{"location":"visualization/tier4_adapi_rviz_plugin/#tier4_adapi_rviz_plugin","title":"tier4_adapi_rviz_plugin","text":""},{"location":"visualization/tier4_adapi_rviz_plugin/#routepanel","title":"RoutePanel","text":"

To use the panel, set the topic name of the 2D Goal Pose tool to /rviz/routing/pose. By default, when the tool publishes a pose, the panel immediately sets a route with that pose as the goal. The allow_goal_modification option can be enabled or disabled with the check box.

Push the mode button in the waypoint section to enter waypoint mode. In this mode, each published pose is added to the waypoints. Press the apply button to set the route using the saved waypoints (the last one becomes the goal). Reset the saved waypoints with the reset button.
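
The panel's input can also be driven programmatically. The sketch below publishes a goal pose on /rviz/routing/pose, mimicking the 2D Goal Pose tool; the frame_id map and the coordinates are placeholder values.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class GoalPosePublisher(Node):
    def __init__(self):
        super().__init__("goal_pose_publisher")
        # Same topic the 2D Goal Pose tool is remapped to for the RoutePanel.
        self.pub = self.create_publisher(PoseStamped, "/rviz/routing/pose", 1)
        # One-shot timer so the publisher has time to match subscribers first.
        self.timer = self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        self.timer.cancel()
        msg = PoseStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "map"
        msg.pose.position.x = 100.0  # placeholder goal coordinates
        msg.pose.position.y = 200.0
        msg.pose.orientation.w = 1.0
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(GoalPosePublisher())


if __name__ == "__main__":
    main()
```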

"},{"location":"visualization/tier4_adapi_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"visualization/tier4_adapi_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"visualization/tier4_adapi_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/","title":"tier4_camera_view_rviz_plugin","text":""},{"location":"visualization/tier4_camera_view_rviz_plugin/#tier4_camera_view_rviz_plugin","title":"tier4_camera_view_rviz_plugin","text":""},{"location":"visualization/tier4_camera_view_rviz_plugin/#thirdpersonview-tool","title":"ThirdPersonView Tool","text":"

Add the tier4_camera_view_rviz_plugin/ThirdPersonViewTool tool to RViz. When the button is pushed, the camera focuses on the vehicle and the target frame is set to base_link. Shortcut key: 'o'.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#birdeyeview-tool","title":"BirdEyeView Tool","text":"

Add the tier4_camera_view_rviz_plugin/BirdEyeViewTool tool to RViz. When the button is pushed, the camera switches to the BEV view; the target frame is consistent with the latest frame. Shortcut key: 'r'.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"visualization/tier4_camera_view_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"visualization/tier4_datetime_rviz_plugin/","title":"tier4_datetime_rviz_plugin","text":""},{"location":"visualization/tier4_datetime_rviz_plugin/#tier4_datetime_rviz_plugin","title":"tier4_datetime_rviz_plugin","text":""},{"location":"visualization/tier4_datetime_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the ROS Time and Wall Time in RViz.

"},{"location":"visualization/tier4_datetime_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/tier4_datetime_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select tier4_datetime_rviz_plugin/AutowareDateTimePanel and press OK.
"},{"location":"visualization/tier4_localization_rviz_plugin/","title":"tier4_localization_rviz_plugin","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#tier4_localization_rviz_plugin","title":"tier4_localization_rviz_plugin","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin can display the localization history obtained from ekf_localizer, ndt_scan_matching, and GNSS. If the uncertainty of the estimated pose is provided, it can also be displayed.

"},{"location":"visualization/tier4_localization_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#input","title":"Input","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#pose-history","title":"Pose History","text":"Name Type Description input/pose geometry_msgs::msg::PoseStamped In input/pose, put the result of localization calculated by ekf_localizer, ndt_scan_matching, or GNSS"},{"location":"visualization/tier4_localization_rviz_plugin/#pose-with-covariance-history","title":"Pose With Covariance History","text":"Name Type Description input/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped In input/pose_with_covariance, put the result of localization calculated by ekf_localizer, ndt_scan_matching, or GNSS"},{"location":"visualization/tier4_localization_rviz_plugin/#parameters","title":"Parameters","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/tier4_localization_rviz_plugin/#pose-history_1","title":"Pose History","text":"Name Type Default Value Description property_buffer_size_ int 100 Buffer size of topic property_line_view_ bool true Use Line property or not property_line_width_ float 0.1 Width of Line property [m] property_line_alpha_ float 1.0 Alpha of Line property property_line_color_ QColor Qt::white Color of Line property"},{"location":"visualization/tier4_localization_rviz_plugin/#pose-with-covariance-history_1","title":"Pose With Covariance History","text":"Name Type Default Value Description property_buffer_size_ int 100 Buffer size of topic property_path_view_ bool true Use path property or not property_shape_type_ string Line Line or Arrow property_line_width_ float 0.1 Width of Line property [m] property_line_alpha_ float 1.0 Alpha of Line property property_line_color_ QColor Qt::white Color of Line property property_arrow_shaft_length float 0.3 Shaft length of Arrow property property_arrow_shaft_diameter float 0.15 Shaft diameter of Arrow property property_arrow_head_length float 0.2 Head length of Arrow property property_arrow_head_diameter float 0.3 Head diameter of Arrow property property_arrow_alpha_ float 1.0 Alpha of Arrow property property_arrow_color_ QColor Qt::white Color of Arrow property property_sphere_scale_ float 1.0 Scale of Sphere property property_sphere_alpha_ float 0.5 Alpha of Sphere property property_sphere_color_ QColor (204, 51, 204) Color of Sphere property"},{"location":"visualization/tier4_localization_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/tier4_localization_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select tier4_localization_rviz_plugin/PoseHistory or PoseWithCovarianceHistory and press OK.
  3. Enter the name of the topic where you want to view the trajectory and the covariance.
  4. You can now view the trajectory and the covariance (a sample test publisher is sketched below).
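
For testing the PoseWithCovarianceHistory display without a localization stack, a synthetic pose stream is enough. The sketch below publishes a slowly moving PoseWithCovarianceStamped with a fixed 0.5 m standard deviation in x and y; the topic name /localization/pose_with_covariance is an illustrative choice.

```python
import math

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseWithCovarianceStamped


class FakePosePublisher(Node):
    """Publish a slowly moving pose with a fixed covariance for the display to plot."""

    def __init__(self):
        super().__init__("fake_pose_publisher")
        self.pub = self.create_publisher(
            PoseWithCovarianceStamped, "/localization/pose_with_covariance", 1
        )
        self.t = 0.0
        self.timer = self.create_timer(0.1, self.on_timer)

    def on_timer(self):
        self.t += 0.1
        msg = PoseWithCovarianceStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "map"
        msg.pose.pose.position.x = self.t
        msg.pose.pose.position.y = math.sin(self.t)
        msg.pose.pose.orientation.w = 1.0
        # 0.5 m standard deviation in x/y -> variance 0.25 on the diagonal.
        msg.pose.covariance[0] = 0.25
        msg.pose.covariance[7] = 0.25
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(FakePosePublisher())


if __name__ == "__main__":
    main()
```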
"},{"location":"visualization/tier4_planning_rviz_plugin/","title":"tier4_planning_rviz_plugin","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#tier4_planning_rviz_plugin","title":"tier4_planning_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"visualization/tier4_planning_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the path, trajectory, and maximum speed.

"},{"location":"visualization/tier4_planning_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#input","title":"Input","text":"Name Type Description /input/path autoware_planning_msgs::msg::Path The topic on which to subscribe path /input/trajectory autoware_planning_msgs::msg::Trajectory The topic on which to subscribe trajectory /planning/scenario_planning/current_max_velocity tier4_planning_msgs/msg/VelocityLimit The topic on which to publish max velocity"},{"location":"visualization/tier4_planning_rviz_plugin/#output","title":"Output","text":"Name Type Description /planning/mission_planning/checkpoint geometry_msgs/msg/PoseStamped The topic on which to publish checkpoint"},{"location":"visualization/tier4_planning_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/tier4_planning_rviz_plugin/#missioncheckpoint","title":"MissionCheckpoint","text":"Name Type Default Value Description pose_topic_property_ string mission_checkpoint The topic on which to publish checkpoint std_dev_x_ float 0.5 X standard deviation for checkpoint pose [m] std_dev_y_ float 0.5 Y standard deviation for checkpoint pose [m] std_dev_theta_ float M_PI / 12.0 Theta standard deviation for checkpoint pose [rad] position_z_ float 0.0 Z position for checkpoint pose [m]"},{"location":"visualization/tier4_planning_rviz_plugin/#path","title":"Path","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_view_ bool false Use Constant Width or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"visualization/tier4_planning_rviz_plugin/#drivablearea","title":"DrivableArea","text":"Name Type Default Value Description color_scheme_property_ int 0 Color scheme of DrivableArea property alpha_property_ float 0.2 Alpha of DrivableArea property draw_under_property_ bool false Draw as background or not"},{"location":"visualization/tier4_planning_rviz_plugin/#pathfootprint","title":"PathFootprint","text":"Name Type Default Value Description property_path_footprint_view_ bool true Use Path Footprint property or not property_path_footprint_alpha_ float 1.0 Alpha of Path Footprint property property_path_footprint_color_ QColor Qt::black Color of Path Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m]"},{"location":"visualization/tier4_planning_rviz_plugin/#trajectory","title":"Trajectory","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ 
QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_velocity_text_view_ bool false View text Velocity property_velocity_text_scale_ float 0.3 Scale of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"visualization/tier4_planning_rviz_plugin/#trajectoryfootprint","title":"TrajectoryFootprint","text":"Name Type Default Value Description property_trajectory_footprint_view_ bool true Use Trajectory Footprint property or not property_trajectory_footprint_alpha_ float 1.0 Alpha of Trajectory Footprint property property_trajectory_footprint_color_ QColor QColor(230, 230, 50) Color of Trajectory Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m] property_trajectory_point_view_ bool false Use Trajectory Point property or not property_trajectory_point_alpha_ float 1.0 Alpha of Trajectory Point property property_trajectory_point_color_ QColor QColor(0, 60, 255) Color of Trajectory Point property property_trajectory_point_radius_ float 0.1 Radius of Trajectory Point property"},{"location":"visualization/tier4_planning_rviz_plugin/#maxvelocity","title":"MaxVelocity","text":"Name Type Default Value Description property_topic_name_ string /planning/scenario_planning/current_max_velocity The topic on which to subscribe max velocity property_text_color_ QColor QColor(255, 255, 255) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 96 Length of the plotter window [px] property_value_scale_ float 1.0 / 4.0 Value scale"},{"location":"visualization/tier4_planning_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_planning_rviz_plugin displays and press OK.
  3. Enter the name of the topic where you want to view the path or trajectory (a sample trajectory publisher is sketched below).
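
To see the Trajectory display in action without a running planner, a synthetic trajectory can be published. The sketch below publishes a straight 50 m trajectory at a constant 2 m/s; the topic name /planning/trajectory_demo is a placeholder to be matched with whatever name you enter in the display.

```python
import rclpy
from rclpy.node import Node
from autoware_planning_msgs.msg import Trajectory, TrajectoryPoint


class StraightTrajectoryPublisher(Node):
    def __init__(self):
        super().__init__("straight_trajectory_publisher")
        self.pub = self.create_publisher(Trajectory, "/planning/trajectory_demo", 1)
        self.timer = self.create_timer(0.1, self.on_timer)

    def on_timer(self):
        msg = Trajectory()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "map"
        for i in range(50):
            pt = TrajectoryPoint()
            pt.pose.position.x = float(i)  # one point per metre along x
            pt.pose.orientation.w = 1.0
            pt.longitudinal_velocity_mps = 2.0  # drives the velocity coloring
            msg.points.append(pt)
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(StraightTrajectoryPublisher())


if __name__ == "__main__":
    main()
```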
"},{"location":"visualization/tier4_planning_rviz_plugin/#material-design-icons","title":"Material Design Icons","text":"

This project uses Material Design Icons by Google. These icons are used under the terms of the Apache License, Version 2.0.

Material Design Icons are a collection of symbols provided by Google that are used to enhance the user interface of applications, websites, and other digital products.

"},{"location":"visualization/tier4_planning_rviz_plugin/#license","title":"License","text":"

The Material Design Icons are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

"},{"location":"visualization/tier4_planning_rviz_plugin/#acknowledgments","title":"Acknowledgments","text":"

We would like to express our gratitude to Google for making these icons available to the community, helping developers and designers enhance the visual appeal and user experience of their projects.

"},{"location":"visualization/tier4_state_rviz_plugin/","title":"tier4_state_rviz_plugin","text":""},{"location":"visualization/tier4_state_rviz_plugin/#tier4_state_rviz_plugin","title":"tier4_state_rviz_plugin","text":""},{"location":"visualization/tier4_state_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the current status of Autoware. It can also engage the vehicle from the panel.

"},{"location":"visualization/tier4_state_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_state_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState The topic represents the state of operation mode /api/routing/state autoware_adapi_v1_msgs::msg::RouteState The topic represents the state of route /api/localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState The topic represents the state of localization initialization /api/motion/state autoware_adapi_v1_msgs::msg::MotionState The topic represents the state of motion /api/autoware/get/emergency tier4_external_api_msgs::msg::Emergency The topic represents the state of external emergency /vehicle/status/gear_status autoware_vehicle_msgs::msg::GearReport The topic represents the state of gear"},{"location":"visualization/tier4_state_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to autonomous /api/operation_mode/change_to_stop autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to stop /api/operation_mode/change_to_local autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to local /api/operation_mode/change_to_remote autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to remote /api/operation_mode/enable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to enable vehicle control by Autoware /api/operation_mode/disable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to disable vehicle control by Autoware /api/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute The service to clear route state /api/motion/accept_start autoware_adapi_v1_msgs::srv::AcceptStart The service to accept the vehicle to start /api/autoware/set/emergency tier4_external_api_msgs::srv::SetEmergency The service to set external emergency /planning/scenario_planning/max_velocity_default tier4_planning_msgs::msg::VelocityLimit The topic to set maximum speed of the vehicle"},{"location":"visualization/tier4_state_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_state_rviz_plugin/AutowareStatePanel and press OK.

  3. If the auto button is activated, you can engage by clicking it (the equivalent service call is sketched below).
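
The panel's auto button is a front end for the AD API services listed above. Assuming the autoware_adapi_v1_msgs Python bindings are available, the same transition can be requested from a script:

```python
import rclpy
from rclpy.node import Node
from autoware_adapi_v1_msgs.srv import ChangeOperationMode


class EngageClient(Node):
    def __init__(self):
        super().__init__("engage_client")
        self.client = self.create_client(
            ChangeOperationMode, "/api/operation_mode/change_to_autonomous"
        )

    def engage(self):
        if not self.client.wait_for_service(timeout_sec=3.0):
            self.get_logger().error("service not available")
            return
        # The request carries no fields; the response contains a status.
        future = self.client.call_async(ChangeOperationMode.Request())
        rclpy.spin_until_future_complete(self, future)
        self.get_logger().info(f"response: {future.result()}")


def main():
    rclpy.init()
    EngageClient().engage()


if __name__ == "__main__":
    main()
```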

"},{"location":"visualization/tier4_system_rviz_plugin/","title":"tier4_system_rviz_plugin","text":""},{"location":"visualization/tier4_system_rviz_plugin/#tier4_system_rviz_plugin","title":"tier4_system_rviz_plugin","text":""},{"location":"visualization/tier4_system_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the hazard information from Autoware, and outputs notices when emergencies arise from initial localization or route setting.

"},{"location":"visualization/tier4_system_rviz_plugin/#input","title":"Input","text":"Name Type Description /system/emergency/hazard_status autoware_system_msgs::msg::HazardStatusStamped The topic represents the emergency information from Autoware"},{"location":"visualization/tier4_traffic_light_rviz_plugin/","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"visualization/tier4_traffic_light_rviz_plugin/#tier4_traffic_light_rviz_plugin","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"visualization/tier4_traffic_light_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin panel publishes dummy traffic light signals.

"},{"location":"visualization/tier4_traffic_light_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_traffic_light_rviz_plugin/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals autoware_perception_msgs::msg::TrafficLightGroupArray Publish traffic light signals"},{"location":"visualization/tier4_traffic_light_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select TrafficLightPublishPanel and press OK.
  3. Set Traffic Light ID & Traffic Light Status and press SET button.
  4. Traffic light signals are published while the PUBLISH button is pushed (a programmatic equivalent is sketched below).
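
The same dummy signals can be published from a script instead of the panel. The following sketch publishes a solid red circle for traffic light group ID 1; the group ID must match a traffic light regulatory element in your map, and the constant names on TrafficLightElement are assumed from the autoware_perception_msgs definition.

```python
import rclpy
from rclpy.node import Node
from autoware_perception_msgs.msg import (
    TrafficLightElement,
    TrafficLightGroup,
    TrafficLightGroupArray,
)


class DummyTrafficLightPublisher(Node):
    def __init__(self):
        super().__init__("dummy_traffic_light_publisher")
        self.pub = self.create_publisher(
            TrafficLightGroupArray, "/perception/traffic_light_recognition/traffic_signals", 1
        )
        self.timer = self.create_timer(0.1, self.on_timer)

    def on_timer(self):
        element = TrafficLightElement()
        element.color = TrafficLightElement.RED
        element.shape = TrafficLightElement.CIRCLE
        element.status = TrafficLightElement.SOLID_ON
        element.confidence = 1.0
        group = TrafficLightGroup()
        group.traffic_light_group_id = 1  # must match a regulatory element ID in the map
        group.elements.append(element)
        msg = TrafficLightGroupArray()
        msg.stamp = self.get_clock().now().to_msg()
        msg.traffic_light_groups.append(group)
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(DummyTrafficLightPublisher())


if __name__ == "__main__":
    main()
```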
"},{"location":"visualization/tier4_vehicle_rviz_plugin/","title":"tier4_vehicle_rviz_plugin","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#tier4_vehicle_rviz_plugin","title":"tier4_vehicle_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"visualization/tier4_vehicle_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin provides a visual and easy-to-understand display of the vehicle speed, turn signals, steering status, and acceleration.

"},{"location":"visualization/tier4_vehicle_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#input","title":"Input","text":"Name Type Description /vehicle/status/velocity_status autoware_vehicle_msgs::msg::VelocityReport The topic is vehicle twist /control/turn_signal_cmd autoware_vehicle_msgs::msg::TurnIndicatorsReport The topic is status of turn signal /vehicle/status/steering_status autoware_vehicle_msgs::msg::SteeringReport The topic is status of steering /localization/acceleration geometry_msgs::msg::AccelWithCovarianceStamped The topic is the acceleration"},{"location":"visualization/tier4_vehicle_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"visualization/tier4_vehicle_rviz_plugin/#consolemeter","title":"ConsoleMeter","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale"},{"location":"visualization/tier4_vehicle_rviz_plugin/#steeringangle","title":"SteeringAngle","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale property_handle_angle_scale_ float 3.0 Scale is steering angle to handle angle"},{"location":"visualization/tier4_vehicle_rviz_plugin/#turnsignal","title":"TurnSignal","text":"Name Type Default Value Description property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_width_ int 256 Left of the plotter window [px] property_height_ int 256 Width of the plotter window [px]"},{"location":"visualization/tier4_vehicle_rviz_plugin/#velocityhistory","title":"VelocityHistory","text":"Name Type Default Value Description property_velocity_timeout_ float 10.0 Timeout of velocity [s] property_velocity_alpha_ float 1.0 Alpha of velocity property_velocity_scale_ float 0.3 Scale of velocity property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of velocity history property_vel_max_ float 3.0 Color Border Vel Max [m/s]"},{"location":"visualization/tier4_vehicle_rviz_plugin/#accelerationmeter","title":"AccelerationMeter","text":"Name Type Default Value Description property_normal_text_color_ QColor QColor(25, 255, 240) Normal text color property_emergency_text_color_ QColor QColor(255, 80, 80) Emergency acceleration color property_left_ int 896 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1 / 6.667 Value text scale property_emergency_threshold_max_ float 1.0 Max acceleration threshold for emergency [m/s^2] property_emergency_threshold_min_ float -2.5 Min acceleration threshold for emergency 
[m/s^2]"},{"location":"visualization/tier4_vehicle_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"visualization/tier4_vehicle_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_vehicle_rviz_plugin displays and press OK.
  3. Enter the name of the topic where you want to view the status (a sample acceleration publisher is sketched below).
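
As a quick test of the AccelerationMeter, a synthetic acceleration can be published on /localization/acceleration. In this sketch the value -3.0 m/s^2 is chosen to fall below the default emergency threshold of -2.5 m/s^2, so the meter should switch to the emergency text color.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import AccelWithCovarianceStamped


class FakeAccelPublisher(Node):
    def __init__(self):
        super().__init__("fake_accel_publisher")
        self.pub = self.create_publisher(AccelWithCovarianceStamped, "/localization/acceleration", 1)
        self.timer = self.create_timer(0.1, self.on_timer)

    def on_timer(self):
        msg = AccelWithCovarianceStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "base_link"
        # -3.0 m/s^2 is below the default emergency threshold (-2.5 m/s^2),
        # so the meter should render it in the emergency text color.
        msg.accel.accel.linear.x = -3.0
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(FakeAccelPublisher())


if __name__ == "__main__":
    main()
```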
"}]} \ No newline at end of file diff --git a/latest/sitemap.xml.gz b/latest/sitemap.xml.gz index a3bc5461bdf66..0c997d9e5082c 100644 Binary files a/latest/sitemap.xml.gz and b/latest/sitemap.xml.gz differ diff --git a/latest/vehicle/autoware_steer_offset_estimator/Readme/index.html b/latest/vehicle/autoware_steer_offset_estimator/Readme/index.html index f747126171fe5..59f41de646be0 100644 --- a/latest/vehicle/autoware_steer_offset_estimator/Readme/index.html +++ b/latest/vehicle/autoware_steer_offset_estimator/Readme/index.html @@ -14354,12 +14354,12 @@

Output