diff --git a/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/.pages b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/.pages
index f8a64ee8fae..76036a5a2f3 100644
--- a/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/.pages
+++ b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/.pages
@@ -1,3 +1,4 @@
nav:
- index.md
- Tuning localization: localization-tuning
+ - Tuning perception: perception-tuning
diff --git a/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/localization-tuning/index.md b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/localization-tuning/index.md
index c126cf2a91f..bdcb66c6656 100644
--- a/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/localization-tuning/index.md
+++ b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/localization-tuning/index.md
@@ -4,8 +4,8 @@
In this section,
our focus will be on refining localization accuracy within the YTU Campus environment through updates to localization parameters and methods.
-Our approach entails using NDT as the pose input source,
-and the Gyro Odometer as the twist input source.
+Our approach uses
+NDT as the pose input source and the Gyro Odometer as the twist input source.
These adjustments play a pivotal role
in achieving a heightened level of precision and reliability in our localization processes,
ensuring optimal performance in the specific conditions of the YTU campus.
diff --git a/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/images/after-tuning-clustering.png b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/images/after-tuning-clustering.png
new file mode 100644
index 00000000000..02c08f180ec
Binary files /dev/null and b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/images/after-tuning-clustering.png differ
diff --git a/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/images/initial-clusters.png b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/images/initial-clusters.png
new file mode 100644
index 00000000000..e60fe26e38e
Binary files /dev/null and b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/images/initial-clusters.png differ
diff --git a/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/index.md b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/index.md
new file mode 100644
index 00000000000..383689a2eec
--- /dev/null
+++ b/docs/how-to-guides/integrating-autoware/tuning-parameters-and-performance/tuning-parameters/perception-tuning/index.md
@@ -0,0 +1,131 @@
+# Tuning perception
+
+## Introduction
+
+In this section, we will improve perception accuracy in the YTU Campus environment
+by updating some parameters and methods.
+We will enable camera-lidar fusion as our perception method.
+Fusing camera and lidar data improves the vehicle's ability to perceive and classify its surroundings,
+allowing it to navigate the campus more effectively and safely.
+By fine-tuning these perception parameters,
+we aim to optimize the performance of the perception pipeline in this specific environment.
+
+## Perception parameter tuning
+
+### Enabling camera-lidar fusion
+
+To enable camera-lidar fusion, you first need to calibrate both your camera and your lidar.
+You also need to provide the `image_info`
+and `rectified_image` topics, which are consumed by the `tensorrt_yolo` node.
+Once these ROS 2 topics are available,
+we can enable camera-lidar fusion as our perception method:
+
+!!! note "Enabling camera-lidar fusion in [`autoware.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/2255356e0164430ed5bc7dd325e3b61e983567a3/autoware_launch/launch/autoware.launch.xml#L42)"
+
+    ```diff
+    - <arg name="perception_mode" default="lidar" description="select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar"/>
+    + <arg name="perception_mode" default="camera_lidar_fusion" description="select perception mode. camera_lidar_radar_fusion, camera_lidar_fusion, lidar_radar_fusion, lidar, radar"/>
+    ```
+
+After that,
+we need to run the [TensorRT YOLO node](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/tensorrt_yolo) for our camera topics
+if it is not already launched by your sensor model.
+You can launch the tensorrt_yolo nodes by uncommenting the following lines in the [`camera_lidar_fusion_based_detection.launch.xml`](https://github.com/autowarefoundation/autoware.universe/blob/main/launch/tier4_perception_launch/launch/object_recognition/detection/camera_lidar_fusion_based_detection.launch.xml)
+file:
+
+!!! note "Please adjust the following lines in the `camera_lidar_fusion_based_detection.launch.xml` file based on the number of your cameras (`image_number`)"
+
+    ```xml
+    <!-- excerpt; see the linked launch file for the full commented-out block -->
+    <include file="$(find-pkg-share tensorrt_yolo)/launch/yolo.launch.xml">
+      <arg name="image_raw0" value="$(var image_raw0)"/>
+      <arg name="image_raw1" value="$(var image_raw1)"/>
+      <arg name="image_number" value="$(var image_number)"/>
+      ...
+    ```
+
+### Tuning ground segmentation
+
+!!! warning
+
+ under construction
+
+### Tuning euclidean clustering
+
+The `euclidean_cluster` package applies Euclidean clustering
+to group points into smaller clusters for object classification.
+Please refer to the [`euclidean_cluster` package documentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/euclidean_cluster) for more information.
+This package is used in the detection pipeline of the Autoware architecture.
+It provides two Euclidean clustering methods:
+`euclidean_cluster` and `voxel_grid_based_euclidean_cluster`.
+In the default design of Autoware,
+`voxel_grid_based_euclidean_cluster` is the method that is used.
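+
+The main tunable parameters of the default method are defined in the
+[`voxel_grid_based_euclidean_cluster.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/clustering/voxel_grid_based_euclidean_cluster.param.yaml) file.
+The sketch below is only illustrative: the parameter names follow the upstream `euclidean_cluster` package,
+and the values shown are typical defaults that may differ in your Autoware version.
+
+```yaml
+# Illustrative excerpt of voxel_grid_based_euclidean_cluster.param.yaml (values may differ per release)
+/**:
+  ros__parameters:
+    tolerance: 0.7                 # maximum distance [m] between points grouped into the same cluster
+    voxel_leaf_size: 0.3           # voxel size [m] used to downsample the point cloud before clustering
+    min_points_number_per_voxel: 1 # points required for a voxel to be kept
+    min_cluster_size: 10           # clusters with fewer points are discarded; lowering this keeps small objects but admits more noise
+    max_cluster_size: 3000         # clusters with more points are discarded
+```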
+
+The YTU campus environment contains many small objects such as birds,
+dogs, cats, balls, and cones. To detect, track,
+and predict these small objects, we want the clustering to produce clusters that are as small as possible.
+
+First, we will change the object filter method from `lanelet_filter` to `position_filter`
+in [`tier4_perception_component.launch.xml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/launch/components/tier4_perception_component.launch.xml)
+so that objects outside the lanelet boundaries are not filtered out.
+
+```diff
+- <arg name="detected_objects_filter_method" default="lanelet_filter" description="options: lanelet_filter, position_filter"/>
++ <arg name="detected_objects_filter_method" default="position_filter" description="options: lanelet_filter, position_filter"/>
+```
+
+After changing the filter method for objects,
+the output of our perception pipeline looks like the image below:
+
+![initial-clusters](images/initial-clusters.png)
+
+Now, we can detect unknown objects that are outside the lanelet map,
+but we still need to update the filter range
+or disable the filter for unknown objects in the [`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml) file.
+
+```diff
+ upper_bound_x: 100.0
+- lower_bound_x: 0.0
++ lower_bound_x: -100.0
+- upper_bound_y: 10.0
++ upper_bound_y: 100.0
+- lower_bound_y: -10.0
++ lower_bound_y: -100.0
+```
+
+Alternatively, you can simply disable the filter for objects labeled `UNKNOWN`.
+
+```diff
+- UNKNOWN : true
++ UNKNOWN : false
+```
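+
+For reference, after these edits the relevant part of
+[`object_position_filter.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/object_filter/object_position_filter.param.yaml)
+would look roughly like the sketch below.
+The layout follows the upstream `detected_object_validation` package,
+so double-check the file in your Autoware version before copying values.
+
+```yaml
+# Illustrative sketch of object_position_filter.param.yaml after widening the bounds.
+# Keep UNKNOWN : true here if you widen the bounds instead of disabling the filter for unknown objects.
+/**:
+  ros__parameters:
+    filter_target_label:
+      UNKNOWN : true
+      CAR : false
+      TRUCK : false
+      BUS : false
+      TRAILER : false
+      MOTORCYCLE : false
+      BICYCLE : false
+      PEDESTRIAN : false
+    upper_bound_x: 100.0
+    lower_bound_x: -100.0
+    upper_bound_y: 100.0
+    lower_bound_y: -100.0
+```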
+
+After that,
+we can update our clustering parameters,
+since objects are now detected without being filtered against the lanelet2 map.
+As mentioned earlier, we want to detect small objects.
+Therefore,
+we will decrease the minimum cluster size to 1 in the [`voxel_grid_based_euclidean_cluster.param.yaml`](https://github.com/autowarefoundation/autoware_launch/blob/main/autoware_launch/config/perception/object_recognition/detection/clustering/voxel_grid_based_euclidean_cluster.param.yaml) file.
+
+```diff
+- min_cluster_size: 10
++ min_cluster_size: 1
+```
+
+After making these changes, our perception output is shown in the following image:
+
+![after-tuning-clustering](images/after-tuning-clustering.png)
+
+If you still want to filter objects after fine-tuning the clustering,
+you can use either the lanelet filter or the position filter for unknown objects.
+Please refer to the [`detected_object_validation` package documentation](https://autowarefoundation.github.io/autoware.universe/main/perception/detected_object_validation/) for further information.
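+
+For example, re-enabling the lanelet filter only for unknown objects could look like the sketch below.
+This assumes the `object_lanelet_filter.param.yaml` layout of the `detected_object_validation` package,
+so check the file in your Autoware version before applying it.
+
+```yaml
+# Illustrative sketch of object_lanelet_filter.param.yaml:
+# only UNKNOWN objects are filtered against the lanelet map; all other labels pass through.
+/**:
+  ros__parameters:
+    filter_target_label:
+      UNKNOWN : true
+      CAR : false
+      TRUCK : false
+      BUS : false
+      TRAILER : false
+      MOTORCYCLE : false
+      BICYCLE : false
+      PEDESTRIAN : false
+```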
\ No newline at end of file