-
Hi @yash7321 Does accuracy noticeably improve if you use 848x480 for the depth and color resolution instead of 640x480? 848x480 is the optimal depth resolution for accuracy on most RealSense 400 Series cameras (on the D415 model, 1280x720 is the optimal resolution).

The Hole-Filling post-processing filter is disabled by default in the RealSense Viewer, so please also try disabling that filter to see whether your results in Python are closer to those in the Viewer.

Intel's guidelines for the order in which to apply post-processing filters recommend that depth_to_disparity is applied before the Spatial filter: https://dev.intelrealsense.com/docs/post-processing-filters#using-filters-in-application-code It is also recommended that alignment is applied after the list of post-processing 'process' lines, whereas in your script alignment is applied before the post-processing filters are applied; see the sketch below.
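As an illustration only, here is a minimal sketch of that recommended order in pyrealsense2. It assumes default filter settings and uses the community pattern of running each filter over the whole frameset (casting back with as_frameset()) so that alignment can come last; none of these settings are taken from your script.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # default depth + color streams; adjust the resolution as discussed above

# Create the filters once, outside the frame loop
decimation = rs.decimation_filter()
depth_to_disparity = rs.disparity_transform(True)   # depth -> disparity
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
disparity_to_depth = rs.disparity_transform(False)  # disparity -> depth
align = rs.align(rs.stream.color)

frames = pipeline.wait_for_frames()

# Post-processing chain in the recommended order (Hole-Filling omitted, matching the Viewer defaults)
frames = decimation.process(frames).as_frameset()
frames = depth_to_disparity.process(frames).as_frameset()
frames = spatial.process(frames).as_frameset()
frames = temporal.process(frames).as_frameset()
frames = disparity_to_depth.process(frames).as_frameset()

# Align depth to color only after the filters have run
aligned_frames = align.process(frames)
depth_frame = aligned_frames.get_depth_frame()
color_frame = aligned_frames.get_color_frame()
```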
-
Your camera is in an upright position if the side with the large single central screw-thread hole (where the tripod attaches) is facing straight downwards towards the ground. Yes, the IMU axes in the above image represent the upright position, facing forwards. After aligning depth to color, the 0,0,0 origin of depth changes from the center-line of the left infrared sensor to the center-line of the RGB sensor.
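To make that origin shift concrete, here is a minimal sketch (the stream setup is assumed, not taken from any particular configuration): once the frameset has been aligned to color, deprojecting a depth pixel with the aligned frame's intrinsics gives a 3D point whose 0,0,0 origin is the RGB sensor rather than the left infrared imager.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # default depth + color streams assumed
align = rs.align(rs.stream.color)

frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()

# Intrinsics of the depth frame after alignment (they now match the color sensor)
intrin = depth_frame.profile.as_video_stream_profile().get_intrinsics()

# Deproject the centre pixel: the resulting [X, Y, Z] in metres is expressed
# relative to the RGB sensor; without alignment it would be relative to the left IR imager
u, v = intrin.width // 2, intrin.height // 2
point = rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_frame.get_distance(u, v))
print(point)
```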
-
Can you tell me exactly how the camera and IMU axes are aligned? I have gone through the files but am still confused.

Camera Coordinate System (Intel RealSense):
X-axis: Right

IMU Coordinate System (Intel RealSense D435i):
Accelerometer: X-axis: left
Gyroscope: X-axis: Pitch (rotation around X-axis)

Is this correct?
-
Camera coordinate system:
The positive x-axis points to the right.
The positive y-axis points down.
The positive z-axis points forward.

So you are correct about these camera axes.

IMU coordinate system:
Acceleration uses the same vectors as the camera coordinate system (x-right, y-down, z-forward). The arrows on the diagram below illustrate the directions of the gyroscopic x, y and z angles.
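If it helps, here is a minimal sketch of reading those values in Python. The motion stream rates are typical D435i profiles chosen as an assumption, and the comments describe the axis conventions rather than exact readings.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)  # assumed rate
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)   # assumed rate
pipeline.start(config)

# Wait until a frameset containing both motion streams arrives
while True:
    frames = pipeline.wait_for_frames()
    accel_frame = frames.first_or_default(rs.stream.accel)
    gyro_frame = frames.first_or_default(rs.stream.gyro)
    if accel_frame and gyro_frame:
        break

accel = accel_frame.as_motion_frame().get_motion_data()
gyro = gyro_frame.as_motion_frame().get_motion_data()

# Accelerometer values are in m/s^2 on the camera axes (x right, y down, z forward),
# so with the camera level and upright the ~9.8 m/s^2 gravity component appears on the y axis.
print("accel:", accel.x, accel.y, accel.z)

# Gyroscope values are angular velocity in rad/s about those same x, y and z axes.
print("gyro:", gyro.x, gyro.y, gyro.z)
```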
-
No matter what filters or approaches I use, I can't seem to achieve the same depth image accuracy in Python as I do with the Intel RealSense Viewer. Is there a way to improve this? Below is the code I've written. Do you have any suggestions to make it better?
```python
# camera.py
import pyrealsense2 as rs
import numpy as np

def initialize_camera():
    # Body omitted in the post; a guessed minimal setup that also enables the IMU streams
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_all_streams()
    pipeline.start(config)
    align = rs.align(rs.stream.color)
    return pipeline, align

def get_frames(pipeline, align, decimation, spatial, temporal, hole_filling, threshold, sync_tolerance=70):
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)
    depth_frame = aligned_frames.get_depth_frame()
    color_frame = aligned_frames.get_color_frame()
    accel_frame = frames.first_or_default(rs.stream.accel)
    gyro_frame = frames.first_or_default(rs.stream.gyro)
    return depth_frame, color_frame, accel_frame, gyro_frame
```
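For completeness, one hypothetical way the two functions above could be driven is sketched below; the filter objects are plain default instances, since the full script is not shown in the post.

```python
# Hypothetical driver for camera.py above; filter settings are library defaults, not tuned values
import numpy as np
import pyrealsense2 as rs
from camera import initialize_camera, get_frames

pipeline, align = initialize_camera()
decimation = rs.decimation_filter()
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
hole_filling = rs.hole_filling_filter()
threshold = rs.threshold_filter()

depth_frame, color_frame, accel_frame, gyro_frame = get_frames(
    pipeline, align, decimation, spatial, temporal, hole_filling, threshold)

# Convert to numpy arrays for further processing or display
depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())
```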