grab the ground #126

Open
qyp-robot opened this issue Apr 7, 2022 · 17 comments

@qyp-robot

Hello @atenpas, when I simulate in Gazebo my camera looks down from overhead, so the recognized point cloud contains the ground; after I run GPD, the final grasp pose grabs the edge of the ground. Shouldn't a filter be added to remove the ground? Is there a specific way to do this?

[images attached]

@ojijinWang

ojijinWang commented Apr 7, 2022

I am not @atenpas, lol. You have two options to remove them. One: use the workspace parameter to filter out the grasp candidates that fall outside the workspace. Two (I only mention it here because we have not released the code yet): use the approach from our follow-up paper, "Robust grasp detection with incomplete point cloud and complex background". If you want, I will discuss with my co-author about sending you the code that we have not published yet. I reply because we ran into the same problem and the workspace filter alone is not good enough, so we proposed an occlusion check, although the reviewers said it might not really be necessary.

@qyp-robot
Author

Okay, thank you very much for your answer. I remember we have interacted before; thank you for offering a solution. At the moment I am still reproducing and understanding the original GPD paper, so I will work through that algorithm first. If I need more later, I look forward to learning from your work. Thank you very much, and I hope you will have time to help me again!

@qyp-robot
Author

Hello @ojijinWang, is shrinking the grasp workspace and the point cloud workspace my only option? When I shrink them to a cube with a side length of 0.5, GPD finds 0 grasp poses. After enlarging both the workspace and the grasp workspace to a cube with a side length of 1.0, it still grasps the edge of the ground. Thank you very much!
[image attached]

@ojijinWang

It's a little bit complex, so let me explain point by point.
First, the coordinates. GPD works in the same frame as the input point cloud. You should write code that transforms the point cloud into your chosen frame, e.g. from the camera frame to the robot frame. In RViz you might see the point cloud in the robot frame, but that transform happens inside RViz purely for visualization; the point cloud data itself must be transformed by you before it reaches GPD. For point cloud transforms, see PCL, a C++ library for point cloud processing.
Second, the workspace. There are two workspaces. One crops the input point cloud before candidates are generated; the other filters the generated candidates. The point cloud workspace should be bigger than the filtering workspace, and it should include the background so that candidates can be collision-checked against it.
Third, the parameters. The unit is meters, and the origin is the origin of the point cloud frame.
BTW, for the convenience of other readers, we'd better use English here, or you can ask me privately in another language.
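To illustrate the first point, here is a minimal sketch of transforming the cloud into the robot frame before feeding it to GPD, assuming ROS 1 with tf and pcl_ros; the topic names /camera/depth/points and cloud_base and the frame name base_link are placeholders for your setup:

// Transform an incoming camera cloud into the robot base frame before GPD.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <tf/transform_listener.h>
#include <pcl_ros/transforms.h>

tf::TransformListener* g_tf_listener;
ros::Publisher g_cloud_pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud_in)
{
  sensor_msgs::PointCloud2 cloud_base;
  // Transform from the camera frame (cloud_in->header.frame_id) to base_link.
  if (pcl_ros::transformPointCloud("base_link", *cloud_in, cloud_base, *g_tf_listener))
    g_cloud_pub.publish(cloud_base);  // feed this topic to gpd_ros
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_to_base_frame");
  ros::NodeHandle nh;
  tf::TransformListener tf_listener;
  g_tf_listener = &tf_listener;
  g_cloud_pub = nh.advertise<sensor_msgs::PointCloud2>("cloud_base", 1);
  ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
  ros::spin();
  return 0;
}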

@qyp-robot
Author

Thank you for your reply.
I understand the transforms between coordinate frames: transform the point cloud from the camera frame to the robot body, and reach the end-effector gripper through the kinematic chain of the robot joints. So should the point cloud that is input to GPD be transformed from the camera frame into the robot body frame, so that the grasp pose output by GPD is expressed in the robot body frame?
I also understand the two workspaces. In my understanding, the point cloud input to GPD should be the raw point cloud after segmentation and filtering that separate the object from the ground. To achieve this, should I process the point cloud with the PCL library and then feed the result to GPD?

@ojijinWang

I didn't get your question, so let me state my idea again.

  1. Transform the point cloud to the robot frame (base_link, I think) using PCL.
  2. The two workspaces are both parameters of GPD. Your parameters are wrong: if you want a workspace of 0.5 x 0.5 x 0.5, the values should be -0.25, 0.25, -0.25, 0.25, -0.25, 0.25, i.e. [x_min, x_max, y_min, y_max, z_min, z_max] (see the example cfg lines below).
  3. Don't use PCL to remove the ground. If you remove the ground, GPD will treat that region as free space and you will get wrong candidates where the fingers of your expensive gripper poke into the ground. But you can use PCL to remove background that is not relevant to collision checking.
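For reference, a sketch of how these two workspaces typically look in a GPD cfg file; the parameter names workspace (crop of the input cloud) and workspace_grasps (filter on the candidates) are taken from recent GPD config files and may differ in your version:

# crop the input point cloud; keep the ground/background for collision checking
workspace = -1.0 1.0 -1.0 1.0 -1.0 1.0
# keep only candidates inside a tighter box around the object
# format: x_min x_max y_min y_max z_min z_max, in meters, in the point cloud frame
workspace_grasps = -0.25 0.25 -0.25 0.25 -0.25 0.25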

@qyp-robot
Author

Thanks for helping me correct the workspace parameter error!
Is my understanding right? If I removed the ground, GPD would treat it as free space, so the gripper could end up colliding with the ground, which is why removing it is not an option.
The point cloud returned from Gazebo contains the ground in addition to the object, and the final output gripper grasps the ground. As I said before, is shrinking the workspace the only way to solve this, or is the method in your paper needed?
It seems GitHub does not have a private message feature. This is my mailbox [email protected]; if it is convenient for you, we can continue by email. Thank you for your answer!

@qyp-robot
Author

Thank you for your answer, I found the cause of my problem. The reason the VTK window appears like this is probably that the reference coordinate frame was unknown and different, which changes the viewing angle: I originally planned to test GPD with the camera alone, but in this program the base_link coordinates were added. That should be the problem. I will go back through the code, and if I have questions I will ask you again. Thank you very much for your answer, it gave me a lot of pointers.

@qyp-robot
Author

Hello @ojijinWang, about the gripper pose output by gpd_ros: the position can be transformed through the transform between the camera and base_link, but how do I send the output pose to MoveIt so that the robotic arm moves to the specified pose? The format of the output message does not seem to be something the move_group node accepts directly. Can you help me? Thank you very much!

@qyp-robot
Author

Hello @ojijinWang, the message output by the gpd_ros package contains a position and orientation in the camera frame; after transforming it out of the camera frame it should be the correct end-effector pose for the robot.
If I grasp several objects in sequence and set the number of output grasps to 10, and I want MoveIt to execute them one by one, in what format should they be passed to the move_group interface? If you have relevant code, could I refer to it? Thank you so much!

[image attached]

@ojijinWang

ojijinWang commented May 28, 2022

Sorry for the late response. I used the following code to generate the pose msgs; the MoveIt interface accepts pose inputs. You need to modify the code a little to transform the coordinates to your robot configuration. The point cloud I used had already been transformed to the robot base before gpd_ros, so I don't need to transform the pose to the robot base. Also, since I have modified the GPD source code and added some things, you might have trouble building it, but I think you can fix that.

// Assumed includes (ROS 1 / tf / eigen_conversions). Grasp and gpd_fix::GraspConfig
// come from my modified GPD / gpd_ros packages.
#include <geometry_msgs/Pose.h>
#include <eigen_conversions/eigen_msg.h>
#include <tf/transform_datatypes.h>
#include <Eigen/Geometry>

// Convert a GPD grasp into a geometry_msgs::Pose.
geometry_msgs::Pose convertToPoseMsg(const Grasp& hand)
{
  // Fill the GraspConfig message (not used for the returned pose).
  gpd_fix::GraspConfig msg;
  tf::pointEigenToMsg(hand.getGraspBottom(), msg.bottom);
  tf::pointEigenToMsg(hand.getGraspTop(), msg.top);
  tf::pointEigenToMsg(hand.getGraspSurface(), msg.surface);
  tf::vectorEigenToMsg(hand.getApproach(), msg.approach);
  tf::vectorEigenToMsg(hand.getBinormal(), msg.binormal);
  tf::vectorEigenToMsg(hand.getAxis(), msg.axis);
  msg.width.data = hand.getGraspWidth();
  msg.score.data = hand.getScore();
  tf::pointEigenToMsg(hand.getSample(), msg.sample);

  // Position: the grasp bottom point, optionally offset along the approach vector.
  Eigen::Vector3d base = hand.getGraspBottom() - 0.00 * hand.getApproach();
  geometry_msgs::Point base_point;
  tf::pointEigenToMsg(base, base_point);
  geometry_msgs::Point position = base_point;

  // Orientation: quaternion built from the grasp frame (approach/binormal/axis).
  geometry_msgs::Quaternion orientation;
  Eigen::Quaterniond quat(hand.getFrame());
  orientation.x = quat.x();
  orientation.y = quat.y();
  orientation.z = quat.z();
  orientation.w = quat.w();

  geometry_msgs::Pose pose;
  pose.position = position;
  pose.orientation = orientation;

  return pose;
}

// Adapt the GPD grasp pose to the end-effector convention expected by MoveIt.
void transform_gpd_to_moveit(const geometry_msgs::Pose& effector_pose_in, geometry_msgs::Pose& effector_pose_out)
{
  effector_pose_out = effector_pose_in;

  // Check the Z axis direction:
  // vz = (0,0,1); vz = R * vz; if vz.z < 0 the gripper z axis points downward, so rotate pi about X.
  tf::Pose normal_vector_z = tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.0, 0.0, 1.0));
  // Build a rotation-only transform from the pose.
  tf::Pose transform_tf;
  geometry_msgs::Pose pose_trans = effector_pose_out;
  pose_trans.position.x = 0.0;
  pose_trans.position.y = 0.0;
  pose_trans.position.z = 0.0;
  tf::poseMsgToTF(pose_trans, transform_tf);
  normal_vector_z = transform_tf * normal_vector_z;
  if (normal_vector_z.getOrigin().z() < 0)  // the z axis points downward, rotate it
  {
    tf::Quaternion quat_x180 = tf::createQuaternionFromRPY(3.14159, 0, 0);
    tf::Transform transform_x180 = tf::Transform(quat_x180, tf::Vector3(0.0, 0.0, 0.0));
    transform_tf = transform_tf * transform_x180;
    tf::poseTFToMsg(transform_tf, pose_trans);
    effector_pose_out.orientation = pose_trans.orientation;  // update the pose
  }

  // Change the coordinate definition from GPD to the MoveIt pose.
  // Static transform: rotate -pi/2 about Y, then pi about Z.
  tf::Transform transform;
  transform.setOrigin(tf::Vector3(0.0, 0.0, 0.0));  // rotation-only; origin must be set explicitly
  tf::Quaternion quat = tf::createQuaternionFromRPY(0, -1.5707, 3.14159);
  transform.setRotation(quat);
  // Apply the transform.
  tf::Pose pose_tf;
  tf::poseMsgToTF(effector_pose_out, pose_tf);
  pose_tf = pose_tf * transform;
  // Read back the transformed pose.
  geometry_msgs::Pose pose_transformed;
  tf::poseTFToMsg(pose_tf, pose_transformed);
  pose_transformed.position = effector_pose_out.position;  // only the axes are transformed; the position stays the same

  effector_pose_out = pose_transformed;
}
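On the MoveIt side, a minimal usage sketch under the assumption that the pose is already expressed in the planning frame; the planning group name "arm" and the variable names are placeholders:

#include <moveit/move_group_interface/move_group_interface.h>

// 'hand' is one grasp returned by GPD (best-scored first)
geometry_msgs::Pose target = convertToPoseMsg(hand);
geometry_msgs::Pose target_moveit;
transform_gpd_to_moveit(target, target_moveit);

moveit::planning_interface::MoveGroupInterface move_group("arm");
move_group.setPoseTarget(target_moveit);  // target pose of the end-effector link
moveit::planning_interface::MoveGroupInterface::Plan plan;
if (move_group.plan(plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS)
  move_group.execute(plan);

For several grasps, one common approach is to sort the candidates by score and try setPoseTarget/plan on each one until a plan succeeds.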

@qyp-robot
Author

Okay, thank you! For now I compute it manually because I am not very proficient in C++; I will add your code and test it. Thank you very much for your answer! @ojijinWang

@qyp-robot
Author

Hello @ojijinWang, what is the origin of the workspace in the cfg file? Is the origin the camera_pose?

@xins981

xins981 commented Oct 25, 2022

Try setting sample_above_plane = 1.
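If I remember the GPD config correctly, that is a single line in the .cfg file passed to GPD; with it enabled, GPD fits a support plane to the cloud and samples grasp candidates only above it, so candidates on the ground itself are avoided:

# sample grasp candidates only above the detected support plane (table/floor)
sample_above_plane = 1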

@lucarei

lucarei commented May 30, 2023

This is my interface between ROS point cloud messages and Open3D (a Python package for point cloud processing): ROS-Open3D interface. It is a general package, so you can plug in whatever Open3D processing you need. Floor removal is already implemented in my code.

@djjebdh

djjebdh commented Jun 26, 2024

Quoting @qyp-robot: "Hello @ojijinWang, is shrinking the grasp workspace and the working workspace my only option? When I shrank them to a cube with side length 0.5, GPD found 0 grasp poses; after adjusting the workspace and grasp workspace to a cube with side length 1.0, it still grasps the edge of the ground. Thank you very much!"
You can also modify the workspace to filter out these points.

@ojijinWang

@djjebdh
GPD has two filters, both relative to the camera position, that remove wrong grasps. You can set up a big workspace, generate many candidates, and then use a filter to remove the wrong candidates.
For a stronger filter that removes the ground and occluded/unsafe grasps, please refer to my modified GPD package:
https://github.com/ojijinWang/gpd_vcloud
