
Kinect pointclouds #192

Open
davidmoshal opened this issue Sep 14, 2021 · 5 comments

Comments

@davidmoshal

Hi, this project looks great.
Wondering if Kinect point clouds are supported?
Are there any point cloud or 3D libraries?

@morisil
Contributor

morisil commented Nov 27, 2021

Hi @davidmoshal , I contributed the original Kinect v1 support. It differs from Kinect v1 support in other projects in that the transformation of raw Kinect depth data into floating-point values already happens on the GPU, in a fragment shader. I saw the formula for transforming Kinect data into a point cloud at some point. It is relatively simple and could also be implemented in the shader. The question, however, is what such a shader should output. I can imagine:

  1. using GL_POINTS and having a fragment shader encode the points, computed from the raw Kinect data, as color vectors into a buffer, which can then be used for drawing instances directly on the GPU
  2. preparing a whole generative mesh in a similar fashion
  3. achieving a similar effect with a vertex shader that extrudes a texture based on the same formula for calculating point cloud coordinates (a rough sketch of this follows below)

I have wanted to provide these features for a long time, but I am still learning the skills that would let me build them on top of what I have already contributed. Maybe at the beginning of next year I will add this as a component generalized over any depth camera device, to realize a truly predictable "3D scanner" out of any possible input.
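
To make option 3 a bit more concrete, here is a minimal vertex shader sketch (only an assumption of how it could look, not existing orx code): it expects the depth already converted to meters in a texture and the camera intrinsics passed in as uniforms, and displaces one grid vertex per depth pixel.

// GLSL vertex shader kept as a C++ string; one vertex per depth pixel of a flat grid mesh.
static const char* kExtrudeVert = R"(
#version 330
uniform sampler2D depthMeters;  // depth already converted to meters on the GPU
uniform mat4 viewProjection;    // camera matrix supplied by the host program
uniform vec2 focalInverse;      // (1/fx, 1/fy) of the depth camera
uniform vec2 principalPoint;    // (cx, cy) of the depth camera
in vec2 gridCoord;              // vertex position on a [0..639] x [0..479] grid

void main() {
    float depth = texelFetch(depthMeters, ivec2(gridCoord), 0).r;
    vec3 world = vec3((gridCoord - principalPoint) * depth * focalInverse, depth);
    gl_Position = viewProjection * vec4(world, 1.0);
}
)";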

@morisil
Contributor

morisil commented Nov 28, 2021

Just for reference, here is an article providing the RawDepthToMeters mapping and the DepthToWorld mapping for Kinect v1:

http://graphics.stanford.edu/~mdfisher/Kinect.html

float RawDepthToMeters(int depthValue)
{
    // 11-bit raw Kinect disparity to metric depth; 2047 marks an invalid sample
    if (depthValue < 2047)
    {
        return float(1.0 / (double(depthValue) * -0.0030711016 + 3.3309495161));
    }
    return 0.0f;
}

Vec3f DepthToWorld(int x, int y, int depthValue)
{
    // Inverse focal lengths and principal point of the Kinect v1 depth camera
    static const double fx_d = 1.0 / 5.9421434211923247e+02;
    static const double fy_d = 1.0 / 5.9104053696870778e+02;
    static const double cx_d = 3.3930780975300314e+02;
    static const double cy_d = 2.4273913761751615e+02;

    Vec3f result;
    const double depth = RawDepthToMeters(depthValue);
    result.x = float((x - cx_d) * depth * fx_d);
    result.y = float((y - cy_d) * depth * fy_d);
    result.z = float(depth);
    return result;
}
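
As a quick CPU-side illustration (my own sketch, not from the article), the two functions above could be combined like this to turn a full 640x480 frame of raw depth values into a point cloud, skipping saturated samples:

#include <cstdint>
#include <vector>

struct Vec3f { float x, y, z; };                   // stand-in for the Vec3f type used above

float RawDepthToMeters(int depthValue);            // as defined above
Vec3f DepthToWorld(int x, int y, int depthValue);  // as defined above

// Converts one frame of 11-bit raw Kinect v1 depth values into world-space points.
std::vector<Vec3f> FrameToPointCloud(const std::vector<uint16_t>& rawDepth,
                                     int width = 640, int height = 480)
{
    std::vector<Vec3f> cloud;
    cloud.reserve(rawDepth.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int raw = rawDepth[y * width + x];
            if (raw < 2047) {                      // 2047 marks "no valid reading"
                cloud.push_back(DepthToWorld(x, y, raw));
            }
        }
    }
    return cloud;
}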

@davidmoshal
Author

davidmoshal commented Dec 18, 2021

@morisil thanks! That's an awesome link, much appreciated!!

@hamoid
Member

hamoid commented Aug 25, 2022

Has something changed regarding this issue with the recent release of the depth camera orx?

@morisil
Contributor

morisil commented Sep 7, 2022

@hamoid Depth-to-meters is now implemented in the shader that processes the raw Kinect data, which is a precondition. Depth-to-world would work best with a compute shader (or a fragment shader, GPGPU style, for compatibility) calculating the positions of a buffer of instances, to avoid a round trip back to the CPU. Either points or quad vertices could be used. It is not that difficult to implement, and I want to do it at some point, less for the sake of having a point cloud and more for the sake of having a proportional camera perspective for diverse projection mapping / space mapping conditions. A rough sketch of the GPGPU variant is below.
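
Just to illustrate the idea (a sketch with assumed names and the Stanford intrinsics quoted above, not code that exists in orx yet): a fragment shader rendering into an RGBA32F attachment could write one world-space position per texel, which an instanced draw of points or quads can then read without ever going back to the CPU.

// GLSL fragment shader kept as a C++ string; applies the DepthToWorld mapping per texel.
static const char* kDepthToWorldFrag = R"(
#version 330
uniform sampler2D depthMeters;   // output of the existing raw-depth-to-meters pass
in vec2 v_texCoord;
out vec4 worldPosition;          // rendered into an RGBA32F color attachment

// Kinect v1 intrinsics from the Stanford article quoted earlier (assumed calibration)
const vec2 focalInverse   = vec2(1.0 / 594.21434211923247, 1.0 / 591.04053696870778);
const vec2 principalPoint = vec2(339.30780975300314, 242.73913761751615);

void main() {
    vec2 pixel = v_texCoord * vec2(textureSize(depthMeters, 0));
    float depth = texture(depthMeters, v_texCoord).r;   // meters, 0.0 where invalid
    vec3 world = vec3((pixel - principalPoint) * depth * focalInverse, depth);
    worldPosition = vec4(world, depth > 0.0 ? 1.0 : 0.0); // alpha marks valid samples
}
)";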
