Use dual-source blending for rendering the sky #17672

Open
wants to merge 5 commits into main

Conversation

@ecoskey (Contributor) commented Feb 4, 2025

Objective

Previously, only the alpha channel was available for blending, so we stored the mean of the RGB transmittance in the aerial view LUT, resulting in grayer fog than expected.

Solution

  • Calculate the transmittance to the scene in render_sky with two samples from the transmittance LUT.
  • Use dual-source blending to effectively get per-component alpha blending (see the sketch below).
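
For illustration, here is a minimal sketch of what a dual-source fragment output can look like, assuming the WGSL dual_source_blending extension (the attribute spelling varies between naga/wgpu versions) and hypothetical struct/field names rather than the PR's exact code. With the pipeline's color blend factors set to One for the source and Src1 for the destination, the hardware computes dst.rgb = inscattering.rgb + transmittance.rgb * dst.rgb, i.e. per-channel alpha blending:

```wgsl
enable dual_source_blending;

struct RenderSkyOutput {
    // First source: inscattered light added on top of the scene (blend factor One).
    @location(0) @blend_src(0) inscattering: vec4<f32>,
    // Second source: per-channel transmittance, used as the destination blend
    // factor (BlendFactor::Src1), so the scene color is attenuated per channel.
    @location(0) @blend_src(1) transmittance: vec4<f32>,
}

@fragment
fn fragment() -> RenderSkyOutput {
    var out: RenderSkyOutput;
    // Placeholder values; the real shader derives these from the atmosphere LUTs.
    out.inscattering = vec4(0.0, 0.0, 0.0, 1.0);
    out.transmittance = vec4(1.0, 1.0, 1.0, 1.0);
    return out;
}
```

This is what the single-alpha path couldn't express: with one scalar alpha, the scene color can only be scaled uniformly, hence the mean of the RGB transmittance and the gray-looking fog.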

@ecoskey added the A-Rendering (Drawing game state to the screen), D-Modest (A "normal" level of difficulty; suitable for simple features or challenging fixes), and S-Needs-Review (Needs reviewer attention (from anyone!) to move forward) labels on Feb 4, 2025
@alice-i-cecile added this to the 0.16 milestone on Feb 4, 2025
@alice-i-cecile (Member)

@mate-h this one is for you :)

@mate-h (Contributor) commented Feb 4, 2025

I've had an initial look at this, and using dual-source blending is a great idea; the diff looks good to me. I went to test things out and noticed some inaccuracies compared to the ground truth I expected to see: the results seem to shift the colors of the rendered geometry toward orange.
I dug a bit deeper, and it's likely an issue in either how we sample the transmittance LUT or how we calculate the transmittance through the atmosphere for a path segment from the camera to the scene depth using that function.
I will continue investigating and testing this further.

I pushed all my testing code to the https://github.com/mate-h/bevy/tree/atmosphere-test branch and will keep working on that.

(screenshot attached)

Thanks @mate-h for more debugging help and your fancy reference raymarcher!
@mate-h (Contributor) commented Feb 7, 2025

After this last iteration and more thorough testing, we were able to identify the source of the issue. I've tested both cases of geometry above and below the horizon, and the math checks out compared to the raymarched reference.
Test case for when the ray intersects the ground:
(image attached)

Test case for when the ray does not intersect the ground (else block):
(image attached)

@@ -118,6 +118,23 @@ fn sample_transmittance_lut(r: f32, mu: f32) -> vec3<f32> {
return textureSampleLevel(transmittance_lut, transmittance_lut_sampler, uv, 0.0).rgb;
}

//should be in bruneton_functions, but wouldn't work in that module bc imports. What to do wrt licensing?
@mate-h (Contributor) Feb 7, 2025

I think we should just leave a comment saying that the license in the bruneton_functions.wgsl file applies to this function, or come up with some way to reorganize the imports. Also, in general, since the code is not exactly the same as Bruneton's version but a WGSL rewrite, I'm not sure how the licensing works because I'm no legal expert. Does it still apply?

Contributor

If it's a direct port with no significant practical changes then I'd say the license still applies.
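
For context, here is a rough sketch of the segment-transmittance idea in the spirit of Bruneton's GetTransmittance; the function name and signature match the call site in the diff below, but the body and the ray_intersects_ground helper are assumptions, not necessarily the PR's exact implementation:

```wgsl
// Transmittance from the camera (radius r, view cosine mu) to a point at
// distance t along the view ray, built from two transmittance-LUT samples.
fn sample_transmittance_lut_segment(r: f32, mu: f32, t: f32) -> vec3<f32> {
    // Radius and view cosine at the far end of the segment.
    let r_t = sqrt(t * t + 2.0 * r * mu * t + r * r);
    let mu_t = (r * mu + t) / r_t;

    if ray_intersects_ground(r, mu) {
        // Downward rays: sample both endpoints with negated mu so neither
        // lookup crosses the planet surface, then take the ratio.
        return min(
            sample_transmittance_lut(r_t, -mu_t) / sample_transmittance_lut(r, -mu),
            vec3(1.0)
        );
    } else {
        // Upward rays: T(camera -> P) = T(camera -> space) / T(P -> space).
        return min(
            sample_transmittance_lut(r, mu) / sample_transmittance_lut(r_t, mu_t),
            vec3(1.0)
        );
    }
}
```

The two transmittance-LUT fetches here, plus the aerial view LUT fetch for inscattering, account for the three texture samples discussed in the performance note further down.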

return sample_aerial_view_lut(in.uv, depth);
let t = ndc_to_camera_dist(vec3(uv_to_ndc(in.uv), depth));
inscattering = sample_aerial_view_lut(in.uv, t);
transmittance = sample_transmittance_lut_segment(r, mu, t);
@mate-h (Contributor) Feb 7, 2025

Just a note on performance: we should start profiling this, since we are now taking 3 texture samples per fragment here to determine both inscattering and transmittance, as opposed to the 1 we used before. Although it seems negligible.

@ecoskey (Contributor, Author) Feb 7, 2025

Compared to full per-pixel raymarching, I'm not too concerned. If it ends up being an issue, we can always add a fallback to the old way.

@mate-h (Contributor) commented Feb 8, 2025

@alice-i-cecile this one is ready for final review and merge when you get a chance!

@alice-i-cecile (Member)

Needs a second rendering review :)

@@ -118,6 +118,23 @@ fn sample_transmittance_lut(r: f32, mu: f32) -> vec3<f32> {
return textureSampleLevel(transmittance_lut, transmittance_lut_sampler, uv, 0.0).rgb;
}

//should be in bruneton_functions, but wouldn't work in that module bc imports. What to do wrt licensing?
Member

This needs to be resolved before we can merge. I'd leave a little comment here.

@@ -130,23 +147,31 @@ fn sample_sky_view_lut(r: f32, ray_dir_as: vec3<f32>) -> vec3<f32> {
return textureSampleLevel(sky_view_lut, sky_view_lut_sampler, uv, 0.0).rgb;
}

fn ndc_to_camera_dist(ndc: vec3<f32>) -> f32 {
Contributor

Note that there is also a Bevy PBR builtin for this, bevy_pbr::view_transformations::depth_ndc_to_view_z, and the return value could then just be multiplied by settings.scene_units_to_m. That function also seems to take the projection into account.

Contributor (Author)

I believe that function only returns the "parallel" distance from the camera to the object, i.e. |P - C| cos θ, where θ is the angle between P - C and the camera forward vector. So we could use it, but we'd have to divide out that cos θ term, which isn't too difficult. There's also the issue of bindings, so we'd still have to copy the function into the local shaders.
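
As a rough illustration of the distinction (setting aside the binding concern for a moment): the sky pass wants the true Euclidean camera-to-fragment distance, while depth_ndc_to_view_z only gives the view-space depth |P - C| cos θ. A sketch, with assumed helper and binding names (position_ndc_to_world, view, settings) and not necessarily the PR's actual body:

```wgsl
// Sketch only: Euclidean camera-to-fragment distance from an NDC position.
fn ndc_to_camera_dist(ndc: vec3<f32>) -> f32 {
    // Unproject to world space and measure the straight-line distance.
    let world_pos = position_ndc_to_world(ndc);
    let dist = length(world_pos - view.world_position);
    // Convert from scene units to meters for the atmosphere math.
    return dist * settings.scene_units_to_m;
}

// In contrast, depth_ndc_to_view_z(ndc.z) returns only the "parallel" component
// |P - C| * cos(theta); recovering the full distance from it would require
// dividing by cos(theta) between the view ray and the camera forward axis.
```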

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Oh true I didn't think about the bindings, scratch that :D
