
about the formula 16.1 in pbrt-v3-book #336

Open
LittleTheFu opened this issue Jul 27, 2024 · 11 comments

@LittleTheFu

Here is the link:
https://www.pbr-book.org/3ed-2018/Light_Transport_III_Bidirectional_Methods/The_Path-Space_Measurement_Equation

I marked it with red lines; I don't know why A_film is missing.
[image: screenshot of the equation with the terms in question marked in red]

@rainbow-app

rainbow-app commented Oct 2, 2024

Yes, I'm late; it's been two months since you asked.

Let me say that this whole bidirectional topic is very vague in the book. After trying to understand it, I gave up and turned to Veach's PhD thesis. He describes the algorithm much better. Unfortunately, he's too abstract and doesn't provide any concrete examples.

Anyway, I can't comment on those integrals and answer your question.

However, if you want to understand how we get to the result (the importance expression), I think I can help you. I can write the proper (in my opinion, I'm a physicist by education) derivation -- from the measurement. Do you want to understand that?


For now I'll briefly describe why their derivation of the expression for the importance W is vague. Their two key arguments are:

  • normalization 16.3 (why this norm??)
  • W*cos proportional to the density p (again, why??).

Weird arguments, in my opinion, but OK, whatever. What's missing is a demonstration that the W so defined really measures radiance. After all, that is importance's sole purpose!

@LittleTheFu
Author

Thank you for your reply.
I still can't grasp the meaning of "W".
I searched for this concept, but I still can't understand it.
It would be very nice if you could explain it in detail.


> However, if you want to understand how we get to the result (the importance expression), I think I can help you. I can write the proper (in my opinion, I'm a physicist by education) derivation -- from the measurement. Do you want to understand that?

Yes, I want to understand it!!!

@rainbow-app

rainbow-app commented Oct 3, 2024

Let me repeat: I found the pbrt book very vague on BDPT, so I use Veach's formulas and his notation (splitting W into W^0 = spatial part and W^1 = directional part). Don't let his measure-theory stuff scare you -- I found it very easy to ignore.

Assume the camera is a pinhole = a point (I didn't consider realistic cameras). Assume it measures some radiance L from a remote area light.

See eq. 8.7 (p. 223) in Veach. The first term after the second equals sign gives us the measurement in our case. We'll derive the importance expression by equating it to L.
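For reference (writing from memory, so up to notation), that term is

$$I_j \;=\; \int_{\mathcal M}\int_{\mathcal M} W^{(j)}(x_0 \to x_1)\, L(x_0 \to x_1)\, G(x_0 \leftrightarrow x_1)\, dA(x_0)\, dA(x_1),$$

where $G(x_0 \leftrightarrow x_1) = V(x_0 \leftrightarrow x_1)\cos\theta_0\cos\theta_1/d^2$ is the usual geometry term.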

How the camera is set up:

  • The camera position is modeled as a (2D) delta-fn on a surface (A(x_1) in 8.7). The delta-fn is embedded into W^0. This surface is ignored for ray tracing.

  • The camera sensor is imaginary (it doesn't participate in ray tracing), and there's no integral over it. d_s is the distance from it to the camera center; d_s = 1 in the pbrt code.

First consider only 1 pixel on the sensor.

[image: importance derivation]
s = area of one pixel, S = area of the remote surface; they are related as shown.

Now that term becomes $\int_S L\, G\, W^1 C\, dA(x_0)$; it must equal L to measure brightness = radiance. This integral is only over the small remote surface S.

You should now be able to follow the simple arithmetic in the image.
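In case the image doesn't load, here is my reconstruction of that arithmetic (assumptions: the pixel sits at angle $\theta$ off the optical axis, the camera-surface normal is along the axis, and $W^1 = 1$ inside the pixel's cone and 0 outside). The pixel and the remote surface subtend the same solid angle at the camera center:

$$\Omega \;=\; \frac{s\cos^3\theta}{d_s^2} \;=\; \frac{S\cos\theta_0}{d^2}.$$

With $W^0(x_1) = C\,\delta(x_1 - x_{\mathrm{cam}})$ collapsing the $dA(x_1)$ integral and $G = \cos\theta_0\cos\theta/d^2$, the requirement "measurement = L" becomes

$$L \;=\; C\,L \int_S \frac{\cos\theta_0\cos\theta}{d^2}\, dA(x_0) \;\approx\; C\,L\cos\theta\,\Omega \;=\; C\,L\,\frac{s\cos^4\theta}{d_s^2} \quad\Longrightarrow\quad C \;=\; \frac{d_s^2}{s\cos^4\theta}.$$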

Now consider the full sensor, MxN pixels.

UPD see #336 (comment)

A few additional comments:

  • The index "j" on W on the LHS is hidden inside W^1: (1) the dependence on the angle, and (2) the "j" should (my mistake) have been expressed on the RHS by a multiplier like {1 inside the j-th pixel, 0 outside}.

  • W^1 (= the directional part) and C are dimensionless; the final W (and the delta-fn) has dimensions of 1/m^2.

  • C is delta-like: it goes to infinity as the pixel size goes to zero. That's OK: the smaller the pixel => the smaller the solid angle => the smaller S => less energy reaches the pixel => the multiplier C has to become larger to measure the same brightness.

Hope this is detailed enough.

@LittleTheFu
Author

Thank you for your comment, it's very kind of you.

But I'm stuck in the middle, namely on how to get W^0.
I know the definitions of W^0 and W^1, like this:
[image: the split W = W^0 * W^1]

But I don't know how to get the W^0 used in this step:
[image: the step in question]

@rainbow-app

Your second equation is good, but written the other way around. It should be W = W^0 * W^1; it's just how we split W (there's not much to think about).

The first one is good angle-wise (all the cosines cancel). Magnitude-wise -- no. If we consider only one pixel, there's no integral. You do write an integral, so it seems you're considering the final W for the whole sensor. You can't do that: no magic jumps, please. You need to derive it in two steps: (1) one pixel, (2) the whole sensor.

Neither of your two equations can be used as a definition. W^0 and W^1 are not defined; they are derived.

Now to your question.

W^0 is derived from the way you decide to model the camera. It doesn't follow from any equations. See "Camera position is modeled as a..." above. The general expression for W^0 (C times a delta-fn) follows from those words. I'm sure there can be other approaches; I just picked the simplest (to my taste) model that could fit into Veach's integrals.
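Explicitly, in my notation:

$$W^{0}(x_1) \;=\; C\,\delta(x_1 - x_{\mathrm{cam}}),$$

with the constant $C$ left free here and fixed later by requiring the measurement to equal $L$ (that's where the $d_s^2/(s\cos^4\theta)$ above came from).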

@rainbow-app

rainbow-app commented Oct 12, 2024

I guess that (or something else) is still not clear.

The measurement eq. (Veach 8.7) gives us the freedom to choose a surface (existing in the scene, or a newly introduced one) and a function (W). For a pinhole camera we don't need that much freedom: there's nothing to integrate (= average with a weight W). Well, we almost don't need it: we'd still like some averaging over the pixel for anti-aliasing purposes. But roughly speaking, yes, we don't need that freedom.

Remember, we are at step 1 of 2 = considering one pixel only.

The pixel value is determined by the energy from a very narrow solid-angle cone (again: no integral = no averaging = the pixel is small). The cone is determined by the position of its origin (the camera center) and the position and size of the pixel.
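(Concretely, my arithmetic: a pixel of area $s$ at angle $\theta$ off the axis, on a sensor at distance $d_s$, subtends at the camera center the solid angle

$$\Omega \;=\; \frac{s\cos\theta}{(d_s/\cos\theta)^2} \;=\; \frac{s\cos^3\theta}{d_s^2},$$

which vanishes as $s \to 0$ -- that's the "very narrow cone".)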

Now there can be two approaches:

  1. Fix the origin, and integrate over the pixel.

  2. Fix the pixel, and integrate over a surface that hosts the cone origin.

1st approach. Introduce a small sensor surface and integrate over it. We'd choose the spatial W^0 to be similar to a delta-fn for that pixel: approximate the integral by the product of the integrand and the small pixel area. The camera center is fixed somewhere else (behind the sensor = off the surface), but that doesn't matter because the camera is point-like anyway (point-like, yes, but in the implementation we can still set d_s = 1 -- it doesn't matter).

2nd approach. Introduce a small surface to host the camera center, and collapse the integral with a delta-fn (this time a real delta, at a mathematical point) in W^0 (this is our freedom). And there is no integral over the sensor surface. Well, roughly speaking: there will be an integral, but it's a different one, not like Veach 8.7; we'd approximate it as above.

We can choose either approach; each does its work (= measures radiance for the j-th pixel) properly. It is easy to see that both lead to the same result. I originally chose the 2nd.

(In both cases we equip the sensor with a small ideal lens; none of this participates in ray tracing.)


I don't mind you taking breaks, or reading Veach, or just living your life, but I was hoping you'd confirm that you had resolved it.

UPD. Imagine you are given the task of finding a surface and a function W that would give (measure) the radiance for a single pixel in a pinhole camera. Try to do it on your own. Most likely you'll end up with the same arguments and the same expression for W^0.

@LittleTheFu
Author

I'm sorry for taking such a long time.

After reading your post, I think I finally understand, but I'm not entirely sure. Let me repeat it to see if I've got it right.

The "W" we want is the integral over the red region (cone?).
The yellow line is one line that carries its own weight. And this is the directional function (or W^1, or the delta-function).
And the pixel area is W^0.
Because the pixel is so small, we can get the integral simply by multiplying the weight the yellow line carries by the pixel area.

See the markers in the picture.
The real meaning of "delta" here is that, given a point on the pixel, we get only one weight line. (As in the picture, the blue circle specifies the only yellow line.)

[image: pixel, cone, and the marked yellow line]

@rainbow-app

rainbow-app commented Oct 14, 2024

Something's right, something's wrong.

> Because the pixel is so small, we can get the integral simply by multiplying the weight the yellow line carries by the pixel area.

This is very right (a very simple idea, really). However, the preceding details (what exactly you mean by "the weight the yellow line carries") are wrong.

> The "W" we want is the integral over the red region (cone?).

No, W is a function. The measurement we want is the integral over the red pixel.

> And the pixel area is W^0.

Very much no.


Most importantly, you seem to miss that

> The measurement eq. (Veach 8.7) gives us the freedom to choose a surface (existing in the scene, or a newly introduced one) and a function (W).

Slowly: We measure something. As an integral. Over a surface. With a weight inside the integral. Notice: surface+W are used together.

It's meaningless (in general, and in this case) to give the values of a function without specifying where it is defined. What is the surface of integration (the one that enters Veach 8.7) in your picture? Here we come back to my previous post and the two approaches.


Another good thing in your post is the strategy for resolving this: you re-read my posts (because I have already written pretty much all I could), then ask questions, then go back -- in a loop until it's cleared up.

@LittleTheFu
Author

LittleTheFu commented Oct 16, 2024

> Slowly: We measure something. As an integral. Over a surface. With a weight inside the integral. Notice: surface+W are used together.

I still seem to have a lot of confusion, so let me try to resolve this now.

  1. W is not something that can only be found at the camera; it exists anywhere in the scene. I can imagine the camera as a light source, and it emits W (just like a light emits rays). Then W bounces around the scene and finally reaches a state of equilibrium.
    [image: W emitted from the camera, bouncing around the scene]
    Suppose we can get W by position and direction, like W(pos, dir); the parameter "pos" can be any point in the scene, and "dir" can be any direction.
    So we need another helper function, described below in step 2.

  2. (I have doubts about this step, but let's go with it for now.) At any surface in the scene, we can get a function just like the BRDF, but this one focuses on W.
    [image: the BRDF-like helper function for W]

  3. Combined with that helper function "g" and the W emitted from the camera, we can finally get W at any position and in any direction.

  4. Just like step 1, but this time the main character is light. It is emitted, bounces around, and finally reaches a state of equilibrium.
    [image: light emitted and bouncing around the scene]

  5. Like step 2, we can get the BRDF from the surface (because of the nature of the materials).
    [image: the BRDF f]

  6. Now, things have changed. We must combine "W", "L", and "f" to get the final result.
    [image: the combined final result]

In the world without "W", things go like this:
[image: the transport equation for L alone]

But now, when "W" joins the game, it looks like this:
[image: the measurement combining W and L]
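In formulas, I think what I mean is (please correct me if I'm wrong): the final result combines W and L as

$$I_j \;=\; \int_{\mathcal M}\int_{S^2} W_j(x,\omega)\, L(x,\omega)\, |\cos\theta|\, d\omega\, dA(x),$$

and W satisfies an equation of the same form as the one for L, with the camera's emitted W in place of L_e and my "g" playing the role of the BRDF f.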

@rainbow-app

rainbow-app commented Oct 16, 2024

> camera ... it emits W

I looked at your post very briefly -- but long enough to say that I won't try to understand anything there. You are totally ignoring my derivation and going the pbrt way. OK, fine with me. But you are on your own; I can't help you with that.

I don't claim this dual photography is wrong (I used the word "vague" in the first post). Veach also has a chapter on adjoints. I certainly know bra-kets in quantum theory and co-vectors in general relativity (this is a basic concept from linear algebra), so there may really be some meaning behind it. It just looks very unnatural to me in this case (the camera in computer graphics), so I didn't look into it.

@rainbow-app

Looks like I cannot add images to previous posts, so I'm writing a new one.

Sorry, I understood the second step all wrong previously. I hope you didn't waste much time on it -- since you were stuck on the (very beginning of the) first step.

It's a bit more involved.

See Veach 10.8. Our case is direct illumination, the (0,2)-path. The estimate for the measurement is L*alpha_2^E. We want it to be L, so alpha_2^E must be 1. See 10.6.

[image: the (0,2)-path setup for step 2]

I don't show the sensor, only the camera (at 2). As to alpha_1: Veach assumes everything is spatially extended, but the camera is a point. So we get alpha_1 by assuming the camera center has some area (confusingly also denoted s here, like the pixel area) and then letting it -> 0.

If we sample a single pixel, with the density P(2->1) shown at the very bottom, both C and P have the area of the pixel (s) in the denominator. It is easy to see that everything cancels, and we get alpha_2^E = 1, which is what we want.
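Spelled out (my notation; using $C = d_s^2/(s\cos^4\theta)$ from the earlier post, and $W^1 = 1$ inside the pixel's cone): letting the camera-center area $a \to 0$,

$$\alpha_1^E \;=\; \frac{W^0}{P_A} \;\to\; \frac{C/a}{1/a} \;=\; C, \qquad p_{\sigma^\perp}(2 \to 1) \;=\; \frac{1}{s}\cdot\frac{(d_s/\cos\theta)^2}{\cos^2\theta} \;=\; \frac{d_s^2}{s\cos^4\theta},$$

so

$$\alpha_2^E \;=\; \frac{W^1}{p_{\sigma^\perp}}\,\alpha_1^E \;=\; \frac{1}{C}\cdot C \;=\; 1.$$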

Now to the full sensor, MxN pixels.

Nothing prevents us from sampling the full sensor (e.g., mathematically it's perfectly OK to sample a function outside its support). The density in the denominator will then be MxN times smaller for each pixel. We want alpha_2^E to still be 1 (to measure the radiance L), so we must decrease C in the numerator by the same factor, MxN. This corresponds to having the sensor area in the denominator, and we get the pbrt expressions (for W and P).
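In symbols: with $A = M N s$ the full sensor area, both quantities scale the same way,

$$p_{\sigma^\perp} \;\to\; \frac{d_s^2}{A\cos^4\theta}, \qquad C \;\to\; \frac{d_s^2}{A\cos^4\theta},$$

so their ratio (and hence alpha_2^E) stays 1. With $d_s = 1$, I believe the right-hand side is exactly pbrt's importance expression.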

This crucial alpha_2^E factor is the innocent-looking return value 1 (always 1) from PerspectiveCamera::GenerateRay/GenerateRayDifferential.
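For concreteness, here is a minimal C++ sketch (my own, not pbrt's actual code -- the function and parameter names are made up) of evaluating that final importance value, assuming d_s = 1 as in pbrt:

```cpp
// Minimal sketch (not pbrt's actual code) of the final pinhole-camera
// importance W_e = d_s^2 / (A * cos^4(theta)), with d_s = 1.
//   cosTheta      -- cosine between the ray direction and the optical axis
//   filmArea      -- A, the area of the film placed at distance d_s = 1
//   insideFrustum -- whether the direction hits the film at all
// Returns 0 outside the viewing frustum: the importance has compact support.
double EvalImportance(double cosTheta, double filmArea, bool insideFrustum) {
    if (!insideFrustum || cosTheta <= 0.0)
        return 0.0;
    double cos2Theta = cosTheta * cosTheta;
    return 1.0 / (filmArea * cos2Theta * cos2Theta);  // d_s^2 = 1 in the numerator
}
```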

Again, sorry for this confusion.
