About formula 16.1 in the pbrt-v3 book #336
Yes, I'm late; it's been 2 months since you asked. Let me say that this whole bidirectional topic is very vague in the book. After trying to understand it, I gave up and turned to Veach's PhD thesis. He describes the algorithm much better. Unfortunately he's too abstract and doesn't provide any concrete examples. Anyway, I can't comment on those integrals and answer your question. However, if you want to understand how we get to the result (the importance expression), I think I can help you. I can write the proper derivation (in my opinion; I'm a physicist by education) -- from the measurement. Do you want to understand that?

For now I'll briefly describe why their derivation of the expression for the importance W is vague. Their two key arguments are:
Weird arguments, in my opinion, but OK, whatever. What's missing is a demonstration that the W so defined really measures radiance. After all, that is the sole purpose of importance!
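For reference, the measurement equation the derivation starts from has this shape (my paraphrase of Veach's eq. 8.7, so check his exact notation: $W_e^{(j)}$ is the importance for the $j$-th measurement, $L_i$ the incident radiance, and the integral runs over scene surfaces and directions):

$$I_j = \int_{\mathcal{M}} \int_{\mathcal{S}^2} W_e^{(j)}(x, \omega)\, L_i(x, \omega)\, |\cos\theta_x| \; d\sigma(\omega)\, dA(x)$$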
Thank you for your reply.
Yes, I want to understand it!
Let me repeat: I found the pbrt book very vague on BDPT, so I use Veach's formulas and his notation (splitting W into W^0 = spatial part and W^1 = directional part). Don't let his measure-theory stuff scare you -- I found it very easy to ignore. Assume the camera is a pinhole = a point (I didn't consider realistic cameras). Assume it measures some radiance L from a remote area light. See eq. 8.7 (p. 223) in Veach. The first term after the second equals sign gives us the measurement in our case. We'll derive the importance expression by equating it to L. How the camera is set up:
First consider only 1 pixel on the sensor.
Now that term becomes $\int_S L\, G\, W^1 C\, dA(x_0)$, and it must equal $L$ to measure brightness = radiance. This integral is only over the small remote surface S. You should now be able to follow the simple arithmetic in the image. Now consider the full sensor, MxN pixels. UPD: see #336 (comment). A few additional comments:
Hope this is detailed enough.
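PS. In case the image doesn't survive, a sketch of that arithmetic (my reconstruction, so double-check the constants): put the film at distance $d$ behind the pinhole, let the pixel have area $s$, and let $\theta$ be the angle to the camera axis. The small source $S$ exactly fills the pixel's cone, so $G \cdot |S| \approx \cos\theta \cdot \Omega_{\text{pix}}$ with $\Omega_{\text{pix}} = s \cos^3\theta / d^2$, and

$$\int_S L\, G\, W^1 C\, dA(x_0) \;\approx\; L\, C\, W^1 \cos\theta\, \Omega_{\text{pix}} \;=\; L \quad\Longrightarrow\quad W^1(\omega) = \frac{d^2}{C\, s\, \cos^4\theta}$$

With $d = 1$, $C = 1$, and the pixel area $s$ replaced by the full film area $A$ (the MxN step, updated below), this is the familiar $1/(A \cos^4\theta)$.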
Your second equation is good, but written the other way around: it should be W = W^0 * W^1; that's just how we split W (there's not much to think about). The first one is good angle-wise (all the cosines cancel). Magnitude-wise -- no. If we consider only 1 pixel, there is no integral. You do write the integral, so it seems you consider the final W for the whole sensor. You can't do that: no magic jumps, please. You need to derive it in two steps: (1) 1 pixel, (2) the whole sensor. Neither of your two equations can be used as a definition: W^0 and W^1 are not defined, they are derived. Now to your question. W^0 is derived from the way you decide to model the camera; it doesn't follow from any equations. See "Camera position is modeled as a...". The general expression for W^0 (C times a delta function) follows from those words. I'm sure there can be other approaches; I just picked the simplest (to my taste) model that could fit into Veach's integrals.
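In symbols, the split and the spatial part I keep referring to ($C$ is a normalization constant still to be determined, $x_{\text{cam}}$ the camera center):

$$W(x, \omega) = W^0(x)\, W^1(\omega), \qquad W^0(x) = C\, \delta(x - x_{\text{cam}})$$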
I guess that (or something else) is still not clear. The measurement equation (Veach 8.7) gives us the freedom to choose a surface (one existing in the scene, or a newly introduced one) and a function (W). For a pinhole camera we don't need that much freedom: there's nothing to integrate (= average with a weight W). Well, almost: we would still like some averaging over the pixel for anti-aliasing purposes. But roughly speaking, yes, we don't need that freedom. Remember, we are at step 1 of 2 = considering 1 pixel only. The pixel value is determined by the energy from a very narrow solid-angle cone (again, no integral = no averaging = the pixel is small). The cone is determined by the position of its origin (the camera center) and the position and size of the pixel. Now there can be two approaches:
1st approach. Introduce a small sensor surface and integrate over it. We choose the spatial W^0 to be similar to a delta function for that pixel: approximate the integral by the product of the integrand and the small pixel area. The camera center is fixed somewhere else (behind the sensor = off the surface), but it doesn't matter because the camera is point-like anyway (point-like, yes, but in the implementation we can still set d_s = 1 -- it doesn't matter).

2nd approach. Introduce a small surface to host the camera center, and collapse the integral with a delta function in W^0 (this time a real delta, at a mathematical point -- this is our freedom). And no integral over the sensor surface. Well, roughly speaking: there will be an integral, but it's a different one, not like Veach 8.7; we'd approximate it as above.

We can choose either approach; each does its job (= measures radiance for the j-th pixel) properly. It is easy to see that both lead to the same result; see the comparison below. I chose the 2nd originally. (In both cases we equip the sensor with a small ideal lens, and none of this participates in ray tracing.) I don't mind you taking breaks, or reading Veach, or just living your life, but I was hoping you would confirm that you had resolved it.

UPD. Imagine you are given the task of finding a surface and a function W that would give (measure) radiance for a single pixel of a pinhole camera. Try to do it on your own. Most likely you'll end up with the same arguments and the same expression for W^0.
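Side by side, the two spatial choices (my shorthand: $s$ is the pixel area, $\mathbf{1}_{\text{pix}}$ the indicator function of the pixel on the sensor):

$$\text{(1)}\quad W^0(x) \approx \frac{1}{s}\,\mathbf{1}_{\text{pix}}(x) \ \text{ on the sensor surface}, \qquad \text{(2)}\quad W^0(x) = C\, \delta(x - x_{\text{cam}}) \ \text{ on a host surface}$$

In (1) the area integral picks up a factor $s$ that cancels the $1/s$; in (2) the delta collapses the integral at the camera center. Either way the measurement reduces to the same directional integral over the pixel's cone.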
I'm sorry for taking such a long time. After reading your post, I think I finally understand, but I'm not entirely sure. Let me repeat it to see if I've got it right. The W we want is the integral over the red region (cone?). See the markers in the picture.
Something's right, something's wrong.
This is very right (a very simple idea, really). However, the previous details (what exactly you mean by "the weight yellow carries") are wrong.
No, W is a function. The measurement we want is the integral over the red pixel.
Very much no. Most importantly, you seem to miss the following.
Slowly: we measure something. As an integral. Over a surface. With a weight inside the integral. Notice: the surface and W are used together. It is meaningless (in general, and in this case) to state the values of a function without specifying where it is defined. What is the surface of integration (the one that enters Veach 8.7) in your picture? Here we come back to my previous post and the 2 approaches. Another good thing in your post is your strategy for resolving this: you re-read my posts (because I have already written pretty much everything I could), then ask questions, then go back -- in a loop until it's cleared up.
I looked at your post very briefly -- but long enough to say that I won't try to understand anything there. You are totally ignoring my derivation and going the pbrt way. OK, fine with me, but then you are on your own; I can't help you with that. I don't claim this dual photography is wrong (I used the word "vague" in the first post). Veach also has a chapter on adjoints. I certainly know bra-kets from quantum theory and co-vectors from general relativity (a basic concept from linear algebra), so there may really be some meaning behind it. It just looks very unnatural to me in this case (a camera in computer graphics), so I didn't look into it.
Looks like I cannot add images to previous posts, so I'm writing a new one. Sorry, I understood the second step all wrong previously. Hope you didn't waste much time on it -- since you were stuck at the (very beginning of the) first step. It's a bit more involved. See Veach 10.8. Our case is direct illumination, the (0,2)-path. The estimate for the measurement is L * alpha_2^E. We want it to be L, so alpha_2^E must be 1. See 10.6. I don't show the sensor, only the camera (at 2).

As to alpha_1: Veach assumes everything is spatially extended, but the camera is a point. So we get alpha_1 by assuming the camera center has some area (confusingly denoted s here, same as the pixel area) and then letting it go to 0. If we sample a single pixel, with the density P(2-1) shown at the very bottom, both C and P have the area of the pixel (s) in the denominator. It is easy to see that everything cancels, and we get alpha_2^E = 1, which is what we want.

Now to the full sensor, MxN pixels. Nothing prevents us from sampling the full sensor (e.g. mathematically it's perfectly OK to sample a function outside its support). The density in the denominator will then be MxN times smaller for each pixel. We want alpha_2^E to still be 1 (to measure radiance L), so we must decrease C in the numerator by the same factor, MxN. This corresponds to having the sensor area in the denominator, and we get the pbrt expressions (for W and P). This crucial alpha_2^E factor is the innocent-looking return value 1 (always 1) from PerspectiveCamera::GenerateRay/Differential. Again, sorry for the confusion.
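To see the cancellation concretely, here is a tiny standalone check (my sketch, not pbrt code). It assumes the pbrt-v3 pinhole form W = 1/(A cos^4 theta), with the camera at the origin and the film mapped to the plane z = 1; then for any sampled film point, the one-sample estimate of the factor multiplying L is identically 1, i.e. alpha_2^E = 1.

```cpp
// Sketch: check alpha_2^E = 1 for a pinhole camera with W = 1/(A cos^4).
// Assumptions (mine, for illustration): camera center at the origin,
// film of area A on the plane z = 1, film points sampled uniformly
// (area density 1/A). Changing variables from film area to solid angle
// contributes a Jacobian dsigma = cos^3(theta) dA_film: the distance to
// a film point is 1/cos(theta), and the film is tilted by theta.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double A = 1.0;  // film area: the film spans [-0.5, 0.5]^2 at z = 1
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(-0.5, 0.5);

    for (int i = 0; i < 5; ++i) {
        // Uniformly sampled film point; the ray goes from the camera through it.
        double x = u(rng), y = u(rng);
        double cosTheta = 1.0 / std::sqrt(x * x + y * y + 1.0);
        double c2 = cosTheta * cosTheta;

        double W = 1.0 / (A * c2 * c2);   // importance, 1/(A cos^4 theta)
        double jacobian = c2 * cosTheta;  // dsigma/dA_film = cos^3 theta
        double pdfArea = 1.0 / A;         // uniform sampling over the film

        // One-sample estimate of  integral { W cos(theta) } dsigma,
        // the factor that multiplies L in the measurement. All terms cancel.
        double alpha = W * cosTheta * jacobian / pdfArea;
        std::printf("sample %d: cosTheta = %.4f  alpha = %.6f\n", i, cosTheta, alpha);
    }
    return 0;
}
```

Restricting the sampling to one pixel multiplies the density by MxN and the constant in W by the same factor, so the estimate stays 1 either way -- which is exactly the MxN argument above.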
Here is the link:
https://www.pbr-book.org/3ed-2018/Light_Transport_III_Bidirectional_Methods/The_Path-Space_Measurement_Equation
I marked it with red lines; I don't know why AFilm is missing.
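For context, the pinhole importance that chapter arrives at, as I understand it (with the film mapped to the $z = 1$ plane, so $A$ is the film area there, and the lens-area factor equal to 1 for a pinhole):

$$W_e(p, \omega) = \frac{1}{A \cos^4\theta}$$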