
PMR #15

Open
Tandon-A opened this issue Sep 28, 2023 · 3 comments

Comments

@Tandon-A

Hello Henry,

Thank you for helping me understand the paper better.

I am a bit confused about how the pressure loss is applied to Mod2 of the BPWNet model.

After going through the paper, I understand the loss flow as follows:

  1. PMR is used to reconstruct pressure maps for the GT pose and the predicted pose, and then MSE is applied between the two (sketch below).
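
A minimal sketch of what I mean (tensor names and the grid size are mine, not from the repo):

import torch

# Hypothetical pressure maps reconstructed by PMR for the ground-truth pose
# and the predicted pose, each assumed to be 27 x 64 taxels.
pmap_gt = torch.rand(27, 64)
pmap_pred = torch.rand(27, 64, requires_grad=True)

# MSE between the two reconstructed maps.
pmr_loss = torch.nn.functional.mse_loss(pmap_pred, pmap_gt)
pmr_loss.backward()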

But while checking the PMR code, I noticed that you do an int conversion on the verts_taxel part (L555 of mesh_depth_lib), which is non-differentiable.

Since the output of the PMR module depends on verts_taxels_int, which has no gradient, how are you sending gradients back to verts_taxel and then to the model?

Best,
Abhishek

@henryclever
Contributor

I'm pretty sure the gradients are still propagated through the int. Notice that before converting to an int, the z-values are multiplied by 1000, so the z-axis resolution (the one we care about for gradient propagation) is finer than 1 mm. On the x and y axes, one int increment corresponds to one taxel up/down or left/right.

The reason they are converted to ints is so that I can use the torch.unique function to sort the vertices according to their x, y position across the surface of the bed. This sorting method also sorts them by their "lowest" or "highest" position in z space, so in the end you are left with a 27x64 array of all the "lowest" (or "highest") z values. Everything that doesn't have a z value (e.g. where the mesh isn't above a particular pressure taxel) is set to 0. Note that there are some areas in the middle of the mesh where the triangles are rather large (e.g. > 1" on a large SMPL body), so there may be a "0" hole in the middle of where the body is; L610-L632 take care of that by filling in the holes based on what is nearby.
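
To make this concrete, here is a toy sketch (my own illustration here, not the actual mesh_depth_lib code) of one way to do the grouping so that gradients still flow: the int coordinates serve only as sort/group keys, and the original float z values are gathered into the output grid.

import torch

# Toy vertices: x, y in taxel units, z in meters (all values made up).
torch.manual_seed(0)
verts_taxel = torch.rand(5000, 3) * torch.tensor([27.0, 64.0, 0.1])
verts_taxel = verts_taxel.detach().requires_grad_()

xy_int = verts_taxel[:, :2].long()           # snap x, y to whole taxels
cell = xy_int[:, 0] * 64 + xy_int[:, 1]      # flat index into a 27x64 grid
z = verts_taxel[:, 2]

# Sort by (cell, z); z is scaled to <1mm int bins so one key encodes both.
order = torch.argsort(cell * 1000000 + (z * 1000.0).long())
cell_sorted, z_sorted = cell[order], z[order]

# The first entry of each run of equal cells is that cell's lowest vertex.
cells_u, counts = torch.unique_consecutive(cell_sorted, return_counts=True)
first = torch.cat([counts.new_zeros(1), counts.cumsum(0)[:-1]])

depth = torch.zeros(27 * 64)                 # taxels with no vertex stay 0
depth[cells_u] = z_sorted[first]             # differentiable gather of floats
depth = depth.view(27, 64)

depth.sum().backward()
print(verts_taxel.grad.abs().sum())          # nonzero: gradients reach the verts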

You should be able to check this by running a couple of epochs as the README suggests and zeroing all of the losses except the PMR loss. As long as the loss trends down (and it should), gradients are propagating.
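
In generic form, the check looks something like this (toy model, not BPWNet itself):

import torch

# Train with a single PMR-style loss only and confirm its gradient
# reaches the model weights.
model = torch.nn.Linear(10, 27 * 64)
x = torch.randn(4, 10)
target = torch.zeros(4, 27 * 64)

pred = model(x)
loss = torch.nn.functional.mse_loss(pred, target)  # the only active loss term
loss.backward()

print(model.weight.grad.abs().sum())  # nonzero => gradients propagate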

-Henry

@henryclever
Contributor

I assume you want to get this method working with the larger pressure array size (i.e. 33x68)? If you get stuck on this, let me know and I'll see if I can fix it. I don't want my messy code to block you.

-Henry

@Tandon-A
Author

Tandon-A commented Oct 1, 2023

Hi Henry,

I'll try running the network with just the PMR loss. In the meantime, I tested a small type-casting example (script below), and the backward pass breaks.

import torch

# Leaf tensor with gradients enabled.
x = torch.rand(2, 3) * 10 + 1
x.requires_grad = True
print(x, x.requires_grad, x.grad)

# Casting to an integer dtype detaches the tensor from the autograd graph.
x_int = x.type(torch.LongTensor)
print(x_int, x_int.requires_grad)  # requires_grad is now False

gt = torch.ones((2, 3))
criterion = torch.nn.L1Loss()

loss = criterion(x_int, gt)
print(loss)

# Raises: nothing upstream of the loss requires grad.
loss.backward()
print(x.grad)

It produces this error:

    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
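
For contrast, when the int tensor is used only as an index and the loss is computed on float values gathered from x, gradients do flow (another toy example of mine):

import torch

x = torch.rand(2, 3) * 10 + 1
x.requires_grad = True

# The cast is non-differentiable, but here it only produces indices.
idx = x.type(torch.LongTensor) % x.numel()
picked = x.flatten()[idx.flatten()]  # differentiable gather of float values
loss = picked.sum()
loss.backward()
print(x.grad)  # populated, no error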

Do let me know your thoughts on this.
