Hi,

Thanks for the implementation. After using it, I think it would be very useful to add a warning or raise an error when the input x1 is not monotonically increasing. For instance, the interpolated result below is problematic, but if we remove all the .flip([0]) calls it works just fine. Users might spend a lot of time trying to debug where the error is, when the only thing they actually need to do is reverse the order of x1 (a sketch of such a check follows the example).
import torch
import matplotlib.pyplot as plt
from torchinterp1d import interp1d  # assuming the function-style API of torchinterp1d

# the golden underlying function is y = x^2
x1 = torch.tensor([0, 1, 2, 3, 4, 5], dtype=torch.float32)
y1 = torch.tensor([0, 1, 4, 9, 16, 25], dtype=torch.float32)
x2 = torch.tensor([0.5, 1.5, 2.5, 3.5, 4.5], dtype=torch.float32)

# if we incautiously feed everything in a reversed order, the results are problematic
y2 = interp1d(x1.flip([0]), y1.flip([0]), x2.flip([0])).squeeze(0)

plt.figure()
plt.plot(x1, y1, label='original')
plt.plot(x2, y2.flip(0), label='interpolated')
plt.legend()
plt.show()
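As a concrete suggestion, here is a minimal sketch of the kind of check I have in mind. It is not part of the library; the helper name check_increasing and the choice to warn rather than raise are assumptions for illustration. It could be called on x1 at the start of interp1d.

import warnings
import torch

def check_increasing(x: torch.Tensor) -> None:
    """Warn if the sample positions `x` are not strictly increasing along the last dim."""
    # hypothetical helper, not the library's actual API
    if (x.diff(dim=-1) <= 0).any():
        warnings.warn(
            "interp1d: x is not strictly increasing; interpolation results will be wrong. "
            "Sort x (and reorder y accordingly), e.g. via x.flip(-1) for reversed input.",
            UserWarning,
        )

# Example: the reversed input from the snippet above triggers the warning.
x1 = torch.tensor([0, 1, 2, 3, 4, 5], dtype=torch.float32)
check_increasing(x1.flip([0]))  # emits UserWarning
check_increasing(x1)            # silent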