1 minute ago, Stackmann0 said:
"I don't think that it's correct. I mean, you can search for the triangle in UV space that has (x,y) (the point in UV space whose 3D position we're trying to find) inside it. But using the barycentric coords of that point to find the 3D position doesn't sound correct to me, since the mapping between the two triangles (the one in UV space and the one in 3D space) isn't involved. So I don't know if there is a solution to this problem in the general case as you mentioned in your second reply... or maybe I'm missing something or confusing things."
It is an established solution, and it's easy to demonstrate working: just code it up. No extra transform is necessary, because you are simply interpolating a triangle in both cases. The barycentric weights you compute for the point in the UV triangle are exactly the weights you apply to the corresponding 3D vertices; that weight-sharing *is* the mapping between the two triangles.
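To make it concrete, here's a minimal sketch of the idea in Python. The function and variable names (`barycentric`, `uv_to_3d`, `uv_tri`, `pos_tri`) are mine, just for illustration; it assumes you've already found the UV triangle containing the point (the weights all being in [0, 1] is that containment test).

```python
# Sketch: recover a 3D position from a UV point by reusing the
# barycentric weights of the UV triangle on the 3D triangle.
# (Illustrative names; not from any particular engine/library.)

def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p in triangle (a, b, c)."""
    v0 = (b[0] - a[0], b[1] - a[1])   # edge a->b
    v1 = (c[0] - a[0], c[1] - a[1])   # edge a->c
    v2 = (p[0] - a[0], p[1] - a[1])   # a->p
    den = v0[0] * v1[1] - v1[0] * v0[1]  # 2D cross product (signed area * 2)
    w1 = (v2[0] * v1[1] - v1[0] * v2[1]) / den
    w2 = (v0[0] * v2[1] - v2[0] * v0[1]) / den
    w0 = 1.0 - w1 - w2
    return w0, w1, w2

def uv_to_3d(uv_point, uv_tri, pos_tri):
    """Interpolate the 3D triangle with the weights found in UV space."""
    w = barycentric(uv_point, *uv_tri)
    # Weighted sum of the three 3D vertices, per component.
    return tuple(sum(w[i] * pos_tri[i][k] for i in range(3))
                 for k in range(3))

# Example: UV point halfway along the first edge maps to the midpoint
# of the corresponding 3D edge.
uv_tri  = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
pos_tri = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(uv_to_3d((0.5, 0.0), uv_tri, pos_tri))  # -> (1.0, 0.0, 0.0)
```

If any weight is outside [0, 1], the UV point isn't inside that triangle and you move on to the next one.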
However...
Afaik what may be causing the confusion is that, strictly speaking, the texture mapping used in general 3D is not 'physically correct' in the sense you are seeing it. If you use a fish-eye projection for a camera and draw a triangle, in theory the texture should also be distorted; but if you render it with standard 3D games hardware, it will not be. Only the vertices go through the transform matrix; the fragment shader, afaik, typically just gets simple linear interpolation across the triangle. This may not be the case in a ray tracer.
So you are actually right, in a way, I think.