Hi,
I am having a lot of trouble trying to recover world-space position from depth.
I swear I have managed to get this to work before in another project, but I have been stuck on this for ages.
I am using OpenGL with a deferred pipeline.
I am not modifying the depth in any special way; the depth buffer just contains whatever OpenGL writes by default (standard non-linear depth, default glDepthRange of [0, 1]).
I have been trying to recover the world-space position with this (I don't care about performance at this point, I just want it to work):
vec4 getWorldSpacePositionFromDepth(
    sampler2D depthSampler,
    mat4 proj,
    mat4 view,
    vec2 screenUVs)
{
    // Inverting the matrix per fragment is wasteful, but correctness first.
    mat4 inverseProjectionView = inverse(proj * view);

    // The depth buffer stores [0, 1]; remap to NDC [-1, 1]
    // (default glDepthRange, no reversed-Z).
    float pixelDepth = texture(depthSampler, screenUVs).r * 2.0 - 1.0;

    // Rebuild the clip-space position of this fragment.
    vec4 clipSpacePosition = vec4(screenUVs * 2.0 - 1.0, pixelDepth, 1.0);

    // Unproject to world space, then apply the perspective divide.
    vec4 worldPosition = inverseProjectionView * clipSpacePosition;
    worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);

    return worldPosition;
}
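For context, here is a minimal sketch of the lighting-pass fragment shader that calls it; gDepth, uProj, uView and vTexCoord are placeholder names, not my exact bindings:

#version 330 core

uniform sampler2D gDepth; // depth attachment from the geometry pass
uniform mat4 uProj;       // same projection matrix as the geometry pass
uniform mat4 uView;       // same view matrix as the geometry pass

in vec2 vTexCoord;        // full-screen quad UVs in [0, 1]
out vec4 fragColor;

// getWorldSpacePositionFromDepth(...) as defined above

void main()
{
    vec3 worldPos = getWorldSpacePositionFromDepth(gDepth, uProj, uView, vTexCoord).xyz;
    fragColor = vec4(worldPos, 1.0); // visualize the reconstruction directly
}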
Which is, as far as I can tell, how most other sources do it...
But the positions seem distorted, and the distortion gets worse as I move the camera away from the origin, which of course breaks all of my lighting...
Please see the attached image for the difference between the depth-reconstructed world-space position and the actual world-space position.
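(For reference, the "actual" position in the comparison comes from a world-space position I store directly in a G-buffer attachment during the geometry pass; the visualization is roughly the following sketch, where gPosition is a placeholder name for that attachment:)

uniform sampler2D gPosition; // G-buffer attachment holding world-space positions

void main()
{
    vec3 reconstructed = getWorldSpacePositionFromDepth(gDepth, uProj, uView, vTexCoord).xyz;
    vec3 stored = texture(gPosition, vTexCoord).rgb;
    // Amplified absolute error; black where the two agree.
    fragColor = vec4(abs(reconstructed - stored) * 10.0, 1.0);
}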
Any help would be much appreciated!
K