I'm looking to find the eye vector (the normalized world-space vector from the camera) to any given pixel on the screen, given the (-1, 1) screen position and a full set of matrices (world, view, projection, and their inverses). It seemed to me that I couldn't just multiply the screen position by inverseProj, because proj isn't really reversible, and sure enough I'm seeing some odd behavior that suggests as much (although I'm having a hard time figuring out how to "print debug" this in a way where I can be sure what's actually happening). I've done some Googling but haven't been able to find anything; maybe it's an unusual problem that nobody cares about, or maybe it's obvious to everyone except me.
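For concreteness, here's a sketch of the un-projection I have in mind, written in Python rather than HLSL so it's easy to poke at. The matrix layout (row-major, row-vector multiply, as in HLSL's `mul(v, M)`), the divide-by-w step, and all the function names here are my own assumptions, not anything from the engine:

```python
import math

def unproject(ndc_x, ndc_y, inv_view_proj, ndc_z=1.0):
    # Treat the (-1, 1) screen position as a clip-space point and push it
    # through the inverse view-projection matrix (4x4, row-major,
    # row-vector convention, i.e. out = v * M).
    v = (ndc_x, ndc_y, ndc_z, 1.0)
    out = [sum(v[i] * inv_view_proj[i][j] for i in range(4)) for j in range(4)]
    # The step that's easy to miss: the perspective divide by w.
    # Without it the result is a homogeneous coordinate, not a
    # world-space point, which may be why a bare multiply misbehaves.
    w = out[3]
    return (out[0] / w, out[1] / w, out[2] / w)

def eye_vector(ndc_x, ndc_y, inv_view_proj, camera_pos):
    # Un-project a point on the far side of the frustum for this pixel,
    # then normalize the offset from the camera to get the eye vector.
    px, py, pz = unproject(ndc_x, ndc_y, inv_view_proj)
    d = (px - camera_pos[0], py - camera_pos[1], pz - camera_pos[2])
    length = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return (d[0] / length, d[1] / length, d[2] / length)
```

In HLSL this would be the same two steps per pixel: `mul(float4(ndc, 1, 1), invViewProj)`, divide `.xyz` by `.w`, subtract the camera position, normalize.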
This is something of an idle question, because I know there are other well-documented techniques (such as recovering the vector from a depth read, as in rebuilding world position from a depth map), but for my purposes, recovering the eye vector without accessing depth would be preferable.
I'm working in HLSL, in DX9, in a closed-source engine, but I don't imagine that matters. I'm trying to create pseudo-geometry in a post-process: concentric spheres centered on the camera, for playing with fogging techniques. I want to get the world position for various vectors at multiple, arbitrary depths and use those positions for simplex-noise look-ups. I'm just a hobbyist, and an out-of-practice one at that, so any kind of help, or a pointer to a source I missed, would be appreciated.
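For the concentric-sphere part, my thinking is that once the eye vector is in hand, the rest is easy: since it's unit length, the point where the view ray crosses a camera-centered sphere of radius r is just camera + eyeDir * r, and those are the positions I'd feed to the noise lookup. A sketch of that step (function name and shape are mine):

```python
def shell_positions(camera_pos, eye_dir, radii):
    # Points where the view ray through this pixel crosses each
    # camera-centered sphere: center + unit direction * radius.
    # Assumes eye_dir is already normalized, so the distance from the
    # camera to each point equals the sphere's radius.
    return [tuple(camera_pos[i] + eye_dir[i] * r for i in range(3))
            for r in radii]
```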