
Casting camera rays - 2 different approaches

Started by August 02, 2018 09:58 AM
2 comments, last by savail 6 years, 6 months ago

Hey,

I have to cast camera rays through the near plane of the camera. The first approach in the code below is the one I've come up with, and I understand it precisely. However, I've come across a much more elegant and shorter solution which seems to give exactly the same results (at least visually in my app), and this is the "Second approach" below.
 


struct VS_INPUT
{
    float3 localPos : POSITION;
};

struct PS_INPUT
{
    float4 screenPos : SV_POSITION;
    float3 localPos : POSITION;
};

PS_INPUT vsMain(in VS_INPUT input)
{
    PS_INPUT output;
    
    output.screenPos = mul(float4(input.localPos, 1.0f), WorldViewProjMatrix);
    output.localPos = input.localPos;

    return output;
}

float4 psMain(in PS_INPUT input) : SV_Target
{
    //First approach
    {
        const float3 screenSpacePos = mul(float4(input.localPos, 1.0f), WorldViewProjMatrix).xyw;
        const float2 screenPos = screenSpacePos.xy / screenSpacePos.z; //divide by w, stored above as the third component
        const float2 screenPosUV = screenPos * float2(0.5f, -0.5f) + 0.5f; //invert Y axis for the shadow map look up in future

        //fov is vertical
        float nearPlaneHeight = TanHalfFov * 1.0f; //near = 1.0f
        float nearPlaneWidth = AspectRatio * nearPlaneHeight;

        //position of rendered point projected on the near plane
        float3 cameraSpaceNearPos = float3(screenPos.x * nearPlaneWidth, screenPos.y * nearPlaneHeight, 1.0f);
        
        //transform the direction from camera to world space
        const float3 direction = mul(cameraSpaceNearPos, (float3x3)InvViewMatrix).xyz;
    }
    
    //Second approach
    {
        //UV for shadow map look up later in code
        const float2 screenPosUV                = input.screenPos.xy * rcp( renderTargetSize );
        const float2 screenPos                  = screenPosUV * 2.0f - 1.0f; // transform range 0->1 to -1->1

        // Ray's direction in world space, VIEW_LOOK/RIGHT/UP are camera basis vectors in world space
        //fov is vertical
        const float3 direction                  = (VIEW_LOOK + TanHalfFov * (screenPos.x*VIEW_RIGHT*AspectRatio - screenPos.y*VIEW_UP));
    }
    ...
}

I cannot understand what happens in the second approach, right in the first 2 lines. input.screenPos.xy is calculated in the VS and interpolated here, but it's still before the perspective divide, right? So for example the y coordinate of input.screenPos should be in the range -|w| <= y <= |w|, where w is the z coordinate of the point in camera space, so w can be at most Far and at least Near, right? How come dividing y by renderTargetSize above yields a result supposedly in the <0,1> range? Also, screenPosUV seems to have an already inverted Y axis, for some reason I also don't understand - and that's probably why there's the minus sign in the calculation of direction.

In my setup, for example, renderTargetSize is (1280, 720), Far = 100, Near = 1.0f, I use a LH coordinate system, and the camera by default looks towards the positive Z axis. Both approaches give me the same results, but I would like to understand the second one. I would be very grateful for any help!
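For what it's worth, the agreement between the two approaches can be checked numerically outside the shader. This is a small sketch (not shader code) under assumed values: tan(fov/2) = 0.5, aspect = 1280/720, near = 1, and an identity view matrix so the camera basis is VIEW_RIGHT = (1,0,0), VIEW_UP = (0,1,0), VIEW_LOOK = (0,0,1):

```python
import math

tan_half_fov = 0.5
aspect = 1280.0 / 720.0

def dir_first(ndc):
    # First approach: project the NDC point onto the near plane (camera space).
    # ndc is (x, y) after the perspective divide, x right, y up, range -1..1.
    x, y = ndc
    near_h = tan_half_fov * 1.0      # near = 1
    near_w = aspect * near_h
    return (x * near_w, y * near_h, 1.0)  # camera space == world space here

def dir_second(uv):
    # Second approach: build the ray from the camera basis vectors.
    # uv is raster-space UV, range 0..1, with y pointing DOWN.
    sx = uv[0] * 2.0 - 1.0
    sy = uv[1] * 2.0 - 1.0           # this y is already flipped vs NDC
    look, right, up = (0, 0, 1), (1, 0, 0), (0, 1, 0)
    return tuple(look[i] + tan_half_fov * (sx * right[i] * aspect - sy * up[i])
                 for i in range(3))

# Same pixel seen both ways: NDC (0.5, -0.25) corresponds to
# UV (0.75, 0.625), because uv_y = -ndc_y * 0.5 + 0.5.
a = dir_first((0.5, -0.25))
b = dir_second((0.75, 0.625))
assert all(math.isclose(ai, bi) for ai, bi in zip(a, b))
```

The sign flip on sy is exactly the Y inversion that the UV mapping introduced, which is why the directions come out identical.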

You may not realise it, but you are basically asking how to get the homogenised x, y coordinates in the pixel shader.

The pixel shader doesn't see the homogenised x, y - it sees render-target (viewport) coordinates (0 to 1280, 0 to 720). The conversion from homogenised clip space to render-target space (the perspective divide followed by the viewport transform) is done by the hardware just before the value enters the pixel shader.
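That fixed-function step can be modelled like this (a sketch, with illustrative names): the rasterizer divides clip-space x, y by w, then maps NDC to pixel coordinates, flipping Y because raster space has y pointing down. So dividing the SV_POSITION value by the render-target size lands in 0..1 with Y already inverted, regardless of what w was:

```python
# Illustrative model of what happens to SV_POSITION between the vertex
# and pixel shader: perspective divide, then viewport transform.
def sv_position_in_pixel_shader(clip, width=1280, height=720):
    x, y, z, w = clip
    ndc_x, ndc_y = x / w, y / w           # perspective divide, range -1..1
    px = (ndc_x * 0.5 + 0.5) * width      # viewport transform
    py = (-ndc_y * 0.5 + 0.5) * height    # Y flipped: raster y points down
    return px, py

# Clip-space point with w = 10; the pixel coords don't blow up with w,
# because the divide has already happened by the time the PS runs.
px, py = sv_position_in_pixel_shader((5.0, -2.5, 9.0, 10.0))
uv = (px / 1280.0, py / 720.0)            # the "screenPosUV" of approach 2
# uv == (0.75, 0.625)
```

This is why screenPosUV in the second approach is in 0..1 and already has the inverted Y axis: both the divide and the flip happened in the rasterizer, not in your code.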

The first approach is basically redoing the projection operation, so it might not be the most efficient. There is also a third method: pass the clip-space position as a second output field not labelled SV_POSITION, so it won't be converted. That is, in the VS, set a second output (like localPos here) to the same value as screenPos, and it won't be changed (other than by interpolation).

You might get better suggestions if you gave more detail, like whether you want a vector or a ray, what the ray's origin and destination are, whether it's normalised or the length of something, etc. I say this because the matrices used might already contain what you need without converting back and forth.


Thanks a lot! This is exactly what I was missing. I didn't know that the attribute marked with SV_POSITION is already converted to raster space automagically in the pixel shader.

This topic is closed to new replies.
