
HLSL Geometry Shader not emitting triangles?

Started April 28, 2018 04:29 PM
5 comments, last by NyquistVelocity 6 years, 9 months ago

Hello everyone! I'm an atmospheric scientist working on radar data visualization software and have hit a bit of a snag with my move to DX10 and implementation of a geometry shader.

Here's an example of the interface, showing several different products from a mobile radar dataset. In the build shown, I precalculate the positions of all the gates (basically, radar pixels) in the dataset and pass them to the VShader and PShader for transformation to screen coordinates and coloration based on color tables. I recently implemented a GShader to expand my road layers so they can be more than one pixel wide, and I want to implement a GShader for the data so that I can dramatically decrease the memory load of a dataset (long-range radar datasets can consume >1GB of video memory... not great).
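For reference, the line-to-quad expansion for the road layers follows a common pattern; a minimal sketch is below. This is not the actual road GShader from this project: the struct names and the fixed clip-space half-width are assumptions for illustration.

struct LineVOut
{
    float4 Position : POSITION;
};

struct LinePSIn
{
    float4 Position : SV_POSITION;
};

// Expand each 2-point line segment into a quad of fixed half-width,
// emitted as a single 4-vertex triangle strip.
[maxvertexcount(4)]
void RoadGShader(line LineVOut gin[2], inout TriangleStream<LinePSIn> triStream)
{
    const float halfWidth = 0.002; // assumed clip-space half-width

    // Perpendicular to the segment direction in the XY plane
    float2 dir = normalize(gin[1].Position.xy - gin[0].Position.xy);
    float2 normal = float2(-dir.y, dir.x) * halfWidth;

    LinePSIn v;
    v.Position = float4(gin[0].Position.xy - normal, gin[0].Position.z, 1.0);
    triStream.Append(v);
    v.Position = float4(gin[0].Position.xy + normal, gin[0].Position.z, 1.0);
    triStream.Append(v);
    v.Position = float4(gin[1].Position.xy - normal, gin[1].Position.z, 1.0);
    triStream.Append(v);
    v.Position = float4(gin[1].Position.xy + normal, gin[1].Position.z, 1.0);
    triStream.Append(v);
}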

I initially wrote the whole shader implementation, but when it didn't work I backed way off and have just been trying to get the GShader to emit triangles that form a quad in the middle of each frame. In the input assembler stage, I'm passing the VShader two 2-byte integers: a beam index (to know which direction the antenna is pointing) and a gate index (range from the radar). Below is my passthrough VShader (all the actual geographical geometry is going to need to be calculated in the GShader stage). I put the "POSITION" semantic in VOut thinking that vertices without a defined position were getting culled, but that apparently is not the case. There are a few other radar-related fields in there (Value, FilterValue, SpecialValue), but I think we can safely ignore those, unless including them has pushed my vertex size over some limit and is the cause of my problems.


struct VOut
{
    float4 Position : POSITION;
    int2 GateRay : GATE_RAY;
    float Value : VALUE;
    float FilterValue : FILTER_VALUE;
    int SpecialValue : SPECIAL_VALUE;
};

Texture2D<float> filterData : register(t0);
Texture2D<float> valueData : register(t1);

// Passthrough vertex shader: the real geometry is computed in the
// GShader, so the position written here is a dummy.
VOut VShader(int2 GateRay : POSITION)
{
    VOut output;

    output.Position = float4(0.5, 0.5, 0.5, 0.0);
    output.GateRay = GateRay;
    output.Value = valueData[output.GateRay];
    output.FilterValue = filterData[output.GateRay];

    // +/-infinity in the value texture encodes special radar values
    if (output.Value == -1.#INF)
        output.SpecialValue = 1;
    else if (output.Value == 1.#INF)
        output.SpecialValue = 2;
    else
        output.SpecialValue = 0;

    return output;
}

My dummy GShader code is below. I am intentionally winding one triangle the wrong way; I do this during shader development so that if I screw up badly I can still see at least half of my triangles. At this point, I'm just trying to get something to show onscreen. I don't see anything glaringly wrong with it, but I suppose if I did, I would have fixed it. I adapted this code from the GShader that expands my GIS road lines into rectangles; unlike this one, that GShader works.


struct PS_IN
{
    float4 Position : SV_POSITION;
    int2 GateRay : GATE_RAY;
    float Value : VALUE;
    float FilterValue : FILTER_VALUE;
    int SpecialValue : SPECIAL_VALUE;
};

// Dummy GShader: expands each input point into a fixed quad in the
// middle of clip space, emitted as two separate triangle strips.
[maxvertexcount(6)]
void GShader(point VOut gin[1], inout TriangleStream<PS_IN> triStream)
{
    PS_IN v[4];

    // Four corners of the quad, with placeholder attribute values
    v[0].Position = float4(-0.5, -0.5, 0.5, 0.0);
    v[0].GateRay = int2(1, 1);
    v[0].Value = 50.0;
    v[0].FilterValue = 0.0;
    v[0].SpecialValue = 0;

    v[1].Position = float4(0.5, -0.5, 0.5, 0.0);
    v[1].GateRay = int2(1, 1);
    v[1].Value = 50.0;
    v[1].FilterValue = 0.0;
    v[1].SpecialValue = 0;

    v[2].Position = float4(-0.5, 0.5, 0.5, 0.0);
    v[2].GateRay = int2(1, 1);
    v[2].Value = 50.0;
    v[2].FilterValue = 0.0;
    v[2].SpecialValue = 0;

    v[3].Position = float4(0.5, 0.5, 0.5, 0.0);
    v[3].GateRay = int2(1, 1);
    v[3].Value = 50.0;
    v[3].FilterValue = 0.0;
    v[3].SpecialValue = 0;

    // Two triangles forming the quad; one is intentionally mis-wound
    // (see the note above)
    triStream.Append(v[0]);
    triStream.Append(v[3]);
    triStream.Append(v[2]);

    triStream.RestartStrip();

    triStream.Append(v[0]);
    triStream.Append(v[3]);
    triStream.Append(v[1]);

    triStream.RestartStrip();
}

Below is the dummy pixel shader I'm using. It should just color my triangles white. Normally I use a pixel shader compiled from HLSL code I generate from a user-defined color table, but in the interest of keeping the debugging simple, I'm using this dummy.


struct PS_IN
{
    float4 Position : SV_POSITION;
    int2 GateRay : GATE_RAY;
    float Value : VALUE;
    float FilterValue : FILTER_VALUE;
    int SpecialValue : SPECIAL_VALUE;
};

float4 PShader(PS_IN input) : SV_TARGET
{
    // Flat white, so any surviving geometry is clearly visible
    return float4(1.0, 1.0, 1.0, 1.0);
}

Thanks in advance for the help!

I'll look at this in more detail later, but in the meantime, two things. First, you shouldn't use D3D10: D3D11 runs on the same hardware and is meant to replace D3D10.

The other thing: use the graphics debugger to debug your graphics pipeline; it'll save you tons of time. In Visual Studio, under the Debug menu, there's an option for the graphics debugger. Click that, then click to start the graphics debugger. Once the debugger is running, press the Print Screen key to capture a frame for debugging. Once the frame has been captured, double-click it to see all the draw calls and pipeline state for that frame, as well as to debug the shaders and see the output from each stage.

3 hours ago, iedoc said:

I'll look at this in more detail later, but in the meantime, two things. First, you shouldn't use D3D10: D3D11 runs on the same hardware and is meant to replace D3D10.

The other thing: use the graphics debugger to debug your graphics pipeline; it'll save you tons of time. In Visual Studio, under the Debug menu, there's an option for the graphics debugger. Click that, then click to start the graphics debugger. Once the debugger is running, press the Print Screen key to capture a frame for debugging. Once the frame has been captured, double-click it to see all the draw calls and pipeline state for that frame, as well as to debug the shaders and see the output from each stage.

I didn't jump to D3D11 because I was concerned about hardware compatibility, weirdly. Thanks for pointing that out, I'll change my code over now and fire up the graphics debugger!

Edit: My code renders to a texture, so there's no present call for the graphics debugger to see. Hmm.

One suggestion: from what I see, you do not need a geometry shader at all. Generally, a GShader is used when you need access to all the vertices of a primitive (to generate a face normal, for instance), to route triangles to different render target slices, or to dynamically generate triangles (not recommended). If you want to render multiple quads, you can use instancing instead, with your VShader emitting vertex positions based on VertexId and reading quad data from a constant buffer based on InstanceId, or from a vertex buffer that uses per-instance frequency.
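A minimal sketch of that instancing approach, assuming the quad descriptions live in a StructuredBuffer (the names and layout here are illustrative assumptions, and StructuredBuffers require D3D11):

struct QuadData
{
    float2 Center;
    float2 HalfSize;
};

StructuredBuffer<QuadData> quads : register(t0);

struct InstVOut
{
    float4 Position : SV_POSITION;
};

// Draw with DrawInstanced(4, quadCount, 0, 0) and triangle-strip
// topology; no vertex buffer is required.
InstVOut InstancedQuadVS(uint vertexId : SV_VertexID, uint instanceId : SV_InstanceID)
{
    // Map vertex IDs 0..3 to the strip corners (-1,-1), (1,-1), (-1,1), (1,1)
    float2 corner = float2((vertexId & 1) * 2.0 - 1.0, (vertexId >> 1) * 2.0 - 1.0);

    QuadData quad = quads[instanceId];

    InstVOut output;
    output.Position = float4(quad.Center + corner * quad.HalfSize, 0.5, 1.0);
    return output;
}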

SV_Position with w = 0 sounds bad; it likely gets clipped. Try


v[0].Position = float4(-0.5, -0.5, 0.5, 1.0);

etc. instead.

18 hours ago, DiligentDev said:

One suggestion: from what I see, you do not need a geometry shader at all. Generally, a GShader is used when you need access to all the vertices of a primitive (to generate a face normal, for instance), to route triangles to different render target slices, or to dynamically generate triangles (not recommended). If you want to render multiple quads, you can use instancing instead, with your VShader emitting vertex positions based on VertexId and reading quad data from a constant buffer based on InstanceId, or from a vertex buffer that uses per-instance frequency.

The reason I'm using a geometry shader is to reduce memory usage. For a typical scan from one of the weather radars commonly found in the US, there are 720 rays with 1,832 pixels ("gates") per ray, for 1,319,040 total observations per sweep. I used to split each observation into its two component triangles, pre-calculate the geometry (with double precision) on the CPU, and create a vertex buffer at load time. But with a latitude and longitude (each 4 bytes) and a ray and gate index (each 2 bytes) in every vertex, each observation cost 72 bytes (6 vertices x 12 bytes), and 1,319,040 observations x 72 bytes works out to ~95MB of memory for geometry alone.

Obviously, this is a terrible way to handle sweep geometry.

Fortunately, gates fall at regular intervals along the beam. So I can pass Texture2Ds of the per-ray properties to the geometry shader, then use the ray and gate indices to calculate latitudes and longitudes for the radar observations. With this method I'm consuming only ~5-10MB of memory to define geometry. For a real-world example, some data I have here from Hurricane Harvey takes 47MB with a precalculated vertex buffer, vs. just over 2MB using a geometry shader. A sketch of the idea follows.
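Roughly, the geometry shader ends up looking something like this; the texture layout, names, and flat planar projection below are illustrative assumptions rather than the actual code (a real version would project to latitude/longitude):

// Hypothetical per-ray azimuth lookup (radians) plus sweep constants
Texture2D<float> rayAzimuth : register(t2);

cbuffer SweepParams : register(b0)
{
    float FirstGateRange; // range to the first gate, meters
    float GateSpacing;    // spacing between gates along the beam, meters
    float2 RadarOrigin;   // radar location in projected map units
};

[maxvertexcount(4)]
void GateGShader(point VOut gin[1], inout TriangleStream<PS_IN> triStream)
{
    // Assuming GateRay packs (gate index, ray index)
    int gate = gin[0].GateRay.x;
    int ray = gin[0].GateRay.y;

    // Near and far edges of this gate along the beam
    float r0 = FirstGateRange + gate * GateSpacing;
    float r1 = r0 + GateSpacing;

    // Azimuths bounding this ray (assumes rays are stored in sweep order)
    float az0 = rayAzimuth[int2(ray, 0)];
    float az1 = rayAzimuth[int2(ray + 1, 0)];

    PS_IN v;
    v.GateRay = gin[0].GateRay;
    v.Value = gin[0].Value;
    v.FilterValue = gin[0].FilterValue;
    v.SpecialValue = gin[0].SpecialValue;

    // Emit the four corners of the gate as one triangle strip.
    // Note w = 1.0: these are positions, not directions.
    float2 corners[4] =
    {
        RadarOrigin + r0 * float2(sin(az0), cos(az0)),
        RadarOrigin + r0 * float2(sin(az1), cos(az1)),
        RadarOrigin + r1 * float2(sin(az0), cos(az0)),
        RadarOrigin + r1 * float2(sin(az1), cos(az1))
    };

    [unroll]
    for (int i = 0; i < 4; i++)
    {
        v.Position = float4(corners[i], 0.5, 1.0);
        triStream.Append(v);
    }
}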

Unfortunately, the math I'm using in the geometry shader right now has about 1m (~0.00001°) precision, compared to the math I was using on the CPU, which could geolocate to about 1cm. So my new challenge is to come up with a more precise way of calculating latitudes and longitudes using float math, to retain speed.
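(That 1m figure is about what single precision predicts: a float carries roughly 7 significant decimal digits, so a coordinate near 100° retains only about 0.00001°, on the order of a meter. A common workaround, not from this thread, is to do the per-gate math in meters relative to the radar origin, where the float values stay small and precise, and add the high-precision origin back in a later stage or on the CPU.)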

I need to look into best practices regarding GPU memory management. Right now I just tie up GPU memory whether or not you're looking at a given dataset; there has to be an intelligent way of swapping data in and out of the GPU as needed, otherwise modern games wouldn't look as good as they do.

30 minutes ago, unbird said:

SV_Position with w = 0 sounds bad; it likely gets clipped. Try



v[0].Position = float4(-0.5, -0.5, 0.5, 1.0);

etc. instead.

This turned out to be the problem. I got this working very late last night.

I'm not super familiar with what the 'w' component is typically used for, so I'll be doing some Googling...
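(In case it helps future readers: clip-space positions are homogeneous coordinates. D3D clips against -w <= x <= w, -w <= y <= w, and 0 <= z <= w, then divides x, y, and z by w (the perspective divide). With w = 0 every one of those ranges is empty, so every vertex is discarded, which is exactly the symptom in this thread. For positions that are already in normalized device coordinates, w = 1.0 makes the divide a no-op.)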

This topic is closed to new replies.
