
Using a vertex buffer with the format R16G16B16A16_SINT

Started November 09, 2017 03:07 AM
2 comments, last by HD86 7 years, 3 months ago

In DirectX 9 I would use this input layout:

{ 0, 0, D3DDECLTYPE_SHORT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 }

with this vertex shader slot:

float4 Position : POSITION0

That is, I would use the vertex buffer format SHORT4 with a corresponding float4 in the shader, and everything worked great.

In DirectX 12 this does not work. When I use the format DXGI_FORMAT_R16G16B16A16_SINT with a float4 in the shader, I get all zeros.

If I use int4 in the shader instead of float4, I get numbers, but they are messed up. I can't figure out exactly what is wrong with them because I can't inspect them: the Visual Studio shader debugger keeps crashing.

The debug layer says nothing when I use int4, but it gives a warning when I use float4.

How can I use the R16G16B16A16_SINT input layout?
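For context, the D3D12 setup described above would look something like this (a sketch only; the semantic name, slot, and offsets are assumptions, not taken from the original post):

```cpp
// Hypothetical input element matching the description above:
// a SHORT4 position attribute bound as raw signed integers.
D3D12_INPUT_ELEMENT_DESC positionElement = {
    "POSITION",                     // SemanticName
    0,                              // SemanticIndex
    DXGI_FORMAT_R16G16B16A16_SINT,  // raw int16x4 -- pairs with int4 in HLSL, not float4
    0,                              // InputSlot
    0,                              // AlignedByteOffset
    D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA,
    0                               // InstanceDataStepRate
};
```

Pairing this SINT format with `float4 Position : POSITION;` in the vertex shader is the mismatch that produces the debug-layer warning, because SINT data is handed to the shader as integers, not converted to float.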

You need to use R16G16B16A16_SNORM.

SINT means the shader receives the raw signed integer values, so you must declare your variable as int4. The values will be in the range [-32768, 32767] since they are integers.

SNORM means the integers are mapped from the range [-32768, 32767] to the range [-1.0, 1.0], and your variable must be declared as float4.


Thanks for clarifying this.

This topic is closed to new replies.
