
How to declare XMCOLOR in HLSL?

Started April 17, 2020, 05:27 PM. 3 comments, last by Flone.

So my vertex structure has an XMCOLOR member, and I want to use it in HLSL, but I'm getting errors when compiling the shaders.

error X4577: Not all elements of SV_Position were written.

error X4576: Output signature parameter (1-based Entry 0) type must be float32 and mask must be xyzw.

I'm new to HLSL and DirectX, so I can't figure out what the problem is...

My vertex (VS) and pixel (PS) shaders:

cbuffer cbPerObj : register(b0)
{
	float4x4 vWorldViewProj; 
};

struct VertexIn
{
	float3 pos  : POSITION;
	float3 tan  : TANGENT;
	float3 norm : NORMAL;
	float3 tex0 : TEX0;
	float3 tex1 : TEX1;
	float color : COLOR;
};

struct VertexOut
{
	float3 pos  : SV_POSITION;
	float3 tan  : TANGENT;
	float3 norm : NORMAL;
	float3 tex0 : TEX0;
	float3 tex1 : TEX1;
	float color : COLOR;
};

VertexOut VS(VertexIn vin)
{
	VertexOut vout;
	
	vout.pos = mul(float4(vin.pos, 1.0f), vWorldViewProj);
	vout.color = vin.color;
	vout.tan = vin.tan;
	vout.norm = vin.norm;
	vout.tex0 = vin.tex0;
	vout.tex1 = vin.tex1;

	return vout;
}

float PS(VertexOut pin) : SV_Target
{
    return pin.color;
}

And my Vertex struct in C++:

struct SVertex
{
	DirectX::XMFLOAT3 vPos;
	DirectX::XMFLOAT3 vTangent;
	DirectX::XMFLOAT3 vNormal;
	DirectX::XMFLOAT3 vTex0;
	DirectX::XMFLOAT3 vTex1;
	DirectX::PackedVector::XMCOLOR color;
};

MSDN says it is a vector of four unsigned 8-bit values:

https://docs.microsoft.com/en-us/windows/win32/api/directxpackedvector/ns-directxpackedvector-xmcolor

“A 32-bit Alpha Red Green Blue (ARGB) color vector, where each color channel is specified as an unsigned 8 bit integer.”

“Unsigned 32-bit integer representing the color. The colors are stored in A8R8G8B8 format.”

Try "float4" inside the shaders: "float4 color : COLOR;"
Inside the shader it should translate to 4 floats, each in the range 0.0f to 1.0f.
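
For example, only the color member changes (a quick sketch, not tested against your exact input layout):

struct VertexIn
{
	float3 pos   : POSITION;
	// ... tangent, normal and tex coords stay as they are ...
	float4 color : COLOR;   // four channels instead of a single float
};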


I am reading the documentation now, and it seems XMCOLOR is a format for packing the color on the CPU side.
On the GPU side, the DXGI format of the resource indicates how the data is packed.
And I don't see a format like DXGI_FORMAT_A8R8G8B8_….
So you need to unpack the XMCOLOR on the CPU side before uploading the data, and use one of these formats:

DXGI_FORMAT_R8G8B8A8_UNORM (automatically translates to 4 floats inside the shader)
or
DXGI_FORMAT_R32G32B32A32_FLOAT
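
Roughly like this on the CPU side (untested sketch; SVertexUnpacked and UnpackColor are just names made up for the example):

#include <DirectXMath.h>
#include <DirectXPackedVector.h>

// Vertex with the color already unpacked to 4 floats, so the input layout
// can declare the COLOR element as DXGI_FORMAT_R32G32B32A32_FLOAT.
struct SVertexUnpacked
{
	DirectX::XMFLOAT3 vPos;
	DirectX::XMFLOAT3 vTangent;
	DirectX::XMFLOAT3 vNormal;
	DirectX::XMFLOAT3 vTex0;
	DirectX::XMFLOAT3 vTex1;
	DirectX::XMFLOAT4 color;
};

// Converts the packed 8-bit-per-channel ARGB value into floats in [0, 1].
DirectX::XMFLOAT4 UnpackColor(const DirectX::PackedVector::XMCOLOR& packed)
{
	DirectX::XMFLOAT4 unpacked;
	DirectX::XMStoreFloat4(&unpacked, DirectX::PackedVector::XMLoadColor(&packed));
	return unpacked;
}

// Matching COLOR entry of the input layout (D3D12 flavour shown;
// 60 is the byte offset of 'color' in the struct above):
// { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 60,
//   D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }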

@NikiTo yeah, I fixed it. My first mistake was that my SV_POSITION was float3 instead of float4; I also changed the color to float4, and it worked!

I had also read this in the code:
“A 32-bit Alpha Red Green Blue (ARGB) color vector, where each color channel is specified as an unsigned 8 bit integer.”

but I thought it was just an 8-8-8-8 bit value = a 32-bit value = a float. I did suspect I might be mistaken, though, because the XMCOLOR constructor I was using to pass the color takes 4 float values.
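
For anyone who finds this later, the changed parts now look roughly like this (trimmed):

struct VertexOut
{
	float4 pos   : SV_POSITION;  // was float3 - SV_Position must be written as full xyzw
	// ... tangent, normal and tex coords unchanged ...
	float4 color : COLOR;        // was a single float
};

float4 PS(VertexOut pin) : SV_Target
{
	return pin.color;
}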

