
Channel information invalidating texture for BMFont

Started May 24, 2015, 10:19 PM
10 comments, last by WitchLord 9 years, 6 months ago

I have been trying to implement rendering of the BMFont format in DirectX 11 based on the sample code, and I am running into issues now that I have started using the channel information. Below are two images: one where I do not use the channel information in the pixel shader and one where I do. The text being displayed is "testna", using the Comic24 binary file from the example code. The text at the top is with alpha blending off, the second is with it on.

Before using the channel information

testnajustpixel.png

With using the channel information

testnawithchannel.png

When I do not use the channel information, the pixel shader is just


float4 PS( VS_OUTPUT input) : SV_Target {
    return textureDiffuse0.Sample(textureSampler0, input.Tex);
}

When I use the channel information, I do it the same way the example code does (except for applying the color):


float4 PS( VS_OUTPUT input) : SV_Target {
    float4 pixel = textureDiffuse0.Sample(textureSampler0, input.Tex);

    // Are we rendering a colored image, or 
    // a character from only one of the channels
    if( dot(vector(1,1,1,1), input.channel) )
    {
        // Get the pixel value
        float val = dot(pixel, input.channel);

        pixel.rgb = 1;
        pixel.a   = val;
    }
    return pixel;
}

Does anyone know why this happens when I use the channel information? I inspected the channel information while debugging and it is valid. I also tried to hard code it in the shader with the value I typically see (256), and I get the same result.

What value are you placing in the vertex's channel attribute? It should be a vector with only one of the red, green, blue, or alpha channels set to 1 (or 255). Which channel to set is given in the .fnt file for the character that you're drawing.

From the first image, all characters in the word 'testna' are in the green channel, so you'll want to use the value (0,0,1,0), assuming the order of the channels is (A,R,G,B).
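
As a rough sketch (this is not taken from the sample code, and it assumes ch->chnl still holds the raw BMFont flag, where 1 = blue, 2 = green, 4 = red, and 8 = alpha), packing that flag into a DWORD for a DXGI_FORMAT_R8G8B8A8_UINT vertex element could look like this:


#include <cstdint>

// Hypothetical helper (not from the sample): convert a BMFont 'chnl' flag
// into a packed R8G8B8A8 value. Each byte ends up as either 0 or 1, and the
// low byte maps to R on a little-endian machine, so green becomes 0x00000100 (256).
uint32_t PackChannel(int chnl)
{
    uint32_t r = (chnl & 4) ? 1 : 0;   // 4 = red
    uint32_t g = (chnl & 2) ? 1 : 0;   // 2 = green
    uint32_t b = (chnl & 1) ? 1 : 0;   // 1 = blue
    uint32_t a = (chnl & 8) ? 1 : 0;   // 8 = alpha
    return r | (g << 8) | (b << 16) | (a << 24);
}

// Usage (hypothetical):
// m_VertexData[id].channel = PackChannel(ch->chnl);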

AngelCode.com - game development and more - Reference DB - game developer references
AngelScript - free scripting library - BMFont - free bitmap font generator - Tower - free puzzle game


I have my vertex data defined as


struct Vertex {
    float x, y, z, u, v;
    DWORD channel;
};
Vertex m_VertexData[400];


Then I assign the channel with


m_VertexData[id].channel = ch->chnl;

In the vertex shader it is


struct VS_INPUT {
    float4 Pos : POSITION;
    float2 Tex : TEXCOORD0;
    uint4 channel : BLENDINDICES0;
};
struct VS_OUTPUT {
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
    uint4 channel : BLENDINDICES0;
};
VS_OUTPUT VS( VS_INPUT input ) {
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Pos = mul( float4( input.Pos.xyz, 1.0f), WorldViewProjMatrix);
    output.Tex = input.Tex;
    output.channel = input.channel;

    return output;
}

All of that seems to be exactly how you have it, along with the following vertex format:


D3D11_INPUT_ELEMENT_DESC layout[] =
{
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_UINT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

I recommend changing 'uint4 channel' to 'float4 channel' and DXGI_FORMAT_R8G8B8A8_UINT to DXGI_FORMAT_R8G8B8A8_UNORM. That way you'll get values in the range [0;1] and dot() should return meaningful values (at least antialiasing will work). As for the wrong channel being selected, like Andreas said, check what the real values in input.channel are (just move them directly into pixel.rgb).
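
On the layout side that is a one-line change; a sketch (using offsetof against the Vertex struct shown above instead of a hard-coded byte offset) might look like this:


#include <cstddef>   // offsetof
#include <d3d11.h>

// Sketch of the UNORM variant: the packed channel bytes reach the shader as
// normalized floats (0..255 maps to 0.0..1.0), so the HLSL input would be
// 'float4 channel : BLENDINDICES0;'. Note that with UNORM you would want to
// store 255 rather than 1 in the selected byte so the shader sees 1.0.
const D3D11_INPUT_ELEMENT_DESC channelElement =
{
    "BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_UNORM, 0,
    (UINT)offsetof(Vertex, channel), D3D11_INPUT_PER_VERTEX_DATA, 0
};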

Everything you've shown so far SEEMS to be correct; the problem is most likely a very specific little detail that we're not seeing.

I've yet to use DX11, so I can't say if the difference that behc mentioned would change anything. In my own code I use DX9, and the BLENDINDICES part of the vertex format uses the type D3DDECLTYPE_UBYTE4. I suppose this would be the same as DXGI_FORMAT_R8G8B8A8_UINT in DX11, but I cannot confirm it.

You'll need to do some debugging. Let us know what the value of the channel is when you draw the 'testna' characters.

float4 PS( VS_OUTPUT input) : SV_Target
{
    float4 pixel = textureDiffuse0.Sample(textureSampler0, input.Tex);

    // Are we rendering a colored image, or
    // a character from only one of the channels
    if( dot(vector(1,1,1,1), input.channel) )      // input.channel should have the value vector(0,0,1,0) here (assuming ARGB order)
    {
        // Get the pixel value
        float val = dot(pixel, input.channel);     // with vector(0,0,1,0) the dot function will return just the value of the green channel
        pixel.rgb = 1;
        pixel.a = val;                             // the value of the green channel will be used for the alpha blending
    }

    return pixel;
}

The code I use can be found here (in case you haven't seen it before):

* acgfx_font.cpp (.h)

* acgfx_dynrender.cpp (.h)


In both the vertex and pixel shader the value seems to be incorrect. It ends up being "input.channel x = 0, y = 128, z = 196, w = 195" (uint4). When I hard code it in the shader as "input.channel = uint4(0, 1, 0, 0)", the output is correct. The texture is being read as RGBA, as DirectX 11 does not seem to have an ARGB format. I suspect that may also be behind the issue where, when I use alpha testing, all my characters show up as boxes.


At least you know now what to investigate further to find the cause of the problem. :)

Let us know what it was when you find it.


I suppose something I still do not understand is: how are you setting the channel to a DWORD in VtxData and VtxPos, yet the shader knows it is an int4(0, 1, 0, 0)? How does it take the DWORD value of 256 and convert it into that? Perhaps there is something else I am missing, but I stepped through the code and looked at the values in PIX to check that.

The DWORD value 256 seen in hexadecimal looks like this: 0x00000100. If you consider that each byte of the DWORD represents one channel, you'll get (0,0,1,0).

Then it is just a matter of knowing which byte represents which channel.

From your vertex format you're telling DX11 that the DWORD should be interpreted as R8G8B8A8, i.e. (Red, Green, Blue, Alpha). Since x86 is little-endian, the lowest byte of the DWORD is the first byte in memory and maps to R, so the shader ends up seeing (0, 1, 0, 0), i.e. the green channel, which matches the hard-coded value that worked for you.
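
A tiny stand-alone illustration (not from the sample code) of that byte decomposition:


#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    uint32_t channel = 256;            // 0x00000100, the value stored in the vertex
    uint8_t bytes[4];
    std::memcpy(bytes, &channel, 4);   // memory order is the order R8G8B8A8 is read in
    // Prints R=0 G=1 B=0 A=0, which is why the shader sees uint4(0, 1, 0, 0)
    std::printf("R=%u G=%u B=%u A=%u\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}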


I figured out what the issue was. In the BLENDINDICES D3D11_INPUT_ELEMENT_DESC I was saying the element should start at offset 24 instead of 20, so the input assembler was reading past the channel field of each 24-byte vertex. I looked at it dozens of times before it finally stood out to me.
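
For anyone hitting the same thing, the corrected layout looks like this (the only change from the earlier snippet is the BLENDINDICES offset):


D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION",     0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD",     0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    // channel starts right after the five floats (x, y, z, u, v) = 20 bytes
    { "BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_UINT,   0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};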

Thank you for the help

