I am feeding 16-bit unsigned integer data into a compute shader, and I need to compute a standard deviation.
So I read in a series of samples and push them into float arrays:
float vals1[9], vals2[9], vals3[9], vals4[9];
int x = 0, y = 0;
for (x = 0; x < 3; x++)
{
    for (y = 0; y < 3; y++)
    {
        vals1[3 * x + y] = (float) (asuint(Input1[threadID.xy + int2(x - 1, y - 1)].x));
        vals2[3 * x + y] = (float) (asuint(Input2[threadID.xy + int2(x - 1, y - 1)].x));
        vals3[3 * x + y] = (float) (asuint(Input3[threadID.xy + int2(x - 1, y - 1)].x));
        vals4[3 * x + y] = (float) (asuint(Input4[threadID.xy + int2(x - 1, y - 1)].x));
    }
}
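The input and output textures are declared along these lines (a sketch only; the exact element types and register slots may differ from the actual shader):

// Sketch of the assumed bindings for 16-bit unsigned data (e.g. DXGI_FORMAT_R16_UINT);
// register slots are illustrative.
Texture2D<uint>   Input1  : register(t0);
Texture2D<uint>   Input2  : register(t1);
Texture2D<uint>   Input3  : register(t2);
Texture2D<uint>   Input4  : register(t3);
RWTexture2D<uint> Output1 : register(u0);
RWTexture2D<uint> Output2 : register(u1);
RWTexture2D<uint> Output3 : register(u2);
RWTexture2D<uint> Output4 : register(u3);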
I can send these values straight back out, and the data is as expected:
Output1[threadID.xy] = (uint) vals1[4];
Output2[threadID.xy] = (uint) vals2[4];
Output3[threadID.xy] = (uint) vals3[4];
Output4[threadID.xy] = (uint) vals4[4];
However, if I do anything to that data, it is destroyed.
If I add a
vals1[4] = vals1[4]/2;
or a
vals1[4] = vals1[1] - vals1[4];
then the data is gone and everything comes back as 0.
How does one go about converting a uint to a float, performing operations on it, and then converting it back to a rounded uint?
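To make the goal concrete, this is roughly the per-pixel computation I am aiming for (a sketch only, shown for vals1; the same would apply to the other arrays, and this is not code that currently works):

// Illustrative sketch of the intended math: mean and standard deviation
// of the 3x3 window, written back as a rounded uint.
float mean = 0.0f;
for (int i = 0; i < 9; i++)
    mean += vals1[i];
mean /= 9.0f;

float variance = 0.0f;
for (int j = 0; j < 9; j++)
{
    float d = vals1[j] - mean;
    variance += d * d;
}
variance /= 9.0f;                              // population variance of the window

float stddev = sqrt(variance);
Output1[threadID.xy] = (uint) round(stddev);   // float back to a rounded uint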