const unsigned int* h = (const unsigned int*)(&v); // reinterpret the vec3's three floats as raw 32-bit words
unsigned int f = (h[0] + h[1] * 11 - h[2] * 17) & 0x7fffffff; // avoid problems with +-0
return (f >> 22) ^ (f >> 12) ^ f;
In short, it doesn't work, and frankly I can't quite understand how it's supposed to work. I mean, it is almost 5 AM, so I might be missing something grossly obvious, but even if the vertices are normalized, f can overflow (which would be fine in itself, but has nothing to do with the sign bit). Which is to say, I don't even see how ANDing off the top bit of the combined hash is supposed to single out the +/-0 case.
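Just to spell out the sign-bit part for anyone reading along: +0.0f and -0.0f compare equal as floats but have different bit patterns (they differ only in bit 31), which is why a raw-bits hash needs to treat them specially at all. A throwaway standalone check, no math::vec3 involved:

#include <cstdio>
#include <cstring>

int main()
{
    float pz = +0.0f, nz = -0.0f;
    unsigned int pbits, nbits;
    std::memcpy(&pbits, &pz, sizeof(pbits)); // same reinterpretation as the h[] cast, minus the aliasing
    std::memcpy(&nbits, &nz, sizeof(nbits));
    std::printf("+0.0f bits = 0x%08x\n", pbits);   // 0x00000000
    std::printf("-0.0f bits = 0x%08x\n", nbits);   // 0x80000000
    std::printf("+0.0f == -0.0f: %d\n", pz == nz); // 1: equal as floats, different as bits
    return 0;
}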
Here's my naive test that explicitly handles +/- zero. I've removed the bit mangling on the last line and changed one of the primes.
__int64 vtxhash(IN const math::vec3& v)
{
    const unsigned int* h = (const unsigned int*)(&v);
    // Canonicalize -0.0f to +0.0f: if everything except the sign bit is zero, use all-zero bits for that component.
    __int64 h0 = ((h[0] & 0x7fffffff) == 0) ? 0 : h[0];
    __int64 h1 = ((h[1] & 0x7fffffff) == 0) ? 0 : h[1];
    __int64 h2 = ((h[2] & 0x7fffffff) == 0) ? 0 : h[2];
    return h0 + h1 * 23 - h2 * 17;
}
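For what it's worth, the spot check I run against it looks roughly like this (a minimal sketch, assuming math::vec3 can be constructed from three floats, which is true of my math lib but treat it as a placeholder):

#include <cassert>

void test_signed_zero()
{
    math::vec3 a(+0.0f, 1.0f, 2.0f);
    math::vec3 b(-0.0f, 1.0f, 2.0f); // same position, negative zero in x
    assert(vtxhash(a) == vtxhash(b)); // both x components canonicalize to the same zero bits
}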
Anyone dealt with this before? The reason I'm asking is mostly for peer review, and because I'd like to make heads or tails of the reference snippet.
For reference - I'm not really dealing with millions of vertices, although having no collisions at all would be a nice thing to have.