
Manual cubemap lookup/filtering in HLSL

Started by October 16, 2017 06:11 PM
2 comments, last by bandages 7 years, 3 months ago

I'm working in an old application (DX9-based) where I don't have access to the C code, but I can write any (model 3.0) HLSL shaders I want.  I'm trying to mess with some cube mapping concepts.  I've gotten to the point where I'm rendering a cube map of the scene to a cross cube that I can plug directly into ATI cubemapgen for filtering, which is already easier than trying to make one in Blender, so I'm pretty happy so far.  But I would like to do my own filtering and lookups for two purposes: one, to effortlessly render directly to sphere map (which is the out-of-the-box environment mapping for the renderer I'm using), and two, to try out dynamic cube mapping so I can play with something approaching real-time reflections.  Also, eventually, I'd like to do realish-time angular Gaussian on the cube map so that I can get a good feel for how to map specular roughness values to Gaussian-blurred environment miplevels.  It's hard to get a feel for that when it requires processing through several independent, slow applications.

 

Unfortunately, the math to do lookups and filtering is challenging, and I can't find anybody else online doing the same thing.  It seems to me that I'm going to need a world-vector-to-cube-cross-UV function for the lookup, then a cube-cross-UV-to-world-vector function for the filtering (so I can point sample four or more adjacent texels, then interpolate on the basis of angular distance rather than UV distance.)

 

First, I'm wondering if there's any kind of matrix that I can use here to transform vector to cube-cross map, rather than doing a bunch of conditionals on the basis of which cube face I want to read.  This seems like maybe it would be possible?  But I'm not really sure, it's kind of a weird transformation.  Right now, my cube cross is a 3:4 portrait, going top/front/bottom/back from top to bottom, because that's what cubemapgen wants to see.  I suppose I could make another texture from it with a different orientation, if that would mean I could skip a bunch of conditionals on every lookup.

 

Second, it seems like once I have the face, I could just use something like my rendering matrix for that face to transform a vector to UV space,  but I'm not sure that I could use the inverse of that matrix to get a vector from an arbitrary cube texel for filtering, because it involves a projection matrix-- I know those are kind of special, but I'm still wrapping my head around a lot of these concepts.  I'm not even sure I could make the inverse very easily; I can grab an inverseProj from the engine, but I'm writing to projM._11_22 to set the FOV to 90, and I'm not sure how that would affect the inverse.
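
To make that second question more concrete, my current guess is that the inverse projection isn't even needed: since each face is rendered with a 90-degree FOV, a texel's direction should come straight from that face's basis vectors. Something like this sketch, where FaceUVToDirection and the faceForward/faceRight/faceUp parameters are just placeholder names for whatever basis I rendered that face with:

// Sketch only: rebuild a world-space direction from a face UV, assuming the
// face was rendered with a 90-degree FOV. faceForward/faceRight/faceUp are
// placeholders for the basis vectors used to render that face.
float3 FaceUVToDirection(float2 uv, float3 faceForward, float3 faceRight, float3 faceUp)
{
    // Remap [0, 1] UVs to [-1, 1] on the face plane.
    float2 st = uv * 2.0f - 1.0f;

    // With a 90-degree FOV the face plane sits at distance 1, so the
    // unnormalized direction is forward plus the scaled right/up axes.
    // The V flip assumes texture V increases downward.
    float3 dir = faceForward + st.x * faceRight - st.y * faceUp;
    return normalize(dir);
}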

 

Really interested in any kind of discussion on techniques involved, as well as any free resources.  I'd like to solve the problem, but it's much more important to me to use the problem as a way to learn more.

Getting the cubemap face + UV coordinates from a direction vector is fairly simple. The component with the largest absolute value determines the face, and the other two components become your UVs once you divide by that max component and remap from [-1, 1] to [0, 1]. Here's some example code for you from one of my open-source projects:


template<typename T> static XMVECTOR SampleCubemap(Float3 direction, const TextureData<T>& texData)
{
    Assert_(texData.NumSlices == 6);

    // The face is chosen by whichever component has the largest magnitude;
    // the other two components become the face UVs after dividing by it.
    float maxComponent = std::max(std::max(std::abs(direction.x), std::abs(direction.y)), std::abs(direction.z));
    uint32 faceIdx = 0;
    Float2 uv = Float2(direction.y, direction.z);
    if(direction.x == maxComponent)
    {
        faceIdx = 0;    // +X face
        uv = Float2(-direction.z, -direction.y) / direction.x;
    }
    else if(-direction.x == maxComponent)
    {
        faceIdx = 1;    // -X face
        uv = Float2(direction.z, -direction.y) / -direction.x;
    }
    else if(direction.y == maxComponent)
    {
        faceIdx = 2;    // +Y face
        uv = Float2(direction.x, direction.z) / direction.y;
    }
    else if(-direction.y == maxComponent)
    {
        faceIdx = 3;    // -Y face
        uv = Float2(direction.x, -direction.z) / -direction.y;
    }
    else if(direction.z == maxComponent)
    {
        faceIdx = 4;    // +Z face
        uv = Float2(direction.x, -direction.y) / direction.z;
    }
    else if(-direction.z == maxComponent)
    {
        faceIdx = 5;    // -Z face
        uv = Float2(-direction.x, -direction.y) / -direction.z;
    }

    // Remap from [-1, 1] to [0, 1] and sample the selected face slice.
    uv = uv * Float2(0.5f, 0.5f) + Float2(0.5f, 0.5f);
    return SampleTexture2D(uv, faceIdx, texData);
}

I don't think there's any simple matrix or transformation that will get you UV coordinates for a cubemap that's set up as a "cross". It would be easier if you had all of the faces laid out horizontally or vertically in cubemap face order (+X, -X, +Y, -Y, +Z, -Z), but if that's not an option, a bit of extra computation will get you from face index -> cross coordinates.
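
As a rough sketch of that extra computation, assuming the 3:4 vertical cross you described (top/front/bottom/back down the middle column), something like this could map a face index plus face UV into the cross. FaceUVToCrossUV is just an illustrative name, and the cell positions for +/-X, the back-face flip, and the V direction are all assumptions about how CubeMapGen lays the cross out, so verify them against your actual texture (individual faces may need extra flips so their edges line up):

// Rough sketch: map a cube face index + face UV into a 3:4 vertical cross.
// Cell assignments and flips are assumptions about the cross layout.
float2 FaceUVToCrossUV(int faceIdx, float2 faceUV)
{
    float2 uv = faceUV;
    float2 cell = float2(1, 1);                 // +Z (front), middle of the cross
    if(faceIdx == 0)      cell = float2(2, 1);  // +X, assumed right of front
    else if(faceIdx == 1) cell = float2(0, 1);  // -X, assumed left of front
    else if(faceIdx == 2) cell = float2(1, 0);  // +Y (top)
    else if(faceIdx == 3) cell = float2(1, 2);  // -Y (bottom)
    else if(faceIdx == 5)                       // -Z (back), bottom cell
    {
        cell = float2(1, 3);
        uv = 1.0f - uv;                         // back face is often stored rotated 180 degrees
    }

    // 3 columns x 4 rows, assuming V = 0 at the top of the texture.
    return (cell + uv) / float2(3.0f, 4.0f);
}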

From there doing bilinear filtering isn't too hard by just treating the texture as 2D, but smoothly filtering across cubemap faces requires all kinds of special logic.
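
For the within-face part, here's a minimal sketch of manual bilinear filtering in HLSL, assuming the sampler is set to point filtering and the texture dimensions get passed in as a constant (texSize). Swapping the standard weights for angle-based ones is where your custom filtering would go, and taps that land on a face edge would still need the special-case logic mentioned above:

// Minimal sketch: manual bilinear filtering of a point-sampled 2D texture.
// crossSampler is assumed to use point filtering; texSize is the texture
// size in texels, passed in as a shader constant.
float4 SampleBilinear(sampler2D crossSampler, float2 uv, float2 texSize)
{
    // Position in texel space, shifted so texel centers land on integers.
    float2 texelPos = uv * texSize - 0.5f;
    float2 f = frac(texelPos);
    float2 base = (floor(texelPos) + 0.5f) / texSize;
    float2 dx = float2(1.0f / texSize.x, 0.0f);
    float2 dy = float2(0.0f, 1.0f / texSize.y);

    // Four point samples around the lookup position.
    float4 s00 = tex2D(crossSampler, base);
    float4 s10 = tex2D(crossSampler, base + dx);
    float4 s01 = tex2D(crossSampler, base + dy);
    float4 s11 = tex2D(crossSampler, base + dx + dy);

    // Standard bilinear weights; this is where angle-based weights would go.
    return lerp(lerp(s00, s10, f.x), lerp(s01, s11, f.x), f.y);
}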


Thank you!

Was working on this since writing but wasn't getting anywhere. I'd just given up when I read your message, figuring I'd wait until I'm smarter. Replaced my ridiculous, non-functional code with yours and it works :) Now I just have to figure out why the + and - signs and the .zy vs. .yz swizzles are what they are, since I just trial-and-errored it.

I'm sure there's a reason cubemap filtering goes so slowly.  But at least I've already found things to read and try when it comes to that, so hopefully I won't get stuck.

[Attached image: sphNonFail.png]

