I have 4 fisheye cameras, and after lens correction I project them onto a 3D bowl mesh. I'm getting flickering, like ants crawling, in certain areas of the scene (top view of the cameras).
I've tried generating mipmaps, which solved the flickering and improved the quality a lot. The problem is that I'm running this on an embedded platform, and generating mipmaps every frame is not feasible; it's a CPU-intensive operation. Techniques like render-to-texture with SSAA are also a problem for the same reason.
I have tried creating a 2D mask and doing bicubic interpolation/filtering in the areas where the flicker appears, but it didn't solve the issue.
I'm looking for methods to solve this problem.
I guess you can solve it by taking multiple samples. E.g. divide each texel into a 4x4 grid, then do lens correction and one texture fetch for each subpixel sample individually, and take the average of all samples. You could also use N random subpixel positions, where N is larger in areas of more distortion, as an optimization.
How much a bicubic filter can help depends on the lens-correction distortion. If one pixel maps to an area of about 5x5 pixels in the photo, a cubic filter covering 3x3 pixels will still undersample. And you want the average of an area, not a better point sample, which is why the better filter might not be good enough. Mipmaps help here, of course, because they provide an average of an area.
I'll draw a picture to illustrate how multiple samples should fix it:
On the left is a destination pixel with 6 random samples. You map each of them with your distortion-fixing math to the source image, where they might cover a larger area, shown on the right. A bilinear lookup should suffice for each sample, and the average of all samples gives a good estimate if the sample count is high enough. This is simple Monte Carlo integration.
So all you need is a way to generate random subpixel positions, usually done with hash functions. Pseudocode would look like this:
const int sampleCount = 8;
const int dimensions = 2;
int seed = (int(curPixel.y) * screenWidth + int(curPixel.x)) * sampleCount * dimensions; // ensure each subsample gets its own unique seed
vec3 sum = vec3(0.0);
for (int i = 0; i < sampleCount; i++)
{
    float subOffsetU = randF(seed + i * dimensions + 0) - 0.5f; // assuming the hash returns values in [0, 1)
    float subOffsetV = randF(seed + i * dimensions + 1) - 0.5f;
    vec2 subSampleCoords = pixelUV + vec2(subOffsetU, subOffsetV); // offsets are in pixel units; divide by texture size if pixelUV is normalized
    vec3 s = TextureFetch(ProjectionUnDistort(subSampleCoords)); // 'sample' is reserved in GLSL, so use another name
    sum += s;
}
vec3 averagedResult = sum / float(sampleCount);
An example C++ hash function I'm using would be like this:
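The hash function itself didn't survive in the post. As a stand-in (not the original poster's code), here is a common PCG-style integer hash that maps a seed to a float in [0, 1):

```cpp
#include <cstdint>

// Stand-in hash, not the original poster's function: a PCG-style integer
// mix that turns a seed into a pseudo-random float in [0, 1).
float randF(uint32_t seed)
{
    seed = seed * 747796405u + 2891336453u;                                // LCG step
    uint32_t word = ((seed >> ((seed >> 28u) + 4u)) ^ seed) * 277803737u;  // output permutation
    word = (word >> 22u) ^ word;
    return float(word >> 8) * (1.0f / 16777216.0f);  // keep 24 bits so the result stays in [0, 1)
}
```

Any decent integer hash with good avalanche behavior works in its place.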
I forgot to mention that such hash functions are not perfect random number generators. It can happen, e.g. if the image width has a certain value, that the generated random samples show patterns, and then the method no longer works properly. Without visualizing the subsample positions that's hard to detect, which makes it quite an annoying problem. That's why you can find hundreds of different hash functions on Shadertoy, but none is perfect. To mitigate it, we can try adding some arbitrary constant numbers, like so:
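The constants-trick can look like the sketch below. The particular numbers are arbitrary (I'm using well-known spatial-hash primes as an illustration, not values from the original post), and the hash is a generic stand-in:

```cpp
#include <cstdint>

// Stand-in integer hash (any good avalanche hash works); maps seed -> [0, 1).
static float randF(uint32_t s)
{
    s ^= s >> 16; s *= 0x7feb352du;
    s ^= s >> 15; s *= 0x846ca68bu;
    s ^= s >> 16;
    return float(s >> 8) * (1.0f / 16777216.0f);  // keep 24 bits -> [0, 1)
}

// Decorrelate per-pixel seeds by mixing in large, unrelated odd constants,
// so patterns tied to a particular image width are less likely to line up.
uint32_t makeSeed(uint32_t x, uint32_t y, uint32_t sampleIdx)
{
    return x * 73856093u + y * 19349663u + sampleIdx * 83492791u;
}
```

If you still see patterns, swapping the constants (or the hash) is usually enough to break them up.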
AhmedSaleh said: How about this solution that I did ?
I see it's sampling a regular grid per texel, so yes, this should give you high-quality multisampling.
But I don't see the projection mapping in your inner loop. I assumed you have some analytical mapping from the planar to the fisheye projection, and you would need to apply it to each sample before the texture lookup.