Hello all,
I've noticed some very weird behaviour in my code:
for(int y = start_y; y < end_y; ++y)
{
    for(int x = start_x; x < end_x; ++x)
    {
        Spectrum s;
        vector2<real> pixel_sample(real(x) / real(w), real(1) - (real(y) / real(h)));
        real total_weight = 0.0;
        for(int i = 0; i < aa_quality + 1; ++i)
        {
            vector2<real> sample = pixel_sample;
            if (aa_quality > 1)
            {
                vector2<real> d = (Sampler::SampleUniform2D() - vector2<real>(0.5, 0.5)) * (filter.GetWidth() * 2);
                sample += (d / vector2<real>(w, h)); // Randomly offsets the ray if aa > 1 sample per pixel
            }
            Spectrum dofs;
            for(int j = 0; j < camera->GetSamplesPerPixel(); j++)
            {
                vector2<real> dof_sample = sample;
                ray<vector4<real>> rr = camera->GenerateRay(dof_sample);
                rr.maxdistance = Q_INF;
                SurfacePoint point;
                point.previousIOR = Spectrum(1.0);
                real w = filter.Evaluate(dof_sample); // note: shadows the image-width w used above
                total_weight += w;
                Spectrum val = qlRenderer->Trace(rr, point, 6);
                dofs += val * w;
            }
            s += dofs / 1;
        }
        s = s / total_weight;
        buffer->SetPixel(s, x, y, real(0));
    }
}
That's the main loop of my raytracer. It's just temporary code and not yet part of the core rendering framework.
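In isolation, the anti-aliasing accumulation structure above looks like this (a minimal sketch: plain floats stand in for Spectrum, a box filter of weight 1 stands in for filter, and a constant stand-in replaces the Trace() call; all names here are illustrative, not the framework's):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Sketch of the jittered-AA loop: accumulate weighted samples, then
// normalise by the accumulated weight, mirroring the structure above.
float SamplePixel(int aa_quality, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float s = 0.0f;
    float total_weight = 0.0f;
    for (int i = 0; i < aa_quality + 1; ++i)   // same bound as the original loop
    {
        float dx = 0.0f;
        if (aa_quality > 1)
            dx = uni(rng) - 0.5f;              // jitter, as in the original
        float w = 1.0f;                        // box filter: every sample weighs 1
        total_weight += w;
        float radiance = 0.5f + 0.1f * dx;     // stand-in for Trace()
        s += radiance * w;
    }
    return s / total_weight;                   // normalise by accumulated weight
}
```

Note that the loop bound `aa_quality + 1` runs the body twice even when aa_quality is 1, which may or may not be intended.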
I have a test scene that renders in 1 second as long as aa_quality is set to 1 sample per pixel. If I set it to 2, the rendering time jumps to 25 seconds!
If I comment out the for statement, leaving only its body, I get back to 1 second. If I hard-code the bound to 2, I get 25 seconds again.
As far as I can tell, there is no reason for it to behave this way: running the loop twice should roughly double the rendering time, not multiply it by 25.
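To rule out measurement noise when comparing the variants, each run can be timed with std::chrono; this sketch wraps a dummy workload (RenderOnce is a made-up stand-in for the real render call) in a steady-clock timer:

```cpp
#include <chrono>

// Stand-in for one frame's work; the volatile sink keeps the
// compiler from optimising the loop away entirely.
volatile double sink = 0.0;
void RenderOnce()
{
    double acc = 0.0;
    for (int i = 0; i < 1000000; ++i)
        acc += i * 0.5;
    sink = acc;
}

// Time one call with a monotonic clock and return elapsed seconds.
double TimeSeconds()
{
    auto t0 = std::chrono::steady_clock::now();
    RenderOnce();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}
```

Timing each loop variant a few times this way would confirm whether the 1s/25s gap is stable or an artifact of how the runs were measured.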
Even stranger, if I instead force the innermost loop (the depth-of-field one) to repeat twice, with the aa loop executing once:
for(int j = 0; j < 2; j++)
nothing unusual happens: I get twice the rendering time, as expected, even though the aa and dof loops are essentially the same thing: in both cases two rays are shot into the scene.
If this were something related to the optimizer, I would expect both loops to break it.
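One rough way to probe the optimizer theory is to hide the loop bound behind a volatile read, so the compiler cannot treat a compile-time-known trip count differently from a runtime one (a sketch with made-up names, not proof either way; if timing is unchanged with the opaque bound, the optimizer is probably not the culprit):

```cpp
// The volatile qualifier forces the compiler to actually load the bound
// at runtime, preventing constant propagation and any unrolling or
// specialisation that depends on the bound being known at compile time.
int SumWithOpaqueBound()
{
    volatile int bound = 2;   // opaque to the optimizer
    int n = bound;
    int total = 0;
    for (int j = 0; j < n; ++j)
        total += j + 1;       // 1 + 2
    return total;
}
```

Applying the same trick to the aa loop bound in the real code would show whether the slowdown depends on the compiler seeing the constant.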
Any ideas? (I'm working with the Visual C++ 2019 compiler.)
Thank you!