Hello, my simplified code looks like this:
//Queries initialization
// disjoint0 is D3D11_QUERY_TIMESTAMP_DISJOINT
// queryStart & queryEnd are D3D11_QUERY_TIMESTAMP
while (true)
{
    m_d3DeviceContext->Begin(disjoint0);

    // Updating scene
    // Drawing scene

    m_d3DeviceContext->End(queryStart);
    Sleep(10); // First sleep
    m_swapChain->Present(0, 0);
    m_d3DeviceContext->End(queryEnd);
    m_d3DeviceContext->End(disjoint0);
    Sleep(10); // Second sleep

    // Spin until the disjoint query data is available
    while (m_d3DeviceContext->GetData(disjoint0, NULL, 0, 0) == S_FALSE);

    D3D11_QUERY_DATA_TIMESTAMP_DISJOINT tsDisjoint;
    m_d3DeviceContext->GetData(disjoint0, &tsDisjoint, sizeof(tsDisjoint), 0);
    if (tsDisjoint.Disjoint)
        continue;

    UINT64 frameStart, frameEnd;
    m_d3DeviceContext->GetData(queryStart, &frameStart, sizeof(UINT64), 0);
    m_d3DeviceContext->GetData(queryEnd, &frameEnd, sizeof(UINT64), 0);

    // Convert timestamp ticks to milliseconds
    double time = 1000.0 * (frameEnd - frameStart) / (double)tsDisjoint.Frequency;
    DebugLog::Log(time);
}
The first sleep does not affect the measured GPU time at all (which is what I want, obviously), but the second one does. After many experiments it looks like sleeps placed before the swap chain's Present are ignored by the GPU, but for some reason any sleep between Present and reading the disjoint query data increases the reported GPU time by roughly its duration. Moving the End calls of the queries around makes no difference.
Why do I care about that sleep? In my real code, during frame n I issue the queries for frame n and read back the results of frame n-1, so the GPU time reported for frame n-1 is inflated by the time it takes to process frame n.
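To make that pattern concrete, here is a minimal sketch of the n / n-1 scheme I mean (illustrative names like disjoint[2], qStart[2], qEnd[2], not my actual variables; the queries are created beforehand with CreateQuery and error handling is omitted):

// Two query sets: frame n writes set [n % 2] and reads back set [(n + 1) % 2],
// i.e. the results issued during frame n-1.
ID3D11Query* disjoint[2];
ID3D11Query* qStart[2];
ID3D11Query* qEnd[2];
UINT frame = 0;

while (true)
{
    UINT curr = frame % 2;       // queries being issued this frame (n)
    UINT prev = (frame + 1) % 2; // queries issued last frame (n-1)

    m_d3DeviceContext->Begin(disjoint[curr]);
    m_d3DeviceContext->End(qStart[curr]);
    // Updating and drawing scene
    m_d3DeviceContext->End(qEnd[curr]);
    m_d3DeviceContext->End(disjoint[curr]);

    m_swapChain->Present(0, 0);

    if (frame > 0)
    {
        // Read back frame n-1's results while frame n is in flight
        while (m_d3DeviceContext->GetData(disjoint[prev], NULL, 0, 0) == S_FALSE);

        D3D11_QUERY_DATA_TIMESTAMP_DISJOINT tsDisjoint;
        m_d3DeviceContext->GetData(disjoint[prev], &tsDisjoint, sizeof(tsDisjoint), 0);
        if (!tsDisjoint.Disjoint)
        {
            UINT64 t0, t1;
            m_d3DeviceContext->GetData(qStart[prev], &t0, sizeof(UINT64), 0);
            m_d3DeviceContext->GetData(qEnd[prev], &t1, sizeof(UINT64), 0);
            DebugLog::Log(1000.0 * (t1 - t0) / (double)tsDisjoint.Frequency);
        }
    }

    ++frame;
}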
Why is this happening, and what can I do to prevent it?