
How is my OpenGL test program overheating my CPU without causing high CPU usage?

Started by June 12, 2022 07:32 PM
5 comments, last by dpadam450 2 years, 5 months ago

Hi,

I'm using Windows 10 and Visual Studio 2022 and C++17 with the multi-threaded MSVC Runtime as a DLL (/MDd and /MD). I'm using GLFW 3.3.7, glad (latest version as of 6/12/2022), and GLM 0.9.9.9.

I downloaded GLFW3 and created the test program from their quick start page. I substituted GLM in for linmath. The test program is just the standard introductory example that shows a spinning triangle.
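For reference, the structure looks roughly like this (a loose sketch of the quick-start loop with GLM in place of linmath; shader and vertex setup are omitted, so details differ from the actual guide):

```cpp
// Minimal GLFW loop in the spirit of the quick-start example, with GLM instead of linmath.
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    if (!glfwInit())
        return -1;

    GLFWwindow* window = glfwCreateWindow(640, 480, "Spinning Triangle", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }

    glfwMakeContextCurrent(window);
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);
    glfwSwapInterval(1); // request vsync; the driver may override this

    while (!glfwWindowShouldClose(window))
    {
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT);

        // Rotate the model matrix over time, as the quick-start does with linmath.
        glm::mat4 model = glm::rotate(glm::mat4(1.0f),
                                      (float)glfwGetTime(),
                                      glm::vec3(0.0f, 0.0f, 1.0f));
        // ... upload the matrix and draw the triangle (shader/VAO setup omitted) ...

        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```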

While I was running it, my computer crashed! Out of nowhere! And when I restarted, it went into the BIOS with a message saying my CPU was at a high temperature. My fans hadn't even kicked in! It was like my computer somehow didn't notice it was overheating!

I've turned down my overclocking and tuned my fan profile so this isn't happening as quickly now, but the strange thing is, this program isn't causing high CPU usage at all! The CPU is sitting at about 4% usage! (I have glfwSwapInterval(1) set.) And yet I can watch the “CPU package” temperature rising in Armoury Crate! The “CPU” temperature doesn't rise as much, so the fans don't spin up; I have to manually turn up the fan speed to prevent it from overheating!

What the freak!? No game I play does this. I have an AMD Ryzen 9 5900X with an NVIDIA RTX 3060 GPU. All my off-the-shelf games run great, I can overclock, no problem. CPU and GPU temps stay low all the time. There's something specific about This Particular Program that somehow heats up the CPU WITHOUT causing high CPU usage! This is so strange!

It can't possibly be GLM, can it? Is it something about memory bus thrashing because of the MSVC security checks (the /GS buffer checks) doing extra work for every buffer?

I dunno. Grasping at straws here. I'll try compiling in release mode (as opposed to debug mode); I know GLM is a lot more efficient in release builds. I'll make sure the release version is compiled with /GS-. I'll try a few optimizations, like taking the glViewport call out of the main render loop. But..? Really? My “CPU package” is overheating without showing high usage? And it doesn't happen with any of my games, only with this particular OpenGL program? That's not even doing much? What could possibly be happening here?

Any ideas?

edit: I just noticed my “CPU package” temp, as displayed by Armoury Crate, jumps up from like 62 to 69(!) on the very next sample after I shut off my test program. So it's like somehow my test program is preventing Armoury Crate from properly sampling the CPU package temp. What? How could it possibly be doing that?

Put a Sleep(1) in your update loop to see if that reduces the workload. It might help you see whether it's related to how much work you are processing per frame or some other issue with your graphics card/code/drivers.
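For example, roughly like this (a quick sketch; the body of the loop just stands in for whatever your frame currently does):

```cpp
#include <windows.h>      // for ::Sleep
#include <GLFW/glfw3.h>

// Hypothetical sketch: the same render loop as before, with a 1 ms sleep added
// at the end of each frame as a quick experiment.
void runLoopWithSleep(GLFWwindow* window)
{
    while (!glfwWindowShouldClose(window))
    {
        // ... existing per-frame update and draw calls go here ...
        glfwSwapBuffers(window);
        glfwPollEvents();

        ::Sleep(1); // yield the thread for at least ~1 ms per frame
    }
}
```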

NBA2K, Madden, Maneater, Killing Floor, Sims


The Ryzen chips are very sensitive to overheating: they will immediately shut down when they hit their thermal limit.

I struggled for a long time with throttling on a 1950X, which turned out to be because of a poor fit between the CPU and the cold plate of an all-in-one water cooler. Once I got a proper cold plate with a well-machined bracket, and a very thin amount of paste (not the pre-applied stuff), I could push much, much harder – almost like a new computer at the time :-)

So, my advice is: Unmount your cooler, check all the mounting screws for tension and alignment, re-apply only a very thin amount of paste (you should be able to essentially see through it) and screw it on evenly. That might just help you get more out of the CPU!

Why would a particular program do this? Because your program is doing very little, it's probably hitting some particular code path that games which do a lot don't hit, and that code path happens to exercise the bits that heat up one particular part of the core rather than spreading the work out. That would be my guess.

I also like the suggestion to add a ::Sleep(1) in your main loop, just to make sure it's not an infinite-framerate problem.

enum Bool { True, False, FileNotFound };

Hello, and thanks for your replies.

I'm reluctant to mess with the cooler because I already tried hard to do the best job I could with the cold plate, the thermal paste, and so on. I have no trouble with other programs; I'm even able to overclock pretty well. I only have trouble with this program.

I did play around with setting various values with glfwSwapInterval, and with putting a direct sleep in using std::chrono's high_resolution_clock and sleep_until.

I found that cranking up the swap interval causes really bad jittering and reduces the CPU package temp some, but only by a few degrees.

I found that setting the swap interval to 0 and instead adding a deliberate sleep actually smooths the jittering compared with glfwSwapInterval(0) and no sleep. On this computer, with no swap interval and no sleep, I still don't see any noticeable CPU usage. I'm wondering if maybe my driver settings are forcing vsync.

BUT I found that using swap interval 0 plus a deliberate sleep reduces the resulting package temp by almost 15 degrees! It's still hardly touching the CPU in terms of overall usage. The CPU usage stays between 1% and about 3% whether my program is running or not.

Wow. So weird. Well, thanks for the suggestions. I guess I'll work on getting an FPS value displayed and on letting the user select a target FPS, and then I'll have the program slowly adjust the sleep amount to average about that FPS, given however long my frame update takes on their box. (If they enter an impossible target, I'll just set the sleep to 0.) Maybe I'll put in a “vertical sync” option that toggles the glfwSwapInterval value between 0 and 1.
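Something along these lines, maybe (a rough sketch using std::chrono and sleep_until; I've used steady_clock rather than high_resolution_clock here, and targetFps is just a hypothetical user setting):

```cpp
#include <chrono>
#include <thread>

// Hypothetical frame limiter: sleep away whatever is left of the frame budget.
void limitFrameRate(double targetFps,
                    std::chrono::steady_clock::time_point frameStart)
{
    using namespace std::chrono;
    if (targetFps <= 0.0)
        return; // impossible or unset target: don't sleep at all

    const auto frameBudget = duration_cast<steady_clock::duration>(
        duration<double>(1.0 / targetFps));
    const auto deadline = frameStart + frameBudget;

    if (steady_clock::now() < deadline)
        std::this_thread::sleep_until(deadline);
}

// In the main loop, something like:
//   auto frameStart = std::chrono::steady_clock::now();
//   ... update and render ...
//   limitFrameRate(targetFps, frameStart);
```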

Even just a sleep of 500 nanoseconds reduces the resulting temp by over 10 degrees.

shavais said:
setting various values with glfwSwapInterval

There's a reason we're suggesting that you should use ::Sleep() from the Windows SDK.

One reason is that graphics drivers frequently just ignore the vblank / refresh intervals requested by applications.

The other reason is that the timer libraries in standard C++ are frequently terrible – for example, they may busy-loop for shorter sleeps to be “more accurate.”

Once you have real scenes of real complexity, maybe you won't need so much sleeping on the CPU.
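If you want to see what the standard sleep actually does on your machine, you could time both calls side by side (a small Windows-only sketch; it only shows how long each call blocks, so watch CPU usage separately to spot any busy-waiting):

```cpp
#include <windows.h>
#include <chrono>
#include <thread>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    ::Sleep(1); // Windows SDK sleep; actual wait depends on the scheduler tick
    auto t1 = clock::now();

    std::this_thread::sleep_for(std::chrono::milliseconds(1)); // standard C++ sleep
    auto t2 = clock::now();

    std::printf("::Sleep(1) took %lld us, sleep_for(1ms) took %lld us\n",
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count(),
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count());
    return 0;
}
```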

enum Bool { True, False, FileNotFound };

You shouldn't have to expose an FPS limit to the end user or anything like that. I suggested Sleep(1) as a temporary test, because it should drastically stall everything related to your program and leave the GPU without work to do. There usually isn't any reason to render faster than 120 fps right now (well, 240 Hz monitors exist), so you can always check your frame time and sleep for the remaining time so the program doesn't run as fast as possible for no reason.

NBA2K, Madden, Maneater, Killing Floor, Sims

This topic is closed to new replies.
