
Force OpenGL to use software rendering

Started by December 19, 2024 03:25 AM
11 comments, last by frob 2 hours, 26 minutes ago

I'm trying to write an OpenGL program that needs to use pure software rendering, for testing reasons.

But I just can't find a common way to force OpenGL to use the CPU only, even when it's running on a platform equipped with a GPU.

Manually requesting the compatibility profile, or requesting an older OpenGL version (like 1.4), doesn't seem to work.

So is there any “official” or “recommended” way to achieve that (ideally cross-platform)? Or any platform-specific solutions?

“I'm trying to write an OpenGL program that needs to use pure software rendering, for testing reasons.”

I don't know what testing reasons there would be. Maybe you can elaborate, because I see no testing reason; a GPU's job is to draw things. I've been using GL for 20 years and have never thought of this or used software rendering.

Speaking from experience, I'm going to assume there is no legitimate way to do this. GPUs are massively more complex than back in the day, when you might also have had a software fallback. I don't think anyone is writing driver fallbacks for when you are not using the hardware, which has a very specific architecture in terms of how shaders run, etc. I'd be surprised if any OpenGL DLL on Windows contained implementation code for all the complexities that exist across the many chips on a GPU.

NBA2K, Madden, Maneater, Killing Floor, Sims


dpadam450 said:
I'm just going to assume with experience there is no legitimate way to do this. GPU's are massively more complex than back in the day when you might have also had a software fallback. I don't think someone is writing driver fallbacks when you are not using hardware

I don't know about OpenGL from experience, but DirectX does still have that (https://learn.microsoft.com/en-us/windows/win32/api/d3dcommon/ne-d3dcommon-d3d_driver_type), so it is still possible, and people are still writing that kind of thing (e.g., to tell whether a certain type of bug is a driver issue or a problem with the API).
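For reference, on the DirectX side the software path is selected at device creation. A minimal sketch (Windows-only, requires the D3D11 SDK; error handling omitted):

```cpp
#include <d3d11.h>

int main()
{
    ID3D11Device        *device  = nullptr;
    ID3D11DeviceContext *context = nullptr;

    // D3D_DRIVER_TYPE_WARP requests the WARP software rasterizer
    // instead of the GPU; D3D_DRIVER_TYPE_REFERENCE is the (slow)
    // reference rasterizer intended for driver/API debugging.
    HRESULT hr = D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_WARP,       // force software rendering
        nullptr, 0,                 // no software module, no flags
        nullptr, 0,                 // default feature levels
        D3D11_SDK_VERSION,
        &device, nullptr, &context);

    if (SUCCEEDED(hr)) { context->Release(); device->Release(); }
    return 0;
}
```

There's no direct OpenGL equivalent of this flag, which is exactly the OP's problem.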

For OpenGL there does not seem to be a standard way, but Stack Overflow suggests there are various libraries, depending on your exact version: https://stackoverflow.com/questions/10431591/opengl-software-rendering-alternatives.

Up until Windows XP (or maybe older), Windows shipped with a CPU implementation of OpenGL. IIRC, the filename was opengl32.dll.
This might still work, but it lacks modern features, of course.

Weird. I can't imagine how long it would take to render a frame of any modern game on the CPU, given the amount of parallel work across texture sampling, geometry shaders, and pixel shader units, with a pixel shader running for every pixel. I'm surprised it even has a software implementation for pixel shaders. I wonder whether those are just interpreted on the fly like a scripting language, or compiled somehow to x64 code to run natively on Windows? Does it interpret the pixel shaders and compile/store them as DLLs to call on the fly?

Still, back to the original question: what exactly are you testing? Running in software mode wouldn't really verify anything, since your target is a GPU. If there is a bug when running on a GPU, you should probably fix the bug as-is instead of trying to reproduce it in software mode, which is a completely different setup. It's like fixing a bug only seen on Xbox by running on a PlayStation.

NBA2K, Madden, Maneater, Killing Floor, Sims

dpadam450 said:
I'm surprised it even has a software implementation version for pixel shaders.

I rather believe it did not. It was probably just OpenGL 1.0/1.1, fixed-function.
For some very simple games it was fast enough.

Maybe you remember the Windows screensavers with the pipes, the 3D clock, and so on. Those used the software GL implementation.


OpenGL itself doesn't do it. That's by design and it's a good thing.

OpenGL works independently of the rendering system. It gives you a bunch of function calls that exist on the system, and the library implementation passes them along to whatever drawing subsystem actually does the work. That can be a graphics card in the machine sitting next to you, something across the network on an X terminal, or whatever else. Basically: program → OpenGL calls → core library → drivers → hardware renderers.

Probably the closest to what you're looking for is the Mesa implementation. You would use their drivers, so instead of translating the library calls into instructions for graphics hardware, it feeds them to a software renderer. The software rasterizer isn't officially certified, but they do a lot of work to make it meet the standard.
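For a Mesa-based stack (Linux, or a Mesa3D build on Windows), the documented switch is an environment variable; a minimal sketch, assuming a Mesa driver is installed:

```shell
# Documented Mesa switches: never touch the hardware driver, and pick
# the llvmpipe software rasterizer explicitly
# (GALLIUM_DRIVER=softpipe selects the slower reference path instead).
export LIBGL_ALWAYS_SOFTWARE=1
export GALLIUM_DRIVER=llvmpipe

# Then run your program; to verify, glxinfo should now report
# "OpenGL renderer string: llvmpipe (LLVM ...)" instead of the GPU:
#   glxinfo | grep "OpenGL renderer"
```

This only affects processes launched from that shell, so your test runs can use software rendering while everything else keeps the GPU.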

The way to select the OpenGL feature set is via context construction. The pixel format and context attributes allow some degree of ‘customization’ of the created context. For example, on Windows there is the WGL_ACCELERATION_ARB attribute of the pixel format descriptor you can check. However, I'm of the same train of thought as the others, in that you probably won't find any implementation from the IHVs that supports or returns a pixel format with that attribute set to no acceleration.

If you put an opengl32.dll in the same directory as your executable, it will be linked instead of the system DLL. That means you can run your own opengl32.dll implementation, Mesa3D, or whatever, that doesn't use the GPU.

@sevenfold1 Thank you for your reply; that actually works. It seems carrying this DLL along with my program is the only solution….
