
freeglut multithreaded on AMD GPUs

Started by April 21, 2020 01:12 AM
14 comments, last by taby 4 years, 9 months ago

My code is at: https://github.com/sjhalayka/julia4d3

Basically, I took the multithreaded approach to keep my interface responsive while I calculate stuff. Now, one must create a new context inside the thread function, which I do. The thing is, it all runs on Intel no problem. On AMD, it works intermittently. Do you have any experience with freeglut and multithreading?

freeglut is a framework based on callbacks. It is for beginners. I've seen your work. You are not a beginner. Stop using that toy framework already.

🙂🙂🙂🙂🙂 ← The tone posse, ready for action.


Thanks for your time.

I have a question: do you have an nVidia graphics processor? I'd like to know how the code runs on nVidia.

As for the framework, what do you suggest? DX12? The main reason why I'm stuck with GLUT is because that's what's needed for the GLUI graphical user interface library.

Works fine on Intel. If it works fine on nVidia, then the problem is the AMD driver. How does one talk to AMD about multithreaded compute shader support?

P.S. This app that I'm working on is Julia 4D 3, the sequel to Julia 4D 2 and Julia 4D 1. Both 2 and 1 used Direct3D 9.0c. I wanted to go with OpenGL so that it would compile, link, and run on Linux too.

test nVidia test

Why are all your ‘nVidia’s hyperlinked?


Freeglut runs on nvidia and is used in multithreaded applications. "Multithreaded" is not tied to the windowing api.

That said, i am not sure if the deprecated freeglut supports multiple opengl contexts if you want to go that way, but glfw for instance does. It also gives you much better control over the main loop.

I would also say, if a tool forces me to use a deprecated version of an API (here: GLUI and GLUT), i would not use it. There are enough alternatives that work without such dependencies. Also, if i understand it right, there are still unresolved problems and your code is a patchwork of versions. Wouldn't it make sense to address these points on the checklist first, before thinking about the user experience? The later OpenGL versions give you much better control over data, synchronization and all that. You don't have to create new buffers every time the data changes.

taby said:
As for the framework, what do you suggest? DX12? The main reason why I'm stuck with GLUT is because that's what's needed for the GLUI graphical user interface library.

I've never used a framework. Opening a window and creating a GL context, getting input, or creating threads is little work using Win32 (a matter of a few hours), and porting my abstraction for all this to another OS would be little work too.

GLUI looks as outdated as GLUT does. Sooner or later you have to get rid of both. Again, take a look at ImGui: https://github.com/ocornut/imgui

It includes examples using multiple frameworks and gfx APIs: GLFW, SDL, pure Win32, etc. It's probably easy to pick one you like from there.


taby said:
How does one talk to AMD about multithreaded compute shader support?

https://community.amd.com/community/devgurus/graphics_programming

I'm not sure if your issue is related to GLUT.

Once i had an issue of compute shaders producing just garbage on AMD. Removing all deprecated stuff (glVertex(), etc.) solved the issue. I'd never have figured this out myself; somebody here on the forum gave me the tip.

@taby I bet it is a problem with your multithreading approach. AMD is perfectly fine.
Intel is more forgiving of bad synchronization than AMD.
When my synchronization is wrong (my own fault as a coder), Intel shows fewer than 10 wrong results out of hundreds of millions of computations, while running the same (same wrong) code on AMD shows thousands of wrong results. I guess it has to do with the way AMD and Intel manage workloads internally. Correct code should not show a single wrong result on Intel, AMD, or NVidia. Depending on the task you perform, it could be easier to catch these few wrong results on Intel, but I bet they are there.

I bet it is a problem of multithreading(or Freeglut?) and the behavior would repeat on NVidia. The GPU drivers are fine.

(My tests are unrelated to the differences in floating point math between vendors. I run the wrong code twice on Intel and then compare the two results. If the results are not the same, then there is a synchronization problem, and on Intel the wrong computations that show up are very few. I run the same wrong code twice on AMD and the two results differ in thousands of computations.)

Thank you all so much for your expertise.

Dear @nikito: the threading is fine. I even decided to wrap the drawing and compute shading sections in mutex locks/unlocks, so that they don't operate at the same time (I took that out, because it made no difference). Basically, freeglut randomly complains that I didn't attach a display function to the second window. That's absurd, because I do it in literally the next statement after the window is created. I get no such behaviour on Intel.
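The mutex wrapping was just the usual std::lock_guard pattern around the two sections, roughly like this (a sketch, not the actual repo code):

```cpp
#include <mutex>

std::mutex gpu_mutex;
int shared_state = 0; // stands in for the buffers shared by draw and compute

// Called from the freeglut display callback on the main thread.
void draw(void)
{
    std::lock_guard<std::mutex> lock(gpu_mutex);
    shared_state += 1; // read/draw the shared buffers
}

// Called from the worker thread.
void compute(void)
{
    std::lock_guard<std::mutex> lock(gpu_mutex);
    shared_state += 1; // fill the shared buffers
}
```

Since each function takes the same lock, the two sections can never run at the same time, which is all the wrapping was meant to guarantee.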

P.S. Have you ever compiled freeglut? I tried to compile the source, but it complained about missing functions. Anyone know how to overcome that kind of thing? It's like it's missing half the source.

GPU has its own sync. CPU has its own. If i were you, i would just get rid of freeglut.

I'm sorry, i can't help.

