
multi-threaded glx app.

Started by August 01, 2004 04:59 AM
4 comments, last by d4rk74m4 20 years, 6 months ago
Okay, I'm a bit of a noob to multi-threading, so I probably don't know what I'm on about. Is it possible to have multiple threads rendering into their own drawables? I know that each thread would need its own rendering context, but would each thread be able to just render using its own context without taking a mutex and stopping other threads from rendering into their contexts? Okay... I think I confused myself there.
If you use different contexts, I don't think you need to worry about stopping the other threads...Only one way to find out though...[wink].
----------------------------------------------------"Plant a tree. Remove a Bush" -A bumper sticker I saw.
I have been messing around with this for a couple of days, and discovered that it segfaults unless each thread takes a mutex and calls glXMakeCurrent when it wants to render something, and then calls
glXMakeCurrent(dpy, None, NULL) before it releases the mutex.
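For reference, the pattern that avoids the crash for me looks roughly like this (render_frame and the lock parameter are just placeholder names for illustration; it assumes the same headers and setup as the test code further down):

void render_frame(Display *dpy, Window win, GLXContext ctx, pthread_mutex_t *lock)
{
	pthread_mutex_lock(lock);          /* serialise access to GLX */
	glXMakeCurrent(dpy, win, ctx);     /* bind this thread's own context */

	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	/* ... draw the scene here ... */
	glXSwapBuffers(dpy, win);

	glXMakeCurrent(dpy, None, NULL);   /* unbind before letting another thread in */
	pthread_mutex_unlock(lock);
}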

From what I've read in the gl manpages, each thread *should* be able to have its own context with no problems. There are three possible reasons I can think of for this not working:

1. The fglrx driver is dodgy (go figure)
2. NPTL
3. I have misunderstood something.


The context must be made by that thread as well.

To use the /same/ context with multiple threads you must do as you say - that is, only one thread can own the context at a time (and glXMakeCurrent(dpy, None, NULL) should release it).

If you post code I could try it on my GeForce (my Radeon is out-of-commission at the moment).
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
The threads do create their own contexts. But thanks heaps, it had never occurred to me to test the code with an nvidia card. I'll pull my old TNT2 out when I get home and give it a go. Hopefully I'll have better results.
Okay, I believe the problem to be ATI's fglrx driver (well, at least 3.9.0, haven't checked older versions yet). Using my TNT2 with the 6106 (I think that's right) driver, it works fine.

I'll post the test code anyway in case I'm missing something (excuse the messiness of it):

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>
#include <GL/glu.h>

Display *dpy = NULL;
int screen;
XVisualInfo *vi = NULL;

static int attr[] = {
	GLX_RGBA,
	GLX_DOUBLEBUFFER,
	GLX_RED_SIZE, 4,
	GLX_GREEN_SIZE, 4,
	GLX_BLUE_SIZE, 4,
	GLX_DEPTH_SIZE, 16,
	None
};

/* Thread function: each thread creates its own window and its own
   GLX context, then renders a spinning triangle with no locking. */
void *t(void *arg)
{
	Window win;
	GLXContext ctx;
	GLfloat rot = 0.4f;

	win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen), 0,
		0, 100, 100, 1, WhitePixel(dpy, screen),
		BlackPixel(dpy, screen));
	XMapWindow(dpy, win);

	ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);
	glXMakeCurrent(dpy, win, ctx);

	glShadeModel(GL_SMOOTH);
	glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
	glClearDepth(1.0f);
	glEnable(GL_DEPTH_TEST);
	glDepthFunc(GL_LEQUAL);
	glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
	glViewport(0, 0, 100, 100);
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	gluPerspective(45.0f, 1, 0.1f, 100.0f);
	glMatrixMode(GL_MODELVIEW);

	while(1) {
		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
		glLoadIdentity();

		glTranslatef(0.0f, 0.0f, -6.0f);
		glRotatef(rot, 1.0f, 0.0f, 0.0f);
		glBegin(GL_TRIANGLES);
			glVertex3f( 0.0f, 1.0f, 0.0f);
			glVertex3f(-1.0f, -1.0f, 0.0f);
			glVertex3f( 1.0f, -1.0f, 0.0f);
		glEnd();
		glFlush();
		glXSwapBuffers(dpy, win);

		rot += 0.05f;
		pthread_yield();
	}

	return NULL;
}

int main()
{
	pthread_t t1_info;
	pthread_t t2_info;

	/* Tell Xlib we're going to call it from multiple threads. */
	XInitThreads();
	dpy = XOpenDisplay(NULL);
	screen = DefaultScreen(dpy);
	vi = glXChooseVisual(dpy, screen, attr);

	pthread_create(&t1_info, NULL, t, NULL);
	pthread_create(&t2_info, NULL, t, NULL);
	pthread_join(t1_info, NULL);
	pthread_join(t2_info, NULL);

	return 0;
}
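(For anyone wanting to try it, it should build with something like gcc test.c -o test -lGL -lGLU -lX11 -lpthread, though the exact libraries may vary with your setup.)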


Edit: update. I can actually use this now with the fglrx driver, but I have to disable direct rendering. That's not so much of a hassle though; at least I'll be able to play games AND work on my projects without swapping video cards.

To do this without changing the code, all I needed to do (thanks to someone on the Rage3D forums) was set the LIBGL_ALWAYS_INDIRECT environment variable to 1.
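If you'd rather not set it in the shell every time, the same thing can presumably be done from inside the program with setenv, as long as it runs before any X/GLX calls (I'm assuming libGL reads the variable when the connection is set up):

int main()
{
	/* Force indirect rendering; assumed to be read by libGL at connection
	   setup, so it must run before XOpenDisplay()/glXChooseVisual(). */
	setenv("LIBGL_ALWAYS_INDIRECT", "1", 1);

	XInitThreads();
	dpy = XOpenDisplay(NULL);
	/* ... rest of main() as in the test code above ... */
	return 0;
}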

[Edited by - d4rk74m4 on August 8, 2004 6:45:35 AM]

This topic is closed to new replies.
