Supposedly NVIDIA added the __GL_FORCE_GENERIC_CPU environment variable so that Valgrind would work with the NVIDIA OpenGL library. I got this from the Valgrind FAQ:
quote:
NVidia also noticed this it seems, and the "latest" drivers (version 4349, apparently) come with this text
DISABLING CPU SPECIFIC FEATURES
Setting the environment variable __GL_FORCE_GENERIC_CPU to a non-zero value will inhibit the use of CPU specific features such as MMX, SSE, or 3DNOW!. Use of this option may result in performance loss. This option may be useful in conjunction with software such as the Valgrind memory debugger.
Set __GL_FORCE_GENERIC_CPU=1 and Valgrind should work. This has been confirmed by various people. Thanks NVidia!
I checked that this text is indeed in NVIDIA's FAQ for their latest drivers (the drivers I'm using, btw).
I'm setting the environment variable like this:
__GL_FORCE_GENERIC_CPU=1; export __GL_FORCE_GENERIC_CPU
and when I type 'export' I see this at the bottom:
declare -x __GL_FORCE_GENERIC_CPU="1"
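(For what it's worth, I believe setting it inline on the valgrind command line should be equivalent, and it rules out the export somehow not propagating; ./landscape here is just a stand-in for my engine's binary:)

__GL_FORCE_GENERIC_CPU=1 valgrind ./landscape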
I've done the same thing with other environment variables the library uses (vsyncing, etc.) and those work. Yet when I run my landscape engine under Valgrind, I still get the same error I do without the variable set:
valgrind: vg_ldt.c:167 (vgPlain_do_useseg): Assertion `(seg_selector & 7) == 7' failed.
sched status:
Thread 1: status = Runnable, associated_mx = 0x0, associated_cv = 0x0
==14934== at 0x40267641: __nvsym18200 (in /usr/lib/tls/libGL.so.1.0.4496)
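If it helps anyone answer: I assume the variable's presence inside the running process could be double-checked from /proc in another terminal, something like this (the pgrep pattern 'landscape' is a placeholder for whatever is actually on the command line):

tr '\0' '\n' < /proc/$(pgrep -f landscape | head -1)/environ | grep __GL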
Has anyone gotten Valgrind to work with an app that uses the NVIDIA OpenGL library? Does anyone have any ideas why I can't seem to get mine to work?