Lemonade101 said:
Of course i can't use shaders when it comes with the software rasterize, but it's not a big problem right now.
Sure you can! There have been plenty of software rasterizers that support shaders and various OGL features.
What you can't do is use the very old opengl32.dll library, which only exposes OGL 1.1, while still expecting it to automatically make OGL 4.6 calls. If you only link against the OGL 1.1 library, you're limited to March 1997 functionality.
IIRC even under 1.1 there was still the ability to query function pointers by name, through wglGetProcAddress() on Windows (or glXGetProcAddress() under X11), with the Win32 API function GetProcAddress() on opengl32.dll covering the 1.1 core entry points. So even if you did link against a 1.1 library, but the underlying drivers supported newer functions and you had code that followed the calling conventions and knew the constants, you could still request the pointers to the full modern feature set that the drivers support. It's basically the same way extensions have always worked: asking for a pointer to a function if it exists and then using it. The ‘man-in-the-middle’ OpenGL library doesn't need to directly implement the function; all it needs to do is hand over whatever entry points are asked for to connect the two sides. If you happen to use a newer library like OGL 1.5 or OGL 2.1 or 3.3 or 4.2, or the current 4.6, the core functions are automatically bound for you, but I believe they are all still accessible through their address via function name lookup.
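To make that concrete, here is a minimal sketch (Windows, C) of that lookup pattern. The PFNGLGENBUFFERSPROC typedef is written out by hand rather than pulled from glext.h so the snippet stays self-contained, and the sentinel-value check reflects a known quirk of some drivers' wglGetProcAddress:

```c
// Sketch: resolve a modern GL entry point by name at runtime.
// Assumes a current GL context is already bound on this thread.
#include <windows.h>
#include <GL/gl.h>

// glGenBuffers is a GL 1.5 function, absent from the 1.1 headers,
// but reachable by name if the driver supports it.
typedef void (APIENTRY *PFNGLGENBUFFERSPROC)(GLsizei n, GLuint *buffers);

static void *get_gl_proc(const char *name)
{
    // wglGetProcAddress resolves extension / post-1.1 functions...
    void *p = (void *)wglGetProcAddress(name);

    // ...but some drivers return NULL or small sentinel values for the
    // 1.1 core functions, which are exported directly by opengl32.dll.
    if (p == NULL || p == (void *)1 || p == (void *)2 ||
        p == (void *)3 || p == (void *)-1)
    {
        // opengl32.dll is already loaded, so just grab its handle.
        HMODULE module = GetModuleHandleA("opengl32.dll");
        p = (void *)GetProcAddress(module, name);
    }
    return p;
}

// Usage: bind the pointer once, then call through it like any function.
PFNGLGENBUFFERSPROC pglGenBuffers;

void load_pointers(void)
{
    pglGenBuffers = (PFNGLGENBUFFERSPROC)get_gl_proc("glGenBuffers");
}
```

This is essentially what loader libraries like GLAD or GLEW do for you for every function at once.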
The Mesa implementation handles shaders just fine in software, running them through an intermediate representation on various systems, although the docs say they've only got about 80% coverage when rasterizing in software only, with some of the more exotic, rarely used functions not implemented. They've got enough to pass their validation suite, which pulls from many hundreds of games and tools.
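If you want to try Mesa's software path yourself, a small sketch: LIBGL_ALWAYS_SOFTWARE is a documented Mesa environment variable, and it has to be set before any context exists:

```c
// Sketch: force Mesa's software rasterizer (llvmpipe/softpipe) from
// within the program. Shaders still compile and run, just on the CPU.
#include <stdlib.h>

int main(void)
{
    // Must happen before the GL context is created. setenv is POSIX;
    // Windows builds would use _putenv_s instead.
    setenv("LIBGL_ALWAYS_SOFTWARE", "1", 1);

    /* ... create the window and GL context as usual from here ... */
    return 0;
}
```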
Lemonade101 said:
i'm just curious about why opengl doesn't provide some functions that allow me to enumerate the devices on the computer.
Because of its design. OpenGL works through graphics contexts that are portable and cross-platform. Platform-specific commands and functionality are outside the GL.
Part of that design is why for decades OGL was as portable as it was, working on Windows, Linux, Mac, game consoles like the PlayStation (although they mangled some of it through PSGL), through feature phones and smart phones and tablets. OGL doesn't care about the operating system or software stack, or whether the graphics are done by hardware or software. As the spec describes, it provides an abstract "state machine that controls a set of drawing operations", and it is up to the programmers and the GL implementors to implement whatever they need on their respective sides of that state machine.
It is designed to be portable, and it accomplishes that quite well. Graphics systems shifted during the 1990s; before then it was quite common to have display terminals that were separate from the machines running the software. It still is in the Unix world, though less common in the Windows world. The generic state machine model didn't need to live on any particular machine or have any particular implementation.
The graphical display you see in front of you isn't necessarily where the program is running. As a simple example, you may have a program running on a supercomputer in the lab, but want to display the graphics on your local machine. Several graphical systems handle this, including the X Window System used by Linux, by abstracting the graphical server away. Your graphical display would be your desktop machine, and the graphical calls made by the program might be executed on the hardware right in front of you, or might be happening remotely. It is left up to the person who owns the system to decide.
For an OpenGL program, it's about the window system that owns the context. If you've got an older SLI configuration with two graphics cards and the window's context is on one card, then that's where OpenGL will do the work. If you're on a remote display, like running an X terminal across a network, then the computer running the program can be in the lab while your graphical display and OpenGL context are on your local machine.
You can get information about the context, like the context's OpenGL version, the vendor string, the renderer name, and the shading language version.
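All of those come straight from glGetString() once the context is current; a quick sketch:

```c
// Sketch: query the strings describing the context you were handed.
// Note these describe that context, not every device in the machine.
#include <stdio.h>
#include <GL/gl.h>

// GL 2.0 enum; a bare 1.1 header may not define it.
#ifndef GL_SHADING_LANGUAGE_VERSION
#define GL_SHADING_LANGUAGE_VERSION 0x8B8C
#endif

void print_context_info(void)
{
    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("GLSL:     %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
}
```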
Your program can create multiple OpenGL contexts, and since each could theoretically be associated with different rendering hardware, each has its own OpenGL objects like shaders and buffer objects.
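As a sketch of what that separation means on Windows (wglShareLists() is a real WGL call; the rest is illustrative scaffolding):

```c
// Sketch: two contexts on the same HDC each start with their own
// object namespace. wglShareLists() is the Win32-specific opt-in to
// sharing; despite the name, drivers share textures, buffers, and
// shader objects through it too.
#include <windows.h>

void make_shared_pair(HDC hdc)
{
    HGLRC first  = wglCreateContext(hdc);
    HGLRC second = wglCreateContext(hdc);

    // Without this call, a buffer created while `first` is current
    // simply doesn't exist in `second`.
    wglShareLists(first, second);
}
```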
OpenGL doesn't technically exist until after you've created the context; that's part of the OpenGL spec. You create that context in a platform-specific way, which could be a Windows call, an X11 call, an Android system call, a PlayStation graphics subsystem call, whatever. As a result, it isn't OpenGL that enumerates the devices on the system; that's not part of the cross-platform library. Instead, you use whatever libraries are provided on your system (like wglCreateContext() on Windows) with parameters appropriate for whatever system you're on to create your graphics context. Only after that call succeeds does OpenGL actually ‘exist’ as far as the spec is concerned. On Windows, if you need to tie it to a specific device, that's a platform-specific step: open an HDC on \\.\DISPLAY1 and pass that HDC to wglCreateContext(), which will then create the OpenGL context targeting DISPLAY1.
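A sketch of that flow in C, assuming \\.\DISPLAY1 is the adapter you want (EnumDisplayDevices, CreateDC, and the pixel format calls are all plain Win32, not OpenGL; error handling omitted for brevity):

```c
// Sketch: enumerate displays with Win32, open a device context on a
// specific one, then bring OpenGL into existence on it.
#include <windows.h>
#include <stdio.h>

void list_displays(void)
{
    // Pure Win32 enumeration; OpenGL is not involved yet.
    DISPLAY_DEVICEA dd = { .cb = sizeof(dd) };
    for (DWORD i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); ++i)
        printf("%s  (%s)\n", dd.DeviceName, dd.DeviceString);
}

HGLRC create_context_on(const char *device) // e.g. "\\\\.\\DISPLAY1"
{
    // An HDC bound to that specific display device.
    HDC hdc = CreateDCA(device, NULL, NULL, NULL);

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

    // Only now does OpenGL exist, targeting the chosen display.
    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);
    return ctx;
}
```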