
Question on OpenGL

Started by May 19, 2015 04:02 AM
5 comments, last by TheChubu 9 years, 7 months ago

I have been reading up on OpenGL, and there are a few things I need some clarification on.

The book I am reading keeps talking about feeding data to OpenGL, or to the OpenGL context, but I am not exactly sure where OpenGL actually is. From my understanding, it's integrated into the hardware, but I do not know where. Is it safe to assume it's installed on the GPU?

My second question: where is the buffer memory located when it is allocated, in the hardware or in virtual memory? And if in hardware, where exactly? Also, are the buffer binding points located on the GPU?

Also:

    // Name (handle) of the buffer
    GLuint buffer;

    // Generates an unused buffer name
    glGenBuffers(1, &buffer);

    // Binds the buffer to its target, or buffer binding point
    glBindBuffer(GL_ARRAY_BUFFER, buffer);

    // Allocates and initializes the buffer's data store
    // ('data' is assumed to point to at least 1024 bytes of vertex data)
    glBufferData(GL_ARRAY_BUFFER, 1024, data, GL_STATIC_DRAW);

    // Describes the layout of vertex attribute 0 within the bound buffer
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);

How does glVertexAttribPointer know where the buffer object is located?

As you can see, I'm having trouble visualizing what's happening. Does anyone have any tips for understanding what happens at the hardware level, or any supplementary information that illustrates the inner workings? I look forward to your responses.


From my understanding, it's integrated into the hardware, but I do not know where. Is it safe to assume it's installed on the GPU?

No. It is part of the GPU driver; it is just a software wrapper, an interface (API), for accessing the hardware in a certain way. There are many software solutions for accessing the hardware: OpenGL, Mantle, and DirectX all access the GPU and provide many common, and sometimes different, ways to control it. It is much like the CPU: an operating system (OS) is just software that controls the CPU, and there are many OSes available, like Linux, MacOS, or Windows.

My second question: where is the buffer memory located when it is allocated, in the hardware or in virtual memory? And if in hardware, where exactly? Also, are the buffer binding points located on the GPU?

Both. The driver (the "OS of the GPU") is responsible for allocating the memory. Sometimes it is allocated directly on the video card, sometimes in the computer's main memory, and sometimes it is transferred between main and video memory. The APIs, e.g. OpenGL, provide different ways to access that memory. For instance, you can't access dedicated video memory directly, but you can command the driver to map a certain part of it, in the form of a buffer, into main memory, where you can access and manipulate it directly. Once you are done, you unmap it and the driver copies it back to video memory so that the GPU has access to the data again.
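A minimal sketch of that map/modify/unmap cycle (assuming buffer is an already-created buffer object of at least 1024 bytes, and vertexData is a placeholder name for your source array):

    // Ask the driver to map the buffer's data store into our address space
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    if (ptr)
    {
        // Write to the buffer as if it were ordinary memory
        memcpy(ptr, vertexData, 1024);

        // Unmapping hands the memory back to the driver, which may copy it
        // to video memory (or may have given us a direct pointer all along)
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }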


How does glVertexAttribPointer know where the buffer object is located?

In this case, you first tell OpenGL which buffer (i.e. which memory) you want to activate; this is called binding:


glBindBuffer(GL_ARRAY_BUFFER, buffer);

When you then call glVertexAttribPointer, the last parameter is interpreted as a byte offset into the buffer currently bound to GL_ARRAY_BUFFER. Passing 0 (nullptr) means the attribute data starts at the very beginning of that bound buffer:


 glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
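For example (the offset value here is made up for illustration), if a second attribute's data began 4096 bytes into the same bound buffer, you would pass that byte offset instead:

    // Attribute 1 reads starting at byte offset 4096 of the bound buffer
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (const void*)4096);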

Just to expand on that: this is why OpenGL is often described as one big global state machine, and it is one of the reasons many people think it is due for an overhaul. My understanding is that in the upcoming Vulkan, the first parameter to each entry point explicitly represents the state that the function call is going to change.
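A rough sketch of what "global state machine" means in practice (buffer names and sizes here are placeholders):

    // The same glBufferData call affects whichever buffer happens to be
    // bound to GL_ARRAY_BUFFER at the time -- the target is global state
    glBindBuffer(GL_ARRAY_BUFFER, bufferA);
    glBufferData(GL_ARRAY_BUFFER, sizeA, dataA, GL_STATIC_DRAW); // fills bufferA

    glBindBuffer(GL_ARRAY_BUFFER, bufferB);
    glBufferData(GL_ARRAY_BUFFER, sizeB, dataB, GL_STATIC_DRAW); // fills bufferB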

Hi,

You cannot access the hardware directly to manipulate it, because GPUs and other hardware speak machine/assembly language, which humans cannot understand (at least not to a practical extent). A human-friendly language is needed to tell the hardware what to do, but only indirectly. Between the hardware's language and your programming language sits a kind of interpretive layer, in this case called an API (Application Programming Interface). If two people spoke different languages and did not understand each other, you would need a third person as an interpreter, right? That is what the API does in the middle; OpenGL is that interpreter. Without some kind of interpretive system, programming would not work for you. (For the sake of the reader: APIs such as OpenGL and Direct3D accomplish basically similar things as interpretive systems.)

OpenGL has its own rules for accomplishing things. Added to this, each coding language has its own syntax and other procedures. OpenGL supports a nice-sized set of different coding languages, but it's best to become really familiar with one language before taking on a second. The more sophisticated the game, the greater the demand for a scripting language, i.e. a higher-level coding language used for better productivity when programming gameplay. In such complex games, another language is often used for the non-gameplay programming "underneath".

I'll leave it at that so the beginners don't get too confused and have something to digest.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, then the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer

Thanks guys for the responses.

In addition, I was wondering what the purpose of shader storage blocks is. SSBs feel redundant; don't uniform blocks and vertex attributes do the same thing?

I'm also having trouble understanding atomic memory operations. From my understanding, they prevent multiple shader invocations from overwriting the same variable at the same time?

Atomic counters are also a problem. I cannot for the life of me figure out what they are or what they're used for.


In addition, I was wondering what the purpose of shader storage blocks is. SSBs feel redundant; don't uniform blocks and vertex attributes do the same thing?

SSBs can have WAY more memory available than uniforms. For example, on some cards the maximum uniform array can be 16 KB while an SSB can be 16 MB (just an example).

If you're doing anything with compute shaders, SSBs are how you pass data in and out.
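As a sketch of that (ssbo, computeProgram, dataSize, and groupsX are placeholder names; the shader is assumed to declare a std430 buffer block with binding = 0):

    // Allocate storage for the SSBO
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, dataSize, nullptr, GL_DYNAMIC_COPY);

    // Attach it to indexed binding point 0, matching the shader's
    // "layout(std430, binding = 0) buffer ..." declaration
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

    // Run the compute shader
    glUseProgram(computeProgram);
    glDispatchCompute(groupsX, 1, 1);

    // Make the shader's writes visible before reading the results
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);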

I think, therefore I am. I think? - "George Carlin"
My Website: Indie Game Programming

My Twitter: https://twitter.com/indieprogram

My Book: http://amzn.com/1305076532



OpenGL supports a nice-sized set of different coding languages

Just a heads up: it only supports C. Everything else you see is a third-party binding.

SSBs can have WAY more memory available than uniforms.

That depends. On the latest and greatest AMD hardware, you can allocate a 2 GB UBO if you want.

SSBs feel redundant; don't uniform blocks and vertex attributes do the same thing?

Now, while UBOs can be 2 GB in size on some hardware, the guaranteed minimum is 16 KB, and the norm is 64 KB. That's where TBOs (texture buffer objects) and SSBOs come in. Vertex attributes are fine and fast when you need data per vertex, but there are many things that don't vary per vertex, say, a transform, or some material value.

That 'constant' or 'uniform' data is what you place in UBOs. Why use SSBOs if UBOs already work in a similar way? Well, it turns out SSBOs have more flexible memory layout options (I don't know the details; google "std430 layout" if you want to read up on it), and it also turns out that if you have 1 MB of constant data for your frame, you want to upload that 1 MB in as few memory transfers as possible. The transfer itself is cheap (remember, the PCIe bus is rated at gigabytes per second), but API overhead and latency aren't.
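A rough sketch of that "one big upload, then bind slices" idea (names, sizes, and offsets are made up; real offsets must respect GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT):

    // Upload the whole frame's constant data in a single transfer...
    glBindBuffer(GL_UNIFORM_BUFFER, frameUbo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, frameDataSize, frameData);

    // ...then point each uniform block at its slice of the buffer
    glBindBufferRange(GL_UNIFORM_BUFFER, 0, frameUbo, 0, 256);    // per-frame data
    glBindBufferRange(GL_UNIFORM_BUFFER, 1, frameUbo, 256, 1024); // per-material data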

With OpenGL it's tricky to find out where exactly the stuff you allocate really lives. OpenGL is defined to work as if everything were on the GPU, but you don't really know. To be fair, going by the spec alone you probably don't even know if you're working with a GPU at all (any experienced OpenGL 1 or 2 era coder can tell you about software fallbacks, for example). All the driver has to do is make it seem that way; it doesn't necessarily have to do it.

The good thing is that you have the ARB_debug_output and KHR_debug extensions, which let the driver report whatever it wants to you. For example, on my nVidia card, one of the things it tells you is where the buffers you allocate are stored.
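Setting that up takes only a few lines (this assumes a debug context and the GL 4.3 / KHR_debug entry points):

    // Called by the driver with errors, warnings, and informational notes,
    // e.g. where a buffer's storage was placed
    void GLAPIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                  GLenum severity, GLsizei length,
                                  const GLchar* message, const void* userParam)
    {
        fprintf(stderr, "GL debug: %s\n", message);
    }

    // During initialization:
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report on the offending call's thread
    glDebugMessageCallback(debugCallback, nullptr);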

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

This topic is closed to new replies.
