
Next Generation OpenGL

Started by August 11, 2014 03:37 PM
75 comments, last by Sik_the_hedgehog 10 years ago

Curious on Nvidia's reaction to all this.

http://techreport.com/news/26922/amd-hopes-to-put-a-little-mantle-in-opengl-next

They shouldn't have any 'reaction' to it; it's not like they are saying "it will be Mantle", just that the Mantle designs are free to use... which makes sense.

On the desktop, hardware has largely got to a point where there are no crazy ideas; devices have command queues, those queues consume buffers, and those buffers contain work.

All you need is a way to reserve memory buffers, a way to create commands and a way to kick the work - the APIs don't need to be these huge sprawling things, not at the lowest level.
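To put that in concrete terms, here is a rough C++ sketch of the kind of surface I mean. Every name in it (Device, Buffer, CommandList, Fence and so on) is made up for illustration; it is not Mantle, D3D12 or any real API, just the "buffers, commands, kick the work" shape:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical types only -- the point is the shape, not the spelling.

// A chunk of GPU memory the application manages itself.
// Assumed to be persistently mapped so the CPU can fill it directly.
struct Buffer {
    void*    cpu_ptr     = nullptr;
    uint64_t gpu_address = 0;
    size_t   size        = 0;
};

// An opaque, pre-recorded list of work (draws, dispatches, copies).
struct CommandList {
    std::vector<uint32_t> packets;
};

// Something the application can wait on to know a submission finished.
struct Fence {
    uint64_t value = 0;
};

class Device {
public:
    virtual ~Device() = default;

    // A way to reserve memory buffers.
    virtual Buffer createBuffer(size_t bytes) = 0;

    // A way to create commands...
    virtual CommandList beginCommands() = 0;
    virtual void barrier(CommandList& cmds, const Buffer& resource) = 0;

    // ...and a way to kick the work.
    virtual void submit(const CommandList& cmds, Fence* signalOnCompletion) = 0;
    virtual void wait(const Fence& fence) = 0;
};
```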

Pretty much everything beyond that is just faffing about which isn't needed 99% of the time. Hazard tracking, for example, shouldn't be something the driver needs to spend so much time doing; the whole lock-discard thing which requires all that tracking is very much an outcome of how the APIs were designed. Game frames are very regimented things, and hazard tracking is pretty easy for the application to do itself once the driver gets out of the damned way.
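For example, in a typical frame loop the application can do all of that tracking itself with a couple of fences. Here is a minimal sketch, reusing the hypothetical Device/Buffer/Fence types from the sketch above (again, none of this is a real API):

```cpp
#include <array>
#include <cstddef>

// Application-side hazard tracking for a regimented frame loop. Keep N
// copies of the per-frame dynamic data and one fence per copy; never touch
// a copy until its fence says the GPU has finished with it. That is the
// whole "lock-discard" dance, done in user code.
constexpr int kFramesInFlight = 2;

struct FrameResources {
    Buffer dynamicData;   // per-frame constants, streamed vertices, etc.
    Fence  frameDone;     // signalled when the GPU finishes this frame's work
};

class FrameRing {
public:
    FrameRing(Device& device, size_t bytesPerFrame) : device_(device) {
        for (auto& frame : frames_)
            frame.dynamicData = device.createBuffer(bytesPerFrame);
    }

    // Call at the start of a frame. The only wait happens when the CPU gets
    // more than kFramesInFlight frames ahead of the GPU.
    FrameResources& beginFrame() {
        FrameResources& frame = frames_[index_];
        device_.wait(frame.frameDone);   // assumed to be a no-op for an unused fence
        return frame;
    }

    // Call after submitting the frame, with the fence that submit() signalled.
    void endFrame(const Fence& signalled) {
        frames_[index_].frameDone = signalled;
        index_ = (index_ + 1) % kFramesInFlight;
    }

private:
    Device& device_;
    std::array<FrameResources, kFramesInFlight> frames_{};
    int index_ = 0;
};
```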

Heck, the only reason an API like this needs a dedicated graphics command queue is simply because current hardware has one; give it a generation or two and the Graphics Command Processor will likely become just another command processor on the front end.

Give me a way to create buffers.
Give me a way to queue work.
Give me a way to insert barriers.
Then get the frack out of my way.

Hobbyists might want a layer on top to automagically sort things out, so they can continue to live in their easy world; however, a low-level API should not be hobbled because one group wants it to be 'easy'. The low level should be fast and simple - you can layer a more 'traditional' GL-like setup back on top of it, but it should be just that - a layer.
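As a toy illustration of what "just a layer" means, here is a sketch of an old-school, immediate-style draw call built entirely on top of the hypothetical low-level interface above. All names are invented; the point is that the convenience lives completely in user code:

```cpp
#include <cstddef>
#include <cstring>

// A toy "traditional" layer sitting on top of the hypothetical Device above.
// From the outside it looks like an old immediate-style call; underneath it
// is nothing but the three low-level steps, so it needs no driver support.
class EasyGL {
public:
    explicit EasyGL(Device& device) : device_(device) {}

    // Convenience call that hides buffer management and submission.
    void drawTriangles(const float* vertices, size_t bytes) {
        Buffer vb = device_.createBuffer(bytes);        // 1. create a buffer
        std::memcpy(vb.cpu_ptr, vertices, bytes);       //    fill it (persistently mapped)
        CommandList cmds = device_.beginCommands();     // 2. record the work
        device_.barrier(cmds, vb);                      //    make the upload visible
        device_.submit(cmds, &lastSubmit_);             // 3. kick it
    }

    void finish() { device_.wait(lastSubmit_); }        // old-school glFinish analogue

private:
    Device& device_;
    Fence   lastSubmit_;
};
```

Nothing in there is anything the driver needs to know about; that is all "layer on top" means.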

Honestly, if OpenGL Next doesn't look like Mantle and D3D12 concept-wise, then they will have failed.
My biggest gripe with OpenGL is that the specifications are standard, but the implementations of the standards are vendor-specific...
Personally, I hope that "GL Next" is just a standardized Mantle clone, and then a completely standard OpenGL implementation is created on top of "GL Next", meaning that GL would finally, actually be portable in practice!

This is my current take on the GL Next proposal.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Waiting for the day when GPUs ditch rasterization in favor of raytracing, so all the current graphics APIs have to be thrown away as they'll become useless :P

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

Waiting for the day when GPUs ditch rasterization in favor of raytracing, so all the current graphics APIs have to be thrown away as they'll become useless :P

Why would they become useless? Sure, a GPU raytracer looks completely different on the inside, but why does the API on the outside have to be radically different?

They would likely add a different shader stage (called "iridescence/reflection/refraction shader" or the like) and either remove or repurpose the pixel shader. Spheres would likely be added as first-class primitives for obvious reasons, but the rest would more or less remain the same. You still need to bind buffers and upload vertices to the GPU when you do raytracing. You still need textures, and you still need to define which vertices go together into one primitive, etc. Yes, the server would need to cache the whole geometry, but that's not your concern.

The pipeline would evaluate screen fragments much like the pixel shader does today when drawing a fullscreen quad, intersect the ray with the closest geometry instead of rasterizing all triangles, and invoke the whatever-you-call-it shader, then do another ray intersection if gl_RayOut[] has been written to, or blend the color value otherwise. It is not that much different really (only the parts that you don't see anyway are).
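To make that loop concrete, here is a tiny CPU-side sketch of the per-fragment flow being described (closest hit, invoke the hit shader, optionally follow a secondary ray). It is purely illustrative - names like shadeHit are invented stand-ins, it is not a proposal for the API itself, and spheres are used only because their intersection test is short:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray    { Vec3 origin, dir; };
struct Sphere { Vec3 center; float radius; Vec3 color; };

// Closest positive hit distance along the ray, if any (assumes r.dir is normalised).
static std::optional<float> intersect(const Ray& r, const Sphere& s) {
    Vec3  oc   = r.origin - s.center;
    float b    = dot(oc, r.dir);
    float c    = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;
    float t = -b - std::sqrt(disc);
    return (t > 0.0f) ? std::optional<float>(t) : std::nullopt;
}

// Stand-in for the hypothetical per-hit shader stage: returns a surface color
// and could write a follow-up ray (the gl_RayOut idea from the post above).
static Vec3 shadeHit(const Sphere& s, bool& spawnedSecondary, Ray& /*secondaryOut*/) {
    spawnedSecondary = false;   // a real shader might emit a reflection ray here
    return s.color;
}

// One fragment's worth of work: closest-hit search, shade, maybe follow secondary rays.
static Vec3 traceFragment(Ray ray, const Sphere* scene, int count, int maxBounces) {
    Vec3 result{0, 0, 0};
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        const Sphere* closest  = nullptr;
        float         closestT = 1e30f;
        for (int i = 0; i < count; ++i) {                 // closest-hit search
            if (auto t = intersect(ray, scene[i]); t && *t < closestT) {
                closestT = *t;
                closest  = &scene[i];
            }
        }
        if (!closest) break;                              // missed everything: keep the color as-is
        bool again = false;
        result = shadeHit(*closest, again, ray);          // invoke the hit shader
        if (!again) break;                                // no secondary ray written
    }
    return result;
}
```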


Trolling succeeded \o/

But even then I was a bit serious. Current APIs still make several assumptions regarding rasterization, for example stuff like draw order (raytracing will completely trash this assumption). You may think that doesn't matter in practical scenarios, but the APIs still demand draw order to be respected because it can affect the resulting image depending on what you're doing.

Also you'll have to think about never using the depth buffer (probably it will still be there in case you want to render the depth for computation reasons, but it won't be used for rendering anymore). Oh, and you'll have to stick to a single pixel shader for the entire rendered image, since otherwise you break the massive SIMD-style parallelization used in GPUs. You can emulate the old behavior on such hardware, but it'll perform horribly in comparison.

For the record, compute APIs would be safe from such a hardware change, since they only need the parallelization part of the GPU (which would remain intact).

Also I'm not sure anybody would bother adding spheres as a primitive. Like, they're only good at being spheres, and usually you need something other than a sphere. May as well not bother with it and make the hardware easier to design.

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

We already have a ray-tracing API; it's called the compute shader :P :D

Seriously though, ray-tracing hardware already exists in consumer devices, accessible via the OpenRL API.


Seriously though, ray-tracing hardware already exists in consumer devices, accessible via the OpenRL API.

PowerVR Ray Tracing is a revolutionary technology that brings lightning fast ray tracing to the world's leading mobile GPU.
Oh, the pun! PowerVR, you're killing me!

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Okay, here is another great opportunity for them to make a hit version of OpenGL (God! I miss the old days). There is next-generation hardware coming, and other up-to-date API versions are coming soon to put a fire under their seats.

With the new X99 chipset, we are waiting for much faster motherboards and, specifically, chips (bring on the Haswell-E :) that I've been waiting a decade to have!) and DDR4 RAM (I thought we would be on DDR5 by now!). Some are saying that overall, computers will be about 50% faster in the next generation.

Will Khronos target that??? I believe they will miss the boat and it will sail without them once again, but I really would like to see OpenGL get with the times.

This might be their last chance to get in the race before they are left on the hardware junk heap forever (where OpenGL is king of the hill! LOL), still telling everyone that all you have to do is wait for their next update, which will run on a fraction of the machines - LOL.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer

This topic is closed to new replies.
