
Next Generation OpenGL

Started by August 11, 2014 03:37 PM
75 comments, last by Sik_the_hedgehog 10 years, 2 months ago
The SIGGRAPH 2014 slides have been released; PDF here

Of particular note is page 72:

- compatibility break with OpenGL
- start from first principles

- clean modern arch
- multi-thread/core friendly
- greatly reduced cpu overhead
- arch neutral - support for tile-based as well as direct renderers
- predictable performance
- improved reliability and consistency between implementations

- explicit control - app tells driver what it wants


The first point is the most important for me; breaking GL compat means it won't be dragged down by legacy, so yay!
How about also killing the finite state machine and the extension hell?
And a "standard" runtime debugging layer?

Hope this gets imported into OpenGL ES, or I'll never touch the Android SDK again (last time I did, it made me sick and insane for the entire summer of 2012)...

OT: the first previews of Windows 9 and SDK 9 should arrive in September/October this year; hope to see the D3D12 preview too.
"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

If they take a few pages out of the Mantle book, then hopefully the finite state machine will be going away.

I don't see Khronos porting any of this new API into OGLES - the new API will be for desktop and mobile so if anything "GL Next" will be made available directly on Android.

So, hopefully it will not be a new Longs Peak. Good news.
"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

Interesting, these slides say that gl_CullDistance discards the whole primitive if any one vertex has a negative value (which is exactly what I'd expect, too... otherwise, if all of them need to be negative, it's just the same as clipping).

The specification, on the other hand, says "for any enabled cull half-space is negative for all of the vertices", as do the online manpages.
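
For concreteness, here's roughly what writing a cull distance looks like, with the GLSL as a C string (the cullPlane uniform is my own example, not from the slides or the spec):

/* Sketch of gl_CullDistance in a GLSL 4.50 vertex shader. Under the
   spec's wording a triangle is culled only if this value is negative at
   ALL three vertices; under the slides' wording, ANY negative vertex
   culls it. */
const char *vs_source =
    "#version 450 core\n"
    "layout(location = 0) in vec4 position;\n"
    "uniform mat4 mvp;\n"
    "uniform vec4 cullPlane;\n"
    "void main() {\n"
    "    gl_Position = mvp * position;\n"
    "    gl_CullDistance[0] = dot(position, cullPlane);\n"
    "}\n";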

So that begs the question: which one is it, then?

That's easy: NVidia one way, Intel the other, AMD crashes.

Fruny: Ftagn! Ia! Ia! std::time_put_byname! Mglui naflftagn std::codecvt<eY'ha-nthlei!,char,mbstate_t>


Or it just never gets implemented as it should be - much like how GL_CLAMP and GL_CLAMP_TO_EDGE ended up being the same in most drivers (they're supposed to act differently when using texture borders, which practically nobody used).
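
For reference, the difference the drivers were ignoring looks like this in sampler setup (texture creation omitted; tex is assumed to be a valid texture object):

#include <GL/gl.h>

/* GL_CLAMP is supposed to blend toward the border color when coordinates
   fall outside [0,1]; GL_CLAMP_TO_EDGE is supposed to repeat the edge
   texels. In practice most drivers treated both as GL_CLAMP_TO_EDGE. */
void set_legacy_clamp(GLuint tex)
{
    const GLfloat border[4] = { 1.0f, 0.0f, 0.0f, 1.0f }; /* a red border to make the difference visible */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); /* per spec: red bleeds in at the edges */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); /* in most drivers: edge texels repeat instead */
}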

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

Question: why do we suddenly need, in 2014, much closer access to hardware? When hardware has been getting faster at a crazy rate and phones have the same 3D power as gaming rigs used to, why did someone decide "we need a few percent more by completely redesigning all the drivers and APIs"?

If this were happening 10 years ago it would make sense to me, but these days we have such an abundance of GPU power, and it's increasing faster than you can keep up with!

It's largely about the CPU side, not GPU power.
Single core is dead - GL and D3D are broken by design in this area. Games consoles have allowed us to drive the GPU efficiently from many threads, but no other devices have, thanks to these legacy APIs.

The APIs are heavyweight. How can it make sense that we spend 8ms of CPU time just on *generating commands*, so that the massively powerful GPU can then consume those commands in 16ms... We shouldn't be wasting any time on preparing commands.
The equivalent would be if Windows/Linux were written in Java -- "why should we care, CPUs can handle it" some might say, but IMHO it's crazy to endure 100%+ inefficiency for no reason.

If we can make CPU-side graphics drivers perform 5x faster, why not embrace that? If someone promised to make our physical CPU hardware that much faster, we'd be all over it.

Another reason is that OSes used to use software rendering; modern OSes now draw themselves (and all your apps) using the GPU. The day-to-day average workload of GPUs has shot up dramatically, leading to huge changes to the way that commands are submitted to them and how they can be virtualized.

Within games, this generation the GPU isn't going to be the realm of just the graphics programmer either. The GPU has replaced the PS3's SPUs, which ran a lot of physics and AI jobs last gen. Now we're going to have console gameplay programmers writing compute jobs that interleave asynchronously with graphics workloads (putting graphics on a high-latency queue and these gameplay jobs on a different asynchronous queue), so we need APIs on PC that allow us to use the GPU in the same way.
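
To picture that model (none of this API exists yet, so every gfx* type and function below is invented purely as an illustration):

/* Hypothetical sketch of multi-threaded command submission -- "GL Next"
   has no published API, so all gfx* names here are made up. Each worker
   thread records into its own command buffer with no shared state; one
   thread submits the lot, with gameplay compute jobs on a separate
   asynchronous queue. */
typedef struct gfxCmdBuffer gfxCmdBuffer;   /* opaque, one per worker thread */
typedef struct gfxQueue     gfxQueue;
typedef struct Draw         Draw;

void gfxCmdDraw(gfxCmdBuffer *cb, const Draw *d);             /* invented */
void gfxQueueSubmit(gfxQueue *q, gfxCmdBuffer **bufs, int n); /* invented */

/* Runs concurrently on N worker threads -- no global context, no locks. */
void record_draws(gfxCmdBuffer *cb, const Draw *draws, int count)
{
    for (int i = 0; i < count; ++i)
        gfxCmdDraw(cb, &draws[i]);
}

/* Runs once on the submission thread. */
void submit_frame(gfxQueue *graphics, gfxQueue *compute,
                  gfxCmdBuffer **graphicsBufs, int n,
                  gfxCmdBuffer *gameplayJobs)
{
    gfxQueueSubmit(graphics, graphicsBufs, n);  /* high-latency graphics queue */
    gfxQueueSubmit(compute, &gameplayJobs, 1);  /* async gameplay jobs alongside it */
}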

We don't "suddenly need" it in 2014; we've needed it for quite some time now. 2014 just happens to be the year when the APIs, hardware vendors and developers are all coming around to the same kind of thinking (it could have been any year; it just happens to be this one).

Previously there have been other solutions to the problem of API overhead - batching, instancing, primitive restart, texture arrays, draw indirect, etc - but those solutions have their own limitations and have been pushed pretty much as far as they can go. The new APIs just solve the same problems that these other solutions solve, but in a more general and more useful way.
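
Instancing is the classic example: one CPU-side call instead of thousands. A sketch with real GL 3.3 calls (the VAO, index buffer and per-instance attributes are assumed to be set up already):

#include <GL/glew.h>

/* Draw 10,000 copies of a mesh with a single submission instead of
   10,000 separate glDrawElements calls -- the old-style fix for
   draw-call overhead, limited to identical meshes. */
void draw_forest(GLuint vao, GLsizei indexCount)
{
    glBindVertexArray(vao);
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            (const void *)0, 10000); /* one call, 10,000 instances */
}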

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

This topic is closed to new replies.
