
Next Generation OpenGL

Started by August 11, 2014 03:37 PM
75 comments, last by Sik_the_hedgehog 10 years, 2 months ago

I posted this a while ago somewhere else:


Have ZERO validation during rendering by using a Pipeline object encompassing ALL state. (but not pointers to buffers/textures)
Fetching an uncommitted (not COMMITted) page returns zero.



/* STANDARD VERSION */
Device
{
/* EVENT */
HEvent CreateEvent();
void WaitForEvent( HEvent );
void WaitForMultipleEvents( HEvent[], count );


/* DATA */
void* MemAlloc( CPU|GPU, size, RESERVE&|COMMIT ); // cf. VirtualAlloc
void MemFree( address, size, RELEASE|DECOMMIT ); // cf. VirtualFree


Descriptor* CreateBufferDesc( address, size/*, DXGI_FORMAT[], count*/ ); //{Index/Vertex/Constant/Texture}Buffer ; For DEBUG it might be useful to type buffers too.
// Surfaces need a complex Descriptor because data is transformed from CPU memory to GPU memory... (Morton order...) ; with standard layouts + restrictions it wouldn't need to be
Descriptor* CreateSurfaceDesc( address, {1D, 2D, 3D, CUBE}, DXGI_FORMAT, samples, width, height, depth, /*layers,*/ lod ); //reinterpret width/height/depth, add layers if 3D_ARRAY allowed


/* PROGRAM/PIPELINE */
Program* CreateProgram( {Vertex, [ Hull, Tessellator², Domain, ] Geometry, Rasterizer², Fragment, DepthStencil², Blender²}, char* source );
// All items marked with "²" are states.
// Samplers are also states defined in their respective programs, and include a DXGI_FORMAT.
// Vertex Programs include the index DXGI_FORMAT, vertex buffer descriptions (inc. DXGI_FORMAT) and IB & VB[] "bindings". (For PipelineProgramDescriptors)
// Fragment Programs include all output buffers making the FrameBuffer.


Pipeline* CreatePipeline( Program[], count ); //That's the whole graphics pipeline in one object.


/* COMMAND QUEUE */
CommandQueue* CreateCommandQueue();
void Process( CommandQueue& queue ); // FIFO order queuing then execution.


/* DISPLAY */
void Show( Descriptor left, Descriptor* right, iRefresh ); // right is optional, for stereoscopy. iRefresh: at which screen refresh to display it. (0 = immediately, 1 = next... ; -1 = next unless already late, in which case immediately)
// It does not work on a multisampled Surface.
}


CommandQueue
{
//Every function in here is appended to the CommandQueue and doesn't execute until the queue is run, after having been enqueued on the Device via Process().


/* EVENT */
void Insert( HEvent );


/* MEMORY */
void MemSet( address, size, value );
void MemCopy( src, dst, size );
void MemCopyEx( src, dst, size, DXGI_FORMAT ); // When src is CPU mem, data is linear. When dst is GPU mem, it will be optimised to the native layout [Textures]. => src@GPUmem & dst@CPUmem means data will be converted back to linear.


/* PIPELINE SETUP */
void Pipeline( Pipeline* );
void PipelineProgramDescriptors( Descriptor[]*, uint32_t* counts ); // 1 Descriptor[] per Pipeline's Program
// That's akin to "SetShaderResources" and "SetConstantBuffers"... functions of D3D10+, except you just provide Descriptors array for each Program.
// IB & VB[] are also set using this function. The Program specifies the array layout.
// "Framebuffer" is also specified by Descriptors


/* DRAW */
enum MODE { POINTS, LINES(_ADJ), LINE_STRIP(_ADJ), TRIANGLES(_ADJ), TRIANGLE_STRIP(_ADJ), PATCHES };
void DrawArrays( MODE, First, nVertices, nInstances, BaseInstance );
void DrawElements( MODE, First, nIndices, BaseVertex, nInstances, BaseInstance );
void DrawArraysIndirect( MODE, Descriptor ); //Buffer format : First, nVertices, nInstances, BaseInstance
void DrawElementsIndirect( MODE, Descriptor ); //Buffer format : First, nIndices, BaseVertex, nInstances, BaseInstance
// w/ First : the starting point in VB(DrawArrays), IB(DrawElements) ; BaseVertex : a constant added to each index before fetching from VB ; BaseInstance : the base instance for use in fetching instanced vertex attributes.


/* DISPATCH */
void Dispatch( nX, nY, nZ );
void DispatchIndirect( Descriptor ); //Buffer format : nX, nY, nZ

}
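To make the proposal concrete, here is roughly what one frame could look like with the API above. This is a hypothetical sketch — none of it is implemented, and every lowercase identifier (vsProg, descs, frameDone, backbufferDesc, ...) is a placeholder:

```
/* HYPOTHETICAL USAGE SKETCH */
void* vb = device.MemAlloc( GPU, vbSize, RESERVE&COMMIT );
Descriptor* vbDesc = device.CreateBufferDesc( vb, vbSize );

Program* stages[] = { vsProg, rastState, fsProg, dsState, blendState };
Pipeline* pipe = device.CreatePipeline( stages, 5 );

CommandQueue* cq = device.CreateCommandQueue();
cq->MemCopy( cpuVerts, vb, vbSize );              // upload vertex data
cq->Pipeline( pipe );                             // one object = all state, zero validation
cq->PipelineProgramDescriptors( descs, counts );  // one Descriptor[] per Program
cq->DrawArrays( TRIANGLES, 0, nVertices, 1, 0 );
cq->Insert( frameDone );                          // event for CPU sync

device.Process( *cq );                            // FIFO queuing, then execution
device.WaitForEvent( frameDone );
device.Show( backbufferDesc, NULL, 1 );           // present at next refresh
```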


Shading Language: similar to GLSL, I'd think.
Here is an idea for an extended version:

/* EXTENDED VERSION */
Device
{
void SetPageFaultHandler( PAGEFAULTHANDLERPROC ); // Called either on each individual page fault, or after each Show(...)
}


Shader Language:
-Needs access to existing Descriptors and Pipelines. (Optional access to existing CommandQueues ?)
-Needs CreateCommandQueue() & Process( CommandQueue& queue ).
-Needs all CommandQueue functions, except Insert( HEvent ).
I haven't written an application using this API yet (it wouldn't do anything anyway, as it's not implemented), but I plan to, in order to make sure nothing's missing.
I'd be curious to hear other programmers' ideas, how well they think this would integrate into their engine(s), and whether it would make their lives easier...
I also suggested this for an upcoming OpenGL/ES revision when I was working for an IHV, but politics were getting in the way...
Many thanks to ATi for kicking everyone's bottom with Mantle, forcing D3D12 and OpenGL Next to get done!
-* So many things to do, so little time to spend. *-

Even AMD is on board... maybe they'll give up Mantle for OpenGL 5, or at least make it easy to cross-develop? The only one missing is, of course, MS. But nowadays that shouldn't bother anyone anymore.


AMD are a little divided as a company it seems: on the one hand you have Mantle, on the other hand you have their OpenGL rep saying "we don't need Mantle"; make of that what you will.

(Also as a side note, MS are now part of Khronos, albeit currently only focusing on WebGL, having once been a part of the ARB many years ago.)

(Also as a side note, MS are now part of Khronos, albeit currently only focusing on WebGL, having once been a part of the ARB many years ago.)

Which is worrying. Microsoft's contributions to the ARB mainly seem to have been claiming they had patents related to any new idea anyone suggested, while never getting round to producing any evidence that they did, nor admitting they didn't. They were very much there as a way of blocking OpenGL development.

They were very much there as a way of blocking OpenGL development.


I would buy into that theory IF, after MS left the ARB in March 2003, OpenGL development had gone into overdrive...

However, 2004 saw the release of the much-reduced OpenGL 2.0.
It was then two years until 2.1 was released.
Two more years until the 3.0 release, which was a complete clusterfuck. (See my Slashdot-referenced thread on the subject.)

After that things sped up, but that was largely a reaction to the fact that, with DX10 and DX11, OpenGL was woefully behind, and it was almost viewed as a mad scramble to catch up.

I'm not going to claim MS were saints; they were in a position of dominance at the time, so they tended to throw their weight around a bit. HOWEVER, after they left, the ARB was still a mess of infighting and companies trying to one-up each other, and it took years and a hell of a kick to sort that mess out. For the last 11 years you can't blame MS for any of OpenGL's ills, and that period has included two of the biggest API mistakes going.

How much influence they'll have when it comes to WebGL remains to be seen; in that space they aren't the biggest player, and the company has shifted how it positions itself - this isn't the 90s MS any more. Although, frankly, if the major industry players let MS block things again, then they are fools.


They were very much there as a way of blocking OpenGL development.

I would buy into that theory IF, after MS left the ARB in March 2003, OpenGL development had gone into overdrive...

However, 2004 saw the release of the much-reduced OpenGL 2.0.
It was then two years until 2.1 was released.
Two more years until the 3.0 release, which was a complete clusterfuck. (See my Slashdot-referenced thread on the subject.)

After that things sped up, but that was largely a reaction to the fact that, with DX10 and DX11, OpenGL was woefully behind, and it was almost viewed as a mad scramble to catch up.

I'm not going to claim MS were saints; they were in a position of dominance at the time, so they tended to throw their weight around a bit. HOWEVER, after they left, the ARB was still a mess of infighting and companies trying to one-up each other, and it took years and a hell of a kick to sort that mess out. For the last 11 years you can't blame MS for any of OpenGL's ills, and that period has included two of the biggest API mistakes going.
I wasn't trying to claim the problems with the ARB were solely down to Microsoft by any means - just commenting on their involvement with it.

How much influence they'll have when it comes to WebGL remains to be seen; in that space they aren't the biggest player, and the company has shifted how it positions itself - this isn't the 90s MS any more. Although, frankly, if the major industry players let MS block things again, then they are fools.

I'd be more worried if they were trying to get involved in OpenGL/ES itself rather than WebGL. I still think they need to be watched like a hawk though.

https://www.khronos.org/files/opengl45-quick-reference-card.pdf

I wasn't trying to claim the problems with the ARB were solely down to Microsoft by any means - just commenting on their involvement with it.

There's not even much in the way of evidence of that.

In fact all of the evidence is that Microsoft badly wanted to break into the graphics workstation market with Windows NT, and for that they needed OpenGL. It doesn't seem to make much sense that they'd try to kill OpenGL based on that, does it?

The evidence is also that Direct3D came about only because the NT team didn't want to play ball with the consumer Windows team, so the latter had to go off and invent something of their own instead.

You see, Microsoft is frequently thought of as some great big monolithic oppressive regime (probably with cool marching music: all such regimes tend to have it), but they're not really. The teams within Microsoft have historically tended to operate largely independently, and there is - or at least was - a culture of pretty severe infighting between them.

The tired old "oh noes, teh Micro$oft iz trying to killzor teh OpenGL" myth is just that - a tired old myth. Microsoft needed a strong competitive OpenGL to enable them to get some of that sweet sweet graphics workstation market money.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

There's not even much in the way of evidence of that.

I'm going on what I remember from reading the ARB meeting notes at the time.

In fact all of the evidence is that Microsoft badly wanted to break into the graphics workstation market with Windows NT, and for that they needed OpenGL. It doesn't seem to make much sense that they'd try to kill OpenGL based on that, does it?

The CAD crowd weren't too interested in the suggested new features that made OpenGL more suitable for game developers, and maintaining backward compatibility for them caused many of the problems in the Longs Peak era.

Microsoft weren't trying to kill OpenGL - just trying to stop it being a viable platform for producing Windows games (which would then be relatively easy to port to other platforms).

The CAD crowd weren't too interested in the suggested new features that made OpenGL more suitable for game developers, and maintaining backward compatibility for them caused many of the problems in the Longs Peak era.

Word is that - despite what was put out at the time - Longs Peak wasn't actually killed by the CAD crew. In fact, according to the GDC Longs Peak presentation, complete backward compatibility was an explicit goal of Longs Peak, so CAD vendors had absolutely nothing to worry about.

(which would then be relatively easy to port to other platforms).

This is also a myth.

OpenGL on its own doesn't make a game easy, or even relatively easy, to port.

You've still got to port your networking code, your sound code, your windowing system code, your input code, your filesystem, and all of the other platform-dependent subsystems. OpenGL sure ain't gonna make those portable. "Instant portable (just add OpenGL)!" is probably as big a lie as the old list of platforms that people always trot out (PS3, Wii, etc) when making claims for portability.

In fact portability between different hardware on the same platform is an even bigger problem, and that's something that OpenGL has historically failed miserably at. You get GL_ATI_do_it_this_way, GL_NV_do_it_that_way and eventually after two+ years of tortuous nitpicking and squabbling over semantics you get GL_ARB_do_it_the_other_way. How is that possibly a good thing?

And that's probably on top of my personal wish-list for Next Gen GL: kill off vendor-specific extensions. If we're going to have extensions at all, start them at EXT and require two or more vendors to implement them before shipping in public drivers; promote to ARB at the appropriate time.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

And that's probably on top of my personal wish-list for Next Gen GL: kill off vendor-specific extensions. If we're going to have extensions at all, start them at EXT and require two or more vendors to implement them before shipping in public drivers; promote to ARB at the appropriate time.

The problem with this, though, is: if vendor X has some new feature which their hardware supports, how can that be exposed? You can't add it to the core API, because not all of the vendors will support it. So the only solution, in my eyes, is to use extensions. It's a difficult situation: on the one hand you want to provide an "even playing field" for all the vendors, so that developers can code to the API and not to each vendor; but on the other hand, if a vendor has some shiny feature, such as hardware-accelerated ray tracing, then developers will also want to make use of it when it's available.
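The fallback pattern this implies - probe for vendor/EXT/ARB variants at runtime and pick the best available - ultimately boils down to a string lookup. A minimal sketch in C; in a real GL 3+ application the list entries would come from glGetStringi(GL_EXTENSIONS, i), but here the list is a parameter so the helper stands alone without a GL context:

```c
#include <stddef.h>
#include <string.h>

/* Return 1 if `name` appears in the extension list, 0 otherwise.
 * The list is passed in (rather than queried via glGetStringi) so this
 * can be shown and tested without a live GL context. */
static int has_extension(const char *const *exts, size_t count,
                         const char *name)
{
    for (size_t i = 0; i < count; ++i)
        if (strcmp(exts[i], name) == 0)
            return 1;
    return 0;
}
```

An engine would then branch: prefer, say, GL_ARB_direct_state_access, fall back to GL_EXT_direct_state_access, and finally the plain bind-to-edit path - exactly the per-vendor divergence complained about above.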

