Curious on Nvidia's reaction to all this.
http://techreport.com/news/26922/amd-hopes-to-put-a-little-mantle-in-opengl-next
. 22 Racing Series .
This is my current take on the GL Next proposal.
Waiting for the day when GPUs ditch rasterization in favor of ray tracing, so all the current graphics APIs have to be thrown away because they'll become useless.
Why would they become useless? Sure, a GPU raytracer looks completely different on the inside, but why does the API on the outside have to be radically different?
They would likely add a different shader stage (called "iridescence/reflection/refraction shader" or the like) and either remove or re-purpose the pixel shader. Spheres would likely be added as first class primitives for obvious reasons, but the rest would more or less remain the same. You still need to bind buffers and upload vertices to the GPU when you do raytracing. You still need textures, and you still need to define which vertices go together into one primitive, etc. Yes, the server would need to cache the whole geometry, but that's not your concern.
The pipeline would evaluate screen fragments much like the pixel shader does when drawing a fullscreen quad right now, intersect the ray with the closest geometry instead of rasterizing all triangles, invoke the whatever-you-call-it shader, and then either trace another ray if gl_RayOut[] has been written to, or blend the color value otherwise. It is not that much different, really (only the parts that you don't see anyway are).
Trolling succeeded \o/
But even then I was a bit serious. Current APIs still make several assumptions rooted in rasterization. Take draw order: ray tracing completely trashes that assumption. You may think it doesn't matter in practical scenarios, but the APIs still demand that draw order be respected, because it can affect the resulting image depending on what you're doing.

You'd also have to stop using the depth buffer for rendering (it would probably still be there in case you want to output depth for computation reasons, but it wouldn't drive visibility anymore).

Oh, and you'd have to stick to a single pixel shader for the entire rendered image, since otherwise you break the massive SIMD-style parallelization used in GPUs. You could emulate the old behavior on such hardware, but it would perform horribly in comparison.

For the record, compute APIs would be safe from such a hardware change, since they only need the parallelization part of the GPU (which would remain intact).
Also I'm not sure anybody would bother adding spheres as a primitive. Like, they're only good at being spheres, and usually you need something other than a sphere. May as well not bother with it and make the hardware easier to design.
We already have a ray-tracing API; it's called the compute shader.
Seriously though, ray-tracing hardware already exists in consumer devices, accessible via the OpenRL API.
. 22 Racing Series .
"PowerVR Ray Tracing is a revolutionary technology that brings lightning fast ray tracing to the world's leading mobile GPU."
Oh, the pun! PowerVR, you're killing me!
"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"
My journals: dustArtemis ECS framework and Making a Terrain Generator
Okay, here is another great opportunity for them to make a hit version of OpenGL (God, I miss the old days). Next-generation hardware is coming, and other up-to-date API versions are coming soon to put a fire under their seats.
With the new X99 chipset, we are waiting for much faster motherboards and, specifically, chips (bring on Haswell-E, which I've been waiting a decade for!) and DDR4 RAM (I thought we would be on DDR5 by now!). Some are saying that, overall, computers will be about 50% faster in the next generation.
Will Khronos target that? I believe they will miss the boat and it will sail without them once again, but I really would like to see OpenGL get with the times.
This might be their last chance to get in the race before they are left on the hardware junk heap forever (where OpenGL is king of the hill, LOL), telling everyone that all you have to do is wait for the next update, which will run on only a fraction of the machines. LOL.
Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.
by Clinton, 3Ddreamer