Does anyone have experience with path tracing using OpenGL 4? Do you have a favourite implementation of such a thing? Thanks for your time and expertise.
OpenGL path tracer in C++
You have an RTX GPU. Why do you want to implement your own RT in OpenGL compute when you could get the same with HW acceleration using Vulkan / DX12?
Otherwise, this might help: https://www.kevinbeason.com/smallpt/
There are also countless shadertoys doing similar things using simple procedural scenes.
Examples with meshes and acceleration structures are harder to find, since performance is usually too bad for games.
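If you do go the OpenGL compute route, the host side is fairly small. Here is a minimal sketch, assuming an existing GL 4.3+ context and function loader (e.g. GLFW + GLAD) and a path-tracing compute shader of your own; names and bindings are illustrative, not from any particular project:

```cpp
// Host-side skeleton for a compute-shader path tracer (GL 4.3+).
// Assumes a valid GL context and loader are already set up (e.g. GLFW + GLAD).
#include <glad/glad.h>

GLuint makeComputeProgram(const char* src)
{
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);                 // check GL_COMPILE_STATUS in real code

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);                  // check GL_LINK_STATUS in real code
    glDeleteShader(shader);
    return program;
}

GLuint makeAccumTexture(int width, int height)
{
    // RGBA32F accumulation target the shader writes to via imageStore().
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, width, height);
    return tex;
}

void renderOneFrame(GLuint program, GLuint accumTex, int width, int height, int frame)
{
    glUseProgram(program);

    // Bind the accumulation target as image unit 0 (matches `binding = 0` in the shader).
    glBindImageTexture(0, accumTex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);

    // Frame index drives the RNG seed so samples accumulate over time.
    glUniform1i(glGetUniformLocation(program, "uFrame"), frame);

    // One thread per pixel, assuming local_size_x = local_size_y = 8 in the shader.
    glDispatchCompute((width + 7) / 8, (height + 7) / 8, 1);

    // Make the image writes visible before the texture is sampled or blitted.
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);
}
```

The actual path tracing then lives entirely in the GLSL compute shader, smallpt-style for procedural scenes or with SSBO-backed mesh and BVH data for anything bigger.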
I notice that there is a lib called tinybvh. I’d like to see something use that lib. Basically, I’m betting that the Nintendo Switch 2 will not have hardware acceleration for ray tracing.
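For reference, CPU-side use of tinybvh is roughly this, going from memory of its README; I haven't verified the exact types and signatures against the current header, so treat `bvhvec4`, `Build`, `Intersect` and the `Ray` members as approximate:

```cpp
// Rough tinybvh usage sketch (single-header BVH library by Jacco Bikker).
// API names are from memory of the README; verify against tiny_bvh.h.
#define TINYBVH_IMPLEMENTATION
#include "tiny_bvh.h"
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    // Triangle soup: three bvhvec4 per triangle (the w component is unused here).
    std::vector<tinybvh::bvhvec4> tris = {
        { -1, -1, 5, 0 }, { 1, -1, 5, 0 }, { 0, 1, 5, 0 }   // one triangle at z = 5
    };

    tinybvh::BVH bvh;
    bvh.Build(tris.data(), (uint32_t)(tris.size() / 3));      // build over 1 triangle

    // Trace a single ray from the origin straight down +z.
    tinybvh::Ray ray(tinybvh::bvhvec3(0, 0, 0), tinybvh::bvhvec3(0, 0, 1));
    bvh.Intersect(ray);

    if (ray.hit.t < 1e30f)                                    // "no hit" is a huge t
        printf("hit triangle %u at t = %f\n", ray.hit.prim, ray.hit.t);
    else
        printf("missed\n");
    return 0;
}
```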
taby said:
I’m betting that the Nintendo Switch 2 will not have hardware acceleration for ray tracing.
Current state of the rumor mill, afaik: ‘10 times more powerful. It will have RT and tensors for DLSS, but it will not have frame generation because its architecture is Ampere.’
Imagine…
Nintendo: Can we have a new SoC for portable game console?
NV: Sure, with the newest and most important acceleration features we guess? RTX and DLSS?
Nintendo: Meh, not really. Just rasterizing triangles and some compute will do.
NV: How ignorant. Please leave our office. We'll serve your competitors, who are better informed about the future of video games!
Or, wait - a better one:
Gamer 2000: Take my money and give me that GTX!
NV: Sure! We are so happy about your interest! Thanks for all the money and support!
Gamer 2010: Oh wow! A GTX Titan for lots of $$$? Give it to me! I'll impress my friends with my shiny enthusiast rig!
NV: Sure, take it! And thanks for all the money still. We're already a stable company now, doing good business. Thanks!
Gamer 2020: Um… your RTX became quite expensive, you know? Don't you have some cheaper model for entry market?
NV: Sure, but it does not have enough RAM and cores to be useful for games. Better spend more for something proper.
Gamer 2025: Doubling the price once more? Seriously? My support has funded your company. Can't you make me a good offer for that?
NV: Who are you? Oh, a gamer? Haha, that's for kids. We are an AI company now, building the future of humanity. We don't care about silly games anymore.
Either you pay premium or you leave, we don't really care.
hehe… never feed me opportunities for an NV rant :D
Yes, Nintendo is avoiding generative AI, as far as I've heard.
10x faster with RTX? That would be a dream. 🙂
taby said:
Yes, Nintendo is avoiding using generative AI, as far as I heard.
I like that.
But at the same time it makes me feel like a dinosaur, sitting with other dinosaurs on a couch and dismissing the opportunity. :D
taby said:
10x faster with RTX? That would be a dream.
Maybe 10x is a bit optimistic. But afaik the Switch has 256 GPU cores and the Switch 2 will get >1500. Newer architecture, higher frequency… so 10x should be possible in some cases.
A Steam Deck is already 5x a Switch, has RT too, and comes at a similar price.
taby said:
But… the Steam Deck does not have Super Mario Party! 🙂
It has. And it has Sonic too, even Breath Of The Wild. They call it ‘Emulation’. : )
Edit: Isn't it nice when YOU can decide which programs to run on YOUR computer?
Gabe does good things… ; )
taby said:
Does anyone have experience with path tracing using OpenGL 4?
No, I switched to D3D + HLSL some time ago. Maybe a Vulkan implementation will happen. We do use compute-based ray tracing, though, next to hardware DXR.
There are problems that hw RT is really bad at, or that are poorly specified. But here we are - a half-baked approach from NV that everyone was forced to adopt. I'd apply Linus Torvalds' famous quote here, tbh.
Vilem Otte said:
There are problems hw rt is really bad at or poorly specified.
Agreed, but which are the problems that affect you?
From my perspective the HW is fine, but the decision to blackbox BVH data structures and the build process was very shortsighted, hindering progress more than it helps.
I want vendors to specify their data structures, and APIs should have functions to query those specs so we can modify BVH data or build it ourselves entirely. This way something like Nanite could become traceable.
This does not fit Microsoft's approach of avoiding vendor-specific extensions, so an alternative would be a BVH API accessible from compute shaders. But such a thing is much harder to pull off now, and with the growing pile of chip architectures it's maybe already impossible.
I see that the blackbox has helped HW vendors, but it also blocked any innovation on the software side. And I'm really tired of following NV's dictatorship on how graphics has to be done. We should decide that, and they should react to us with HW design, not the other way around as it is currently.
And my argument is: Moore's Law is dead, so HW progress is stuck. But SW progress is still possible, so the ball should be in our court.
Also: The BVH data is ours, so it is our right to access it.
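To be concrete about what "access it" would mean: the kind of structure we'd want to read, modify, or build ourselves is not exotic. A plain BVH2 node layout plus a stack-based traversal, as one would write it for a compute kernel, is tiny. A generic illustrative sketch (my own layout, not any vendor's actual node format):

```cpp
// Generic software BVH2 layout and traversal - the sort of thing a compute
// shader could do if the acceleration structure were an open format.
// Illustrative sketch only; not any vendor's real node layout.
#include <algorithm>
#include <cfloat>
#include <cstdint>
#include <utility>
#include <vector>

struct AABB { float bmin[3], bmax[3]; };

struct BvhNode
{
    AABB     bounds;
    uint32_t leftFirst;   // interior: index of left child; leaf: first triangle index
    uint32_t triCount;    // 0 for interior nodes, >0 for leaves
};

// Slab test against a node's bounds; returns entry distance or FLT_MAX on miss.
// Assumes non-zero direction components (or relies on IEEE inf behavior).
inline float intersectAABB(const float o[3], const float invDir[3], float tMax, const AABB& b)
{
    float tEnter = 0.0f, tExit = tMax;
    for (int a = 0; a < 3; ++a)
    {
        float t0 = (b.bmin[a] - o[a]) * invDir[a];
        float t1 = (b.bmax[a] - o[a]) * invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tEnter = std::max(tEnter, t0);
        tExit  = std::min(tExit, t1);
    }
    return tEnter <= tExit ? tEnter : FLT_MAX;
}

// Stack-based traversal; the leaf case (triangle tests) is left as a stub.
void traverse(const std::vector<BvhNode>& nodes, const float orig[3], const float dir[3])
{
    float invDir[3] = { 1.0f / dir[0], 1.0f / dir[1], 1.0f / dir[2] };
    uint32_t stack[64]; int sp = 0;
    stack[sp++] = 0;                                  // start at the root node
    float tHit = FLT_MAX;

    while (sp > 0)
    {
        const BvhNode& node = nodes[stack[--sp]];
        if (intersectAABB(orig, invDir, tHit, node.bounds) == FLT_MAX) continue;

        if (node.triCount > 0)
        {
            // Leaf: test triangles [leftFirst, leftFirst + triCount), shrink tHit on hit.
        }
        else
        {
            stack[sp++] = node.leftFirst;             // left child
            stack[sp++] = node.leftFirst + 1;         // right child, stored adjacently
        }
    }
}
```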
Some years ago this made me think it would actually be better to have vendor APIs again, like in the early days of 3D acceleration. Glide was much simpler than OpenGL, and Mantle was simpler than Vulkan but had more, and more important, features.
Maybe it would be better to write one backend per vendor than to try to have cross-vendor APIs at the price of compromised performance and redundant complexity.
But I'm not sure, and it's just pointless speculation anyway.
However, besides myself I see quite a few devs complaining about the state of HW RT, but they rarely go into specifics.
It's not enough to just complain about missing flexibility; we need to make specific points and proposals, I think. That might happen behind closed doors, but there should be public discussion as well, imo.
5 years have passed, Indiana Jones is the first game requiring HW RT, but there have been no improvements on the API side at all.
Currently I think the fix will only come when DX12 and VK get phased out and replaced by new APIs again, which is something that seems to happen only every 20-30 years. I'll be dead by then, so using HW RT inefficiently seems the only way to use it at all. >:(