The possible future of games
I've been watching a lot of tech talks and the like lately, and I thought about how amazing things could be for games if one day we could combine these two technologies:
http://www.popsci.com/technology/article/2010-04/video-new-graphics-tech-promises-unlimited-graphics-power-without-extra-processing
and
http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html
We could theoretically have a whole game set in a very accurate representation of our world, created from all the pictures on the web. For example, a game that uses the point-cloud system for graphics could gather all the pictures of Times Square available online, extract the point data using the second system, and end up with a near-perfect replica of Times Square in-game, with very little time required from artists.
Just a thought on what the future of our games may very well be...
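The middle step in that pipeline, turning overlapping photos into point data, comes down to triangulating each feature seen from two or more known camera positions. A minimal sketch of that triangulation as the midpoint of closest approach between two viewing rays (all the numbers and names here are illustrative, not from any real system):

```python
# Triangulate a 3D point from two viewing rays, one per photo.
# Each ray: an origin (the camera position) plus a direction toward the feature.
# Returns the midpoint of the closest approach between the two rays.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    # Solve for the parameters t1, t2 that minimize the distance between
    # points o1 + t1*d1 and o2 + t2*d2 (standard closest-approach formula).
    w = [p - q for p, q in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b              # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * x for o, x in zip(o1, d1)]
    p2 = [o + t2 * x for o, x in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras that both see a feature at (2, 3, 4):
point = triangulate([0, 0, 0], [2, 3, 4],      # camera 1 at the origin
                    [10, 0, 0], [-8, 3, 4])    # camera 2 at (10, 0, 0)
```

In practice, Photosynth-style systems do this across thousands of photos with bundle adjustment to refine the camera poses; the two-ray case above is just the core idea.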
I'm working on a first-person zombie shooter set in a procedural open world, using a custom game engine written from scratch in C++ and OpenGL. Follow my development blog for updates on the game and the engine. http://www.Subsurfacegames.com
Quote: Original post by Davidtse
http://www.popsci.com/technology/article/2010-04/video-new-graphics-tech-promises-unlimited-graphics-power-without-extra-processing
It only works for static scenes. It cannot do physics, collision detection, or environment interaction, at least not remotely at the same scale.
There is also a problem with entropy. Content still needs to be produced, and that is the most expensive step. A set of 500 spheres is always just 500 spheres; one cannot zoom in closer and see more detail. And building each scene takes time and costs money.
It's not really a presentation problem; creating the content is the bottleneck, and it's a creative one, not a technical one.
Quote: we could theoretically have a whole game in a very accurate representation of our world created from all the pictures on the web.
That makes for exactly one single game. Then what?
Movies do reuse props, but each scene is composed individually. Movies would be quite boring if they all played out in Times Square and nothing but Times Square.
Cheap content gives cheap results. Movies would cost much less if they didn't need to reinvent a whole world for each film, and they already have the current world at their disposal.
Well...about the point cloud thingy...
'you can't have more than unlimited detail in the scene, it's the end of the spectrum for graphics'.
lol, seriously?
Oh, and:
Quote:
Unlimited Detail is a software algorithm that gives unlimited geometry. When we say “unlimited geometry” we really do mean it. It really is Unlimited, Infinite, endless power, for 3D graphics.
Yay! Endless power! Even if we cut the processing power in half, it will still be endless, since infinite divided by 2 is still infinite! How about that? Hell, we'll cut it again in half, and again, and again, and again! Guess what: Still infinite power! In fact, it turns out this technology doesn't actually require a CPU or GPU at all! Now, let's go build the perpetual motion machine!
Ok, seriously now... I heard about this 'new tech' quite some time ago, and now that I've just checked it out again, nothing seems to have changed or progressed. Anyway, it's basically voxels, except they claim it's 'different' because 'we only process points that are visible'... like that doesn't happen already? Also, it's kind of funny how they never show any benchmarks citing the system specs and the framerate... well, I guess with the 'infinite power' thing benchmarks are useless, since you can run this super-detailed technology on your ancient cellphone at infinite framerate :)
But, who knows, maybe I'm an idiot and these guys are able to perform magic. We'll see :)
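For what it's worth, "only process points that are visible" is exactly what any engine with a spatial hierarchy already does, by skipping subtrees that fall outside the view. A rough sketch of the standard box-versus-frustum-plane test a point-cloud renderer would run per octree node (the plane representation and the numbers are illustrative):

```python
# Cull an axis-aligned box against a set of view-frustum planes.
# A plane is (normal, d); a point x is "inside" when dot(normal, x) + d >= 0.
# Uses the center/extent trick: the box is entirely outside a plane when even
# its corner farthest along the plane normal is still behind that plane.

def box_outside(center, extent, planes):
    for normal, d in planes:
        # Projected radius of the box onto the plane normal.
        r = sum(abs(n) * e for n, e in zip(normal, extent))
        if sum(n * c for n, c in zip(normal, center)) + d < -r:
            return True    # entirely behind this plane: cull the whole subtree
    return False

# A single plane facing +x through the origin (keeps the half-space x >= 0):
planes = [((1.0, 0.0, 0.0), 0.0)]
visible = not box_outside((5.0, 0.0, 0.0), (1.0, 1.0, 1.0), planes)   # box at x=5
culled = box_outside((-5.0, 0.0, 0.0), (1.0, 1.0, 1.0), planes)       # box at x=-5
```

Every hierarchy-based renderer already relies on this kind of early rejection, so the claim describes a well-known optimization rather than something new.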
The next leap will occur with real-time unbiased rendering. It's barely doable on current top-of-the-line GPUs. Raytracing is cool, but doesn't add enough.
When that happens, visuals will become indistinguishable from photographs, barring the double-slit experiment.
But the content creation problem remains. That level of detail means each leaf must be modeled or generated procedurally, down to the thickness and density needed to filter the spectrum the way chlorophyll does.
After all this is done, some spatial partitioning scheme can be used to buffer that detail the way mipmaps do. But the actual scene will still take terabytes of storage.
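The mipmap-style buffering comes down to picking the octree depth whose nodes project to about one pixel on screen. A back-of-envelope sketch, assuming a simple pinhole camera (the FOV, resolution, and scene-size numbers are made up for illustration):

```python
import math

def octree_lod(distance, root_size, screen_height, vertical_fov_deg):
    # World-space size covered by one pixel at this distance.
    pixel_size = (2.0 * distance * math.tan(math.radians(vertical_fov_deg) / 2.0)
                  / screen_height)
    # Octree depth at which a node is about one pixel across.
    depth = math.log2(root_size / pixel_size)
    return max(0, math.floor(depth))

# A 1024 m root node viewed from 100 m away, on a 1080p screen with a 60-degree FOV:
depth = octree_lod(100.0, 1024.0, 1080, 60.0)
```

Streaming then works like mipmapping: only nodes down to that depth need to be resident for the current view, although the full-detail scene on disk remains terabytes in size.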
For a technology that claims unlimited detail, I certainly think the end result looks like crap. Now, I understand those guys are no artists, and that's fine; the colors are horribly chosen and the scene is repetitive, but I don't mind that.
However, if you look carefully at the video when the camera gets close to some objects, like the plants, you'll see that the "voxel" resolution is very low. It's equivalent to something like a 128x128 texture per square meter, while nowadays any half-decent 3D game will have ground and walls with 2048x2048 textures.
Which brings me to the question: if you really have the technology to display unlimited detail, why don't you demonstrate it in your videos? At the very least, show a virtual resolution equivalent to what a 3D game provides.
I don't doubt that the technology works, but the "unlimited" part is pure marketing bullshit. That, and it's all static. Show me the same scene with destructible walls, tons of walking characters and high resolution, and then I'll be impressed.
Y.
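The 128x128-per-square-meter observation also makes the storage side of the argument easy to run. A back-of-envelope estimate (the area, density, and bytes-per-point figures below are assumptions for illustration, not measurements from the demo):

```python
# Rough storage estimate for a point-cloud scene at texture-like point densities.
def scene_bytes(area_m2, points_per_meter, bytes_per_point):
    points = area_m2 * points_per_meter ** 2
    return points * bytes_per_point

GiB = 1024 ** 3

# The demo's apparent density: ~128x128 points per square meter,
# over a 1 km x 1 km scene, at an assumed 8 bytes per compressed point.
demo = scene_bytes(1_000_000, 128, 8) / GiB

# The density that a game's 2048x2048-per-meter textures would demand:
game = scene_bytes(1_000_000, 2048, 8) / GiB
```

Even at the demo's apparent density, a single square kilometer already costs over a hundred gigabytes; matching modern texture density pushes it past thirty terabytes, which is the storage wall mentioned elsewhere in this thread.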
Taking all the claims at face value and assuming no trickery, it's pretty impressive. Sure, it looks rough, noisy and boring, but like all technologies, it would need time to grow and develop.
I just wonder how much memory you'd need to store billions of voxels (that's what they really are, just rendered differently), and not only the voxels but also the spatial sorting data they claim to use, and how it deals with dynamic scenes (probably not very well, if it requires a massive amount of offline pre-processing and storage).
I'd like to see this running on a console or a low-spec PC, but I assume that's just not feasible at the moment. The claim seems to be about the underlying technology and the way it deals with complexity, not actual raw performance. I'd like to see how this develops, and whether it's just another dead end.
Everything is better with Metal.
Quote: Original post by Ysaneya
That, and it's all static. Show me the same scene with destructible walls, tons of walking characters and a high resolution, then I'll be impressed.
Y.
Well, about that... you can always follow a hybrid approach, where static world geometry uses the 'new tech' and animated characters are made out of polygons as usual. I believe this is somewhat the case with sparse voxel octrees too, which Carmack has dedicated quite some time to talking about, and there is a good chance they will actually be used in real games in the near future.
Quote: Original post by oliii
Sure it looks rough, noisy and boring, but like all technologies, it would need time to grow and develop.
Looky here: infinite detail, in JavaScript. Rough, noisy and boring. A higher-quality version is merely this. Still boring, but theoretically infinite in detail.
Quote: I just wonder however how much memory you'll have to use to store billions of voxels (that's what they are really, just rendered differently), and not just the voxels but the spatial sorting data they are claiming to use, and how it deals with dynamic scenes (probably not very well if it requires a massive amount of offline pre-processing and storage).
That isn't a problem - it's creating them.
As soon as any procedural creation gets involved, the result is noisy and boring. The human brain is the best pattern detector there is.
Eventually, one could probably end up with prefab content that gets assembled, but even that needs to be tweaked, or the result ends up feeling cheap.
This topic is closed to new replies.