Of Terrain Implementations & Future Considerations

posted in mittentacular
Published August 30, 2007
In between a number of Counter-Strike: Source games and my reinvigorated interest in Bioshock (once I got over the relatively dull mid-game hump), I've been working on this whole terrain geometry clipmap thing. Right now I'm focusing on getting the entire mesh -- that is, all of the clipmap levels at once -- rendering so I can ensure that my overall clipmap "ring" structure is being handled all sorts of properly.

For those people who aren't completely up-to-snuff on their terrain geoclipmap knowledge, which I assume would be most people, the idea is that the entire terrain mesh can, essentially, be rendered in its entirety using the coarsest clipmap level. That is not usually the desired course (tee-hee) of action, since the coarsest level of detail is far from an accurate representation of the terrain dataset, but it's a good start for visualizing the geoclipmap algorithm. The purpose of the algorithm is to use the coarsest clipmap level as the least detailed mesh of what will end up being a nested grid of increasingly detailed clipmap levels, very much like its namesake algorithm in structure. The height data for this geometrical organization is then handled in the vertex shader when the mesh is rendered: a vertex texture sets the height for a given vertex of a certain clipmap level in the VS (with possible geomorphing handled around that same spot). So, as the user moves around, as I understand it, the active levels will move with the camera to keep a consistent level of detail. But, yeah, here are some shots from my experiments with the block orientations per level.



One of the things I began wondering about while organizing the basic geometry structure sprang from the fact that I accidentally had Fraps running, since I had just played Bioshock (it seemed to be the only screenshot utility capable of handling DirectX 10 apps). Here I was, rendering all of the clipmap levels in full at once, trying to see what the blocks looked like under my recently-implemented calculations, and Fraps was telling me that, with all of those polygons in view, I was getting about 120-200fps.

Each of these blocks takes up a varying amount of world space, based on its vertex scaling, but every block regardless of scale contains about 7,800 triangles, all sent to the GPU via a triangle list in Direct3D 10 (I'm running on a 640MB nVidia 8800GTS). Every clipmap level is made up of at least twelve of these blocks -- and given the algorithm specifications, most levels will have additional geometry to patch cracks and level differences along block edges. So, for every level, we're talking about 94,000 polygons, give or take. I have six levels right now, so for all but the finest level that's a total of 470,000 polygons; add in the poly count for the finest level (which has sixteen blocks) and the total rises to roughly 600,000 polygons per frame. Even taking the minimum framerate I was getting while running my demo with all these polygons in the scene (including my MD5 mesh), that adds up to about 71 million triangles per second (roughly 71 MTris/s). I have a great deal of difficulty believing that I'm really sending 600,000 polygons per frame to my GPU and still getting a framerate in the 100+ range. I'm also currently doing one draw call per block, so that's 76 instances of ID3D10Device::DrawIndexed per frame -- this is, basically, completely unoptimized repeated rendering of a static vertex and index buffer.

These vertices are composed solely of two floating-point values, and the polygons are still, as of yet, untextured, so I can count on the numerous textures applied to this grid (in whatever manner I end up using) slowing things down a bit. But all of this still raises the question: why waste what should end up being two to three weeks of development (outside of weekends I only get one to two hours of development in a night, depending on whether there's a game I feel like playing) implementing a terrain rendering algorithm when I could easily come up with an advancement to my tried-and-true geomipmapping algorithm that I could implement in a couple of days? The quick answer is: well, there's really no need to. For me, this is just an exercise and, given my current tasks at my job, I need something complex to wrap my head around in my spare time. For most practical implementations, though, unless a particularly large dataset is necessary, I think I could come up with an algorithm with similar tech requirements to the geoclipmapping algorithm but a far simpler implementation; the same concept of relying on vertex textures for height values for the grid, but with a more straightforward mesh structure.

When I wrote my book on the matter of terrain rendering, I was continually asked by friends of the developer persuasion: "Why not just use a brute-force method of rendering?" Back then, there was already a compelling argument for doing little more than clever management of the mesh for frustum-culling purposes, and that argument is even more true today, with graphics cards exponentially better than the ones we had access to back in 2002. For game developers, I think the days of algorithms like geometry clipmapping and ROAM 2.0 are numbered, and they will be of use for only the most specialized of games. The complexity of implementing these algorithms simply isn't warranted by the results they give.

This entry got a bit longer than I was expecting, so I'll cut it off for now.