True vertex displacement, as it is implemented in current consumer cards, has a fundamental drawback - the original geometry must generally be finely tessellated to take advantage of the effect. This is exactly the reason why normal mapping (and derivative techniques such as parallax mapping) is so popular.
As for the original question: of those options, HDR is definitely the technique I would research further [smile] In addition, using GLSL can't do any harm, provided your target configuration supports shaders.
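Getting a GLSL program up and running only takes a handful of calls once you know the drivers support it. Here is a minimal sketch of my own (not from any particular engine), using the OpenGL 2.0 entry points (the ARB-suffixed equivalents on older drivers); the shader source strings are placeholders you supply yourself:

```cpp
#include <GL/glew.h>   // or whatever extension loader you already use
#include <cstdio>

// Compile and link a GLSL program; vsSource/fsSource are placeholder
// shader strings supplied by the caller.
GLuint BuildProgram(const char* vsSource, const char* fsSource)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSource, 0);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSource, 0);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    // Report link failures so broken shaders don't fail silently.
    GLint linked = 0;
    glGetProgramiv(prog, GL_LINK_STATUS, &linked);
    if (!linked)
    {
        char log[1024];
        glGetProgramInfoLog(prog, sizeof(log), 0, log);
        fprintf(stderr, "GLSL link failed: %s\n", log);
    }
    return prog;
}
```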
Cool new stuff to do with openGL
Displacement mapping through vertex textures is possible, but only in software since I use stencil shadows; doing all of that in hardware would require some kind of geometry-processing shader stage, where one could subdivide meshes and build stencil volumes in a shader.
But then again, it would be cool to add more freeform displacement mapping in the future.
Although I do have a GF6800, which they say supports vertex textures in the vertex program, GLSL does not seem to support it just yet.
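For what it's worth, whether the driver actually exposes vertex texture fetch can be queried at runtime. The sketch below is my own illustration, not engine code: it checks GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS and shows the kind of GLSL vertex shader you would point a height map at; the uniform names are made up for the example.

```cpp
#include <GL/glew.h>   // or whatever extension loader the engine already uses

// Returns true if the driver lets the vertex shader sample textures at all.
bool HasVertexTextureFetch()
{
    GLint vtexUnits = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vtexUnits);
    return vtexUnits > 0;
}

// Hypothetical displacement vertex shader: sample a height map in the
// vertex stage and push each vertex out along its normal.
const char* displaceVS =
    "uniform sampler2D heightMap;                                          \n"
    "uniform float displaceScale;                                          \n"
    "void main()                                                           \n"
    "{                                                                     \n"
    "    // texture2DLod is required here; no automatic mip selection      \n"
    "    // happens in the vertex stage.                                   \n"
    "    float h = texture2DLod(heightMap, gl_MultiTexCoord0.xy, 0.0).r;   \n"
    "    vec4 displaced = gl_Vertex + vec4(gl_Normal, 0.0) * h * displaceScale;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * displaced;           \n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;                               \n"
    "}                                                                     \n";
```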
HDR is definitely one thing to continue working on; in fact, many of the effects you guys suggested require it in order to get rid of certain visual artifacts (blur effects, good pixel-based volume fog, some post-process effects and so on).
The annoying thing is that if you use HDR rendering, you lose all of the blending capabilities and the accumulation buffer. This means that if you are going to blend a polygon, you need to copy the screen to another pbuffer and then use a shader to manually blend them together.
This method is not only slow but totally annoying, and only because NVIDIA thought it was a bit slower than normal.
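A rough sketch of that copy-and-blend workaround, assuming the current scene can be copied into a screen-sized texture (sceneCopyTex, screenWidth and screenHeight are illustrative names, not engine code):

```cpp
#include <GL/glew.h>

// Copy the current (HDR) colour buffer into a screen-sized texture so a
// fragment shader can read what has already been rendered, then blend
// "by hand" when drawing the translucent polygon.
void BlendPolygonManually(GLuint sceneCopyTex, int screenWidth, int screenHeight)
{
    // 1. Grab the scene as rendered so far.
    glBindTexture(GL_TEXTURE_2D, sceneCopyTex);   // pre-allocated, screen-sized
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, screenWidth, screenHeight);

    // 2. Bind a program containing a shader like the one below, bind the
    //    polygon's own texture to another unit, then draw the polygon.
}

// Fragment shader doing the blend manually, since fixed-function blending
// is unavailable on float render targets on this hardware.
const char* manualBlendFS =
    "uniform sampler2D sceneCopy;                              \n"
    "uniform sampler2D surfaceTex;                             \n"
    "uniform vec2 screenSize;                                  \n"
    "void main()                                               \n"
    "{                                                         \n"
    "    vec2 screenUV = gl_FragCoord.xy / screenSize;         \n"
    "    vec4 dst = texture2D(sceneCopy, screenUV);            \n"
    "    vec4 src = texture2D(surfaceTex, gl_TexCoord[0].xy);  \n"
    "    // Classic source-alpha blend, done in the shader.    \n"
    "    gl_FragColor = mix(dst, src, src.a);                  \n"
    "}                                                         \n";
```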
Ambient occlusion, I don't know; I don't think it works that well on large scenes.
Perhaps a different approach is possible, though, if you combine it with deferred shading.
www.flashbang.se | www.thegeekstate.com | nehe.gamedev.net | glAux fix for lesson 6 | [twitter]thegeekstate[/twitter]
how about Direct3D support? [wink]
how quickly you can do this is a real test of how good your engine is [smile]
then again you say 'Cool new stuff to do with openGL' so I guess not :)
I would suggest skeletal animation. Having this is more useful than any of the others.
Then:
a generalized shading system...
render to texture support through some kind of CRenderTexture object or such. Combined with a shader system, this makes lots and lots of other effects very easy to implement. (Pbuffers are god-awful messes, however.)
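One possible shape for such a wrapper, sketched here with framebuffer objects rather than pbuffers (the EXT-suffixed entry points on 2005-era drivers); the CRenderTexture name is just the hypothetical one from the post:

```cpp
#include <GL/glew.h>

// Hypothetical render-to-texture wrapper built on framebuffer objects,
// which avoid the per-context pain of pbuffers.
class CRenderTexture
{
public:
    CRenderTexture(int width, int height)
        : m_width(width), m_height(height)
    {
        // Colour texture that will receive the rendering.
        glGenTextures(1, &m_texture);
        glBindTexture(GL_TEXTURE_2D, m_texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Framebuffer object with the texture as its colour attachment.
        glGenFramebuffers(1, &m_fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, m_texture, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    // Redirect rendering into the texture.
    void Begin()
    {
        glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
        glViewport(0, 0, m_width, m_height);
    }

    // Back to the normal framebuffer (caller restores its own viewport);
    // the texture is now ready to bind for sampling.
    void End() { glBindFramebuffer(GL_FRAMEBUFFER, 0); }

    GLuint Texture() const { return m_texture; }

private:
    int    m_width, m_height;
    GLuint m_texture, m_fbo;
};
```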