
blending lightmaps

Started by December 10, 2002 01:42 PM
20 comments, last by Crispy 22 years, 2 months ago
Ok, thanks for the clarification regarding the color blending!

It seems I tweaked something a bit somewhere - don't know what I did - and all of a sudden the lightmap is shearing on the "front face side" (thanks, man - I often get stuck on a word (grin)).

Customary screenshots:



From below:



PS - sorry for the lack of response - school's to blame...

Crispy
"Literally, it means that Bob is everything you can think of, but not dead; i.e., Bob is a purple-spotted, yellow-striped bumblebee/dragon/pterodactyl hybrid with a voracious addiction to Twix candy bars, but not dead."- kSquared
OK, looking at those screenshots, I guess that:
1- you render your lightmaps in a second pass (you already said so, btw),
2- you render your lightmap using geometry other than the terrain geometry,
3- you render your lightmap using the GL_TRIANGLES primitive (not GL_TRIANGLE_STRIP, not GL_QUADS...),
4- you have already mapped your texture coordinates and your texture is ready (I mean, texturing is not the problem here),
5- you are encountering Z-fighting (i.e. flickering due to the depth buffer).

For points (1) and (4) there's not much to say; the other points could be corrected, IMO.

(2) and (5): What I mean by "you render your lightmap using geometry other than the terrain geometry" is that the tessellated quad (yes, the lightmap is mapped onto a quad cut into lots of mini-triangles, I'd bet my hand on it) on which you render your lightmap uses vertex coordinates that are slightly different from the terrain's vertex coordinates. The problem is that "slight" difference. When the polygons are rendered by OpenGL through the pipeline, the depths computed at each vertex of the lightmap will be a little bit different from the depths that were computed for the terrain. Yes, they are different: not by much, but different all the same, due to floating-point precision issues. This results in depth comparisons that sometimes fail and sometimes don't, so when depth testing is enabled a pixel will sometimes be rendered and sometimes not. That's why your polygons look "brushed" (they show a more or less regular screen pattern instead of being completely filled). This is what we call "Z-fighting". Maybe you've already heard of it (btw, please take no offense if you already know what Z-fighting is! I just don't know your level of OpenGL knowledge).
There are a few methods for escaping the problem. Recently, Roming22 ran into a similar problem (with shadow maps, which are really similar to your lightmaps, to say the least) that we discussed here (edit: fixed a stupid typo in the link). If you read that discussion, you'll see at the end of the thread that I enumerated some of the most popular methods for removing or attenuating Z-fighting.
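For reference, a minimal sketch of one of those methods, polygon offset, applied to the lightmap pass (the -1.0 factor/units values are just a starting point to tune, not magic numbers):

#include <GL/gl.h>

// Sketch: draw the lightmap pass with a depth offset so its depth values no
// longer fight with the terrain pass underneath. Assumes a current GL context.
void renderLightmapPassWithOffset()
{
    glEnable(GL_POLYGON_OFFSET_FILL);
    // Negative factor/units pull the lightmap polygons slightly toward the
    // viewer in depth, so they reliably pass the depth test over the terrain.
    glPolygonOffset(-1.0f, -1.0f);

    /* ... draw the tessellated lightmap quad here ... */

    glDisable(GL_POLYGON_OFFSET_FILL);
}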

(3): It's possible that you render your mini-triangles (the ones that tessellate your lightmap) with the GL_TRIANGLES primitive. In that case, if you use it in combination with backface culling (i.e. glEnable(GL_CULL_FACE)), you have to be careful about the order of the vertices sent for each triangle. In the top picture, it looks like half the triangles are sent CCW and the other half are sent CW (if you don't know what I mean by CW or CCW, feel free to ask). What is really strange is the bottom picture, where all triangles are visible! It's as if backface culling is not the problem, unless you send degenerate quads (you don't, do you?).
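For completeness, a minimal sketch of the culling state I have in mind (GL_CCW front faces are the OpenGL default, so with this state any triangle sent clockwise gets culled):

#include <GL/gl.h>

// Sketch: with this state, triangles whose vertices arrive in clockwise order
// are treated as back-facing and discarded. Assumes a current GL context.
void enableBackfaceCulling()
{
    glFrontFace(GL_CCW);   // counter-clockwise winding = front-facing (the default)
    glCullFace(GL_BACK);   // discard back-facing triangles
    glEnable(GL_CULL_FACE);
}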


BTW, don't blame school. "Blame yourself or God"

[edited by - vincoof on December 12, 2002 6:13:20 PM]
Okay, I offset the lightmap slightly so that it doesn't overlap with the terrain - I thought of this a while ago, but back then it seemed like a very extreme measure (mostly because I thought of it). Anyway - it looks fine now that I've resorted to simple alpha blending. Nevertheless, it seems I have to do everything per-vertex after all, since otherwise the brightness is calculated in steps (and different faces won't "blend" the light) - gotta look into some way of optimizing - probably something more fundamental in the terrain engine - all those distance calculations are bound to kill the CPU when there are numerous active lights in the world. I am familiar with z-fighting, though I've never had the privilege of dealing with it. As for the culling part - in the sample shots the lightmap isn't culled at all, for debugging reasons.
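In code, such a blended second pass might look roughly like this (a sketch rather than the actual engine code; GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA is just the "simple alpha blending" mentioned above, and GL_LEQUAL is one way to accept pixels at the same depth as the terrain):

#include <GL/gl.h>

// Sketch: second (lightmap) pass over terrain that has already been drawn.
// Assumes a current GL context and a valid lightmap texture object.
void drawLightmapPass(GLuint lightmapTexture)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmapTexture);

    glDepthFunc(GL_LEQUAL);  // accept fragments at the same depth as the terrain
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // simple alpha blending

    /* ... draw the slightly offset lightmap geometry here ... */

    glDisable(GL_BLEND);
    glDepthFunc(GL_LESS);    // restore the default depth test
}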

BTW - which solution would you suggest: creating a pool of "light" objects and binding one to an object when a projectile or something like that is fired, or dynamically creating and destroying the lights as projectiles are fired and as they leave the world/hit something?

quote:

BTW, don't blame school. "Blame yourself or God"



I should warn you that you're treading a fine line with such statements - I'm easily drawn into rather serious discussions when it comes to sharing views on "rhetoric" subjects... Anyway - school's to blame, and I refuse to look at it any other way. And I most certainly won't blame myself - if it were up to me, I'd have made the Earth rotate at 1/8 its current rate some 2 billion years ago and we'd all be living 96-hour days now (yum).

PS - I went through your cel-shading tutorial. Very commendable, and I'm sure many people will find it useful (including myself). Just one thing, though - you might want to ask NeHe to add a link to it in the original cel-shading tutorial - most people won't find it under the downloads section!

Crispy
"Literally, it means that Bob is everything you can think of, but not dead; i.e., Bob is a purple-spotted, yellow-striped bumblebee/dragon/pterodactyl hybrid with a voracious addiction to Twix candy bars, but not dead."- kSquared
Culling is not an issue, then. I think the apparent culling bug I described is just a consequence of the Z-fighting, nothing more.

About the dynamic vs. static number of objects, it's up to you to decide. In fact, it depends on the application: in a generic application, a dynamic (theoretically infinite) number of lights is needed, while in a specific application (a game, for instance) you may know some limits that you can take into account.
Either way, I recommend a dynamic number of objects: you don't waste as many resources, it's sometimes easier to write the program around them, and it's always more flexible than a static number of objects.
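To make the dynamic approach concrete, a minimal sketch (the Light structure and its fields are made up for illustration, not taken from your engine):

#include <vector>

// Hypothetical light record: one is spawned per projectile and removed when
// the projectile hits something or leaves the world.
struct Light
{
    float position[3];
    float color[3];
    float radius;
    bool  alive;
};

std::vector<Light> g_lights;   // grows and shrinks as projectiles come and go

void spawnLight(const Light& light)
{
    g_lights.push_back(light);
}

void removeDeadLights()
{
    // Erase lights whose projectile is gone; the survivors keep their order.
    for (std::vector<Light>::size_type i = 0; i < g_lights.size(); )
    {
        if (!g_lights[i].alive)
            g_lights.erase(g_lights.begin() + i);
        else
            ++i;
    }
}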

Thanks for downloading the tutorial. Also, I don't know if you have noticed, but there is a complete line-by-line walkthrough of the vertex program. Be sure to check it out.

About the link on NeHe's site: I've never pointed to the exact link because the tutorial number is subject to change (it has already happened). Instead, I prefer writing "go to http://nehe.gamedev.net and take a look at the cel-shading tutorial (number 38)" or something like that, which I've written 4-5 times in different places in the code and the doc.
You're right - there's no knowing how many dynamic lights there will be. One question out of curiosity and another out of interest (i.e. need for speed), however:

1) Why do games (such as Quake) use lightmaps and multitexture them instead of incorporating the lightmaps into the actual textures at BSP build time? I mean, the number of textures wouldn't change, and there would be an extra texture unit to use, since most older GPUs support only up to 2 TUs. I've no direct experience, but I'm sure pre-RIVA chips didn't support more than one.

2) Is there a rational way to approach the following problem:

A triangle fan:


The yellow region is the upper right-hand corner of a patch of vertices. Due to unfortunate alignment, however, the last triangle fan doesn't fit into the patch; its center (0) lies on the edge, so vertices 1, 2 and 8 are not part of the patch. Is there a way of creating "partial" triangle fans, or is this case simply too unfortunate to allow using them at all?

Crispy
"Literally, it means that Bob is everything you can think of, but not dead; i.e., Bob is a purple-spotted, yellow-striped bumblebee/dragon/pterodactyl hybrid with a voracious addiction to Twix candy bars, but not dead."- kSquared
1). Static lightmaps are computed offline using algorithms such as radiosity, to increase lighting realism, whereas dynamic lightmaps are generated in realtime, so there's no highly realistic lighting. (See the multitexture sketch below for how a lightmap is usually combined with the base texture.)

2). Again, I'm not sure I understand your problem, though it's a very good idea to post a picture. Do you want to optimize the pipeline by rendering as few triangle strips as possible? Or do you have a problem because the green and yellow parts don't have the same number of quads (in height), so your rendering algorithm doesn't work?
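Back to (1) for a moment: a minimal sketch of how a lightmap is usually combined with the base texture in a single pass on a 2-TU card, using ARB_multitexture (assumes the extension is present and its function pointers have already been loaded; baseTex/lightmapTex are placeholders):

#include <GL/gl.h>
#include <GL/glext.h>

extern PFNGLACTIVETEXTUREARBPROC glActiveTextureARB;  // loaded elsewhere

// Sketch: unit 0 carries the base texture, unit 1 modulates it with the lightmap.
void bindBaseAndLightmap(GLuint baseTex, GLuint lightmapTex)
{
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmapTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  // base * lightmap
}

// Per vertex, send one set of coordinates per unit, e.g.
// glMultiTexCoord2fARB(GL_TEXTURE0_ARB, u, v) for the base texture and
// glMultiTexCoord2fARB(GL_TEXTURE1_ARB, lu, lv) for the lightmap.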
Yeah, I figured out the lightmap thing myself. It was a really silly notion.

Regarding the triangle fans - imagine a grid with equal distances between the dividing lines. Now suppose you want to render the grid as triangle fans instead of separate triangles, which would use roughly three times the bandwidth. The problem arises when you try to divide an odd number of squares (n - 1, where n is the number of line intersections along a row of full triangle fans). You'll end up with a non-aligned division where the last triangle fan is centered on the edge of the grid. This means the current grid (or patch) doesn't include information about three of the vertices that have to be drawn to create the last triangle fan (1, 2 and 8) - and not drawing it would mean the last column (between verts 6-7, 5-0, 4-3, etc.) isn't drawn at all. See the drawing: verts 1 through 8 form a triangle fan centered at 0, with three vertices outside the patch. In my view, this prevents me from using triangle fans altogether. However, using only 1/3 of the bandwidth does sound tempting. The question is, can I tell OpenGL to stop drawing every m'th triangle fan after 5 vertices, or should I simply create the last column using some other method? Hope this makes more sense.

Crispy
"Literally, it means that Bob is everything you can think of, but not dead; i.e., Bob is a purple-spotted, yellow-striped bumblebee/dragon/pterodactyl hybrid with a voracious addiction to Twix candy bars, but not dead."- kSquared
So you want to reduce the pipeline load by sending triangle fans instead of separate triangles. That is a very good idea!
Nonetheless, I recommend triangle strips instead. They're a bit more flexible, since you don't have to worry about the odd/even case, and they're faster than triangle fans because the pipeline load is reduced further (except for the very special case of a 9x9 grid).
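To make the strip idea concrete, a minimal sketch of rendering an n x n vertex grid as one triangle strip per row (getHeight is a placeholder for however the terrain stores its heights):

#include <GL/gl.h>

// Placeholder height lookup; replace with the terrain's own data.
static float getHeight(int x, int z) { return 0.0f; }

// Sketch: each strip covers one row of quads, two vertices per grid column.
void drawGridAsStrips(int n /* vertices per side */)
{
    for (int z = 0; z < n - 1; ++z)
    {
        glBegin(GL_TRIANGLE_STRIP);
        for (int x = 0; x < n; ++x)
        {
            glVertex3f((float)x, getHeight(x, z),     (float)z);
            glVertex3f((float)x, getHeight(x, z + 1), (float)(z + 1));
        }
        glEnd();
    }
}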

Though, if you really want to cross your triangles (I mean, half the quads split with a slash, the other half split with a backslash), let me point out how useless that is. Many people think it looks better, but in fact the difference is imperceptible. I don't want to hurt people who believe this is a good algorithm; I just want to open some eyes to one of the most stupid ideas ever mentioned in computer graphics.

As for the "skip n vertices" capability: no, you can't tell OpenGL to forget some vertices. You can actually do it with degenerate triangles, but I'm not sure that's what you want.
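For what it's worth, a sketch of that degenerate-triangle trick (repeating a vertex produces zero-area triangles that the rasterizer throws away, which lets several strip rows be glued into one long strip):

#include <vector>

struct Vertex { float x, y, z; };

// Sketch: append one row of strip vertices to an existing strip. Repeating the
// last vertex of the previous row and the first vertex of the new row creates
// degenerate (zero-area) triangles that are rejected during rasterization.
void appendRow(std::vector<Vertex>& strip, const std::vector<Vertex>& row)
{
    if (!strip.empty() && !row.empty())
    {
        strip.push_back(strip.back());  // repeat last vertex of previous row
        strip.push_back(row.front());   // repeat first vertex of new row
    }
    strip.insert(strip.end(), row.begin(), row.end());
}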
Alrighty then - once they give us some slack at school, I'll implement triangle strips.

Now for something completely different (yup - I'm a Monty Python fan); again, I don't want to create a new thread for this. I got a reply from nVidia today regarding the (messed up) reflections on my Riva:



Have you called ValidateDevice() to check that the card can do what you are asking?

Is it a non-pow2 texture?



The second one I can answer: I'm using power-of-two textures. I ran a Google search on the first one, though, and it seems they're referring to a DirectX function (which I know nothing about...). What does that function do, and is there an OpenGL equivalent for it? (Well, there's the glGetXXX family, but I don't know how to phrase the "check that the card can do what you are asking" part in the current context.) I don't really want to reply to them before I've dug deep enough to know what I and they are really talking about. Then again, I guess it was my mistake not to specify that I'm using OpenGL in my original post... damn.

Crispy


"Literally, it means that Bob is everything you can think of, but not dead; i.e., Bob is a purple-spotted, yellow-striped bumblebee/dragon/pterodactyl hybrid with a voracious addiction to Twix candy bars, but not dead."- kSquared
Even though I know nothing about DirectX, and even less about Direct3D, what I do know is that ValidateDevice() is not an OpenGL function, by any means.
If you did not specify that you were using OpenGL, it's possible that this nVidia guy simply didn't guess it.

As for the "check that the card can do what you are asking": LOL, yes, your card can do it! You're asking for OpenGL 1.0 or OpenGL 1.1 features, no more. And as long as your driver exposes the OpenGL 1.1 core (btw, did you call glGetString(GL_VERSION)?), the driver MUST expose ALL the features specified in OpenGL 1.1, be it in hardware or in software - no excuses!
Moreover, if you're compiling with MSVC under Windows and you're NOT using OpenGL extensions, you can be 99.99% sure that you aren't using anything beyond OpenGL 1.1.
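A minimal sketch of that check, for reference (assumes a GL context is already current; the strings come straight from the driver):

#include <GL/gl.h>
#include <cstdio>

// Sketch: print what the driver actually exposes before blaming the card.
void dumpGLInfo()
{
    std::printf("GL_VENDOR    : %s\n", (const char*)glGetString(GL_VENDOR));
    std::printf("GL_RENDERER  : %s\n", (const char*)glGetString(GL_RENDERER));
    std::printf("GL_VERSION   : %s\n", (const char*)glGetString(GL_VERSION));
    std::printf("GL_EXTENSIONS:\n%s\n", (const char*)glGetString(GL_EXTENSIONS));
}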

One more thing: you should tell him that you can send a demo that shows the bug, with source code included (if you have the rights!). Every time I've reported a bug to driver developers, they have never refused a little program that demonstrates the problem explicitly (and simply).

This topic is closed to new replies.
