
Lighting question. Pretty simple

Started by April 07, 2003 12:23 PM
11 comments, last by leggyguy 21 years, 10 months ago
Hi. A very quick question. Am I right in thinking that OpenGL lighting is based around vertices? If so, it would explain a problem I am having where a large quad in my scene is well lit at the edges and corners, but the center of the quad is much darker. If this is the problem, then I imagine my current large square would look much better if it were built up of several smaller squares. My large square would then have many more vertices in it, and therefore get more light. Am I right in this idea, or is it craziness?
It is based around vertices. More particularly, it is based on the normal vectors of the vertices, and on whether you have GL_SMOOTH or GL_FLAT shading set (via glShadeModel).

You should read the OpenGL Red Book chapter on lighting. It will help immensely.
http://fly.cc.fer.hr/~unreal/theredbook/

-me
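
A minimal sketch of the per-vertex setup described above, assuming one GL light and a single large quad; the light position, colours, quad size and function names here are made up for illustration, not taken from the thread.

#include <GL/gl.h>

void setup_lighting(void)
{
    /* illustrative values only */
    GLfloat light_pos[]  = { 0.0f, 5.0f, 0.0f, 1.0f };   /* w = 1: positional light */
    GLfloat light_diff[] = { 1.0f, 1.0f, 1.0f, 1.0f };

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  light_diff);

    /* GL_SMOOTH interpolates the colours computed at the vertices across
       the face; GL_FLAT uses one vertex's colour for the whole face. */
    glShadeModel(GL_SMOOTH);
}

void draw_big_quad(void)
{
    glBegin(GL_QUADS);
        glNormal3f(0.0f, 1.0f, 0.0f);        /* lighting is evaluated per vertex using this normal */
        glVertex3f(-10.0f, 0.0f, -10.0f);
        glVertex3f(-10.0f, 0.0f,  10.0f);
        glVertex3f( 10.0f, 0.0f,  10.0f);
        glVertex3f( 10.0f, 0.0f, -10.0f);
    glEnd();
}

With only four vertices, a light hovering near the middle of the quad barely brightens it, which is exactly the symptom described in the question.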
Yes, it is based on the vertices. The finer your mesh, the more accurate the lighting looks.

However, there may be ways to have the lighting calculated across the surface rather than only at the vertices, and some implementations of OpenGL (read: certain video card manufacturers) may provide access to this through extensions. But I am unaware of the details. Anyone care to elaborate?
It's not what you're taught, it's what you learn.
Doesn't OpenGL support pixel shaders through hardware extensions?
OpenGL's standard lighting equations all work on a per-vertex basis. The "big quad" problem has been discussed thousands of times, and this is what you can do:

1- Nothing. If you can accept that, don't waste any more time on it.

2- Tessellate. That is pretty T&L intensive and not perfect (you can still zoom in and then need a finer tessellation), but it's very easy to set up and works well if the camera has some constraints (e.g. a finite zoom).

3- Per-pixel lighting. That is pretty fill-rate and texture-rate intensive and (generally) very hard to set up, but the result is (almost) perfect.

The latter is by far the most popular, but it needs a decent graphics card (GeForce+, Radeon 7500+) to be "able" to do it, a fairly recent card (GeForce3+, Radeon 8500+) to render it at a "reasonable" framerate, and a top-of-the-line card (GeForce FX, Radeon 9700+) to render it "almost" as fast as per-vertex lighting.
Don't forget lightmaps (dynamic textures rendered onto a surface to replace the lighting calculations).
It's not what you're taught, it's what you learn.
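
A rough sketch of the lightmap approach Waverider mentions, assuming the lighting has already been baked into a texture: draw the geometry with its base texture, then draw it again with the lightmap and a multiplying blend. draw_surface() and the texture ids are placeholders, not anything from this thread.

#include <GL/gl.h>

void draw_surface(void);   /* placeholder: draws the textured geometry */

void draw_with_lightmap(GLuint base_tex, GLuint lightmap_tex)
{
    /* pass 1: base texture (GL_TEXTURE_2D assumed already enabled) */
    glBindTexture(GL_TEXTURE_2D, base_tex);
    draw_surface();

    /* pass 2: multiply what is already in the framebuffer by the lightmap */
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);      /* result = framebuffer * lightmap */
    glDepthFunc(GL_EQUAL);                   /* same geometry re-drawn at the same depth */
    glBindTexture(GL_TEXTURE_2D, lightmap_tex);
    draw_surface();
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
}

On hardware with ARB_multitexture the two passes are usually collapsed into one, but the blended version shows the idea with nothing beyond plain OpenGL 1.1.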
You're right, Waverider. That makes a fourth point, eh eh.
One quick request --- I know what per-vertex lighting is and I know what per-pixel lighting is, but could you please explain tessellation?

CORRECT ME IF I AM WRONG IN MY ASSUMPTIONS

It is possible to emulate per-pixel lighting in software, but in order to do that you need a very complex and time-consuming algorithm, for example Radiosity.

Or am I wrong?
Maybe Radiosity is just the method the hardware uses to do per-pixel lighting?

Please untangle my thoughts.
Actually, Radiosity is way better than per-pixel lighting. Radiosity takes into account the light contribution of every neighbouring face, and it is almost impossible to do in realtime today.
Generally, Radiosity maps are pre-computed and then applied as lightmaps, but since the lightmaps are not computed in realtime, the lights can't move and you don't get specular highlights.


Tessellation means that vertices are added over a surface.
For instance, you can split a quad into a 10x10 grid of smaller quads.
The surface itself has the same shape, but there are more polygons over it, so vertex-based computations (such as OpenGL's lighting) give better (quality increased) but slower (speed decreased) results.
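
A rough sketch of that kind of tessellation, assuming a flat quad of a given size lying on the XZ plane; the function name and parameters are made up for illustration.

#include <GL/gl.h>

void draw_tessellated_quad(float size, int grid)
{
    float step = size / (float)grid;
    float half = size * 0.5f;
    int   i, j;

    glNormal3f(0.0f, 1.0f, 0.0f);            /* flat surface, same normal everywhere */

    for (i = 0; i < grid; ++i) {
        float x0 = -half + i * step;
        float x1 = x0 + step;

        glBegin(GL_QUAD_STRIP);
        for (j = 0; j <= grid; ++j) {
            float z = -half + j * step;
            glVertex3f(x0, 0.0f, z);         /* every extra vertex is another */
            glVertex3f(x1, 0.0f, z);         /* point where lighting is computed */
        }
        glEnd();
    }
}

Calling it with, say, draw_tessellated_quad(20.0f, 10) gives the 10x10 grid described above, so a light over the middle of the surface now has vertices nearby to brighten.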
I thought Quake implemented Radiosity. If so, was it in realtime?

Thanks anyway

