
Why lens flares?

Started by June 24, 2010 03:03 PM
19 comments, last by way2lazy2care 14 years, 4 months ago
I was watching an animated film last night, "The Place Promised in Our Early Days" (http://www.imdb.com/title/tt0381348/), and there were plenty of lens flares in it.
Sometimes it suits the vision.
Realism is often not desired. Look at lighting on a heavily overcast day: it's often very boring, with no shadows, etc.
With a game, adding shadows and so on (sexing it up) can aid the person playing the game.
Same with reflections in water: often there are none, yet games add them (even though it's easier and more realistic not to).
Quote: Original post by japro
Modern high quality lenses (not cheaply made 15x zooms) are almost flare free if used properly. You will have a hard time producing a lens flare on purpose with some of them.

Not entirely true, but I see your point.

It's all about artistic style. It's really no different than with HDR photography and HDR rendering in games. With HDR photography you are trying to correctly expose both underexposed and overexposed portions of the same image (often by taking multiple exposures and combining them). In games, it seems that we are doing the opposite: trying to imitate images that aren't really set up for HDR.

Eg:

[images: HDR exposure comparison, not preserved]
It really comes down to what looks good, not necessarily what is right.

Quote: Original post by Moe
[images not preserved]
It really comes down to what looks good, not necessarily what is right.
I don't agree with the HDR stuff, in a way: eyes really do see like what's shown in the bottom-right image. Maybe the terms are fishy, but the outcome is realistic.

In fact, maybe it should be called low dynamic range instead (since the eye can't cope with wide ranges, so bright stuff saturates to white).

That's why I usually don't agree on "over-HDR".
Quote: Original post by szecs
In fact, maybe it should be called low dynamic range instead (since the eye can't cope with wide ranges, so bright stuff saturates to white).

Actually, the eye can cope with an extremely wide dynamic range, much more than any camera can. AFAIR, the dynamic range of the eye is something around 1:5,000,000,000. It also does something called local adaptation: basically, it tonemaps individual parts of the image separately, depending on local contrast. Some high-end cameras now do that (although far from perfectly), but only very few realtime 3D renderers do, because it takes shitloads of performance and the results are not very convincing.
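The local adaptation idea above can be sketched in a few lines. This is a toy Reinhard-style operator (my own illustration, not taken from any particular renderer): each pixel is exposed against the average luminance of its neighbourhood instead of the whole frame. The `key` value of 0.18 is the usual mid-grey convention, and the crude box blur stands in for the Gaussian pyramids real implementations use.

```python
import numpy as np

def box_blur(img, radius):
    """Uniform box filter, implemented by summing shifted copies (edge-padded)."""
    padded = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def local_tonemap(hdr_lum, radius=8, key=0.18):
    """Reinhard-style curve, but each pixel is exposed against its local average."""
    local_avg = box_blur(hdr_lum, radius)
    scaled = key * hdr_lum / (local_avg + 1e-6)  # expose against the neighbourhood
    return scaled / (1.0 + scaled)               # compress into [0, 1)
```

With this, a dark region next to a very bright one both end up near mid-grey, since each is exposed against its own surroundings — which is also exactly why the effect looks strange when the blur radius is too small.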
Quote: Original post by Promit
Short version: It's because game developers wish they were making movies but none of them knows anything about movie production.


Hm...judging by most movies that get made these days, it seems to me that it's movie directors that wish they were making videogames.

Quote: Original post by szecs
Not to mention that it's impossible to display the focus of the eye: it's not just that the periphery is blurred; you sometimes can't even sense the colors/shapes of things in the periphery (that's philosophical stuff: you see it, but you cannot "decode" the image. Disturbing to think about, so I'm going to bed).


No, that's to do with the structure of the human eye: the highest 'resolution' is at the focal point, which has a high number of 'cones' to detect colour. Away from that focal point, 'rods' become more common; they capture less detail and don't detect colour, but have much better light sensitivity.
Quote: Original post by phantom
No, that's to do with the structure of the human eye: the highest 'resolution' is at the focal point, which has a high number of 'cones' to detect colour. Away from that focal point, 'rods' become more common; they capture less detail and don't detect colour, but have much better light sensitivity.

Yep, and that's also why we don't actually see "images". All we see is an illusion created by our brain. We're running around seeing the world through a minuscule pinhole, while everything around it is completely blurry. Due to constant unconscious eye movement, we're basically scanning the environment on the go through that little pinhole, while our brain uses the partial information to build and continuously update a mental image of our surroundings. And most of that mental image is not even well defined.

That's also why it is so hard to compare an image on a screen with what humans really see - we don't even see complete images, only bits and pieces that continuously fade in and out of our consciousness.

Oh, and you can actually train the brain to make more use of the peripheral vision field. If you like watching stars at night with your naked eyes, it's often better to look at them slightly off-fovea, since the periphery is much more light sensitive.
Quote: Original post by Sirisian
oh lol. Not sure. I've never seen Bob Ross put a lens flare. nuff said.


Bob Ross paints lens flares out of photographs... before they're developed.
Well, here's something that I've noticed that I think is often overlooked: monitors are not as bright as the sun. OK, this on its own is not insightful at all, but one of the main criticisms is that "bloom" and lens flares don't help simulate "what the eye sees." This is true, sort of. But, for whatever reason, lens flares and weird fake bloom effects do a very good job of tricking me into thinking I'm looking at a very bright light, like the sun, when in actuality it's the same brightness as the text box I'm typing in.
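That trick is usually implemented as a bright-pass, a blur, and an additive composite: the halo bleeding around a bright spot is what reads as "brighter than the monitor can display". A rough grayscale sketch (the threshold, kernel size, and strength are illustrative values, not from any particular engine):

```python
import numpy as np

def bloom(img, threshold=0.8, strength=0.5):
    """Fake bloom: isolate over-threshold pixels, blur them, add them back."""
    bright = np.where(img > threshold, img - threshold, 0.0)  # bright-pass
    kernel = np.ones(7) / 7.0                                 # separable box kernel
    blur = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, bright)  # horizontal pass
    blur = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blur)    # vertical pass
    return np.clip(img + strength * blur, 0.0, 1.0)           # additive composite
```

A single fully bright pixel fed through this ends up smeared over a 7x7 halo on top of the original image, which is all the effect really is.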

The point is that things like "real" HDR with tone mapping or dynamically adjusting exposure simulate things that the human eyes and brain already do, which is very cool and obviously makes game scenes, you know, actually visible, but they actually hurt the illusion of a real scene entering the real eyes of the player, whereas cheesy effects like lens flares manage to actually improve this illusion (though in the case of lens flares I can't really explain why).

The truth is that until monitors have an absurdly high contrast ratio and a high color depth to go with that (that is, an actual high dynamic range), it really is more effective to simulate photographs than to try to simulate "real vision."
-~-The Cow of Darkness-~-
It's just my eyes then?
If I look at a mainly bright area, my eyes adapt, so I will see the bright areas fine, but the smaller dark areas fade to black.
The opposite also holds: if I look at a mainly dark area, my eyes adapt, so the few bright areas will saturate to white. Only under real light of course, not images on a monitor.
This is simulated pretty well with computer "HDR", imho.

My pupil can only control the overall brightness of the image, not individual parts. Okay, the "tonemapping" in my brain compensates a lot, but it cannot handle the full brightness range (at one time) that's shown in the bottom-right image (if it was taken on a very sunny summer noon).
No matter how cool my brain is, if the info doesn't reach my retina, then it can't reproduce anything from it.
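The adaptation behaviour described above is roughly what a global auto-exposure pass does: drive the exposure from the scene's log-average luminance, then let whatever falls outside the displayable range saturate. A toy sketch (the 0.18 `key` is the common mid-grey convention; the scene values are illustrative):

```python
import numpy as np

def auto_expose(hdr, key=0.18):
    """Global eye-style adaptation: expose to the scene's log-average luminance."""
    avg = np.exp(np.mean(np.log(hdr + 1e-6)))  # log-average luminance of the scene
    exposed = key * hdr / avg                  # pull the average toward mid-grey
    return np.clip(exposed, 0.0, 1.0)          # out-of-range values saturate
```

In a mostly bright scene, the few dark pixels crush to black; in a mostly dark scene, the few bright pixels clip to white, which matches the bright-area/dark-area behaviour described above.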

Okay, I have a little nyctalopia, but not so strong.

This topic is closed to new replies.
