I don't know if you'd agree with me in asking you this, but why can't more recent game entries look this real? So would it be correct to say that if I were to use the same sort of photo-mapping technique they used for this game, it would be a trade-off: giving up dynamic lighting in favour of this more 'photorealistic' look, and vice versa?
You can ask whatever you want.
A simple example of the problem is -- imagine standing in a "T-pose" (arms outstretched sideways) outdoors in the sun, at midday. The backs of your hands are lit by the sun, but your palms are in shadow.
Let's say we then "photoscan" you like this and put you in a game.
When the game character animates so that their palms are facing upwards, their palms are still shadowed, and the backs of their hands (which are now facing downwards) are fully lit... which looks very wrong.
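The T-pose problem can be sketched numerically. This is my own illustration (not from any engine): diffuse lighting is proportional to the dot product of the surface normal and the light direction, but a phototexture freezes that value at scan time.

```python
# Why baked lighting breaks when a photoscanned model animates.
# Diffuse shading ~ max(dot(N, L), 0), but a phototexture bakes in
# the value from the moment of capture.

def lambert(normal, light_dir):
    """Diffuse intensity for unit-length normal and light direction vectors."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(dot, 0.0)

sun = (0.0, 1.0, 0.0)            # midday sun, straight overhead

back_of_hand = (0.0, 1.0, 0.0)   # faces up in the T-pose: fully lit
palm = (0.0, -1.0, 0.0)          # faces down in the T-pose: in shadow

baked_back = lambert(back_of_hand, sun)   # 1.0, frozen into the texture
baked_palm = lambert(palm, sun)           # 0.0, frozen into the texture

# The character now rotates their hands so the palms face the sun.
# Dynamic lighting recomputes the dot product with the NEW normal:
dynamic_palm = lambert((0.0, 1.0, 0.0), sun)   # 1.0 -- correct
# ...but the baked texture still shows 0.0: a sunlit palm rendered
# in shadow, which is exactly the artifact described above.
```

The numbers are the point: the baked texture can only ever show the capture-time value, while a dynamic engine recomputes it every frame.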
Another issue is that almost every real-world material is view-dependent -- this is a fancy way of saying that the appearance of the material is different, depending on the angle that you view it from.
e.g. if you look directly at a window, you can see through it, but if you look at it at a glancing angle, it starts to act more like a mirror (the Fresnel effect).
The extreme example of this is an actual mirror -- every photograph that you take of that surface is going to be completely different depending on where you place the camera!
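The window example above is usually modelled with Schlick's approximation of the Fresnel effect: reflectance rises from a base value F0 at normal incidence toward 1.0 at grazing angles. The F0 value of 0.04 is a commonly used dielectric (glass-like) constant; the specific numbers here are illustrative.

```python
# Schlick's approximation: F(theta) = F0 + (1 - F0) * (1 - cos(theta))^5
# cos_theta is the cosine of the angle between the view direction and
# the surface normal.

def fresnel_schlick(cos_theta, f0):
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

glass_f0 = 0.04

print(fresnel_schlick(1.0, glass_f0))  # looking straight on: ~0.04, mostly see-through
print(fresnel_schlick(0.0, glass_f0))  # fully grazing: ~1.0, acts like a mirror
```

Evaluating it at a few angles reproduces the window behaviour described above: almost transparent head-on, increasingly mirror-like as the angle becomes glancing.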
You might think this isn't a big deal for skin, but these view-dependent reflections make up about 3-4% of the total light that your eye receives from skin. It's a subtle detail, but still very important in making something look believable.
Those old photo-textured games looked very good at the time, but they don't actually look that great next to modern games anymore. You can still use that technique, but only if you're happy with completely fake lighting. It doesn't come off as photorealism in the end, but as a kind of weird hyperrealism, due to the incorrect specular highlights and incorrect directional lighting.
Modern games do actually still use a variation on this technique though!
e.g. Here's an actor standing in the middle of 72 high resolution cameras:
And here's a 3D model and a photographic texture reconstructed from those 72 photographs:
Or here's some skin rendered in a modern game engine, using fully dynamic lighting (also using a 3D model and skin colour texture captured from a 3D scan like the one above):
In these examples, the artists have to remove all the lighting information from the "phototexture" so it appears as if the object was standing in a white room where all the walls/roof/floor were white lights, or outside on a cloudy day. They also have to remove any "highlights"/"sheen", as that's the view-dependent part of the lighting. After that, they're left with a fairly flat and boring colour texture.
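The core idea of that de-lighting step can be sketched very roughly: if you can estimate how much light each texel received at capture time, dividing it out leaves the flat base colour. Real pipelines are far more involved (cross-polarised capture, multi-light solves, manual cleanup); the function and values below are my own invention to show the principle.

```python
# Rough sketch of "de-lighting" a phototexture: divide the captured
# colour by an estimate of the lighting it received, recovering the
# flat, "boring" base colour (albedo) described above.

def delight(captured_rgb, estimated_irradiance):
    """Divide captured colour by per-texel lighting; clamp to [0, 1]."""
    return tuple(min(c / max(estimated_irradiance, 1e-6), 1.0)
                 for c in captured_rgb)

# Two texels of the same skin, photographed under different lighting:
sunlit = (0.9, 0.72, 0.63)      # received full light at capture time
shadowed = (0.45, 0.36, 0.315)  # same material, but only half the light

print(delight(sunlit, 1.0))     # -> (0.9, 0.72, 0.63)
print(delight(shadowed, 0.5))   # -> same albedo, lighting removed
```

Once both texels resolve to the same albedo, the texture no longer disagrees with whatever lighting the engine applies later.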
Then, they have to hand-author specular/roughness textures that determine what the highlights/sheen will look like, being careful to get these to match real skin.
Then you put that colour texture, the specular/roughness textures, and the normal map (which you also got from photo-scanning) into the game engine, and it can "re-light" the model dynamically.
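Putting those maps together, a frame of dynamic re-lighting looks roughly like the sketch below. This is a generic Lambert-diffuse plus Blinn-Phong-specular model standing in for whatever BRDF a real engine uses, and all the per-texel input values are invented sample numbers, not data from any actual scan.

```python
# Sketch of per-texel dynamic re-lighting from the three inputs named
# above: a flat colour texture (albedo), specular/roughness values,
# and a normal from the scanned normal map.
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def shade(albedo, normal, roughness, spec_strength, light_dir, view_dir):
    n = normalize(normal)      # from the scanned normal map
    l = normalize(light_dir)   # changes every frame in the engine
    v = normalize(view_dir)

    # Diffuse: flat colour texture multiplied by the dynamic lighting term.
    ndotl = max(sum(a * b for a, b in zip(n, l)), 0.0)
    diffuse = tuple(c * ndotl for c in albedo)

    # Specular: the view-dependent part, driven by the authored maps.
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half vector
    ndoth = max(sum(a * b for a, b in zip(n, h)), 0.0)
    shininess = 2.0 / max(roughness ** 2, 1e-4)  # rougher -> broader highlight
    specular = spec_strength * ndoth ** shininess

    return tuple(min(d + specular, 1.0) for d in diffuse)

# One texel, with sample values for each map:
pixel = shade(albedo=(0.8, 0.6, 0.5), normal=(0.1, 0.9, 0.2),
              roughness=0.6, spec_strength=0.03,
              light_dir=(0.0, 1.0, 0.0), view_dir=(0.0, 0.5, 1.0))
```

Because the light and view directions are inputs rather than baked into the texture, the same texel shades correctly however the model animates or the camera moves.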
Examples of the kind of software and services for capturing this data are below:
http://www.agisoft.ru/products/photoscan
http://ir-ltd.net/