I'm currently learning shadow mapping from online tutorials. As far as I understand, the technique involves two shader passes. The first pass renders the scene from the light's point of view and stores the result in a depth texture, where every pixel is just a floating-point depth value. The second pass renders the scene normally from the camera's point of view. The part I don't understand is how the technique can compare the depth values of these two images. Since the two viewpoints (the camera's and the light's) are different, how can their depths be compared at all?
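To make the question concrete, here is my rough understanding of the two passes, sketched in numpy rather than a real shader. All the names and the toy orthographic light matrix are my own invention, and I may well have the comparison step wrong; the sketch assumes a single directional light looking along +z:

```python
import numpy as np

def render_depth_from_light(points, light_view_proj, resolution=64):
    """Pass 1: record, for each light-space texel, the depth of the
    nearest surface the light can see."""
    depth_map = np.full((resolution, resolution), np.inf)
    for p in points:
        clip = light_view_proj @ np.append(p, 1.0)        # to light clip space
        ndc = clip[:3] / clip[3]                          # perspective divide
        u = int((ndc[0] * 0.5 + 0.5) * (resolution - 1))  # [-1,1] -> texel
        v = int((ndc[1] * 0.5 + 0.5) * (resolution - 1))
        depth_map[v, u] = min(depth_map[v, u], ndc[2])    # keep nearest depth
    return depth_map

def in_shadow(p, light_view_proj, depth_map, bias=1e-3):
    """Pass 2: for a point visible to the camera, re-project it into the
    LIGHT's space and compare its depth with what the light recorded."""
    res = depth_map.shape[0]
    clip = light_view_proj @ np.append(p, 1.0)
    ndc = clip[:3] / clip[3]
    u = int((ndc[0] * 0.5 + 0.5) * (res - 1))
    v = int((ndc[1] * 0.5 + 0.5) * (res - 1))
    # Farther from the light than the stored depth -> something occludes it
    return ndc[2] > depth_map[v, u] + bias

# Toy orthographic light matrix: x, y pass through; z in [0, 10] -> [-1, 1]
light_vp = np.array([[1., 0., 0.,  0.],
                     [0., 1., 0.,  0.],
                     [0., 0., 0.2, -1.],
                     [0., 0., 0.,  1.]])

# An occluder sits between the light and a receiver on the same light ray
occluder = np.array([0., 0., 2.])
receiver = np.array([0., 0., 6.])
shadow_map = render_depth_from_light([occluder, receiver], light_vp)
print(in_shadow(receiver, light_vp, shadow_map))  # True: blocked by the occluder
print(in_shadow(occluder, light_vp, shadow_map))  # False: nothing in front of it
```

Is this re-projection into the light's space the piece I'm missing, or does the comparison work some other way?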