Quote:
Original post by Kylotan
No, it's not about the lack of bandwidth, it's about how rendering pipelines work. When you are sending pixel 200 to the gfx card, pixel 1 might not even have been rendered yet, never mind pixels 2-199. That's what makes the cards so fast - while one thing is being transformed, the next is being lit, the next is being projected, blended, checked against a z-buffer, etc. They incur a little bit of latency in return for maximum output bandwidth.
I know that pixels are rendered in parallel through multiple units, but the other stuff you said doesn't make sense to me.
Quote:
Original post by Kylotan
If you then want to take a snapshot, but want an accurate depiction, you have to wait for everything in that pipeline to make its way to the end.
Don't you always have to wait for everything to get to the end? Before you flip your buffers, the back buffer needs to be fully rendered. I don't get this part.
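Just so we're talking about the same thing, here is what I'd understand by "waiting for everything in that pipeline to make its way to the end". This is only a rough sketch, and I'm assuming Direct3D 9 since the API was never named in the thread; D3D9's event queries are the mechanism I know of for explicitly draining the pipeline:

```cpp
// Rough sketch, assuming Direct3D 9: make the CPU wait until the GPU
// has finished every command issued so far, i.e. the pipeline is empty.
#include <d3d9.h>

void WaitForGpuIdle(IDirect3DDevice9* device)
{
    IDirect3DQuery9* query = NULL;
    if (FAILED(device->CreateQuery(D3DQUERYTYPE_EVENT, &query)))
        return; // event queries not supported by this driver

    // Mark the end of the command stream...
    query->Issue(D3DISSUE_END);

    // ...then spin until the GPU reports it has reached that mark.
    // D3DGETDATA_FLUSH ensures buffered commands actually get submitted.
    while (query->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
        ; // busy-wait; a real app would probably yield here

    query->Release();
}
```

My point is: if a flip already implies this kind of wait every frame, I don't see what extra waiting a snapshot adds.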
Quote:
Original post by Kylotan
Then the whole pipeline has to stop while you copy data from the render surface back to main memory, because you don't want new data overwriting what you're reading. So you start incurring that latency once per read, rather than potentially just once per program.
This makes even less sense to me than the last one. Why would the pipeline stop while I read the data back to main memory? Once I render the AI scene frame into a custom render target, it's in video memory, not in the back buffer, as far as I know, and from there it's only a bandwidth issue.
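For reference, this is how I picture the readback; again just a rough sketch assuming Direct3D 9, with the device, render target surface, and dimensions as placeholders for whatever the app already has. As I understand it, GetRenderTargetData is where any stall would happen, because the copy can't start until the GPU has finished drawing into that target:

```cpp
// Rough sketch, assuming Direct3D 9: copy an offscreen render target
// back to system memory so the CPU can read the pixels.
#include <d3d9.h>

void ReadBackRenderTarget(IDirect3DDevice9* device,
                          IDirect3DSurface9* renderTarget,
                          UINT width, UINT height)
{
    IDirect3DSurface9* sysmem = NULL;
    if (FAILED(device->CreateOffscreenPlainSurface(
            width, height, D3DFMT_A8R8G8B8,
            D3DPOOL_SYSTEMMEM, &sysmem, NULL)))
        return;

    // This copy cannot begin until every queued draw call targeting
    // 'renderTarget' has finished, so the driver waits for the
    // pipeline to catch up before the transfer starts.
    if (SUCCEEDED(device->GetRenderTargetData(renderTarget, sysmem)))
    {
        D3DLOCKED_RECT rect;
        if (SUCCEEDED(sysmem->LockRect(&rect, NULL, D3DLOCK_READONLY)))
        {
            // rect.pBits now points at the pixel data in main memory.
            sysmem->UnlockRect();
        }
    }
    sysmem->Release();
}
```

If the render target is already fully drawn by the time I call this, I'd expect the cost to be just the transfer itself, which is why I keep coming back to bandwidth.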
Maybe I am wrong about something here, so please correct me.