So I've heard that rendering front-to-back can increase performance thanks to the z-buffer (occluded fragments fail the depth test early and are never shaded). I understand how this works, but what happens to blending? For blending to work correctly, you need to render from back to front. Moreover:
For blending, the technique I know and use is this: every frame, I compute each object's distance from the camera, sort the objects in descending order of distance, and then render the farthest objects first.
This has two disadvantages: first, computing the distances and sorting takes time, and second, I can't render front-to-back. The first disadvantage can be mitigated fairly well with a tree-based priority queue, but I still have to visit every object to build it (O(n log n) total).
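For reference, the back-to-front pass I'm describing looks roughly like this (a minimal sketch in Python for clarity; the object layout and names are made up, and the actual draw calls are omitted):

```python
# Hypothetical minimal setup: each transparent object is just a name
# plus a world-space position.
camera = (0.0, 0.0, 0.0)
transparent_objects = [
    {"name": "near",   "pos": (0.0, 0.0, -5.0)},
    {"name": "far",    "pos": (0.0, 0.0, -20.0)},
    {"name": "middle", "pos": (0.0, 0.0, -10.0)},
]

def distance_sq(obj, cam):
    # Squared distance is enough for ordering and skips the sqrt.
    return sum((p - c) ** 2 for p, c in zip(obj["pos"], cam))

# Sort farthest-first so alpha blending composites correctly back to front.
draw_order = sorted(transparent_objects,
                    key=lambda o: distance_sq(o, camera),
                    reverse=True)

print([o["name"] for o in draw_order])  # ['far', 'middle', 'near']
```

This is the O(n log n) step I'd like to avoid, since it also forces the draw order to be exactly the opposite of the depth-friendly front-to-back order.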
Is there a way to combine this blending technique with front-to-back rendering? Or am I stuck unless my scene contains no transparent objects at all?