Quote: Original post by kseh
I don't know, it seems to me that if you can find a use for those few hundred thousand transistors that were wasted, or speed up a routine that's called thousands of times a second by a couple of microseconds, there'd be some sort of benefit.
You are missing the forest for the trees.
If Intel cared about those few hundred thousand transistors, they would never have been able to design the 731-million-transistor monster that is the i7.
If Google cared about those few hundred thousand cycles lost to Python, they'd never be able to scale their applications to run on thousands of servers.
If games were programmed in assembly, the game development scene would have remained stuck at a mid-90s level of complexity.
If hardware were programmed at a low level, you'd have to write different code for each and every GPU and sound card you wished to support (remember the pre-VESA days?). DirectX, OpenGL, and your OS steal *thousands* of cycles away from your game - would you trade them away for more "efficient" programming?
The more complex hardware and software become, the more 'human-efficient' the tools we need to drive them (i.e. higher-level, more abstract). Every gain in human-efficiency costs some machine-efficiency and raises hardware requirements - but hardware capacity grows more than enough to cover them. In the end, which is more important: 'human-efficiency' or 'machine-efficiency'?
Personally, I stand firmly on the side of human-efficiency: the average human life is about 780 months. If I can use OCaml to create a program in 1 month versus 3 in C++, I'll use the former even if the latter runs faster. Computers are here to serve us, not vice versa!
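
Just to make the point concrete with a throwaway sketch (nothing from a real project, purely illustrative): in OCaml, a small utility like "print the unique lines of a file in sorted order" is a handful of lines, with no memory management or iterator plumbing to get wrong.

(* Throwaway illustration: print the unique lines of a file in sorted order. *)
let read_lines ic =
  (* Read lines until End_of_file, then restore original order. *)
  let rec loop acc =
    match input_line ic with
    | line -> loop (line :: acc)
    | exception End_of_file -> List.rev acc
  in
  loop []

let () =
  let ic = open_in Sys.argv.(1) in
  let lines = read_lines ic in
  close_in ic;
  List.iter print_endline (List.sort_uniq compare lines)

Is it as fast as a hand-tuned C++ version? Probably not. Did it take five minutes to write and will it still be obvious what it does in a year? Yes - and that's the trade I'm happy to make for most code.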