quote: Original post by Xai
NO MATTER how much training is required, NO MATTER how much code is reused ... the LIBRARIES, like OpenGL, and the game engines, like Unreal Tournament .. are HUGE advances in developer coding abilities IN THE LARGE ... and the proof is in the results ....
The comparison you are NOT making above is against custom-developed software. "Standards" improve the ability to push the envelope without worry - OpenGL, shader languages, network protocols, etc. None of those is reuse at the level I was responding to. They may contribute to larger movements, but they are not themselves pluggable systems, nor do they come anywhere near a reasonable definition of "components".
Engines are indeed components, but as mentioned they have their own problems - the level of customization required for novel projects, in particular, is often so high that reuse becomes insignificant or even counterproductive. For stock content, of course, it's easy to reuse systems - that's as true in my slice of the software field as in the gaming slice. But stock content is also not competitive.
Components, so we are clear, do USEFUL WORK. A brain that, for example, gives units basic pathfinding and target evaluation abilities is a component. A shader that produces a specifically lifelike explosion, coupled with the textures which flavour the mix, is a lower-level component. A level editor such as UnrealEd is a higher-level component, although at that level the definition starts to impinge on the lower strata of "systems", which are largely not the same thing.
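To make that concrete, here is a minimal sketch of the shape such a brain component might take. All the names here (UnitBrain, Vec2, and so on) are hypothetical, invented purely for illustration - the point is that the caller hands over the situation and gets decisions back, because the hard work lives inside the component:

#include <vector>

// Hypothetical types, for illustration only.
struct Vec2 { float x, y; };
struct Unit { Vec2 position; int team; };

// A component in the sense above: it does USEFUL WORK. The caller
// supplies the world state; the component owns the decisions.
class UnitBrain {
public:
    virtual ~UnitBrain() {}

    // Returns a full route from start to goal. Which pathfinding
    // algorithm runs (A* or otherwise) is the component's business.
    virtual std::vector<Vec2> findPath(const Vec2& start,
                                       const Vec2& goal) const = 0;

    // Picks the most promising target, or null if none is worth
    // attacking. The evaluation heuristics live inside.
    virtual const Unit* evaluateTargets(
        const Unit& self,
        const std::vector<Unit>& candidates) const = 0;
};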
quote: The current software developers' understanding of programming 3D is now AT A HIGHER LEVEL than it was 15 years ago, due to OpenGL-like libraries which provide an ABSTRACT MODEL which developers learn and master, which is HIGHER LEVEL than the previous model ... step by step we build bricks out of air ... on top of the previous bricks, slowly building our castles ...
I reject the classification of OpenGL as a component. OpenGL is the ultimate example of reuse at a low level. It doesn't do anything useful by itself. You have to make all the hard decisions; the only thing OGL gives you is a place to send the results so they reach the gfx card. Much the same principle holds for most of the abstractions that have been pervasive over the years - TCP/IP stacks are another excellent example. These are reuse, but they are not in and of themselves workers. They perform a useful function, but they don't automatically compress, decode, verify, or otherwise mangle your data so that you don't have to.
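For contrast, here is roughly what talking to OpenGL actually looks like (immediate mode, and assuming a rendering context has already been created by GLUT or the like). Notice that GL contributes no decisions of its own; every color, normal, and vertex is yours to supply:

#include <GL/gl.h>

// OpenGL does no useful work on its own here. You choose the
// color, the normal, the position of every vertex; GL's job is
// simply to ship the data off to the gfx card.
void drawTriangle()
{
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.5f, 0.0f);    // your decision
        glNormal3f(0.0f, 0.0f, 1.0f);   // your decision
        glVertex3f(-1.0f, -1.0f, 0.0f); // your decision
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}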
quote: and it IS component reuse, for all the components that you didn't have to cut in half ... just not for the ones you replaced.
I don't think we're coming at this from the same angle. Even if people reuse everything in an engine, if it's hurting them significantly to do so then the original point stands.
The subject of the thread is basically advancing the state of the art, in particular with respect to speed of development. One has to keep in mind, then, that the results of this faster development must compete with products developed under other paradigms. If a graphics engine isn't capable of delivering at a particular level, or a sound system doesn't blend things dynamically enough, or an AI uses too inflexible a conceptual model, or any of thousands of other small errors creep in, the developer is left at a disadvantage that costs them money.

If it takes 2 years to develop the game instead of 1 and it sells 10 times as much, was the extra time worth it? My answer is that mostly, yes, it is. And you can be certain that in almost any situation the percentage of reuse DROPS as that extra year drags by, because the number of needs which don't fit the original implementation is so very likely to grow. I'm sure some shops beat that particular metric, but as a general rule, as a system grows, its special-casing grows faster.
That's where reuse constantly falls down - it doesn't, in the long term, provide value commensurate with the effort required to harness it, or does so only marginally. You can reuse an engine for at most two generations before its assumptions are outdated. Give it thirty years, when the computational limits have been hit, and those trends will slow dramatically. But while hardware is evolving, software has to evolve as well - do more, do it better, with new tools.
Plus, the discipline as a whole is only just starting to tap the mental models of engineering. If the composition of the wood a building architect had at his disposal changed every month, and the definition of a hammer changed to a newer, better one every couple of days (especially if he had no objective metric against which to measure its performance), he'd go nuts trying to find the patterns of architecture.
ld