
Has the improvement in technology encouraged sloppy programming?

Started September 21, 2009, 09:19 AM
29 comments, last by Fiddler
Quote: Original post by kseh
I don't know, it seems to me that if you can find a use for those few hundred thousand transistors that were wasted, or speed up a routine that's called thousands of times a second by a couple of microseconds, there'd be some sort of benefit.


You are missing the forest for the trees.

If Intel cared about those few hundred thousand transistors, they would never have been able to design the 731-million-transistor monster that is the i7.

If Google cared about those few hundred thousand cycles lost to Python, they'd never be able to scale their applications to run on thousands of servers.

If games were still programmed in assembly, the game development scene would have remained at roughly the mid-90s level of complexity.

If hardware were programmed at a low level, you'd have to write different code for each and every GPU and sound card you wished to support (remember the pre-VESA days?). DirectX, OpenGL, and your OS steal *thousands* of cycles away from your game - would you trade them away for more "efficient" programming?

The more complex the hardware and software becomes, the more 'human-efficient' tools we need (i.e. higher-level, more abstract) in order to drive it. Every increase in human-efficiency costs in 'machine-efficiency' and increases hardware requirements. However, hardware capacity increases to more than match those requirements. In the end, which is more important: 'human-efficiency' or 'machine-efficiency'?

Personally, I stand firmly on the side of human-efficiency: the average human life is 780 months. If I can use OCaml to create a program in 1 month versus 3 months in C++, I'll use the former even if the latter runs faster. Computers are here to serve us, not vice versa!

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

Most programs are bottlenecked in maybe 1% of the code anyway. So why not write readable code first, and then optimize the bottlenecks by profiling the program later?
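As a minimal sketch of that workflow (the timing helper and workload below are made up for illustration - a real profiler such as gprof, perf, or VTune gives a far better picture), timing a suspected hot spot in C++ might look like this:

#include <chrono>
#include <cstdio>

// Hypothetical poor-man's profiler: wall-clock a suspected hot spot
// before reaching for micro-optimizations.
template <typename F>
double time_ms(F&& f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    double ms = time_ms([] {
        volatile long sum = 0;  // stand-in workload for the real hot spot
        for (long i = 0; i < 100000000; ++i) sum = sum + i;
    });
    std::printf("suspected hot spot: %.2f ms\n", ms);
}

Only once numbers like these single out a routine is it worth trading readability for speed there.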
Compilers are good at generating code; that's what they do. Once you start fiddling with assembly, in 99% of cases your program's overall efficiency goes down, in terms of speed anyway. Whole-program optimization and link-time code generation have a better big-picture view of your source code than you do. And at least some compilers turn optimizations off for the whole function as soon as they see a little inline assembly.

That's not to say it can't be done - for the 1% the profiler points at.

In terms of executable size, I don't really care.
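To make that concrete (my own illustrative example, not from the thread): a plain loop like the one below is the kind of thing modern compilers handle well on their own - with suitable flags (-O3, plus -ffast-math for a float reduction like this) it is typically unrolled and vectorized, and because it's ordinary C++ it stays visible to inlining and link-time code generation. A hand-written inline-assembly version would opt out of all of that.

#include <cstddef>

// A straightforward reduction. Given the right flags, the compiler will
// usually unroll and vectorize this itself; an inline-assembly version
// would block inlining and, on some compilers, disable optimization for
// the whole enclosing function.
float dot(const float* a, const float* b, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}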
I wouldn't say it is encouraging sloppy programming. It just doesn't really encourage good programming.
scottrick49
Wirth's Law - "Software is getting slower more rapidly than hardware becomes faster."

Personally, I think one big cause of software slowdown is that far more software is produced these days, which has made the number of programmers grow faster than the number of good programmers. The number of people with passion, talent, and skill isn't nearly large enough to meet the demand.

That produces such common mistakes as GUI applications with sorted lists that re-sort the list after each item insertion, even when thousands of items need to be inserted before the user can do anything. Such mistakes turn what should be O(n log n) into O(n^2 log n), O(n^3), or worse, and can make, for example, displaying the file list of a heavily populated directory take minutes when it should take milliseconds.
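A minimal C++ sketch of that exact mistake and its fix (the function names are mine, purely for illustration):

#include <algorithm>
#include <string>
#include <vector>

// The anti-pattern: re-sorting after every single insertion.
// Each call costs O(n log n), so n insertions cost O(n^2 log n).
void insert_and_resort(std::vector<std::string>& items, std::string item) {
    items.push_back(std::move(item));
    std::sort(items.begin(), items.end());
}

// The fix: append the whole batch first, then sort once - O(n log n) total.
void insert_batch(std::vector<std::string>& items,
                  std::vector<std::string> batch) {
    items.insert(items.end(),
                 std::make_move_iterator(batch.begin()),
                 std::make_move_iterator(batch.end()));
    std::sort(items.begin(), items.end());
}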

As for optimization: of course micro-optimization shouldn't be done unless absolutely necessary. But at the same time you need a full suite of test cases to profile with, and they should include extreme situations - say, several orders of magnitude more items than you anticipate most users having - so that the program works well for everybody all the time.
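Continuing the illustrative sketch above, such a stress test can be as simple as running the routine at a few sizes and watching how the time grows (roughly 4x per quadrupling of n for O(n log n), 16x or worse for O(n^2 log n)):

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical scaling check: quadruple n each step and watch the timings.
int main() {
    for (std::size_t n : {500u, 2000u, 8000u}) {
        std::vector<std::string> items;
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i) {
            items.push_back(std::to_string(i));
            std::sort(items.begin(), items.end());  // the anti-pattern
        }
        auto t1 = std::chrono::steady_clock::now();
        std::printf("n=%zu: %.1f ms\n", n,
                    std::chrono::duration<double, std::milli>(t1 - t0).count());
    }
}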
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
>>Wirth's Law - "Software is getting slower more rapidly than hardware becomes faster."

I'd say Wirth's law has no basis in reality (except perhaps in some bad examples); what follows is more likely what's going on.

Start up program X back on Win 3.11 and it will take longer than program X (version Y) does today on Vista.

The reason stuff is quicker now, I believe, is the limited memory back then (I forget how much you had - 4 to 32 MB perhaps). All I remember is that you'd start something and then hear the HD start to crunch (even alt-tabbing between programs was enough to set it off).

Nowadays, with GBs of memory, the HD doesn't work as much. Hell, I don't even think about it before starting a new program - I just start it. Back in Win 3.11 the question was whether I could keep what was currently in memory and still afford to start a new program without the PC grinding to a halt.
Sloppy code to me implies badly written code - not in the sense that it runs slowly, but that it's unreadable, inconsistent, hard to work with or extend, and buggy.
That, in my opinion, has nothing to do with the performance of the hardware you're working with: people who write sloppy code will write sloppy code no matter the machine they're working on.
With well-written code (which may not be the most performance-optimal solution) you can always read it and improve it easily, often without having to fully rewrite it or spend hours figuring out what it's meant to do in the first place.
Quote: Original post by Fiddler
If games were still programmed in assembly, the game development scene would have remained at roughly the mid-90s level of complexity.


Wait, complexity is a good thing now? I'm one of those who prefer older games to new ones. I don't want a game to be complex, I want it to be fun -- and I don't think I'm the only one.
Quote: Original post by Oxyd
Quote: Original post by Fiddler
If games were still programmed in assembly, the game development scene would have remained at roughly the mid-90s level of complexity.

Wait, complexity is a good thing now? I'm one of those who prefer older games to new ones. I don't want a game to be complex, I want it to be fun -- and I don't think I'm the only one.

If you're aware that you're "not the only one" then you should also be aware that this is a big industry that provides a wide range of experiences to a wide range of people with a wide range of preferences, some of which include complexity as entertainment. So what exactly is not a good thing, now? Everyone not catering specifically to you?
Pixelante Game Studios - Fowl Language
Quote: Computers are here to serve us, not vice versa

+1


If we talk about games alone, I think so far we've only been scratching the surface, driven mainly by the quick development of cheap processing power. Multicore and increasingly complex devices lie ahead. I think 'programming' will be stretched to its limits to cope with the new challenges and new types of machines that will emerge.
That is, the hardware drives the programming techniques, not the other way around. If we had 10 GHz processors we would still do centralized programming rather than distributed; programmable hardware like the GPU wouldn't exist (everything would be done by the CPU); and there would be far fewer programmers, because non-distributed programs require less code and the code is less diverse.

[Edited by - Calin on September 23, 2009 9:26:03 AM]

My project's Facebook page is “DreamLand Page”

