
Has the improvement in technology encouraged sloppy programming?

Started by September 21, 2009 09:19 AM
29 comments, last by Fiddler 15 years, 1 month ago
Quote: Original post by Extrarius
Wirth's Law - "Software is getting slower more rapidly than hardware becomes faster."


I'm not so sure this holds anymore. Even a low-end computer can run 95% of the applications used by a typical user nowadays (who needs a quad-core and 4GB of memory to run *Word*?).

Games have traditionally pushed hardware capabilities but with most games targeting consoles there is an upper limit to graphics complexity. Indeed, I've yet to find a game that doesn't run full speed on my mid-range 4850 (Crysis excluded).

Obviously, Wirth's Law still applies in the workstation and server markets but these have a completely different class of requirements from the hardware.

Quote: Original post by Oxyd
Quote: Original post by Fiddler
If games were programmed in assembly, the game development scene would have remained somewhere in the mid-90s level of complexity.


Wait, complexity is a good thing now? I'm one from the bunch who prefer oldish games to the new ones. I don't want a game to be complex, I want it to be fun -- and I think I'm not the only one.


What's stopping you from writing an old-school game in OCaml and OpenGL instead of C and software blitters?

The game would still be old-school but you'd finish it with one tenth the effort and would probably produce much less sloppy code to boot.

Human-efficiency vs machine-efficiency.

Quote: Original post by scottrick49
I wouldn't say it is encouraging sloppy programming. It just doesn't really encourage good programming.


Wait, you are saying that assembly encourages good code and Haskell doesn't? You are mistaken. :-)

When's the last time you saw a goto in C? (Unavoidable in assembly!) A dangling pointer in C#? (Unavoidable in C/C++.) A null reference exception in OCaml? (Unavoidable in C/C++/C#.)

The higher-level the language, the less sloppy your code becomes.

Quote: Calin
i.e. the hardware drives the programming techniques, not the other way around. If we had 10 GHz processors we would still do centralized programming rather than distributed, and programmable hardware wouldn't exist.


Quoted for truth!

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

Quote: Original post by Fiddler
When's the last time you saw a goto in C? (Unavoidable in assembly!) A dangling pointer in C#? (Unavoidable in C/C++.) A null reference exception in OCaml? (Unavoidable in C/C++/C#.)

The higher-level the language, the less sloppy your code becomes.


Just because it bugs me: the use of goto statements in and of itself isn't sloppy programming; it's the indiscriminate use of goto statements that is. So what if they're unavoidable in assembly? They're not meant to be avoided in assembly. Though other languages might be superior technology, permitting higher complexity, they do not necessarily result in superior-quality products.

Quote: Original post by Extrarius
Personally, I think one big cause of software slowdown is that more software is produced these days, which has caused the number of programmers to increase faster than the number of good programmers. The number of people with passion, talent, and skill isn't nearly large enough to meet the demand.


I believe that this describes the situation very well. Technology has advanced enough to make it easier for a larger number of people to learn programming basics. Of these people only a small number have passion or talent, or ever develop any kind of expertise. However, all the rest still get programming jobs and end up coding a passable quality of product. Over time that product requires enhancements that were never imagined necessary. It becomes unwieldy and difficult to maintain. This creates a perceived need to enhance the technology. And since the technology is now easier to use, it's easier to teach more prospective programmers.

Having more programmers isn't necessarily a bad thing. Technology that allows for more complex products isn't a bad thing either. I think that completely losing sight of how we got to be able to produce those complex products is a bad thing.

Aside: I stumbled across this this morning. I have no desire to go back to those days of programming, nor would I call the techniques the guy used to develop his blackjack game appropriate. Writing a program that you yourself are unable to modify doesn't strike me as good practice at all. Still, I have an appreciation for what he was trying to accomplish.




Quote: Original post by LockePick
If you're aware that you're "not the only one" then you should also be aware that this is a big industry that provides a wide range of experiences to a wide range of people with a wide range of preferences, some of which include complexity as entertainment. So what exactly is not a good thing, now? Everyone not catering specifically to you?

What's not a good thing is introducing a complex solution where a simple one would suffice. You are definitely right that sometimes complexity is the fun part: simulators of various kinds come to mind first. I'm just saying I get the feeling that most of the complexity today is introduced "just because we can". If this industry is as wide, and serves as wide a spectrum of expectations, as you're saying, where are the AAA-rated titles that are simple?
Quote: Original post by Fiddler
Quote: Original post by scottrick49
I wouldn't say it is encouraging sloppy programming. It just doesn't really encourage good programming.


Wait, you are saying that assembly encourages good code and Haskell doesn't? You are mistaken. :-)


I was referring more to the fact that in a low resource environment, sloppy programming will be more noticeable. Today, many applications are running on systems that are far more powerful than necessary, so even if they are coded poorly, nobody will ever notice.

In a low resource environment, good programming is rewarded because the user will have a better user experience. In a high resource environment, it doesn't really matter if you do it 'right' or 'wrong' since the speed difference (while maybe an order of magnitude in difference) is imperceptible to the user.
scottrick49
I pretty much agree with what Anon Mike stated.

For me, it's not that improved technology has encouraged sloppy programming, rather, improved technology has lowered the standard of who gets hired as a programmer.

I honestly believe that programming is an art form. Sure, you can train yourself to be better, but some people are just more naturally gifted than others. The problem is that regardless of whether someone is a good programmer because of talent or practice, in the end that person commands a higher salary.

Honestly, efficiency is a thing of the past. Forget about new technologies such as languages helping with efficiency and bugs. Sure, they help a little bit. But in the end, a compiler only produces a program that does what we tell it to do. It might be able to eliminate instructions or reorder them to speed things up a little, but those are micro-optimizations compared to a horrible algorithm that slows the program down more than it should. And there will never be a language or compiler that can correct that appropriately, because how is a compiler supposed to know what you wanted to do, or why you did something one way rather than another?

The days of needing to be a competent programmer are long gone. Sure, there are some companies which hire talented people, but for the most part, most jobs do not require such a skill set. Who cares if the program takes 2 seconds or 2 minutes today? Tomorrow it will only take 1 minute at worst. The only speed demanded by jobs is how quickly you can get the product to the customer.
Quote: Original post by Nytegard
Honestly, efficiency is a thing of the past.


You don't do console game programming for a living do you? We still need to have a decent knowledge of the underlying hardware and can't just naively spraff code at the screen and expect it to work well.

Of course that doesn't stop some people from trying XD
I think that Mr. Braben (intentionally) compares apples and oranges.

Modern average computers have on the order of 30,000-200,000 times as much main memory as the machine he refers to, and a CPU that is easily 10,000-15,000 times as fast (not taking into account multi-core and SIMD), plus a GPU.
On the other hand, modern games are in no way comparable to 1984 games. I haven't played "Elite" in particular, but the "typical" early-80s games that I remember had abysmal diversity and terrible (or rather, nonexistent) AI. Let's not even talk about the graphics. Still, as we weren't used to anything better at the time, they were darn amazing.

While it is certainly worthwhile to write hand-optimized assembly on a 32K 8-bit non-superscalar machine, this is nowhere near true on a modern system. High-level languages make a flawless implementation, and its maintenance, much easier, and the shortest executable code is not necessarily the fastest. Writing assembly that rivals a compiler's output in speed on a superscalar machine is hard, to say the least.

1980s development was mainly about implementing algorithms with your software. Today's development is focussed on using algorithms (and libraries that use these) that have already been implemented to actually produce what you want. General-purpose standard libraries may not be 100% ideal for every case, and they may add some bloat, but they are usually well tested and mostly error-free, which is a big plus for a software's quality. I wouldn't call using what's proven to work "sloppy".

On the other hand, assets such as textures and sounds typically take several hundred megabytes (and more), so a few dozen kilobytes more or less don't really matter.
Quote: Original post by Nytegard
The days of needing to be a competent programmer are long gone.

The only speed demanded by jobs is how quickly you can get the product to the customer.

Yes, the only speed demanded by most jobs is quickly getting a product out to the customer, but that very much requires competent programmers.
New technology and methodologies have given us the option of trading programming time/complexity for CPU time/memory/etc. You can still drop down into C/ASM for programs that you need to optimize at that level. Heck, most higher-level languages let you drop down a level just for the parts that need it. What we now have are tools that allow us to focus on writing working code and to identify the parts that need attention.

That being said, I find that most of the people who complain about software bloat do one or more of the following:

1) Don't remember how buggy old software was
2) Don't remember how insecure old software was
3) Mistake low latency for low throughput
4) Severely underestimate the capability of current software and hardware
5) Severely overestimate the capability of old software and hardware
6) Enjoy writing low-level code, and get upset that development is shifting to higher-level languages in more and more areas
Quote: Original post by cyansoft
Quote: Original post by Nytegard
The days of needing to be a competent programmer are long gone.

The only speed demanded by jobs is how quickly you can get the product to the customer.

Yes, the only speed demanded by most jobs is quickly getting a product out to the customer, but that very much requires competent programmers.
Or a lot of programmers, which is often the case.

The demand for software is much higher than the supply of highly skilled programmers. For many problems, though, you in fact don't need highly skilled programmers to generate software. You just need a lot of programmers who are willing to hack on it until it works well enough to shove out the door. Additionally, highly skilled programmers aren't that easy to identify without a lot of exposure to their past work. Couple that with the fact that the typical means of formally teaching programmers their craft involves little to no emphasis on design or software architecture, and you end up with a big mess indeed.

Then again, there were 'sloppy' programmers 10 years ago. And 15 years ago. We now expect our computers to do a lot more of our everyday tasks than we used to, and this results in a huge surge in the need for software in domains where efficiency is less important, and where even correctness in the strictest sense is in many cases not vital.

In the domains where it is important, you still see software held to a very high standard. I would argue that it is held to a higher standard than ever before, actually, since we have more formal tools for examining programs than we have ever had. We have domain-specific languages for reasoning about things like compiler semantics preservation and concurrency control, and we have stochastic models for examining fault tolerance in systems. We have validation tools that can prove constraints. All this, with compilers that can run circles around the hand-tuning of yesteryear. Arguably, the continued use of things like C and inlined assembly is adverse to this effort. While more 'efficient' in terms of cycles, they are drastically more difficult to reason about than something like Haskell or C#/Java, or whatever future [better] language we may come up with next.

This topic is closed to new replies.
