
Linux, Opensource, Games

BTW: Mandrake's URPMI solves those RPM dependency problems automatically. Just add RPM sources (CDs, servers, etc.) and type "urpmi i-want-this-package.rpm"...
Newbie programmers think programming is hard. Amateur programmers think programming is easy. Professional programmers know programming is hard.
Many of your Linux users will be seasoned at getting things installed and working. It's a completely different audience: the support is different, and you'd likely get more helpful (i.e., developer-centric) feedback from users who are testing your program or who come across bugs/errata.

Releasing source code to the Linux world mightn't be good for client/server games, as users could find holes, abuse them, and ruin the game for other players, but it might work (and might mean a better-received product) for single-player games. I suppose it's all just a matter of testing the waters and seeing what fits in your case.




MatrixCubed
http://MatrixCubed.cjb.net



quote: Original post by MatrixCubed
Releasing source code to the Linux world mightn't be good for client/server games, as users could find holes, abuse them, and ruin the game for other players, but it might work (and might mean a better-received product) for single-player games. I suppose it's all just a matter of testing the waters and seeing what fits in your case.


On the contrary: your bugs will be exposed to you, allowing you to fix them (furthering your knowledge in the process) and thwarting the smaller group of people who never needed the source to engender exploits in the first place. As a result, your game improves.

Security through obscurity doesn't work.


.zfod
Sorry, it doesn't work that way.
I am working on an MMORPG, so I have studied this problem.
While everything is stored and verified by the server (and the server is very anal about security: it checks everything, and if you try to cheat you are logged and disconnected), there are some things the server can't verify.
For example, a "night vision" spell, which allows you to see at night. In an isometric environment like my game's, the server still sends you the characters in your range; it's up to the client to show them bright or dark according to the global luminosity.
Now, if you have the source code, you can modify it so that pressing a button makes it full day in the middle of the night. Otherwise, you'd have to cast a spell (and lose some reagents, mana, etc.).
There is also the problem of a button making the walls disappear or turn transparent so you can see through them, etc.
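
To make the problem concrete, here is a minimal sketch of the kind of client-side decision involved (all names are invented for illustration; this is not code from any actual client):

    // Sketch of the client-side trust problem: the server sends every
    // character in range regardless of lighting; only the CLIENT decides
    // how brightly to draw them.
    struct Player { bool nightVision; };      // set when the spell succeeds
    struct World  { float globalLuminosity; Player player; };

    float ComputeBrightness(const World& world)
    {
        if (world.player.nightVision)         // the spell's cost was paid
            return 1.0f;                      // draw as full daylight
        return world.globalLuminosity;        // ~0.1 at night, 1.0 at noon
    }

With the source in hand, a cheater only has to make this function return 1.0f unconditionally and recompile: permanent night vision, with no spell cast and no reagents or mana spent.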

Height Map Editor | Eternal Lands | Fast User Directory
quote: Original post by zfod
Security through obscurity doesn't work.

Actually, as I have stated in previous threads, there is no conclusive proof that this is true. While I certainly believe it is true, I am not going to proclaim it as fact until a number of studies have been done proving it.

However, if you're talking about security ONLY through obscurity, you are certainly correct. In the real world, though, closed-source software does in fact implement actual security measures; you just can't read (in code) what they are.

I would still be very interested in seeing a study that compares open source vs. closed source security. It would just be hard to design an experiment that truly isolates this, since it's obvious what results you'll get if you test, say, a crappy closed-source program against a hardened open-source one, or vice versa.
quote: Original post by Strife
quote: Original post by zfod
Security through obscurity doesn't work.

Actually, as I have stated in previous threads, there is no conclusive proof that this is true. While I certainly believe it is true, I am not going to proclaim it as fact until a number of studies have been done proving it.
Simple proof: Assume we have two security-dependent products, A and B. A is open source, B is closed. Both of them have wide distribution. Good examples for A and B would be Apache and IIS. If a vulnerability is discovered in product A, there are two possible outcomes. The first is that the individual discovering the vulnerability is malicious and seeks to use it to exploit other users of that software. Since product A is open source, however, it's only a matter of time before other individuals - at least one of whom must be "good" - discover the same vulnerability, users are notified, and a fix is suggested/offered, either by the product authors or by users. Outcome 2 is that the good user discovers the vulnerability first and the malicious user has a much smaller window of opportunity to exploit it. Since open source users are typically comfortable with frequent upgrades and patch proactively, the fix propagates rapidly and all's relatively well with the world.

With product B, the vulnerability is far more likely to be discovered by a malicious individual, because "good" users of proprietary software tend to assume that the product is buggy but there's nothing they can do about it; very few good users perform security audits on their software, especially if its release is accompanied by serious marketing spiel about its updated security focus or features. Consequently, the vulnerability is exploited and, because the sources are closed, it takes longer for a fix to be developed and distributed.

While these aren't solidly mathematically derived reasonings for why obscurity isn't security, they are socially accurate. Many crypto specialists have asserted that a system is only secure when a potential "thief" is fully aware of all the details of how the system works - in the realm of software, even possessing the implementation - and yet is unable to co-opt the system without access to the system secret (password, private key, etc.). In short, a system is only as secure as its secret. This is, after all, the entire basis of public-key infrastructure.

This doesn't inherently support the notion of making both client and server open source in a game that depends on consistency, though. If you choose to make the client open source, then you must effectively distrust everything about the client (generally a good idea), though that places a much heavier workload on the server.
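
As a rough sketch of what that heavier workload looks like (names and numbers invented for illustration), the server re-derives every consequence from its own authoritative state instead of trusting anything the client reports:

    #include <cstdlib>   // std::abs

    // Hypothetical server-side validation: the client may *claim*
    // anything, so each request is checked against server-held state.
    struct ServerPlayer { int x, y, mana; };

    const int MAX_STEP   = 1;    // tiles allowed per movement packet
    const int SPELL_COST = 10;   // mana cost the server enforces itself

    bool HandleMoveRequest(ServerPlayer& p, int newX, int newY)
    {
        // Reject teleports: a modified client can send any coordinates.
        if (std::abs(newX - p.x) > MAX_STEP || std::abs(newY - p.y) > MAX_STEP)
            return false;        // log it, maybe disconnect
        p.x = newX;
        p.y = newY;
        return true;
    }

    bool HandleCastRequest(ServerPlayer& p)
    {
        // "I cast the spell" means nothing by itself; the server checks
        // and deducts the cost from its own copy of the player's mana.
        if (p.mana < SPELL_COST)
            return false;
        p.mana -= SPELL_COST;
        return true;
    }

What the server cannot recompute - such as what the client chooses to draw - is exactly where problems like the night-vision example above live.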

@Dreq:
Lots and lots of closed-source applications exist for Linux, and work just fine across several distributions. Some simply require a minimum glibc version; some require additional libraries like SDL, SDL_mixer, et al.; some don't require anything, having statically linked all necessary code into the executable (just provide an ELF binary). Relying on either abstraction layers (SDL, etc., which have been built for the platform already) or guaranteed components (xlib, glibc) is best - and link to them dynamically.
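
For example (assuming SDL 1.2 with its sdl-config script installed, and a placeholder source file name), dynamically linking against SDL is a one-liner:

    g++ game.cpp -o game `sdl-config --cflags --libs`

Swapping --libs for --static-libs pulls SDL into the binary instead, trading a bigger executable for one less runtime dependency.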
But see, Oluseyi, this is not a scientific experiment. That is why I refuse to completely accept that "security through obscurity" is bogus. Again, I agree with you, but until I see a scientific study proving it, I can't really buy it.

And another problem with doing such a study is this: each program being tested is intrinsically different to begin with. I.e., one may have more bugs in the first place, and could thus be more vulnerable.
quote: Original post by Strife
But see, Oluseyi, this is not a scientific experiment. That is why I refuse to completely accept that "security through obscurity" is bogus. Again, I agree with you, but until I see a scientific study proving it, I can't really buy it.
How would you propose testing the assertion? What kinds of software and what kinds of controls do you think can be used, and how relevant are such laboratory results to the real world (except as asinine academic arguments)?

Say we developed identical products and released them to groups of users as open and closed source products. They'd be differentiated superficially, enough that nobody would assume significant code similarity. What then? What do we look for? What do we test? What do we record?

The statement can't be verified or refuted scientifically in the abstract. It can only be evaluated sociologically.
Heh,

I'm not making an argument for open-source, so let that be known first off.

It is an opinion on prudent design methodology, of course. I can't scientifically prove without a shadow of a doubt that security through obscurity doesn't work in every single case. A strong argument can be made, though.

Oluseyi refers to the concept of "Kerckhoffs' principle" from Auguste Kerckhoffs' 'La cryptographie militaire'. One of the basic principles he outlined regarding the security of a system is that its design must not be required to be secret: if the details of the system fall into the enemy's hands, that should not compromise the system for those using it. This is condensed into Claude Shannon's maxim -- "the enemy knows the system".

If the security of your system fails when the working details are known or 'discovered', you have a poorly designed system.

This doesn't mean that a system is inherently insecure just because you don't provide source to everything. What it means is that if you are keeping your flaws a secret - that is, if the methods that make your system 'work' contain flaws that subvert the goal of the system - then the security of that system is poor.
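
A toy contrast may help (hypothetical code, deliberately simplified - neither scheme is real cryptography):

    #include <string>

    // Scheme 1: the "security" is the algorithm itself. The moment anyone
    // reads this source, every message ever sent with it is exposed,
    // because there is no key to keep secret.
    std::string ObscureEncrypt(std::string msg)
    {
        for (std::string::size_type i = 0; i < msg.size(); ++i)
            msg[i] ^= 0x5A;                  // the secret IS this constant
        return msg;
    }

    // Scheme 2: the algorithm is public; only the per-user key is secret.
    // Publishing this source costs nothing as long as keys stay private.
    // (XOR is still a toy; a real system would use a vetted cipher.)
    std::string KeyedEncrypt(std::string msg, const std::string& key)
    {
        for (std::string::size_type i = 0; i < msg.size(); ++i)
            msg[i] ^= key[i % key.size()];
        return msg;
    }

Scheme 1 fails Kerckhoffs' test outright; Scheme 2 at least puts the secrecy where it belongs: in the key.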

The harshest case is obviously the world of cryptography. If your algorithm has not survived years of abuse through public review, it will not be considered a secure system. Sure, people might use it, but that doesn't say anything about its actual state of security.

If you're concerned that providing client source will cause the issues you've described, you should realize that there are a good number of people out there who can do those things without your provided source. Thus, as stated before, you should inherently distrust the client on anything you deem important not to compromise.

The above is the fundamental concept I use to say that 'security through obscurity does not work'.


.zfod

This topic is closed to new replies.
