
how to know most hack possibilities and find the best way to handle them

Started by January 08, 2015 09:28 PM
39 comments, last by moeen k 9 years, 9 months ago


and create intrusion detection systems

An IDS won't help a lot here. Your game protocol is custom, so off-the-shelf pattern matching of packets by an IDS won't work. It will let you know if someone tries to brute-force SSH into your server, though.

In fact, most online game hacking simply uses the facilities built into the game protocol itself. Hackers inject code into the client binary, or build their own binary, to send traffic to the server that looks legitimate and is indistinguishable from normal traffic.

I agree, you can't just install Snort, that is, unless you create your own rules. I mentioned creating an IDS, so it would be yet another app written by the game developers to suit the needs of the game's server. :)

I suppose it depends on your view.


What depends on the view? The fact that scripting languages may also have vulnerabilities?
I'm aware of no legit viewpoint that claims that you will be immune from pointer/buffer problems just because you're using a scripting language.

My view is that if you have no pointers, you cannot accidentally dereference them or point them straight to hell.


You cannot. But can you trust the people who built the platform you're using to also have no bugs? (Spoiler alert: No, you cannot!)

The Java error you linked is a typical C/C++ problem which couldn't have happened in code written in, say, C#/Java.


I think you misunderstood the bug. The bug is in the Java implementation. The bug allows a malicious user to inject "legitimate" data into an application written in Java, and by doing so, start executing arbitrary machine instructions, thus being able to "own" the machine.
Your software is written in Java, so your software doesn't have pointer/buffer problems -- but Java itself does, so the interface you expose to the world DOES have those problems.

Some of the most damaging vulnerabilities in the last few years have been systemic vulnerabilities -- bugs in TCP/IP stack implementations, SSL libraries, graphics drivers, command shells, and the like, which may allow anyone to execute code on your machine. However, this is a slightly different kind of problem than insecure games -- these bugs allow an arbitrary attacker to use your machine's resources. The game-specific hacks allow a hacker to fool your game/servers in some way that leads to advantage or value in the game, so the list of potential attackers is somewhat limited compared to the list of attackers that care about owning arbitrary machine resources. (That doesn't mean it's zero.)

Anyway -- first make sure that your game is fun, and that you have a way to explain to the world that the game is fun, so that you actually get players. Make sure the game design is reasonably well architected (all vital game state verified on the server.) Then improve as needed, if needed, as your resources allow. A fun, successful game that loses 20% to hacking is a whole lot better than a boring game with no users that loses 0% to hacking.
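To make "all vital game state verified on the server" a bit more concrete, here is a minimal C sketch. The message layout, field names, and limits are invented for the example; the point is only that the server applies its own rules to whatever the client sends instead of trusting it:

#include <math.h>
#include <stdbool.h>

/* Hypothetical movement message; the fields and limits are invented
   for this example, not taken from any real protocol. */
struct move_request {
    float dx, dy;            /* movement the client claims for this tick */
};

struct player {
    float x, y;
    float max_speed;         /* per-tick limit the server knows to be legal */
};

/* The server validates against its own rules instead of trusting the client. */
bool apply_move(struct player *p, const struct move_request *req)
{
    float dist = sqrtf(req->dx * req->dx + req->dy * req->dy);
    if (dist > p->max_speed)
        return false;        /* reject: possible speed hack or corrupt packet */
    p->x += req->dx;
    p->y += req->dy;
    return true;
}

Whether it's movement, currency, or inventory, the pattern is the same: the server owns the authoritative state and only accepts changes it can verify.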
enum Bool { True, False, FileNotFound };

I think you misunderstood the bug. The bug is in the Java implementation. The bug allows a malicious user to inject "legitimate" data into an application written in Java, and by doing so, start executing arbitrary machine instructions, thus being able to "own" the machine.
Your software is written in Java, so your software doesn't have pointer/buffer problems -- but Java itself does, so the interface you expose to the world DOES have those problems.


No, you misunderstood me. :) Java is a complex piece of software written in C/C++, where it is ridiculously easy to make these kinds of mistakes. Had the software (the platform itself) been written in something less error-prone (I don't know, Ada maybe?) then perhaps it wouldn't have been possible to make said mistake. I'd argue that C and C++ are horrible languages for security/stability critical software such as platforms, it's just that they are (sadly) pretty much the only realistic choices.

It's always possible (highly likely) that the next lower level will have exploitable issues, even down in the hardware, but that's no excuse not to do everything in your power to make sure that your own code does what was intended and doesn't introduce additional bugs/holes. Some languages and platforms are designed for power, others for performance, some for easy development and others for security/stability. I'd argue that choosing a platform that was designed with security/stability in mind is your best bet, even if there are no guarantees.

The "depends on your view" part was about looking at the software itself or the complete solution. I think both are very important view points. From my view, if I use Java and Java has a bug, it's not a bug in my software but a problem that affects my solution.

Fun trivia: In the embedded industry, there are lots of different safety certifications, some of which require that you can prove that every single instruction in your software has been executed with the desired result. That's at the bottom of the certification ladder. You still have to account for potential hardware issues and/or timing issues. The higher levels will barely allow you to have conditionals.

Had the software (the platform itself) been written in something less error-prone (I don't know, Ada maybe?) then perhaps it wouldn't have been possible to make said mistake.


I'm not sure such a platform exists. All the widely used platforms are written in C/C++ (and perhaps some amount of assembly.)
C#/CLR, Java, Python, Rails, PHP, Node -- they're all in turn implemented in C. In fact, most of them, in turn, generate assembly code from the parsed scripted code, to run faster, and that assembly code generation may also have bugs.

There is some research into minimal and provable system bootstrapping, though -- typically a minimal LISP where you can prove that the system doesn't escape outside of its bounds, with all the libraries then written in that LISP. And it runs very slowly.

Security is best done by, first, using restrictive whitelisting (and rejecting anything not on a whitelist), and second, being active, aware, and on the ball with mitigation for any problems as they come up.
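To make the whitelisting part concrete, here is a small C sketch; the message IDs and function names are made up for illustration. The important part is that anything not explicitly on the list of accepted message types is dropped, rather than trying to enumerate everything that is bad:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical message IDs -- invented for illustration. */
enum msg_type { MSG_LOGIN = 1, MSG_MOVE = 2, MSG_CHAT = 3 };

/* The explicit whitelist of message types the server will accept. */
static const uint8_t accepted[] = { MSG_LOGIN, MSG_MOVE, MSG_CHAT };

static bool is_whitelisted(uint8_t type)
{
    for (size_t i = 0; i < sizeof accepted / sizeof accepted[0]; ++i)
        if (accepted[i] == type)
            return true;
    return false;
}

/* Reject by default: anything not explicitly whitelisted is dropped. */
bool handle_packet(const uint8_t *data, size_t len)
{
    if (len < 1 || !is_whitelisted(data[0]))
        return false;        /* drop it (and maybe log the source) */
    /* ... dispatch to the handler for data[0] ... */
    return true;
}

Reject-by-default like this is much easier to reason about than trying to blacklist every bad input.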
enum Bool { True, False, FileNotFound };

Well, I partially agree on the platform thing. It's a bit of a chicken and egg issue.

Even if your code is 100% correct and flawless, you may still suffer from hardware issues. Does that mean you shouldn't care at all about writing safe/secure software and just fix issues when they are seen?

Well, I partially agree on the platform thing. It's a bit of a chicken and egg issue.

Even if your code is 100% correct and flawless, you may still suffer from hardware issues. Does that mean you shouldn't care at all about writing safe/secure software and just fix issues when they are seen?

A lot of security flaws you will not know about until they are discovered. This is the "nature of the beast". However, you can mitigate the risk by writing secure code to start with. This is mainly a habit, which has to be picked up and stuck with. Once you are in the habit of writing secure code, you will do so automatically, and it will help immensely.



My view is that if you have no pointers, you cannot accidentally dereference them or point them straight to hell. It means you cannot introduce that kind of bug. The Java error you linked is a typical C/C++ problem which couldn't have happened in code written in, say, C#/Java (well, there are always exceptions of course).

What does that have to do with getting your binary hacked? You are going to get hacked whether you use pointers or not.


so your software doesn't have pointer/buffer problems

Buffer overflows have nothing to do with pointers in your code; not having pointers doesn't prevent buffer overflows.

A buffer overflow is when you have a buffer on your stack (like char mytext[256]; in C) and then you read input into it or copy into it without making sure the size copied is less than the buffer size.

If this buffer resides on the stack below the saved return address, then overflowing the buffer and overwriting that return address so it points back into the buffer where you inserted x86 opcodes will allow you to run that code when the current function returns.

That return address (the saved instruction pointer) is put on the stack by the CPU's call instruction when it jumps into a function, so it knows where to jump back to when the function returns.
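As a deliberately unsafe C sketch of that situation (don't use this pattern), an unchecked copy into a fixed-size stack buffer lets attacker-supplied input run past the end of the buffer and over the saved return address:

#include <string.h>

/* Deliberately broken: strcpy() copies until it finds a terminating NUL,
   paying no attention to the 256-byte buffer, so input longer than 255
   bytes runs over whatever sits above mytext on the stack -- including
   the saved return address. */
void handle_name(const char *untrusted_input)
{
    char mytext[256];
    strcpy(mytext, untrusted_input);   /* no length check: overflow */
}

/* A safer variant bounds the copy by the destination size and
   guarantees termination. */
void handle_name_safely(const char *untrusted_input)
{
    char mytext[256];
    strncpy(mytext, untrusted_input, sizeof mytext - 1);
    mytext[sizeof mytext - 1] = '\0';
}

The fix is boring: always bound the copy by the destination size (or use functions that do so for you) and make sure the result is terminated.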

It is impossible to cause a classic buffer overflow directly in an interpreted language because array accesses are bounds checked. The following in C would probably crash the program:

char n[10];
n[-INT_MAX] = 83;   /* INT_MAX is from <limits.h>; this is a wildly out-of-bounds write */

In a scripting language the illegal offset would be filtered out, usually just causing a warning and being discarded.
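Roughly speaking, an interpreter gets that safety by checking every index before it touches the underlying storage. A minimal C sketch of the idea (the struct and function names are invented, not taken from any particular runtime):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical interpreter-style array: the length travels with the
   storage, unlike a raw C array. */
struct script_array {
    char *data;
    size_t len;
};

/* Every element write goes through a bounds check, so an out-of-range
   index never touches raw memory. */
bool script_array_set(struct script_array *a, long index, char value)
{
    if (index < 0 || (size_t)index >= a->len)
        return false;        /* the runtime reports a warning/exception here */
    a->data[index] = value;
    return true;
}

An out-of-range index like n[-INT_MAX] above never reaches raw memory; the runtime turns it into a warning or exception instead.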

It is impossible to cause a classic buffer overflow directly in an interpreted language because array accesses are bounds checked. The following in C would probably crash the program:

char n[10];
n[-INT_MAX] = 83;   /* INT_MAX is from <limits.h>; this is a wildly out-of-bounds write */

In a scripting language the illegal offset would be filtered out, usually just causing a warning and being discarded.


Or, in a more sane non-PHP language/platform, you might end up with a catchable and easy-to-handle exception, or at least a reliable crash/shutdown.

A buffer overflow is when you have a buffer on your stack (like char mytext[256]; in C) and then you read input into it or copy into it without making sure the size copied is less than the buffer size.

Which is typically done by passing a pointer to (the first element of) that buffer to your input or copy function. You are (usually) not passing an array or a buffer, you are passing a pointer. It's up to the programmers to check boundaries and so on in every single place in every single function where you are using pointers/buffers like this. At some point, someone inevitably misses one such check at one such code location and bam, you have a potentially exploitable bug. In some cases, you have checks but just accidentally write <= instead of < and bam, bug/hole/crash/whatever. The worst part is that compilers and even advanced static analysis tools have near zero chance of catching these mistakes.
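A minimal C sketch of exactly that kind of slip; the function names are made up for illustration. The only difference between the two versions is a single <= where a < was intended, and it's the kind of thing neither the compiler nor most static analysis will reliably flag:

#include <stddef.h>

/* The caller hands over a pointer and a length; it is entirely up to
   this function to honour both. The broken variant writes one element
   past the end of the buffer because of <= where < was intended. */
void fill_broken(char *buf, size_t buf_len)
{
    for (size_t i = 0; i <= buf_len; ++i)   /* off-by-one: should be < */
        buf[i] = 'A';
}

void fill_fixed(char *buf, size_t buf_len)
{
    for (size_t i = 0; i < buf_len; ++i)
        buf[i] = 'A';
}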

In (insert just about anything other than C/C++ here) you'll work with language-supported arrays/buffers/ranges with defined boundaries, where the compiler/platform/toolchain has a chance of verifying what you are doing and can at the very least detect errors at runtime. Some languages are designed in such a way that you cannot even write code that attempts to move outside of the bounds. Due to design decisions of C, even if the compiler can detect that you are writing outside of the bounds, it can't really do anything about it, because it can't be sure it wasn't your intention to do so. From a performance perspective, this can often be a good thing, but from a security/safety/stability perspective, it's a nightmare.

Your complete solution will be a stack with hardware, OS/drivers, possibly an application platform (i.e. the JRE/.NET), and your own application on top. Hardware and OS issues are more or less beyond your control; you'll have to react and fix when issues arise, and then rely on firewalls, antimalware, redundancy, etc. to reduce the effects of issues.

The platform is somewhat optional. Not using one means writing your full application from scratch, typically using C/C++, with a huge probability of issues as well as potentially low productivity. An advantage is that the issues are less likely to be found than issues in a platform, especially if it's a non-distributed, server-side-only application. Using an application platform typically means introducing a layer that is beyond your control, where you need to react and fix issues as they appear (i.e. by software updates), but it also means that your own application will have fewer issues, since the platform and associated language prevent most of them. It's my strong belief that using a sane platform will enhance your overall security.

There is no such thing as being completely secure, but you can affect/improve security by making decisions about your platforms etc. It's not the only thing you should be doing, but it's one thing you should be doing.

It is impossible to cause a classic buffer overflow directly in an interpreted language


...unless there are bugs in the language interpreter, or the libraries it uses. In which case you can exploit (or accidentally run into) those bugs and find yourself with a buffer overflow anyway.

That's not a theoretical concern. We use a lot of PHP at work, and we run into bugs in the language, runtime, and libraries, with some frequency.
enum Bool { True, False, FileNotFound };

