It is impossible to cause a classic buffer overflow directly in an interpreted language because array accesses are bounds checked. The following in C would probably crash the program:
char n[10];
n[-INT_MAX] = 83;  /* wildly out-of-bounds write; INT_MAX comes from <limits.h> */
In a scripting language the illegal offset would be rejected by the runtime, usually just triggering a warning, with the write being discarded.
Or, in a more sane non-PHP language/platform, you might get a catchable, easy-to-handle exception, or at least a reliable crash/shutdown.
A buffer overflow is when you have a buffer, typically on your stack (like char mytext[256]; in C), and then you read input into it or copy data into it without making sure the amount copied does not exceed the buffer's size.
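As a sketch of what that looks like (the function and variable names here are made up for illustration, not taken from any real code), the classic pattern is a fixed-size stack buffer plus an unchecked copy:

#include <string.h>

/* Illustrative only: copies caller-supplied input into a fixed-size
   stack buffer without checking its length first. */
void greet(const char *input)
{
    char mytext[256];
    strcpy(mytext, input);  /* overflows mytext whenever input is 256 characters or longer */
}

Any input of 256 characters or more makes strcpy write past the end of mytext and into whatever happens to sit next to it on the stack.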
Copying like this is typically done by passing a pointer to (the first element of) that buffer to your input or copy function. You are (usually) not passing an array or a buffer, you are passing a bare pointer. It's up to the programmer to check the boundaries in every single place, in every single function, where pointers/buffers are used like this. At some point someone inevitably misses one such check at one such code location and bam, you have a potentially exploitable bug. In other cases the checks are there, but someone accidentally writes <= instead of < and bam: bug/hole/crash/whatever. The worst part is that compilers and even advanced static analysis tools have a near-zero chance of catching these mistakes.
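The <= vs < slip looks roughly like this (a hypothetical helper with assumed names, just to show the shape of the bug):

#include <stddef.h>

/* Hypothetical copy helper with an off-by-one bound check:
   <= lets i reach sizeof(buf), so one byte is written past the end. */
void copy_message(const char *src, size_t len)
{
    char buf[256];
    for (size_t i = 0; i < len && i <= sizeof(buf); i++)  /* bug: should be i < sizeof(buf) */
        buf[i] = src[i];
    /* ... use buf ... */
}

A single stray byte doesn't sound like much, but depending on what sits right after buf on the stack it can be enough to corrupt control data.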
In (insert just about anything other than C/C++ here) you'll work with language-supported arrays/buffers/ranges with defined boundaries, where the compiler/platform/toolchain has a chance to verify what you are doing and can at the very least detect errors at runtime. Some languages are designed in such a way that you cannot even write code that attempts to move outside the bounds. Due to C's design decisions, even if the compiler can detect that you are writing outside the bounds, it can't really do anything about it, because it can't be sure it wasn't your intention to do so. From a performance perspective this can often be a good thing, but from a security/safety/stability perspective it's a nightmare.
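To make that last point concrete: with a constant index the compiler can often see the out-of-bounds write, yet all it can do is warn (gcc and clang typically emit something like -Warray-bounds); the program still compiles, and the write is simply undefined behavior at runtime. Exact warnings and flags vary by compiler, so treat this as a sketch:

/* The out-of-bounds index is a compile-time constant, so the compiler
   can spot it, but C only lets it warn; the build still succeeds and
   the write is undefined behavior at runtime. */
int main(void)
{
    char n[10];
    n[20] = 83;  /* typically just a -Warray-bounds warning, nothing more */
    (void)n;
    return 0;
}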
Your complete solution will be a stack: hardware, OS/drivers, possibly an application platform (e.g. the JRE or .NET), and your own application on top. Hardware and OS issues are more or less beyond your control; you'll have to react and fix them when they arise, and rely on firewalls, anti-malware, redundancy, etc. to reduce their impact.
The platform is somewhat optional. Not using one means writing your full application from scratch, typically in C/C++, with a far greater likelihood of issues as well as potentially lower productivity. One advantage is that your issues are less likely to be discovered than issues in a widely used platform, especially if it's a non-distributed, server-side-only application. Using an application platform typically means introducing a layer that is beyond your control, where you need to react and fix issues as they appear (e.g. via software updates), but it also means that your own application will have fewer issues, since the platform and its associated languages prevent most of them. It's my strong belief that using a sane platform will improve your overall security.
There is no such thing as being completely secure, but you can affect/improve security by making decisions about your platforms etc. It's not the only thing you should be doing, but it's one thing you should be doing.