So Occam's razor means we don't have to declare conspiracy conjecture as fact to explain these incidents.
Mind that Occam's razor is not a "law of nature", and that it has a slightly different meaning. It doesn't mean "the easiest explanation is the correct one", nor "the least troublesome explanation is true". What Occam actually said, and that is a wise thing, is that you should not needlessly complicate things when a satisfying explanation exists.
Is "duh!" a satisfying explanation when obvious bugs are built into previously working code (referring to Heartbleed here) without any obvious need? Is it a satisfying explanation for a very major exploit, or rather a dozen of them, that have demonstrably been used by governmental agencies for years (meaning they must either have been obvious enough for those agencies to find easily, or deliberately placed so they knew about them)? Especially in an age of "revelations" à la Snowden (who, truth be told, didn't reveal anything that you didn't already know; it's just that now it's "kinda official", whereas before it was "tinfoil-hat theory").
I'm not saying that some of these exploits couldn't have happened by accident, but some surely were put there on purpose. And the sheer number, together with the fact that nobody seems to review mission-critical code, raises a red flag for me. When you publish code that is known to be under constant attack (say, a TLS library), it necessarily has to be held to a different standard.
Simply "duh!" is not a satisfying explanation, so I'm reluctant to accept Occam.
I've shipped games containing "use after free" bugs (just like this Flash one) which persisted in the codebase for YEARS, until eventually being found by a code audit (and lots of luck) -- these bugs could potentially allow user save games to inject code into the game (and potentially hack the game's host OS).
Ah yes, but there are a couple of very important differences here.
First of all, save games are something the user creates, not something that runs as a drive-by when browsing a website. The user may create a malicious save game file and inject code, subvert the game, ok... but... that's not really big-time scary. The user can already do that anyway. Having physical access to his own computer, he can do just what he wants; he doesn't need your bugs.
But let's assume it was something different, maybe not the save game but a flaw in the network protocol that you inadvertently built in, so someone knowing a user's IP address (that would be the ISP, or you, if you host a server-based game) could exploit it. For you, the developer, it's useless: your code is already running on the client's machine, so what else do you want? You don't need that exploit.
To another user, it's pretty useless unless they run port scans on random addresses, or happen to know someone's IP address (and know that he has your game running) and manage to get past the firewall/NAT/router/whatever, which usually isn't quite so easy. Most people are unable to MITM someone, and few are able to directly target someone at all, except maybe at a place like a Starbucks with inadequately set up WiFi.
The ISP knows your IP address, of course, and they can tell from the traffic that the user runs your game (and when!). But the ISP is already much more powerful than that: they control everything the user downloads. They could rather easily redirect DNS or any connection to any server, present wrong certificates, and finally replace any contents with malware (well, not that easily, and this isn't very likely to happen either... but... still... if there's someone who can fuck you hard, it's your ISP). So they don't need your exploit either.
Most importantly, however, it's apples and oranges. What you wrote was code for a game. You didn't write an operating system or a general-purpose, widely deployed "run code on user machine" platform such as Java or Flash. These are different things.
Of course a game should preferably be error-free and should not offer means to exploit a user's machine, but this is not such a hard requirement. A game shouldn't crash or have memory leaks either, but again... if it does, so what. After all, it's a game, and it's running on someone's home computer.
When writing software that controls a nuclear power plant, or software that manages medical or financial records, the expectations are very different. These programs really shouldn't fail, and they really shouldn't allow someone to do certain things. The requirements are hard because the possible consequences are severe.
Now, the thing is, all these super secure, super audited, certified programs run under an operating system which doesn't meet those requirements, and quite a few machines have a "run any kind of code here" platform installed which doesn't meet them either. Yes, the computer controlling a reactor would reasonably (at least, hopefully) be airgapped, but that isn't the point.
If you can expect that high-risk, mission-critical stuff will run on a system, you must ensure to the best of your ability that this system is fail-safe (both in the sense of crashes and of exploits). Something like "allows you to run code on several million computers without the average user noticing" definitely counts as "super high risk, mission-critical". It's not in the same ballpark as "a couple of teens play this on their home computer or mobile phone".
Also, why bother with a security hole at all when you could just force the vendor to ship a proper downloader for your malicious code, one which only triggers when a special cryptographically signed piece of data is seen?
Because the most interesting people don't run Windows Update. They're either wearing tinfoil, or their copy of Windows is pirated.
Besides, considering how Microsoft not long ago secretly placed a malware downloader (KB3035583) among its recommended updates, it looks like they're trying that anyway. It makes you wonder about the intention behind their announcement of giving everybody in possession of some version of Windows (including people who have pirated it) a valid Windows 10 license, too.
Why would you give your best, most recent software version to someone who demonstrably steals your stuff and who is never going to pay you? Obviously because you want them to have that particular software for some reason. Marketing could be one reason (but you already know they're never going to pay you!); trying to counter the adoption of free software could be another (pretty silly if you have to give your stuff away for free, too); trimming down botnets could be a third (unlikely); but replacing their systems with something that has some particular property or functionality might be another valid motive.
When the next war begins, wouldn't it be nice for the US government to be able to send a kill code over the internet and have all computers in China stop working within a second or two? Of course, for that to work, the correct software must be installed on all those computers first.