I'm denying the idea that "open-source is more secure" because... what... fairies?
First of all, for the sake of clarity, I'm not saying that open-source software is necessarily more secure, just that it seems to me more likely--even if only a little bit--to be more secure.
As to the reason, the general idea is "potentially more eyes on the code".
The only way you make something more secure is to actually be motivated to make it more secure.
I won't argue against that, but I'd like to add that the number of motivated people may have an effect: the more (motivated) people that look at a piece of code, the more likely an issue is to be found.
For open source, this generally means interest in the product to be made secure, potentially motivated by monetary incentives provided by individuals or companies with a vested interest.
For closed source, this generally means interest in the product to be made secure by the actual developer (depending on work conditions), and threat of lost revenue or actual lost revenue.
I would argue that for open-source, this generally means interest in the product to be made secure by the actual developer and by any interested external parties.
In some cases there will be no such external parties, in which case it all comes down to the motivation of the development team, just as with closed-source software. Where such external parties do exist, however, their efforts (driven by that motivation) can add to those of the development team, which isn't feasible with closed-source software.
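To make the "more eyes" intuition a bit more concrete, here's a toy back-of-the-envelope model. It's entirely my own simplification (it assumes reviewers spot flaws independently, which real review certainly isn't), but it shows how quickly the odds climb as motivated reviewers are added: if each reviewer alone has some small chance p of noticing a given flaw, then n reviewers catch it with probability 1 - (1 - p)^n.

# Toy model: probability that at least one of n independent reviewers
# spots a given flaw, if each has probability p of spotting it alone.
# (Purely illustrative; the numbers and the independence assumption are mine.)

def chance_flaw_is_found(p, n):
    return 1 - (1 - p) ** n

for n in (1, 3, 10, 30):
    print(n, "reviewers:", round(chance_flaw_is_found(0.1, n), 2))

With p = 0.1 that works out to roughly 0.10, 0.27, 0.65 and 0.96 for 1, 3, 10 and 30 reviewers: none of which proves anything about any particular project, but it's why the extra external parties strike me as more than a rounding error.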
As to motivation, Wikipedia notes that:
There is substantial evidence that monetary rewards are not effective outside the context of very rote work.[5] In some cases, monetary incentive plans may decrease employee morale, as in Microsoft's stack-ranking system, where the total reward amount is fixed and employees are graded on an artificially fitted distribution[6]
On another topic, I'd like to retract something that I said previously:
I don't know about the RAM or CPU usage--I haven't looked into either--but my own experience with the HTML5 version of the YouTube player was enough to convince me to install a Firefox extension that enabled me to switch back to the Flash version. I'm not sure of whether I had performance issues--I think that I may have--but the main issue for me was that for some reason the player lacked some of the resolution options offered by the Flash version, including the resolution that I find works best for me (480p), being of acceptable quality for most videos while streaming via my connection without buffering.
In all fairness, I'm using a fairly old machine, and running Ubuntu.
As it happens, I've just discovered that the problem appears to lie with the Linux version of Firefox. While the Windows and Mac versions (I gather) run the HTML5 player happily, the Linux version has several relevant components disabled by default. I could enable them, but I'm hesitant to enable elements that are--presumably--considered unready, and in any case I do most of my YouTube-watching on my 'phone these days.
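(For anyone curious, and going from memory rather than from a current install: the switches in question live in about:config, and I believe the Media Source Extensions preferences are the relevant ones--something like

media.mediasource.enabled
media.mediasource.mp4.enabled

--with youtube.com/html5 reporting which features the browser actually exposes. Treat those names as my best recollection rather than gospel.)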
In any case, I stand corrected on this point!