
"Given enough eyeballs, all bugs are shallow"

(This post was originally a comment on a post by +Alex Ruiz and is reposted here by request)

In the wake of Apple’s SSL “goto fail” bug followed quickly by a defect in GnuTLS, we are now dealing with #heartbleed, arguably the worst security vulnerability in the history of the Internet. In each case, the defect was in “open source” code for years before being discovered.
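For a sense of how small these defects are, here is a minimal, compilable sketch of the “goto fail” pattern. The function and check names are my own illustration, not Apple’s; the real defect was a single duplicated line in Security’s sslKeyExchange.c:

    /* Minimal sketch of the "goto fail" pattern; illustrative names, not Apple's code. */
    #include <stdio.h>

    static int check_hash(void)      { return 0; }   /* 0 == success */
    static int check_signature(void) { return 1; }   /* would fail if it ever ran */

    static int verify_server_key(void)
    {
        int err;
        if ((err = check_hash()) != 0)
            goto fail;
            goto fail;    /* duplicated line: always jumps with err == 0,   */
                          /* so the signature check below is never reached  */
        if ((err = check_signature()) != 0)
            goto fail;
    fail:
        return err;
    }

    int main(void)
    {
        /* Prints 0: "verified," even though the signature was never checked. */
        printf("verify_server_key() = %d\n", verify_server_key());
        return 0;
    }

Because the if governs only the first statement, the second goto always runs; everything after it is unreachable, and the function reports success without ever verifying the signature.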

As news spreads and understanding grows of just how bad this vulnerability is, many are trying to reconcile the cognitive dissonance with old, largely unchallenged adages like “open source is more secure” and “given enough eyeballs, all bugs are shallow,” and are questioning whether these statements are really true or not.

So, is open source more secure, simply because it is open?

We are at a point where we can make some observations, but I would propose that we are far from being able to draw any reasonably supportable conclusions. To be clear, I am not arguing that closed is more secure or that open is more secure. I am pointing out details and observations that would have to be explored in an actual, rigorous pursuit of an answer to such a question. If I am arguing anything, I am arguing that we don’t have enough data to answer that question yet.

The blue team today is everywhere, so I’ll red team this (http://en.wikipedia.org/wiki/Red_team). Note that I can argue against many of the following points as well:

Yes, open source allows anyone to do a security audit. Including the NSA, including organized crime syndicates, including disorganized criminals, including hostile governments. If these organizations are more zealous in their participation, the net effect is less safety. When a vulnerability is introduced into code, it exists as a potential exploit. It becomes realized when someone identifies it. Which someone does will depend on motivation, access, and skill.

Yes, open source allows anyone to do a security audit, but can they (do they even have the skill?), and do they? GnuTLS and OpenSSL give a strong hint of an overall diffusion of responsibility problem in the model. Diffusion of responsibility (the bystander effect) is a well-studied phenomenon, and it appears to be a potential root cause here. The problem is that we have two security defects in a row that sat in the open for multiple years at a time. This should raise some questions about the model, and it should be cause for reflection.

Yes, closed source is more difficult and time consuming to audit. However, this is true not just for the white hats but for the black hats as well. For example, the various groups jailbreaking the iPhone relied (black hats to Apple, white hats to others) on the fact that the kernel source was open. When Apple started holding it back, that slowed them considerably. Likewise, the jailbreakers started holding back their own disclosures to use for future jailbreaks. Human motivation and development approach are inextricably intertwined in answering this question.

The more secure methodology, then, is determined by whether the white hats are more zealous and aggressive than the black hats. Unfortunately, “not my job” is showing up a lot in this topic as well, which would suggest that black hat groups are more motivated and focused, because they are taking the initiative. The point here is that the code “being open” makes it neither more nor less secure. The human behavior around it is what results in increased or decreased security. In a population of motivated black hats and unmotivated white hats, it is reasonable to hypothesize that openness backfires. For example, if it came out that the NSA had been quietly exploiting one of these vulnerabilities, that would support such a hypothesis.

"Given enough eyes, all bugs are shallow" doesn’t even stand up to logical analysis, much less experiment. Infinite "eyes" with no knowledge or understanding of the ‘C’ language will never find many defects, much less obscure security vulnerabilities. So we can throw that out and say, "given enough highly skilled eyes that are given enough time and interested in doing the analysis, all bugs are shallow” (this is further debatable, as we can look at a correlation between skill level and defect recognition. That is, for any given level of skill, defects beyond that skill level will never be recognized.)

In closed approaches, highly skilled and specialized eyeballs are more often hired (which means they are incentivized to do the boring and tedious job), and accountability is clear (the owner of the ‘closed’ system is accountable, compared to a ‘no owners’ model). Who should have caught the OpenSSL defect? Who is, at the end of the day, accountable for OpenSSL? The 47 independent contributors to it? Red Hat? Google? The Linux Foundation? Everyone? No one? The advantage of closed software in this regard is that accountability is clear, and the owners generally have the financing to hire those highly skilled security experts, who do not generally work for free. (An extension of this would be the proprietary company that releases open source but takes accountability and ownership of it; another, the company that did not write the open source it uses but profits from it.)

"Open source is more secure" may be true, but it’s likely to be followed with lots of qualifiers like: if said source has a clear steward or ‘owner’ and if that steward/owner can attract and retain skilled security talent interested in vetting that code and if they outnumber and outperform the skilled security talent the bad guys are hiring.

In the end, code is more secure when the right (white hat) people with the right (security analysis) skills perform audits before the wrong (black hat) people with the right skills do. There are many variables and nth order effects in all the arrangements of that.

