It seems as though every week here in North America the news outlets report on another hack attack or data breach. (The unreported ones are a topic for a different time.) From OPM and other government agencies to Sony and now Ashley Madison, it is becoming clear that Internet defenses just aren’t as strong as they need to be. And the attacks aren’t just happening here – data breaches are happening all over the world.
As has been pointed out many times, the bad guys only need to find a single flaw while the good guys need to protect against everything. This is true, and it generally leads to the philosophy that anything not explicitly permitted is prohibited. We stress this strongly in Learning Tree's System and Network Security Introduction. The problem in most of these cases is that there are so many potential points of failure that it is difficult to completely control them all. There is the underlying operating system (over which the organization may have little or no control in a shared server or cloud environment), the web server itself, the application, and maybe a database or three. Each of these has potential vulnerabilities.
Learning Tree has courses on perimeter defense and actual penetration testing. These are valuable and essential practices for all organizations, and I find it hard to believe that the implementers at the attacked sites were not at least somewhat aware of how to protect the sites and to test their defenses. So what allowed the bad guys in?
In general, organizations don’t publicly reveal specific details of how attackers got in, and that can be very wise. If everyone knows the exact details, less-knowledgeable attackers may exploit those vulnerabilities in other sites before they can be patched or otherwise closed. The basic problem, though, is that humans are fallible. We make mistakes through lack of knowledge or lack of experience. I’ve talked about educating programmers before, but it is far more than that.
Over thousands of years we’ve learned a lot about physical security. We know not to leave bridges lying around next to the moat of a castle, and we know to put up security bollards around vulnerable buildings. But cyber security has only been around a few decades, and we have a lot to learn to ensure that anything not explicitly permitted is prohibited (security bollards aid in this in the physical world). In many cases the theory is clear, but the implementation is difficult. One can allow access to only a web server (TCP port 443, for instance), but if that server has a bug somewhere in its extensive code (Apache has over 1.7 million lines of code, excluding comments and blank lines) and the bad guys can find it, that bug becomes a cyber security breach. (For comparison, Linux has nearly twenty million lines of code.)
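The default-deny idea above can be illustrated with a minimal sketch. This is not a real packet filter, just a toy policy check in Python; the `ALLOWED` set and the function name are hypothetical, and the single permitted entry mirrors the "only TCP port 443" example:

```python
# Default deny: anything not explicitly permitted is prohibited.
# Hypothetical policy with exactly one allowed service (HTTPS).
ALLOWED = {("tcp", 443)}

def is_permitted(protocol: str, port: int) -> bool:
    """Return True only for traffic the policy explicitly lists."""
    return (protocol, port) in ALLOWED

print(is_permitted("tcp", 443))  # HTTPS is explicitly permitted
print(is_permitted("tcp", 22))   # SSH was never listed, so it is denied
```

The key design point is that there is no "deny list" at all: new or unexpected services are blocked automatically because nothing is reachable until someone consciously adds it to the policy.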
Are we doomed to living in a society where websites are frequently exploited? Is all privacy dead? Can anything be done? I am far from a doomsday prophet. I think there is hope for protecting our data and making data breaches rarer, but it will require work. It will require encryption of data: very strong encryption of data. And it will require securing the keys to that data. While that won’t prevent all data breaches (as Edward Snowden showed us), it will hopefully reduce the impact of any breach. There is also a human element, and there lies an unknown.
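To make the "encrypt the data, and protect the keys" point concrete, here is a toy authenticated-encryption sketch using only the Python standard library: a SHA-256 keystream in counter mode for confidentiality plus an HMAC tag so tampering is detected. Every function name here is invented for illustration, and this is emphatically not production cryptography; a real system should use a vetted library (for example, AES-GCM via a maintained crypto package) and keep keys in a separate, access-controlled store:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 of key + nonce + counter, repeated."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Return nonce + ciphertext + HMAC tag (encrypt-then-MAC)."""
    nonce = secrets.token_bytes(16)  # fresh per message; never reuse with a key
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Verify the tag first; refuse to decrypt anything that was altered."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: data was modified")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)  # in practice, keep this in a key vault, not beside the data
blob = encrypt(key, b"customer records")
print(decrypt(key, blob))
```

Even a breach that copies the encrypted database is far less damaging if the key lives elsewhere, which is exactly why key management matters as much as the cipher itself.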
What else can we do? Let us know in the comments below.
To your safe computing,