Is There Really Such a Thing as Security Through Obscurity?

DARPA, the U.S. Defense Advanced Research Projects Agency, has issued an interesting Broad Agency Announcement for the SafeWare program (details here) seeking a “highly efficient and widely applicable program obfuscation method with mathematically proven security properties.” What’s going on?

DARPA wants to create programs that people can run without figuring out how the programs work. That’s relatively easy to conceptualize, but very difficult to accomplish.

There are plenty of enthusiastic but ill-informed ad-hoc techniques. People hoping to gain security through obscurity by intentionally hiding the logic of their software have typically employed strangely disorganized “spaghetti code” logic and meaningless names for variables and functions. (Perhaps seeking inspiration from unintentionally bad programming techniques!) The International Obfuscated C Code Contest is a humorous riff on this approach.
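To make the naive approach concrete, here is a toy sketch (my own illustration, not from the announcement or the contest): the same checksum routine written plainly and then "obfuscated" with meaningless names and a contorted loop. The function names and logic are invented for this example.

```python
def checksum(data: bytes) -> int:
    """Readable version: sum of the bytes, modulo 256."""
    total = 0
    for b in data:
        total = (total + b) % 256
    return total

def x9(q):
    # "Obfuscated" version: renamed variables and an awkward loop shape,
    # but the behavior is identical -- a motivated reader (or a
    # decompiler) recovers the logic in minutes.
    z, i = 0, 0
    while i < len(q):
        z = (z + q[i]) & 0xFF   # & 0xFF is just % 256 in disguise
        i += 1
    return z

print(checksum(b"hello") == x9(b"hello"))  # → True
```

The renaming makes the code harder to maintain, but it adds nothing an attacker must mathematically overcome, which is exactly the gap DARPA wants to close.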

The problem is that this naive technique doesn’t really hide logic or sensitive data from motivated attackers. It certainly makes the code harder to understand and maintain, but it doesn’t provide security. Matthew Green has written an excellent explanation of why the naive approach doesn’t work, and why fundamental work in obfuscation is significant but won’t immediately lead to “unbreakable code”.

DARPA has set the bar high: “Proposed research should investigate innovative approaches that enable revolutionary advances in science, devices, or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.”

They list a number of criteria, all of which must be satisfied. The first is that a successful de-obfuscation attack would require the solution of a computationally hard mathematical problem. This is analogous to how the security of the RSA cipher relies on the difficulty of factoring large integers. They elaborate that polynomial increases in program runtime must lead to exponential increases in attack work.
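The RSA analogy can be sketched in a few lines. In this toy example (with made-up, tiny primes; real RSA uses primes of roughly 1024 bits or more), the legitimate operations cost time polynomial in the bit length of the modulus, while the naive factoring attack costs roughly 2^(bits/2) steps:

```python
def trial_division(n: int) -> int:
    """Brute-force attack: find the smallest factor of n.
    Work grows roughly as 2**(bits/2), exponential in key size.
    (Real attacks use far better algorithms; this is only an
    illustration of the asymmetry.)"""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

p, q = 1009, 1013          # toy primes, far too small for real use
N = p * q                  # public modulus
e = 65537                  # common public exponent

# Legitimate use: modular exponentiation, cheap (polynomial cost).
ciphertext = pow(42, e, N)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)
print(pow(ciphertext, d, N))        # → 42

# Attacker's work: recover p by factoring N.
print(trial_division(N) == p)       # → True
```

Growing the modulus makes `pow()` only slightly slower but squares-and-worse the attacker’s search space. DARPA is asking for the same kind of asymmetry in program obfuscation: running the obfuscated program stays cheap while de-obfuscating it stays exponentially hard.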

Another requirement is that a full understanding of the method itself provides no advantage to the attacker. This is Kerckhoffs’s principle, which was published in 1883 but which sadly is often overlooked today.

Finally, the technique must apply to practical applications while not relying on impractical platforms or environments.

This isn’t going to be an easy or immediate fix. We are, after all, talking about advances in fundamental computer science.

Unfortunately, work like this makes it tempting for people to claim that naive attempts at obscurity will provide security. That just isn’t the case; we have to be careful about reading too much into the work.

We have mentioned research work in homomorphic encryption, a related technology, in Learning Tree’s Cloud Security Essentials course for some time now, and I mentioned it here last year. This is an area to watch carefully!

Bob Cromwell
