March 22, 2014

Isolated execution, or why open CPUs are so important.

With all the recent hysteria around the NSA and spy trojans of every sort, it’s a good time to think about truly secure computation. Is it feasible at all? Of course you can use a variety of fancy anti-malware software and even run some sort of isolation-oriented OS, like Joanna Rutkowska’s popular Qubes OS. But does that give you real security?

One of the biggest trends right now is isolated execution. The idea is very simple: you run some payload code (normally executed as a user-level application) with a fairly strong level of isolation from other applications, or even from supervisor software running at a more privileged level. Sounds great, right? There are a number of problems with this approach — for example, if your application is isolated from the OS, how does it get any data coming back from the OS in response to a system call? These problems have been more or less solved in research papers.
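To make the system-call problem concrete, here is a minimal sketch, with all names invented for illustration: an isolated payload cannot trust data the untrusted OS hands back, so a small shim copies each result into protected memory and sanity-checks it before the payload ever sees it.

```python
# Sketch of the trust boundary between an isolated payload and the OS.
# Everything here is illustrative, not a real isolation mechanism.

def untrusted_os_read(fd, count):
    """Stands in for a read() syscall serviced by the untrusted OS.
    A malicious OS may return more bytes than were asked for."""
    return b"A" * (count + 16)

def shielded_read(fd, count, syscall=untrusted_os_read):
    data = syscall(fd, count)
    # Validate before the result crosses into the isolated world:
    if len(data) > count:
        raise ValueError("OS returned more bytes than requested")
    return bytes(data)  # copy, so the OS cannot modify it afterwards

try:
    shielded_read(0, 32)
except ValueError as err:
    print("rejected:", err)
```

Real research prototypes do essentially this (copying and validating every syscall result at the boundary), only in the runtime that shields the application rather than in the application itself.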

Many of these solutions still rely on some critical piece of software, normally a hypervisor. To justify this, they usually argue that the hypervisor can in theory be made small enough to be formally verified, or that the target system can use a measured-boot mechanism provided by a TPM. Unfortunately, no one has demonstrated a working formally verified hypervisor yet. And why should you trust a TPM chip, fabricated somewhere in China by some random third party, to perform secure boot for you? Moreover, successful attacks against TPMs have already been demonstrated.

It looks like we need to do security in hardware. A reasonable approach would be to limit the trusted computing base (TCB) to the most essential hardware: the CPU itself and possibly the memory, the bus, and so on. The newest research papers, however, tend to exclude even the RAM from the TCB. Summarizing: we have decided we need hardware security (isolation) features, and we need them implemented in the hardware itself. That’s where Intel’s big new thing, called SGX, comes in. It is a hardware isolated-execution mechanism: it gives your applications the ability to run completely isolated from the rest of the system, even from the OS! It also provides a mechanism to share some data between the trusted (isolated) and untrusted parts of an application. Unfortunately, Intel SGX is not available yet. Rumor has it the first chips supporting it will ship starting in December 2014. We’ll see.
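The core guarantee of an SGX-style mechanism can be sketched in a few lines, assuming an invented `Memory` class: memory assigned to an enclave is opaque to everything outside it, including supervisor-level code. Real SGX enforces this in the CPU’s memory-access path, not in software like this.

```python
# Toy model of enclave memory protection. Illustrative only.

class Memory:
    def __init__(self, size):
        self.cells = [0] * size
        self.enclave_ranges = []  # [lo, hi) regions reserved for enclaves

    def read(self, addr, from_enclave=False):
        inside = any(lo <= addr < hi for lo, hi in self.enclave_ranges)
        if inside and not from_enclave:
            # Outside code -- even the OS or a hypervisor -- cannot
            # observe enclave memory. Real SGX aborts such accesses.
            raise PermissionError("enclave memory is opaque from outside")
        return self.cells[addr]

mem = Memory(64)
mem.enclave_ranges.append((16, 32))     # carve out an enclave region
print(mem.read(0))                      # ordinary memory: readable
print(mem.read(20, from_enclave=True))  # the enclave can read itself
```

The “data sharing” the text mentions then corresponds to addresses outside the enclave ranges, which both sides can access.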

Intel SGX is a giant step toward secure computation. But there is also a downside… why should you trust Intel? Or any other chip vendor? With the level of complexity in modern processors and the number of transistors in them, there could easily be security holes, or even intentionally planted backdoors. One very interesting research paper shows how easy it is to design a hardware backdoor and build it into a processor — and the malicious mechanism is not something limited, targeting one specific algorithm or whatnot. It is a flexible, configurable execution mode. You can outline a basic hardware backdoor design yourself: say, some exact instruction sequence triggers a mechanism in the processor that switches the currently executing program to supervisor mode. Even such a simple, tiny backdoor is enough to do a lot of damage with a carefully constructed attack.
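The backdoor design outlined above can be simulated in a few lines — a toy interpreter, not real hardware, with the opcodes and trigger sequence invented for this sketch: the “CPU” silently watches the instruction stream and, after seeing a specific magic sequence, flips the running program into supervisor mode.

```python
# Toy model of an instruction-sequence-triggered hardware backdoor.

MAGIC_SEQUENCE = ["xor r7, r7", "add r7, 0x1337", "nop", "nop"]

class ToyCPU:
    def __init__(self):
        self.supervisor = False  # current privilege level
        self.recent = []         # sliding window of executed instructions

    def execute(self, instruction):
        # Normal execution would happen here; we model only the trigger.
        self.recent.append(instruction)
        self.recent = self.recent[-len(MAGIC_SEQUENCE):]
        if self.recent == MAGIC_SEQUENCE:
            # The hidden mechanism: no fault, no log entry, just a silent
            # privilege escalation for whoever knows the sequence.
            self.supervisor = True

cpu = ToyCPU()
for insn in ["mov r1, 5", "xor r7, r7", "add r7, 0x1337", "nop", "nop"]:
    cpu.execute(insn)
print(cpu.supervisor)  # True: a user program now runs in supervisor mode
```

In real silicon the comparator and the mode flip would cost a handful of gates among billions of transistors — which is exactly why such a backdoor is so hard to find by inspecting the finished chip.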

So, what’s the possible solution? How can you check that there is no backdoor in your CPU? In the software world this is usually done by source-code audit. Opening the source code of an application makes it possible for thousands of people to audit it. Research suggests that, in general, open-source software has fewer bugs than comparable proprietary software. Obviously it’s very hard to embed a backdoor into a widely used open-source project.

How can we apply that to the hardware world? Many people will be surprised to learn that open-source processors exist — several of them. The best-known is probably OpenSPARC, but that project seems to have died after Oracle bought Sun Microsystems. Meanwhile, there is a very nice project called OpenCores. They host all kinds of open-source hardware cores, including general-purpose CPUs and specialized cores such as encryption engines. The most notable is the OpenRISC processor core, a full-featured RISC processor. Yes, you can really do things on it: in one video, a guy plays a game on this processor while it runs Linux.

It’s so cool that you can download a processor’s source code written in Verilog, modify it, put it on an FPGA chip and run it! And what does that give us in terms of security? A great framework! Just imagine: you can implement any security feature in hardware and be sure that no one has planted a backdoor in your chip. There is no underlying layer for an attacker to hide in. Or, at the very least, you can download the processor’s source code and audit it to make sure there is no bad code in it. I think this should be our future: open-source programs running on open-source hardware, with everything available for audit. It’s the only way to real security.
