"The nation's security and economy rely on infrastructures for communication, finance, energy distribution and transportation - all increasingly dependent on networked information systems. When these networked information systems perform badly or do not work at all, they put life, liberty and property at risk." --National Research Council, Trust in Cyberspace (F.B. Schneider, editor)
Cornell is a leader on a broad range of research issues related to computer security. We tackle the fundamental problem of ensuring the security and reliability of our global critical computing infrastructure. Cornell faculty are also visible on the national policy scene, through the NSF-funded TRUST Science and Technology Center and through service on advisory boards for DARPA, DoD, and Microsoft.
Currently, we have many active research projects aimed at developing a science and technology base to enhance information assurance and ensure the trustworthiness of networked information systems. These project areas range from system and network security to reliability and assurance. The breadth and depth of the projects undertaken at Cornell are a direct result of the well-integrated, diverse and collegial environment that our department provides. Our work draws its strength from the synergy between the groups working on security, programming languages, operating systems, logic and formal methods.
Projects
Frenetic. Led by Nate Foster, this project is developing high-level languages for programming distributed collections of network switches. Frenetic makes it possible for developers to specify the behavior of an entire network using a single program that a compiler translates to low-level code that can be executed on each switch. This provides an exciting opportunity to enforce security, reliability, and performance guarantees using language-based techniques.
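The compilation step described above can be illustrated with a small sketch. This is not the actual Frenetic language or API; the rule format, the `"*"` wildcard, and the function names are invented for the example. It shows only the core idea: one network-wide program is translated into a low-level flow table per switch.

```python
# Hypothetical sketch (not the real Frenetic API): a network-wide
# policy is a list of (switch, match-fields, action) rules, and a
# "compiler" turns it into one low-level flow table per switch.

def network_policy():
    # '*' in the switch field means "install on every switch".
    return [
        ("*",  {"tcp_dst": 22},        "drop"),        # block SSH everywhere
        ("s1", {"ip_dst": "10.0.0.2"}, "fwd(port2)"),
        ("s2", {"ip_dst": "10.0.0.2"}, "fwd(port1)"),
    ]

def compile_policy(rules, switches):
    """Translate the single network-wide program into per-switch tables."""
    tables = {sw: [] for sw in switches}
    for sw, match, action in rules:
        targets = switches if sw == "*" else [sw]
        for t in targets:
            tables[t].append((match, action))
    return tables

tables = compile_policy(network_policy(), ["s1", "s2"])
```

Because the whole network is described by one program, a property such as "SSH is blocked everywhere" can be enforced once, at the language level, rather than configured switch by switch.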
Nexus. Emin Gun Sirer and Fred B. Schneider are leading the development of Nexus, a new operating system for trusted computing, built from scratch. Newly emerging secure coprocessors make it possible to build systems with unprecedented security guarantees; unfortunately, past efforts to achieve these properties have yielded systems that restrict the user's control over her local machine. Nexus introduces new system abstractions, mechanisms, and a novel system architecture for taking advantage of secure computing hardware, enabling users to leverage the security guarantees of secure coprocessors without giving up flexibility and control over the local software configuration. The resulting system enables novel applications, including Fauxbook, a social networking application that provides users with strong privacy guarantees even when the developers act as adversaries.
Netquery. Sirer and Schneider have also been looking at how to use the hardware capabilities of secure coprocessors to build next-generation networks. Traditionally, networks have consisted of a data plane and a control plane; NetQuery adds a practical, trustworthy knowledge plane for wide-area networks. This knowledge plane disseminates properties of network elements, which in turn enables novel applications that reason about what the network is providing and adapt their behavior accordingly. Such reasoning can involve network properties that are hard to measure, for example the number and capacity of failover links. Further, NetQuery uses trustworthy computing techniques so that ISPs can reveal selected properties of their networks without exposing proprietary or sensitive data.
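The idea of selectively revealing properties can be sketched as follows. The class name, the tuple format, and the export flag are all illustrative assumptions, not NetQuery's actual interface; the sketch shows only the policy split between what a network element knows and what outside applications may query.

```python
# Hypothetical sketch of a knowledge plane: network elements export
# (entity, attribute, value) tuples, and the operator decides which
# attributes outside applications may read.  Not NetQuery's real API.

class KnowledgePlane:
    def __init__(self):
        self._tuples = {}        # (entity, attribute) -> value
        self._exported = set()   # attributes the ISP agrees to reveal

    def publish(self, entity, attribute, value, exported=False):
        self._tuples[(entity, attribute)] = value
        if exported:
            self._exported.add(attribute)

    def query(self, entity, attribute):
        """External applications see only exported attributes."""
        if attribute not in self._exported:
            return None          # proprietary data stays hidden
        return self._tuples.get((entity, attribute))

kp = KnowledgePlane()
kp.publish("router1", "failover_links", 2, exported=True)
kp.publish("router1", "peering_contract", "confidential", exported=False)
```

In the real system, trustworthy computing hardware is what lets a remote application believe the exported answers without the ISP revealing everything else.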
ConCoord. There are also other ongoing projects in cloud computing and fault tolerance that impinge on security. For example, the ConCoord project, a collaborative effort by Emin Gun Sirer and Robbert van Renesse, has been examining how to provide fault tolerance for cloud computing applications. Whereas Google's Chubby lock service and Yahoo's ZooKeeper provide a file-like API to replicated passive state, ConCoord provides an object-oriented interface to a transparently replicated, active object. This difference in API enables ConCoord to achieve higher performance than ZooKeeper, widely regarded as the fastest and most broadly adopted fault-tolerance service.
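The API difference can be made concrete with a toy sketch. The real ConCoord orders method invocations with a consensus protocol across machines; here a trivial in-process proxy stands in for that layer, and the class names are invented for illustration. The point is that the programmer writes an ordinary object, not file operations on znodes.

```python
# Illustrative sketch of the object-oriented model: clients invoke
# methods on a plain object, and a replication layer applies every
# call in the same order at every replica.  The consensus machinery
# of the real system is replaced here by a trivial loop.

class Counter:
    """A plain object the programmer writes; no replication logic."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
        return self.value

class ReplicatedProxy:
    def __init__(self, cls, n_replicas=3):
        self.replicas = [cls() for _ in range(n_replicas)]
    def invoke(self, method, *args):
        # Apply the operation at every replica in the same order.
        results = [getattr(r, method)(*args) for r in self.replicas]
        assert len(set(results)) == 1    # replicas stay in sync
        return results[0]

counter = ReplicatedProxy(Counter)
counter.invoke("increment")              # all replicas now hold 1
```

With a file-like API, the same counter would have to be encoded as reads and writes to a shared path; the object interface lets replication stay transparent to the application.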
Jif. With Jif, Andrew Myers pioneered the addition of security types to a real programming language (Java). The Jif compiler tracks the correspondence between data and the information-flow policies that restrict its use, so that security is enforced in an end-to-end fashion through a combination of compile-time and run-time mechanisms. Jif is at the core of the new Fabric system developed by Myers' group, which allows distributed systems to be programmed as though they run on a single computer. Security enforcement in Fabric is decentralized, yet Fabric nodes and programs from different and mutually distrusting security domains can securely share information, computation, and code. Another Jif-based system is Swift, which automatically partitions web applications into server-side Java and client-side JavaScript while enforcing secure information flow. Jif was also used to develop Civitas, a secure voting system. It is the first voting-system implementation that allows voters to vote securely while provably providing universal verifiability, voter verifiability, anonymity, and coercion resistance. It achieves this by combining sophisticated cryptography with language-based security methods.
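Jif enforces information-flow policies statically, at compile time; the following is a minimal *dynamic* analogue in Python, intended only to illustrate what the policies mean. The two-level lattice, the `Labeled` wrapper, and the `output` sink are all invented for the example.

```python
# Dynamic information-flow sketch (an analogue of what Jif checks
# statically): each value carries a label, labels propagate through
# computation, and data may never flow to a less-secret sink.

PUBLIC, SECRET = 0, 1

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label
    def __add__(self, other):
        # The result is as secret as the most secret input.
        return Labeled(self.value + other.value,
                       max(self.label, other.label))

def output(channel_label, data):
    """Release data on a channel; reject illegal downward flows."""
    if data.label > channel_label:
        raise PermissionError("information-flow violation")
    return data.value

salary = Labeled(90000, SECRET)
bonus  = Labeled(5000, PUBLIC)
total  = salary + bonus        # tainted by salary, so labeled SECRET
```

Attempting `output(PUBLIC, total)` fails, because the secret salary influenced `total`. In Jif the analogous error is reported by the compiler, before the program ever runs.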
Timing channels. Myers has recently been studying new methods for controlling timing channels at the operating-system and programming-language levels. The technique of predictive mitigation provably controls how much information leaks via timing by making timing conform to predictions generated using only public information. This technique can be applied at the language level to control leaks through low-level hardware features.
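A simplified version of predictive mitigation can be sketched in a few lines. The doubling back-off below is one common instantiation of the idea, chosen for the example; observable release times depend on the secret-influenced finish times only through the (rare) mispredictions, which is what bounds the leakage.

```python
# Sketch of predictive mitigation: events are released only at
# predicted times.  When an event misses its prediction, the predictor
# backs off (here, by doubling the quantum), so the number of
# observable "surprises" grows slowly no matter what the secrets are.

def mitigate(finish_times, initial_quantum=1):
    """finish_times: actual (possibly secret-dependent) completion
    times of each event.  Returns the publicly observable release
    times, each no earlier than the corresponding finish time."""
    quantum = initial_quantum
    predicted = quantum
    releases = []
    for finish in finish_times:
        while finish > predicted:      # misprediction: back off
            quantum *= 2
            predicted += quantum
        releases.append(predicted)     # release at the predicted time
        predicted += quantum           # schedule the next event
    return releases
```

For example, events that all finish instantly are released on the regular schedule 1, 2, 3, ..., while a slow third event forces a visible but bounded adjustment of the schedule.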
Isis2. The Isis2 project is exploring roles for security in scalable services for the first tier of modern cloud-computing data centers, where elasticity and rapid response have often been seen as more pressing needs than high assurance or security. Isis2 uses a variety of cryptographic tools to ensure that data replicated within such services cannot be stolen by applications sharing the same cloud that have gained the ability to spy on the network. The technology is packaged as an easy-to-use software library, downloadable from Cornell under a BSD license, that requires little more of the developer than the skills needed to create an interactive GUI. Current research focuses on the scalability and performance of the technology, but in the longer term we want to explore high assurance for the "whole story" in cloud settings: the client platform, the Internet layer, and the service within the cloud itself.
Joe Halpern is looking at logics that can deal with both qualitative and quantitative aspects of security. In addition, he is applying game theory to model aspects of security, extending standard solution concepts in game theory so that they can deal with faulty players and resource-bounded players.
ECC. This project, led by Dexter Kozen, addresses issues of performance and ease of implementation in the verification of basic safety properties for untrusted mobile code. In contrast to other, more general approaches, it sacrifices language and implementation independence for performance and succinctness of certificates. Verification takes place at the level of native code and does not require just-in-time compilation. It ensures a basic but nontrivial level of code safety, including control-flow safety, memory safety, and stack safety. A prototype has been implemented for Scheme compiled to x86. Current work includes applying this technology to boot-time drivers for plug-in components in the context of the IEEE Open Firmware standard.
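The flavor of control-flow safety checking can be shown on a toy instruction set. The opcodes and program encoding below are invented for the example (the real system checks actual x86 output of a Scheme compiler), but the check has the same shape: inspect the native code directly, with no interpretation or just-in-time compilation, and reject any program whose jumps could escape the code region.

```python
# Toy verifier for one ECC-style property, control-flow safety:
# every jump target must be a valid instruction index inside the
# code region.  Instruction set and encoding are invented.

def verify_control_flow(code):
    """code: list of (opcode, operand) pairs; 'jmp'/'jz' operands
    are instruction indices.  True iff every jump stays in bounds."""
    for op, arg in code:
        if op in ("jmp", "jz") and not (0 <= arg < len(code)):
            return False
    return True

safe   = [("mov", 1), ("jz", 3), ("add", 2), ("ret", None)]
unsafe = [("mov", 1), ("jmp", 99)]       # jumps past the end of code
```

A certificate produced by the compiler can make such checks cheap by recording, for instance, where the jump targets are, so the verifier only confirms rather than reconstructs them.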
Fault-tolerant distributed systems, algorithms, and protocols are notoriously hard to build. Going from a specification to an implementation involves many subtle steps that are easy to get wrong. Moreover, fault-tolerant algorithms require many sources of diversity in order to survive a variety of failures, yet programmers, even different ones, tend to make the same mistakes when implementing code from a specification. Van Renesse and Schneider are using stepwise refinement to derive distributed algorithms from specifications, where each step can be formally checked. Moreover, different design choices are often possible at each refinement step, leading to sources of diversity that can be exploited for fault tolerance. Bickford, Constable, and Van Renesse are working to automate most of this process using Nuprl and are already able to synthesize a variety of executable consensus protocols.
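The refinement idea can be illustrated with a deliberately tiny example. The "specification" here is a single consensus-like validity condition, and the two "refinements" are design choices that both satisfy it; everything is invented for illustration (the actual work derives full consensus protocols, with each refinement step checked in Nuprl).

```python
# Minimal illustration of stepwise refinement: a specification admits
# many behaviors, each refinement commits to one, and every refinement
# is checked against the spec.  Different choices yield diverse
# implementations of the same specification.

def spec_decide(proposals):
    """Specification: the decision must be some proposed value."""
    def satisfies(decision):
        return decision in proposals
    return satisfies

# Two refinements that make different design choices:
def decide_min(proposals):       # choice A: smallest proposal wins
    return min(proposals)

def decide_first(proposals):     # choice B: first proposer wins
    return proposals[0]

proposals = [7, 3, 9]
ok = all(spec_decide(proposals)(impl(proposals))
         for impl in (decide_min, decide_first))
```

Running both implementations side by side is exactly the kind of diversity that helps a replicated system survive bugs that a single implementation, written twice, would likely share.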




