Cornell Systems Lunch

CS 7490 Fall 2019
Friday 12PM, Gates 114

Robbert van Renesse


Sponsored by Facebook

The Systems Lunch is a seminar for discussing recent, interesting papers in the systems area, broadly defined to span operating systems, distributed systems, networking, architecture, databases, and programming languages. The goal is to foster technical discussions among the Cornell systems research community. We meet once a week on Fridays at noon in Gates 114.

The systems lunch is open to all Cornell Ph.D. students interested in systems. First-year graduate students are especially welcome. Non-Ph.D. students have to obtain permission from the instructor. Student participants are expected to sign up for CS 7490, Systems Research Seminar, for one credit.

To join the systems lunch mailing list please send an empty message to cs-systems-lunch-l-request@cornell.edu with the subject line "join". More detailed instructions can be found here.

Links to papers and abstracts below are unlikely to work outside the Cornell CS firewall. If you have trouble viewing them, this is the likely cause.

Date Paper Presenter
August 30 Phantasy: Low-Latency Virtualization-based Fault Tolerance via Asynchronous Prefetching
Shiru Ren, Yunqi Zhang, Lichen Pan, Zhen Xiao (PKU)
IEEE Transactions on Computers, Feb. 2019
Zhen Xiao (PKU)
September 6 From Unikernels to Nabla Containers
Abstract: As industry interest continues in lightweight units of execution for the cloud, Linux containers continue to be plagued by a perceived lack of isolation. At the same time, unikernels have offered an alternative to containers that is not only lightweight but also inherits the isolation properties (and some downsides) of VMs. In this talk I will argue that virtualization is not necessary for unikernels to maintain a level of isolation similar to VMs. I will also describe ongoing efforts toward a new container runtime called Nabla, based on running unikernels as processes, discuss some of the challenges in bridging the gap between containers and unikernels, and highlight future research directions.
Bio: Dan Williams is a Research Staff Member at the IBM T.J. Watson Research Lab in Yorktown Heights, NY, where he works in the cloud research organization on unikernels and secure containers. He was fortunate enough to have been well prepared for a research career at IBM with a PhD from Cornell, where he worked on virtualization in the cloud with Hakim Weatherspoon (advisor) and trusted computing with Gün Sirer and Fred Schneider. This will be his first time returning to the department since January 2013!
Dan Williams (IBM)
September 13 Software Fairness
Modern software contributes to important societal decisions, and yet we know very little about its fairness properties. Can software discriminate? Evidence of software discrimination has been found in systems that recommend criminal sentences, grant access to loans and other financial products, transcribe YouTube videos, translate text, and perform facial recognition. Systems that select what ads to show users can similarly discriminate. For example, a professional social network site could, hypothetically, learn stereotypes and only advertise stereotypically female jobs to women and stereotypically male ones to men. Despite existing evidence of software bias, and significant potential for negative consequences, little technology exists to test software for such bias, to enforce lack of bias, and to learn fair models from potentially biased data. Even defining what it means for software to discriminate is a complex task. I will present recent research that defines software fairness and discrimination; develops a testing-based, causality-capturing method for measuring if and how much software discriminates and provides provable formal guarantees on software fairness; and demonstrates how framing problems as fairness-constrained contextual bandits can reduce not only bias but also the impact of bias. I will also describe open problems in software fairness and how recent advances in machine learning and natural language modeling can help address them. Overall, I will argue that enabling and ensuring software fairness requires solving research challenges across computer science, including in machine learning, software and systems engineering, human-computer interaction, and theoretical computer science. (A toy sketch of the causality-based testing idea mentioned here appears after this entry.)
Yuriy Brun (UMass)
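The causality-capturing test mentioned in the abstract can be pictured with a small sketch: generate random inputs, flip only the protected attribute, and count how often the decision changes. The Python below is purely illustrative; the model, input generator, and attribute names are hypothetical and are not taken from the presenter's tools.

import random

def causal_discrimination_rate(model, gen_input, protected_attr, trials=10000):
    # Estimate how often flipping only the protected attribute changes the decision.
    changed = 0
    for _ in range(trials):
        x = gen_input()                                    # draw a random input profile
        x_flipped = dict(x)
        x_flipped[protected_attr] = 1 - x[protected_attr]  # flip the protected attribute only
        if model(x) != model(x_flipped):                   # decision differs => causal effect
            changed += 1
    return changed / trials

# Toy (deliberately biased) loan-approval model and input generator, both hypothetical:
def toy_model(x):
    return int(x["income"] > 50 or (x["gender"] == 1 and x["income"] > 30))

def gen_input():
    return {"gender": random.randint(0, 1), "income": random.randint(0, 100)}

print(causal_discrimination_rate(toy_model, gen_input, "gender"))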
September 20 Automatically Repairing Network Control Planes Using an Abstract Representation
Aaron Gember-Jacobson (Colgate), Aditya Akella (UWisc-Madison), Ratul Mahajan (Intentionet), and Hongqiang Harry Liu (Microsoft Research)
SOSP 2017
Aaron Gember-Jacobson (Colgate)
September 27 PicNIC: Predictable Virtualized NIC
Praveen Kumar (Cornell University), Nandita Dukkipati, Nathan Lewis, Yi Cui, Yaogong Wang, Chonggang Li, Valas Valancius, Jake Adriaens, Steve Gribble (Google), Nate Foster (Cornell University), Amin Vahdat (Google)
SIGCOMM 2019
Praveen Kumar
October 4 Ocean Vista: Gossip-Based Visibility Control for Speedy Geo-Distributed Transactions
Hua Fan and Wojciech Golab
VLDB 2019
Matt Burke
October 11 Aegean: Replication Beyond the Client-Server Model
Remzi Can Aksoy and Manos Kapritsos (University of Michigan)
SOSP 2019
Manos Kapritsos (UMich)
October 18 Proof-of-Burn

Proof-of-burn has been used as a mechanism to destroy cryptocurrency in a verifiable manner. Despite its well-known use, the mechanism has not previously been formally studied as a primitive. In this paper, we put forth the first cryptographic definition of what a proof-of-burn protocol is. It consists of two functions: first, a function which generates a cryptocurrency address such that when a user sends money to it, the money is irrevocably destroyed; and second, a verification function which checks that an address is really unspendable. We propose the following properties for burn protocols: unspendability, which mandates that an address which verifies correctly as a burn address cannot be used for spending; binding, which allows associating metadata with a particular burn; and uncensorability, which mandates that a burn address is indistinguishable from a regular cryptocurrency address. Our definition captures all previously known proof-of-burn protocols. Next, we design a novel construction for burning which is simple and flexible, making it compatible with all existing popular cryptocurrencies. We prove our scheme secure in the random oracle model.

We then explore the application of destroying value in a legacy cryptocurrency to bootstrap a new one. The user burns coins in the source blockchain and subsequently creates a proof-of-burn, a short string proving that the burn took place, which she then submits to the destination blockchain to be rewarded with a corresponding amount. The user can use a standard wallet to conduct the burn without requiring specialized software, making our scheme user friendly. We propose burn verification mechanisms with different security guarantees, noting that the target blockchain miners do not necessarily need to monitor the source blockchain. (A toy sketch of the two-function burn interface described here follows this entry.)


Dionysis Zindros is a PhD student at the University of Athens advised by Professor Aggelos Kiayias and a researcher at IOHK. His interests include the provable cryptographic design of decentralized system protocols, in particular open blockchain protocols, with a focus on the interoperability of blockchain systems. He has presented at IEEE Security & Privacy, Financial Crypto, Black Hat Europe, Black Hat Asia, and Real World Crypto. He is the co-founder of OpenBazaar. In the past, he worked at the incident response development team at Google in Zurich and at the product security team at Twitter in San Francisco. Dionysis holds an Electrical and Computer Engineering degree from the National Technical University of Athens.

Dionysis Zindros
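The two-function interface defined in the abstract can be sketched in a few lines of Python: one function derives a burn address from arbitrary metadata (a tag), and one verifies that a given address is the burn address for that tag. The hash-and-perturb construction below is only a toy stand-in, not the paper's actual scheme; all function names are illustrative.

import hashlib

def gen_burn_addr(tag: bytes) -> bytes:
    # Derive an address from arbitrary metadata; funds sent to it are destroyed.
    digest = bytearray(hashlib.sha256(tag).digest())
    digest[-1] ^= 0x01              # perturb the hash so no known key maps to this address
    return bytes(digest)            # binding: the address commits to `tag`

def burn_verify(tag: bytes, addr: bytes) -> bool:
    # Check that `addr` really is the burn address for `tag` (hence unspendable).
    return addr == gen_burn_addr(tag)

addr = gen_burn_addr(b"bootstrap account on destination chain")
assert burn_verify(b"bootstrap account on destination chain", addr)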
October 25 StrongChain: Transparent and Collaborative Proof-of-Work Consensus
Pawel Szalachowski, Daniel Reijsbergen, and Ivan Homoliak, Singapore University of Technology and Design (SUTD); Siwei Sun, Institute of Information Engineering and DCS Center, Chinese Academy of Sciences
USENIX Security 2019
Danny Adams
November 1 Accurate and Efficient Off-CPU Performance Analysis
For the purpose of performance optimization, we often need to identify events that limit the throughput of an application or create long latencies. Such events fall into two types: events that execute instructions on the CPU (i.e., on-CPU events) and events that wait for other events (i.e., off-CPU events); both can lead to performance problems. While on-CPU analysis is quite well studied, we find that existing off-CPU analysis methods are either inaccurate or incomplete. Our work develops theoretical models to accurately capture problematic off-CPU events and methods to efficiently record the corresponding events in both the application and the OS kernel. In this talk, I will present two of our recent works: wPerf, which identifies off-CPU events limiting the throughput of an application, and TailMRI, which identifies off-CPU events leading to tail latencies. Our evaluation shows that, by optimizing the problems reported by these tools, we can achieve up to a 4.8x improvement in throughput and up to a 60x reduction in tail latencies in the applications we have studied. (A toy illustration of the on-CPU/off-CPU distinction follows this entry.)
Yang Wang received his bachelor's and master's degrees in computer science and technology from Tsinghua University, in 2005 and 2008 respectively, and his doctorate in computer science from the University of Texas at Austin in 2014 (advisors: Dr. Lorenzo Alvisi and Dr. Mike Dahlin). He is now an assistant professor in the Department of Computer Science and Engineering at the Ohio State University. His current research interests include distributed systems, fault tolerance, scalability, and performance analysis.
Yang Wang (OSU)
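As a minimal illustration of the on-CPU/off-CPU distinction the abstract draws, the Python sketch below records how long each thread spends blocked on a lock (off-CPU, waiting) as opposed to executing. It is a toy instrumentation example, not wPerf or TailMRI; all names and numbers are made up.

import threading, time
from collections import defaultdict

wait_time = defaultdict(float)          # per-thread off-CPU (blocked) time, in seconds

class InstrumentedLock:
    # A lock wrapper that records how long each acquirer waits before getting the lock.
    def __init__(self):
        self._lock = threading.Lock()
    def __enter__(self):
        start = time.perf_counter()
        self._lock.acquire()            # the thread is off-CPU while blocked here
        wait_time[threading.get_ident()] += time.perf_counter() - start
    def __exit__(self, *exc):
        self._lock.release()

lock = InstrumentedLock()

def worker():
    for _ in range(100):
        with lock:
            time.sleep(0.001)           # stand-in for work done while holding the lock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print({tid: round(s, 3) for tid, s in wait_time.items()})   # which threads waited longest?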
November 8 Datacenter Admission Control Protocol (DCACP)
Data center transport has two goals: low latency for short flows and high network utilization. Observing that transport designs based on rate control are ineffective at achieving low latency for short flows, recent transport designs use scheduling and/or admission control of packets (e.g., pFabric, pHost, NDP, Homa). These designs can achieve near-optimal latency for short flows on average and even at the tail. Unfortunately, we show that all of these designs can lead to near-zero throughput for the realistic case of workloads that mix permutation, incast, and outcast traffic patterns. We present the Data Center Admission Control Protocol (DCACP), a transport design that builds upon the classical switch scheduling literature to orchestrate admission and scheduling of individual packets into the network. DCACP operates directly on commodity hardware and achieves near-optimal network utilization (with worst-case guarantees) while maintaining the short-flow performance of the above designs.
Qizhe Cai
November 15 DZQ: Lossless Ethernet Without PFC
Saksham Agarwal, Qizhe Cai, Rachit Agarwal, David Shmoys, and Amin Vahdat
Saksham Agarwal
November 22 ACSU Luncheon, no meeting.
November 29 Thanksgiving Break, no meeting.
December 6 i10: A Remote Storage I/O Stack for High-Performance Network and Storage Hardware
Jaehyun Hwang, Qizhe Cai, Rachit Agarwal, and Ao Tang
Jaehyun Hwang