Cornell Systems Lunch
CS 7490 Fall 2020
The Systems Lunch is a seminar for discussing recent, interesting papers in the systems area, broadly defined to span operating systems, distributed systems, networking, architecture, databases, and programming languages. The goal is to foster technical discussions among the Cornell systems research community. We meet once a week on Fridays at 11:45, online via Zoom.
The systems lunch is open to all Cornell Ph.D. students interested in systems. First-year graduate students are especially welcome. Non-Ph.D. students have to obtain permission from the instructor. Student participants are expected to sign up for CS 7490, Systems Research Seminar, for one credit.
Links to papers and abstracts below are unlikely to work outside the Cornell CS firewall. If you have trouble viewing them, this is the likely cause.
The Zoom link is https://cornell.zoom.us/j/99550502984?pwd=Y28xdlhBSmJKak9TdGRnN3UveWNudz09 (accessible from .cornell.edu and select other domains).
|September 4||Organizational meeting
|September 11||HotStuff: BFT Consensus with Linearity and Responsiveness
Maofan (Ted) Yin (Cornell), Dahlia Malkhi (VMware), Michael K. Reiter (UNC), Guy Golan Gueta (VMware), Ittai Abraham (VMware)
|Haobin Ni (video, slides)|
|September 18||Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider
Mohammad Shahrad, Rodrigo Fonseca, Íñigo Goiri, Gohar Chaudhry, Paul Batum, Jason Cooke, Eduardo Laureano, Colby Tresness, Mark Russinovich, and Ricardo Bianchini, Microsoft Azure and Microsoft Research
USENIX ATC 20
|Yueying Li and Burcu Canakci (video, slides)|
|September 25||Specification and verification in the field: Applying formal methods to BPF just-in-time compilers in the Linux kernel
This talk presents our ongoing efforts to apply formal methods to a critical component of the Linux kernel: the just-in-time compilers ("JITs") for the extended Berkeley Packet Filter (BPF). Building on our automated verification framework Serval, we have developed Jitterbug, a tool for writing JITs and proving them correct. We have used Jitterbug to find more than 30 new bugs in the BPF JITs for the x86-32, x86-64, arm32, arm64, and riscv64 architectures, and to develop a new BPF JIT for riscv32, RISC-V compressed-instruction support for riscv64, and new optimizations in existing JITs. All of these changes have been upstreamed to the Linux kernel.
|Xi Wang (UW)|
|October 2||Computational wireless sensing at scale
Computational wireless sensing is an exciting field of research where we use wireless signals from everyday computing devices to enable sensing. The key challenge is to enable new sensing capabilities that can be deployed at scale and have an impact in the real world. In this talk, I will show how to enable computational wireless sensing at scale by leveraging ubiquitous hardware such as smartphones. Specifically, I will present core technology that can wirelessly sense motion and physiological signals such as breathing using just a smartphone, in a contactless manner. To achieve this, we transform smartphones into active sonar systems. I will show how we can use this technology to detect potentially life-threatening conditions such as opioid overdoses as well as sleep apnea. Finally, I will talk about my work that leverages new hardware trends in micro-controllers and low power wireless backscatter technologies to enable sensing applications ranging from object tracking to sensing using live insects such as bees.
|Rajalakshmi Nandakumar (video)|
|October 9||Scaling AI Systems with Optical I/O
The emergence of optical I/O chiplets enables compute/memory chips to communicate with several Tbps bandwidth. Many technology trends point to the arrival of optical I/O chiplets as a key industry inflection point to realize fully disaggregated systems. In this talk, I will focus on the potential of optical I/O-enabled accelerators for building high bandwidth interconnects tailored for distributed machine learning training. Our goal is to scale the state-of-the-art ML training platforms, such as NVIDIA DGX, from a few tightly connected GPUs in one package to hundreds of GPUs while maintaining Tbps communication bandwidth across the chips. Our design enables accelerating the training time of popular ML models using a device placement algorithm that partitions the training job with data, model, and pipeline parallelism across nodes while ensuring a sparse and local communication pattern that can be supported efficiently on the interconnect.
Bio: Manya Ghobadi is an assistant professor in the EECS department at MIT. Before MIT, she was a researcher at Microsoft Research and a software engineer at Google Platforms. Manya is a computer systems researcher with a networking focus and has worked on a broad set of topics, including data center networking, optical networks, transport protocols, and network measurement. Her work has won best dataset and best paper awards at the ACM Internet Measurement Conference (IMC), as well as a Google Research excellent paper award.
|Manya Ghobadi (MIT)|
|October 23||Fast and secure global payments with Stellar
Marta Lokhava, Giuliano Losa, David Mazières, Graydon Hoare, Nicolas Barry, Eli Gafni, Jonathan Jove, Rafael Malinowsky, and Jed McCaleb (Stellar Development Foundation)
|Florian Suri-Payer and Ted Yin|
|October 30||Approximate Partition Selection for Big-Data Workloads using Summary Statistics
Kexin Rong, Yao Lu, Peter Bailis, Srikanth Kandula, Philip Levis (Stanford and Microsoft)
|Saehan Jo and Junxiong Wang|
|November 6||CrossFS: A Cross-layered Direct-Access File System
Yujie Ren, Rutgers University; Changwoo Min, Virginia Tech; Sudarsun Kannan, Rutgers University
|Yu-Ju Huang and Kevin Negy|
|November 13||Stream Processing at Google Scale: Challenges in hosting a Google-wide service
Abstract: The core data processing team at Google has more than a decade of experience running stream processing systems that host many of Google’s revenue-critical pipelines. Our systems provide strong correctness properties, such as exactly-once semantics and globally consistent output, in the presence of various kinds of failures --- notably the failure of an entire data center. In this talk, I will present the set of design principles that have enabled us to run Google-scale data pipelines with strict latency and completeness SLOs. I will also walk through two real production system architectures that incorporate these principles to support applications such as aggregation and streaming joins at very large scale.
Bio: Venugopalan “Rama” Ramasubramanian is a Senior Staff Software Engineer and Tech Lead Manager on the Core Data Processing team at Google. He is a lead architect, designer, and implementer of a large-scale stream processing service. His team is responsible for building and maintaining the infrastructure for some of Google’s most revenue-critical data pipelines. Before joining Google, he was a research scientist at Microsoft Research; he holds a PhD in Computer Science from Cornell University.
|Rama Venu (Google)|
|November 20||Semi-Final Exams, no meeting.|
|November 27||Thanksgiving Break, no meeting.|
|December 4||Oracle Autonomous DB: A path to the future of databases
The new Oracle Autonomous Database automates tasks that were typically performed by DBAs or experienced users, including Auto-Indexing, Auto-Materialized Views, Auto-Zonemaps, and Auto-Partitioning. These autonomous features eliminate the need for manual administration, so users can concentrate on running their applications rather than on tuning them. In this talk, we will describe the tools needed to automate these tasks, give an overview of the features and their main challenges, and discuss the machine learning technology applied in some of their implementations, in particular Auto-Materialized Views. In addition, we will discuss some challenging areas of databases where automation could be very beneficial, including automatic performance improvement of series of SQL statements generated by typical multi-statement reports.
Bio: Andrew Witkowski is a Vice President at Oracle Corporation. He holds an M.S. in Electrical Engineering and a Ph.D. in Computer Science. He manages the top layer of Oracle query processing, including the Optimizer, execution of SQL statements, External Tables, Parallel Query, Oracle's procedural language PL/SQL, Materialized Views, and Online Redefinition. He has worked on many SQL extensions, including Analytic Functions, SQL Spreadsheet, SQL Pattern Matching, Multi-Dimensional Zonemaps, and External and Hybrid Partitioned Tables. He has published several papers at the SIGMOD and VLDB conferences. He holds 61 US patents, including 8 pending. He previously worked at Teradata and the Jet Propulsion Laboratory.
|Andrew Witkowski (Oracle)|
|December 11||Rethinking networking for an "Internet from space"
Abstract: Upstart space companies are building massive constellations of low-flying satellites to provide Internet service. These developments comprise "one giant leap" in Internet infrastructure, promising global coverage and lower latency. However, fully exploiting the potential of such satellite constellations requires tackling their inherent challenges: thousands of low-Earth orbit satellites travel at high velocity relative to each other, and relative to terrestrial ground stations. The resulting highly-dynamic connectivity is at odds with the Internet design primitives, which assume a largely static core infrastructure. Virtually every aspect of Internet design --- physical interconnection, routing, congestion control, and application behavior --- will need substantial rethinking to integrate this new building block.
This talk will focus on one such problem, that of deciding which satellites should be connected to which others to form a performant network. I will draw out why traditional tools for network design are ill-suited here, and show how a simple, novel approach can improve network throughput by 2x compared to the standard method for interconnecting satellites. Lastly, I will highlight several open questions, and discuss our ongoing work on building tools to explore them.
Bio: Ankit Singla is an assistant professor at the Department of Computer Science at ETH Zürich. He holds a PhD from the University of Illinois at Urbana-Champaign. Ankit works on the design and analysis of large-scale networks like data center networks and the Internet. His work has received the Best Paper Award at IMC 2020, Best Dataset Awards at PAM 2017 and 2020, and the IRTF Applied Networking Research Prize for 2020. He is also the recipient of a 2018 Google Faculty Research Award.
|Ankit Singla (ETH Zurich)|