Parallel Computing: From Luxury to Sine Qua Non — Examples from Biology and Machine Learning (via Zoom)
Abstract: Parallel computing, also known as high-performance computing (HPC), dates back to the late 1950s, with advances in supercomputing in the 1960s and 1970s. HPC has been a pillar of science since its inception, driving discoveries in simulation-dominated areas such as fluid dynamics and climate modeling that require large-scale computational resources. Until recently, parallel computing was considered a luxury rather than a necessity in most areas not involving simulation. Today, that view is changing dramatically. The flood of data in areas such as genomics and machine learning, and the need for high-quality outcomes, which often increases algorithmic complexity, pose enormous computational challenges. From genomics to machine learning, parallel computing is no longer just an option but is essential to advancing these fields through new HPC approaches. In this presentation, we provide an overview of parallel computing and show how this area of research is undergoing a revival, with examples from genomics and machine learning: long-read de novo assembly and graph neural networks, respectively.
Bio: Giulia Guidi is a project scientist in the Applied Math and Computational Sciences Division at Lawrence Berkeley National Laboratory. In January 2023, she will join Cornell University as an Assistant Professor of Computer Science. Giulia's research focuses on parallel and distributed computing, in particular on challenges at the intersection of large-scale computational biology algorithms and software infrastructures. She applies high-performance computing techniques, computer systems, and architectures to make genomics data processing faster and more flexible for the community and to enable higher-quality bioinformatics and biomedical research. She also develops software to make cloud infrastructures more accessible for high-performance scientific computing. Giulia received her PhD in Computer Science from UC Berkeley in August 2022 under the supervision of Aydın Buluç and Kathy Yelick, and she was awarded a SIGHPC Computational & Data Science Fellowship in 2020.