Date Posted: 11/23/2020

Alexander "Sasha" Rush, associate professor of computer science at Cornell Tech, and his coauthors won the Best Demo Award at the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) for their paper "Transformers: State-of-the-Art Natural Language Processing." The authors are Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush.

Abstract: 

Recent progress in natural language processing has been driven by advances in both model architecture and model pretraining. Transformer architectures have facilitated building higher-capacity models and pretraining has made it possible to effectively utilize this capacity for a wide variety of tasks. Transformers is an open-source library with the goal of opening up these advances to the wider machine learning community. The library consists of carefully engineered state-of-the-art Transformer architectures under a unified API. Backing this library is a curated collection of pretrained models made by and available for the community. Transformers is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments.
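
To make the unified API concrete, here is a minimal usage sketch in Python (the sentiment-analysis task and the bert-base-uncased checkpoint are illustrative choices, not examples drawn from the paper):

# High-level pipeline API: one call downloads a pretrained model and runs inference.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Transformers won the Best Demo Award at EMNLP 2020."))

# The same hub of pretrained weights is reachable through the lower-level
# Auto classes, which researchers can inspect, extend, or fine-tune.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)  # hidden states from the pretrained encoder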

______________________________________

Rush won the Best Paper Award at the 2020 ACM Design Automation Conference (DAC) for "AdaptivFloat: A Floating-Point Based Data Type for Resilient Deep Learning Inference" with coauthors Thierry Tambe, En-Yu Yang, Zishen Wan, Yuntian Deng, Vijay Janapa Reddi, David Brooks, and Gu-Yeon Wei.

Abstract:

Conventional hardware-friendly quantization methods, such as fixed-point or integer, tend to perform poorly at very low word sizes as their shrinking dynamic ranges cannot adequately capture the wide data distributions commonly seen in sequence transduction models. We present AdaptivFloat, a floating-point-inspired number representation format for deep learning that dynamically maximizes and optimally clips its available dynamic range, at a layer granularity, in order to create faithful encodings of neural network parameters. AdaptivFloat consistently produces higher inference accuracies compared to block floating-point, uniform, IEEE-like float or posit encodings at very low precision (≤ 8-bit) across a diverse set of state-of-the-art neural network topologies. Notably, AdaptivFloat is seen surpassing baseline FP32 performance by up to +0.3 in BLEU score and -0.75 in word error rate at weight bit widths that are ≤ 8-bit. Experimental results on a deep neural network (DNN) hardware accelerator, exploiting AdaptivFloat logic in its computational datapath, demonstrate per-operation energy and area that is 0.9× and 1.14×, respectively, that of equivalent bit-width integer-based accelerator variants.
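
The core idea, as the abstract describes it, is to anchor a low-bit-width floating-point format's exponent range to each layer's own value distribution. The NumPy sketch below is a simplified illustration under that reading; the function name, bit allocation, and rounding choices are assumptions for exposition, not the authors' reference implementation:

import numpy as np

def adaptivfloat_quantize(w, n_bits=8, n_exp_bits=3):
    # Hypothetical, simplified sketch of layer-wise adaptive float quantization.
    n_mant_bits = n_bits - 1 - n_exp_bits        # 1 sign bit, rest exponent/mantissa
    sign = np.sign(w)
    mag = np.abs(w).astype(np.float64)

    # Layer-granularity range selection: shift the exponent range so its top
    # aligns with the largest magnitude in this tensor.
    exp_max = int(np.floor(np.log2(max(mag.max(), 1e-30))))
    exp_min = exp_max - (2 ** n_exp_bits - 1) + 1

    # Per-value exponent, clipped into the adaptive range.
    safe = np.where(mag > 0, mag, 2.0 ** exp_min)
    exp = np.clip(np.floor(np.log2(safe)), exp_min, exp_max)

    # Round the significand to n_mant_bits fractional bits.
    scale = 2.0 ** n_mant_bits
    mant = np.round(mag / 2.0 ** exp * scale) / scale

    # Saturate to the largest representable value; flush values below the
    # smallest representable magnitude to zero (the paper handles these cases
    # more carefully than this sketch does).
    max_val = (2.0 - 2.0 ** -n_mant_bits) * 2.0 ** exp_max
    q = np.clip(mant * 2.0 ** exp, 0.0, max_val)
    q = np.where(mag < 2.0 ** exp_min, 0.0, q)
    return sign * q

For example, calling adaptivfloat_quantize(weights, n_bits=8) on one layer's weight tensor yields an 8-bit-representable copy whose exponent range is centered on that layer, which is the "layer granularity" adaptation the abstract refers to.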

The Design Automation Conference (DAC) is the world's leading technical conference and trade show on electronic design automation. DAC is where the IC design and EDA ecosystem learns, networks, and does business, and where the latest technical research is presented. DAC covers all topics related to the design of complex systems-on-chip, from embedded system design and verification down to physical layout verification and test. Now in its fifty-seventh consecutive year, DAC is the most respected name in the chip design community.

______________________________________

Rush received an Honorable Mention for the demo paper "Torch-Struct: Deep Structured Prediction Library" at the 2020 Annual Meeting of the Association for Computational Linguistics (ACL).

Abstract:

The literature on structured prediction for NLP describes a rich collection of distributions and algorithms over sequences, segmentations, alignments, and trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation-based frameworks. Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines, and case studies demonstrate the benefits of the library.
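
As a concrete illustration of the distribution-based API the abstract describes, the short sketch below builds a linear-chain CRF from arbitrary neural network scores using the torch_struct package; the tensor shapes and the choice of structure are illustrative assumptions rather than an excerpt from the paper:

import torch
import torch_struct

batch, N, C = 2, 6, 4                                # batch size, sequence length, tag count
# Edge log-potentials can come from any deep learning model.
log_potentials = torch.randn(batch, N - 1, C, C)

dist = torch_struct.LinearChainCRF(log_potentials)   # a structured Distribution object
print(dist.partition)         # log partition function, computed by dynamic programming
print(dist.marginals.shape)   # edge marginals, obtained via auto-differentiation
print(dist.argmax.shape)      # batched Viterbi (MAP) decoding

Because the structure is exposed as a distribution, the same object also supports sampling and log-probabilities, so it can plug into a training loss exactly like the simpler distributions in a deep learning framework.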

In other news from Rush, see also the details on the 2020 International Conference on Learning Representations (ICLR).