Ayush Sekhari

Contact Details:

Email: as3663 [at] cornell [dot] edu
Office: 324, Bill and Melinda Gates Hall, Cornell University, Ithaca, NY - 14853

I am a Ph.D. student in the Computer Science department at Cornell University, where I have the great fortune to be advised by Prof. Karthik Sridharan and Prof. Robert D. Kleinberg. I am interested in theoretical aspects of Machine Learning and Computer Science. Recently, I have been working on understanding non-convex learning from a stochastic optimization viewpoint. I am also interested in theoretical aspects of reinforcement learning (RL).

I completed my undergraduate degree at the Indian Institute of Technology Kanpur (IIT Kanpur), India, in 2016 and then spent a year at Google Research as part of the Brain Residency program (now called the AI Residency).

Recent News

  • Our paper on RL with feedback graphs was accepted at NeurIPS 2020.
  • We received a best student paper award at COLT 2019 for our work on finding stationary points in stochastic convex optimization.
  • Spent summer 2019 with the Learning Theory team at Google Research, NY, working with Claudio Gentile and Mehryar Mohri.
  • Spent summer 2017 with FICC Macro Strats and Trading in the Securities division at Goldman Sachs, Hong Kong.
  • Awarded the President's Gold Medal 2016, IIT Kanpur, for the best academic performance in the graduating batch.
  • Selected for the Google Brain Residency Program 2016.

Preprints

  1. SGD: The role of Implicit Regularization, Batch-size and Multiple Epochs
    with Satyen Kale and Karthik Sridharan. (under submission)

  2. Agnostic Reinforcement Learning with Low-Rank MDPs and Rich Observations
    with Christoph Dann, Yishay Mansour, Mehryar Mohri, and Karthik Sridharan.
    (under submission)

  3. Neural Active Learning with Performance Guarantees
    with Pranjal Awasthi, Christoph Dann, Claudio Gentile, Zhilei Wang
    (under submission)

  4. Remember What You Want to Forget: Algorithms for Machine Unlearning
    with Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. (under submission)

Publications

  1. Reinforcement Learning with Feedback Graphs
    with Christoph Dann, Yishay Mansour, Mehryar Mohri, and Karthik Sridharan
    NeurIPS 2020. Short version at ICML 2020 Theoretical Foundations of RL workshop.

  2. Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
    with Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster and Karthik Sridharan
    COLT 2020. Honorable mention for best talk award at NYAS ML symposium 2020.

  3. The Complexity of Making the Gradient Small in Stochastic Convex Optimization
    with Dylan Foster, Ohad Shamir, Nathan Srebro, Karthik Sridharan and Blake Woodworth
    COLT 2019. Best Student Paper Award.

  4. Uniform Convergence of Gradients for Non-Convex Learning and Optimization
    with Dylan Foster and Karthik Sridharan
    NeurIPS 2018. Short version at ICML 2018 Nonconvex Optimization workshop.

  5. A Brief Study of in-domain Transfer and Learning from Fewer Samples using a Few Simple Priors
    with Marc Pickett and James Davidson
    ICML 2017 workshop: Picky Learners - Choosing Alternative Ways to Process Data.
    Awarded the second-best paper prize among the workshop submissions.

Service

  • Reviewing: ICLR 2019, AISTATS 2019, ICML (2019-21), COLT (2019-21), NeurIPS (2019-20), ISIS 2020, ITCS 2020, Journal of Complexity 2021, FORC 2021.