Dravyansh Sharma

I am currently an IDEAL Postdoc in Chicago. I completed my PhD in the Computer Science Department at Carnegie Mellon University, where I was fortunate to be advised by Nina Balcan. I am interested in designing algorithms for machine learning with strong and provable performance guarantees.

Recent News and Highlights

June 2025
  ★ My proposal "Limitations of State-of-the-Art and a New Principled Framework for HPO and Algorithm Selection" has been accepted as one of the 2025 AutoML tutorials.
  • Gave a talk, "Principled Hyperparameter Optimization and Algorithm Selection", at the Capital Area Theory Seminar (CATS) at the University of Maryland, College Park.
  • Helped organize the IDEAL Annual Meeting (with Lev Reyzin) and gave a talk titled "Principled Hyperparameter Tuning and Algorithm Selection".
  • Five workshop papers accepted (three at ICML 2025, one at IJCAI 2025, one at UAI 2025):
     ⬩ ICML 2025 Workshop on Reliable and Responsible Foundation Models (R2-FM)
     ⬩ ICML 2025 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
     ⬩ ICML 2025 Workshop on Methods and Opportunities at Small Scale (MOSS)
     ⬩ IJCAI 2025 2nd Workshop on Social Choice and Learning Algorithms (SCaLA)
     ⬩ UAI 2025 Workshop on Safe AI
  • New on arXiv: "Conservative classifiers do consistently well with improving agents: characterizing statistical and online learning" (with Alec Sun).

May 2025
  ★ Our work "On Learning Verifiers for Chain-of-Thought Reasoning" (joint with Nina Balcan, Avrim Blum and Zhiyuan Li) is available as a pre-print.
  ★ New on arXiv: "Learning accurate and interpretable tree-based models" (joint with Nina Balcan), an extended version of earlier work that won the Outstanding Student Paper Award at UAI 2024.
  • Presented a poster at the Midwest Optimization & Statistical Learning Conference 2025 at Northwestern University.
  • Our work "Tuning Algorithmic and Architectural Hyperparameters in Graph-Based Semi-Supervised Learning with Provable Guarantees" (joint with Ally Du and Eric Huang) accepted at UAI 2025.
  • Our work "Learning Accurate and Interpretable Decision Trees (Extended Abstract)" (joint with Nina Balcan) accepted in the Best Paper Track for Sister Conferences at IJCAI 2025.
  • Our work "PAC Learning with Improvements" (joint with Idan Attias, Avrim Blum, Keziah Naggita, Donya Saless and Matthew Walter) accepted at ICML 2025.
  • Our paper "Algorithm Configuration for Structured Pfaffian Settings" (joint with Nina Balcan and Anh Nguyen) published in TMLR 2025.

April 2025
  ★ My proposal "Hyperparameter Optimization and Algorithm Selection: Practical Techniques, Theory, and New Frontiers" has been accepted as one of the 2025 UAI tutorials. Stay tuned!
  ★ Invited to serve as an Area Chair at NeurIPS 2025.
  • Gave a talk at TTIC on our recent work "Provable tuning of deep learning model hyperparameters" (joint with Nina Balcan and Anh Nguyen).
  • Presented our work "Provable tuning of deep learning model hyperparameters" (joint with Nina Balcan and Anh Nguyen) at the IDEAL workshop on "Understanding the Mechanisms of Deep Learning and Generative Modeling" at Northwestern University.
  • Gave a talk titled "Provable tuning of deep learning model hyperparameters" (based on joint work with Nina Balcan and Anh Nguyen) at the Theory Lunch at the University of Chicago.

March 2025
  ★ Served as Session Chair for the AAAI 2025 sessions on Constraint Satisfaction and Optimization.
  • Our work "PAC Learning with Improvements" (joint with Idan Attias, Avrim Blum, Keziah Naggita, Donya Saless and Matthew Walter) available as a pre-print.
  • Attending AAAI 2025 in Philadelphia. Presenting "Offline-to-online hyperparameter transfer for stochastic bandits", joint work with Arun Suggala.

Publications

Contact

E-mail: dravy [AT] ttic [DOT] edu
Office: 434, Toyota Technological Institute at Chicago, 6045 S Kenwood Ave, Chicago, IL 60637