Dravyansh Sharma

I am currently an IDEAL Postdoc in Chicago. I completed my PhD in the Computer Science Department at Carnegie Mellon University, where I was fortunate to be advised by Nina Balcan. I am interested in designing algorithms for machine learning with strong and provable performance guarantees.

Recent News

  • May 2025. Our work Tuning Algorithmic and Architectural Hyperparameters in Graph-Based Semi-Supervised Learning with Provable Guarantees (joint with Ally Du and Eric Huang) accepted at UAI 2025.
  • May 2025. Our work (joint with Nina Balcan) Learning Accurate and Interpretable Decision Trees (Extended Abstract) accepted to the Sister Conferences Best Papers Track at IJCAI 2025.
  • May 2025. Our work PAC Learning with Improvements (joint with Idan Attias, Avrim Blum, Keziah Naggita, Donya Saless and Matthew Walter) accepted at ICML 2025.
  • April 2025. Gave a talk at TTIC on our recent work Provable tuning of deep learning model hyperparameters (joint with Nina Balcan and Anh Nguyen).
  • April 2025. My proposal "Hyperparameter Optimization and Algorithm Selection: Practical Techniques, Theory, and New Frontiers" has been accepted as one of the 2025 UAI tutorials. Stay tuned!
  • April 2025. Invited to serve as an Area Chair at NeurIPS 2025.
  • April 2025. Our paper titled Algorithm Configuration for Structured Pfaffian Settings (joint with Nina Balcan and Anh Nguyen) has been accepted for publication at TMLR.
  • April 2025. Presented our work Provable tuning of deep learning model hyperparameters (joint with Nina Balcan and Anh Nguyen) at the IDEAL workshop on "Understanding the Mechanisms of Deep Learning and Generative Modeling" at Northwestern University.
  • April 2025. Gave a talk titled Provable tuning of deep learning model hyperparameters (based on joint work with Nina Balcan and Anh Nguyen) at the Theory lunch at the University of Chicago.
  • March 2025. Session Chair at AAAI 2025 sessions on Constraint Satisfaction and Optimization.
  • March 2025. Our work titled PAC Learning with Improvements (joint with Idan Attias, Avrim Blum, Keziah Naggita, Donya Saless and Matthew Walter) available as a pre-print.
  • Feb-March 2025. Attending AAAI 2025 in Philadelphia. Presenting Offline-to-online hyperparameter transfer for stochastic bandits, joint work with Arun Suggala.
  • Feb 2025. Our work titled Tuning Algorithmic and Architectural Hyperparameters in Graph-Based Semi-Supervised Learning with Provable Guarantees (joint with Ally Du and Eric Huang) is available as a pre-print.
  • Jan 2025. Our work titled Sample complexity of data-driven tuning of model hyperparameters in neural networks with structured parameter-dependent dual function (joint with Nina Balcan and Anh Nguyen) is available as a pre-print.
  • Dec 2024. Attended NeurIPS 2024 in Vancouver to present two posters.
  • Sep 2024. Started as a postdoc at IDEAL (The Institute for Data, Econometrics, Algorithms, and Learning), part of NSF TRIPODS, hosted by Avrim Blum (TTIC) and Aravindan Vijayaraghavan (Northwestern).
  • July 2024. Our work (joint with Nina Balcan) Learning Accurate and Interpretable Decision Trees won the Outstanding Student Paper Award at UAI 2024.

Publications

Contact:

E-mail: dravy [AT] ttic [DOT] edu
Office: 434, Toyota Technological Institute at Chicago, 6045 S Kenwood Ave, Chicago, IL 60637