Past Talks
2024
Tennison Liu and Nicolas Astorga (University of Cambridge) - Large Language Models to Enhance Bayesian Optimization (video)
Robert Lange (Sakana AI) - The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (video)
Linus Ericsson (University of Edinburgh) - einspace: Searching for Neural Architectures from Fundamental Operations (video)
Ganesh Jawahar (Google DeepMind) - Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts (video)
Carl Hvarfner (Lund University, WASP) - Vanilla Bayesian Optimization Performs Great in High Dimensions (video)
Łukasz Dudziak (Samsung) - Neural Fine-Tuning Search for Few-Shot Learning (video)
Julien Siems (University of Freiburg) and Riccardo Grazzi (IIT) - Is Mamba Capable of In-Context Learning? (video)
Sebastian Pineda (University of Freiburg) - Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How (video)
Xingyou (Richard) Song (Google) - OmniPred: Towards Universal Regressors with Language Models (video)
Arber Zela (University of Freiburg) - Multi-objective Differentiable Neural Architecture Search (video)
Nick Erickson (AWS) - AutoGluon 1.0: Shattering the SOTA with Zeroshot-HPO & Dynamic Stacking (video)
Rhea Sukthanker (University of Freiburg) and Samuel Dooley (Abacus.ai) - Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face-Recognition (video)
Andreas Müller (Microsoft) - MotherNet: A Foundational Hypernetwork for Tabular Classification (video)
2023
Arkadiy Dushatskiy (CWI) - Multi-Objective Population Based Training (video)
Elliot Crowley (University of Edinburgh) - Zero-cost proxies and expanding search spaces with program transformations in NAS
Noah Hollmann (University of Freiburg) - LLMs for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering (video)
Jean Kaddour and Oscar Key (UCL) - No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models (video)
Theresa Eimer (University of Hannover) - Challenges in Hyperparameter Optimization for Reinforcement Learning - Understanding AutoRL Through an AutoML Lens (video)
Alina Selega (Lunenfeld-Tanenbaum Research Institute, Vector Institute) - Multi-objective Bayesian optimization with heuristic objectives for biomedical and molecular data analysis workflows (video)
Robert Lange (TU Berlin) - Discovering Black-Box Optimizers via Evolutionary Meta-Learning (video)
Nicholas Roberts, Samuel Guo, Cong Xu (AutoML Decathlon Team) - AutoML Decathlon: Diverse Tasks, Modern Methods, and Efficiency at Scale (video)
Carola Doerr (Sorbonne Université) - Hyperparameter optimization and algorithm configuration from a black-box optimization perspective (video)
Xingyou (Richard) Song (Google) - Open Source Vizier: Distributed Infrastructure and API for Reliable and Flexible Blackbox Optimization (video) (slides)
Kartik Chandra and Audrey Xie (MIT) - Gradient Descent: The Ultimate Optimizer (video)
Subho Mukherjee (Microsoft) - AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers (video)
2022
Lisa Dunlap (UC Berkeley) - Hyperparameter Tuning on the Cloud (video)
Łukasz Dudziak (Samsung) - Challenges of federated and zero-cost NAS (video)
Xingchen Wan (University of Oxford) - Bayesian Generational Population-Based Training (video)
Jiaming Song (NVIDIA) - A General Recipe for Likelihood-free Bayesian Optimization (video)
Mohamed Abdelfattah (Cornell University) - Fast and Hardware-Aware Neural Architecture Search (video)
Yutian Chen (DeepMind) - Towards Learning Universal Hyperparameter Optimizers with Transformers (video)
Samuel Mueller (University of Freiburg) - Transformers can do Bayesian Inference (video)
Greg Yang (Microsoft) - Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer (video)
Zi Wang (Google) - Pre-training helps Bayesian optimization too (video)
Ruohan Wang (UCL) - Global labels and few-shot learning (video)
Martin Wistuba (AWS) - Few-Shot Bayesian Optimization (video)
Antonio Carta (University of Pisa) - Avalanche: an End-to-End Library for Continual Learning (video)
David Salinas (Amazon) - Tutorial on Syne Tune (video)
David Eriksson (Meta) - High-Dimensional Bayesian Optimization (video)
Jonas Rothfuss (ETH Zurich) - Meta-Learning Reliable Priors for Bayesian Optimization
2021
Willie Neiswanger (Stanford University) - “Going Beyond Global Optima with Bayesian Algorithm Execution” (video)
Ameet Talwalkar (Carnegie Mellon University & Determined AI) - “Automating Architecture Design for Emerging ML Tasks” (video)
Xavier Bouthillier (Université de Montréal) - “Reliability of benchmarks and why HPO is important” (ref) (video)
Bilge Celik (Eindhoven University of Technology) - “Adaptation Strategies for Automated Machine Learning on Evolving Data” (video)
Debadeepta Dey (Microsoft Research) - “Ranking Architectures by their Feature Extraction Capabilities” (video)
Jörg Franke (University of Freiburg) - “Sample-Efficient Automated Deep Reinforcement Learning” (video)
Yan Wu (ETH Zurich) - “NAS as Sparse Supernet” & “Trilevel NAS for Efficient Single Image Super-Resolution” (video)
Louis Tiao (University of Sydney) - “Bayesian Optimization by Classification” (video)
Gabriel Bender (Google Brain) - “Can weight sharing outperform random architecture search? An investigation with TuNAS” (video)
Wuyang Chen (University of Texas at Austin) - “Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective” (video)
Xuanyi Dong (University of Technology Sydney) - Extending the Search from Architecture to Hyperparameter, Hardware, and System (video)
Josif Grabocka (University of Freiburg) - “Deep Learning for Tabular Datasets”
Marius Lindauer (University of Hannover) - “Towards Explainable AutoML: xAutoML”
Arber Zela (University of Freiburg & ELLIS) - “Neural Ensemble Search for Uncertainty Estimation and Dataset Shift”
Jeff Clune (OpenAI) - “Learning to Continually Learn”
2020
Binxin (Robin) Ru (University of Oxford) - Interpretable Neural Architecture Search
Sebastian Flennerhag (DeepMind) - Jointly learning a model and its learning rule
Colin White (Abacus.ai) - BANANAS, Encodings, and Local Search: Insights into Neural Architecture Search (ref1, ref2, ref3)
Zohar Karnin (Amazon AWS) - Practical and sample efficient zero-shot HPO
Rodolphe Jenatton (Google Brain) - Hyperparameter Ensembles for Robustness and Uncertainty Quantification
Matthias Poloczek (Uber AI) - Scalable Bayesian Optimization for Industrial Applications
Joaquin Vanschoren (Eindhoven University of Technology) - Towards ever-learning AutoML
Frank Hutter (University of Freiburg) - Neural Architecture Search: Coming of Age (slides)
Valerio Perrone (Amazon) - “Fair Bayesian Optimization” (ICML video)
Matthias Seeger (Amazon) - “Model-based Asynchronous Hyperparameter and Neural Architecture Search”
Fabio Maria Carlucci (Huawei) - “Why NAS evaluation is frustratingly hard and how Generator-based optimization can help” (slides)
David Salinas (Naver Labs) - “A quantile-based approach for hyperparameter transfer learning”