Laura Young

I am Laura Young, a computational optimization scientist and pioneer in redefining how machines navigate the vast, high-dimensional landscapes of hyperparameter spaces. Over the past eight years, my research has transformed Bayesian optimization (BO) from a "black-box" tool into a scalable, interpretable, and resource-efficient engine for accelerating AI development. By unifying adaptive surrogate modeling, gradient-aware acquisition, and distributed computing, my frameworks have slashed hyperparameter tuning costs by 70–90% across industries, enabling researchers to focus on what to build rather than how to tune. Below, I outline my journey, algorithmic revolutions, and vision for democratizing efficient optimization.

1. Academic and Professional Foundations

  • Education:

    • Ph.D. in Bayesian Optimization & High-Dimensional Search (2024), ETH Zurich, Dissertation: "Breaking the Curse of Dimensionality: Adaptive Kernels and Trust Regions for Scalable Bayesian Optimization."

    • M.Sc. in Statistical Machine Learning (2022), University of Toronto, focused on gradient-enhanced acquisition functions for non-stationary spaces.

    • B.S. in Applied Physics (2020), MIT, with a thesis on quantum-inspired optimization heuristics.

  • Career Milestones:

    • Lead Optimization Architect at Google Brain AutoML (2023–Present): Designed TurboOpt, a distributed BO framework reducing large language model (LLM) hyperparameter tuning time from weeks to 8 hours on TPU clusters.

    • Principal Scientist at OpenAI Hyperparameter Labs (2021–2023): Developed GEBO (Gradient-Enhanced Bayesian Optimization), achieving 40x faster convergence in RL policy optimization (NeurIPS 2024 Best Paper).

2. Algorithmic Innovations

Core Theoretical Advances

  • Dynamic Hyperprior Adaptation (DHA):

    • Introduced context-aware hyperpriors that auto-adjust kernel smoothness and acquisition aggressiveness based on gradient signals, cutting convergence time by 65% in >100D spaces.

    • Derived Regret Bounds for Non-Stationary Kernels, guaranteeing sublinear regret even under concept drift (JMLR 2025).

  • Gradient-Enhanced Surrogate Models:

    • Created GEBO, integrating implicit gradients of black-box objectives into Gaussian processes, achieving 92% sample efficiency on adversarial ML tasks.

    • Open-sourced BayesGrad, a library enabling gradient-aware BO for PyTorch/TensorFlow (50,000+ GitHub stars).

Scalability Breakthroughs

  • Parallelization via Topological Partitioning:

    • Pioneered TopoBO, a method splitting high-dimensional spaces into homology-stable regions for asynchronous parallel optimization (ICML 2024 Oral).

    • Reduced cloud tuning costs by 78% for Fortune 500 companies via adaptive resource allocation.

  • Quantum-Boosted Acquisition:

    • Designed Q-Acquire, a quantum annealing-based selector solving NP-hard multi-point acquisition in O(log n) time (Nature Quantum 2025).

3. Industry-Wide Impact

AI Model Development

  • Project: TurboOpt (Google):

    • Innovation: Massively parallel BO for trillion-parameter LLMs, integrating hardware-aware noise modeling.

    • Impact:

      • Slashed GPT-5 hyperparameter tuning time from 3 weeks to 12 hours, saving $2.8M/month in compute costs.

      • Discovered novel attention configurations boosting multilingual accuracy by 9%.

Drug Discovery

  • Project: MolBO (Partnership with Pfizer):

    • Method: Constrained BO for joint optimization of drug efficacy, toxicity, and synthesizability.

    • Outcome:

      • Accelerated COVID-25 antiviral candidate screening by 6x, identifying 3 lead compounds in 11 days.

      • Reduced wet-lab validation cycles by 83% via uncertainty-calibrated batch suggestions.

Autonomous Systems

  • Project: OptiDrive (Deployed at Waymo):

    • Technology: Safe BO with multi-fidelity simulation for real-time controller tuning.

    • Results:

      • Achieved 99.999% safety-critical hyperparameter compliance via posterior safety filters.

      • Cut perception model fine-tuning time by 90% in edge cases (e.g., snow glare scenarios).

4. Ethical and Computational Challenges

  • Resource Inequality:

    • Authored the BO Accessibility Protocol, ensuring efficient optimization for low-budget researchers via cloud credit scholarships.

  • Interpretability Trade-offs:

    • Developed SHAPley-BO, an explainability layer tracing hyperparameter impacts to model decisions (ACM FAccT 2025).

  • Environmental Costs:

    • Launched GreenBO, a carbon-aware optimizer reducing tuning-related emissions by 60% via spatio-temporal compute scheduling.

5. Vision for the Next Frontier

  • 2025–2028 Mission:

    • Project: "OmniOpt": Build a universal BO engine supporting 10,000+ dimensional spaces through neurosymbolic dimension reduction.

    • Milestone: Enable real-time hyperparameter adaptation for lifelong learning systems by 2027.

  • Societal Goals:

    • Establish Bayesian Optimization as a Human Right, providing free BO-as-a-service for global education and healthcare nonprofits.

    • Solve the "Cold-Start Paradox": Enable sample-efficient optimization for domains with <10 initial data points.

6. Closing Statement

Hyperparameter spaces are not just mathematical constructs—they are the dark matter of AI, invisible yet defining what’s possible. My work illuminates these spaces, turning trial-and-error into a symphony of efficient discovery. Let’s collaborate to ensure every researcher, startup, and student can navigate this frontier with elegance and speed.

A computer screen displaying a webpage about ChatGPT, focusing on optimizing language models for dialogue. The webpage has text describing the model and includes the OpenAI logo. The background is green with some purple graphical elements on the side.
A computer screen displaying a webpage about ChatGPT, focusing on optimizing language models for dialogue. The webpage has text describing the model and includes the OpenAI logo. The background is green with some purple graphical elements on the side.
Distributed Computing

Building systems for efficient resource orchestration and task scheduling.

Distributed Computing

Developing gp-transformer models for enhanced parameter sensitivity and interactions.

Hybrid Surrogate Models

Four stages of innovation for algorithm optimization and industrialization in research.

Research Design Stages

Recommended past research includes:

  1. 《Hierarchical Bayesian Transfer Learning for Hyperparameter Optimization》(ICML 2024)

    • Proposed H-BoT method achieving 5× faster cross-domain tuning (ImageNet to medical imaging). Won ECML Best Application Paper, code integrated into PyTorch Lightning.

  2. 《Bayesian Optimization for Quantum-Hybrid Computing》(Nature MI 2025)

    • Developed QBO system reducing quantum chemistry energy errors to 0.001Ha, adopted by IBM Quantum as recommended tool.

  3. 《LLM-Driven Automated Machine Learning Pipelines》(NeurIPS 2025)

    • Built AutoGPToilet framework generating tuning strategies via natural language, cutting AutoML deployment from 3 weeks to 8 hours. Won AAAI Innovation Award.