BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Special Year on Statistical Machine Learning - ECPv4.9.13//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Special Year on Statistical Machine Learning
X-ORIGINAL-URL:https://statisticalml.stat.columbia.edu
X-WR-CALDESC:Events for Special Year on Statistical Machine Learning
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20191206T083000
DTEND;TZID=America/New_York:20191206T170000
DTSTAMP:20220123T111744Z
CREATED:20190727T151758Z
LAST-MODIFIED:20191209T163649Z
UID:31-1575621000-1575651600@statisticalml.stat.columbia.edu
SUMMARY:Tutorials on Sampling and Variational Inference
DESCRIPTION:Schedule: \n8:30am – 9am: Welcome and sign-in \n9am – 10:30am: Tamara Broderick (Slides) \nCoffee break \n11am – 12:30pm: Max Raginsky (Slides) \nLunch break \n2:30pm – 4pm: Dave Blei (Slides) \nCoffee break \n4:30pm – 5pm: Panel discussion: Tamara Broderick\, Dave Blei\, Max Raginsky\, John Paisley (moderator) \n\nSpeaker: Tamara Broderick (9am – 10:30am) (Slides)\nTitle: Variational Bayes and beyond: Foundations of Scalable Bayesian Inference \nAbstract: Bayesian methods exhibit a number of desirable properties for modern data analysis—including (1) coherent quantification of uncertainty\, (2) a modular modeling framework able to capture complex phenomena\, (3) the ability to incorporate prior information from an expert source\, and (4) interpretability. In practice\, though\, Bayesian inference necessitates approximation of a high-dimensional integral\, and some traditional algorithms for this purpose can be slow—notably at data scales of current interest. The tutorial will cover the foundations of some modern tools for fast\, approximate Bayesian inference at scale. One increasingly popular framework is provided by “variational Bayes” (VB)\, which formulates Bayesian inference as an optimization problem. We will examine key benefits and pitfalls of using VB in practice\, with a focus on the widespread “mean-field variational Bayes” (MFVB) subtype. We will highlight properties that anyone working with VB\, from the data analyst to the theoretician\, should be aware of. And we will discuss a number of open challenges. \n\nSpeaker: Max Raginsky (11am – 12:30pm) (Slides)\nTitle: Stochastic Calculus in Machine Learning: Optimization\, Sampling\, Simulation \nAbstract: A great deal of recent research activity has focused on using continuous-time processes to analyze discrete-time algorithms and models. 
In particular\, diffusion processes have been examined as a way towards a better understanding of first-order optimization methods\, as they afford an analysis of behavior over non-convex landscapes using a rich array of techniques from the statistical physics literature. Gradient flows and diffusions have also found a role in the analysis of deep neural networks\, where they are interpreted as describing the limiting case of infinitely many layers\, each in effect infinitesimally thin. \nIn this tutorial\, I will give an informal treatment of some of the recent applications of the stochastic calculus of K. Ito to some problems at the intersection of optimization and machine learning. Specifically\, I will cover the following topics: \nI) Optimization — I will discuss non-convex learning using continuous-time Stochastic Gradient Langevin Dynamics (SGLD). I will first show that\, under reasonable regularity assumptions on the objective function\, SGLD finds an approximate global minimizer of the population risk in finite time (which\, in general\, can be exponential in the problem dimension)\, and then discuss the metastability phenomenon of the Langevin dynamics at “intermediate” time scales. Here\, by metastability I mean that\, with high probability\, the trajectory of the Langevin diffusion will either spend an arbitrarily long time in a small neighborhood of some local minimum or will quickly escape that neighborhood within a short recurrence time. \nII) Sampling and simulation — I will show that diffusion processes with drift given by a sufficiently deep feedforward neural net provide a flexible and expressive class of probabilistic generative models. I will first show that sampling in such generative models can be phrased as a stochastic control problem (revisiting the classic results of Föllmer and Dai Pra) and then build on this formulation to quantify the expressive power of these models.
Specifically\, I will prove that one can efficiently sample from a wide class of terminal target distributions by choosing the drift of the latent diffusion from the class of multilayer feedforward neural nets\, with the accuracy of sampling measured by the Kullback-Leibler divergence to the target distribution. \n\nSpeaker: Dave Blei (2:30pm – 4pm) (Slides)\nTitle: Scaling and Generalizing Approximate Bayesian Inference \nAbstract: A core problem in statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in Bayesian statistics\, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this talk I review and discuss innovations in variational inference (VI)\, a method that approximates probability distributions through optimization. VI has been used in myriad applications in machine learning and Bayesian statistics. It tends to be faster than more traditional methods\, such as Markov chain Monte Carlo sampling. \nAfter quickly reviewing the basics\, I will discuss our recent research on VI. I first describe stochastic variational inference\, an approximate inference algorithm for handling massive data sets\, and demonstrate its application to probabilistic topic models of millions of articles. Then I discuss black box variational inference\, a generic algorithm for approximating the posterior. Black box inference easily applies to many models and requires minimal mathematical work to implement. I will demonstrate black box inference on deep exponential families—a method for Bayesian deep learning—and describe how it enables powerful tools for probabilistic programming. \n
URL:https://statisticalml.stat.columbia.edu/event/sampling-variational-inference-and-related-topics/
LOCATION:Online Zoom\, NY\, United States
CATEGORIES:All Events,Fall 2019
END:VEVENT
END:VCALENDAR