BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Special Year on Statistical Machine Learning - ECPv4.9.13//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Special Year on Statistical Machine Learning
X-ORIGINAL-URL:https://statisticalml.stat.columbia.edu
X-WR-CALDESC:Events for Special Year on Statistical Machine Learning
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20200114T091500
DTEND;TZID=America/New_York:20200116T163000
DTSTAMP:20220526T120725Z
CREATED:20191126T025219Z
LAST-MODIFIED:20200515T013149Z
UID:159-1578993300-1579192200@statisticalml.stat.columbia.edu
SUMMARY:Statistical Machine Learning Bootcamp
DESCRIPTION:The goal of the Columbia Year of Statistical Machine Learning Bootcamp Lectures is to introduce students to the computational\, mathematical\, and statistical foundations of data science. \nThe focus will be on theoretical subjects of interest in modern statistical machine learning\, suitable for new Ph.D. students in computer science\, statistics\, applied math\, and related fields. \n\nThe lectures are open (free) to all\, but we kindly request that you complete the following registration form so we get an accurate headcount. \nRegistration: https://forms.gle/dHB5Hbq4GB43eJuJ8 \nWaitlist: https://forms.gle/LGjzrKg9Qo2R9hCM7 \n\nSchedule: \nLectures are in the CS Auditorium (451 Computer Science Building). \nThe 10:15am-11:00am coffee breaks will be in the CS Lounge (also in the Computer Science Building). \nTuesday\, January 14 \n\n9:15-10:15: concentration of measure (Jarek Błasiok; slides\, video)\n10:15-11:00: coffee break (CS Lounge)\n11:00-12:00: concentration of measure (Jarek Błasiok; slides\, video)\n12:00-2:00: lunch break (on your own)\n2:00-3:00: concentration of measure (Jarek Błasiok; slides\, video)\n3:00-3:30: break\n3:30-4:30: algorithmic applications of high-dimensional geometry (Alex Andoni; slides\, video)\n\nWednesday\, January 15 \n\n9:15-10:15: algorithmic applications of high-dimensional geometry (Alex Andoni; slides\, video)\n10:15-11:00: coffee break (CS Lounge)\n11:00-12:00: algorithmic applications of high-dimensional geometry (Alex Andoni; slides\, video)\n12:00-2:00: lunch break (on your own)\n2:00-3:00: optimal transport (Espen Bernton; slides\, video)\n3:00-3:30: break\n3:30-4:30: optimal transport (Espen Bernton; slides\, video)\n\nThursday\, January 16 \n\n9:15-10:15: stochastic gradient methods (Arian Maleki; video)\n10:15-11:00: coffee break (CS Lounge)\n11:00-12:00: stochastic gradient methods (Arian Maleki; video)\n12:00-2:00: lunch break (on your own)\n2:00-3:00: stochastic gradient methods (Arian Maleki; 
video)\n3:00-3:30: break\n3:30-4:30: nonparametric testing using optimal transport (Bodhi Sen; slides; video)\n\n\nLecturers and Topics: \n\nJarek Błasiok: concentration of measure (slides: [1]\, [2]\, [3])\n\nEquivalence between moment bounds/MGF bounds/tail bounds\, Khintchine inequality\, Bernstein inequality\, Johnson-Lindenstrauss for Gaussian matrices.\nSubspace embedding: net argument\, the volumetric argument for net constructions.\nConcentration inequalities for low-influence functions.\n\n\nAlex Andoni: algorithmic applications of high-dimensional geometry (slides: [1]\, [2]\, [3])\nMany modern algorithms\, especially for massive datasets\, benefit from geometric techniques and tools even though the initial problem might have nothing to do with geometry. In this lecture series\, we will cover a number of examples where (high-dimensional) geometry techniques lead to algorithms with significantly improved parameters\, such as run-time\, space\, communication\, etc. For example\, starting with the classic dimension reduction method\, researchers developed powerful tools for storing\, transmitting\, and accessing data quanta more efficiently than merely storing\, transmitting\, or accessing the full data. These tools can be seen as a form of functional compression\, where we store just enough information about data pieces to be useful for particular tasks. We will see applications of these tools to problems such as similarity search/nearest neighbor search\, and numerical linear algebra. 
\n\nEspen Bernton: optimal transport (slides: [1]\, [2])\n\nTheoretical foundations: Origins of OT – Monge & Kantorovich problems\, Primal and dual formulations\, Wasserstein distance\, Some important properties.\nComputation and applications: Exact and approximate computation\, Some statistical properties\, OT as a loss function\, Application to generative models.\n\n\nArian Maleki: stochastic gradient methods\n\nStandard stochastic gradient descent\, its convergence rate\, and optimality.\nRuppert-Polyak averaging and its comparison with the standard SGD (also the robust stochastic gradient descent of Nemirovski et al.).\nAveraging the gradients and variance-reduced algorithms\, such as SAG\, SAGA\, SVRG.\nQuasi-Newton stochastic gradient method.\n\n\nBodhi Sen: nonparametric testing using optimal transport (slides: [1])\nIn this lecture I will introduce the problem of distribution-free nonparametric testing and illustrate the connection to the theory of optimal transport. I will use these ideas to develop distribution-free testing procedures for: (i) multivariate two-sample goodness-of-fit testing\, and (ii) testing for independence of two random vectors. \n\n\n
URL:https://statisticalml.stat.columbia.edu/event/machine-learning-bootcamp/
LOCATION:CS Auditorium (CSB 451)\, Mudd Building\, 500 West 120th St\, New York\, NY\, United States
CATEGORIES:Spring 2020
END:VEVENT
END:VCALENDAR