Deep generative models (DGMs) are neural networks with many hidden layers
trained to approximate complicated, high-dimensional probability
distributions from a finite number of samples. When trained successfully, a
DGM can be used to estimate the likelihood of a given observation and to
create new samples from the underlying distribution. Developing DGMs has
become one of the most hotly researched fields in artificial intelligence in
recent years, and the literature on DGMs is vast and growing rapidly. Some
advances have even reached the public sphere, for example, the recent
successes in generating realistic-looking images, voices, and movies: the
so-called deep fakes.
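To make this setup concrete, here is a minimal sketch of the latent-variable
formulation that underlies the models discussed below; the notation (latent
density p_Z, generator g_theta, data distribution rho, divergence D) is ours,
chosen for illustration, and is not taken from the talk.

% A simple latent density is pushed through a learned generator; training
% picks the parameters theta so that generated samples match the data.
\[ z \sim p_Z = \mathcal{N}(0, I_q), \qquad x = g_\theta(z), \qquad
   g_\theta : \mathbb{R}^q \to \mathbb{R}^d \]
% Fitting the model amounts to minimizing some divergence D between the
% push-forward of p_Z under g_\theta and the data distribution \rho:
\[ \min_\theta \; D\bigl( (g_\theta)_{\#}\, p_Z ,\; \rho \bigr) \]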
Despite these successes, several mathematical and practical issues limit the
broader use of DGMs: given a specific dataset, it remains challenging to
design and train a DGM, and even more challenging to understand why a
particular model is or is not effective. To help bring new perspectives to
this field, this two-hour talk introduces DGMs and presents a concise
mathematical framework for modeling the three most popular approaches:
normalizing flows (NFs), variational autoencoders (VAEs), and generative
adversarial networks (GANs). We illustrate the advantages and disadvantages
of these basic approaches using numerical experiments. Our goal is to enable
and motivate participants to contribute to this fast-growing research area.
Our presentation also emphasizes the relations between generative modeling
and optimal transport.
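As one illustration of what such a framework covers, the sketch below shows
how a normalizing flow turns the latent-variable setup above into an exact
likelihood; this is the standard change-of-variables identity, not a formula
taken from the talk itself.

% When the generator g_\theta is invertible (a normalizing flow), the model
% density of an observation x follows from the change-of-variables formula:
\[ p_\theta(x) \;=\; p_Z\bigl( g_\theta^{-1}(x) \bigr)\,
   \bigl| \det \nabla g_\theta^{-1}(x) \bigr| \]

VAEs replace the exact inverse with an approximate encoder and maximize a
lower bound on this likelihood, while GANs sidestep likelihoods entirely and
train the generator against an adversarial discriminator.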