AI/ML Seminar Series: Exploring the Limits of Lossy Data Compression with Deep Learning

by Information and Computer Sciences


Mon, Apr 26, 2021

1 PM – 2 PM PDT (GMT-7)


Virtual


Details

Yibo Yang
Ph.D. Student, Department of Computer Science
University of California, Irvine

Probabilistic machine learning, particularly deep learning, is reshaping the field of data compression. Recent work has established a close connection between lossy data compression and latent variable models such as variational autoencoders (VAEs), and VAEs are now the building blocks of many learning-based lossy compression algorithms trained on massive amounts of unlabeled data. In this talk, I give a brief overview of learned data compression, including the current paradigm of end-to-end lossy compression with VAEs, and present my research that addresses some of its limitations and explores other possibilities for learned data compression. First, I present algorithmic improvements inspired by variational inference that push the performance limits of VAE-based lossy compression, resulting in new state-of-the-art performance in image compression. Then, I introduce a new algorithm that compresses the variational posteriors of pre-trained latent variable models and allows for variable-bitrate lossy compression with a vanilla VAE. Lastly, I discuss ongoing work that explores fundamental bounds on the theoretical performance of lossy compression algorithms, using the tools of stochastic approximation and deep learning.
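To make the paradigm concrete, here is a minimal sketch (not the speaker's method) of the rate-distortion trade-off, L = D + λ·R, that end-to-end learned lossy compressors optimize. Toy linear transforms stand in for the deep encoder/decoder networks, and all names (`encode`, `decode`, `rate_bits`, the Gaussian prior scale) are illustrative assumptions, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16)) / 4.0   # toy analysis transform (stands in for a deep encoder)
x = rng.normal(size=16)              # a 16-dimensional "image" patch

def encode(x):
    # Analysis transform followed by scalar quantization of the latent.
    return np.round(W @ x)

def decode(y):
    # Toy synthesis transform (stands in for a deep decoder).
    return W.T @ y

def rate_bits(y, scale=2.0):
    # Bits to code y under a factorized Gaussian prior: a crude
    # stand-in for the learned entropy model of a compressive VAE.
    p = np.exp(-0.5 * (y / scale) ** 2) / (scale * np.sqrt(2 * np.pi))
    return float(np.sum(-np.log2(np.maximum(p, 1e-12))))

y = encode(x)
x_hat = decode(y)
distortion = float(np.mean((x - x_hat) ** 2))  # distortion D (MSE)
lam = 0.01                                     # trade-off weight lambda
loss = distortion + lam * rate_bits(y)         # the objective being minimized
```

In practice both transforms and the entropy model are neural networks trained jointly on this objective; sweeping λ traces out the rate-distortion curve that the talk's improvements push toward its theoretical limits.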

Bio: Yibo Yang is a Ph.D. student advised by Stephan Mandt in the Department of Computer Science at UC Irvine. His research interests include probability theory, information theory, and their applications in statistical machine learning.

YouTube Stream: https://youtu.be/1lXKUhBTHWc

Hosted By

Information and Computer Sciences

UCI Center for Machine Learning and Intelligent Systems
