Kantorovich Initiative Seminar: Jia-Jie Zhu
Topic
Kernel Approximation of Wasserstein and Fisher-Rao Gradient Flows
Speakers
Jia-Jie Zhu
Details
Gradient flows have emerged as a powerful framework for analyzing machine learning and statistical inference algorithms. Motivated by applications in statistical inference, generative modeling, and the generalization and robustness of learning algorithms, I will present several new results on the kernel approximation of gradient flows, including a hidden link between the gradient flows of the kernel maximum mean discrepancy (MMD) and of relative entropies. These findings not only advance our theoretical understanding but also provide practical tools for improving machine learning algorithms. I will showcase inference and sampling algorithms based on a new kernel approximation of the Wasserstein-Fisher-Rao (a.k.a. Hellinger-Kantorovich) gradient flows, which enjoy a sharper convergence characterization and improved computational performance.
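For readers less familiar with these objects, the following is a minimal background sketch using the standard definitions; it is not a reproduction of the speaker's new kernel approximation. For a positive-definite kernel k, the squared maximum mean discrepancy between measures \mu and \nu is
\[
\mathrm{MMD}^2(\mu,\nu) \;=\; \iint k(x,y)\,\mathrm{d}(\mu-\nu)(x)\,\mathrm{d}(\mu-\nu)(y).
\]
Given an energy functional \mathcal{F}(\mu) on probability measures with first variation \frac{\delta\mathcal{F}}{\delta\mu}, the Wasserstein gradient flow takes the commonly used continuity-equation form
\[
\partial_t \mu_t \;=\; \nabla\!\cdot\!\Big(\mu_t\,\nabla\tfrac{\delta\mathcal{F}}{\delta\mu}[\mu_t]\Big),
\]
while the (spherical) Fisher-Rao gradient flow is the mass-reweighting dynamic
\[
\partial_t \mu_t \;=\; -\,\mu_t\Big(\tfrac{\delta\mathcal{F}}{\delta\mu}[\mu_t] \;-\; \textstyle\int \tfrac{\delta\mathcal{F}}{\delta\mu}[\mu_t]\,\mathrm{d}\mu_t\Big).
\]
The Wasserstein-Fisher-Rao (Hellinger-Kantorovich) gradient flow combines the two, summing the transport and reaction terms on the right-hand side.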