LEK-HENG LIM - University of Chicago
Modern data are often characterized by their principal subspaces, and "subspace learning," i.e., statistical models for inferring mixtures of subspaces, has become a topic of considerable interest. We will discuss some recent progress in learning (1) subspaces of different dimensions and (2) affine subspaces. The set of all subspaces of a given dimension is a well-known geometric object called a Grassmannian. Our Bayesian model depends on a clever embedding of Grassmannians of different dimensions into a unit sphere of relatively low dimension.
As will become apparent, such models invariably rest upon a notion of distance between subspaces. For two subspaces of the same dimension, there is a well-known intrinsic notion of distance: the geodesic distance between two points on a Grassmannian. It is intrinsic in the sense that it does not depend on an embedding of the Grassmannian into some larger ambient space, and it can be expressed in terms of principal angles and thus computed via the SVD. We will discuss intrinsic distances for (1) subspaces of different dimensions and (2) affine subspaces, inspired respectively by algebraic geometry (Schubert varieties) and differential geometry (the universal quotient bundle). Both are readily computable via the SVD.
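For context, the geodesic distance between two equidimensional subspaces mentioned above is the standard one obtained from their principal angles. A minimal NumPy sketch (the function name grassmann_distance is ours, for illustration only, and is not from the talk) might look like:

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance on the Grassmannian between the column spaces
    of A and B (both n-by-k, same k), computed via principal angles.

    The distance is sqrt(theta_1^2 + ... + theta_k^2), where the theta_i
    are the principal angles between the two subspaces.
    """
    # Orthonormal bases for the two subspaces
    Q_A, _ = np.linalg.qr(A)
    Q_B, _ = np.linalg.qr(B)
    # Singular values of Q_A^T Q_B are the cosines of the principal angles
    sigma = np.linalg.svd(Q_A.T @ Q_B, compute_uv=False)
    # Clip to guard against round-off pushing cosines slightly above 1
    theta = np.arccos(np.clip(sigma, -1.0, 1.0))
    return np.linalg.norm(theta)

# Example: two 2-dimensional subspaces of R^4
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((4, 2))
print(grassmann_distance(A, B))
```

The extensions discussed in the talk, to subspaces of different dimensions and to affine subspaces, generalize this SVD-based computation.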
The first part of this talk is joint work with Lizhen Lin, Sayan Mukherjee, and Brian St. Thomas of Duke. The second part is joint work with Ke Ye of Chicago.