-
11:00 am
Efstratios Tsoukanis - CGU
Active Learning Classification from a Signal Separation Perspective
Math 278B: Mathematics of Information, Data, and Signals
APM 6402
Abstract: In machine learning, classification is often approached as a function approximation problem. In this talk, we propose an active learning framework inspired by signal separation and super-resolution theory. Our approach enables efficient identification of class supports, even in the presence of overlapping distributions, allowing clustering and label propagation from very few labeled points.
-
4:00 pm
Dr. Miguel Moreira - Massachusetts Institute of Technology
The Chern filtration on the cohomology of moduli spaces of (parabolic) bundles
Math 208: Seminar in Algebraic Geometry
APM 7321
Abstract: The Chern filtration is a natural filtration that can be defined on the cohomology of moduli spaces of sheaves. It was originally defined for the moduli of Higgs bundles, motivated by a comparison with the perverse and weight filtrations, but it also makes sense for the very classical moduli spaces of bundles on curves. A vanishing result conjectured by Newstead and proved by Earl-Kirwan in the 90s is secretly a statement about the Chern filtration. I will explain a new approach to this vanishing based on parabolic bundles: it turns out that enriching the problem with a parabolic structure gives access to powerful tools, such as wall-crossing, Hecke transforms, and Weyl symmetry. Together, these give a new proof of the Newstead-Jeffrey-Kirwan vanishing and a related "$d$ independence" statement. Part of the talk is based on work with W. Lim and W. Pi.
-
3:00 pm
Professor Hans Wenzl - UC San Diego
Tensor categories from conformal inclusions
Math 211A: Seminar in Algebra
APM 7321
Abstract: It is well known that if a tensor category has an abelian algebra object A, one obtains a new category, essentially by tensoring over A. An important class of such algebra objects comes from conformal inclusions for loop groups. While these algebra objects have been known for a long time, an explicit description of the corresponding categories was found only recently.
Somewhat surprisingly, they are closely related to representation categories of the isomeric quantum Lie superalgebras. This talk is based on joint work with Edie-Michell and a paper by Edie-Michell and Snyder.
-
3:00 pm
Prof. Robert Webber - UC San Diego
Randomly sparsified Richardson iteration: A dimension-independent sparse linear solver
Math 296: Graduate Student Colloquium
APM 6402
Abstract: Recently, a class of algorithms combining classical fixed-point iterations with repeated random sparsification of approximate solution vectors has been successfully applied to eigenproblems with matrices as large as $10^{108} \times 10^{108}$. So far, a complete mathematical explanation for this success has proven elusive, and the family of methods has not yet been extended to the important case of linear system solves. In this work we propose a new scheme based on repeated random sparsification that is capable of solving sparse linear systems in arbitrarily high dimensions, and we provide a complete mathematical analysis of the new algorithm. Our analysis establishes a faster-than-Monte Carlo convergence rate and justifies use of the scheme even when the solution vector itself is too large to store.
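A minimal sketch of the general idea (not the authors' algorithm; the test system, sparsification rule, and all parameters below are illustrative assumptions): apply a Richardson step, randomly sparsify the iterate in an unbiased way, and average the iterates.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(x, m):
    """Unbiased random sparsification: entry i survives with probability
    p_i = min(1, m*|x_i| / ||x||_1) and is rescaled by 1/p_i, so that
    E[sparsify(x)] = x while only about m entries stay nonzero."""
    total = np.abs(x).sum()
    if total == 0.0:
        return x.copy()
    p = np.minimum(1.0, m * np.abs(x) / total)
    keep = rng.random(x.size) < p
    out = np.zeros_like(x)
    out[keep] = x[keep] / p[keep]
    return out

# toy well-conditioned system A x = b (illustrative; not from the talk)
n = 200
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

x = np.zeros(n)
avg = np.zeros(n)
for k in range(1, 2001):
    x = sparsify(x + (b - A @ x), m=50)   # Richardson step, then sparsify
    avg += (x - avg) / k                  # running average damps the noise

rel_err = np.linalg.norm(avg - np.linalg.solve(A, b)) / np.linalg.norm(b)
```

Individual iterates are very noisy, but since the sparsification is unbiased, the running average converges toward the solution while each stored vector stays sparse.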
-
3:00 pm
Dr. Kristin Lauter - Meta
AI4Crypto: Using Machine Learning to solve Hard Math Problems in Practice
AWM Colloquium
APM 6402
Abstract: AI is taking off, and we could say we are living in “the AI Era”. Progress in AI today is based on mathematics and statistics under the covers of machine learning models. This talk will explain recent work on AI4Crypto, where we train AI models to attack Post-Quantum Cryptography (PQC) schemes based on lattices. I will use this work as a case study in training ML models to solve hard math problems in practice. Our AI4Crypto project has developed AI models capable of recovering secrets in post-quantum cryptosystems. The standardized PQC systems were designed to be secure against a quantum computer, but are not necessarily safe against advanced AI!
Understanding the concrete security of these standardized PQC schemes is important for the future of e-commerce and internet security. So instead of saying that we are living in a “Post-Quantum” era, we should say that we are living in a “Post-AI” era!
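For readers unfamiliar with lattice-based schemes, the secret-recovery problem being attacked can be illustrated with a toy Learning With Errors (LWE) instance. The parameters below are illustrative assumptions, far smaller than any real PQC scheme, and this is not the AI4Crypto attack code:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 3329, 16, 64               # toy sizes; real schemes use much larger n

s = rng.integers(0, q, size=n)       # the secret vector
A = rng.integers(0, q, size=(m, n))  # public, uniformly random matrix
e = rng.integers(-2, 3, size=m)      # small "noise" vector
b = (A @ s + e) % q                  # published LWE samples: b = A s + e (mod q)

# Without e, the secret would fall to Gaussian elimination; with e,
# recovering s from the pair (A, b) is the hard lattice problem that
# the ML models in the talk are trained to attack.
```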
-
10:00 am
Gaurav Aggarwal - Tata Institute of Fundamental Research, Mumbai
Lévy-Khintchine Theorems: effective results and central limit theorems
Math 211B - Group Actions Seminar
Zoom ID 96741093409
Abstract: The Lévy-Khintchine theorem is a classical result in Diophantine approximation that describes the growth rate of the denominators of convergents in the continued fraction expansion of a typical real number. We make this theorem effective by establishing a quantitative rate of convergence. More recently, Cheung and Chevallier (Annales scientifiques de l'ENS, 2024) established a higher-dimensional analogue of the Lévy-Khintchine theorem in the setting of simultaneous Diophantine approximation, providing a limiting distribution for the denominators of best approximations. We also make their result effective by proving a convergence rate and, in addition, establish a central limit theorem in this context. Our approach is entirely different and relies on techniques from homogeneous dynamics.
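As a concrete illustration of the classical (non-effective) statement, the growth rate $\lim_n \frac{1}{n}\log q_n = \pi^2/(12\ln 2)$ for the convergent denominators $q_n$ of a typical real number can be checked numerically. The sketch below uses an exact random rational with 200-bit denominator as a stand-in for a "typical" number (an assumption for illustration only):

```python
import math
import random
from fractions import Fraction

def cf_denominators(x, n_terms):
    """Denominators q_k of the continued-fraction convergents of x in (0, 1),
    via the recursion q_k = a_k * q_{k-1} + q_{k-2}."""
    q_prev, q = 0, 1
    qs = []
    for _ in range(n_terms):
        if x == 0:                 # rational input: expansion terminated
            break
        a = int(1 / x)             # next partial quotient a_k
        x = 1 / x - a
        q_prev, q = q, a * q + q_prev
        qs.append(q)
    return qs

random.seed(7)
x = Fraction(random.getrandbits(200), 2**200)   # exact arithmetic throughout
qs = cf_denominators(x, 50)
levy = math.pi**2 / (12 * math.log(2))          # Lévy's constant, about 1.1866
rate = math.log(qs[-1]) / len(qs)               # empirical growth rate of log q_n
```

The empirical rate fluctuates around Lévy's constant at the $1/\sqrt{n}$ scale, which is exactly the scale at which the central limit theorem of the talk operates.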
-
11:00 am
Professor Zhen-Qing Chen - University of Washington
Boundary trace of symmetric reflected diffusions
2025 Ronald Getoor Lecture
APM 6402
Abstract: Starting with a transient irreducible diffusion process $X^0$ on a locally compact separable metric space $(D, d)$ (for example, absorbing Brownian motion in a snowflake domain), one can construct a canonical symmetric reflected diffusion process $\bar X$ on a completion $D^*$ of $(D, d)$ through the theory of reflected Dirichlet spaces. The boundary trace process $\check X$ of $X^0$ on the boundary $\partial D:=D^*\setminus D$ is the reflected diffusion process $\bar X$ time-changed by a smooth measure $\nu$ having full quasi-support on $\partial D$. The Dirichlet form of the trace process $\check X$ is called the trace Dirichlet form. In this talk, I will address the following two fundamental questions:
1) How to characterize the boundary trace Dirichlet space in a concrete way?
2) How does the boundary trace process behave?
Based on a joint work with Shiping Cao.
-
1:00 pm
Dr. Gregory Parker - Stanford University
Families of non-product minimal submanifolds with cylindrical tangent cones
Math 258: Seminar in Differential Geometry
APM B412
Abstract: The study of singularities of minimal submanifolds has a long history, with isolated singularities being the best understood case. The next simplest case is that of minimal submanifolds with singularities locally modeled on the product of an isolated conical singularity and a Euclidean space; such submanifolds are said to have cylindrical tangent cones at these singularities. Despite work in many contexts on minimal submanifolds with such singularities, the only known explicit examples at present are global products or involve extra structure (e.g. Kähler subvarieties). In this talk, I will describe a method for constructing infinite-dimensional families of non-product minimal submanifolds in arbitrary codimension whose singular set is itself an analytic submanifold. The construction uses techniques from the analysis of singular elliptic operators and Nash-Moser theory. This talk is based on joint work with Rafe Mazzeo.
-
2:00 pm
Scotty Tilton - UCSD
A Chemystery: Representations, Orbitals, and Mnemonic Devices
Food for Thought
APM 6402
Abstract: How in the world did they get those crazy pictures of electron orbitals? Those chemists had to have talked to somebody about it! It turns out they talked to math people (probably physicists, but physicists talk to math people, and so on). These orbitals can actually be derived in a not-too-bad way using representation theory. We'll go over what electron orbitals are, how they show up in the periodic table, how representation theory gets involved, and how to derive the electron orbitals ourselves. We will even find orbitals bigger than the highest occupied orbital of Oganesson! We'll hopefully also understand what physicists and engineers mean when they say they have a "tensor." I've also been studying the periodic table using mnemonic devices lately, so you'll be sure to hear about that.
-
4:00 pm
Dr. Francois Greer - Michigan State University
Elliptic-Elliptic Surfaces
Math 208: Seminar in Algebraic Geometry
APM 7321
Abstract: Elliptic surfaces are complex surfaces with two discrete invariants, $g$ and $d>0$. We will discuss the moduli and Hodge theory of these surfaces for small values of $(g,d)$. The case $(g,d)=(1,1)$ is particularly interesting, in view of a new conjectural Fourier-Mukai type correspondence. It also provides a test case for the Hodge Conjecture in dimension 4.
-
3:00 pm
Prof. Brendon Rhoades - UC San Diego
The superspace coinvariant ring of the symmetric group
Math 211A: Seminar in Algebra
APM 7321
Abstract: The symmetric group $\mathfrak{S}_n$ acts naturally on the polynomial ring of rank $n$ by variable permutation. The classical coinvariant ring $R_n$ is the quotient of this action by the ideal generated by invariant polynomials with vanishing constant term. The ring $R_n$ has deep ties to the combinatorics of permutations and the geometry of the flag variety. The superspace coinvariant ring $SR_n$ is obtained by an analogous construction in which one considers the action of $\mathfrak{S}_n$ on the algebra $\Omega_n$ of polynomial-valued differential forms on $n$-space. We describe the Macaulay inverse system associated to $SR_n$, give a formula for its bigraded Hilbert series, and give an explicit basis of $SR_n$, derived using Solomon-Terao algebras associated to free hyperplane arrangements. Joint with Robert Angarone, Patty Commins, Trevor Karn, Satoshi Murai, and Andy Wilson.
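The superspace setting is beyond a short sketch, but the classical case it generalizes can be checked directly: for $n = 3$, the coinvariant ring $R_3 = \mathbb{Q}[x_1,x_2,x_3]/(e_1,e_2,e_3)$ has dimension $3! = 6$ and Hilbert series $[3]_q! = (1+q)(1+q+q^2)$. A small sympy verification (variable names are mine, assuming sympy is available):

```python
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')
e1 = x1 + x2 + x3                  # elementary symmetric polynomials,
e2 = x1*x2 + x1*x3 + x2*x3         # i.e. the invariants with vanishing
e3 = x1*x2*x3                      # constant term that generate the ideal

# Lex Groebner basis of the invariant ideal; its leading terms are
# x1, x2^2, x3^3, so the standard monomials x2^a x3^b with a < 2, b < 3
# form a basis of R_3 of size 2 * 3 = 6 = 3!.
G = groebner([e1, e2, e3], x1, x2, x3, order='lex')

# x3^3 = e1*x3^2 - e2*x3 + e3 lies in the ideal, so it reduces to 0:
_, rem = G.reduce(x3**3)
```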
-
3:00 pm
Prof. Rose Yu - UC San Diego, Department of Computer Science and Engineering
On the Interplay Between Deep Learning and Dynamical Systems
APM 7321
Abstract: The explosion of real-time data in the physical world requires new generations of tools to model complex dynamical systems. Deep learning, the foundation of modern AI, offers highly scalable models for spatiotemporal data. On the other hand, deep learning models are opaque and complex. Dynamical systems theory plays a key role in describing the emergent behavior of deep neural networks and provides new paths toward understanding the hidden structures in these complex systems. In this talk, I will give an overview of our research exploring the interplay between the two, and showcase applications of these approaches to different science and engineering tasks.
-
2:00 pm
Professor Claire Tomlin - James and Katherine Lau Professor in the College of Engineering; Chair, Department of Electrical Engineering and Computer Sciences (University of California, Berkeley)
Safe Learning in Autonomy
Murray and Adylin Rosenblatt Lecture in Applied Mathematics
Kavli Auditorium, Tata Hall, UC San Diego
Abstract: Please register at https://forms.gle/yDcUa9LAmpY1F2178.
-
3:10 pm
Professor David Hirshleifer - University of Southern California
Social Transmission Effects in Economics and Finance
Murray and Adylin Rosenblatt Endowed Lecture in Applied Mathematics
Kavli Auditorium, Tata Hall, UC San Diego
-
11:00 am
Haixiao Wang - UC San Diego
Critical sparse random rectangular matrices: emergence of spectral outliers
Math 288 - Probability & Statistics
APM 6402
Abstract: Consider the random bipartite Erdős-Rényi graph $G(n, m, p)$, where each edge with one vertex in $V_{1}=[n]$ and the other vertex in $V_{2}=[m]$ is present with probability $p$, and $n \geq m$. For the centered and normalized adjacency matrix $H$, it is well known that the empirical spectral measure converges to the Marchenko-Pastur (MP) distribution. However, due to the sparsity, this does not necessarily imply that the largest (resp. smallest) singular values converge to the right (resp. left) edge when $p = o(1)$. Dumitriu and Zhu (2024) proved that almost surely there are no outliers outside the compact support of the MP law when $np = \omega(\log(n))$. In this work, we consider the critical sparsity regime $np = O(\log(n))$, where we write $p = b\log(n)/\sqrt{mn}$ and $\gamma = n/m$ for positive constants $b$ and $\gamma$. For the first time in the literature, we quantitatively characterize the emergence of outlier singular values: when $b > b_{\star}$, there are no outliers outside the bulk; when $b^{\star}< b < b_{\star}$, outlier singular values appear only outside the right edge of the MP law; and when $b < b^{\star}$, outliers appear on both sides. Moreover, the locations of these outliers are precisely characterized by a function of the largest and smallest degrees of the sampled random graph. The thresholds $b^{\star}$ and $b_{\star}$ depend only on $\gamma$. Our results extend to sparse random rectangular matrices with bounded entries.
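In the dense regime that the abstract contrasts with ($np \gg \log n$), the absence of outliers is easy to see numerically. A quick sanity check under illustrative parameters (my choices, not the paper's critical regime):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 900, 600, 0.2                    # dense regime: np = 180 >> log(n)

A = (rng.random((n, m)) < p).astype(float) # biadjacency matrix of G(n, m, p)
H = (A - p) / np.sqrt(n * p * (1 - p))     # centered, normalized (entries var 1/n)
s = np.linalg.svd(H, compute_uv=False)

c = m / n                                  # aspect ratio (here 2/3)
right_edge = 1 + np.sqrt(c)                # MP law predicts singular values
left_edge = 1 - np.sqrt(c)                 # supported on [1-sqrt(c), 1+sqrt(c)]
```

All $m$ singular values land inside the Marchenko-Pastur support (up to finite-size fluctuations); in the critical regime $np = O(\log n)$ studied in the talk, degree fluctuations instead push singular values outside these edges.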
-
11:00 am
Zihan Shao - UCSD
Solving Nonlinear PDEs with Sparse Radial Basis Function Networks
Math 278B: Mathematics of Information, Data, and Signals
APM 6402
Abstract: We propose a novel framework for solving nonlinear PDEs using sparse radial basis function (RBF) networks. Sparsity-promoting regularization is employed to prevent over-parameterization and reduce redundant features. This work is motivated by longstanding challenges in traditional RBF collocation methods, along with the limitations of physics-informed neural networks (PINNs) and Gaussian process (GP) approaches, and aims to blend their respective strengths in a unified framework. The theoretical foundation of our approach lies in the reproducing kernel Banach space (RKBS) induced by one-hidden-layer neural networks of possibly infinite width. We prove a representer theorem showing that the sparse optimization problem in the RKBS admits a finite-width solution, and we establish error bounds that offer a foundation for generalizing classical numerical analysis. The algorithmic framework maintains computational efficiency through a three-phase algorithm combining adaptive feature selection, second-order optimization, and pruning of inactive neurons. Numerical experiments demonstrate the effectiveness of our method and highlight cases where it offers notable advantages over GP approaches. This work opens new directions for adaptive PDE solvers grounded in rigorous analysis with efficient, learning-inspired implementation.
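The sparse, nonlinear framework of the talk is beyond a short sketch, but the classical RBF collocation baseline it builds on is easy to demonstrate on a linear 1-D Poisson problem $u'' = f$, $u(0)=u(1)=0$, with exact solution $u(x)=\sin(\pi x)$. All parameters below (shape parameter, center and collocation counts) are illustrative assumptions:

```python
import numpy as np

eps = 8.0                                  # Gaussian shape parameter (assumed)
centers = np.linspace(0.0, 1.0, 25)        # RBF centers
xs = np.linspace(0.0, 1.0, 40)             # collocation points

phi = lambda x, c: np.exp(-eps**2 * (x - c)**2)                    # Gaussian RBF
phi2 = lambda x, c: (4*eps**4*(x - c)**2 - 2*eps**2) * phi(x, c)   # second derivative

f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # forcing for u(x) = sin(pi x)

# least-squares collocation: PDE rows at interior points, Dirichlet rows at ends
A = np.vstack([phi2(xs[1:-1, None], centers[None, :]),
               phi(np.array([[0.0], [1.0]]), centers[None, :])])
rhs = np.concatenate([f(xs[1:-1]), [0.0, 0.0]])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi(xs[:, None], centers[None, :]) @ coef
err = np.max(np.abs(u - np.sin(np.pi * xs)))
```

The dense least-squares fit above is exactly where the ill-conditioning and over-parameterization the abstract mentions come from; the talk's framework replaces it with sparsity-promoting regularization and adaptive selection of centers.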
-
3:00 pm
Prof. Deanna Needell - UCLA
Fairness and Foundations in Machine Learning
Math 278C: Optimization and Data Science
APM 6218 & Zoom (Meeting ID: 941 4642 0185, Password: 278C2025)
Abstract: In this talk, we will address areas of recent work centered around the themes of fairness and foundations in machine learning, as well as highlight the challenges in this area. We will discuss recent results involving linear algebraic tools for learning, such as methods in non-negative matrix factorization (NMF) that include tailored approaches for fairness, and we will showcase our approach as well as practical applications of those methods. Then, we will discuss new foundational results that theoretically justify phenomena like benign overfitting in neural networks. Throughout the talk, we will include example applications from collaborations with community partners, using machine learning to help organizations with fairness and justice goals. This talk includes joint work with Erin George, Kedar Karhadkar, Lara Kassab, and Guido Montufar.
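As a point of reference for the NMF-based methods mentioned above, here is a minimal, fairness-agnostic NMF via the classic Lee-Seung multiplicative updates (the data matrix and rank are illustrative assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((60, 40))          # nonnegative data matrix (illustrative)
r = 5                             # target rank

W = rng.random((60, r)) + 0.1     # strictly positive initializations
H = rng.random((r, 40)) + 0.1
tiny = 1e-9                       # guards against division by zero

# Lee-Seung multiplicative updates: each step keeps W, H nonnegative
# and does not increase the Frobenius error ||X - WH||_F.
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + tiny)
    W *= (X @ H.T) / (W @ H @ H.T + tiny)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The fairness-tailored variants discussed in the talk modify objectives like this one with additional structure or constraints; this sketch only shows the unconstrained baseline factorization.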
Prof. Deanna Needell earned her PhD from UC Davis before working as a postdoctoral fellow at Stanford University. She is currently a full professor of mathematics at UCLA, the Dunn Family Endowed Chair in Data Theory, and the Executive Director of UCLA's Institute for Digital Research and Education. She has earned many awards, including an Alfred P. Sloan Fellowship, an NSF CAREER award, and the IMA Prize in Mathematics and its Applications, and she is a 2022 American Mathematical Society (AMS) Fellow and a 2024 Society for Industrial and Applied Mathematics (SIAM) Fellow. She has been a research professor fellow at several top research institutes, including SLMath (formerly MSRI) and the Simons Institute in Berkeley. She also serves as an associate editor for several journals, including Linear Algebra and its Applications and the SIAM Journal on Imaging Sciences, as well as on the organizing committee for SIAM sessions and the Association for Women in Mathematics.
-
3:00 pm
Nathaniel Libman
Orbit Harmonics and Graded Ehrhart Theory for Hypersimplices
Thesis Defense
APM 7321
-
4:00 pm
Joe Kramer-Miller - Lehigh University
On the diagonal and Hadamard grades of hypergeometric functions
Math 209: Number Theory Seminar
APM 7321 and online (see https://www.math.ucsd.edu/~nts/)
Abstract: Diagonals of multivariate rational functions are an important class of functions arising in number theory, algebraic geometry, combinatorics, and physics. For instance, many hypergeometric functions are diagonals, as is the generating function for Apéry's sequence. A natural question is to determine the diagonal grade of a function, i.e., the minimum number of variables needed to express a given function as a diagonal. The diagonal grade gives the ring of diagonals a filtration. In this talk we study the notion of diagonal grade and the related notion of Hadamard grade (writing functions as Hadamard products of algebraic functions), resolving questions of Allouche and Mendès France and of Melczer, and proving half of a conjecture recently posed by a group of physicists. This work is joint with Andrew Harder.
[pre-talk at 3:00 pm]
-
11:00 am
Jonah Botvinick-Greenhouse - Cornell University
TBA
Math 278B: Mathematics of Information, Data, and Signals
APM 6402
-