Department of Mathematics,
University of California San Diego
****************************
Advancement to Candidacy
Jiangtao Li
UC San Diego
Kähler-Ricci Flow on Surfaces
-
APM 7218
****************************
Math 269 - Combinatorics Seminar
Dr. He Guo
Umeå University
Coloring and list coloring in intersections of matroids
Abstract:
It is known that in matroids the list chromatic number equals the chromatic number. We investigate the gap between these two parameters for hypergraphs that are the intersection of a given number $k$ of matroids. We prove that in such hypergraphs the list chromatic number is at most $k$ times the chromatic number and at most $2k-1$ times the maximum chromatic number among the $k$ matroids. This proves a conjecture posed by Király and also by Bérczi, Schwarcz, and Yamaguchi. We also prove that the list chromatic number of the intersection of two matroids is at most the sum of the chromatic numbers of the two matroids, improving a 2006 result of Aharoni and Berger. The tools used are in part topological (but the talk assumes no background in matroid theory or algebraic topology).
Based on joint work with Ron Aharoni, Eli Berger, and Dani Kotlar; see arXiv:2407.08789.
-
APM 7321
****************************
Math 296 - Graduate Student Colloquium
Prof. Lijun Ding
UC San Diego
On the squared-variable approach for nonlinear (semidefinite) programming
Abstract:
Consider min f(x) s.t. x >= 0, where the objective function f: R → R is smooth and the variable is required to be nonnegative. A naive "squared-variable" technique reformulates the problem as min_v f(v^2). The new problem is unconstrained, so many algorithms, e.g., gradient descent, can be applied. In this talk, we discuss the disadvantages of this approach, which have been known for decades, and the perhaps surprising fact that the two problems are equivalent in terms of (i) local minimizers and (ii) points satisfying the so-called second-order optimality conditions, which are key to designing optimization algorithms. We further discuss extensions of the approach and of the equivalence to the vector case (where the vector variable is required to have all entries nonnegative) and the matrix case (where the matrix variable is required to be positive semidefinite).
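The reformulation in the abstract can be sketched in a few lines: gradient descent on g(v) = f(v^2) uses the chain rule g'(v) = 2 v f'(v^2), and the recovered x = v^2 is nonnegative by construction. The objective functions below are illustrative choices of mine, not examples from the talk.

```python
def gd_squared_variable(df, v, lr=0.1, steps=5000):
    """Gradient descent on g(v) = f(v^2); chain rule: g'(v) = 2 v f'(v^2)."""
    for _ in range(steps):
        v -= lr * 2.0 * v * df(v * v)
    return v * v  # x = v^2 is nonnegative automatically

# interior case: f(x) = (x - 2)^2, constrained minimizer x* = 2 > 0
x1 = gd_squared_variable(lambda x: 2.0 * (x - 2.0), v=0.5)
# boundary case: f(x) = (x + 1)^2, constrained minimizer x* = 0
x2 = gd_squared_variable(lambda x: 2.0 * (x + 1.0), v=0.5)
print(round(x1, 3), round(x2, 3))  # → 2.0 0.0
```

Note that v = 0 is a stationary point of g for every f, since g'(0) = 0; this boundary subtlety is exactly where the second-order equivalence discussed in the talk matters.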
-
APM 6402
****************************
PhD Defense
Srivatsa Srinivas
UC San Diego
TBA
-
APM 6402
****************************
PhD Defense
Haixiao Wang
UC San Diego
Community Detection Problems on the General Random Hypergraphs
Abstract:
This dissertation concerns community detection problems under the Hypergraph Stochastic Block Model (HSBM), where the sizes of edges may vary and each edge appears independently with a given probability determined purely by the labels of its vertices.
For regimes where the expected degrees grow with the number of vertices, we prove, for the first time in the literature, a wide-ranging information-theoretic lower bound on the number of misclassified vertices for any algorithm, where the bound is characterized by a generalized Chernoff-Hellinger divergence involving the model parameters. In addition, when the expected degrees grow logarithmically, we establish a sharp threshold for exact recovery in the non-uniform, multiple-community setting, subject to only minor constraints. A key insight is that aggregating information across uniform layers enables exact recovery even in cases where it is impossible when each layer is considered alone. We present two efficient algorithms, for the minimal- and full-information scenarios, which achieve exact recovery above the threshold and attain the lowest possible mismatch ratio below the threshold, where exact recovery is impossible, confirming their optimality.
In the regime with bounded expected degrees, we develop a spectral algorithm that achieves partial recovery, correctly classifying a constant fraction of vertices. This fraction is determined by the Signal-to-Noise Ratio (SNR) of the model, and approaches 1 as the SNR grows slowly with the number of vertices. Our algorithm employs a three-stage approach: first, preprocessing the hypergraph to maximize SNR; second, applying spectral methods to obtain an initial partition; and finally, implementing tensor-based refinement to enhance classification accuracy.
Additionally, we provide novel concentration results for adjacency matrices of sparse random hypergraphs, which serve as the foundation for the theoretical analysis of our algorithms and may be of independent interest. We also address several open problems concerning incomplete model information and parameter estimation.
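As a toy illustration of the spectral stage only (a minimal sketch: the preprocessing and tensor-refinement stages of the dissertation's three-stage algorithm are omitted, and the graph below is a hand-picked toy rather than an HSBM sample), one can embed vertices by a second eigenvector of the adjacency matrix and split communities by sign:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def eigvec(A, shift=2.0, ortho=(), iters=500):
    """Power iteration on A + shift*I (the shift makes the spectrum positive
    so the iteration converges); directions in `ortho` are projected out."""
    n = len(A)
    v = [1.0 + 0.1 * i for i in range(n)]        # generic start vector
    for _ in range(iters):
        for u in ortho:                          # deflation step
            d = sum(x * y for x, y in zip(v, u))
            v = [x - d * y for x, y in zip(v, u)]
        w = matvec(A, v)
        v = [wi + shift * vi for wi, vi in zip(w, v)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# toy graph: two triangles joined by a single bridge edge
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = [[0.0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0

v1 = eigvec(A)                # Perron eigenvector (all one sign)
v2 = eigvec(A, ortho=[v1])    # second eigenvector carries the split
labels = [1 if x > 0 else 0 for x in v2]
print(labels)  # sign pattern separates {0, 1, 2} from {3, 4, 5}
```

This recovers the planted split on the toy graph; the dissertation's refinement stage is what pushes such an initial partition to the optimal mismatch ratio.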
-
APM 6402
****************************
Math 278C: Optimization and Data Science
Prof. Tingting Tang
San Diego State University
On flat stationary points of deep neural networks
Abstract:
Understanding the loss landscape of deep networks provides insight into how they learn and why they work so well in practice. In this talk, starting from the observation that flat minima correspond to continuous symmetries of the loss function, two symmetry-breaking methods are proposed that provably remove all flat minima (and flat stationary points) from the loss landscape of any deep feedforward network, as long as the activation function is smooth. Activation functions satisfying the assumptions include the sigmoid, hyperbolic tangent, softplus, and polynomials; admissible loss functions include cross-entropy and squared loss. The methods can essentially be viewed as generalized regularizations of the loss function. The proposed methods are then applied to polynomial neural networks, where the activation function is a polynomial of arbitrary degree; a first estimate of the number of isolated solutions is provided, giving a first glimpse of the complexity of these loss landscapes even in the absence of flat minima.
-
APM 6402
Zoom (Link)
Meeting ID: 941 4642 0185
Password: 278C2025
****************************
Advancement to Candidacy
Yasir Khan
Some PDE Constrained Variational Problems
-
APM 6402
****************************
Math 211B - Group Actions Seminar
Professor Joshua Bowman
Pepperdine University
Cycles in digraphs
Abstract:
Directed graphs, or digraphs, are useful in many areas of theoretical and applied mathematics, including for describing other combinatorial objects. We will review a method for counting closed walks in a digraph using a transfer matrix. Then we will use a group action to count singular cycles (closed walks for which the initial vertex has been forgotten). Finally we will apply these results to count structures in circulant graphs, up to rotational equivalence.
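The two counting steps can be sketched concretely (a short sketch of the standard transfer-matrix and Burnside computations; the function names are my own): closed walks of length k with a marked starting vertex are counted by tr(A^k), and forgetting the initial vertex amounts to averaging over the cyclic group of rotations.

```python
from math import gcd

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def closed_walks(A, k):
    """Transfer-matrix count: tr(A^k) = number of closed walks of length k
    with a marked starting vertex."""
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        P = mat_mul(P, A)
    return sum(P[i][i] for i in range(n))

def cycles_up_to_rotation(A, k):
    """Burnside's lemma for the cyclic group C_k rotating a closed walk:
    a walk fixed by rotation-by-j is a length-gcd(j, k) walk repeated."""
    return sum(closed_walks(A, gcd(j, k)) for j in range(1, k + 1)) // k

# directed 3-cycle 0 -> 1 -> 2 -> 0
C3 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(closed_walks(C3, 3), cycles_up_to_rotation(C3, 3))  # → 3 1
```

The three marked closed walks of length 3 in the directed triangle collapse to a single cycle once the starting vertex is forgotten, as expected.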
-
APM 7321
****************************
PhD Defense
Sheng Qiao
UCSD
Distributed High-Dimensional Quantile Regression under Communication Constraints
-
Meeting ID: 604 124 1201 (Zoom)
****************************
Math 288 - Probability & Statistics
Prof. Michael Conroy
Clemson University
Extremes in symmetric exclusion systems
Abstract:
The simple symmetric exclusion process on Z models the dynamics of particles with strong local interaction induced by an exclusion rule: each particle attempts the motion of a nearest-neighbor symmetric random walk, but jumps to occupied sites are suppressed. While this process has been studied extensively over the past several decades, little has been known rigorously about the behavior of extremal particles when the system is out of equilibrium. We consider a 'step' initial condition in which infinitely many particles lie below a maximal one. As time tends to infinity, the system becomes indistinguishable from one without particle interaction, in the sense that the point process of particle positions, appropriately scaled, converges in distribution to a Poisson process on the real line with intensity exp(-x)dx. Correspondingly, the position of the maximal particle converges to the Gumbel distribution exp(-exp(-x)), which answers a question left open by Arratia (1983). I will discuss several properties of the symmetric exclusion process that lead to this result, including negative association, self-duality, and the so-called 'stirring' construction, as well as extensions to higher dimensions and to dynamics that allow more than one particle per site. This talk is based on joint work with Sunder Sethuraman and Adrián González Casanova.
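The 'stirring' construction can be sketched directly (a minimal discrete-time caricature on a finite interval; the actual process runs in continuous time with Poisson clocks on bonds): each step swaps the contents of a random nearest-neighbor bond, which realizes exactly the rule that jumps onto occupied sites are suppressed.

```python
import random

def ssep_stir(occupied, steps, lo, hi, rng):
    """Discrete-time stirring sketch of SSEP on {lo, ..., hi}: each step picks
    a uniform bond (x, x+1) and swaps its contents. Swapping two occupied or
    two empty sites does nothing, so this is equivalent to suppressing jumps
    onto occupied sites (the exclusion rule)."""
    occ = set(occupied)
    for _ in range(steps):
        x = rng.randrange(lo, hi)            # bond (x, x+1)
        if (x in occ) != (x + 1 in occ):     # only mixed bonds change state
            occ.symmetric_difference_update({x, x + 1})
    return occ

# 'step' initial condition: sites -20, ..., 0 occupied, everything above empty
rng = random.Random(0)
final = ssep_stir(range(-20, 1), 20000, -40, 40, rng)
print(len(final), max(final))  # particle number (21) is conserved; the maximum fluctuates
```

The stirring picture makes properties like particle conservation and self-duality transparent, since swaps merely permute site contents.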
-
APM 6402
****************************
PhD Defense
Zhiyuan Jiang
UC San Diego
On Bimeromorphic Geometry and Abundance for Kähler Varieties
-
APM 6402
****************************
Final Defense
Haeseong Moon
UCSD
Robust Estimation and Private Learning in High-Dimensional Regression
-
Meeting ID: 318 528 1168 (Zoom)
****************************
Mathematics Colloquium
Miranda Holmes-Cerfon
University of British Columbia
DNA as a programmable material
Abstract:
DNA encodes the foundations of life, but it can also be thought of as a physical material, where its information-carrying capacity can be used to encode complexity in the structures it forms. I will talk about our group’s work studying DNA in a material setting. First, I will zoom in on the microscopic dynamics of DNA, and ask how it changes the coarse-grained dynamics of systems of particles when these are coated with single-stranded DNA. We will use stochastic models and homogenization techniques to show that DNA changes the dynamics dramatically, and we confirm our predictions with experiments. Our model bears much in common with many biological systems, such as blood cells and virus particles, and we use our model to make predictions about the dynamics of these systems. Then, we will zoom out and ask how to best use DNA to encode highly specific interactions between particles. Specifically, we wish to understand how to avoid the “kinetic traps” that prevent a system of assembling particles from reaching its equilibrium, or stationary, state. We discovered an unexpected connection to a combinatorial property of graphs, which we use to propose a strategy for designing optimal interactions.
-
APM 6402
****************************
Math 278B: Mathematics of Information, Data, and Signals
Zhaiming Shen
Georgia Tech
Transformers for Learning Single and Multi Tasks Regression on Manifolds: Approximation and Generalization Insights
Abstract:
Transformers serve as the foundational architecture for large language and video generation models, such as GPT, BERT, SORA, and their successors. While empirical studies have shown that real-world data and learning tasks exhibit low-dimensional geometric structures, the theoretical understanding of transformers in leveraging these structures remains largely unexplored. In this talk, we present a theoretical foundation for transformers in two key scenarios: (1) regression tasks with noisy input data lying near a low-dimensional manifold, and (2) in-context learning (ICL) for regression of Hölder functions on manifolds. For the first setting, we prove approximation and generalization bounds that depend crucially on the intrinsic dimension of the manifold, demonstrating that transformers can effectively learn from data perturbed by high-dimensional noise. For the second setting, we derive generalization error bounds for ICL in terms of prompt length and the number of training tasks, revealing that transformers achieve the minimax optimal rate for Hölder regression—scaling exponentially with the intrinsic rather than ambient dimension. Together, these results provide foundational insights into how transformers exploit low-dimensional geometric structures in learning tasks, advancing our theoretical understanding of their remarkable empirical success.
-
APM 6402
****************************
PhD Defense
Zunding Huang
UC San Diego
Mathematical and Numerical Studies of Continuum Electrostatics
-
APM 6218
****************************
PhD Defense
Finn McGlade
UC San Diego
On the Fourier-Jacobi Expansion of Quaternionic Modular Forms on $\mathrm{Spin}(8)$.
-
APM 6402
****************************
Math 208: Seminar in Algebraic Geometry
Vignesh Jagathese
UIC
Quasi-F-Purity of Excellent Rings
Abstract:
Quasi-F-splittings have proven to be a vital invariant in the study of varieties in positive characteristic, with numerous applications to birational geometry and F-singularities. In this talk I'll provide an overview of quasi-F-splittings and introduce a local variant, dubbed quasi-F-purity, which extends the theory of quasi-F-splittings to arbitrary prime characteristic fields. I will also discuss various permanence properties of quasi-F-purity, including stability under completion and étale extension.
-
APM 7321
****************************