May 13, 2025, 9:30–11:00 AM
Lecture Hall 502, Cao Guangbiao East Building, Yuquan Campus, Zhejiang University
Dr. Shang-Hua Teng has twice won the prestigious Gödel Prize in theoretical computer science: first in 2008, for developing the theory of smoothed analysis, and again in 2015, for designing the groundbreaking nearly-linear-time Laplacian solver for network systems. Both are joint work with his long-time collaborator Dan Spielman of Yale. Smoothed analysis is fundamental for modeling and analyzing practical algorithms, and the Laplacian paradigm has since led to several breakthroughs in network analysis, matrix computation, and optimization.

Citing him as "one of the most original theoretical computer scientists in the world", the Simons Foundation named Teng a 2014 Simons Investigator for pursuing long-term, curiosity-driven fundamental research. He and his collaborators also received the best paper award at the ACM Symposium on Theory of Computing (STOC) for what is considered the "first improvement in 10 years" on a fundamental optimization problem: the computation of maximum flows and minimum cuts in a network. His manuscript, Scalable Algorithms for Data and Network Analysis, received the Phi Kappa Phi Faculty Recognition Award in 2020. In addition, he is known for his joint work with Xi Chen and Xiaotie Deng characterizing the complexity of computing an approximate Nash equilibrium in game theory, and for his joint papers on market equilibria in computational economics. He and his collaborators also pioneered the development of well-shaped Delaunay meshing algorithms for arbitrary three-dimensional geometric domains, settling a long-standing open problem in numerical simulation that is also fundamental in computer graphics. Software based on this development was used at the University of Illinois for the simulation of advanced rockets.

Teng is also interested in mathematical board games. With his former Ph.D. student Kyle Burke, he designed and analyzed a game called Atropos, which is played on a Sperner triangle and is based on the beautiful, celebrated Sperner's Lemma. In 2000 at UIUC, Teng was named on the List of Teachers Ranked as Excellent by Their Students for his class "Network Security and Cryptography". He has worked for and consulted with Microsoft Research, Akamai, IBM Almaden Research Center, Intel Corporation, Xerox PARC, and NASA Ames Research Center, and has received fifteen patents for his work on compiler optimization, Internet technology, and social networks.
In professional leadership, he served as chair of the USC Computer Science Department (2009–2012). He is the current chair of the Steering Committee for the ACM-SIAM Symposium on Discrete Algorithms (SODA) and Vice Chair of the IEEE Technical Committee on Mathematical Foundations of Computing. He was also the 2018 chair of the ACM Donald E. Knuth Prize Committee. Currently, he is on the Advisory Board of the USC Women in Science and Engineering (WiSE) program and the Board of Directors of the USC Center for Applied Mathematical Sciences.
The quintessential learning algorithm of empirical risk minimization (ERM) is known to fail in various settings for which uniform convergence does not characterize learning. Relatedly, the practice of machine learning is rife with considerably richer algorithmic techniques, perhaps the most notable of which is regularization. Nevertheless, no such technique or principle has broken away from the pack to characterize optimal learning in these more general settings. The purpose of this work is to precisely characterize the role of regularization in perhaps the
simplest setting for which ERM fails: multiclass learning with arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit optimal learning algorithms that dovetail with tried-and-true algorithmic principles: Occam's Razor as embodied by structural risk minimization (SRM), the principle of maximum entropy, and Bayesian inference. We also extract from OIGs a combinatorial sequence we term the Hall complexity, which is the first to characterize a problem's transductive error rate exactly. Lastly, we introduce a generalization of OIGs and the transductive learning setting to the agnostic case, where we show that optimal orientations of Hamming graphs, judged using nodes' outdegrees minus a system of node-dependent credits, characterize optimal learners exactly. We demonstrate that an agnostic version of the Hall complexity again characterizes error rates exactly, and exhibit an optimal
learner using maximum entropy programs. Joint work (COLT 2024) with Julian Asilis, Siddartha Devic, Shaddin Dughmi, and Vatsal Sharan.
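For readers less familiar with the baseline the abstract starts from, here is a minimal, illustrative sketch of empirical risk minimization over a finite hypothesis class, the rule the talk contrasts with richer principles like regularization. The hypothesis class and toy data below are invented for illustration and are not from the talk.

```python
# Illustrative sketch (not from the talk): ERM over a small finite
# hypothesis class for a multiclass problem with labels {0, 1, 2}.

def empirical_risk(h, sample):
    """Fraction of labeled examples (x, y) that hypothesis h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def erm(hypotheses, sample):
    """Return the hypothesis with minimum empirical risk on the sample."""
    return min(hypotheses, key=lambda h: empirical_risk(h, sample))

# Toy multiclass data: inputs are integers, labels are in {0, 1, 2}.
sample = [(0, 0), (1, 1), (2, 2), (3, 0)]
hypotheses = [
    lambda x: x % 3,   # fits all 4 examples
    lambda x: 0,       # constant 0: fits 2 of 4
    lambda x: x % 2,   # fits 2 of 4
]
best = erm(hypotheses, sample)
print(empirical_risk(best, sample))  # 0.0
```

ERM simply minimizes training error; the point of the abstract is that in general multiclass settings this rule alone is not optimal, which motivates characterizing what regularization and related principles add.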
Faculty, students, and alumni are warmly welcome to attend!