Talk 1: Symmetric Nonnegative Matrix Factorization
Abstract—In this talk, the symmetric nonnegative matrix factorization (SNMF), a powerful tool in data mining for data dimension reduction and clustering, is discussed. Our recent work is introduced, including: (i) a new descent direction for the rank-one SNMF is derived, and a strategy for choosing the step size along this descent direction is established; (ii) a progressive hierarchical alternating least squares (PHALS) method for SNMF is developed, which is parameter-free and updates the variables column by column, with every column updated by solving a rank-one SNMF subproblem; and (iii) convergence of PHALS to the Karush-Kuhn-Tucker (KKT) point set (i.e., the stationary point set) is proved. Several synthetic and real data sets are tested to demonstrate the effectiveness and efficiency of the proposed method. PHALS delivers better performance in terms of computational accuracy, optimality gap, and CPU time than a number of state-of-the-art SNMF methods.
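To make the rank-one subproblem solved by each PHALS column update concrete, here is a minimal projected-gradient sketch in Python. The descent direction (plain negative gradient) and the 1/L step size below are generic textbook choices, not the specialized direction and step-size strategy derived in the talk; all names are my own.

```python
import numpy as np

def rank_one_snmf(R, n_iter=2000, seed=0):
    """Projected-gradient sketch for the rank-one SNMF subproblem
    min_{h >= 0} ||R - h h^T||_F^2, with R symmetric nonnegative.

    Generic illustration only: not the descent direction or
    step-size rule of the talk.
    """
    rng = np.random.default_rng(seed)
    h = rng.random(R.shape[0])
    for _ in range(n_iter):
        # Gradient of f(h) = ||R - h h^T||_F^2 for symmetric R.
        grad = 4.0 * ((h @ h) * h - R @ h)
        # Conservative step from a local Lipschitz bound on the gradient.
        step = 1.0 / (4.0 * (3.0 * (h @ h) + np.linalg.norm(R, 2)) + 1e-12)
        h = np.maximum(h - step * grad, 0.0)  # project onto h >= 0
    return h
```

On an exactly rank-one test matrix R = h0 h0^T with h0 nonnegative, this iteration typically recovers h0.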
Talk 2: Convergence of a Fast Hierarchical Alternating Least Squares Algorithm for Nonnegative Matrix Factorization
Abstract—Hierarchical alternating least squares (HALS) algorithms are powerful tools for nonnegative matrix factorization (NMF), a popular data dimension reduction method. Among existing HALS algorithms, the Fast-HALS proposed in [A. Cichocki and A.-H. Phan, 2009] is the most efficient. In this talk, the convergence of Fast-HALS is discussed. First, a more general weak convergence result (every limit point of the iterates is a stationary point) is established without any assumption, whereas existing results assume that all columns of the iterates stay strictly away from the origin. Then a simplified proof of strong convergence (convergence of the iterates) is provided. The existing strong convergence result is derived from the block prox-linear (BPL) method, a more general framework that includes Fast-HALS as a special case, so the corresponding proof is quite complex. Our simplified proof exploits the structure of Fast-HALS and can be regarded as a complement to the results under BPL.
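To illustrate the column-wise structure that HALS algorithms share, here is a minimal NumPy sketch of one HALS-style scheme for A ≈ WH. It follows the standard closed-form column updates, but it is not claimed to reproduce the exact Fast-HALS implementation of Cichocki and Phan (2009); all names are my own.

```python
import numpy as np

def hals_nmf(A, r, n_iter=300, seed=0):
    """Column-wise HALS-style sketch for NMF: A ≈ W @ H with W, H >= 0.

    Each column of W (and row of H) is updated in turn by a closed-form
    nonnegative rank-one least-squares step. Illustration of the
    hierarchical ALS idea, not the exact Fast-HALS algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    eps = 1e-12  # guards against division by a vanishing diagonal entry
    for _ in range(n_iter):
        HHt, AHt = H @ H.T, A @ H.T
        for j in range(r):
            # Residual-corrected update of column j of W.
            num = AHt[:, j] - W @ HHt[:, j] + W[:, j] * HHt[j, j]
            W[:, j] = np.maximum(num / (HHt[j, j] + eps), 0.0)
        WtW, WtA = W.T @ W, W.T @ A
        for j in range(r):
            # Matching update of row j of H.
            num = WtA[j, :] - WtW[j, :] @ H + WtW[j, j] * H[j, :]
            H[j, :] = np.maximum(num / (WtW[j, j] + eps), 0.0)
    return W, H
```

On data that is exactly a nonnegative low-rank product, a few hundred sweeps usually drive the relative reconstruction error close to zero.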
Talk 3: Quadratic Inverse Eigenvalue Problems in Quadratic Model Updating
Abstract—Updating a system modeled as a real symmetric quadratic eigenvalue problem to match observed spectral information has been an important task for practitioners in different disciplines. It is often desirable in the process to match only the newly measured data without tampering with the other unmeasured, and often unknown, eigenstructure inherent in the original model. Such an updating, known as no spill-over, has been critical yet challenging in practice. Up to now, only a mathematical theory of updating with no spill-over has begun to be understood. Other imperative issues, such as maintaining positive definiteness of the coefficient matrices, remain to be addressed. This talk highlights several theoretical aspects of updating that preserves both no spill-over and positive definiteness of the mass and stiffness matrices.
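For orientation, the model in question is the quadratic eigenvalue problem; the notation below (M for mass, C for damping, K for stiffness) is my own shorthand for the symmetric coefficient matrices mentioned in the abstract:

```latex
\left(\lambda^{2} M + \lambda C + K\right)x = 0,
\qquad M = M^{\top},\quad C = C^{\top},\quad K = K^{\top}.
```

Model updating seeks modified coefficients (M, C, K) whose pencil possesses the newly measured eigenpairs; the no spill-over requirement additionally demands that the remaining eigenpairs of the original pencil stay unchanged.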
Talk 4: Perturbation Analysis of Linear Least Squares
Abstract. In this talk, some important results on the perturbation of the linear least squares problem are introduced.
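As a pointer to the kind of result involved, a representative first-order perturbation bound for the full-column-rank problem min_x ||Ax - b||_2 has the following shape; the exact constants vary across references, and this only displays the qualitative dependence on the condition number and the residual r = b - Ax under data perturbations of relative size ε:

```latex
\frac{\|\hat{x} - x\|_{2}}{\|x\|_{2}}
\;\lesssim\;
\kappa_{2}(A)\,\varepsilon
\;+\;
\kappa_{2}(A)^{2}\,\frac{\|r\|_{2}}{\|A\|_{2}\,\|x\|_{2}}\,\varepsilon,
\qquad
\kappa_{2}(A) = \|A\|_{2}\,\|A^{\dagger}\|_{2}.
```

The κ² term shows that least squares problems with large residuals can be far more sensitive than consistent linear systems.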
Delin Chu is a Professor at the National University of Singapore. He previously held positions at the University of Hong Kong, Tsinghua University, TU Chemnitz, and the University of Bielefeld. His main research areas are scientific computing, numerical linear algebra, and their applications; his honors include Germany's Alexander von Humboldt Fellowship and Japan's JSPS Fellowship. He currently serves as Associate Editor of SIAM Journal on Scientific Computing, SIAM Journal on Matrix Analysis and Applications, and Automatica, as an Editorial Board member of the Journal of Computational and Applied Mathematics, and as a Guest Editor of the Journal of the Franklin Institute. In recent years he has published more than 100 papers in leading international journals, including SIAM Journal on Matrix Analysis and Applications, SIAM Journal on Scientific Computing, SIAM Journal on Control and Optimization, SIAM Journal on Applied Dynamical Systems, Mathematics of Computation, Numerische Mathematik, Journal of Scientific Computing, IEEE Transactions on Pattern Analysis and Machine Intelligence, and IEEE Transactions on Neural Networks and Learning Systems.