LSV-Based Tail Inequalities for Sums of Random Matrices.
In: Neural Computation, Vol. 29 (2017), No. 1, pp. 247-262
The techniques of random matrices have played an important role in many machine learning models. In this letter, we present a new method to study tail inequalities for sums of random matrices. Different from other work (Ahlswede & Winter, 2002; Tropp, 2012; Hsu, Kakade, & Zhang, 2012), our tail results are based on the largest singular value (LSV) and are independent of the matrix dimension. Since the LSV operation and the expectation are noncommutative, we introduce a diagonalization method to convert the LSV operation into the trace operation of an infinite-dimensional diagonal matrix. In this way, we obtain another version of Laplace-transform bounds and then achieve the LSV-based tail inequalities for sums of random matrices.
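For context, the classical trace-based route the abstract contrasts itself with can be sketched as follows; this is the standard matrix Laplace-transform argument (in the style of Tropp, 2012), not the LSV method of the paper itself. For a sum \(Y = \sum_i X_i\) of independent, self-adjoint random \(d \times d\) matrices, one bounds the top eigenvalue via the trace of the matrix exponential:

```latex
\Pr\{\lambda_{\max}(Y) \ge t\}
  \;\le\; \inf_{\theta > 0} \, e^{-\theta t}\,
          \mathbb{E}\,\operatorname{tr} e^{\theta Y},
\qquad\text{using}\quad
e^{\theta \lambda_{\max}(Y)} = \lambda_{\max}\!\left(e^{\theta Y}\right)
  \;\le\; \operatorname{tr} e^{\theta Y}.
```

Conversely, \(\operatorname{tr} e^{\theta Y} \le d\, e^{\theta \lambda_{\max}(Y)}\), so the trace step is where the explicit dimension factor \(d\) enters bounds such as the matrix Bernstein inequality; this is the dependence the LSV-based bounds in the letter are claimed to avoid.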
| Field | Value |
|---|---|
| Title | LSV-Based Tail Inequalities for Sums of Random Matrices |
| Authors | Zhang, Chao; Du, Lei; Tao, Dacheng |
| Journal | Neural Computation, Vol. 29 (2017), No. 1, pp. 247-262 |
| Published | 2017 |
| Media type | academicJournal |
| ISSN | 0899-7667 (print) |
| DOI | 10.1162/NECO_a_00901 |