Parsimonious representation learning (PRL) aims to identify low-dimensional structures, such as sparsity and low rank, in high-dimensional data. In this talk, I will first focus on low-rank matrix factorization and tensor decomposition problems. I will introduce a novel Schatten-p quasi-norm of a matrix defined over its factors. Moreover, I will present a fully automated hierarchical variational Bayes scheme that allows us to perform block term tensor decomposition (BTD) in challenging cases where all the ranks of the model are unknown. In both cases, the resulting representations are rank-revealing, which is essential for developing computationally efficient algorithms. Next, I will introduce a new formulation of a well-established approach to robust subspace recovery (RSR), dubbed Dual Principal Component Pursuit (DPCP). With it, we can address RSR when the subspace codimension is high and unknown. Specifically, I will show that a projected subgradient method (PSGM) applied to the DPCP objective converges to the true subspace without requiring knowledge of the true codimension. Finally, I will introduce a new formulation for reverse engineering ℓp adversarial attacks. The proposed approach leverages ideas from sparse representation-based classification and aims to a) denoise the adversarially perturbed signal and then b) classify both the signal and the attack. I will show that, as long as certain geometric conditions are satisfied, the proposed approach determines the correct class of the signal and of the attack for adversarially corrupted signals. For all the approaches above, I will present experimental results that provide empirical evidence of the merits of the proposed methods compared to relevant state-of-the-art methods.
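As a minimal illustration of the PSGM idea mentioned above (a sketch, not the speaker's exact algorithm or analysis): the classical DPCP objective seeks a unit vector b minimizing ||X^T b||_1, whose minimizer is a normal vector to the inlier subspace. The step-size schedule, normalization of the subgradient, and all parameter values below are illustrative choices.

```python
import numpy as np

def dpcp_psgm(X, mu0=0.5, beta=0.97, iters=300, seed=0):
    # Sketch of a projected subgradient method for the DPCP objective
    #     min_{||b||_2 = 1} ||X^T b||_1,
    # where columns of X are data points; the minimizer is (ideally)
    # a vector normal to the inlier subspace.
    d, _ = X.shape
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(d)
    b /= np.linalg.norm(b)
    mu = mu0
    for _ in range(iters):
        g = X @ np.sign(X.T @ b)      # subgradient of b -> ||X^T b||_1
        gn = np.linalg.norm(g)
        if gn > 0:
            b = b - mu * g / gn       # normalized subgradient step
        b /= np.linalg.norm(b)        # project back onto the unit sphere
        mu *= beta                    # geometrically diminishing step size
    return b
```

For example, on data consisting of inliers lying in a hyperplane plus random outliers, the returned b aligns with the hyperplane's normal, recovering one direction of the orthogonal complement without knowing the codimension in advance.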
Paris V. Giampouras is a Research Faculty member of the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University (JHU), USA. He received his Diploma in Electrical and Computer Engineering from the National Technical University of Athens, Athens, Greece, in 2011, and his M.Sc. degree in Information Technologies in Medicine and Biology from the Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece, in 2014. In 2018, he received his Ph.D. degree from the same university. In 2019, he was awarded a Marie Skłodowska-Curie Postdoctoral Fellowship from the European Union and joined MINDS at JHU, where he worked with Prof. René Vidal. In 2017, he received the second-best student paper award at the IEEE CoSeRa conference. His work has been published and presented in signal processing and machine learning journals and conferences, such as IEEE Transactions on Signal Processing (TSP), Advances in Neural Information Processing Systems (NeurIPS), the International Conference on Learning Representations (ICLR), and the International Conference on Machine Learning (ICML). His main research interests include nonconvex optimization, representation learning with sparse and low-rank priors, adversarial robustness, continual learning, and deep generative models, with applications to recommender systems, hyperspectral image processing, computer vision, and robust machine learning.