School of Electrical & Computer Engineering, Technical University of Crete

List of Events

Talk by Asst. Prof. A. Kyrillidis (Rice University)
Category: Talk/Lecture
Location: Λ - Sciences Building/ECE, room 145Π-58
Time: 05/07/2022, 14:30 - 16:00

Description:

Title:
A Tale of Sparsity in Deep Learning: Lottery Tickets, Subset Selection, and Efficiency in Distributed Learning

Abstract:
Neural network pruning is useful for discovering efficient, high-performing subnetworks within pre-trained, dense network architectures. Yet, more often than not, it involves a computationally expensive procedure: the dense model must be pre-trained, at least for a number of iterations/epochs, to achieve top performance. Most existing works in this area remain empirical or impractical, relying on heuristic rules about when, how, and how much one needs to pre-train in order to recover meaningful sparse subnetworks. In this talk, we will scratch the surface of the open questions in the general area of "pruning techniques for neural network training", with a special focus on what can be theoretically characterized, in order to move from heuristics to provable protocols.
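To make the pruning pipeline the abstract refers to concrete, here is a minimal, self-contained numpy sketch of magnitude pruning with lottery-ticket-style weight rewinding. It is purely illustrative: the function name, the dimensions, and the stand-in "training" step are our own assumptions, not the speaker's method.

import numpy as np

def magnitude_mask(w, sparsity):
    # Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of w.
    k = int(round(sparsity * w.size))            # number of entries to drop
    if k == 0:
        return np.ones(w.shape, dtype=bool)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.abs(w) > thresh

# Lottery-ticket-style flow: prune the trained weights, then "rewind" the
# surviving entries to their initial values before retraining the subnetwork.
rng = np.random.default_rng(0)
w_init = rng.standard_normal((256, 128))
w_trained = w_init + 0.1 * rng.standard_normal((256, 128))  # stand-in for actual training
mask = magnitude_mask(w_trained, sparsity=0.9)              # keep the top 10% by magnitude
w_ticket = np.where(mask, w_init, 0.0)                      # sparse subnetwork at initialization
print(f"surviving weights: {mask.mean():.2%}")              # ~10%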

Time permitting, the talk covers the following three themes/questions:
- Can we theoretically characterize how much SGD-based pre-training is sufficient to identify meaningful sparse subnetworks?
- Can we draw connections between pruning methods and classical sparse recovery, in order to leverage decades of theory on subset selection? (A classical example is sketched after this list.)
- From a practical standpoint, can we combine the above ideas with existing efficient distributed protocols to achieve end-to-end sparse neural network training, avoiding heavy full-model pre-training phases altogether?
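The second question points to the classical sparse recovery literature. A representative subset-selection algorithm from that line of work is iterative hard thresholding (IHT); the numpy sketch below is a standard textbook variant, included only to make the connection concrete. The problem sizes and step-size choice are our own assumptions, not material from the talk.

import numpy as np

def hard_threshold(x, k):
    # Euclidean projection onto the set of k-sparse vectors:
    # keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def iht(A, y, k, iters=500):
    # Iterative hard thresholding: projected gradient descent on
    # min ||y - A x||^2 subject to ||x||_0 <= k.
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2  # 1/L step size keeps the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x

# Recover a 5-sparse vector from 100 random linear measurements in dimension 200.
rng = np.random.default_rng(0)
n, d, k = 100, 200, 5
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(A, y, k)
print(np.linalg.norm(x_hat - x_true))  # near zero when recovery succeeds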

Bio:
Anastasios Kyrillidis is a Noah Harding Assistant Professor in the Computer Science Department at Rice University. Prior to that, he was a Goldstine postdoctoral fellow at the IBM T. J. Watson Research Center (NY) and a Simons Foundation postdoctoral member at the University of Texas at Austin. He finished his PhD at the CS Department of EPFL (Switzerland) under the supervision of Volkan Cevher. Tasos received his M.Sc. and Diploma from the Electronic and Computer Engineering Department of the Technical University of Crete (Chania). His research interests include (but are not limited to) optimization for machine learning, convex and non-convex algorithms and analysis, large-scale optimization, and any problem that involves a math-driven criterion and requires an efficient method for its solution.

https://akyrillidis.github.io
