Seminar on Theoretical Aspects of Machine Learning Algorithms
Timeline
Kickoff meeting: 20.10., 16:00, at https://tuwien.zoom.us/my/maxthiessen
Date | Deadline
20.10. 16:00 | kickoff
Nov | spotlight and abstract
Nov | bidding
Dec | progress presentation and draft report
Jan | reviewing your peers
Jan | final presentation and report
This seminar simulates a machine learning conference, where the students take on the role of authors and reviewers. It consists of multiple phases.
1. Proposal phase
Attend the mandatory first meeting on 20.10, 16:00 (https://tuwien.zoom.us/my/maxthiessen).
Option 1: our suggestions
You select two projects/papers (i.e. two bullet points) from one of the topics below. You will work with the material mentioned in the overview and the project-specific resources.
Option 2: your own projects
You choose two different project ideas of your own to work on. Each can be an existing machine learning paper/work or a creative idea of your own in the context of machine learning. Importantly, it has to be specific and well worked out.
Regardless of which option you choose, make sure you understand the fundamentals of your projects and try to answer the following questions:
- What is the problem?
- Why is it an interesting problem?
- How do you plan to approach the problem? /
How have the authors of your project approached the problem?
Select your projects and submit a short description of each, together with the answers to these questions (~3 sentences should be sufficient), in TUWEL.
We can only accept your own proposals if you can answer the questions above and have a well worked-out project idea.
2. Bidding and assignment phase
You will also act as reviewers and bid on the projects of your peers that you want to review. Based on the bids, we (in our role as conference chairs) will select one of each student's proposals as the actual project you will work on for the rest of the semester. You do not need to work on the other project anymore. Additionally, we will assign two projects from other students to you, which you will have to review later in the semester.
3. Working phase
Now the actual work starts. Gain a deep understanding of your project, write a first draft of your report, and give a 5-minute presentation. We recommend going beyond the given material.
4. Reviewing phase
You will again act as a reviewer for the conference by writing two reviews, one for each draft report assigned to you.
5. Writing phase
Based on the reviews from your peers (and our feedback), you will continue working on your project.
6. Submission phase
Give a final presentation and submit your report.
General resources (freely available books and lecture notes)
- Understanding machine learning: from theory to algorithms. Shai Shalev-Shwartz and Shai Ben-David (pdf)
- Foundations of machine learning. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar (pdf)
- Foundations of data science. Avrim Blum, John Hopcroft, and Ravindran Kannan (pdf)
- Mathematics for machine learning. Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong (pdf)
- Mining of massive datasets. Jure Leskovec, Anand Rajaraman, and Jeffrey D. Ullman (pdf)
- Reinforcement learning: an introduction. Richard Sutton and Andrew Barto (pdf)
- Research Methods in Machine Learning. Tom Dietterich (pdf)
Topics (Tentative)
You should have access to the literature and papers through Google scholar, DBLP, the provided links, or the TU library.
(Graph) Representation Learning
Overview:
- "graph representation learning" by William L. Hamilton (pdf)
- Knowledge Graph Embeddings Tutorial: From Theory to Practice, 2020 (https://kge-tutorial-ecai2020.github.io/)
Papers and projects:
- Knowledge Graph Embeddings (focus on deep learning approaches; a minimal scoring sketch follows this list)
- Q. Wang, Z. Mao, B. Wang, L. Guo. "Knowledge Graph Embedding: A Survey of Approaches and Applications", 2017
- Y. Dai, S. Wang, N. Xiong, W. Guo. "A Survey on Knowledge Graph Embedding: Approaches, Applications and Benchmarks", 2020
- M. Wang, L. Qiu, X. Wang. "A Survey on Knowledge Graph Embeddings for Link Prediction", 2021
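To make concrete what a knowledge graph embedding model actually computes, here is a minimal, illustrative sketch of TransE-style scoring (one of the classic translational models discussed in these surveys). The entities, relations, and embedding vectors below are random placeholders, not trained parameters; this is a sketch of the scoring function only, not any paper's implementation.

    import numpy as np

    # Toy knowledge graph vocabulary (illustrative only).
    entities = ["vienna", "austria", "danube"]
    relations = ["capital_of", "flows_through"]

    rng = np.random.default_rng(0)
    dim = 8
    # Randomly initialised embeddings; a real model would train these.
    ent_emb = {e: rng.normal(size=dim) for e in entities}
    rel_emb = {r: rng.normal(size=dim) for r in relations}

    def transe_score(head, relation, tail):
        """TransE plausibility score: higher is better, based on -||h + r - t||."""
        h, r, t = ent_emb[head], rel_emb[relation], ent_emb[tail]
        return -np.linalg.norm(h + r - t)

    # Rank all candidate tails for the query (vienna, capital_of, ?).
    scores = {t: transe_score("vienna", "capital_of", t) for t in entities}
    print(sorted(scores.items(), key=lambda kv: -kv[1]))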
Neurosymbolic AI / Logic & ML
Overview:
- Neurosymbolic AI: The 3rd Wave, 2020 (A. Garcez, L. Lamb)
- Neural-Symbolic Cognitive Reasoning, 2009 (A. Garcez, L. Lamb)
Papers and projects:
- find your own topic :) (a starting point can be the survey from L. De Raedt, S. Dumancic, R. Manhaeve, G. Marra. "From Statistical Relational to Neuro-Symbolic Artificial Intelligence", 2020)
- SAT solving using deep learning
- D. Selsam, M. Lamm, B. Bünz, P. Liang, D. Dill, L. de Moura. "Learning a SAT Solver from Single-Bit Supervision", 2019
- V. Kurin, S. Godil, S. Whiteson, B. Catanzaro. "Improving SAT Solver Heuristics with Graph Networks and Reinforcement Learning", 2019
- J. You, H. Wu, C. Barrett, R. Ramanujan, J. Leskovec. "G2SAT: Learning to Generate SAT Formulas", 2019
Submodularity in machine learning
Overview:
- chapters 1-3 of "Learning with submodular functions: a convex optimization perspective" by Francis Bach, 2013.
- introduction to submodularity in machine learning: Stefanie Jegelka - MLSS 2017 (youtube-link)
Papers and projects:
- submodularity in data subset selection and active learning (Wei, et al. "Submodularity in data subset selection and active learning." ICML 2015)
- robust submodular observation selection (Krause, et al. "Robust submodular observation selection." Journal of machine learning research 2008)
- submodular function maximization (Krause and Golovin. "Submodular function maximization." 2014); a greedy-algorithm sketch follows this list
- graph cuts for image segmentation (Blum and Chawla. "Learning from labeled and unlabeled data using graph mincuts." ICML 2001 and Jegelka and Bilmes. "Submodularity beyond submodular energies: coupling edges in graph cuts." CVPR 2011)
- learning submodular functions (Balcan and Harvey. "Learning submodular functions." ACM symposium on theory of computing 2011)
- batch active learning using submodular optimization (Chen and Krause. "Near-optimal batch mode active learning and adaptive submodular optimization." ICML 2013)
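As a companion to the submodular function maximization project above, here is a minimal sketch of the classic greedy algorithm for maximising a monotone submodular function under a cardinality constraint, illustrated on a toy set-coverage function. The sets and budget are made-up examples, and the sketch omits the lazy-evaluation tricks used in practice.

    # Greedy maximisation of a monotone submodular function (here: set coverage).
    # Greedy achieves a (1 - 1/e) approximation for cardinality-constrained
    # monotone submodular maximisation.

    def coverage(selected, sets):
        """f(S) = number of elements covered by the chosen sets (monotone, submodular)."""
        covered = set()
        for i in selected:
            covered |= sets[i]
        return len(covered)

    def greedy(sets, budget):
        selected = []
        for _ in range(budget):
            best, best_gain = None, 0
            for i in range(len(sets)):
                if i in selected:
                    continue
                gain = coverage(selected + [i], sets) - coverage(selected, sets)
                if gain > best_gain:
                    best, best_gain = i, gain
            if best is None:  # no remaining set adds marginal value
                break
            selected.append(best)
        return selected

    # Toy instance: pick 2 sets covering as many elements as possible.
    toy_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
    print(greedy(toy_sets, budget=2))  # [2, 0], covering 7 elements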
Clustering and dimensionality reduction
Overview:
- chapters 1 and 2 of "Dimension reduction: a guided tour" by Christopher Burges, 2010, and chapter 22 (the introduction section before 22.1 and section 22.5) of "Understanding machine learning".
- introduction and theoretical overview on clustering: Shai Ben-David Cheriton Symposium 2017 (youtube-link)
- introduction and overview on probabilistic dimensionality reduction: Neil Lawrence - MLSS 2012 (youtube-link)
Papers and projects:
- kernel PCA and multidimensional scaling (Schölkopf, et al. "Kernel principal component analysis." ICANN 1997 and Williams "On a connection between kernel PCA and metric multidimensional scaling." Machine learning 2002)
- spectral clustering (Von Luxburg. "A tutorial on spectral clustering." Statistics and computing 2007)
- (adaptive) correlation clustering (Bansal, et al. "Correlation clustering." Machine learning 2004 and Bressan, et al. "Correlation clustering with adaptive similarity queries." NeurIPS 2019)
- (approximate) k-means++ (Arthur and Vassilvitskii. "k-means++: The advantages of careful seeding." Stanford, 2006 and Bachem, et al. "Approximate k-means++ in sublinear time." AAAI 2016); a seeding sketch follows this list
- clustering under approximation stability (Balcan, et al. "Clustering under approximation stability." Journal of the ACM 2013)
- auto-encoders and generative adversarial nets (Kingma and Welling. "Auto-encoding variational Bayes." ICLR 2014 and Goodfellow, et al. "Generative adversarial nets." NIPS 2014 and Tolstikhin, et al. "Wasserstein auto-encoders." ICLR 2018)
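For the k-means++ project above, a minimal sketch of the D^2 seeding step (each new centre is sampled proportionally to its squared distance from the closest centre chosen so far). The data is random and the sketch covers only seeding, not the subsequent Lloyd iterations.

    import numpy as np

    def kmeanspp_seeds(X, k, rng):
        """k-means++ (D^2) seeding: sample each new centre with probability
        proportional to its squared distance to the nearest chosen centre."""
        n = X.shape[0]
        centres = [X[rng.integers(n)]]  # first centre uniformly at random
        for _ in range(k - 1):
            d2 = np.min(
                [np.sum((X - c) ** 2, axis=1) for c in centres], axis=0
            )  # squared distance to the nearest centre so far
            probs = d2 / d2.sum()
            centres.append(X[rng.choice(n, p=probs)])
        return np.array(centres)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))   # toy 2-D data
    print(kmeanspp_seeds(X, k=3, rng=rng))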
Theory of graph neural networks (GNNs)
Overview:
- chapters 1 and 5 of "Graph representation learning" by William L. Hamilton (pdf)
- Xu, et al. "How Powerful are Graph Neural Networks?" ICLR 2019
- introduction and overview on graph neural networks: Petar Veličković - Tensorflow Tech Talks 2021 (youtube-link)
Papers and topics:
- k-dimensional Weisfeiler Leman and GNNs (Morris, et al. "Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks." AAAI 2019 and Morris, et al. "Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings." NeurIPS 2020); a 1-WL refinement sketch follows this list
- subgraph counts and GNNs (Barceló, et al. "Graph Neural Networks with Local Graph Parameters" arXiv:2106.06707 2021 and Chen, et al. "Can Graph Neural Networks Count Substructures?" NeurIPS 2020)
- homomorphisms and GNNs (NT and Maehara "Graph Homomorphism Convolution." ICML 2020 and Dell, et al. "Lovász Meets Weisfeiler and Leman." ICALP 2018)
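Since several of these projects relate GNN expressivity to the Weisfeiler-Leman test, here is a minimal sketch of 1-dimensional WL colour refinement on an adjacency-list graph. The toy graph is illustrative; two graphs whose colour histograms differ at some iteration are certified non-isomorphic, while equal histograms are inconclusive.

    def wl_refine(adj, iterations=3):
        """1-WL colour refinement: repeatedly hash each node's colour together
        with the multiset of its neighbours' colours."""
        colours = {v: 0 for v in adj}  # uniform initial colouring
        for _ in range(iterations):
            signatures = {
                v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                for v in adj
            }
            # Re-index signatures to compact integer colours.
            palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
            colours = {v: palette[signatures[v]] for v in adj}
        return colours

    # Toy graph: a 4-cycle; all nodes keep the same colour (it is vertex-transitive).
    cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(wl_refine(cycle4))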
Equivariant neural networks
Overview:
- chapter 8 "equivariant neural networks" of "Deep learning for molecules and materials" by Andrew D. White, 2021. (pdf)
- introduction to equivariance: Taco Cohen and Risi Kondor - NeurIPS 2020 tutorial (first half) (slideslive-link)
Papers and projects:
- group equivariance (Esteves. "Theoretical aspects of group equivariant neural networks", arXiv 2020)
- equivariant CNNs on homogeneous spaces (Cohen, et al. "A general theory of equivariant CNNs on homogeneous spaces." NeurIPS 2019); a toy equivariance check follows this list
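To make the notion of equivariance concrete, here is a small, illustrative check that a circular convolution layer is equivariant to the cyclic translation group: shifting the input and then applying the layer gives the same result as applying the layer and then shifting. The signal, filter, and shift below are arbitrary toy values.

    import numpy as np

    def circular_conv(x, w):
        """Circular convolution layer: equivariant to cyclic shifts of the input."""
        return sum(w_k * np.roll(x, k) for k, w_k in enumerate(w))

    rng = np.random.default_rng(0)
    x = rng.normal(size=12)   # toy 1-D signal on the cyclic group Z_12
    w = rng.normal(size=3)    # small filter
    shift = 5

    lhs = circular_conv(np.roll(x, shift), w)   # transform, then layer
    rhs = np.roll(circular_conv(x, w), shift)   # layer, then transform
    print(np.allclose(lhs, rhs))                # True: the layer commutes with shifts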
Graph kernels
Overview:
- first 23 pages of "A survey on graph kernels" (Applied Network Science 2019) by Nils M. Kriege, et al.
- practical motivation for graph kernels in computational biology: Karsten Borgwardt - MLSS 2013 (the first 35 minutes) (youtube-link)
Papers and topics:
- hardness and expressivity (Gärtner, et al. "On graph kernels: Hardness results and efficient alternatives." COLT 2003 and Ramon and Gärtner. "Expressivity versus efficiency of graph kernels." Workshop on mining graphs, trees and sequences 2003)
- (k-dimensional) Weisfeiler-Lehman kernel (Shervashidze, et al. "Weisfeiler-Lehman graph kernels." Journal of machine learning research 2011 and Morris, et al. "Glocalized Weisfeiler-Lehman graph kernels: Global-local feature maps of graphs." ICDM 2017)
- multiple and deep graph kernel learning (Aiolli, et al. "Multiple graph-kernel learning" and Yanardag and Vishwanathan. "Deep graph kernels." SIGKDD 2015)
Kernel methods
Overview:
- chapters 1 and 2 of "Learning with kernels" by Bernhard Schölkopf and Alex Smola, 2002 (pdf)
- introduction to kernels: Bernhard Schölkopf - MLSS 2013 (youtube-link)
Papers and projects:
- Nyström method (Drineas and Mahoney. "On the Nyström method for approximating a Gram matrix for improved kernel-based learning." Journal of machine learning research 2005 and Kumar, et al. "Sampling methods for the Nyström method." Journal of machine learning research 2012)
- Nyström method with kernel k-means++ samples as landmarks (Drineas and Mahoney. "On the Nyström method for approximating a Gram matrix for improved kernel-based learning." Journal of machine learning research 2005 and Oglic and Gärtner. "Nyström method with kernel k-means++ samples as landmarks." ICML 2017)
- random features (Rahimi and Recht. "Random features for large-scale kernel machines." NIPS 2007 and Le, et al. "Fastfood: approximate kernel expansions in loglinear time." ICML 2013); a random Fourier features sketch follows this list
- neural tangent kernel (Jacot, et al. "Neural tangent kernel: convergence and generalization in neural networks." NIPS 2018)
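For the random features project, a minimal sketch of random Fourier features approximating the Gaussian (RBF) kernel, in the spirit of Rahimi and Recht. The bandwidth, feature dimension, and data are illustrative choices, and the comparison against the exact Gram matrix is only a sanity check.

    import numpy as np

    def rff(X, n_features, sigma, rng):
        """Random Fourier features z(x) with E[z(x)^T z(y)] ~ exp(-||x-y||^2 / (2 sigma^2))."""
        d = X.shape[1]
        W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
        b = rng.uniform(0, 2 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))
    Z = rff(X, n_features=2000, sigma=1.0, rng=rng)

    exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)  # RBF Gram matrix
    approx = Z @ Z.T
    print(np.abs(exact - approx).max())  # small approximation error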
Causal inference
Overview:
- chapters 1 to 3 of "Elements of causal inference" by Jonas Peters, Dominik Janzing, and Bernhard Schölkopf, 2017 (pdf)
- introduction to causal inference: Bernhard Schölkopf - MLSS 2020 (youtube-link)
Papers and projects:
- transfer learning (Rojas-Carulla, et al. "Invariant models for causal transfer learning." Journal of machine learning research 2019)
- causality and semi-supervised learning (chapter 5 of "Elements of causal inference" and Schölkopf, et al. "On causal and anticausal learning." ICML 2012)
Semi-supervised learning
Overview:
- first chapter/introduction of "Semi-supervised learning" (SSL) by Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, 2006 (pdf)
- introduction to semi-supervised learning: Tom Mitchell - Carnegie Mellon University 2011 (youtube-link)
Papers and projects:
- transductive support vector machines (chapter 6 in SSL by Thorsten Joachims)
- large-margin semi-supervised learning (Wang, et al. "On efficient large margin semisupervised learning: method and theory." Journal of machine learning research 2009)
- label propagation and quadratic criterion (chapter 11 in SSL by Yoshua Bengio, Olivier Delalleau and Nicolas Le Roux)
- PAC model for semi-supervised learning (chapter 22 of SSL by Maria-Florina Balcan and Avrim Blum)
- generalization error bounds (Rigollet. "Generalization error bounds in semi-supervised classification under the cluster assumption." Journal of machine learning research 2007)
- regularization and semi-supervised learning on graphs (Belkin, et al. "Regularization and semi-supervised learning on large graphs." COLT 2004)
- manifold regularization (Belkin, et al. "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples." Journal of machine learning research 2006)
- label propagation (Zhu, et al. "Semi-supervised learning using Gaussian fields and harmonic functions." ICML 2003 and Zhou, et al. "Learning with local and global consistency." NIPS 2004); a label propagation sketch follows this list
- normalized cuts (Shi and Malik "Normalized cuts and image segmentation." IEEE TPAMI Journal 2000 and Joachims "Transductive learning via spectral graph partitioning." AAAI 2003)
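For the label propagation project, a minimal sketch in the harmonic-function style of Zhu et al.: iteratively average neighbours' label scores over the graph while clamping the labelled nodes. The tiny chain graph and labels are made up for illustration.

    import numpy as np

    def label_propagation(W, labels, n_iter=100):
        """Harmonic-function style label propagation.
        W: symmetric weight matrix; labels: +1/-1 for labelled nodes, 0 for unlabelled."""
        f = labels.astype(float).copy()
        labelled = labels != 0
        P = W / W.sum(axis=1, keepdims=True)   # row-normalised transition matrix
        for _ in range(n_iter):
            f = P @ f                          # average the neighbours' current scores
            f[labelled] = labels[labelled]     # clamp the labelled nodes
        return f

    # Toy chain graph 0-1-2-3-4 with node 0 labelled +1 and node 4 labelled -1.
    W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
    labels = np.array([1, 0, 0, 0, -1])
    print(np.round(label_propagation(W, labels), 2))  # approx. [1, 0.5, 0, -0.5, -1]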
Active learning
Overview:
- chapter 1 "Automating inquiry" of Burr Settles' "Active learning" book, 2012.
- introduction and recent research: Rob Nowak and Steve Hanneke - ICML 2019 tutorial (youtube-link)
Papers and projects:
- active learning with graph cuts (Blum and Chawla. "Learning from labeled and unlabeled data using graph mincuts." ICML 2001 and Guillory and Bilmes. "Label selection on graphs." NIPS 2009)
- agnostic/noisy active learning (Balcan, et al. "Agnostic active learning." Journal of computer and system sciences 2009 and Beygelzimer, et al. "Importance weighted active learning.")
- active nearest-neighbour learning (Kontorovich, et al. "Active nearest-neighbor learning in metric spaces." Journal of machine learning research 2017)
- active learning on trees and graphs (Cesa-Bianchi, et al. "Active learning on trees and graphs", COLT 2010)
- shortest-path-based active learning (Dasarathy, et al. "S2: an efficient graph based active learning algorithm with application to nonparametric classification." COLT 2015)
Online learning
Overview:
- chapter 1 of "A modern introduction to online learning" by Francesco Orabona, 2020.
- introduction to online learning (iterative learning / streaming settings): Nicolò Cesa-Bianchi - Mediterranean Machine Learning school 2021 (youtube-link)
Papers and projects:
- weighted majority and Littlestone dimension (Littlestone and Warmuth. "The weighted majority algorithm." Information and computation 1994 and Littlestone. "Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm." Machine Learning 1988); a weighted majority sketch follows at the end of this list
- online (sub-)gradient descent (chapters 2-4 of "A modern introduction to online learning", Francesco Orabona, 2020)
- bandits and expert advice (introduction and chapters 1, 5, and 6 of "Introduction to multi-armed bandits", Aleksandrs Slivkins, 2019)
- (online) learning with partial orders (Gärtner and Garriga. "The cost of learning directed cuts." ECML 2007 and Missura and Gärtner. "Predicting dynamic difficulty." NIPS 2011)
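For the weighted majority project, a minimal sketch of the algorithm: each expert keeps a weight, the learner follows the weighted majority vote, and every expert that errs has its weight multiplied by (1 - eta). The experts and outcome sequence below are random toy stand-ins, with one deliberately perfect expert.

    import numpy as np

    def weighted_majority(expert_preds, outcomes, eta=0.5):
        """Weighted majority over binary (0/1) expert predictions.
        expert_preds: (T, n_experts) array; outcomes: length-T array of true labels."""
        weights = np.ones(expert_preds.shape[1])
        mistakes = 0
        for preds, y in zip(expert_preds, outcomes):
            vote_1 = weights[preds == 1].sum()
            vote_0 = weights[preds == 0].sum()
            prediction = 1 if vote_1 >= vote_0 else 0
            mistakes += int(prediction != y)
            weights[preds != y] *= (1 - eta)   # penalise experts that were wrong
        return mistakes, weights

    rng = np.random.default_rng(0)
    T, n_experts = 100, 5
    outcomes = rng.integers(0, 2, size=T)
    expert_preds = rng.integers(0, 2, size=(T, n_experts))
    expert_preds[:, 0] = outcomes              # expert 0 is perfect
    print(weighted_majority(expert_preds, outcomes))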