In this course, you will experience the role of a typical machine learning researcher. You will:
There are three types of projects:
Working on the project consists of the following steps:
Choose a project topic or come up with your own, original idea. Then, write a short problem statement, including your research question, motivation, related work, methodology, and expected results.
Work on your project and present your work to colleagues and our group. Additionally, you will have discussion meetings with your supervisor if needed.
Design a poster and write a short final report to present the results of your work. Your poster will be exhibited to your colleagues and our group during a poster session (snacks included, if Covid allows it!) at the end of the semester.
We are happy to supervise machine learning related projects that are connected to our research interests. Examples are:
Type: Develop and validate a deep learning model.
Recommended Skills: Experience with machine learning using Python. Ideally: experience with implementing and training neural networks with a deep learning framework (like PyTorch).
Task: Develop and validate a deep learning model to automatically detect the intraocular lens (IOL) axis in toric IOLs.
Context: In cataract surgery, patients with higher astigmatism require toric IOL implantation to improve vision-related quality of life (spectacle independence). Rotational stability of the IOL is crucial. Thus, we want to develop a deep learning model that automatically detects IOL rotation in clinically recorded images, in order to validate the safety and efficacy of new toric IOLs.
Suggested approach: Implement a deep learning model and evaluate its performance with the standardized and well-tried method of Schartmüller et al. (see related work). The work will be conducted in adherence to the tenets of the Declaration of Helsinki and with a positive ethics approval.
Related work: Schartmüller et al., Rotational Stability of Intraocular Lenses: A Standardized Method for More Accurate Measurements in Future Studies. Am J Ophthalmol. 2021 Nov;231:200-207. doi: 10.1016/j.ajo.2021.06.006. Epub 2021 Jun 8. PMID: 34116009.
Contact person:
Marcus Lisy
Department of Ophthalmology and Optometry
Medical University of Vienna
marcus.lisy@meduniwien.ac.at
Tel.: 01/40400-79450
Type: Implementing and experimenting with CNNs / transformers expressed as GNNs
Recommended Skills: Understanding graphs. Experience with machine learning using Python. Ideally: experience with a deep learning framework (PyTorch, TensorFlow, …).
Task: Train a GNN on the same task as a CNN or transformer. Investigate differences in model performance.
Context: Many data structures such as images or text can be interpreted as graphs. Thus, it is possible to use GNNs in place of CNNs (for images) or transformers (for text). For a CNN, images are transformed into grid graphs; for a transformer, text is transformed into complete graphs (cliques). In this project you will investigate whether this theoretical connection between CNNs/transformers and GNNs actually holds in practice.
Suggested approach: Select either a CNN or a transformer as target architecture. Pick some target dataset / tasks and train your target architecture on those tasks. Then, implement the given dataset as a graph dataset and train a GNN on it. Compare the results.
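To make the grid-graph construction concrete, here is a minimal sketch (assuming PyTorch and PyTorch Geometric are installed; the 28x28 resolution is a placeholder) of representing an image as a graph sample:

```python
# Sketch: one node per pixel, pixel intensity as node feature, edges from the
# built-in grid-graph helper of PyTorch Geometric.
import torch
from torch_geometric.data import Data
from torch_geometric.utils import grid

def image_to_grid_graph(img: torch.Tensor) -> Data:
    """img: (H, W) tensor of pixel intensities."""
    h, w = img.shape
    edge_index, pos = grid(height=h, width=w)  # edges of an h x w grid graph
    x = img.reshape(-1, 1).float()             # node features: intensities
    return Data(x=x, edge_index=edge_index, pos=pos)

data = image_to_grid_graph(torch.rand(28, 28))
print(data)  # Data(x=[784, 1], edge_index=[2, ...], pos=[784, 2])
```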
Related work:
Advisor: Fabian
Type: Implementing and training GNNs with PyTorch (Geometric)
Recommended Skills: (Some) experience with machine learning in Python.
Task: Train a GNN using contrastive learning based on a suitable distance measure.
Context: The expressivity of GNNs is usually described in terms of graph isomorphism testing, i.e., whether GNNs are able to map isomorphic graphs to the same point in the embedding space and non-isomorphic graphs to different points. Formally, they are at most as powerful at distinguishing non-isomorphic graphs as the Weisfeiler-Leman graph isomorphism test (WL). However, for some real-world graph datasets, we might actually be interested in a slightly different inductive bias: similar graphs should be embedded close to each other, while dissimilar graphs should be embedded distant from each other in the embedding space. One possible way to achieve these kinds of embeddings is through contrastive learning, where learning is done by comparing positive (i.e., similar) and negative (i.e., dissimilar) pairs of graphs.
Suggested approach: Choose a GNN architecture and a suitable distance measure for graphs. Choose 2-3 datasets and train your GNN using contrastive learning to embed similar graphs close to each other, and dissimilar graphs more distant from each other in the embedding space. Next, you could (1) use these embeddings as a basis for a classification task, or (2) investigate how well your chosen distance measure aligns with graph isomorphism testing, i.e., are isomorphic graphs embedded close to each other, while non-isomorphic graphs are never mapped to the same point?
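As a starting point, a margin-based contrastive loss on pairs of graph-level embeddings could look like the following sketch (the encoder and the graph distance measure deciding "similar" are placeholders for your own choices):

```python
# Sketch: pull embeddings of similar graphs together, push dissimilar ones
# apart; z1, z2 are graph-level embeddings, e.g. global_mean_pool over a GCN.
import torch.nn.functional as F

def contrastive_loss(z1, z2, similar: bool, margin: float = 1.0):
    d = F.pairwise_distance(z1, z2)
    if similar:
        return (d ** 2).mean()               # pull similar graphs together
    return (F.relu(margin - d) ** 2).mean()  # push dissimilar graphs apart
```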
Related work: ???
Advisor: Tamara and Pascal
Question: Do node-labelled real-world graphs have (almost) geodesically convex classes?
Suggested approach: Check for node-labelled graphs in benchmark datasets (e.g., http://snap.stanford.edu/data/) whether the classes (vertex sets sharing the same node label) are convex. Try to find a more realistic notion of convexity which, for example, allows outliers and disconnected regions.
Related work:
Context: While classical notions of convexity in Euclidean space are widely studied and used by the machine learning community, related notions in discrete spaces (e.g., graphs) have been mostly overlooked. A vertex set S in a graph is geodesically convex if all shortest paths joining two vertices in S stay in S. Recently, we (and other groups) have started to use the notion of convexity in graphs to achieve guarantees for machine learning problems on graphs. Currently, these results are mainly of theoretical interest, but the next step is to evaluate and test the developed methods on real-world graphs. For that, we have to find reasonable datasets and application areas that fit our convexity-based assumptions. Your goal is to identify graphs where the labelled subgraphs are (close to) convex.
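A direct convexity check is straightforward to sketch with networkx (assuming the graphs fit in memory): a set S is geodesically convex iff no outside vertex lies on a shortest path between two vertices of S.

```python
# Sketch: w lies on some shortest u-v path iff d(u,w) + d(w,v) == d(u,v).
import itertools
import networkx as nx

def is_geodesically_convex(G: nx.Graph, S: set) -> bool:
    # BFS distances from every vertex of S (unweighted shortest paths)
    dist = {s: nx.single_source_shortest_path_length(G, s) for s in S}
    outside = set(G.nodes) - S
    for u, v in itertools.combinations(S, 2):
        if v not in dist[u]:
            continue  # u, v in different components: no shortest path to violate
        d_uv = dist[u][v]
        for w in outside:
            if dist[u].get(w, float("inf")) + dist[v].get(w, float("inf")) == d_uv:
                return False  # w lies on a shortest u-v path outside S
    return True
```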
Advisor: Max
Type: Mostly implementing and training GNNs with PyTorch (Geometric), some going through related work and collecting interesting generalization bounds
Recommended Skills: (Some) experience with machine learning in Python. Familiarity with mathematical proofs is helpful to understand the related work, but not strictly required for this project
Question: How (empirically) tight are generalization bounds for graph neural networks?
Context: In machine learning theory, we are interested in giving formal guarantees about the behavior of machine learning algorithms. One type of guarantee concerns generalization, where we may want to provide an (upper) bound on the generalization error, i.e., on how well a model generalizes to yet unseen data. Recently, there have been a number of theoretical contributions proposing generalization bounds for graph neural networks. It would be interesting to assess (1) how practically useful they are, and (2) how they compare to each other.
Suggested approach: Consult related work and choose two to three different generalization bounds for graph neural networks (this can be done in discussion with your supervisor). Assess them empirically on a number of graph benchmark datasets for at least one GNN architecture.
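One ingredient you will need in any case is the empirical generalization gap to compare the bounds against; a minimal sketch (the model signature and the data loaders are placeholders):

```python
# Sketch: measure the empirical generalization gap that a bound should dominate.
# Assumes PyTorch Geometric style batches and model(x, edge_index, batch).
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    for batch in loader:
        batch = batch.to(device)
        pred = model(batch.x, batch.edge_index, batch.batch).argmax(dim=-1)
        correct += int((pred == batch.y).sum())
        total += batch.y.size(0)
    return correct / total

# empirical_gap = accuracy(model, train_loader) - accuracy(model, test_loader)
# A bound is "empirically tight" if its value is close to this gap.
```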
Related work:
Advisor: Tamara
Type: Implementation and training of GNNs with PyTorch.
Question: Is there a privacy risk of data leakage from information contained in gradients computed during the training of a GNN?
Suggested Approach: Definition and evaluation of an attack strategy using information contained in the gradients computed during training and/or inference, adapting the approach of Zhu et al. (2019) to the graph setting.
Related Work:
Context: Information contained in the gradients computed to train a neural network can be used to infer private information on training data (Zhu et al., 2019). This risk is under-investigated in the GNN context (Zhang et al., 2022), where focus is usually put on the privacy risks of the final, trained model. It is thus interesting to investigate gradient inversion attacks (Yin et al., 2021) for GNN models and evaluate their efficacy and the possible protections offered by, e.g., differential privacy.
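For intuition, below is a minimal sketch of the gradient-matching attack of Zhu et al. (2019) for a generic PyTorch model; loss_fn is assumed to accept soft labels, all shapes are placeholders, and adapting dummy_x to graph-structured inputs is precisely the open part of this project.

```python
# Sketch of "deep leakage from gradients": optimize dummy data so that its
# gradients match the gradients observed from a victim's training step.
import torch

def gradient_inversion(model, loss_fn, true_grads, x_shape, n_classes, steps=30):
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        loss = loss_fn(model(dummy_x), dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # distance between dummy gradients and the observed ones
        diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```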
Advisor: Patrick
Type: Implementation and training of GNNs with PyTorch.
Question: Are GNNs which are robust against adversarial attacks less private? Are private GNNs less robust?
Suggested Approach: Implementation/adaptation of robust GNNs and evaluation of the privacy/utility trade-off on benchmark datasets against a number of privacy attacks. A possible focus can be on private/adversarial perturbations on the edges of the graphs: how does the accuracy of an adversarially attacked GNN compare to that of a private one?
Related Work:
Context: A tension between differential privacy and robustness (against adversarial examples) has been empirically observed [Song et al., 2019] and theoretically shown [Ghazi et al., 2021]. Do these results extend to structured data? Focusing on a centralised setting and possibly on perturbations of the graph structure: are more robust algorithms less private (or vice versa)?
Advisor: Patrick
Type: Implement and experiment with GNNs
Recommended Skills: Experience with deep learning using PyTorch
Task: Testing zero-one laws of graph neural networks on distributions beyond Erdős–Rényi
Context: Zero-one laws state that the probability of any first-order logic sentence holding on a random graph approaches zero or one as larger and larger graphs are sampled [1]. Recent results [2] have shown that GNNs also exhibit zero-one laws on Erdős–Rényi random graphs. In this project, our goal is to empirically test this behavior of GNNs beyond the Erdős–Rényi random graph model. We will be interested in checking this behavior on graphons — a very general random graph model — and on random graph models where the edge probability depends on the number of nodes in the graph. In particular, we would be interested in testing distributions where zero-one laws for first-order logic do not hold.
Suggested approach: Create a synthetic dataset by randomly generating graphs w.r.t. the above-mentioned distributions, and label the graphs with a first-order logic (FOL) rule of your choice — 1 if the FOL rule holds on the graph and 0 otherwise. Train a GNN on those graphs. Then, check the probability of a randomly generated graph being classified as 1 as the graph size increases, both with the GNN and with the first-order logic formula.
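A possible starting point for the data generation (networkx assumed; the FO rule "G contains a triangle" is just a stand-in for your chosen rule):

```python
# Sketch: sample Erdős–Rényi graphs of growing size and label each with a
# fixed first-order property (here: existence of a triangle).
import networkx as nx

def sample_labeled_graphs(sizes, p=0.5, per_size=100, seed=0):
    dataset = []
    for i, n in enumerate(sizes):
        for j in range(per_size):
            G = nx.gnp_random_graph(n, p, seed=seed + i * per_size + j)
            label = int(any(nx.triangles(G).values()))  # FO rule placeholder
            dataset.append((G, label))
    return dataset

# For graphon-style models, replace gnp_random_graph by your own sampler, where
# the edge probability may depend on n, e.g. p = 1 / n**0.5.
```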
Related work:
[1] Spencer. Strange Logic of Random Graphs. Springer-Verlag 2001
[2] Adam-Day et al. Zero-One Laws of Graph Neural Networks. NeurIPS 2023
Advisor: Sagar Malhotra
Type: Understanding a theory focused paper and implementing a graph algorithm
Recommended Skills: A good understanding of graph algorithms and some experience with theoretical computer science. Experience with implementing graph algorithms.
Task: Implement an algorithm to find pairs of non-isomorphic graphs that cannot be distinguished by message passing graph neural networks or the Weisfeiler-Leman graph isomorphism test
Context: It is known that message passing graph neural networks (MPNNs) have limitations in the kind of functions they can express. Formally, they are at most as powerful at distinguishing non-isomorphic graphs as the Weisfeiler-Leman graph isomorphism test (WL). Thus, any pair of graphs that cannot be distinguished by WL cannot be distinguished by any MPNN, i.e., any MPNN will return the same output for the two graphs. Arvind et al. propose an efficient algorithm to determine whether a given graph can be distinguished from all other graphs by WL; such graphs are called amenable. We propose using this algorithm to generate a dataset of pairs of graphs that are indistinguishable by WL and MPNNs. This dataset could serve as a benchmark for evaluating graph neural networks that are more expressive than MPNNs in distinguishing graphs.
Suggested approach: Implement the algorithm from Corollary 11 of Arvind et al. This algorithm allows us to search for non-amenable graphs, i.e., graphs for which a non-isomorphic graph exists that cannot be distinguished via WL. Implement a brute-force algorithm to find this non-isomorphic graph or (optionally) design a more efficient algorithm to find these graphs.
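For reference, a compact sketch of the 1-WL color refinement test itself (networkx assumed), which both the brute-force search and the benchmark construction rely on:

```python
# Sketch: iterative color refinement; returns True if 1-WL CANNOT tell the two
# graphs apart (which does not imply they are isomorphic).
import networkx as nx

def wl_colors(G: nx.Graph, rounds: int):
    colors = {v: 0 for v in G}
    for _ in range(rounds):
        # new color = hash of own color plus sorted multiset of neighbor colors
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in G[v]))))
                  for v in G}
    return sorted(colors.values())

def wl_indistinguishable(G1: nx.Graph, G2: nx.Graph) -> bool:
    rounds = max(len(G1), len(G2))  # n rounds always suffice for stabilization
    return wl_colors(G1, rounds) == wl_colors(G2, rounds)
```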
Related work:
Advisor: Fabian
Type: Implementation and training of GNNs with PyTorch.
Question: What is the privacy protection performance of differentially private GNNs against various privacy attacks?
Suggested Approach: Implementation/adaptation of existing differentially private GNNs and evaluation of the privacy/utility trade-off on benchmark datasets against a number of privacy attacks.
Related Work:
Context: Similarly to deep learning algorithms trained on images and text, GNNs rely on large amounts of (possibly sensitive) data. Differential privacy is one of the possible approaches to protect sensitive data, which is of particular importance when the models are trained in a centralised way. In particular, there exist differentially private techniques that aim at preserving the privacy of features, labels, or edges of the graph, mainly focusing on protection against membership inference attacks. Nevertheless, privacy protection performance against various other privacy attacks is under-investigated.
Advisor: Patrick
Type: Coding and evaluation in Python, likely using sklearn [4] and grakel [5].
Question: How well do SVMs and graph kernels still do on large-scale problems?
Context: Recent large-scale graph-level learning problems give you a few million training graphs and a learning problem such as molecule property prediction. In recent years, such tasks are typically solved by large (message passing) graph neural networks (GNNs) with millions of parameters [1]; most top-scoring competitors in recent challenges are in fact ensembles of GNNs. These methods require expensive hardware (e.g., 4 NVIDIA A100 GPUs at 10k€ apiece) to train in reasonable time.
One argument that is often mentioned as a drawback of SVM/SVR learning is its typical at-least-quadratic scaling behavior with the number of training examples. In particular, kernel methods are assumed to require a full Gram matrix of the input data, which is infeasible for datasets with millions of graphs.
However, multiple graph kernels capturing different graph properties have been proposed and shown to achieve good predictive performance on small to medium-sized graph datasets; see, e.g., [2, 3].
Suggested approach: If ensembling of models is an integral part of recent practical approaches using GNNs, why don’t we allow it for kernel methods as well?
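A minimal baseline to start from might look like the sketch below (grakel and scikit-learn assumed; MUTAG is just a small stand-in for a large-scale dataset, and the kernel parameters are placeholders):

```python
# Sketch: Weisfeiler-Lehman subtree kernel + SVM with a precomputed Gram matrix.
from grakel.datasets import fetch_dataset
from grakel.kernels import WeisfeilerLehman, VertexHistogram
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

data = fetch_dataset("MUTAG", verbose=False)
G, y = data.data, data.target
G_train, G_test, y_train, y_test = train_test_split(G, y, random_state=0)

gk = WeisfeilerLehman(n_iter=4, base_graph_kernel=VertexHistogram, normalize=True)
K_train = gk.fit_transform(G_train)   # (n_train, n_train) Gram matrix
K_test = gk.transform(G_test)         # (n_test, n_train)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print("test accuracy:", clf.score(K_test, y_test))
# For millions of graphs, the full Gram matrix is infeasible; one option is
# sklearn.kernel_approximation.Nystroem on explicit WL feature vectors.
```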
Related work:
[1] https://ogb.stanford.edu/docs/lsc/pcqm4mv2
[2] Nils M. Kriege, Fredrik D. Johansson, Christopher Morris: A survey on graph kernels. Appl. Netw. Sci. 5(1): 6 (2020)
[3] Karsten M. Borgwardt, M. Elisabetta Ghisu, Felipe Llinares-López, Leslie O’Bray, Bastian Rieck: Graph Kernels: State-of-the-Art and Future Challenges. Found. Trends Mach. Learn. 13(5-6) (2020)
[4] https://scikit-learn.org/
[5] https://ysig.github.io/GraKeL/
Advisor: Pascal
Type: Implement and train deep neural networks, use off-the-shelf logical rule learners
Recommended Skills: Experience with deep learning using PyTorch
Task: Learn explainable logical rules that approximate the function computed by a GNN
Context: The expressivity of message passing graph neural networks (GNNs) is known to be bounded above by the one-dimensional Weisfeiler-Leman test (1-WL) [1] — an algorithm that iteratively colors graph nodes with the goal of producing different colorings for two non-isomorphic graphs (although it does not always succeed). Hence, if two graphs have the same WL coloring, then a GNN will assign them the same label. This implies that whatever function a GNN learns can be expressed as a function on 1-WL colors. This equivalence extends further to the fragment of first-order logic with two variables and counting quantifiers (C2): two graphs that agree on all C2 properties have the same WL colors and are hence assigned the same label by a GNN. The research question in this project is: can we learn/approximate the function represented by a GNN in an explainable logical language? If successful, such an algorithm could provide explanations for a GNN's predictions and potentially serve as an explainable alternative to the GNN itself.
Suggested approach: Create a synthetic dataset by fixing a logical rule a priori and labelling randomly generated graphs with that rule. Train a GNN on those graphs. Then run WL on the generated graphs, which yields a multiset of colors for each graph; use the GNN-predicted graph label as the label for the corresponding multiset of WL colors. Off-the-shelf logical rule learners (like [2]) can then learn succinct rules expressing the association between WL colors (as propositions) and GNN labels. Finally, check whether the learned logical rule is equivalent to the original rule.
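To obtain the propositional features, one could use networkx's built-in WL subgraph hashes (networkx >= 2.6 assumed); this is only a sketch, and the exact featurization is a design choice:

```python
# Sketch: one binary proposition per WL color ("some node in G has this color"),
# suitable as input for an off-the-shelf rule learner.
import networkx as nx

def wl_propositions(graphs, iterations=3):
    all_colors, per_graph = set(), []
    for G in graphs:
        hashes = nx.weisfeiler_lehman_subgraph_hashes(G, iterations=iterations)
        colors = {c for node_hashes in hashes.values() for c in node_hashes}
        per_graph.append(colors)
        all_colors |= colors
    vocab = sorted(all_colors)
    return [[int(c in colors) for c in vocab] for colors in per_graph], vocab
```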
Related work:
[1] Martin Grohe. The Logic of Graph Neural Networks. LICS 2022
[2] Ghosh et al. Efficient Learning of Interpretable Classification Rules. JAIR 2022
[3] Azzolin et al. Global Explainability of GNNs via Logic Combination of Learned Concepts. ICLR 2023
Advisor: Sagar and Pascal
Type: Research methods for debiasing (graph-based) models, implement and train GNNs, apply and evaluate debiasing methods
Recommended Skills: Experience with deep learning framework such as PyTorch (Geometric)
Task: Conduct initial research on debiasing methods for deep neural networks (and graph-based models) and experiment with translating them to graph-based models.
Context: Bias in ML models is an important focus of ML research, which tries to identify and reduce implicit biases such as gender or racial bias. While there has been a plethora of different algorithms and methods to debias deep neural networks (e.g., debiasing recommender systems or large language models), there is still plenty of room to explore, especially for graph-based models such as GNNs. With the rising popularity of GNNs for a variety of different tasks, we want to investigate methods suitable for debiasing those models.
Suggested approach: Find suitable data sets, i.e., data sets with additional meta information such as gender or age that can be represented as a graph (e.g., knowledge graphs of recommendation tasks such as MovieLens or LFM2b); implement and train a GNN; evaluate task performance and bias-related metrics; apply debiasing methods (e.g., adversarial training) and re-evaluate.
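One common debiasing baseline to translate to GNNs is adversarial training with a gradient reversal layer; a minimal PyTorch sketch (the embedding dimension and the adversary architecture are placeholder choices):

```python
# Sketch: the adversary tries to predict the sensitive attribute from the
# embedding, while gradient reversal trains the encoder to prevent exactly that.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # flip the gradient for the encoder

adversary = nn.Linear(64, 2)  # predicts e.g. gender from a 64-dim embedding

def debias_loss(embedding, sensitive, lam=1.0):
    logits = adversary(GradReverse.apply(embedding, lam))
    return nn.functional.cross_entropy(logits, sensitive)

# total_loss = task_loss + debias_loss(embedding, sensitive_attr)
```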
Related work:
Advisor: David
Type: Implement and train deep neural networks
Recommended Skills: Experience with deep learning using PyTorch
Task: Extend research on debiasing recommender systems (see Ganhör et al. below) by experimenting with the use of Information Theory
Context: Bias in ML models is an important focus of ML research, which tries to identify and reduce implicit biases such as gender or racial bias. While there has been a plethora of different algorithms and methods to debias deep neural networks (e.g., debiasing recommender systems or large language models), many of those algorithms require changes and/or additions to the architecture. Using Fisher/mutual information as a regularization term, we hope for a method that is easy to tune and does not require changes to the existing architecture.
Suggested approach: Implement recommender systems based on VAE architectures on different data sets that include additional meta information such as gender (e.g., LFM2b, MovieLens); implement and adapt an approximation algorithm for Fisher/mutual information (see related work); train and evaluate the influence of the regularization term.
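As one possible instantiation of the mutual-information term, a MINE-style (Donsker-Varadhan) lower-bound estimator could be sketched as follows; all names are placeholders, and a is the sensitive attribute as a float tensor (e.g., one-hot gender):

```python
# Sketch: neural estimator of a lower bound on I(z; a), usable as a penalty.
import torch
from torch import nn

class MINE(nn.Module):
    def __init__(self, z_dim, a_dim, hidden=64):
        super().__init__()
        self.T = nn.Sequential(nn.Linear(z_dim + a_dim, hidden),
                               nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, z, a):
        joint = self.T(torch.cat([z, a], dim=1)).mean()
        a_shuffled = a[torch.randperm(a.size(0))]        # marginal samples
        marginal = self.T(torch.cat([z, a_shuffled], dim=1)).exp().mean().log()
        return joint - marginal                          # MI lower bound

# loss = vae_loss + beta * mine(z, a)  # penalize information about a in z
```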
Related work:
Advisor: David
Type: Implement and train deep neural networks
Recommended Skills: Experience with deep learning using PyTorch
Task: Extend research on debiasing recommender systems (see Ganhör et al. below) by experimenting with the use of Siamese Neural Networks
Context: Bias in ML models is an important focus of ML research, which tries to identify and reduce implicit biases such as gender or racial bias. Recent research has shown good results using a variety of different approaches (e.g., adversarial training). However, many of those approaches are difficult to fine-tune and come with a loss in performance. With a simpler training loop based on Siamese neural networks, we hope for a method that is easy to tune and preserves the model's performance.
Suggested approach: Implement recommender systems based on VAE architectures on different data sets that include additional meta information such as gender (e.g., LFM2b, MovieLens), and extend the implementation to a Siamese neural network architecture.
Related work:
Advisor: David
Type: Implement and train different algorithms designed to compute disentangled representations (PyTorch)
Recommended Skills: Experience with deep learning using PyTorch
Task: Implement different state-of-the-art algorithms for computing disentangled representations and benchmark them on a variety of different data sets, preferably (including) data sets outside the computer vision domain.
Context: Many state-of-the-art deep learning models in various domains, such as language models and recommender systems, thrive on semantically rich hidden representations. Those representations, however, are most of the time highly entangled, i.e., a change in a single factor such as gender, colour, shape, or age will change all the dimensions of the representation. Disentangling individual dimensions or vectors from each other might offer benefits such as better explainability of the predictions, new insights into what the network is learning, and controllability over what information flows through the network.
Suggested approach: Find suitable data sets, i.e., data sets that come with additional meta information such as gender and age (e.g., LFM2b, MovieLens, BIOS); implement and train a variety of different algorithms suitable for the given task (e.g., beta-VAE, FactorVAE); evaluate task performance and disentanglement metrics.
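For orientation, the beta-VAE objective differs from a plain VAE only by the beta factor on the KL term; a minimal sketch (encoder/decoder omitted, reconstruction loss is a placeholder choice):

```python
# Sketch: beta-VAE loss; beta > 1 pressures the latents toward disentanglement.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```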
Related work:
Advisor: David
Type: Experimental
Recommended Skills: Some programming experience, ideally some experience with standard machine learning libraries such as PyTorch.
Task: Reproduce the results of a machine learning paper.
Context: Reproducibility, i.e., obtaining similar output using the same data, is a critical quality of research. Unfortunately, many published results turn out to be hard to reproduce.
Suggested approach: Choose a paper from one of the following venues: NeurIPS, ICML, ICLR, ECML, JMLR, MLJ. Please do not just re-use existing code! (E.g., re-implement the method in a different programming language/library, or implement methods from multiple papers to perform a comparison.)
Related work:
Advisor: Depends on paper
Question: How can we find robust hypotheses that attackers cannot easily manipulate?
Suggested Approach: Approximate Tukey center of hypotheses via iterated Radon points.
Related Works:
Context: The more (distributed) machine learning algorithms and/or the models learned by them are to be applied in critical applications where lives depend on predictions, the more trustworthy they need to be. For that, (distributed) learning algorithms need to be robust to manipulation of (training) data and communication channels. They also need to guarantee that their models have well-calibrated confidence. The Tukey center is a high-dimensional generalisation of the median and shares similar stability properties that will be exploited in this project.
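The basic building block, computing a single Radon point of d+2 points in R^d, can be sketched with numpy as follows; iterating this step over batches of hypotheses (here, parameter vectors) is what approximates the Tukey center:

```python
# Sketch: a Radon point via the null space of the affine-dependence system
# sum_i a_i x_i = 0, sum_i a_i = 0.
import numpy as np

def radon_point(points: np.ndarray) -> np.ndarray:
    """points: (d+2, d) array of d+2 points in R^d."""
    n, d = points.shape
    assert n == d + 2
    A = np.vstack([points.T, np.ones(n)])  # (d+1, d+2), rank-deficient
    _, _, vh = np.linalg.svd(A)
    a = vh[-1]                             # nontrivial null-space vector
    pos = a > 0                            # sign partition of the points
    return (a[pos] @ points[pos]) / a[pos].sum()
```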
Question: How can constraints be incorporated into meaningful low-dimensional embeddings of large data sets?
Suggested Approach: Approximate knowledge-constrained PCA via the power method.
Related Works:
Context: Traditionally, embedding techniques focus on finding one fixed embedding, which depends on some technical parameters and emphasises a single aspect of the data. In contrast, interactive data exploration aims at allowing the user to explore the structure of a dataset by observing a projection dynamically controlled by the user in an intuitive way. Ultimately, it provides a way to search for and find an embedding that emphasises the aspects the user desires to highlight. Current approaches often do not allow for real-time manipulation of large data sets.
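A sketch of the underlying power method for the leading principal component (numpy assumed); a knowledge-constrained variant would insert a projection onto the constraint set in each iteration, which is left as a placeholder here:

```python
# Sketch: power iteration on the covariance matrix converges to the leading
# eigenvector, i.e., the first principal component.
import numpy as np

def power_method_pca(X: np.ndarray, iters: int = 100, project=lambda v: v):
    """X: (n_samples, n_features), assumed mean-centered."""
    C = X.T @ X / len(X)                 # covariance matrix
    v = np.random.default_rng(0).normal(size=C.shape[0])
    for _ in range(iters):
        v = project(C @ v)               # enforce user/knowledge constraints here
        v /= np.linalg.norm(v)
    return v
```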
Type: Coding and evaluation in Python, likely using PyTorch (maybe PyTorch Geometric)
Question: Can we accurately predict shortest path distances in a large graph in sublinear time and space using a specialized neural network?
Context: Shortest path distance computation is an important primitive in network analysis and practical applications such as suggesting new contacts and content to users of a social network.
Let $G=(V,E)$ be an undirected graph. The shortest path distance $d(s,t)$ can be computed with Dijkstra's algorithm in $O(|E| + |V|\log|V|)$ time for any pair of vertices $s,t \in V$. Running this algorithm in a live system for thousands of queries per second on a very large graph with millions or billions of nodes and edges is intractable. Precomputing and storing the distances of all pairs of vertices, on the other hand, is intractable due to its quadratic space requirement. As a result, one tends to work with estimates of $d(s,t)$ that can be computed more efficiently.
One way of arriving at such estimates is to select landmarks or anchors, i.e., some vertices for which we precompute the shortest path distances to all other vertices in $G$. Let $l \in V$ be such a landmark; then, using the triangle inequality, we can bound $d(s,t) \leq d(l,s) + d(l,t)$ and $d(s,t) \geq |d(l,s) - d(l,t)|$ for all pairs $s,t \in V$. Note that we only use distances from $l$ to other vertices for these bounds.
We can use the bounds directly to give estimates of the form $d(s,t) \simeq d(l,s) + d(l,t)$, $d(s,t) \simeq |d(l,s) - d(l,t)|$, or $d(s,t) \simeq \frac{d(l,s) + d(l,t) - |d(l,s) - d(l,t)|}{2} + |d(l,s) - d(l,t)|$. But can we do better?
Suggested approach: We consider the distance estimation problem as a supervised learning task: given a pair $(s,t)$ of vertices, predict $d(s,t)$. As features, we will use the lower and upper bounds obtained from several landmarks; at training time, we can allow a certain number of shortest path distance computations to obtain ground-truth distances $d(s,t)$. This problem has already been formulated and addressed, e.g., in [1].
Your tasks include
Related work: [1] Maria Christoforaki, Torsten Suel: Estimating pairwise distances in large graphs. IEEE BigData 2014: 335-344
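To make the feature construction concrete, here is a sketch (networkx assumed; the graph is assumed connected, otherwise fall back to a default distance) of computing the landmark bound features for a regressor:

```python
# Sketch: lower/upper landmark bounds as features for predicting d(s, t).
import networkx as nx
import numpy as np

def landmark_features(G: nx.Graph, landmarks, pairs):
    # one BFS per landmark: distances from l to every vertex
    dist = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}
    feats = []
    for s, t in pairs:
        lower = [abs(dist[l][s] - dist[l][t]) for l in landmarks]
        upper = [dist[l][s] + dist[l][t] for l in landmarks]
        feats.append(lower + upper)
    return np.array(feats)

# Train e.g. a small MLP on (landmark_features, exact distances) for a sample
# of pairs, then answer queries in time proportional to the number of landmarks.
```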
Advisor: Pascal
Type: Theory of reinforcement learning (multiple topics)
Recommended skills: Course on reinforcement learning. Strong mathematical/theoretical background.
Task: Show convergence results for algorithms in RL. Show lower bounds on the performance of RL algorithms, i.e., PAC (probably approximately correct) estimates. Based on the PAC estimates, derive the computational complexity.
Context: In safety critical applications, convergence results and performance estimates are of utmost importance for reliable/safe/robust learning.
Suggested approach: Use fixed-point arguments and concentration inequalities.
Related work: Markus Böck and Clemens Heitzinger. Speedy categorical distributional reinforcement learning and complexity analysis. SIAM Journal on Mathematics of Data Science, 4(2):675–693, 2022. Markus Böck, Julien Malle, Daniel Pasterk, Hrvoje Kukina, Ramin Hasani, and Clemens Heitzinger. Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning. PLOS ONE, 17(11):e0275358/1–18, 2022.
Advisor: Prof. Clemens Heitzinger
Type: Safety-critical applications of reinforcement learning (multiple topics)
Recommended skills: Course on reinforcement learning. Strong programming skills (Python and/or Julia) for applications.
Task: Implement and test modern RL algorithms and devise new algorithms.
Context: In safety critical applications, distributional RL makes it possible to quantify risk. In real-world applications, deep RL makes it possible to treat large state spaces. We have experience in medical applications and want to learn more optimal strategies on recent datasets.
Suggested approach: Distributional RL, deep RL.
Related work: Markus Böck, Julien Malle, Daniel Pasterk, Hrvoje Kukina, Ramin Hasani, and Clemens Heitzinger. Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning. PLOS ONE, 17(11):e0275358/1–18, 2022. Pierrick Lorang, Horvath Helmut, Tobias Kietreiber, Patrik Zips, Clemens Heitzinger, Matthias Scheutz. Adapting to the "open world": the utility of hybrid hierarchical reinforcement learning and symbolic planning. Proc. 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), 13–17 May 2024, accepted for publication.
Advisor: Prof. Clemens Heitzinger
Describe the scientific merit and novelty of your idea. It is very important to narrow down the rough topic to a tentative research question and approach of interest to us. The research question should not have been answered previously and the answer needs to be verifiable. To answer the question, typically one has to:
If you choose your own topic, please briefly describe your project following this structure (check our suggested topics to get an idea):