- TISS: (link)
- contact: Sagar Malhotra (email)
- meeting link: (zoom)
- everything important will be announced in TUWEL/TISS.

This seminar simulates a machine learning conference, where the students take on the role of authors and reviewers. It consists of multiple phases.

Attend the **mandatory** first meeting either in person or remotely (details on TUWEL).

You select **two** topics/papers (i.e., two bullet points) from one of the topics below. You will work with the material mentioned in the overview and the topic-specific resources.

You choose your **own** topic to work on. This can be an existing machine learning paper/work or your own creative idea in the context of machine learning. We strongly encourage you to start from existing papers from the following venues: NeurIPS, ICML, ICLR, COLT, AISTATS, UAI, JMLR, MLJ. Importantly, your idea has to be specific and well worked out. Nevertheless, choose **one** of our suggestions as well.

**Independent of the option you chose**, understand the fundamentals of your topic and try to answer the following questions:

**What** is the problem? **Why** is it an interesting problem? **How** do you plan to approach the problem? / **How** have the authors of your topic approached the problem?

Select topics and write a short description of them together with the answers to the questions (~3 sentences should be sufficient) in **TUWEL**.

We can only accept your own proposal if you can answer the questions above and your topic is well worked out.

You and your fellow students will act as reviewers and bid on the topics of your peers that you want to review. Based on the bids, we (in our role as conference chairs) will select one of each student’s proposals as the actual project you will work on for the rest of the semester. You **do not** need to work on the other topic anymore. Additionally, we will assign two projects from other students to you, which you will review later in the semester.

Now the actual work starts. Gain a deep understanding of your topic, write a first draft of your report, and give a 5-minute presentation. Feel free to go beyond the given material.

You will schedule two meetings with your supervisor to discuss your progress, but do not hesitate to contact him/her if you have any questions.

You will again act as a reviewer for the conference by writing two reviews, one for each draft report assigned to you.

Based on the reviews from your peers (and our feedback) you will further work on your topic.

Give a final presentation and submit your report.

- Understanding machine learning: from theory to algorithms. Shai Shalev-Shwartz and Shai Ben-David (pdf)
- Foundations of machine learning. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar (pdf)
- Foundations of data science. Avrim Blum, John Hopcroft, and Ravindran Kannan (pdf)
- Mathematics for machine learning. Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong (pdf)
- Mining of massive datasets. Jure Leskovec, Anand Rajaraman, and Jeffrey D. Ullman (pdf)
- Reinforcement learning: an introduction. Richard Sutton and Andrew Barto (pdf)
- Deep learning. Ian Goodfellow, Yoshua Bengio, and Aaron Courville (pdf)

You should have access to the literature and papers through Google scholar, DBLP, the provided links, or the TU library.

- Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning. Chapters 2 and 3
- Shai Ben-David Lectures. (youtube-link) Lectures 1, 2, and 3
- Martin Grohe and Martin Ritzert. Learning First-Order definable concepts over structures of small degree. 2017
- Grohe et al. Learning MSO-definable hypotheses on strings. 2017
- van Bergerem et al. On the Parameterized Complexity of Learning Monadic Second-Order Formulas. 2023

**Motivation**: The ability to learn logically definable concepts from labelled data gives a theoretical model of machine learning that is explainable by design and integrates ideas from both logic (especially finite model theory) and PAC learning.

**Overview**:

**Papers and topics**:

**Motivation**:

Data science algorithms are successfully utilized in many different areas on a daily basis. Typically, these algorithms solve problems that are NP-hard and often even hard to approximate. Understanding why these algorithms work so well in practice is an important question in the area of beyond worst-case analysis.

**Overview**:

The book “Beyond the Worst-Case Analysis of Algorithms” by Tim Roughgarden provides a good starting point for the literature search. We are particularly interested in results related to Chapters 6, 7, 20, 28, and 30 of this book. You are encouraged to also look at other chapters of the book and papers related to these chapters.

**Supervisor**: Prof. Dr. Stefan Neumann

While our primary focus is on healthcare, RL has widespread applications in diverse domains such as finance, robotics, gaming, and autonomous systems. The adaptability of RL algorithms makes them suitable for addressing complex decision-making challenges in different fields.

PAC (probably approximately correct) guarantees provide a mathematical foundation for ensuring that learned policies are close to optimal, instilling confidence in the reliability of RL algorithms. With PAC guarantees, RL agents can make decisions in critical healthcare scenarios with a high degree of certainty, mitigating the risks associated with uncertain outcomes.
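For concreteness, one standard way such a guarantee is formalised is the PAC-MDP criterion (a sketch of the usual textbook form, not a statement taken from the seminar materials): with probability at least \(1-\delta\), the number of timesteps on which the agent's policy is noticeably suboptimal is polynomially bounded.

```latex
% PAC-MDP criterion (sketch): with probability at least 1 - \delta,
% the number of timesteps t on which the algorithm's current policy A_t
% is more than \epsilon-suboptimal is polynomially bounded:
\left|\,\{\, t : V^{A_t}(s_t) < V^{*}(s_t) - \epsilon \,\}\,\right|
\;\le\;
\mathrm{poly}\!\left( |S|,\, |A|,\, \tfrac{1}{\epsilon},\, \tfrac{1}{\delta},\, \tfrac{1}{1-\gamma} \right)
```

Here \(V^{A_t}\) is the value of the policy the algorithm follows at time \(t\), \(V^{*}\) the optimal value, and \(\gamma\) the discount factor.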

- What is a good representation? (Bengio, et al., “Representation Learning: A Review and New Perspectives”, 2013)
- Two common architectures used for disentanglement:
- Variational Auto-Encoders (Kingma & Welling, “Auto-Encoding Variational Bayes”, 2013, and “An Introduction to Variational Autoencoders”, 2019)
- Generative Adversarial Networks (Goodfellow, et al., “Generative Adversarial Nets”, 2014)
- survey on useful metrics (Carbonneau, et al., “Measuring Disentanglement: A Review of Metrics”, 2022; Eastwood & Williams, “A Framework for the Quantitative Evaluation of Disentangled Representations”, 2018; and Do & Tran, “Theory and Evaluation Metrics for Learning Disentangled Representations”, 2019)
- fairness (Creager, et al., “Flexibly Fair Representation Learning by Disentanglement”, 2019)
- contrastive learning (Cao, et al., “An Empirical Study on Disentanglement of Negative-free Contrastive Learning”, 2022)
- recommender systems (Ma, et al., “Learning Disentangled Representations for Recommendation”, 2019)
- weakly-supervised (Locatello, et al., “Weakly-Supervised Disentanglement Without Compromises”, 2020)
- semi-supervised (Nie, et al., “Semi-Supervised StyleGAN for Disentanglement Learning”, 2020)

**Motivation**: Computing a disentangled representation is a very desirable property for modern deep learning architectures. Having access to individual, disentangled factors is expected to provide significant improvements for generalisation, interpretability and explainability.

**Overview**:

**Papers and topics**:

- chapter 8 “equivariant neural networks” of “Deep learning for molecules and materials” by Andrew D. White, 2021. (pdf).
- introduction to equivariance: Taco Cohen and Risi Kondor - Neurips 2020 Tutorial (first half) (slideslive-link)
- neural network that can learn on sets (Zaheer, et al. “Deep sets.” NeurIPS 2017)
- learning equivariance from data (Zhou, et al. “Meta-learning symmetries by reparameterization.” ICLR 2021)

**Motivation**: Many data structures have an innate structure that our neural networks should respect. For example, the output of a graph neural network should not change if we permute the vertices (permutation equivariance/invariance).
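The invariance property can be illustrated with a toy sum-pooling readout in the spirit of Deep Sets (a minimal sketch with hypothetical layer sizes, not code from any of the listed papers): because each node is embedded independently and the embeddings are summed, reordering the nodes leaves the output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared linear layer applied to every node independently
# (sizes are arbitrary for illustration).
W = rng.normal(size=(4, 8))

def readout(X):
    """Deep Sets-style readout: embed each node, then sum-pool.

    X: (num_nodes, 4) node-feature matrix.
    Sum pooling makes the result invariant to row (vertex) order.
    """
    H = np.tanh(X @ W)    # per-node embedding, order-independent
    return H.sum(axis=0)  # permutation-invariant aggregation

X = rng.normal(size=(5, 4))   # 5 nodes with 4 features each
perm = rng.permutation(5)     # an arbitrary vertex permutation
assert np.allclose(readout(X), readout(X[perm]))
```

Mean or max pooling would work equally well as the aggregation step; what matters is that the aggregator itself is symmetric in its inputs.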

**Overview**:

**Papers and topics**:

- notes on generalisation (Prof. Roger Grosse) (link)
- generalisation and overfitting (youtube-link)
- memorisation (Arpit, et al. “A closer look at memorization in deep networks.” ICML 2017)
- double-descent (Belkin, et al. “Reconciling modern machine-learning practice and the classical bias–variance trade-off.” Proceedings of the National Academy of Sciences 2019)
- generalisation gap (Keskar, et al. “On large-batch training for deep learning: Generalization gap and sharp minima.” ICLR 2017)
- loss landscape (Fort and Jastrzebski. “Large scale structure of neural network loss landscapes.” NeurIPS 2019 **and** Li, et al. “Visualizing the loss landscape of neural nets.” NeurIPS 2018)

**Motivation**: The ability of a model to adapt and perform well on new data is crucial. A model which generalises not only performs well on the training set, but on unseen data as well. Understanding and characterising why and how deep learning can generalise well is still an open question.

**Overview**:

**Papers and topics**:

- Veličković, *Everything is connected: Graph neural networks*, Current Opinion in Structural Biology, 2023
- Sanchez-Lengeling et al., *A Gentle Introduction to Graph Neural Networks*, distill.pub, 2021
- Veličković, *Intro to graph neural networks (ML Tech Talks)*, YouTube, 2021
- Baranwal et al., *Optimality of Message-Passing Architectures for Sparse Graphs*, NeurIPS, 2023
- Zhou et al., *Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle Counting Power*, NeurIPS, 2023
- Zhang et al., *A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weisfeiler-Lehman Tests*, ICML, 2023
- Zhang et al., *Rethinking the Expressive Power of GNNs via Graph Biconnectivity*, ICLR, 2023
- Lim et al., *Sign and Basis Invariant Networks for Spectral Graph Representation Learning*, ICLR, 2023
- Joshi et al., *On the Expressive Power of Geometric Graph Neural Networks*, ICML, 2023
- Huang et al., *You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets*, LoG, 2022

**Motivation:** Graphs are a very general structure and appear in many areas: molecules and drug development, geographical maps, the spread of diseases. They can be used to model physical systems and to solve partial differential equations. Even images and text can be seen as special cases of graphs. Thus, it makes sense to develop neural networks that can work with graphs. GNNs have strong connections to many classical computer science topics (algorithmics, logic, …) while also making use of neural networks. This means that work on GNNs can be highly theoretical, applied, or anything in between.

**Overview:**

**Papers:**

*Note:* For very long papers we do not expect you to read the entire appendix.

**Motivation**:
Large language models such as ChatGPT are seeing huge research interest. Several companies have released models that are more or less freely available, and open-source initiatives have sprung up.

**Overview**:
In this seminar paper, an overview of the latest large language models that are available in various forms is given. This includes, in particular, an investigation of their performance and an explanation of how performance can be evaluated objectively at all.

**Advisor**: Prof. Clemens Heitzinger

- Nikita Bhalla, Adam Lechowicz, Cameron Musco: Local Edge Dynamics and Opinion Polarization. WSDM 2023: 6-14
- Uthsav Chitra, Christopher Musco: Analyzing the Impact of Filter Bubbles on Social Network Polarization. WSDM 2020: 115-123
- Cameron Musco, Christopher Musco, Charalampos E. Tsourakakis: Minimizing Polarization and Disagreement in Social Networks. WWW 2018: 369-378
- Antonis Matakos, Evimaria Terzi, Panayiotis Tsaparas: Measuring and moderating opinion polarization in social networks. Data Min. Knowl. Discov. 31(5): 1480-1505 (2017)
- Xi Chen, Jefrey Lijffijt, Tijl De Bie: Quantifying and Minimizing Risk of Conflict in Social Networks. KDD 2018: 1197-1205
- David Bindel, Jon M. Kleinberg, Sigal Oren: How bad is forming your own opinion? Games Econ. Behav. 92: 248-265 (2015)
- Mayee F. Chen, Miklós Z. Rácz: An Adversarial Model of Network Disruption: Maximizing Disagreement and Polarization in Social Networks. IEEE Trans. Netw. Sci. Eng. 9(2): 728-739 (2022)
- Jason Gaitonde, Jon M. Kleinberg, Éva Tardos: Adversarial Perturbations of Opinion Dynamics in Networks. EC 2020: 471-472
- Sijing Tu, Stefan Neumann: A Viral Marketing-Based Model For Opinion Dynamics in Online Social Networks. WWW 2022: 1570-1578

**Motivation**:

Online social networks have become ubiquitous parts of modern societies, but recently they have been blamed for causing disagreement and polarization. Developing a theoretical understanding of these phenomena is still an active research question.

**Papers**:

**Supervisor**: Prof. Dr. Stefan Neumann

- Renormalization Group Flow as Optimal Transport by Jordan Cotler and Semon Rezchikov, https://arxiv.org/abs/2312.16038
- Physics-informed neural network for solving functional renormalization group on lattice by Takeru Yokota, https://arxiv.org/abs/2304.00599

**Motivation:**
Exploration of the new perspectives opened by ML methods for complex quantum systems, and/or improving machine learning methods by viewing them as physical systems of interacting elements.

**Overview:**
Statistical field theory for neural networks (Lecture notes)
by Moritz Helias and David Dahmen, https://arxiv.org/abs/1901.10416

**Related Works:**

**Advisor**: Prof. Sabine Andergassen

Policy evaluation is a critical process that assesses the effectiveness of decision-making policies in healthcare. In dynamic healthcare environments, RL algorithms continuously assess and adjust policies based on real-time patient data, ensuring adaptability to evolving medical scenarios.

In healthcare, making decisions is challenging due to complex and high-stakes scenarios. RL, as a dynamic decision-making framework, is uniquely positioned to handle the intricacies of healthcare scenarios by not only predicting outcomes but also adapting treatment strategies to evolving patient conditions.

**Motivation**:
Transformers have revolutionized natural-language processing, and research has exploded since 2017.

**Overview**:
In this seminar paper, the functioning of transformers is explained and an overview of the latest developments regarding large language models, time-series prediction, etc. is given.

**Advisor**: Prof. Clemens Heitzinger

- General: What does it mean for ML to be trustworthy? (youtube-link)
- General: Trustworthy ML (Kush R. Varshney) (link)
- Differential privacy: Chapter 2 of: Dwork, Cynthia, and Aaron Roth. “The algorithmic foundations of differential privacy.” Found. Trends Theor. Comput. Sci. 9.3-4 2014
- Explainability: Samek, Wojciech, and Klaus-Robert Müller. “Towards explainable artificial intelligence.” Explainable AI: interpreting, explaining and visualizing deep learning. Springer, Cham, 2019
- interpreting model predictions
- Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. ““Why should I trust you?” Explaining the predictions of any classifier.” ACM SIGKDD 2016
- Lundberg, Scott M., and Su-In Lee. “A unified approach to interpreting model predictions.” NeurIPS 2017

- reliability of explanation methods
- Kumar, I. Elizabeth, et al. “Problems with Shapley-value-based explanations as feature importance measures.” ICML, 2020.

- robustness against attacks and adversaries
- Jagielski, Matthew, et al. “Manipulating machine learning: Poisoning attacks and countermeasures for regression learning.” 2018 IEEE Symposium on Security and Privacy (SP). IEEE, 2018.
- Carmon, Yair, et al. “Unlabeled data improves adversarial robustness.” NeurIPS 2019.

- differential privacy
- Abadi, Martin, et al. “Deep learning with differential privacy.” Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. 2016.
- Patel, Neel, Reza Shokri, and Yair Zick. “Model explanations with differential privacy.” 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.

**Motivation**: Machine learning systems are ubiquitous and it is necessary to make sure they behave as intended. In particular, trustworthiness can be achieved by means of privacy-preserving, robust, and explainable algorithms.

**Overview**:

**Papers and topics**: