Distributed Computing
ETH Zurich

Seminar in Deep Neural Networks (FS 2023)

Organization

When & Where: Tuesdays 10:15, ETZ E 9.
First seminar: 21.02.2023
Last seminar: 23.05.2023
Coordinators: Benjamin Estermann, Florian Grötschla, Roger Wattenhofer.

Background

Since this is a seminar, we will focus on recent research and skip most of the basics. We assume that all participants are familiar with the fundamentals of deep neural networks. If you feel like you cannot follow the discussions, please check out this playlist, this lecture, the book by Francois Chollet on Deep Learning with Python, or any other lectures or books on deep neural networks.

Seminar Timeline

Preparation Timeline

Your Presentation

Grade

The most important part of your grade is the quality of your presentation, both its content and its style. In addition, we grade how well you motivate and direct the discussions with the audience, during and after your presentation, as well as how actively you participate in the discussions throughout the semester. Finally, we also value attendance and the quality of your mentor-only test presentation.

Papers

You can find the list of available papers here. Send us an ordered list (by preference) of up to 5 papers. We try to assign papers first-come, first-served according to your preferences, while also taking the availability of the supervisors into account. To maximize the chance of getting a paper from your list, we recommend that you diversify your choices sufficiently. Please send us your preferences by the 10th of February. If you do not have any preference, still send us an e-mail and we will assign a paper to you.

Schedule

Date | Presenter | Title | Mentor | Slides
February 28 | Entiol Liko | Truncated Horizon Policy Search: Combining Reinforcement Learning & Imitation Learning | Xiaofeng Flint Fan | [pdf]
February 28 | Davide Maioli | Towards Understanding Grokking: An Effective Theory of Representation Learning | Benjamin Estermann | [pdf] [jpg]
March 7 | Max Krähenmann | End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking | Joël Mathys | [pdf]
March 7 | Virgilio Strozzi | What do Vision Transformers Learn? | Peter Belcák | [pdf]
March 14 | Hongze Wang | Deep Reinforcement Learning meets Graph Neural Networks: exploring a routing optimization use case | Xiaofeng Flint Fan | [pdf]
March 14 | Guy Shacht | Exploratory Combinatorial Optimization with Reinforcement Learning (RL for combinatorial optimization) | Xiaofeng Flint Fan | [pdf]
March 21 | Yannick Wattenberg | Highly accurate protein structure prediction with AlphaFold | Karolis Martinkus | [pdf]
March 21 | Stuart Heeb | An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes | Luca Lanzendörfer | [pdf]
March 28 | Ferjad Naeem | Learning Transferable Visual Models From Natural Language Supervision | Ferjad Naeem | [pdf]
April 4 | Andras Geiszl | Microsoft Jigsaw (Post-processing GPT-3 Codex for producing valid code) | Peter Belcák | [pdf]
April 4 | Lucas Morin | DiGress: Discrete Denoising Diffusion for Graph Generation | Karolis Martinkus | [pdf]
April 18 | Simon Wachter | GraphCodeBERT: Pre-Training Code Representations with Data Flow | Florian Grötschla | [pdf]
April 18 | Turcan Tuna | Flamingo: a Visual Language Model for Few-Shot Learning | Ferjad Naeem | [pdf]
April 25 | Francesco Di Stefano | Transformers as Soft Reasoners over Language | Peter Belcák | [pdf]
May 9 | Alec Pauli | ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation | Luca Lanzendörfer | [pdf]
May 9 | Stefan Kramer | ROME: Editing Factual Associations in GPT | Peter Belcák | [pdf]
May 16 | Hong Fan Zhao | BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer | Luca Lanzendörfer | [pdf]
May 16 | Matthias Otth | Deep Equilibrium Models | Joël Mathys | [pdf]
May 23 | Dennis Vilgertshofer | A Generalist Algorithmic Learner | Florian Grötschla | [pdf]
May 23 | Meret Ackermann | Illuminating protein space with a programmable generative model | Karolis Martinkus | [pdf]