Seminar in Deep Neural Networks (FS 2023)
This is a seminar, so we will focus on recent research and skip most of the basics. We assume that all participants are familiar with the fundamentals of deep neural networks. If you feel that you cannot follow the discussions, please check out this playlist, this lecture, the book Deep Learning with Python by François Chollet, or any other lectures or books on deep neural networks.
- At the end of January, we will publish a list of papers. You can then tell us what your preferences are.
- We will assign the papers on a first-come, first-served basis, and the presentations will be scheduled in the same order. You will also be assigned a mentor who is familiar with the topic and will help you prepare your presentation.
- In the first week of the semester, there will be no presentations. Instead, we will give an introduction to the seminar and some tips on scientific presentations.
- After that, every week two students will present their respective papers.
- Around 4 weeks before your talk: first meeting with your mentor where you discuss the structure of the talk.
- 4 to 1 week before your talk: meet with your mentor as often as both parties find necessary to make progress.
- At least 1 week before your talk: the presentation is ready, and you give it as a test run to your mentor only.
- Your mentor will get a copy of your test run presentation slides. (These slides will not become public, but they may influence your seminar grade.)
- Your mentor will give you feedback, and you are supposed to update your final presentation based on this feedback.
- Please send us your slides no later than the day before your presentation.
- Your presentation should be 30 minutes long.
- After your presentation, you should lead a lively discussion about the presented work, for up to 15 minutes.
- It may help the discussion if you also take a critical view of the presented work.
- Your presentation should take into account these presentation guidelines.
- Beyond these guidelines, you may find other useful tips about good scientific presentations online, for instance here, or here.
- All material copied from others (figures, explanations, examples, or equations) must be properly referenced.
The most important part of your grade is the quality of your presentation, both in content and style. In addition, we grade how well you motivate and direct the discussions with the audience, during and after your presentation, as well as how actively you participate in the discussions throughout the semester. Finally, we also value attendance and the quality of your mentor-only test presentation.
You can find the list of available papers here. Send us a list of up to 5 papers, ordered by preference. We try to assign papers on a first-come, first-served basis according to your preferences, while also taking into account the availability of the supervisors. To maximize the chance of getting a paper from your list, we recommend that you diversify your choices sufficiently. Please send us your preferences by February 10. If you have no preference, send us an e-mail anyway and we will assign a paper to you.
|Date|Presenter|Paper|Mentor|Materials|
|---|---|---|---|---|
|February 28|Entiol Liko|Truncated Horizon Policy Search: Combining Reinforcement Learning & Imitation Learning|Xiaofeng Flint Fan|[pdf]|
|February 28|Davide Maioli|Towards Understanding Grokking: An Effective Theory of Representation Learning|Benjamin Estermann|[pdf] [jpg]|
|March 7|Max Krähenmann|End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking|Joël Mathys|[pdf]|
|March 7|Virgilio Strozzi|What do Vision Transformers Learn?|Peter Belcák|[pdf]|
|March 14|Hongze Wang|Deep Reinforcement Learning meets Graph Neural Networks: exploring a routing optimization use case|Xiaofeng Flint Fan|[pdf]|
|March 14|Guy Shacht|Exploratory Combinatorial Optimization with Reinforcement Learning|Xiaofeng Flint Fan|[pdf]|
|March 21|Yannick Wattenberg|Highly accurate protein structure prediction with AlphaFold|Karolis Martinkus|[pdf]|
|March 21|Stuart Heeb|An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes|Luca Lanzendörfer|[pdf]|
|March 28|Ferjad Naeem|Learning Transferable Visual Models From Natural Language Supervision|Ferjad Naeem|[pdf]|
|April 4|Andras Geiszl|Microsoft Jigsaw (Post-processing GPT-3 Codex for producing valid code)|Peter Belcák|[pdf]|
|April 4|Lucas Morin|DiGress: Discrete Denoising Diffusion for Graph Generation|Karolis Martinkus|[pdf]|
|April 18|Simon Wachter|GraphCodeBERT: Pre-Training Code Representations with Data Flow|Florian Grötschla|[pdf]|
|April 18|Turcan Tuna|Flamingo: a Visual Language Model for Few-Shot Learning|Ferjad Naeem|[pdf]|
|April 25|Francesco Di Stefano|Transformers as Soft Reasoners over Language|Peter Belcák|[pdf]|
|May 9|Alec Pauli|ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation|Luca Lanzendörfer|[pdf]|
|May 9|Stefan Kramer|ROME: Editing Factual Associations in GPT|Peter Belcák|[pdf]|
|May 16|Hong Fan Zhao|BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer|Luca Lanzendörfer|[pdf]|
|May 16|Matthias Otth|Deep Equilibrium Models|Joël Mathys|[pdf]|
|May 23|Dennis Vilgertshofer|A Generalist Algorithmic Learner|Florian Grötschla|[pdf]|
|May 23|Meret Ackermann|Illuminating protein space with a programmable generative model|Karolis Martinkus|[pdf]|