




Time: 09.02.2021 | 10:00 – 11:00
Language: German
Location: GoToWebinar, access link via e-mail

Speaker: Beatrice Lugger, NAWIK



Time: 16.02.2021 | 17:00 – 18:00
Language: German
Location: GoToWebinar

Speaker: Katharina Weitz, Universität Augsburg



This webinar covers the foundations of the research field of explainable AI. After a look at what lies behind the buzzwords "explainable AI (XAI)" and "human-centred AI", it gives an overview of different XAI methods. It then presents current research demonstrating the strengths and weaknesses of different XAI methods when used with end users.

Time: 17.02.2021 | 17:00 – 18:00
Location: GoToWebinar

Speaker: Dr. Jannes Quer, Freie Universität Berlin


Description: In the webinar I will give an introduction to the fundamental ideas of reinforcement learning. Using examples, I will illustrate these ideas and demonstrate how, for example, neural networks can be applied. At the end, I will present current questions and results from my own research.
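The fundamental idea of reinforcement learning can be previewed with tabular Q-learning in a few lines. The 5-state chain environment, rewards, and hyperparameters below are a hypothetical illustration for this announcement, not material from the webinar itself:

```python
import numpy as np

# Minimal tabular Q-learning on a toy 5-state chain: the agent starts in
# state 0 and earns a reward of 1 only when it reaches state 4.
N_STATES, N_ACTIONS = 5, 2           # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2    # learning rate, discount, exploration

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def step(s, a):
    """Deterministic dynamics: move left/right, reward 1 at the goal."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

for _ in range(300):                  # training episodes
    s, done, t = 0, False, 0
    while not done and t < 10_000:    # cap episode length for safety
        # epsilon-greedy action selection
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q[s, a] toward r + GAMMA * max_a' Q[s2, a']
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
        s, t = s2, t + 1

greedy_policy = np.argmax(Q, axis=1)  # learned policy along the chain
print(greedy_policy[:4])
```

After training, the greedy policy moves right in every non-terminal state, which is the shortest path to the reward; the webinar's neural-network methods replace the table Q with a learned function approximator.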

Time: 23.02.2021 | 17:00 – 18:00
Language: German
Location: GoToWebinar


Prof. Dr. Eva Bittner, Universität Hamburg
Dr. Sarah Oeste-Reiß, Universität Kassel




In this webinar, we discuss current challenges and design approaches for the collaboration of humans and AI-based systems in knowledge work, drawing on current research projects (www.hymeki.de, https://instant.informatik.uni-hamburg.de). Particular importance is attached to mutual learning between human and AI (human-in-the-loop, machine-in-the-loop) in work processes and to the use of their complementary strengths.

Time: 24.02.2021 | 11:00 – 12:00
Language: English
Location: GoToWebinar


Dr. Theresa Züger, Research Group Lead Public Interest AI, Alexander von Humboldt Institute for Internet and Society


Dr. Hadi Asghari, Postdoc Public Interest AI
Freya Hewett, Doctoral Candidate Public Interest AI
Judith Fassbender, Doctoral Candidate Public Interest AI
Jakob Stolberg, Doctoral Candidate Public Interest AI



How can artificial intelligence best serve the public interest? This is the main question that our research project aims to tackle. We will first define what exactly is meant by public interest and which factors are particularly important in this context, and then apply this knowledge to developing new prototypes.

Our prototypes will be based on three different areas of AI. One project will use automatic image recognition technologies to help identify wheelchair-accessible locations, whilst a second project will examine how Natural Language Processing tools can be applied to decrease the level of text complexity. A third project will analyse what role design plays in creating public interest AI. 

By working with external partners to ensure that our AI prototypes do actually serve the public interest, we will be able to continually adjust which factors are most important and provide empirical evidence for our best practice guidelines for developing public interest AI. 

In this webinar, we will give a short presentation on our various projects and also touch upon the various perspectives that are important when approaching such a research question. As we are at the beginning of our project, the webinar will offer the opportunity to give feedback and there will be time for comments and questions.

Time: 25.02.2021 | 15:00 – 16:00
Location: GoToWebinar

Speaker: Dr. Jörn Hees, DFKI – Smarte Daten & Wissensdienste


Description: In this talk we'll give an overview of the still very active research area of Deep Learning. We'll start with a short history of how the field developed, cover the main research directions, and also dive into remaining challenges and current hot topics (e.g., XAI, Self-Supervision, Multi-Modality, Multi-Task Models).

Deep-Dive Workshops

Adversarial examples are images crafted to evade correct classification by deep neural networks. They exploit certain known weaknesses of these networks, for instance a preference for texture over shape, by introducing imperceptible changes to an image at the pixel level. Such images can then become instruments for stress-testing artificial intelligence systems, tools of resistance against social surveillance and control, or even media of artistic expression. This workshop introduces participants to the theory and practice of adversarial examples. With only a few lines of code, we will create images that confuse many well-known neural network architectures, and explore the limits of current-generation artificial intelligence systems.
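As a taste of what "a few lines of code" can look like, the classic Fast Gradient Sign Method (FGSM) perturbs an input in the direction that most increases the classifier's loss. The workshop will target real image networks; the four-pixel "image" and linear logistic classifier below are stand-in assumptions so the sketch stays self-contained:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One Fast Gradient Sign Method step against a logistic classifier.

    Shifts the input x by eps in the sign of the loss gradient, i.e. the
    direction that most increases the cross-entropy for the true label y.
    """
    z = np.dot(w, x) + b                 # model logit
    p = 1.0 / (1.0 + np.exp(-z))         # predicted probability of class 1
    grad_x = (p - y) * w                 # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Toy "image" of four pixels and a linear classifier that labels it class 1.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = 0.0
x = np.array([0.2, -0.1, 0.4, 0.3])

x_adv = fgsm_perturb(x, w, b, y=1, eps=0.5)
score_before = np.dot(w, x) + b      # positive: classified as class 1
score_after = np.dot(w, x_adv) + b   # negative: the same model now says class 0
print(score_before, score_after)
```

Against deep networks the same signed-gradient step is taken through backpropagation, and eps is kept small enough that the change stays invisible to the human eye.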


Dr. Fabian Offert, Assistant Professor of History and Theory of the Digital Humanities at the University of California, Santa Barbara


Data and information are never neutral. Even though information science has historically focused on providing access to information for 'everyone', information products often reflect the perspectives and biases of their creators and therefore do not meet the needs of all users. Indeed, the needs of marginalized communities are often not taken into consideration by those who create and maintain information infrastructures and technologies, and this lack of consideration can have many negative effects.

In this workshop, we will take as a starting point an intersectional feminist view on information and data science, informed by the idea that information and data can form and reinforce systems of power, which lead to inequalities and marginalization of communities. 


Prof. Rebecca Frank, Junior Professor at the Berlin School of Library and Information Science at Humboldt-Universität zu Berlin and the Einstein Center Digital Future.

This workshop is about why open interfaces and open data are so important for the mobility of the future, and how such interfaces can be used even when they are not officially offered. To illustrate this, apps will be taken apart live.


Radforschung analyses mobility-sharing systems with a focus on open data and open interfaces




When Roman Lipski (https://www.romanlipski.com/) works on a new picture, he does so together with the AI program "Arta", and the same is true in the workshop "Unfinished by Lipski - AI Art in the Making". He invites you in and lets you make the first brushstroke yourself. Everything that happens after that is transmitted by camera to a computer, where Arta processes it. The AI alters the image and derives suggestions for how it could continue. "After an 'Unfinished' session lasting several hours, people are usually completely exhausted, but happy," reports Lipski, who later completes the picture on the basis created in this way. He emphasises that while the software supports the artistic process, the human protagonists still have to do the painting themselves. "The AI gives me more time for the essentials: painting with brush and colours on the canvas," says the artist. "I am probably one of the few painters who are creative and have no crises." The workshop takes place digitally. Together with the AI, you steer what Roman Lipski paints live on the canvas. In addition, Florian Dohmann from Birds on Mars (https://www.birdsonmars.com/) will give you technical insights into the AI and how it works.


Roman Lipski, Artist


In the workshop we will discuss different perspectives from different disciplines on the topic "What is fairness and what does it mean for ML?". This will be done on the basis of some basic inputs and texts from Ethics and Machine Learning. In a second part, the discussion will be confronted with the results of some interviews we conducted with lay people. Adding a transdisciplinary touch to the workshop will lead to a second question: If there is a gap between the discussion on fairness in ML and its public perception, how do we bridge it? Do we even need to?
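One way the question "What is fairness and what does it mean for ML?" gets formalised is through group-level statistics such as demographic parity: a classifier's positive-prediction rate should be (roughly) equal across groups. A minimal sketch with invented data, not the workshop's own material:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # share of positives in group 0
    rate_1 = y_pred[group == 1].mean()  # share of positives in group 1
    return abs(rate_0 - rate_1)

# Toy binary predictions for eight people, four in each group:
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]   # classifier's decisions
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # group membership
gap = demographic_parity_gap(y_pred, group)
print(gap)  # group 0 receives 75% positives, group 1 only 25%: gap 0.5
```

Part of the ethical debate is precisely that such metrics capture only one narrow notion of fairness, and that different formalisations can be mutually incompatible.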


Dr. Thomas Grote, Postdoctoral fellow at the Ethics and Philosophy Lab of the Cluster of Excellence: ML: New Perspectives for Science; University of Tübingen

Dr. Samira Samadi, Research group leader of the “Human Aspects of Machine Learning” group at the Max Planck Institute for Intelligent Systems in Tübingen

This workshop aims to encourage and empower participants to create and write positive future scenarios for our AI-influenced society in the form of storytelling. First, the project twentyforty - Utopias for a Digital Society by the Alexander von Humboldt Institute will be presented, in which researchers from all over the world have already successfully carried out this task (Bronwen Deacon was project leader and Isabella Hermann one of the authors). Afterwards, we will work together on the participants' ideas. The aim is for all participants to solidify their topic, develop their future scenario in a dramatic way, and find a form in which to write the story. This will happen in an open discussion and with the support of the facilitators.

The utopian narratives written after the input of the workshop will be screened by a jury and a selection will be published in the KI-Camp publication.


Bronwen Deacon, Researcher/ Project Manager, HIIG

Dr. Isabella Hermann, Research Coordinator, Berlin-Brandenburg Academy of Sciences and Humanities in Berlin

In our workshop, you will get a transdisciplinary idea of robots with emotional intelligence in theory and practice. Starting from the continued relevance of the emotion-theoretical question "What is an emotion?", formulated in 1884 by William James, you will become more familiar with the engineering perspective on emotions and its computational models. After these short impulses, you will program a robot with emotional intelligence yourself (with support if needed) and observe its behaviour live at GV Lab in Japan. There will also be room for questions and discussions.


Maike Klein, Doctoral Candidate University of Stuttgart / Project Lead KI-Camp 2021

Enrique Coronado, PhD, Assistant Professor, Tokyo University of Agriculture and Technology

Current research on AI shows that machine learning systems can foster discrimination and inequality. It is therefore useful to examine AI through a feminist lens. In this workshop, we will ask: Who is currently designing and building AI, and whose voices are heard in the debate? Where does discrimination through AI currently occur? And, how can feminist practices help to tackle these issues? Envisioning the future is political, even more so, when these visions are narrated and discussed. Visions can lead to social change. Thus, participants of this workshop will be invited to imagine, write and draw their visions for desirable feminist futures of AI.


Helene von Schwichow, Co-Founder of the MOTIF Institute for Digital Culture

Katrin Fritsch, Co-Founder of the MOTIF Institute for Digital Culture

Automatic facial recognition makes the effects of digital surveillance tangible for each individual. Politics, as the main actor and bearer of responsibility in this controversial debate, faces pressing fundamental questions and risks: potential infringements of fundamental rights, the establishment of new forms of discrimination, and a growing concentration of power in the tech and private sector.

On which levels does the discourse take place? Which positions and framings shape it? Should facial recognition be banned, at least temporarily? Or can it indeed be useful in certain cases in democratic societies? What could effective regulation look like?

The workshop will discuss the status quo in the media, civil society and politics, as well as potential solutions oriented toward the common good.


Julia Hess, KI-Camp Fellow / Stiftung neue Verantwortung

In this workshop we will explore the use of machine learning and AI in healthcare. We will kick things off with an overview of the contexts in which AI can be applied in healthcare, and then give a short summary of recent regulatory initiatives that we hope will open new opportunities for research and applications. We will also touch on ethical and regulatory frameworks to ensure better outcomes for all, but most of all, we want to hear from you! We will facilitate a brainstorming session where you can interact with like-minded researchers and will have an opportunity to let your research (ideas) shine.


Lars Roemheld, Director AI & Data | health innovation hub

Prof. Dr. Ariel Stern, Director International Health Care Economics | health innovation hub

During this workshop, hybrid music artist Portrait XO and machine learning expert CJ Carr (Dadabots) introduce 'neural synthesis' for vocal AI. Participants will see what it's like to train a neural network on their own voice, how to use DDSP to transfer singing voices to other instruments (e.g. voice to saxophone), and how to train GPT-2 to generate lyrics. At the end, we'll share the results and open up for discussion and a quick Q&A.


Portrait XO, Hybrid Music Artist and Creative Director

CJ Carr, AI Music Artist

Can you move another person's hand using nothing but your imagination? With brain-computer interfaces, this and much more becomes possible. What could this be useful for, and how might future applications be deployed?

In this workshop you will gain insights into the world of neuronal activity, the bioelectric signals of the brain, and the functionality of EEG-based BCI systems. After a live demonstration of different methods and BCI applications, Martin Walchshofer (BCI developer, Guger Technologies, www.gtec.at) and Erika Mondria (Supervisor Brain Projects, Ars Electronica Linz) will answer your questions about the AI behind these applications and the possibilities of future developments.


Erika Mondria, Supervisor Brain Projects, Ars Electronica Linz

Martin Walchshofer, BCI Developer, Guger Technologies

Martin Spanka, Contact for Neurotechnology / Ars Electronica Center