
These are the 2021 AI Newcomers


Shailza Jolly, DFKI und TU Kaiserslautern
Stefan Seegerer, Freie Universität Berlin

Natural and Life Sciences:

Dr. Heidi Seibold, Helmholtz AI am Helmholtz Zentrum München
Dr. Benjamin Schubert, Helmholtz Zentrum München

Engineering and Technology:
Dr. Georgia Chalvatzaki, TU Darmstadt
Pascal Klink, TU Darmstadt

Humanities and Social Sciences:
Ariana Dongus, HfG Karlsruhe
Dr. Daniele Di Mitri, Leibniz-Institut für Bildungsforschung und Bildungsinformation

Sofia Crespo, freelance artist
Jake Elwes, freelance artist


The Candidates

The following people were candidates for the title "KI-Newcomer*in 2021":

Humanities and Social Sciences

Ariana Dongus

Karlsruhe, Baden-Württemberg

Ariana Dongus lives and conducts research in Berlin and Karlsruhe. She is a doctoral candidate and research associate at the Karlsruhe University of Arts and Design (HfG), where she also coordinates the research group Artificial Intelligence and Media Philosophy founded by Prof. Pasquinelli. At the HfG she also coordinated a research project funded by the Volkswagen Foundation in the program "AI and the Society of the Future".

She has presented her work nationally and internationally, for example at Ars Electronica, transmediale, PACT Zollverein, the Bern University of the Arts, the Shanghai Ming Contemporary Art Museum, and the Ljubljana Biennial of Graphic Arts. Her texts have appeared in art magazines, newspapers, and academic journals. Recent publications include essays, journalistic reportage, and interviews: www.arianadongus.com.

An important concern of her work is to demystify AI and to show that technology is not neutral and objective: historically grown social injustices, discrimination, and inequality are deeply woven into the fabric of high tech. In her PhD thesis and in a project funded by the Volkswagen Foundation, she does this by showing how much human labor is still needed to make AI applications appear 'autonomous' and 'smart'. She analyzes the intersections of biometrics, colonial pasts, new forms of labor, and 'machine intelligence', and in so doing contributes to a constructive critique of digital economies.

In her teaching she offers, among other classes, a seminar series co-taught with Prof. Pasquinelli titled "Women in Computation/Queering Media Theory", which highlights the crucial yet forgotten and invisible role women have played in the history of computation, as well as seminars such as "In/Visible Subjects of AI: Ghost Work and Biometric Control in the Global South," in which students engage with the highly problematic past and present of biometrics, discuss its application for social control, and investigate the emerging field of gig work for AI applications.

Engineering and Technology

Andreas Kist

Erlangen, Bayern

Andreas Kist studied Molecular Medicine at Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg. During his studies, Andreas was involved in innovative healthcare projects at Siemens AG. He received his PhD in Neuroscience for his work at the Max Planck Institute of Neurobiology in Martinsried (Munich). Since 2018, Andreas has been working as a postdoc at the Department of Otorhinolaryngology, Head & Neck Surgery, of the University Hospital Erlangen. In a BMWi-funded project, his aim is to bring laryngeal high-speed videoendoscopy to the clinic; specifically, Andreas is transforming the complex image processing pipeline using AI tools. Andreas will join FAU as a junior faculty member in April 2021.

Andreas has presented his interdisciplinary research at numerous national and international conferences and received several awards for his work, for example from the Boehringer Ingelheim Fonds and the Joachim Herz Foundation. Andreas is married and the father of two wonderful girls.

Laryngeal high-speed videoendoscopy (HSV) is an excellent tool for quantifying vocal fold oscillation, which is important for determining a patient's health status and monitoring treatment progress. Despite its great advantages over current clinical tools, HSV is barely used in the clinic due to complex, manual data analysis and outdated hardware. A clinically applicable HSV system should therefore provide fully automatic data analysis and up-to-date hardware.

Andreas led an international collaboration of several hospitals and research institutions to create an open multi-hospital dataset for glottis segmentation, which is crucial for any further HSV data analysis. With this, Andreas was able to develop novel, highly efficient and well-generalizing deep neural networks (DNNs) that can be deployed to inexpensive hardware accelerators. His interdisciplinary research focuses on developing and evaluating clinically applicable DNNs for semantic segmentation and health status classification.

Humanities and Social Sciences

Antonio Bikić

München/Zürich, Bayern

Antonio Bikić is pursuing a doctorate at LMU Munich and ETH Zurich on the relevance of semantics for moral decision-making, working with (inverse) reinforcement learning algorithms. In doing so, he investigates in particular questions from the philosophy of mind and from ethics and action theory. His technical focus is on subsymbolically realized artificial intelligence. He studied, among other subjects, philosophy/Latin (M.A.) and computational linguistics/computer science (M.Sc.) in Munich, Cologne, Mainz, and Bonn, and worked as an expert adviser on ethical questions in the context of autonomous driving with the German Association of the Automotive Industry and Fraunhofer IAO. Before that he worked, among others, for the computing center of the Max Planck Society, for PwC Munich (on a requirements catalog for AI systems), and at chairs of ethics at LMU Munich.

The primary research question of my project is the following: Does moral agency require semantics? My project answers this question by determining whether there is a set of situations that require semantics. I set out these situations by exploring the architecture of (inverse) reinforcement learning agents. The scientific problem underlying this project is the connection between semantics and moral agency. Moral agency is structured by norms. My thesis is: contexts with at least second-degree temporary implicit norms require semantics to take (moral) decisions. Second-degree implicit norms are norms that emerge from at least two other implicit norms. If my thesis holds true, there are good ethical reasons not to automate some contexts if the current machine architecture is maintained.
Engineering and Technology

Benjamin Maschler

Stuttgart, Baden-Württemberg

Benjamin Maschler studied Renewable Energies and Sustainable Electrical Power Supply at the Universities of Stuttgart and Cape Town. Since 2017, he has been a research assistant at the Institute of Industrial Automation and Software Systems at the University of Stuttgart. His research focuses on solving applied problems of distributed or dynamic machine learning in order to make such learning more suitable for everyday use, more robust, and less prone to abuse. For this purpose, he uses methods of continual and transfer learning. His research was well received at national and international conferences and is currently leading to several high-profile journal publications. In addition to his research work, he is committed to an informed, social debate about technology in our everyday lives (e.g. https://youtu.be/x-6_X_xoJR0).

Benjamin Maschler is researching how to use deep transfer learning to make machine learning more suitable for everyday use, more robust, and less prone to abuse. Conventional machine learning requires the compilation of large training datasets, from which algorithms then extract correlations. This favors large corporations with existing market access, takes away much of the control users have over their data and how it is used, and is energy-inefficient. Moreover, such algorithms have so far shown little flexibility in responding to dynamic changes. Transfer learning, on the other hand, enables learning on distributed datasets directly at the user's end and, at the same time, permits much smaller-scale adaptation to local, even dynamic, conditions. Here, Benjamin developed first description approaches and is currently dedicated to the creation of a practical, open framework - primarily in industrial automation, but transferable to other areas where a central merging of data is (or should be) undesirable.
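The transfer-learning idea described above can be illustrated with a minimal sketch: a pretrained feature extractor stays frozen and shared across sites, while each site refits only a small linear head on its own local data. All weights and data below are random placeholders for illustration, not Benjamin's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: frozen and shared across all sites.
# In a real system this would come from a model trained elsewhere.
W_pretrained = rng.normal(size=(8, 4))

def features(x):
    # Frozen nonlinear feature map; never retrained locally.
    return np.tanh(x @ W_pretrained)

# Local site: a small dataset that never leaves the user's premises.
X_local = rng.normal(size=(20, 8))
y_local = rng.normal(size=20)

# Cheap local adaptation: fit only a linear head by least squares.
Phi = features(X_local)
head, *_ = np.linalg.lstsq(Phi, y_local, rcond=None)

def predict(x):
    return features(x) @ head

print(predict(X_local).shape)  # (20,)
```

Only the 4-dimensional head is learned locally; the raw data and the bulk of the model parameters are never centralized, which is the property the paragraph above highlights.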
Natural and Life Sciences

Benjamin Schubert

München, Bayern

Benjamin Schubert is a team leader at Helmholtz Zentrum München. His goal is to develop more effective and safer vaccines and biotherapeutics using AI methods. During his PhD at the University of Tübingen, Dr. Schubert designed algorithms that support every step of vaccine development, from antigen identification to selection and vaccine assembly, enabling more streamlined and resource-efficient development. In a recent collaboration, Dr. Schubert was able to demonstrate experimentally that his algorithm truly improves vaccine efficacy beyond human designs. During his postdoc at Harvard Medical School, Dr. Schubert designed an AI-based method to modify biotherapeutics to prevent immunological responses that would otherwise have negative effects on efficacy and safety. Initial experimental evaluations of computationally redesigned biotherapeutics were encouraging and demonstrated the prospects of his approach for improving the safety and efficacy of biotherapeutics.

Dr. Schubert is interested in how machine learning can be used to determine expressive latent representations of amino acid sequences in order to accurately predict biophysical properties and to generate new sequences with optimized design criteria. To this end, he is developing novel generative and supervised deep neural networks and combines these architectures with techniques from multiple-instance and multi-task learning. Integrating these models into discrete optimization problems, Dr. Schubert uses them to solve common biotherapeutic engineering tasks such as arranging peptides optimally to improve vaccine efficacy or finding the best alterations of a drug to reduce potential side effects. Such optimization problems often have multiple design criteria that need to be fulfilled simultaneously, so Dr. Schubert is also developing new strategies for finding optimal solutions to discrete multi-objective optimization problems.
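The discrete multi-objective setting mentioned at the end can be illustrated with a toy sketch: filtering candidate designs down to the Pareto-optimal set when two criteria compete, here a score to maximize (say, predicted efficacy) and one to minimize (say, predicted immunogenicity risk). The candidates and numbers are invented for illustration and are not Dr. Schubert's actual models.

```python
def pareto_front(candidates):
    """Keep candidates not dominated on (maximize efficacy, minimize risk).

    candidates: list of (name, efficacy, risk) tuples.
    A candidate is dominated if another is at least as good on both
    criteria and strictly better on at least one.
    """
    front = []
    for name, eff, risk in candidates:
        dominated = any(
            e >= eff and r <= risk and (e > eff or r < risk)
            for _, e, r in candidates
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical designs: C is dominated by A (worse efficacy, higher risk)
designs = [("A", 0.9, 0.4), ("B", 0.8, 0.1), ("C", 0.7, 0.5)]
print(pareto_front(designs))  # ['A', 'B']
```

No single design wins on both criteria, so the "solution" is the whole front; picking one element from it is exactly the kind of multi-criteria decision the paragraph describes.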

Damian Dziwis

Düsseldorf, Nordrhein-Westfalen

Damian T. Dziwis was born in 1986 in Chorzów (Poland). The Düsseldorf-based composer and engineer creates the majority of his multimedia works with audiovisual generative algorithms, machine learning and artificial intelligence, live coding, or DIY electronics. He began his artistic education in instrumental composition under David Graham, followed by electronic composition under Christian Banasik, and completed it with his master's studies in electronic composition under Michael Beil in Cologne.
Damian's compositions and installations have been performed and exhibited at various festivals, for example the CTM Festival in Berlin, Music Tech Fest Stockholm, the inSonic Festival at the ZKM Karlsruhe, the Beethoven Fest Bonn, the ACHT BRÜCKEN festival in Cologne, the "die digitale" festival in Düsseldorf, the 60x60 festival in Chicago, and THE WRONG - New Digital Art Biennale. Damian was artist in residence at the ZKM Karlsruhe and participated in art labs of festivals such as Ars Electronica in Linz, CTM Festival Berlin, Music Tech Fest Stockholm, the Gamma Festival in Saint Petersburg, and MUTEK Montreal.
Besides his artistic work, he holds an engineering degree in media technology, works as a lecturer for Creative Coding at the Peter Behrens School of Arts (HS Düsseldorf), and is pursuing a Ph.D. (TH Cologne & TU Berlin) in virtual acoustics, developing applications for spatial audio and artistic expression that have been published at conferences such as AES, DAGA, ICMI and TEI. Website: http://damian.t.dziwis.net

Damian T. Dziwis is a composer and media artist working with algorithms and AI. He explores the possibilities of machine learning agents as creative and collaborative partners in artistic processes such as composition, live coding, or generative art. He is also a lecturer for Creative Coding at HS Düsseldorf, as well as a researcher and Ph.D. student in the field of spatial audio at TH Cologne and TU Berlin.
Humanities and Social Sciences

Daniele Di Mitri

Frankfurt, Hessen

My name is Daniele Di Mitri. I was born in Bari, in the south of Italy. My mother was a teacher, my father a software engineer. From them, I inherited two main passions: education and technology. Since high school, I have been on a mission to improve access to education. Starting off as a student activist, I later joined the board of two European NGOs in the field of lifelong learning. At the age of 19, I founded a web development company and started my studies in computer science. In my bachelor's thesis, I focused on how learning analytics can improve school assessment. Thereafter, I realised I wanted to learn more about data science. I moved to the Netherlands and enrolled in the master's in AI at Maastricht University. During my studies, I took part in the excellence research programme at IBM, where I learned more about the business side of AI. I followed up my AI master's with a PhD on learning analytics and wearable sensors at the Open University of the Netherlands. My PhD thesis, "The Multimodal Tutor", describes the potential of using multimodal data to support practical learning experiences through automatic feedback. In 2020, I joined the EduTec group at the DIPF in Frankfurt as a group leader. My current focus is on creating responsible AI for education and human support. Especially during Covid-19, AI can tackle learning in isolation by widening access to education and allowing people to train practical skills. Becoming the AI Newcomer of 2021 will help me along with my mission.

How can we best interface artificial intelligence applications with humans to ultimately support human learning, support their goal achievement, and boost human productivity?

Franziska Schirrmacher

Erlangen, Bayern

Franziska Schirrmacher has developed innovative AI-based image processing methods during her academic career and applied them in several socially relevant areas. Her master's thesis in the field of medical image processing significantly improved the image quality of eye scans, enabling better diagnosis of eye diseases. The work was presented at the leading medical image processing conference MICCAI 2017 and published in a special issue of the journal Medical Image Analysis, where it was recognized as one of the best papers. She is currently writing her doctoral thesis at the IT Security Infrastructures Lab at Friedrich-Alexander-Universität Erlangen-Nürnberg, where she contributes to a DFG-funded collaborative research center and develops image processing methods to fight organized crime in cooperation with the BKA. One goal, for example, is improved recognition of license plates from poor-quality image or video data such as surveillance cameras. The high reliability is achieved by a novel combination of character recognition and image processing. Her published results have led to a whole series of follow-up work and serve as the basis for license plate recognition at the BKA as part of a BMBF project.

In police investigations, data from various sources are used with the aim of identifying the suspect. One possibility is to determine the license plate number of the crime vehicle from a surveillance video. Often, the quality of the video is poor and the license plate cannot be identified. The resolution of the camera, video compression, and lighting are known influencing factors. To train a neural network for license plate recognition, these factors must be covered in the training data. If the test data deviates from the training data, the recognition accuracy of the network drops sharply.
The goal of my work is to design a network topology and create a dataset for reliably predicting license plates from poor-quality image or video data. To improve the topology, extending the network with additional tasks from the field of image processing shows particular promise.
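The point about covering the known degradation factors in the training data can be sketched as a simple augmentation step: simulating the low resolution and sensor noise of surveillance footage on clean training images. The specific degradations and parameters below are illustrative assumptions, not the actual BKA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img, noise_sigma=0.05, downscale=2):
    """Simulate a low-quality surveillance frame from a clean image.

    img: 2D float array with values in [0, 1].
    Applies crude downsampling (lower resolution) and additive
    Gaussian noise, then clips back to the valid intensity range.
    """
    small = img[::downscale, ::downscale]           # resolution loss
    noisy = small + rng.normal(0.0, noise_sigma, small.shape)
    return np.clip(noisy, 0.0, 1.0)

# Stand-in for a clean license-plate crop (real data would be images)
clean = rng.random((32, 96))
low_q = degrade(clean)
print(low_q.shape)  # (16, 48)
```

Training on such degraded variants alongside clean images is one standard way to keep the test distribution (real surveillance footage) inside the training distribution, addressing the accuracy drop described above.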
Engineering and Technology

Georgia Chalvatzaki

Darmstadt, Hessen

Georgia Chalvatzaki is the leader of the Intelligent Robotic Systems & Assistants (iROSA) research group at TU Darmstadt. Dr. Chalvatzaki is a world-leading junior researcher developing novel methods for assistive robots that combine machine learning with classical approaches. Her approach of combining learning and planning for human-robot interaction (HRI), with a primary focus on the embodied AI of robot assistants, is considered one of the most promising ways to allow robots to leave structured lab environments and enter our homes. She has developed methods for human motion and action prediction, while her work on interactive reinforcement learning for HRI enabled the planning of human-adaptive robot behaviors. She was recently accepted into the DFG's Emmy Noether Programme for Artificial Intelligence; only 9 out of 91 proposals were selected for funding. The programme enables outstanding young scientists to qualify for a university professorship by independently leading a research group over six years. Her proposed research, "Robot Learning of Mobile Manipulation for Intelligent Assistance," studies new methods at the intersection of machine learning and classical robotics, taking the research on embodied AI robotic assistants one step further. The research in iROSA proposes novel methods for enabling mobile manipulator robots to solve complex tasks in house-like environments, with the human in the loop of the interaction process.

My fundamental research question is how to enable embodied AI systems, i.e. robots, to acquire skills for performing assistive tasks in human-inhabited environments, introducing novel methods at the intersection of robotics and machine learning for mobile manipulation and intelligent human-centered assistance.
Engineering and Technology

Gesina Schwalbe

Regensburg, Bayern

Gesina Schwalbe completed her studies in mathematics at the University of Regensburg in 2018. Since then she has been a PhD student at Continental AG, working on the verification of deep neural networks (DNNs) for perception in automated driving, supervised by Professor Ute Schmid, head of the Cognitive Systems Laboratory at the University of Bamberg. Gesina's research interests are dedicated to the safety of future autonomous mobility solutions that use DNNs for perception and planning. This encompasses the structure of the safety argument for automated driving perception and the verification of the internal representations of DNNs with respect to predefined symbolic constraints. For the latter she is currently investigating the application of concept activation vectors to the semi-formal verification of object detection. This work is part of the German publicly funded project KI-Absicherung, which aims at a prototypical safety argumentation for pedestrian detection realized with a deep neural network.

Gesina Schwalbe is generally interested in the challenge of safety assurance for deep convolutional neural networks in perception applications. This includes both the safety argumentation structure and methods to provide evidence for the safety argument.

Her main research question is how to enable formal or semi-formal verification of symbolic requirements on convolutional deep neural networks. This combines questions of explainable AI (How to link symbolic concepts with intermediate outputs of a DNN?) and formal verification (How to do this quantitatively? How to formulate verifiable rules on the symbolic concepts?).
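The concept activation vectors mentioned above can be sketched in a few lines: a CAV is the weight vector of a linear separator between "concept" and "non-concept" examples in a layer's activation space, and projecting gradients onto it measures concept sensitivity. The random activations below are stand-ins for real intermediate network outputs; everything here is an illustrative toy, not the project's actual verification method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in activations of some DNN layer (16 units) for examples
# showing a symbolic concept (e.g. "pedestrian") vs. random examples.
concept_acts = rng.normal(loc=1.0, size=(50, 16))
random_acts = rng.normal(loc=-1.0, size=(50, 16))

X = np.vstack([concept_acts, random_acts])
y = np.array([1.0] * 50 + [-1.0] * 50)

# Least-squares linear separator; its normalized weight vector is
# the concept activation vector (CAV) for this layer.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
cav = w / np.linalg.norm(w)

# Concept sensitivity: project a (here random) gradient onto the CAV.
grad = rng.normal(size=16)
sensitivity = float(grad @ cav)
print(cav.shape)  # (16,)
```

Checking such sensitivities against predefined symbolic constraints ("the pedestrian concept must influence the detection output") is the kind of semi-formal verification evidence the bio describes.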

Engineering and Technology

Kaja Balzereit

Lemgo, Nordrhein-Westfalen

Kaja Balzereit has been a research associate at the Fraunhofer Industrial Automation branch (INA) of Fraunhofer IOSB for more than three years. For her PhD, she is researching symbolic AI methods for intelligent fault handling in production systems in order to increase the resilience of modern production facilities to external as well as internal disturbances. She regularly publishes and presents the results of her research at international conferences and in scientific journals. In research projects, she works closely with various companies to bring the potential of AI into broad application and to move the future trend Industry 4.0 from theory into practice. Among other things, she recently contributed to an AI milestone on the road to the autonomous factory with the successful completion of a scientific project. In addition, her work is closely linked to the Fraunhofer-Gesellschaft's Machine Learning Research Center, which promotes the development of key technologies in artificial intelligence. Parallel to her scientific work, Kaja Balzereit is personally committed to inspiring students for her field of work: she has presented her work in several lectures at different universities, contributing to making AI research visible, getting students interested in technical topics, and pointing out career paths for research in the STEM field.

Kaja Balzereit works on intelligent fault handling in modern production plants. To increase the resilience of production plants, she uses both symbolic AI methods and machine learning. The goal is not only to detect anomalies and faults in production plants, but also to automatically identify their causes and to minimize their effects as far as possible. To this end, symbolic AI methods such as automated reasoning, which enable the analysis of cause-effect relationships and causalities, are used.
Humanities and Social Sciences

Nils Köbis

Berlin, Berlin

Nils Köbis is a postdoc at the Max Planck Institute for Human Development (Center for Humans & Machines), where he conducts research on "(Un)ethical Behavior of Humans and Machines". Almost daily, we face choices between following (ethical) rules or disregarding them for our own benefit. When do people break such rules? What influence does AI have on our ethical behavior? And how do intelligent algorithms themselves deal with such ethical dilemmas? In search of scientific answers, Nils applies his experience in social psychology (PhD, VU Amsterdam) and behavioral economics (postdoc, UvA Amsterdam) to the behavioral study of humans and machines. He aims to help establish the new research field of behavioral ethics of AI. Nils places great value on communicating his findings not only to the scientific community, but also to the broader public via his science podcast KickbackGAP.
Humanities and Social Sciences

Lajla Fetic

Berlin, Berlin

Lajla Fetic works, speaks, and writes as a freelance researcher and consultant on the ethical aspects of artificial intelligence. In 2021, she was named one of the 100 Brilliant Women in AI Ethics. She is primarily concerned with the question of how technology can contribute to strengthening the common good so that truly everyone can benefit. To bridge science and practice, she has co-authored several guides on implementing AI ethics and was also involved in the creation of an AI ethics label as part of the AI Ethics Impact Group. She is passionate about sharing her knowledge as a lecturer to aspiring digital experts and leaders while pursuing a master's program at the Hertie School and Sciences Po. Prior to that, she was a project manager for the Bertelsmann Stiftung, responsible for the development of ethical rules for AI in companies and the public sector in the project "Ethics of Algorithms", and she continues to advise the team as an external expert.

There are well over 200 AI ethics guidelines worldwide, and the number keeps growing. Have we thereby already covered all the important aspects of the ethical design of algorithmic systems? Is there agreement on how to implement them in practice? There is indeed still a long way to go! Abstract ethical debates and guidelines need concretization; companies and the public sector need implementation aids for their respective use cases. In my work, I am committed to translating theory debates into practice, and with the #AlgoRules project, I work precisely at this intersection. What do we mean when we talk about "fair" and "trustworthy" AI? How can we make this measurable? How should processes be designed so that developers and users alike can react to (unintended) biases? This requires a range of solutions, which I am developing together with experts and affected stakeholders. www.lajlafetic.de

Markus Ulbricht

Dresden/Leipzig, Sachsen

My name is Markus Ulbricht and I am currently working at the Center for Scalable Data Analytics and Artificial Intelligence ("ScaDS.AI Dresden/Leipzig") in Leipzig. After finishing my studies in 2015, I started as a PhD student in the graduate school "Quantitative Logics and Automata", where I began my research on the formal and logical foundations of artificial intelligence. In July 2019 I defended my PhD thesis "Understanding Inconsistency - A Contribution to the Field of Non-monotonic Reasoning", which received an Honorable Mention at the EurAI Artificial Intelligence Dissertation Award 2019. Before joining ScaDS.AI, I was also employed in several international research projects. Currently, my main research concerns formal methods of argumentation. At ScaDS.AI, I put special emphasis on their role in explainable AI. A recent paper of ours received the Ray Reiter Best Paper Award at KR 2020, one of the most important conferences in our research area.

My main research is about computational models of argumentation. This research area is concerned with modeling arguments and the way they interact with each other, as well as with the evaluation of conflicting scenarios. This also includes situations where a user has to make a complex decision, with various possibilities and a hardly manageable number of aspects to take into account. Our goal is to investigate possible applications of formal models of argumentation for explainability in AI, i.e. making AI systems and their decisions explainable to the user, which is one of the key challenges in AI research today. Due to their inherently explanatory nature and clear structure, formal models of argumentation have great potential to contribute to this important line of research. Our work is dedicated to exploiting this potential.
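A minimal sketch of the kind of formal model referred to above: computing the grounded extension of a Dung-style abstract argumentation framework, i.e. iteratively accepting arguments whose attackers are all defeated by already-accepted arguments. The toy framework is invented for illustration.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.

    arguments: set of argument names.
    attacks: set of (attacker, target) pairs.
    Repeatedly accepts every argument all of whose attackers are
    already defeated (unattacked arguments are accepted first).
    """
    accepted = set()
    defeated = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:       # all attackers defeated
                accepted.add(a)
                # everything attacked by an accepted argument is defeated
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# Toy framework: a attacks b, b attacks c.
# a is unattacked -> accepted; b is defeated; c is defended by a.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
# {'a', 'c'}
```

The accepted set, together with the attack graph, is itself a human-readable explanation of why each conclusion stands, which is what makes these models attractive for explainable AI.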
Engineering and Technology

Pascal Klink

Darmstadt, Hessen

How can we develop systems that can use already-acquired knowledge in new situations? My name is Pascal Klink and I have been working on this question for more than one and a half years as part of my PhD studies at TU Darmstadt. Before that, I completed a B.Sc. in Computer Science and an M.Sc. in Autonomous Systems at TU Darmstadt, with a semester abroad at UBC Vancouver.

Over the past years, research results in machine learning have impressively demonstrated that we can develop virtual and physical systems that learn tasks autonomously. Nevertheless, there are hurdles that impede the widespread use of such systems: learning a task is often time-consuming, and what is learned is not reusable. For example, a robot that has learned to assemble a workpiece from basic components must be "relearned" from scratch for a similar, non-identical workpiece. We humans function differently: when learning new skills, we build on what we know, and as a result we can solve more complex tasks as we gain experience. I would like to realize this property in AI systems, with a special focus on robotics. In this way, AI can be used by more companies in a future working world, as systems can be adapted to their tasks more easily. Realizing the aforementioned knowledge reuse and transfer in AI systems raises many challenging research questions across a variety of research domains. I particularly focus on reinforcement learning, transfer learning, and Bayesian inference.

Shailza Jolly

Kaiserslautern, Germany

Shailza Jolly is a second-year Ph.D. student, advised by Prof. Dr. Andreas Dengel at TU Kaiserslautern in Germany, and works as a research assistant at the German Research Center for Artificial Intelligence (DFKI). She is primarily interested in developing machine learning methods for building low-resource natural language generation and understanding systems. Presently, she is working on scoring-based NLG methods in collaboration with Prof. Mou from the University of Alberta, Canada. Her other research interests include vision and language systems, interpretability, and conversational AI. She completed her master's in computer science at TU Kaiserslautern and spent a semester abroad at Kyushu University in Japan, where her work "How do Convolution Neural Networks Learn Design?" won the best student paper award at ICPR 2018. She has published her work at venues such as EMNLP and COLING. During her graduate studies, she interned at SAP Machine Learning Research (Berlin, Germany) and Amazon Alexa (Aachen, Germany). Recently, she was awarded an STSM Grant under the Multi3Generation COST action to conduct research on generating fact-checking explanations in low-resource settings in collaboration with Prof. Augenstein at the University of Copenhagen, Denmark.

Is it possible to have human-like conversations with chatbots by training them on a handful of samples? Can small businesses and startups build robust and interpretable NLP systems without extensive computing infrastructure and large datasets?

Stefan Seegerer

Berlin Bayern

Stefan Seegerer is a Ph.D. student in the computing education research group at FU Berlin (until 2020: FAU Erlangen-Nürnberg). His work focuses on artificial intelligence as a topic for schools and education and the question of what everyone should know about AI. To this end, he identifies underlying ideas and principles of AI in his research. Only an understanding of these ideas and principles allows learners to shape the digital world and discuss the impact, possibilities, and limitations of AI. At the same time, he is researching ways to teach artificial intelligence and is looking for new, creative ways to make AI accessible to students through tangible games, visualizations, or explanations. His work includes various OER teaching materials, such as AI Unplugged, a collection of activities for playful engagement with AI, or the framework SnAIp based on the block-based programming language Snap! allowing learners to actively design their own ML artifacts. He regularly encourages teachers to teach AI in workshops and talks, thus contributing to bringing artificial intelligence into schools and making it accessible to future generations.

How do we best teach AI? How do we reach every student and prepare them for the years to come?

Natur- und Lebenswissenschaften

Stefanie Warnat-Herresthal

Bonn Nordrhein-Westfalen

My name is Stefanie Warnat-Herresthal and I am a PhD student at the Life and Medical Sciences Institute (LIMES) at the University of Bonn and at the Department of Systems Medicine at the German Center for Neurodegenerative Diseases (DZNE) Bonn. My research focuses on applying new AI-based methods to clinical transcriptomics data to establish classifiers for the diagnosis of diseases such as acute myeloid leukemia, tuberculosis, or COVID-19. Currently I am working on “swarm learning”, a new, decentralized machine-learning framework which enables collaborative learning across different clinical sites without the need for data sharing.

Prior to that, I studied biology (B.Sc., 2014) and life and medical sciences (M.Sc., 2016) at the Rheinische Friedrich-Wilhelms University of Bonn. Additionally, I studied philosophy (M.A., 2013) at the Munich School of Philosophy, with my research focus being epistemology, philosophy of science, and ethics. From 2008 to 2012 I worked at the Institute Technology – Theology – Natural Sciences at the Ludwig-Maximilians University of Munich, where I engaged in the interdisciplinary dialogue between the natural sciences and the humanities in various projects.

My research lies at the intersection of immunology, genomics, and computer science. At the moment I am working on “swarm learning” (SL), an approach that enables collaborative AI-based learning at different sites without any data having to be shared. The participants of the swarm pass the parameters of their local models to a secure, blockchain-based network. There, the parameters are condensed in a self-organized manner and returned to the participants, who thereby benefit from the knowledge of the entire network. This is highly interesting for applications in medicine, where sensitive medical data are generated at different locations but, due to strict legal requirements, may not be shared, let alone uploaded to a central cloud. We have already applied SL successfully to blood transcriptome data from patients with leukemia, tuberculosis, and COVID-19, and we expect many further fields of application.
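The parameter-merging step at the heart of this idea can be sketched in a few lines. The following is a deliberately simplified stand-in, assuming plain coordinate-wise averaging and omitting the blockchain layer and the actual model training; the function name is illustrative, not part of the real Swarm Learning framework:

```python
def merge_parameters(local_params):
    """Condense the sites' local model parameters by coordinate-wise
    averaging. Only these parameter vectors are exchanged between
    sites; the raw patient data never leave the individual clinics."""
    n_sites = len(local_params)
    return [sum(ws) / n_sites for ws in zip(*local_params)]

# Three clinical sites each trained a tiny model on their own private
# data and report only their local weights (illustrative numbers).
site_weights = [
    [1.0, 2.0],
    [3.0, 4.0],
    [2.0, 3.0],
]

merged = merge_parameters(site_weights)
print(merged)  # [2.0, 3.0]
# Every site continues training from the merged weights and thereby
# benefits from the knowledge of the entire network.
```

In each communication round, the sites would alternate between local training on private data and this merging step, so the shared model improves without any data ever being centralized.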


Rosemary Lee

Copenhagen, Denmark

Rosemary Lee is a practicing artist and media studies researcher. She recently completed her PhD at the IT University of Copenhagen, examining the influence machine learning has on notions of the image. Through practice-led, interdisciplinary research, Lee’s artistic and theoretical investigations critically engage with historical tendencies in discourse surrounding technology which continue to shape current perspectives. Her work has been disseminated internationally in art and research contexts related to AI, including the Artificial Creativity Virtual Conference at Malmö University, the Dark Eden Transdisciplinary Imaging Conference hosted by UNSW Sydney, and the transmediale festival for art and digital culture at the Haus der Kulturen der Welt, Berlin. Lee’s recent exhibitions include Reprogramming Earth (Neme, Limassol, CY), Perpetual Interpreter (LOKALE, Copenhagen, DK), SCREENSHOTS (Galleri Image, Aarhus, DK), Ubiquitous Futures (CATCH, Helsingør, DK) and machines will watch us die (The Holden Gallery, Manchester, UK).

How does the use of artificial intelligence in art expand upon historical notions concerning the role of technology in visual media?

What can be gained from considering algorithmically produced images in terms of epistemology instead of ontology, as they have primarily been treated?

Can algorithmic methods of image production be thereby understood as ways of knowing about the world, rather than ways of being in the world?

Geistes- und Sozialwissenschaften

Anna-Sophie Ulfert

Frankfurt am Main Hessen At the Chair of Psychology at Goethe University Frankfurt, Anna-Sophie Ulfert researches how applications of artificial intelligence (AI) are used in the work context. Her investigations focus on the role of trust, transparency, and understandability in collaborating with AI systems. Her research aims at actionable recommendations for organizations as well as promoting and optimizing the accessibility and understandability of AI applications in everyday life. To this end, she has initiated several international and multidisciplinary research projects and currently collaborates, for example, with research groups in the Netherlands and in Israel. Beyond that, particularly in her teaching, she is committed to making the topic of AI accessible to students of different disciplines and to demystifying it. The goal is not only a thematic understanding but also an interdisciplinary discourse that sensitizes students to the relevant questions of the future. To make this possible, she has developed a transdisciplinary curriculum on AI for psychology students at Goethe University. In my research and teaching in psychology, I investigate how AI is changing the world of work, how people adapt to these changes, and how AI methods can be used in psychological research. I focus in particular on how individual factors influence the way people deal with AI (e.g., the influence of expertise on trust in technology), how AI systems can be designed to foster interaction (e.g., understandability), and how AI systems change the world of work (e.g., human-agent teaming). I also study the use of agent-based modeling in team research.
In my teaching as well, I am motivated to bring the disciplines of computer science and psychology closer together in order to develop a “common language” between the fields (e.g., by juxtaposing concepts such as attention or intelligence).
Natur- und Lebenswissenschaften

Viktor Zaverkin

Stuttgart Baden-Württemberg My name is Viktor Zaverkin and since March 2019 I have been a Ph.D. student at the University of Stuttgart with Prof. Johannes Kästner. At school, I developed a passion for math and the natural sciences, especially physics and chemistry. In October 2014 this passion led me to the University of Stuttgart, where I studied materials science, an interdisciplinary subject comprising both physics and chemistry. During my undergraduate studies, I became interested in theoretical physics and chemistry, which turned my world around. I graduated in February 2019 from the Institute for Theoretical Chemistry, where I worked on instanton theory, the aim of which is the description of the quantum-mechanical tunneling effect in chemical reactions. The work in the group of Prof. Johannes Kästner awakened my interest in molecular machine learning, which is the subject of my current work. In 2019 I was awarded the Artur Fischer Prize for an outstanding graduation, and since 2020 I have held a scholarship from the “Studienstiftung des deutschen Volkes”. The chief goal of my research project is the application of machine learning methods to construct potential energy surfaces with the aim of modeling chemical reactions. The main prerequisite for a proper ML model for molecules and solids is the incorporation of multiple symmetries: the invariance of a chemical system with respect to translation, reflection, or rotation of the whole molecule, and to permutation of atoms with the same nuclear charge. This can be achieved by a proper coordinate transformation, which I developed by exploiting the mathematical properties of Gaussian-type orbitals [JCTC 2020, 16, 8, 5410–5421]. The efficiency of ML methods depends on the quality and expressiveness of the training data. I am using the method of active learning, derived in the framework of optimal experimental design, to construct highly informative chemical data sets [submitted to MLST].
The specific applications of the developed methods range from the description of nitrogen atom dynamics on top of amorphous solid water at astrochemical conditions [MNRAS 2020, 499, 1, 1373–1384] to catalytic research questions, e.g., the simulation of covalent organic frameworks.
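The symmetry requirements described above can be made concrete with a toy descriptor. The sketch below uses sorted interatomic distances, which are automatically invariant to translation, rotation, reflection, and atom permutation; it is only an illustration of the constraint, not the Gaussian-orbital-based transformation developed in the cited work:

```python
import math
from itertools import combinations

def distance_descriptor(coords):
    """Sorted list of all interatomic distances. Distances are unchanged
    by translation, rotation, and reflection of the whole molecule, and
    sorting removes any dependence on the order (permutation) of atoms."""
    return sorted(math.dist(a, b) for a, b in combinations(coords, 2))

# A water-like geometry (arbitrary units)...
mol = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]

# ...permuted and translated: the descriptor stays (numerically) the same.
permuted = [mol[2], mol[0], mol[1]]
shifted = [(x + 1.0, y - 2.0, z + 0.5) for x, y, z in mol]

assert distance_descriptor(mol) == distance_descriptor(permuted)
assert all(
    math.isclose(a, b)
    for a, b in zip(distance_descriptor(mol), distance_descriptor(shifted))
)
```

A real descriptor additionally has to distinguish chemical elements (permutation invariance only holds between atoms of the same nuclear charge) and must stay smooth and expressive enough to regress potential energy surfaces, which is what motivates more sophisticated representations.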