Lecture series: Machines that understand?

Large Language Models and Artificial Intelligence (WiSe 2024/2025)

The aim of the lecture series is to make current developments in the field of generative artificial intelligence understandable and to stimulate an informed dialogue about the capabilities, limitations and societal relevance of these models.

Top-class international researchers are invited to present their current research to a broad university audience. In addition to technical aspects, topics include questions of fairness and responsibility in AI models, as well as the importance of AI for the broader university context, e.g. in the field of digital humanities.

Start: Thursday, October 3rd, 16:45-18:15
End: Thursday, January 30th, 16:45-18:15
Location: Hörsaal 33 (lecture hall), Universitätsring 1
Languages: German and English

Information in u:find

Program

Date Speaker / Description
October 3, 2024 Benjamin Roth
Foundations of LLMs 1
October 10, 2024 Benjamin Roth
Foundations of LLMs 2
October 17, 2024
BIG-Hörsaal, main building (Hauptgebäude)
Benjamin Roth
Foundations of LLMs 3
October 24, 2024

Inverted Classroom Discussion Alexander Koller

ChatGPT does not really understand you, does not really know anything, but is still revolutionary AI

Abstract: Large neural language models (LLMs), such as ChatGPT, have revolutionized AI. It is hard not to be impressed by the quality of text and program code they generate, and they seem to encode unprecedented amounts of knowledge about the world. At the same time, it is well known that LLMs are prone to hallucinations, often produce factually incorrect language, and the "knowledge" they encode is faulty and inconsistent.
In this talk, I will discuss some strengths and weaknesses in the ability of LLMs to understand language and reason with knowledge. I will present arguments that the way that LLMs are trained cannot, in principle, lead to human-level language understanding. I will also talk about recent work that leverages the knowledge encoded in LLMs for planning. In this way, I hope to offer a balanced picture of the situation and engage in an informed discussion with the audience.


Bio: Alexander Koller is a Professor of Computational Linguistics at Saarland University in Saarbrücken, Germany. His research interests include syntactic and semantic processing, natural language generation, dialogue systems, and the interface between language and action. He is particularly interested in neurosymbolic models that bring together principled linguistic modeling with the coverage and robustness of neural approaches. Alexander received his PhD from Saarland University and was previously a postdoc at Columbia University and the University of Edinburgh, faculty at the University of Potsdam, and Visiting Senior Research Scientist at the Allen Institute for AI.

October 31, 2024 Andreas Stephan
Practical Session
November 7, 2024

Invited Talk Terra Blevins

Breaking the Curse of Multilinguality in Language Models

Abstract: While language models (LMs) grow larger and gain new capabilities, their performance in non-English languages increasingly lags behind. This is due to the curse of multilinguality, where individual language performance suffers in models trained on many languages. In this talk, I examine how current language models do and don't capture different languages by uncovering how the curse of multilinguality develops during multilingual model training. Building on these insights, I then present two lines of work on breaking this curse. I first discuss best practices for training targeted multilingual models that specialize in a single language family. Then, I present a new method, Multilingual Expert Language Models (X-ELM), which expands on the idea of targeted multilingual training and facilitates more equitable massively multilingual language modeling. We show that X-ELMs provide many performance and efficiency benefits over existing multilingual modeling approaches, indicating their potential to democratize multilingual NLP.

Bio: Terra Blevins is a postdoctoral researcher at the University of Vienna and an incoming assistant professor at Northeastern University. She holds a Ph.D. in Computer Science from the University of Washington, where she was advised by Luke Zettlemoyer and worked as a visiting researcher at Facebook AI Research (FAIR). Her research focuses on multilingual NLP and analyzing the linguistic knowledge of language models, with the overarching aim of using analysis insights to build better-performing and more equitable multilingual systems.

November 14, 2024

Invited Talk Žiga Škorjanc

Is there a "Brussels effect" in European AI-regulation. A legal analysis of the AI-act and its impact on European innovation.

Abstract: The lecture will provide an overview of how the European legal system tries to regulate AI in general and LLMs in particular.

Bio: Žiga Škorjanc is a postdoctoral researcher (“Habilitand”) at the Department of Innovation and Digitalisation in Law, University of Vienna, Managing Director of lexICT GmbH Austria, Member of the European Union Intellectual Property Office (EUIPO) Observatory Legal Expert Group, and Advisory Board Member of the Digital Asset Association Austria (DAAA). Previously, he worked at a law firm in Vienna (bar exam, Vienna Regional Court of Appeals). He specialises in IT, IP, data protection, and data law, as well as the use of technological innovations in the financial sector.

November 21, 2024

Invited Talk Dagmar Gromann

Embodiment of Language and Neural Language Models

Abstract: Cognitive linguistics has provided compelling evidence that semantic structure in natural language reflects conceptual structure arising from our embodied experience in the world. Yet Large Language Models can handle sophisticated natural language understanding and generation tasks while being trained solely on text, that is, their representations are not grounded in embodied, sensorimotor experience. This raises the questions of how their understanding of semantic meaning differs from that of human beings and to what extent language provides access to physically grounded cognitive structures. For instance, conceptual metaphors that project knowledge structures from the physical world onto an abstract domain are omnipresent in natural language. Within the context of embodied cognition, this talk investigates the ability of neural language models to explicate the metaphorical projection that is believed to ground natural language.

Bio: Dagmar Gromann (http://dagmargromann.com/) is an Associate Professor at the Centre for Translation Studies of the University of Vienna. Prior to that, she worked as a post-doc at IIIA-CSIC in Barcelona and TU Dresden. Her research focuses on neural information and knowledge extraction, with a particular interest in domain-specific terminology and cognitive structures, such as conceptual metaphors and image schemas. Additionally, she is interested in the socio-technical implications of language technology, such as gender-fair machine translation. She is on the editorial boards of the Semantic Web and Neurosymbolic Artificial Intelligence journals as well as the Journal of Applied Ontologies. She has also spearheaded the development of the joint master's program Multilingual Technologies with FH Campus Wien, which strongly focuses on computational linguistics.

November 28, 2024

Invited Talk Michael Wiegand

A Roadmap to Implicitly Abusive Language Detection

Abstract: Abusive language detection is an emerging field in natural language processing that aims to automatically identify harmful or offensive language in text. This task has received considerable attention recently, yet the success of automatic detection systems remains limited. In particular, the detection of implicitly abusive language, i.e. abusive content that is not conveyed through overtly abusive words (e.g. "dumbass" or "scum"), remains a significant challenge.
In this talk, I will explain why existing datasets hinder the effective detection of implicit abuse and what needs to change in the design of these datasets. I will advocate for a divide-and-conquer strategy, where we categorize and address various subtypes of implicitly abusive language. Additionally, I will present a list of these subtypes and outline key research tasks and questions for future exploration in this area.


Bio: Michael Wiegand obtained his PhD at Saarland University in 2011. Until 2018, he was a postdoctoral researcher at the Department for Spoken Language Systems at Saarland University. In 2019, he served as the research group leader at the Leibniz ScienceCampus Empirical Linguistics and Computational Language Modeling, jointly associated with the Leibniz Institute for the German Language (Mannheim) and Heidelberg University. From 2020 to 2024, he held a fixed-term professorship in Computational Linguistics at the Digital Age Research Center (D!ARC) at the University of Klagenfurt. Since July 2024, he has been a Senior Scientist in the Digital Philology group at the University of Vienna, where he also coordinates teaching efforts in the field of digital humanities.

December 5, 2024

Invited Talk Timour Igamberdiev

Privacy-preserving Natural Language Processing

Abstract: In today's world, the protection of privacy is increasingly gaining attention, not only among the general public, but also within the fields of machine learning and natural language processing (NLP). An established gold standard for providing a guarantee of privacy protection to all individuals in a dataset is the framework of differential privacy (DP). Intuitively, differential privacy provides a formal theoretical guarantee that the contribution of any individual to some analysis on a dataset is bounded. In other words, no single individual can influence this analysis 'too much'.
In this lecture, we will first discuss why privacy is important and the consequences of not using privacy-preserving methods with sensitive data. We will then go through key theoretical background on differential privacy, covering fundamental concepts such as the randomized response technique, formal and informal definitions of differential privacy, and how to achieve a DP guarantee for an algorithm. Finally, we will delve into applications of DP in machine learning and NLP, in particular the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm.
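To make the randomized response idea concrete, here is a minimal Python sketch (an illustration, not part of the lecture materials): each respondent answers a sensitive yes/no question truthfully only if a first coin flip says so and otherwise answers at random, so no single answer can be pinned down, yet the true population proportion can still be estimated from the noisy reports.

import random

def randomized_response(true_answer: bool) -> bool:
    # First coin: with probability 1/2, answer honestly.
    if random.random() < 0.5:
        return true_answer
    # Second coin: otherwise, answer yes/no uniformly at random.
    return random.random() < 0.5

def estimate_true_proportion(reports):
    # Observed "yes" rate r satisfies r = 0.25 + 0.5 * true_rate,
    # so the true rate can be estimated as 2*r - 0.5.
    r = sum(reports) / len(reports)
    return 2 * r - 0.5

# Hypothetical survey: 30% of 10,000 respondents truly answer "yes".
truths = [random.random() < 0.3 for _ in range(10_000)]
reports = [randomized_response(t) for t in truths]
print(f"Estimated 'yes' proportion: {estimate_true_proportion(reports):.3f}")

In this mechanism, a single respondent's true answer changes the probability of any reported value by at most a factor of 3, i.e. it satisfies differential privacy with epsilon = ln(3); DP-SGD carries the same principle over to model training by clipping and noising per-example gradients.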

Bio: Timour is a postdoctoral researcher in the Natural Language Processing research group led by Prof. Benjamin Roth, part of the Data Mining and Machine Learning group of the Faculty of Computer Science, University of Vienna. He completed his Ph.D. in Computer Science at the Technical University of Darmstadt in 2023 under the supervision of Prof. Ivan Habernal and continued to work there as a postdoctoral researcher until late 2024. His research expertise is in privacy-preserving natural language processing, with a focus on differential privacy for NLP systems and textual data. He has also worked on graph-based deep learning and figurative language processing.

December 12, 2024

Inverted Classroom Discussion Asia Biega

Data Protection in Data-Driven Systems

Abstract: Modern AI systems are characterized by extensive personal data collection, despite increasing societal costs of such practices. To prevent harms, data protection regulations specify several principles for respectfully processing user data, such as purpose limitation, data minimization, or consent. Yet, practical implementations of these principles leave much to be desired. This talk will delve into the computational and human factors that contribute to such lax implementations, and examine potential improvements.

Bio: Asia Biega is a computer scientist and a tenure-track faculty member at the Max Planck Institute for Security and Privacy (MPI-SP), where she leads the interdisciplinary Responsible Computing group. Before joining MPI-SP, Asia was a postdoctoral researcher at Microsoft Research Montréal, and before that, she completed her Ph.D. in Computer Science at the MPI for Informatics and the MPI for Software Systems. In her research, Asia studies questions at the intersection of computing & society, particularly in the context of data-driven systems. She often collaborates with scholars in law, philosophy, and social sciences, draws from her industry experience at Microsoft and Google, and shares her expertise with policymakers and data protection authorities. She is currently an invited external expert for the European Commission and will be the General Chair of the ACM FAccT 2025 conference.

January 9, 2025

Invited Talk Paul Röttger

A Brief Introduction to AI Safety
 
Abstract: AI systems such as ChatGPT are now being used by millions of people across the world. In this lecture, I will give a brief introduction to the field of AI safety, which works to ensure that AI is safe to use today and will continue to be safe as it grows more capable. I will outline what it means for an AI system to be safe, how we can test for safety, and how we can improve it. I will discuss open challenges in AI safety today as well as risks that may materialise in the future. The lecture will include a small participatory component, so I encourage the audience to bring their laptops.
 
Bio: Paul is a postdoctoral researcher in Dirk Hovy's MilaNLP Lab at Bocconi University, working on evaluating and improving the alignment and safety of large language models (LLMs), as well as measuring their societal impacts. Before coming to Milan in June 2023, Paul completed his PhD at the University of Oxford, where he worked on LLMs for hate speech detection. During his PhD, Paul also co-founded Rewire, a start-up building AI for content moderation, which was acquired by a larger online safety company in March 2023.

January 16, 2025 Invited Talk Mateusz Malinowski
January 23, 2025 Q&A Session
January 30, 2025 Exam