02/23/2026 | News release | Distributed by Public on 02/23/2026 04:15
On Monday, 2 March, Prof. Deborah Sulem, the new Assistant Professor at the Faculty of Informatics of the Università della Svizzera italiana, will deliver her inaugural lecture at the Auditorium of the West Campus Lugano. Titled "Data Science: the good, the bad and the wild", the lecture will offer a critical and nuanced look at the evolution of data science, its increasingly pervasive impact on daily life, and the resulting scientific and ethical responsibilities.
Ahead of the event, we met with Prof. Sulem to discuss her academic career and the main themes of her research. Below is the interview, conducted by the USI Institutional Communication Service.
Prof. Sulem, the title of your lecture, "Data Science: the good, the bad and the wild", suggests a complex and multifaceted view of the discipline: where did this choice come from, and what do you intend to focus on most?
"Data Science (DS) is the core foundation of Artificial Intelligence (AI) technologies, and we have witnessed in recent years that its impact is ambivalent. For academic researchers in this field, it is fascinating to see such rapid progress and technological transfer from theoretical research in data analysis and machine learning into large-scale industrial applications. In fact, DS algorithms are often open source, which means that they can be implemented by most people in countless areas - this is what is meant by 'in the wild'. We also have daily evidence that DS algorithms can be biased and misused - consciously or not. It is our responsibility as researchers to highlight these potential negative aspects and work towards removing them. While there are many ways to address this issue, my research focuses on statistical fairness, a topic that I will discuss in my lecture".
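To give a concrete sense of what a statistical fairness criterion can look like in practice, here is a minimal sketch of one common notion, demographic parity (the interview does not specify which criteria Prof. Sulem's research uses; the function, data, and group labels below are invented for illustration):

```python
# Hypothetical illustration of one statistical fairness criterion:
# demographic parity. All data and group labels are made up.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups "A" and "B".

    predictions: list of 0/1 model outputs
    groups: list of group labels ("A" or "B"), same length
    A value near 0 means both groups receive positive predictions
    at similar rates; a large value signals a disparity worth examining.
    """
    rate = {}
    for g in ("A", "B"):
        preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate["A"] - rate["B"])

# A toy model that approves 75% of group A but only 25% of group B
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A single number like this is only a starting point: real fairness audits compare several criteria, since they can conflict with one another.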
Data science is evolving at an impressive speed and increasingly involves more areas of everyday life: what transformations do you consider most relevant today, even outside the scientific community?
"It is certain that the use of Large Language Models (LLMs) has brought major transformations to our work and private lives, from learning to managing administrative tasks. However, in my opinion, the revolution in Computer Vision in the 2010s (often called the first AI revolution) has had an even deeper impact. It has reached most scientific and engineering fields, in particular robotics, the medical sector, and security systems. For instance, algorithms that can accurately identify objects in images have improved autonomous vehicles, medical diagnostics (radiology), and security checks (face recognition), amongst many other applications. The majority of these transformations are positive, I believe, but that does not mean this technology is fully mature. There is still a significant gap between practice and the theoretical understanding of these algorithms, which leaves many interesting research problems to solve!"
A central part of your research concerns biases in predictive algorithms applied to data concerning people: in your opinion, what are the most underestimated risks, and what tools do we have to prevent them?
"A danger with predictive algorithms is applying them outside the context for which they were designed, without realising that their predictions are no longer reliable and can be biased. A large majority of algorithms are available on open-source platforms or via code packages and can be implemented by anyone (now even more so with the help of LLMs). The risk is that those algorithms are used without extensive checks on their appropriateness, legitimacy and impact over time. Validation and testing procedures are essential for mitigating risks, but preventing bias or harmful errors that could affect certain populations requires significant research. Take, for example, a self-driving car: the number of possible problematic scenarios that could occur over several years is enormous! In fact, we generally lack methodologies for running validation checks on complex prediction models - more research problems to solve!"
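One very simple way to catch a model being used outside its original context is to compare incoming data against the training distribution before trusting predictions. The sketch below is a crude, invented example of such a drift check (the threshold, data, and function name are illustrative assumptions, not a method from Prof. Sulem's research):

```python
# Hypothetical sketch: a crude distribution-shift check before trusting
# a model outside its original context. Data and threshold are invented.
import statistics

def shift_alert(train_values, live_values, threshold=2.0):
    """Flag when live data drifts far from the training distribution.

    Returns True if the mean of the live values lies more than
    `threshold` training standard deviations from the training mean.
    """
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    z = abs(statistics.mean(live_values) - mu) / sigma
    return z > threshold

train = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
print(shift_alert(train, [10.0, 10.1, 9.9]))   # False: similar data
print(shift_alert(train, [14.5, 15.2, 14.8]))  # True: clear drift
```

Real deployments need far richer monitoring (per-feature tests, subgroup checks, behaviour over time), which is precisely the methodological gap the answer above points to.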
What role can and should universities and data scientists play today in promoting a more responsible, fair and conscious use of algorithms?
"This is a really hard problem, as the same types of issues were raised with the Internet, and this is not something that can be completely solved. In some sense, I believe there will always be a 'dark side' to algorithms. But it does not mean that it is hopeless or useless to educate people about responsible and fair use of DS algorithms. I think it is very important to raise public awareness and teach our students about potential pitfalls and biases in learning algorithms, and how to conduct a critical analysis (e.g., who would be most affected by a wrong prediction?). We also have to collectively set ethical standards and propose solutions for making algorithms as fair and reliable as possible within those standards".