Università della Svizzera italiana


Beyond data: when programmers' values influence artificial intelligence

A recent scientific article by Peter Seele, Full Professor at the Faculty of Communication, Culture and Society of Università della Svizzera italiana, and Ludovico Giacomo Conti, a PhD student at the same faculty, examines the issue of bias in artificial intelligence from a largely unexplored perspective. Published in the journal Humanities & Social Sciences Communications (Nature Portfolio), the study analyses how the values and assumptions of those who program AI systems can affect their functioning, even in the absence of biased training data.

Starting from a critical reflection on the current debate, Seele and Conti highlight a level of arbitrariness that is often overlooked in the development of artificial intelligence systems. While much of the literature focuses on biases arising from the datasets used to train algorithms, the study proposes shifting the focus to the people who design and write those algorithms.

According to the authors, programmers, consciously or unconsciously, embed values, assumptions, and priorities into their code that reflect their cultural, social, and professional contexts. This phenomenon, which the authors call "second-level arbitrariness", shapes the behaviour of AI systems, influencing their output even when the source data appears neutral.

To address this problem, the article proposes transferring to artificial intelligence a methodology well established in the social sciences: reflexivity. Specifically, the authors suggest introducing Algorithm Designers' Reflexivity Statements (ADRS): internal, confidential written reflections in which programmers critically examine their assumptions, design choices, and potential sources of bias.

These internal statements would be accompanied by an AI Positionality Statement (AIPS), a public summary aimed at end users. This tool is intended to make transparent the residual and structural biases that can influence an algorithm's results, thereby enabling a more informed and contextualised reading of AI outputs.

Together, these proposals aim to strengthen accountability and transparency in the development of artificial intelligence. They offer a conceptual framework that invites a re-evaluation of the human role in designing algorithmic systems and the adoption of ethical practices already tested in other disciplines.

The full article, together with an in-depth analysis of the theoretical concepts and practical implications of the proposal, is available online at: https://rdcu.be/eU5iS
