

Autonomy, persuasion, and manipulation in the digital sphere

Emerging information technologies such as large language models and opaque recommendation systems pose challenges to public values such as democracy, privacy, and autonomy. In this theme, researchers from language studies and linguistics, communication science, computer science, law, and philosophy study the current online information ecosystem. On a fundamental level, we explore how emerging digital technologies and online practices can influence Internet users’ behavior, beliefs, and choices, focusing on the steering qualities of language, interaction design, algorithms, and AI. On an applied level, we design and evaluate interventions that aim to foster citizens’ digital literacy and autonomy (e.g., helping them reflect on the accuracy and authenticity of online information, or supporting deliberate decision-making). These interventions explore, for example, novel approaches to countering misinformation, as well as alternative information ecosystems such as social networks based on public values. Sub-themes on our research agenda include filter bubbles, decision-support systems, recommender systems, micro-targeting, AI transparency, generative AI, trustworthy AI, AI literacy, deceptive design practices, nudging, and fake news.

Technology at work

“Technology at Work” is an iHub/Radboud interdisciplinary Special Interest Group that examines the ways in which digital technologies are used in, and impact, a wide range of professional, institutional, and informal environments. It focuses on how values are affected by the actual use of digital technology in practice, and thus complements iHub’s reflective approaches to digitalization and society and to value-driven technology design.

We speak of technology “at” work because we are interested in human practices in the context of technologies, and in how these practices affect or transform work, expertise, and social life. The group consists of scholars with various disciplinary backgrounds, from linguists and social scientists to management scientists, who advance empirical approaches, both qualitative (observation, interaction analysis, interviews) and quantitative (surveys, experiments, implicit association tests). Topics of research include, for example, how professionals deal with digital records and data, how the digital delivery of treatment or education affects relationships, roles, and responsibilities, and how digital communication affects work-life balance.

Technology at Work aligns with, and forwards the agenda of, the following Sector Plans:

  • Humanities > Theme Humane AI
  • Social Sciences > Theme The Human Factor in New Technologies
  • Social Sciences and Humanities cross-cutting: Prosperity, Participation and Citizenship in a Digital World
[Illustration: a person, drawn in a warm, cartoon-like green style, looks up at a hazard symbol: a bright orange square tilted 45 degrees containing an exclamation mark made of binary 1s and 0s. To the right, a small node-and-edge figure with two antennae stands with its arms and legs outstretched, facing right.]


Automated Decision-Making and Policy

Quite a few researchers at the iHub are interested in policy related to automated decision-making (ADM). Automated decision-making can be based on AI (artificial intelligence) or on simpler computer systems.

ADM is used in many sectors. For instance, insurers could use ADM to set premiums for individual consumers. The police can use ADM for predictive policing. At borders, ADM can be used for risk profiling of travellers, or to check the validity of visas. Employers could use ADM to select the best candidates from job applications. Many governments use ADM to detect fraud in the welfare system.

ADM can be efficient and can improve our society in many ways. But it also has drawbacks. For instance, ADM can have discriminatory effects. Non-discrimination statutes prohibit some types of discrimination, for instance when it harms people of certain ethnicities, genders, or similar protected characteristics. Other types of ADM-driven differentiation may be unfair even though they remain outside the scope of non-discrimination law, for example because they do not harm groups with protected characteristics.

ADM also has drawbacks other than discrimination risks. For example, since many ADM systems are based on analysing data, organisations that develop or use ADM systems have an incentive to collect large amounts of data, including personal data.

iHub researchers who are interested in ADM come from many disciplines, including law, philosophy, computer science, anthropology, economics, and social sciences.

Interdisciplinary research hub on digitalization and society