Giulia Gaddi

 
Doctoral student
Médier avec l’environnement en zone de recherche scientifique expérimentale : humains et défis climatiques autour du Large Hadron Collider (Pays de Gex, 01)
Thesis supervisor: Sophie Houdart
...

Research areas

France, Switzerland
Organisation Européenne pour la Recherche Nucléaire (CERN)
Innovation, ecology, technology transfer, anthropology of science

Academic and professional background

Giulia Gaddi is preparing a doctoral thesis on the work of the physicists, engineers and technicians of the Organisation Européenne pour la Recherche Nucléaire (CERN). She studies the transfer of innovative technologies from particle physics, initially designed for fundamental research, towards social and ecological uses.

 

(2022-2023) Alternate for the doctoral students' representatives
Gaddi, G., A. Barbier and V. Manceron, 2024, « Ethnographier le proche (France, Europe) », Atelier des doctorants : Méthodes et métiers, LESC, Université Paris Nanterre.
Gaddi, G., 2024, « Médier avec l’environnement en zone de recherche scientifique expérimentale : humains et défis climatiques autour du Large Hadron Collider (CERN, Meyrin (Suisse)/Pays de Gex (01)) », transversal seminar of the ED 395 - Espaces, Temps, Cultures doctoral school, Université Paris Nanterre.
Valtonen, L., O. Werner, J. Poulaillon, D. Zimmermann and G. Gaddi, 2024, « Opening the Box of PandorAI: The relation between AI usage in interdisciplinary student innovation programs and collaboration with external expertise », Open Innovation in Science 2024, London, United Kingdom, online: https://hal.science/hal-04778177.
The impact of artificial intelligence (AI) on open science and innovation is an emerging area of research. Recent results suggest that AI can improve the generation of novel interdisciplinary research ideas. This perceived benefit arises because researchers must reveal their own embeddedness in their field's assumptions and language (Beck, Poetz, and Sauermann, 2022). In other words, the benefits of AI as a technology stem not from anything AI does but from what it requires of researchers. This is often the challenging part of AI problem solving: people must formulate the problem in a way that AI can process (Boden, 2016, p. 29). However, Beck, Poetz, and Sauermann (2022) also noted a drawback to these AI opportunities: AI usage can discourage communication between researchers, owing to the assumption that AI already offers all the knowledge needed to exploit interdisciplinary research avenues. In other cases, AI usage has been shown to reduce communication and information sharing, posing a different type of threat to openness and collaboration. For instance, when using AI to choose between ideas, people constrain the information they offer about their reasoning (Jain et al., 2023; Valtonen and Mäkinen, 2022). In addition to losing such information, overreliance on AI poses risks to unique human knowledge and encourages passivity in decision-making (Fügener et al., 2022; Keding and Meissner, 2021).

Such risks have been observed in CERN IdeaSquare student project courses, particularly for ambiguous tasks requiring additional information: students broadly acquire information from AI, neglecting human expertise and experience. Moreover, teams have offered little explanation for the decisions made throughout their projects when presenting their ideas. We aim to study the mechanisms behind these issues and to develop guidelines for their mitigation. We will begin by looking at teams using AI for idea generation and selection, collect their prompts, and create a proximity metric to assess how similar the ideas in the groups' final presentations are to those they generated with AI. We will collect qualitative descriptions of AI usage throughout the projects and of the amount of external collaboration, and compare AI usage and collaboration with the success of the courses' intended innovation learning outcomes. These results will be compared with those of groups not allowed to use AI. With these exploratory results, we can begin to see how AI usage links to the amount of external collaboration and to students' intended innovation learning outcomes.

Subsequent investigations will delve into the relation between varying approaches to AI and its users' cognitions and behaviours, such as trust, likability, and commitment to ideas, as well as the effects of these relations on the learning outcomes and the degree of external collaboration. Distinct contexts, such as concrete versus abstract problems, groups with AI development experience versus none, and using AI individually versus in teams, will be studied for differences. These insights will inform the creation of guidelines for AI usage that foster positive learning outcomes for innovation. Master's-level students' innovation habits will carry over to the professional realm; nurturing openness and collaboration in innovation education is therefore imperative for shaping the future of science and innovation.
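The abstract does not say how the proximity metric between AI-generated ideas and the ideas in the final presentations is computed; the following is only a minimal sketch of one plausible implementation, assuming a TF-IDF bag-of-words representation and cosine similarity. All function and variable names are hypothetical and not taken from the study.

# Illustrative sketch of a text-based proximity metric (not the study's actual metric).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def idea_proximity(ai_ideas, final_ideas):
    # Build one shared vocabulary so vectors from both sets are comparable.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(ai_ideas + final_ideas)
    ai_vectors = matrix[:len(ai_ideas)]
    final_vectors = matrix[len(ai_ideas):]
    # For each idea in the final presentation, find its closest AI-generated idea,
    # then average those best-match similarities (1.0 = identical wording, 0.0 = disjoint).
    similarities = cosine_similarity(final_vectors, ai_vectors)
    return float(similarities.max(axis=1).mean())

# Toy usage: the final idea closely paraphrases the first AI-generated one.
ai_ideas = ["a solar-powered sensor network for glacier monitoring",
            "a gamified recycling app for university campuses"]
final_ideas = ["a low-power solar sensor network that monitors glacier melt"]
print(idea_proximity(ai_ideas, final_ideas))

A higher score would suggest that the team's final ideas stayed close to what the AI generated; embedding-based similarity could serve equally well under the same overall design.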
Gaddi, G. and J. Poulaillon, 2024, A hackathon to reconstruct the trajectories of particles in the CMS experiment, IdeaSquare, 8 March 2024, online: https://ideasquare.cern/node/351.
The 15th Patatrack Hackathon took place at IdeaSquare, focusing on reconstruction algorithms for the CMS experiment.
Poulaillon, J. and G. Gaddi, 2024, Scouting the flow of data for the future High Luminosity LHC, IdeaSquare, 21 June 2024, online: https://ideasquare.cern/node/376.
How can large amounts of data be managed without losing the interesting ones? The selection of data on particle collisions in the LHC is critical, since it determines all the resulting analyses. This decisive selection is the job of a novel scouting system located at the very beginning of the data chain.
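As a purely illustrative sketch of the idea of selecting data at the start of the chain (not the actual CMS or LHC software), one can picture a filter that keeps full events passing a selection threshold while recording only compact summaries, rather than nothing, for the rest. Every name and threshold below is invented for the example.

# Illustrative toy sketch of a "scouting" selection path, not real detector software.
from dataclasses import dataclass

@dataclass
class Event:
    event_id: int
    energy_gev: float      # illustrative selection quantity
    raw_payload: bytes     # stand-in for the full detector readout (large)

def select(events, threshold_gev=100.0):
    """Keep full events above the threshold; record compact summaries for the rest
    so that no collision disappears entirely from downstream analyses."""
    full_events, scouting_records = [], []
    for event in events:
        if event.energy_gev >= threshold_gev:
            full_events.append(event)                 # main path: full readout kept
        else:
            scouting_records.append((event.event_id,  # scouting path: tiny summary kept
                                     event.energy_gev))
    return full_events, scouting_records

# Toy usage: two low-energy collisions survive only as lightweight summaries.
stream = [Event(1, 12.0, b"..."), Event(2, 250.0, b"..."), Event(3, 47.0, b"...")]
full, scouted = select(stream)
print(len(full), "full events,", len(scouted), "scouting summaries")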