Giulia Gaddi is preparing a doctoral thesis on the work of physicists, engineers and technicians at the European Organization for Nuclear Research (CERN). Her research focuses on the transfer of innovative technologies from particle physics, initially designed for fundamental research, toward social and ecological applications.
The impact of artificial intelligence (AI) on open science and innovation is an emerging area of research. Recent results suggest that AI can improve the generation of novel interdisciplinary research ideas (Beck, Poetz, & Sauermann, 2022). This perceived benefit arises because researchers must reveal their embeddedness in their own field's assumptions and language. In other words, the benefits of AI as a technology stem not from anything AI does but from what it requires of researchers. This is often the challenging part of AI problem solving: people must formulate the problem in a way that AI can process (Boden, 2016, p. 29). However, Beck, Poetz, and Sauermann (2022) also noted a drawback: AI usage can discourage communication between researchers, because of the assumption that AI already offers all the knowledge needed to exploit interdisciplinary research avenues. In other cases, AI usage has been shown to reduce communication and information sharing, posing a different type of threat to openness and collaboration. For instance, when using AI to choose between ideas, people constrain the information they offer about their reasoning (Jain et al., 2023; Valtonen & Mäkinen, 2022). In addition to losing such information, overreliance on AI poses risks to unique human knowledge and encourages passivity in decision-making (Fügener et al., 2022; Keding & Meissner, 2021). Such risks have been observed in CERN IdeaSquare student project courses, particularly for ambiguous tasks requiring additional information: students broadly acquire information from AI while neglecting human expertise and experience. Moreover, when presenting their ideas, teams have offered little explanation of the decisions made throughout their projects.

We aim to investigate the mechanisms behind these issues and to develop guidelines for their mitigation. We will begin by observing teams that use AI for idea generation and selection, collecting their prompts, and creating a proximity metric to assess how similar the ideas in their final presentations are to those generated with AI. We will also collect qualitative descriptions of AI usage throughout the projects and of the amount of external collaboration, and compare AI usage and collaboration against the courses' intended innovation learning outcomes. These results will be compared to those of groups not allowed to use AI. With these exploratory results, we can begin to see how AI usage relates to the amount of external collaboration and to students' intended innovation learning outcomes. Subsequent investigations will examine the relation between varying approaches to AI and users' cognitions and behaviours, such as trust, likability, and commitment to ideas, as well as the effects of these relations on learning outcomes and the degree of external collaboration. Distinct contexts, such as concrete versus abstract problems, groups with AI development experience versus none, and individual versus team use of AI, will be studied for differences. These insights will inform the creation of guidelines for AI usage that foster positive learning outcomes for innovation. Master's level students' innovation habits will carry over into the professional realm; nurturing openness and collaboration in innovation education is therefore imperative for shaping the future of science and innovation.
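The abstract above proposes a proximity metric to quantify how closely teams' final ideas track the ideas they generated with AI, without specifying how such a metric would be computed. As a purely illustrative sketch, one option is a bag-of-words comparison: represent both sets of idea descriptions with TF-IDF vectors and take their cosine similarity. The function name `idea_proximity` and the use of scikit-learn are assumptions for illustration, not the authors' stated method.

```python
# Illustrative sketch only: the abstract mentions a "proximity metric" for
# comparing final-presentation ideas with AI-generated ones, but does not
# specify one. This assumes a simple TF-IDF + cosine-similarity approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def idea_proximity(ai_ideas, final_ideas):
    """Return cosine similarities between final ideas and AI-generated ideas.

    ai_ideas, final_ideas: lists of short idea descriptions (strings).
    Output: matrix of shape (len(final_ideas), len(ai_ideas)).
    """
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit the vocabulary on both sets so all vectors live in the same space.
    matrix = vectorizer.fit_transform(final_ideas + ai_ideas)
    final_vecs = matrix[: len(final_ideas)]
    ai_vecs = matrix[len(final_ideas):]
    return cosine_similarity(final_vecs, ai_vecs)


# Example (hypothetical idea texts): report, for each final idea, its
# highest similarity to any of the AI-generated ideas.
ai = ["a solar-powered sensor network for beam monitoring",
      "reuse of detector cooling systems for greenhouses"]
final = ["greenhouse heating with recycled detector cooling infrastructure"]
print(idea_proximity(ai, final).max(axis=1))
```

In practice the same structure would work with richer text embeddings in place of TF-IDF; the essential design choice is that a high score flags a final idea that stayed close to what the AI produced, while a low score suggests the team diverged from it.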
The 15th Patatrack Hackathon took place at IdeaSquare, focusing on reconstruction algorithms for the CMS experiment.
How can large volumes of data be managed without losing anything interesting? The selection of particle-collision data at the LHC is critical, since it determines all subsequent analyses. This is the decisive work of a novel scouting system located at the beginning of the data chain.
The Laboratoire d'ethnologie et de sociologie comparative (Lesc–UMR 7186, CNRS/Université Paris Nanterre) is offering a support scheme for candidates to the 2025 CNRS competition for chargé(e) de recherche positions who identify with the laboratory's scientific perspectives and wish to be affiliated with the Lesc if recruited. More information here