
I code out of necessity, and I might finally be able to stop

11 March 2026

In mid-January I gave a public talk in Annecy, telling an audience of curious people how a Higgs boson is produced and how its properties are measured (and why, but that's another story). In the conversation with the audience that always follows these events, I received, among others, a question that comes up more and more often on these occasions: "For the research you do at ATLAS and the measurements on the Higgs boson, do you use artificial intelligence systems?"

The simple answer is "Obviously yes". In fact, for some time now the analysis of data collected by the LHC experiments (and, more recently, a non-negligible part of the data acquisition itself) has been based on machine learning algorithms: separation of "signal" and "noise", classification of different physical phenomena, identification of objects and particles in the detectors, calibration of the detectors' own response. We can say with some pride that particle physics has been one of the pioneer disciplines in adopting this approach to improve the use of data and squeeze the most out of it. Before programming in Python became commonplace, before the ecosystem of machine learning tools (things like Keras, PyTorch, XGBoost, …) so easily accessible today was within everyone's reach, before GPUs were used for computing (and not for rendering video games), ROOT already had TMVA integrated and particle physicists were making heavy use of it.
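The signal-versus-background separation mentioned above can be sketched with a toy example. Real analyses use trained classifiers (TMVA, XGBoost, neural networks) on many variables; the minimal, library-free sketch below uses a single invented discriminating variable with made-up Gaussian distributions, and applies the likelihood-ratio cut that is optimal in this idealised two-Gaussian case:

```python
import random

random.seed(42)

# Toy "events", one discriminating variable x per event.
# Signal ~ N(+1, 1), background ~ N(-1, 1): invented numbers, not ATLAS data.
signal = [random.gauss(1.0, 1.0) for _ in range(10_000)]
background = [random.gauss(-1.0, 1.0) for _ in range(10_000)]

def log_likelihood_ratio(x, mu_s=1.0, mu_b=-1.0, sigma=1.0):
    """Log of p(x|signal)/p(x|background) for two known Gaussians.

    By the Neyman-Pearson lemma, cutting on this ratio is the optimal
    discriminant when both distributions are known exactly.
    """
    return ((x - mu_b) ** 2 - (x - mu_s) ** 2) / (2 * sigma ** 2)

# Cut at log-ratio = 0 (equal priors): keep events that look more signal-like.
cut = 0.0
sig_eff = sum(log_likelihood_ratio(x) > cut for x in signal) / len(signal)
bkg_rej = sum(log_likelihood_ratio(x) <= cut for x in background) / len(background)

print(f"signal efficiency:    {sig_eff:.3f}")
print(f"background rejection: {bkg_rej:.3f}")
```

In a real analysis the distributions are not known analytically, which is exactly why one trains a machine learning algorithm on simulated events to approximate this ratio.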

"Machine Learning" does not correspond exactly to "artificial intelligence", however, and so the question might require a more complex answer. If by "artificial intelligence" one means the "agentic" use of LLM language models, for instance to plan or structure a research programme, then in our field things are only now developing, and the implications and possibilities are yet to be fully understood and explored. Just this morning at CERN there was an interesting seminar by Tilman Plehn, which addressed the question from the perspective of a theoretical physicist. I'll let you go and look at the slides for the details (some of them quite technical). What strikes me as most interesting is that LLM models can be used not only for the specific tasks that would traditionally be assigned to a machine learning algorithm (for example: find the algorithm that best separates this process from that one starting from these simulated data), but to build a far more complex workflow, often eliminating the inevitable friction that comes with the difficulty of writing efficient (and correct) code directly to realise a given project. How much further will it be possible to push research, if our time is no longer dominated by resolving compilation errors, library dependencies, or algorithmic inefficiencies? If the tasks to be handed to a machine can be expressed in "natural" language, how much more rapidly will we advance? In my view, a lot.

Credits: "Transforming Particle Theory", Tilman Plehn @ CERN 11/3/2026

A few weeks ago I came across, via this post, this article by Alberto Romero arguing that the advent of agents based on AI models obliges (will oblige) us to focus on what to do, and not on how to do it. This idea seems to frighten quite a few people, worried about being replaced in their (indispensable?) competencies. To me, as an experimental particle physicist, it seems instead a tremendously liberating prospect. I write code out of necessity, but I am not a programmer. I design and build electronic circuits and mechanical structures out of necessity, but I am not an electronic engineer nor a mechanical one. These skills are, for me and my objectives (having the best instrument to interrogate nature about how it works), merely tools. I already make use of engineers' help today: being able to replace their services with those of an AI agent does not trouble me in the least. If Claude Code interfaced with my preferred code editor can give me robust code or fix the poor code I write myself, while preserving the specifications of what the code should do but improving efficiency, readability and maintainability, and can do so on my own schedule, all the better.

The real question, when you push it to the limit, concerns those who work with specialised technical expertise: what constitutes the added value of a profession, if the technical knowledge that until recently made it rare and precious becomes progressively accessible to everyone? It is not a rhetorical question: it is one of the most serious ones the labour market will have to confront in the coming years. For me, whose professional value lies primarily in asking questions, imagining new directions, and exploring previously unseen solutions, having an army of efficient helpers that free me from spending my time solving problems feels like an exhilarating revolution. For those who instead built their value on the how rather than the what, the transition will probably be more painful.

(This is the English translation of this post)


Filed under: English posts, Geeking & Hacking, Scienza e dintorni. Tagged with: AI, artificial intelligence, Claude, computer science, future, Higgs boson, Keras, LLM, Machine learning, physics, programming, PyTorch, research, TMVA, XGBoost


Marco Delmastro: My name is Marco Delmastro, and I am a particle physicist working on the ATLAS experiment at CERN in Geneva. On Borborigmi di un fisico renitente I ramble about life abroad far from Italy, particle physics and science outreach, fundamental research, technology and communication in the digital world, education, everyday activism and other amusements. I wrote a book, Particelle familiari, which tries to explain what I do for a living, and why. For a while I answered questions about physics (and more) on the podcast Tu che sei un fisico (and sooner or later I might pick it up again).
