In mid-January I gave a public talk in Annecy where I told an audience of curious people how a Higgs boson is produced and how its properties are measured (and why, but that's another story). In the conversation with the audience that always follows these events, I received, among others, a question that comes up more and more often: "For the research you do at ATLAS and the measurements on the Higgs boson, do you use artificial intelligence systems?"
The simple answer is "Obviously yes". In fact, for some time now the analysis of data collected by the LHC experiments (and, more recently, a non-negligible part of the data acquisition itself) has relied on machine learning algorithms: separating "signal" from "noise", classifying different physical phenomena, identifying objects and particles in the detectors, calibrating the detector response. We can say with some pride that particle physics has been one of the pioneer disciplines in adopting this approach to improve the use of data and squeeze the most out of it. Before programming in Python became commonplace, before the ecosystem of machine learning tools so easily accessible today (things like Keras, PyTorch, XGBoost, …) was within everyone's reach, before GPUs were used for computing (and not just for rendering video games), ROOT already had TMVA integrated and particle physicists were making heavy use of it.
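To make the signal-versus-background idea concrete, here is a minimal sketch of the kind of task described above: training a boosted-decision-tree classifier (XGBoost, one of the tools mentioned) to separate toy "signal" events from "background" events. It is not taken from any real ATLAS analysis; the feature names, distributions and numbers are invented purely for illustration.

```python
# Minimal sketch: separating "signal" from "background" events with a
# boosted-decision-tree classifier (XGBoost). All feature names, numbers
# and distributions are invented for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 50_000

# Toy "simulated" events: two kinematic-like features per event.
# Hypothetical signal events peak at a higher "mass" and harder "pT"
# spectrum than the background.
signal = np.column_stack([
    rng.normal(125.0, 10.0, n),   # invariant mass [GeV], hypothetical
    rng.exponential(60.0, n),     # transverse momentum [GeV], hypothetical
])
background = np.column_stack([
    rng.normal(100.0, 30.0, n),
    rng.exponential(40.0, n),
])

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = signal, 0 = background

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Gradient-boosted decision trees, a long-time workhorse of HEP analyses.
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_train, y_train)

# Score each event with a signal probability and check the separation power.
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```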
"Machine Learning" does not correspond exactly to "artificial intelligence", however, and so the question might require a more complex answer. If by "artificial intelligence" one means the "agentic" use of LLM language models, for instance to plan or structure a research programme, then in our field things are only now developing, and the implications and possibilities are yet to be fully understood and explored. Just this morning at CERN there was an interesting seminar by Tilman Plehn, which addressed the question from the perspective of a theoretical physicist. I'll let you go and look at the slides for the details (some of them quite technical). What strikes me as most interesting is that LLM models can be used not only for the specific tasks that would traditionally be assigned to a machine learning algorithm (for example: find the algorithm that best separates this process from that one starting from these simulated data), but to build a far more complex workflow, often eliminating the inevitable friction that comes with the difficulty of writing efficient (and correct) code directly to realise a given project. How much further will it be possible to push research, if our time is no longer dominated by resolving compilation errors, library dependencies, or algorithmic inefficiencies? If the tasks to be handed to a machine can be expressed in "natural" language, how much more rapidly will we advance? In my view, a lot.

A few weeks ago I came across, via this post, this article by Alberto Romero arguing that the advent of agents based on AI models obliges (or will oblige) us to focus on what to do, and not on how to do it. This idea seems to frighten quite a few people, worried about being replaced in their (indispensable?) competencies. To me, as an experimental particle physicist, it seems instead a tremendously liberating prospect. I write code out of necessity, but I am not a programmer. I design and build electronic circuits and mechanical structures out of necessity, but I am neither an electronic engineer nor a mechanical one. These skills are, for me and my objectives (having the best instrument to interrogate nature about how it works), merely tools. I already make use of engineers' help today: being able to replace their services with those of an AI agent does not trouble me in the least. If Claude Code, interfaced with my preferred code editor, can give me robust code or fix the poor code I write myself, preserving the specification of what the code should do while improving its efficiency, readability and maintainability, and can do so on my own schedule, all the better.
The real question, pushed to its limit, is for those who work with specialised technical expertise: what constitutes the added value of a profession if the technical knowledge that until recently made it rare and precious becomes progressively accessible to everyone? It is not a rhetorical question: it is one of the most serious ones the labour market will have to confront in the coming years. For me, whose professional value lies primarily in asking questions, imagining new directions, and exploring previously unseen solutions, having an army of efficient helpers that frees me from spending my time solving problems feels like an exhilarating revolution. For those who instead built their value on the how rather than the what, the transition will probably not be so painless.

