EPJ Web of Conferences (Jan 2024)
Calibration and Conditions Database of the ALICE experiment in Run 3
Abstract
The ALICE experiment at CERN has undergone a substantial detector, readout and software upgrade for LHC Run 3. A signature part of the upgrade is the triggerless detector readout, which necessitates real-time lossy data compression from 1.1 TB/s to 100 GB/s, performed on a GPU/CPU cluster of 250 nodes. To perform this compression, a significant part of the software traditionally considered offline, for example detector tracking, was moved to the front end of the experiment's data acquisition system. The same applies to the experiment's various configuration and conditions databases, which have been replaced with a single homogeneous service serving the real-time compression, the online data quality checks, and the subsequent data passes, Monte Carlo simulation and data analysis. The new service is called CCDB (Calibration and Conditions Database). It receives, stores and distributes objects and their metadata, created by online detector calibration tasks and control systems, by offline (Grid) workflows, or by users. CCDB propagates new objects in real time to the Online cluster and asynchronously replicates all content to Grid storage elements for later access by Grid jobs or by collaboration members. Access to the metadata and objects is provided via a REST API and a ROOT-based C++ client interface that streamlines interaction with the service from compiled code, while plain curl command-line calls offer a simple alternative. In this paper we present the architecture and implementation details of the components that manage frequent updates of objects with millisecond-resolution validity intervals, and explain how we achieved independent operation of the Online cluster while making all objects available to Grid computing nodes.
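To give a rough flavour of the REST-style access mentioned above, the following C++ sketch issues a plain HTTP GET for an object identified by a path and a validity timestamp, using libcurl. The host name and the "<path>/<timestamp>" URL layout are illustrative placeholders, not the actual CCDB endpoint or URL scheme; the real service is normally accessed through the ROOT-based client or curl as described in the abstract.

```cpp
// Minimal sketch of REST-style object retrieval via libcurl.
// Host name and URL layout below are hypothetical placeholders.
#include <curl/curl.h>
#include <cstdio>
#include <string>

// Append the HTTP response body to a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* userdata) {
    auto* out = static_cast<std::string*>(userdata);
    out->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    // Hypothetical object path and validity timestamp (in milliseconds).
    const std::string url =
        "http://ccdb.example.cern.ch/TPC/Calib/GainMap/1546300800000";

    std::string body;
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);  // follow redirects to the storage backend

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK) {
        std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));
    } else {
        std::printf("received %zu bytes (serialized object payload)\n", body.size());
    }
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}
```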