EPJ Web of Conferences (Jan 2024)

Exploring Future Storage Options for ATLAS at the BNL/SDCC facility

  • Huang Qiulan,
  • Garonne Vincent,
  • Hancock Robert,
  • Gamboa Carlos,
  • Misawa Shigeki,
  • Liu Zhenping

DOI: https://doi.org/10.1051/epjconf/202429501029
Journal volume & issue: Vol. 295, p. 01029

Abstract

The ATLAS experiment is expected to deliver an unprecedented amount of scientific data in the High-Luminosity LHC (HL-LHC) era. As the demand for disk storage capacity in ATLAS continues to rise steadily, the BNL Scientific Data and Computing Center (SDCC) faces challenges in terms of the cost of maintaining multiple disk copies and of adapting to the coming ATLAS storage requirements. To address these challenges, the SDCC Storage team has undertaken a thorough analysis of the ATLAS experiment's requirements, matching them to suitable storage options and strategies, and has explored alternatives to enhance or replace the current storage solution. This paper presents the main challenges encountered while supporting big-data experiments such as ATLAS. We describe the experiment's specific requirements and priorities, focusing in particular on the storage system characteristics critical for the high-luminosity run, and on how the key storage components provided by the Storage team work together: the dCache disk storage system, its archival backend HPSS, and its OS-level backend storage. Specifically, we investigate a novel approach that integrates Lustre and XRootD, in which Lustre serves as the backend storage and XRootD acts as the frontend access layer, supporting various grid access protocols. We also describe the validation and commissioning tests, including a performance comparison between dCache and XRootD. Furthermore, we provide a performance and cost analysis comparing OpenZFS and Linux MD RAID, evaluate different storage software stacks, and present the stress tests conducted to validate Third Party Copy (TPC) functionality.
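
The paper's own TPC stress-test harness is not reproduced here, but the following minimal Python sketch illustrates the kind of check such validation involves: driving third-party copies between two grid endpoints with the standard xrdcp client and counting successes. The hostnames, paths, and file count are hypothetical placeholders, not values from the study.

    #!/usr/bin/env python3
    """Illustrative TPC smoke test (sketch only, not the authors' test harness)."""
    import subprocess

    # Hypothetical endpoints: an XRootD door in front of Lustre and a remote dCache door.
    SOURCE = "root://xrootd-lustre.example.bnl.gov:1094//lustre/atlas/testfile_{n}.dat"
    DEST = "root://dcache-door.example.org:1094//pnfs/atlas/tpc/testfile_{n}.dat"

    def run_tpc(n: int) -> bool:
        """Run one third-party copy; '--tpc only' makes xrdcp fail if TPC is unsupported."""
        cmd = ["xrdcp", "--tpc", "only", "--force",
               SOURCE.format(n=n), DEST.format(n=n)]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"transfer {n} failed: {result.stderr.strip()}")
        return result.returncode == 0

    if __name__ == "__main__":
        # Small sequential smoke test; a real stress test would run many copies in parallel.
        ok = sum(run_tpc(n) for n in range(10))
        print(f"{ok}/10 third-party copies succeeded")

In a TPC transfer the data moves directly between the source and destination storage endpoints rather than through the client, which is why counting end-to-end successes (and inspecting failures) is a meaningful commissioning check.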