RTU Center of High Energy Physics and Accelerator Technologies

Tier-2 Computing

The HPC cluster at the CERN Data Centre; credit: CERN.

During its operation, the LHC provides its experiments with 40 million proton-proton bunch collisions every second. The experiments use highly complex hardware and software triggering mechanisms and algorithms to select only the most interesting collision events; however, even this results in a data stream of tens or even hundreds of gigabytes per second (GB/s) that must be stored for later analysis. In addition, HEP analyses require vast numbers of Monte Carlo (MC) simulations of various physical processes. These are later compared to the experimental data in search of anomalies that could lead to new discoveries.

The immense computing tasks outlined above are performed by the Worldwide LHC Computing Grid (WLCG). This grid is made of nearly one million computing cores situated in more than 170 sites around the world. It tackles approximately 2 million computing tasks every day, with a non-stop global data transfer rate of over 60 GB/s. The hierarchy of this grid is split into tiers - Tier-0 to Tier-2[1]. Tier-2 is formed by high-performance computing (HPC) clusters of universities and research institutes around the world. In 2019, RTU and the University of Latvia (UL) undertook a joint pilot-project aimed at establishing a Tier-2 site in Latvia.

Development of a Tier-2 Centre in Latvia

The development of a Tier-2 data centre is one of Latvia’s strategic CERN-related projects. In 2019, RTU and UL undertook a collaborative effort to unite their computing resources, at the RTU HPC Centre and the UL Institute of Numerical Modelling, into a single network, with the aim of operating the unified HPC cluster as a single Tier-2 site. A state-financed pilot-project validated the feasibility of the overall scheme and showed positive results.

This project is seen as a natural continuation of the BalticGrid, a grid of computing resources of the three Baltic states which was operational in the 2000s. As such, the success of this pilot-project should be viewed as an opportunity for other institutes with available HPC resources, both in Latvia and in the other Baltic states, to join the next stages of the project.

The development of the Tier-2 site in Latvia is supported by experts from the Estonian Tier-2 site hosted by the National Institute of Chemical Physics and Biophysics (NICPB), as well as expert researchers from CERN and CMS.

The management system of this Tier-2 site is based on the OpenStack cloud platform and the Ceph software-defined storage solution. A single compute-element management system, which is expected to receive computing tasks from CERN, has been implemented. These tasks are then distributed across the available HPC resources of the federated system, where the Slurm resource manager autonomously schedules and executes them on the available CPUs. These tools, recommended by CERN, enabled a successful proof-of-concept pilot-project and will allow for the development of a fully functional Tier-2 site.
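To illustrate the last step of this chain, a task handed to Slurm ultimately takes the form of a batch job. The sketch below shows the general shape of such a submission script; the job name, partition, resource figures, container image, and payload script are all hypothetical and are shown only to indicate how Slurm directives and a containerized payload fit together:

```shell
#!/bin/bash
#SBATCH --job-name=cms-mc-sim      # hypothetical job name
#SBATCH --partition=t2cms          # assumed partition for Tier-2 workloads
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8          # illustrative resource request
#SBATCH --mem=16G
#SBATCH --time=24:00:00

# Run the experiment payload inside a Singularity container, as the
# federated system does for WLCG tasks; image and script are hypothetical.
singularity exec /path/to/cms-sw.sif ./run_payload.sh
```

Such a script would be submitted with `sbatch`, after which Slurm queues it and runs it on whichever federation node has the requested CPUs free.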

A series of technical upgrades of the management system has been carried out to meet the performance requirements of the planned Tier-2 system, and a dedicated domain, t2cms.hpc-net.lv, has been registered. This is to be followed by linking up the computing resources of the federation partners and registering Latvia’s Tier-2 data centre with CERN. The subsequent tests and full-scale implementation will be performed in due course.

[1]Technically, another tier, Tier-3, also exists. It comprises the personal computers of the end-users: the individual physicists working tirelessly to analyse the data provided by the LHC and its experiments.

Details of the Tier-2 pilot-project
Main objectives achieved:
  • A technical feasibility study has been carried out and the architecture of the Latvian CERN Tier-2 federal computing system has been developed;
  • Implementation of the system in pilot mode according to the chosen architecture has been achieved;
  • The adaptation of the existing HPC infrastructure (both at RTU and UL) for use within the federation has begun;
  • Improvements to the server equipment components have been made in preparation for efficient operation within the CERN Tier-2 computing system;
  • The system architecture for a federated computing cluster, which supports efficient use of the computing resources of the involved academic institutions in an accessible and user-friendly way, has been developed;
  • The federated computing system supports computing tasks in a wide range of scientific fields, provided they run under Singularity virtualization, which allows for wide applicability of the system, both for the needs of a WLCG Tier-2 site and beyond.
credit: RTU HPC Centre

RTU role in the project: HPC Centre, HEP Centre

Implementation stage: pilot project implemented 

Project partners: Ministry of Education and Science of Latvia, RTU, UL, Dati Group

Total project expenses: 24 000 EUR

Project implementation period: December 2019-March 2020 (4 months)

The project team of RTU:  
Prof. Toms Torims  
Dr. Lauris Cikovskis