General Information

Computing Support for ATLAS

RHIC/ATLAS Computing Facility

A data tape storage robot at Brookhaven's RHIC/ATLAS Computing Facility.

As the sole Tier-1 computing facility for ATLAS in the United States – and the largest ATLAS computing center worldwide – Brookhaven's RHIC and ATLAS Computing Facility provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing, processing, and distributing ATLAS experimental data among scientists across the country. Brookhaven also houses one of the three U.S. ATLAS Analysis Support Centers and organizes periodic "Jamborees," which train researchers in numerous data analysis techniques.

  1. BNL / ATLAS Grid Computing

    Thursday, October 2, 2008

    As the sole Tier-1 computing facility for ATLAS in the United States, and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing, processing, and distributing ATLAS experimental data among scientists across the country.

A Closer Look at the ATLAS Grid

The ATLAS grid computing system is a complex structure, analogous to the power grid, that allows researchers and students around the world to analyze ATLAS data.

The beauty of the grid is that a wealth of computing resources is available for a scientist to accomplish an analysis, even if those resources are not physically nearby. The data, software, processing power, and storage may be located hundreds or thousands of miles away, but the grid makes this distance invisible to the researcher.

Organizationally, the grid is set up in a tier system, with the Large Hadron Collider at CERN serving as the Tier-0 center. CERN receives the raw data from the ATLAS detector, performs a first-pass analysis, and then distributes the data among ten Tier-1 locations, also known as regional centers, including Brookhaven. Each Tier-1 center, connected to Tier-0 via a dedicated high-performance optical network path, stores, processes, and analyzes a fraction of the raw data. Each Tier-1 facility then distributes derived data to Tier-2 computing facilities, which provide data storage and processing capacity for more in-depth user analysis and simulation.
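The tiered flow described above can be sketched as a toy model. This is an illustrative sketch only, not ATLAS software; the site names, the `Site` class, and the idea that each lower tier receives a "derived" copy of the data are simplifying assumptions for illustration.

```python
# Toy model of the Tier-0 -> Tier-1 -> Tier-2 distribution hierarchy.
# All names here are hypothetical; this is not real grid software.

from dataclasses import dataclass, field


@dataclass
class Site:
    """A computing site in the tiered hierarchy."""
    name: str
    tier: int
    children: list = field(default_factory=list)
    datasets: list = field(default_factory=list)


def distribute(site, dataset):
    """Store a dataset at this site, then push derived copies down one tier."""
    site.datasets.append(dataset)
    for child in site.children:
        # Each lower tier receives a derived (reduced) form of the data.
        distribute(child, f"{dataset}->derived@{child.name}")


# A miniature hierarchy: CERN (Tier-0) -> BNL (Tier-1) -> two Tier-2 sites.
tier2a = Site("Tier2-A", 2)
tier2b = Site("Tier2-B", 2)
bnl = Site("BNL", 1, children=[tier2a, tier2b])
cern = Site("CERN", 0, children=[bnl])

distribute(cern, "raw-run-001")
print(cern.datasets)    # raw data stays at Tier-0
print(bnl.datasets)     # derived copy held at the Tier-1
print(tier2a.datasets)  # further-derived copy at a Tier-2
```

The point of the sketch is the one-way fan-out: raw data lives at Tier-0, and each step down the hierarchy works with a progressively reduced derivative of it.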

The grid computing infrastructure is made up of several key components. The "fabric" consists of the hardware elements – processor "farms" comprising hundreds to thousands of compute nodes, disk and tape storage, and networking. The "applications" are the software programs that users employ, for example, to analyze data. Applications take the raw data from ATLAS and reconstruct it into meaningful information that scientists can interpret. Another type of software, called "middleware," links the fabric elements deployed within and across regions together so that they form a unified system — the Grid. The development of the middleware is a joint effort between physicists and computer scientists.
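The fabric/application/middleware layering can be illustrated with a minimal sketch. This is not real grid middleware; the `SITES` table, the `submit_job` function, and the dataset names are all hypothetical, chosen only to show how a middleware layer hides data location from the user.

```python
# Illustrative sketch of the middleware idea: the user submits a job against
# a dataset, and the middleware decides where it runs. Hypothetical names.

# "Fabric": sites with local storage, each holding a catalog of its datasets.
SITES = {
    "BNL":      {"datasets": {"run-001"}},
    "Michigan": {"datasets": {"run-002"}},
}


def submit_job(dataset, analysis):
    """Middleware: find a site holding the dataset and run the analysis there.

    The user never names a site; location stays invisible to the researcher.
    """
    for site_name, site in SITES.items():
        if dataset in site["datasets"]:
            return site_name, analysis(dataset)
    raise LookupError(f"no site holds {dataset}")


# "Application": the user's analysis code, unaware of where it executes.
site, result = submit_job("run-002", lambda d: f"histograms for {d}")
print(site, result)
```

The design point is the separation of concerns: the analysis code (application) touches only the dataset, the site table (fabric) knows only what it stores, and the middleware is the glue that matches one to the other.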

Outside of high-energy physics, grid computing is used on smaller scales to manage data within other scientific areas such as astronomy, biology, and geology. But the LHC grid is the largest of its kind.

Funding for middleware development is provided by the National Science Foundation's (NSF) Information Technology Research program and by the U.S. Department of Energy (DOE). DOE funds the Tier-1 center activities, while the Tier-2 centers are funded mostly by the NSF, with additional support from DOE. DOE and NSF together support the Open Science Grid, which provides the middleware used by all of the U.S. Tier-1 and Tier-2 sites.