BNL Scientific Data and Computing Center

RHIC ATLAS Computing Facility

RHIC ushered in high-throughput, data-intensive computing as a BNL capability 15 years ago, and ATLAS computing at BNL has built on that foundation. The RHIC and ATLAS Computing Facility (RACF) is a unified facility delivering leveraged, cost-effective computing to both RHIC (NP) and ATLAS (HEP). The RHIC side of the facility (RCF) provides ~90% of the computing capacity for PHENIX and STAR data analysis. The ATLAS Tier-1 facility is the largest of 11 Tier-1 centers worldwide, contributing ~23% of the worldwide computing capacity for ATLAS data analysis at 99% service availability (12-month average). After CERN, the BNL ATLAS center is the most important ATLAS data repository, delivering ~200 TB/day to more than 100 data centers around the world. Together with the Physics Application Software group, these resources and capabilities make BNL one of the largest Big Data / High Throughput Computing (HTC) resource and expertise pools in US science.
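To put the ~200 TB/day export rate in perspective, a back-of-the-envelope conversion (a minimal sketch, assuming decimal units and a uniform 24-hour duty cycle; real transfers are burstier, so peak rates run higher) gives the sustained network rate it implies:

    # Convert the quoted ~200 TB/day export rate into a sustained line rate.
    # Assumptions: 1 TB = 10**12 bytes, transfers spread evenly over 24 hours.
    bytes_per_day = 200 * 10**12
    seconds_per_day = 24 * 60 * 60                         # 86,400 s
    rate_gbytes = bytes_per_day / seconds_per_day / 10**9  # ~2.3 GB/s
    rate_gbits = rate_gbytes * 8                           # ~18.5 Gb/s
    print(f"~{rate_gbytes:.1f} GB/s sustained (~{rate_gbits:.1f} Gb/s)")

That is roughly 2.3 GB/s, or about 18.5 Gb/s, of outbound traffic sustained around the clock.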

  • RACF occupies ~15,000 sq. ft.
  • ~250 computer racks + 9 robotic linear magnetic tape libraries (~85k tape cartridge cells, ~200 PB capacity)
  • HPSS-based tape management system, one of the largest instances worldwide
  • Scalable parallel filesystem with GPFS (the currently deployed instance scales to 100 GB/s)
  • 50,000 x86-based compute cores, 125 TB memory, 35 PB disk
  • Bisection bandwidth of 3 Terabits/sec between storage and compute resources
  • ~800 kW power utilization (~7 GWh per year; derived below)
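The annual energy figure follows directly from the average power draw. A minimal sketch, assuming continuous operation at ~800 kW:

    # Annual energy from average power draw, assuming ~800 kW continuous load.
    power_kw = 800
    hours_per_year = 24 * 365                        # 8,760 h
    energy_gwh = power_kw * hours_per_year / 10**6   # kWh -> GWh
    print(f"~{energy_gwh:.1f} GWh per year")         # ~7.0 GWh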

IBM Blue Gene Systems

In 2007 BNL acquired the 18-rack New York Blue/L, and in 2009 the 2-rack New York Blue/P. New York Blue/L debuted at number 5 on the June 2007 Top 500 list of the world's fastest computers. New York Blue/P debuted on the June 2009 Top 500 list at number 250. New York Blue/L was decommissioned in January 2014 and New York Blue/P in October 2015. With peak performances of 103.22 teraflops (trillion floating-point calculations per second) and 27.9 teraflops, respectively, New York Blue/L and New York Blue/P enabled computations critical for research in biology, medicine, materials science, nanoscience, renewable energy, climate science, finance, and technology. Today BNL operates a Blue Gene/Q as part of three facilities/collaborations: NYCCS, RIKEN, and LQCD.

  • 3.5 racks, 1 PB of NFS storage
  • 3584 IBM Blue Gene/Q nodes; 64,512 cores, 57,344 of which are compute cores; 57,344 GB of RAM. Total performance capability is ~700 TFlops.

These systems were acquired in the fall of 2011. One rack of the Blue Gene/Q system was benchmarked for data-intensive applications and debuted at number 6 on the June 2012 Graph 500 list.
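The system totals quoted above are consistent with the per-node Blue Gene/Q configuration. A sketch; the per-node figures used here (18 cores, 16 of them usable for compute, 16 GB of RAM, 204.8 GFlops peak) are standard Blue Gene/Q specifications rather than numbers taken from this page:

    # Derive the quoted system totals from per-node Blue Gene/Q figures.
    nodes = 3584
    cores_total   = nodes * 18       # 16 compute + 1 OS + 1 spare core -> 64,512
    cores_compute = nodes * 16       #                                  -> 57,344
    ram_gb        = nodes * 16       # 16 GB RAM per node               -> 57,344 GB
    peak_tflops   = nodes * 0.2048   # 204.8 GFlops peak per node       -> ~734 TFlops
    print(cores_total, cores_compute, ram_gb, round(peak_tflops))

The ~734 TFlops peak this yields agrees with the ~700 TFlops performance figure quoted above.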




CFN Cluster

The Center for Functional Nanomaterials (CFN) supports its extensive experimental user program with an accompanying theory program that relies on a wide range of numerical modeling tools. These user services are backed by a dedicated cluster:

  • 212 nodes of varying Intel CPU generations, 2144 cores in total
  • 16 GB to 48 GB of DRAM per node
  • InfiniBand fabric combining DDR and QDR
  • 32 TB of Lustre storage and 20 TB of NFS storage


HPC1 Cluster

26 nodes with 416 cores and 128 GB DRAM, connected with FDR InfiniBand and 10 GbE. Includes 12 NVIDIA GPUs and 8 Intel Xeon Phi coprocessors. 0.5 PB of Lustre storage.

Institutional Cluster

The SDCC currently operates the institutional cluster (IC) at BNL. The IC is made up of two sub-clusters:

  • A 108-node cluster (3888 computing cores in total), each node with dual Intel Xeon E5-2695v4 (Broadwell) CPUs, dual NVIDIA K80 GPUs, SAS-based local storage, and 256 GB of memory. This sub-cluster is interconnected with a non-blocking InfiniBand EDR fabric.
  • A 72-node cluster (4608 computing cores in total), each node with a single Intel Xeon Phi 7230 (Knights Landing) CPU, SSD-based local storage, and 192 GB of memory. This sub-cluster is interconnected with an Intel Omni-Path dual-rail, non-blocking fabric capable of up to 22 Gb/s aggregate bi-directional bandwidth.

A GPFS-based storage system with a bandwidth of up to 24 Gb/s is connected to the IC.
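The per-cluster core totals above follow from the per-node CPU configurations. A sketch; the per-CPU core counts used here (18 for the Xeon E5-2695v4, 64 for the Xeon Phi 7230) are standard vendor specifications rather than numbers taken from this page:

    # Reproduce the quoted core totals from per-node CPU configurations.
    broadwell_cores = 108 * 2 * 18   # 108 nodes, dual 18-core CPUs  -> 3,888
    knl_cores       = 72 * 1 * 64    # 72 nodes, single 64-core CPU  -> 4,608
    print(broadwell_cores, knl_cores)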

Access to the IC is regulated by a Memorandum of Understanding. Please contact the SDCC Director or the CSI Director for more information.

Simons Foundation

BNL provides a highly available hosting service to the Simons Foundation, including a maximum of four standard computer racks, disk-based data storage and data access, error-free data transfer, tape archival, ongoing data curation, and retrieval. The agreement includes archival storage services for data volumes of up to 2.5 PB over a five-year period.

Novel Architecture Testbed

With investments from New York State, BNL will establish a novel architecture testbed in which CSI will explore which architectural features are most important for its data-driven discovery research, development, and service provision. The initial system will include different types of accelerators (including FPGAs), memory, and storage devices.