

Scientific Data and Computing Center

The Scientific Data and Computing Center (SDCC) at Brookhaven Lab began in 1997, when the RHIC and ATLAS Computing Facility (RACF) was established for the Relativistic Heavy Ion Collider (RHIC, pronounced “Rick”). Since then, the full-service scientific computing facility has supported some of the most notable physics experiments, including the Broad RAnge Hadron Magnetic Spectrometers (BRAHMS), the Pioneering High Energy Nuclear Interaction eXperiment (PHENIX), the PHOBOS Collaboration, and the Solenoidal Tracker at RHIC (STAR), providing dedicated data processing, storage, and analysis resources for these diverse, expansive experiments, along with general computing capabilities and user support.

Building on this history of providing useful, resilient, and large-scale computational science, data management, and analysis infrastructure, the SDCC has evolved to support additional facilities and experiments. These include the National Synchrotron Light Source II (NSLS-II) and Center for Functional Nanomaterials (CFN) at Brookhaven Lab, as well as other DOE Office of Science User Facilities; the ATLAS experiment at CERN’s Large Hadron Collider in Europe; and Belle II at KEK in Japan and DESY in Germany. SDCC is also planning for its role in future experiments, including sPHENIX, the Deep Underground Neutrino Experiment, and the Electron-Ion Collider.

For users, SDCC offers a single-point facility with high-throughput, high-performance, and data-intensive computing along with data management, storage, analysis, and preservation capabilities that support scientific programs in domains spanning physics, biology, climate, and energy. SDCC also serves as the core for Brookhaven Lab’s institutional computing needs, connecting scientists and researchers at home and abroad.

For more about user resources, experiments, and other services, see the SDCC website.


SDCC is a unified facility delivering leveraged, cost-effective computing to both RHIC (NP) and ATLAS (HEP). RACF provides ~90% of the computing capacity for PHENIX and STAR data analysis. The ATLAS Tier-1 facility at Brookhaven is the largest of the 11 Tier-1 centers worldwide and contributes 23% of the worldwide computing capacity for ATLAS data analysis at 99% service availability (12-month average). Apart from CERN, the Brookhaven Lab ATLAS center is the most important ATLAS data repository, delivering ~200 TB/day to more than 100 data centers around the world. Together with the Physics Application Software group, these resources and capabilities make Brookhaven one of the largest Big Data / High-Throughput Computing (HTC) resource and expertise pools in US science.

  • SDCC occupies ~15,000 sq. ft.
  • ~250 computer racks plus 9 robotic linear magnetic tape libraries (~85k tape cartridge slots, ~200 PB capacity)
  • HPSS-based tape management system, one of the largest instances worldwide
  • Scalable parallel filesystem based on GPFS (the currently deployed instance scales to 100 GB/s)
  • ~90,000 x86-based compute cores, ~220 TB memory, 80 PB disk
  • Bisection bandwidth of 3 Tb/s between storage and compute resources
  • ~800 kW power utilization (~7 GWh per year)
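As a rough sanity check, the annual energy figure follows directly from the quoted average power draw, assuming continuous year-round operation (an assumption on our part, not stated in the source):

```python
# Rough sanity check on the facility energy figure.
power_kw = 800             # ~800 kW average power utilization (from the list above)
hours_per_year = 24 * 365  # assumes continuous, year-round operation

energy_gwh = power_kw * hours_per_year / 1e6  # kWh -> GWh
print(f"~{energy_gwh:.1f} GWh per year")      # ~7.0 GWh per year
```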

Institutional Cluster

The SDCC currently operates the institutional cluster (IC) at Brookhaven. The IC comprises two sub-clusters:

  • A 108-node cluster (3,888 computing cores in total), each node with dual Intel Xeon E5-2695 v4 (Broadwell) CPUs, dual NVIDIA K80 GPUs, SAS-based local storage, and 256 GB of memory. This sub-cluster is interconnected with a non-blocking InfiniBand EDR fabric.
  • A 72-node cluster (4,608 computing cores in total), each node with a single Intel Xeon Phi 7230 (Knights Landing) CPU, SSD-based local storage, and 192 GB of memory. This sub-cluster is interconnected with an Intel Omni-Path dual-rail, non-blocking fabric, capable of up to 22 GB/s aggregate bi-directional bandwidth.
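The quoted totals are consistent with the per-node CPU specs: the E5-2695 v4 is an 18-core part (two per node) and the Xeon Phi 7230 is a 64-core part (one per node). A quick arithmetic check:

```python
# Core-count check for the two IC sub-clusters.
# E5-2695 v4: 18 cores per socket, 2 sockets per node.
broadwell_cores = 108 * 2 * 18
# Xeon Phi 7230: 64 cores, single socket per node.
knl_cores = 72 * 64

print(broadwell_cores, knl_cores)  # 3888 4608
```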

A GPFS-based storage system with a bandwidth of up to 24 GB/s is connected to the IC.

CFN Cluster

The Center for Functional Nanomaterials supports its extensive experimental user program with an accompanying theory program that relies on a wide range of numerical modeling tools. These user services are supported by a dedicated cluster.

The cluster comprises 212 nodes of varying Intel CPU generations, with a total core count of 2,144. DRAM per node ranges from 16 GB to 48 GB. The InfiniBand fabric combines DDR and QDR links. Storage includes 32 TB of Lustre and 20 TB of NFS.

HPC1 Cluster

The HPC1 cluster comprises 26 nodes with 416 cores and 128 GB of DRAM per node, connected with FDR InfiniBand and 10 GbE. It includes 12 NVIDIA GPUs, 8 Intel Xeon Phi coprocessors, and 0.5 PB of Lustre storage.

Novel Architecture Testbed

With investments from New York State, Brookhaven will establish a novel architecture testbed, where CSI will explore which architectural features are most important for its data-driven discovery research, development, and service provision. The initial system will include different types of accelerators (including FPGAs) as well as memory and storage devices.