Computation and Data-Driven Discovery (C3D)
Data-Intensive Research Programs
Brookhaven's RHIC and ATLAS Computing Facility is the sole Tier-1 computing facility in the United States for the Large Hadron Collider's ATLAS experiment and the largest ATLAS computing center worldwide. It provides a large portion of the overall computing resources for U.S. collaborators and serves as the central U.S. distribution hub for ATLAS experimental data.
A systems biology knowledgebase known as KBase seeks to integrate and make broadly accessible everything we know or can learn about plants and microbes. KBase will be a unique resource, bringing together multiple research communities and empowering them with computational tools to address fundamental biological questions.
Brookhaven is applying high-performance computing (HPC) methods to electrical grid planning and performance assessment, building on the power industry's existing software tools.
New detector technology planned for NSLS-II requires new computing approaches to experiment control, data management, data-analysis workflows, and metadata handling, along with high-performance data storage, retrieval, and visualization.
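As a minimal illustration of the kind of data and metadata handling such detectors demand, the sketch below stores a small stack of detector frames in an HDF5 file with experiment metadata attached as attributes. The file name, detector name, and metadata fields are hypothetical, and this is not the NSLS-II data-management software.

```python
import numpy as np
import h5py  # common HDF5 library; not a statement about the actual NSLS-II software stack

# Hypothetical example: store a short stack of 2-D detector frames with metadata.
n_frames, ny, nx = 10, 512, 512

with h5py.File("scan_0001.h5", "w") as f:
    frames = f.create_dataset(
        "entry/data/frames",
        shape=(n_frames, ny, nx),
        dtype="uint16",
        chunks=(1, ny, nx),      # one chunk per frame for fast per-frame retrieval
        compression="gzip",
    )
    for i in range(n_frames):
        frames[i] = np.random.randint(0, 65535, size=(ny, nx), dtype="uint16")

    # Metadata recorded alongside the raw data (all names are illustrative).
    frames.attrs["detector"] = "example_area_detector"
    frames.attrs["exposure_time_s"] = 0.005
    f["entry"].attrs["sample"] = "example_sample"
    f["entry"].attrs["scan_id"] = 1
```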
The Center for Functional Nanomaterials Computation Facility and Theory Group provides multi-purpose high-performance computing for internal projects. Its resources include 2,200 computing cores and data storage, supported by high-speed networking to facilitate intensive, parallel computing.
National Nuclear Data Center
The National Nuclear Data Center collects and evaluates experimental information on nuclear structure and nuclear reactions, maintains nuclear databases, and uses modern information technology to disseminate the results.
Environmental Data Management
The Environmental Data Management Group in Brookhaven's Environmental Sciences Department created and manages an External Data Center for the Department of Energy's Atmospheric Radiation Measurement Climate Research Facility.
Lattice Quantum Chromodynamics (Lattice QCD) simulates the interactions between quarks and gluons using Monte Carlo methods based on the theory of quantum chromodynamics. Lattice QCD has been applied successfully to study the QCD phase transition, CP violation, hadron structure, and various other important topics in theoretical nuclear and high-energy physics.
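To make the Monte Carlo idea concrete, here is a purely illustrative sketch: a Metropolis update for a one-dimensional scalar field with a simple phi^4 lattice action. Real lattice QCD evolves SU(3) gauge links and fermion fields on four-dimensional lattices with far more sophisticated algorithms such as hybrid Monte Carlo; the field, action, and parameters below are toy choices, not production code.

```python
import numpy as np

# Toy Metropolis Monte Carlo for a 1-D scalar field with a phi^4 lattice action.
# This illustrates only the sampling idea; it is not lattice QCD.
rng = np.random.default_rng(0)
n_sites, m2, lam = 64, 0.5, 0.1          # lattice size and toy action parameters
phi = np.zeros(n_sites)

def action(field):
    """Discretized action: kinetic term plus mass and quartic self-interaction."""
    kinetic = 0.5 * np.sum((np.roll(field, -1) - field) ** 2)   # periodic boundary
    potential = np.sum(0.5 * m2 * field**2 + lam * field**4)
    return kinetic + potential

n_sweeps, step = 2000, 0.5
for sweep in range(n_sweeps):
    for site in range(n_sites):
        old = phi[site]
        s_old = action(phi)
        phi[site] = old + rng.uniform(-step, step)    # propose a local change
        if rng.random() >= np.exp(min(0.0, s_old - action(phi))):
            phi[site] = old                           # reject: restore old value

print("mean <phi^2> after sampling:", np.mean(phi**2))
```

Each sweep proposes a local change at every site and accepts it with the Metropolis probability, so configurations are sampled with weight proportional to exp(-S).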
CSI’s Computation and Data-Driven Discovery (C3D) Division addresses the challenges and requirements of science and engineering applications that demand large-scale and innovative computing solutions.
C3D is the CSI gateway for expertise in high-performance workflows, distributed computing technologies, and FAIR (Findability, Accessibility, Interoperability, and Reusability) data. The division integrates high-performance computing, machine learning, and streaming analytics and translates them into reproducible science and scientific discovery. Its research spans extreme-scale and extensible workflow systems, scalable software, transparent and reproducible artificial intelligence in science, and real-time data analytics with a focus on explainability. Contact any C3D member for additional information and collaboration opportunities.
- Dynamic Visualization and Visual Analytics for Scientific Data
- Experimental Data Curation with the Open-source Invenio Platform
- High Performance Iterative Tomography Reconstructions
- Provenance and Reproducibility Tools for Multi-Modal X-ray Spectroscopy Experiments
- Replicating Machine Learning Experiments in Materials Sciences
- Text Mining the Scientific Literature for X-ray Absorption Spectroscopy
- Supporting Heterogeneous Task Placement for High-performance Scientific Workflows at Scale
- Streaming Visualization for Performance Anomaly Detection
- Explainable Approaches for Deep Learning Model Understanding
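As a minimal illustration of the workflow patterns behind several of the projects listed above, the sketch below builds a tiny dependency-aware pipeline with Python's standard concurrent.futures: two independent tasks run concurrently and a downstream analysis step waits on their results. The task names and functions are hypothetical, and this is not C3D's actual workflow software.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical three-stage pipeline: two independent "acquisition" tasks feed one analysis task.
# This sketches only the dependency/concurrency idea; real scientific workflow systems add
# scheduling, provenance capture, and heterogeneous (CPU/GPU/remote) task placement.

def acquire(source: str) -> list[int]:
    """Stand-in for reading or streaming a chunk of experimental data."""
    return [len(source) * i for i in range(5)]

def analyze(chunks: list[list[int]]) -> float:
    """Stand-in for a downstream analysis that needs all upstream results."""
    values = [v for chunk in chunks for v in chunk]
    return sum(values) / len(values)

with ThreadPoolExecutor(max_workers=2) as pool:
    # Independent tasks are submitted together and run concurrently.
    futures = [pool.submit(acquire, name) for name in ("detector_a", "detector_b")]
    # The analysis step expresses its dependency by waiting on the upstream futures.
    result = analyze([f.result() for f in futures])

print("pipeline result:", result)
```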