
About Us

Building on its capabilities in data-intensive computing and computational science, Brookhaven National Laboratory is embarking upon a major new Computational Science Initiative.

Advances in computational science, data management and analysis have been a key factor in the success of Brookhaven Lab's scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source (NSLS), the Center for Functional Nanomaterials (CFN), and in biological, atmospheric, and energy systems science, as well as our collaborative participation in international research endeavors, such as the ATLAS experiment at Europe's Large Hadron Collider.

The Computational Science Initiative (CSI) brings together under one umbrella the expertise that has driven this success, fostering cross-disciplinary collaborations that address the next generation of scientific challenges posed by facilities such as the new National Synchrotron Light Source II (NSLS-II). A particular focus of CSI's work is the research, development, and deployment of novel methods and algorithms for the timely analysis and interpretation of the high-volume, high-velocity, heterogeneous scientific data created by experimental, observational, and computational facilities, with the goal of accelerating scientific discovery. CSI takes an integrated approach, spanning leading-edge research to multi-disciplinary teams that deliver operational data analysis capabilities to scientific user communities.

Enabling Capabilities

Computer Science and Mathematics—fundamental research into novel methods and algorithms in support of large-scale, multi-modal, and streaming data analysis; novel solutions for long-term data curation and active reuse; and approaches to enable energy-efficient, extreme-scale numerical modeling, specifically in computational materials science, chemistry, lattice quantum chromodynamics, and fusion.

The BNL Scientific Data and Computing Center, housing the latest systems in high-performance and data-intensive computing, data storage, and networking, offering everything from novel research platforms to highly reliable production services.

Translational Capabilities

The Computational Science Laboratory, a collaborative space for the development, characterization, and optimization of advanced algorithms, brings together computer scientists, mathematicians, and leading computational scientists to develop next-generation numerical simulation models.

The Center for Data Driven Discovery (C3D), a multi-disciplinary center for the development, deployment, and operation of data-intensive discovery services for science, national security, and industry.

A Multi-disciplinary, Collaborative Approach

The CSI philosophy is a multi-disciplinary and collaborative approach to scientific research and development, with research targeted at and informed by the key challenges observed in close interactions with our clients in science, national security agencies, and industry. Our success is measured in equal parts by the advancement we can bring to computer science and mathematics, as well as by the transformational impact we have on our clients’ mission space.

CSI fosters cross-disciplinary collaboration and makes optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory. Key partners include nearby universities such as Columbia, Cornell, New York University, Stony Brook, and Yale, as well as IBM Research.

Strategic Partnerships

Computational scientists at Brookhaven will also seek to establish partnerships with key players in academia and industry (e.g., Stony Brook University's Institute for Advanced Computational Science, Rensselaer Polytechnic Institute, Oak Ridge National Laboratory, IBM, and Intel). One existing example of a successful partnership is the collaboration between Brookhaven Lab's high-energy and nuclear physics research groups and IBM that led to the development of the BlueGene supercomputing architecture now used in some of the world's most powerful commercially available supercomputers.

More about strategic partnerships

CSI All Hands Meeting 2016

The Computational Science Initiative's first all-hands meeting.

Computational Science Initiative Event

"Frontiers for High Performance Computing in Cancer Research"

Presented by Dr. Eric A. Stahlberg, Frederick National Laboratory for Cancer Research

Friday, May 5, 2017, 10:00 am, Seminar Room, Bldg. 725

Hosted by: Frank Alexander

Anticipated advances in high-performance computing are enabling exciting new areas of computational and data-oriented cancer research. These frontiers are being explored in a unique collaboration between the US Department of Energy and the National Cancer Institute: the Joint Design of Advanced Computing Solutions for Cancer. While the three-year collaboration is still in its first year, it is already providing tremendous insight into the promise and challenges of employing extreme-scale computing to advance research on the challenging and complex problem of cancer. Aimed at providing predictive insight in areas such as tumor response to treatments, molecular-level interactions, and even clinical outcomes, the collaborative effort advances the frontiers of cancer research and computing in both numerically intensive and data-intensive applications, while providing insights into opportunities for the high-performance computing community overall.

Computational Science Initiative Event

"Enabling Computational Chemistry With New Algorithms on Next-Generation Platforms"

Presented by Wibe de Jong, Lawrence Berkeley National Laboratory

Monday, May 8, 2017, 11:00 am, Seminar Room, Bldg. 725

Hosted by: Kerstin Kleese van Dam

With the advent of exascale computing, the field of computational chemistry is on the verge of entering a new era of modeling. Large computing resources can enable researchers to tackle scientific problems that are larger and more realistic than ever before, and to include more of the complex dynamical behavior of nature. However, future exascale architectures will be significantly different and will require advances in algorithms and new programming paradigms. We will discuss some of our work on developing scalable algorithms for strongly correlated systems, simulations of complexes in dynamical environments, and complex spectra. Significant improvements will be reported in our development of a fully threaded plane-wave ab initio molecular dynamics code in NWChem on Intel Phi platforms. Finally, we will demonstrate advances in the Global Arrays parallel communication layer utilizing LBNL's GASNet and barrier-elision techniques.

Bio: Bert de Jong leads the Computational Chemistry, Materials, and Climate Group at LBNL. He has a background in general chemistry, chemical engineering, and high-performance computational chemistry, with specialization and strong capabilities in modeling heavy-element chemistry. He is a main developer of the NWChem software at EMSL and one of four developers of the unique fully relativistic quantum chemistry software MOLFDIR. Prior to joining Berkeley Lab, de Jong was at PNNL, where he led the High Performance Software Development Group responsible for NWChem. He has published 89 journal papers, 14 conference papers, and 7 book chapters, and has given over 65 invited presentations and lectures at international conferences and universities. De Jong earned his doctorate in theoretical chemistry in 1998 from the University of Groningen in the Netherlands. He was a postdoctoral fellow at PNNL before becoming a staff member in 2000.

Computational Science Initiative Event

"Seminar: Is automated materials design and discovery possible?"

Presented by Michael McKerns, California Institute of Technology

Thursday, May 11, 2017, 11:00 am, Seminar Room, Bldg. 725

Hosted by: Frank Alexander

In business analytics, operations research, engineering design, and other predictive sciences, a critical step in building models of reality and making predictions is solving an optimization problem. Linear and quadratic optimizers and penalties are a mainstay of data science, and have been popular due to their ability to handle large numbers of dimensions quickly. However, the use of linear and/or quadratic tools can seriously limit the amount and quality of information that can be applied in the inverse problem. One could argue that most real-world problems are probabilistic, high-dimensional, and nonlinear with nonlinear constraints, and thus linear and quadratic tools may not actually be a good choice. Too often, we are forced to solve reduced-dimensional problems that may no longer adequately represent reality, but instead fit within the resource and design limitations of the selected optimizer. These limitations become much more pronounced when attempting to predict structure-property relationships in materials, as problems typically require significant computational resources, are nonlinear, and are often governed by rare events. This talk will introduce some tools within the mystic framework for efficiently solving high-dimensional non-convex optimization problems with nonlinear constraints. We will, in the context of materials discovery, also discuss how mystic, with the OUQ algorithm, can be used for rigorous model validation, certification, and the design of experiments.
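To make the problem class in this abstract concrete, the sketch below shows a non-convex optimization with a nonlinear constraint. It does not use mystic's own API; instead it uses SciPy's differential_evolution, a gradient-free global optimizer, as a stand-in illustration. The objective (the Rosenbrock function), the bounds, and the disk constraint are all invented for this example.

```python
import numpy as np
from scipy.optimize import differential_evolution, NonlinearConstraint

# A non-convex objective: the Rosenbrock function, whose curved valley
# is a poor fit for purely linear or quadratic solvers.
def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# A nonlinear constraint: the solution must satisfy x0^2 + x1^2 <= 2.
disk = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, -np.inf, 2.0)

# Differential evolution is population-based and does not require
# gradients or convexity, so it can search the full nonlinear problem
# rather than a linearized reduction of it.
result = differential_evolution(
    rosenbrock,
    bounds=[(-2.0, 2.0), (-2.0, 2.0)],
    constraints=(disk,),
    seed=1,
)
```

The known minimum of the Rosenbrock function, (1, 1), lies exactly on the constraint boundary here, so the solver must balance the objective against the feasible region rather than simply following a descent direction.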

GPU Hackathon 2017

June 5-9, 2017

2017 New York Scientific Data Summit (NYSDS)

August 6-9, 2017