Introduction
Together with NVIDIA, OpenACC.org and Oak Ridge National Laboratory (ORNL), Brookhaven National Laboratory (BNL) will host a virtual GPU Hackathon starting August 15 and concluding August 24, 2022. GPU Hackathons provide exciting opportunities for scientists to accelerate their AI research or HPC codes under the guidance of expert mentors from national laboratories, universities, and industry in a collaborative environment. This Hackathon is a multi-day event designed to help teams of three to six developers accelerate their own codes on GPUs using a programming model or machine learning framework of their choice. Each team is assigned mentors for the duration of the event.
Background
Currently, there are several large-scale computing systems in the US based on General-Purpose Graphics Processing Units (GPGPUs), including Summit at ORNL, Sierra at Lawrence Livermore National Laboratory, and Perlmutter at NERSC. In addition, the upcoming exascale computers, Aurora at Argonne National Laboratory (ANL) and Frontier at ORNL, will have GPU accelerators as well. These GPU-based supercomputers offer tremendous computing power for science and engineering, but writing efficient software that takes full advantage of them can be challenging.
Goal
The goal of this 4-day, intensive, hands-on Hackathon is for current or prospective user groups of large hybrid CPU-GPU systems to send teams of at least three developers along with either a potentially scalable application that could benefit from GPU accelerators or an application already running on accelerators that needs optimization. There will be intensive mentoring during the hackathon, with the goal that teams leave with applications running on GPUs, or at least with a clear roadmap of how to get there. Our mentors come from national laboratories, universities, and vendors; besides having extensive experience in programming GPUs, many of them develop GPU-capable compilers and help define standards such as OpenACC and OpenMP.
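For teams new to the directive-based models mentioned above, the kind of change involved can be as small as a single pragma. The sketch below is purely illustrative (it is not official hackathon material, and the compiler and flags named in the comments are assumptions): it offloads a simple vector-add loop to a GPU with one OpenACC directive.

```c
/* Minimal illustrative sketch of OpenACC offloading (assumes an
   OpenACC-capable compiler, e.g. NVIDIA HPC SDK: `nvc -acc vadd.c`). */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;
    float *a = malloc(n * sizeof(float));
    float *b = malloc(n * sizeof(float));
    float *c = malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Ask the compiler to copy a and b to the device, run the loop
       there in parallel, and copy c back to the host. */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]); /* expect 3.000000 */
    free(a); free(b); free(c);
    return 0;
}
```

Real applications brought to the event are usually far more involved, which is where profiling and mentor guidance come in, but the directive-based starting point often looks much like this.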
Prerequisites and Team Application Submission
- Teams are expected to be fluent with the code or project they bring to the event and motivated to make progress during the hackathon.
- At least 3 team members must be participating throughout the entire event.
- Projects brought to the event are required to have a license attached and detailed in the application. For more information on why licenses are important and how to obtain one, please refer to the licensing resources provided by the organizers.
- No advanced GPU skills are required, but teams are expected to know the basics of GPU programming and profiling. A collection of GPU lectures, tutorials, and labs is available to all participants at no fee. Please contact the organizers for more information to help you prepare for the hackathon.
GPU Compute Resources
Teams attending the event will be given access to a GPU cluster for the duration of the Hackathon.
Event Format
The BNL GPU Hackathon will be hosted online, with all times listed in Eastern Time (ET). All communication and collaboration will be conducted via Zoom, Slack, and email.
What's a Hackathon?
If you would like a better idea of what to expect, check out these short videos from past Hackathons:
Training Sponsors
Hackathon Timeline
May 31, 2022 | Team application submission deadline (11 PM PT / 2 AM ET)
June 23, 2022 | Notification of accepted teams
July 15, 2022 | Additional Brookhaven Lab guest registration for all participants closes
August 8, 2022 | Team/mentor meeting
August 15, 2022 | Hackathon begins
August 24, 2022 | Hackathon ends
Training Information
Organizing Committee
- Iris Chen (NVIDIA and OpenACC)
- Meifeng Lin (BNL)
- Thomas Papatheodore (ORNL)
Local Coordinator
- Nicole Medaglia (BNL)
Questions or Inquiries
- Contact Nicole Medaglia (medaglia@bnl.gov)
Past Events
View information from previous events.
Note: This event falls under Exemption D (formal classroom training held at Federal facilities, which does not exhibit indicia of a formal conference, as outlined in the Conference/Event Exemption Request Form). Participation is contingent on application acceptance.