With support from the National Center for Atmospheric Research (NCAR), the Consulting Services Group (CSG), and the Application Scalability and Performance (ASAP) team within the Computational and Information Systems Laboratory (CISL), we present this GPU training series for scientists, software engineers, and students, with an emphasis on Earth science applications.
The content of this course is coordinated with the series of GPU training sessions starting in February 2022. The NVIDIA High Performance Computing Software Development Kit (NVHPC SDK) and the CUDA Toolkit are the primary software requirements for this training; both are already available on NCAR's HPC clusters as modules you may load. The software is also free to download from NVIDIA via the NVHPC SDK Current Release Downloads page and the CUDA Toolkit downloads page. Any provided code is written specifically to build and run on NCAR's Casper HPC system but may be adapted to other systems or personal machines. Material will be updated as appropriate for the future deployment of NCAR's Derecho cluster and as technology progresses.
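As a minimal sketch of accessing this software on Casper, the required toolkits can typically be loaded through environment modules. The module names and versions below are assumptions for illustration; run `module avail` to see what is actually installed on the system you are using.

```bash
# See which NVHPC and CUDA modules are installed on the system
module avail nvhpc cuda

# Load the NVIDIA HPC SDK and CUDA Toolkit (versions shown are examples only)
module load nvhpc/22.2 cuda/11.4

# Confirm the compilers and tools are now on your PATH
nvc --version
nvcc --version
```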
We actively encourage all participants to join the NCAR GPU Users Slack. Invite links will be sent to registered participants. This communication platform is intended not only to offer support and answer questions about GPU computing, but also to help foster collegial networks and build the Earth science community toward greater collaborative research opportunities. Feel free to get involved in this space and share the ideas you're exploring that may benefit from GPU computing.
- Brett Neuman - Student Assistant, GPU Enthusiast, CISL/CSG
- Brian Vanderwende - Software Engineer III, CISL/CSG
- Cena Miller - Software Engineer II, CISL/ASAP
- Daniel Howard - Software Engineer I, CISL/CSG
- Shiquan Su - Software Engineer III, CISL/CSG
- Supreeth Suresh - Software Engineer II, CISL/ASAP
- Dates: February through September 2022
- Format: 1 hour sessions roughly every 2 weeks
- Communication: Emails to registrants and open discussion in NCAR GPU Users Slack.
- Support and office hours: Schedule time with workshop organizers in Slack or submit a ticket at rchelp.ucar.edu.
- Register here to receive a link for interactive Zoom sessions
- Course Materials: In addition to code examples archived in this GitHub repository, slides and video recordings of sessions will be archived on each individual session's webpage, accessed here.
The full course description detailing learning objectives and material to be covered can be found here. The condensed schedule for topics to cover is listed below.
- Introduction to Parallel Programming
- Why Use GPU Accelerators
- Introduction to GPU and Accelerator Architectures
- Software Infrastructure and Make Systems
- Directive Based Programming with OpenACC (two sessions)
- Hands-On Session Using OpenACC in MPAS-A
- Verifying Code Correctness with PCAST
- IDEs, Debugging, and Optimization Tools for GPU Computing
- Hands-On Session with Nsight Systems and Compute
- Multi-GPU Programming (two sessions)
- GPU Python with CuPy and Legate
- Multiple GPUs in Python with Dask
- Optimizing AI/ML workflows in Python for GPUs
- Co-Design Process Training for Scientists and Project Leads
We have decided to focus this GPU training program primarily on descriptive and directive-based programming, plus the use of libraries and APIs, given their greater ease of deployment and savings in development time while still achieving significant performance. Nonetheless, many directive-based and library implementations can work alongside CUDA kernels when the greatest control of GPU hardware or performance optimization is required. For more in-depth training on CUDA code development, we recommend the 9-part CUDA Training Series offered by Oak Ridge National Laboratory or other resources.
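As a minimal sketch of what the directive-based workflow looks like, the NVHPC compilers accept OpenACC directives via the `-acc` flag. The source file name below is hypothetical and not part of the course materials; the `-gpu=cc70` target corresponds to the V100 GPUs on Casper.

```bash
# Compile a hypothetical OpenACC-accelerated C source file with the NVHPC C compiler.
# -acc enables OpenACC directives, -gpu=cc70 targets Casper's V100 GPUs,
# and -Minfo=accel reports how the compiler parallelized each loop.
nvc -acc -gpu=cc70 -Minfo=accel saxpy.c -o saxpy

# The same source still builds as ordinary CPU code when OpenACC is not enabled.
nvc saxpy.c -o saxpy_cpu
```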
This course is designed to be self-contained, but given the limited time and the ever-changing landscape of GPU computing, we recommend consulting additional resources for your own development. We have consolidated a wide set of relevant resources in this public Google Drive folder, which you are welcome to explore. Some of this material will be used as references for specific sessions.
We will primarily use interactive Jupyter notebooks and Jupyter's built-in development environment to present material, streamline hands-on sessions, and provide additional exercises.
If you are not using Casper, or if you encounter issues updating your copy of the repository to the latest version, please consult the recommended steps in GIT_INSTRUCTIONS for using git, or reach out to the workshop organizers.
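For quick reference, a typical clone-and-update sequence is sketched below; the repository URL is a placeholder and the branch name is an assumption, so treat GIT_INSTRUCTIONS as the authoritative steps for this workshop.

```bash
# Clone the workshop repository (URL is a placeholder, not the actual repository)
git clone https://github.com/<org>/<gpu-workshop-repo>.git
cd <gpu-workshop-repo>

# Later, pull the latest material; set aside any local edits first if needed
git stash        # optional: stash local changes to tracked files
git pull         # fetch and merge updates from the default remote branch
git stash pop    # optional: reapply your stashed changes
```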
For each interactive session, the respective session folder will contain a Jupyter notebook of the presented material as well as folders named for the language and/or programming style used in the relevant examples. Consult the README.md file in each session folder for specifics about running the provided code.
We greatly appreciate and encourage contributions to this workshop and its GPU learning material. If you have a fix or would like to contribute in any way, feel free to fork this repository and submit a pull request.
If you have a suggestion or encounter a problem you'd like to share with the workshop organizers, please open a GitHub issue. We are happy to improve this material in any way. In particular, if there is a GPU topic you would like us to cover that is not listed above, opening an issue will help us prioritize making that material available in the future.