Accelerating CUDA C++ Applications with Concurrent Streams
(NACCAWCS-OD)
Overlapping GPU computation with memory transfers to and from the GPU can dramatically improve the performance of CUDA applications. In this workshop you will learn to use CUDA Streams to perform copy/compute overlap in CUDA C++ applications by:
- Learning the rules and syntax governing the use of concurrent CUDA Streams
- Refactoring and optimizing an existing CUDA C++ application to use CUDA Streams and perform copy/compute overlap
- Using the NVIDIA® Nsight™ Systems timeline to observe opportunities for improvement and the impact of the techniques covered in the workshop
Upon completion, you will be able to build robust and efficient CUDA C++ applications that can leverage copy/compute overlap for significant performance gains.
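To give a sense of the pattern the workshop develops, here is a minimal sketch of copy/compute overlap with CUDA Streams. It is not workshop code: the array size, number of streams, and the addOne kernel are illustrative assumptions. Each chunk's host-to-device copy, kernel launch, and device-to-host copy are issued into its own non-default stream so that copies and computation from different chunks can overlap.

```cpp
// Minimal copy/compute overlap sketch (illustrative only).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Simple grid-stride loop kernel used as the "compute" half of the overlap.
__global__ void addOne(float *data, int n)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        data[i] += 1.0f;
}

// Basic CUDA error handling, as assumed by the prerequisites.
inline void checkCuda(cudaError_t err, const char *msg)
{
    if (err != cudaSuccess) {
        fprintf(stderr, "%s: %s\n", msg, cudaGetErrorString(err));
        exit(EXIT_FAILURE);
    }
}

int main()
{
    const int N = 1 << 24;            // illustrative total element count
    const int numStreams = 4;         // illustrative number of chunks/streams
    const int chunk = N / numStreams; // assumes N divides evenly

    float *h, *d;
    // Pinned host memory is required for truly asynchronous copies.
    checkCuda(cudaMallocHost((void **)&h, N * sizeof(float)), "cudaMallocHost");
    checkCuda(cudaMalloc((void **)&d, N * sizeof(float)), "cudaMalloc");
    for (int i = 0; i < N; ++i) h[i] = 0.0f;

    cudaStream_t streams[numStreams];
    for (int s = 0; s < numStreams; ++s)
        checkCuda(cudaStreamCreate(&streams[s]), "cudaStreamCreate");

    // Issue each chunk's copy-in, compute, and copy-out into its own stream.
    for (int s = 0; s < numStreams; ++s) {
        const int offset = s * chunk;
        checkCuda(cudaMemcpyAsync(d + offset, h + offset, chunk * sizeof(float),
                                  cudaMemcpyHostToDevice, streams[s]), "H2D copy");
        addOne<<<256, 256, 0, streams[s]>>>(d + offset, chunk);
        checkCuda(cudaMemcpyAsync(h + offset, d + offset, chunk * sizeof(float),
                                  cudaMemcpyDeviceToHost, streams[s]), "D2H copy");
    }

    checkCuda(cudaDeviceSynchronize(), "cudaDeviceSynchronize");
    printf("h[0] = %f\n", h[0]); // expect 1.0

    for (int s = 0; s < numStreams; ++s)
        cudaStreamDestroy(streams[s]);
    cudaFreeHost(h);
    cudaFree(d);
    return 0;
}
```

Profiling a program like this with Nsight Systems makes the overlap (or lack of it) visible on the timeline, which is how the workshop identifies improvement opportunities.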
Prerequisites
- Professional experience programming CUDA C/C++ applications, including the use of the nvcc compiler, kernel launches, grid-stride loops, host-to-device and device-to-host memory transfers, and CUDA error handling.
- Familiarity with the Linux command line.
- Experience using Makefiles to compile C/C++ code.
Suggested Resources to Satisfy Prerequisites
- Fundamentals of Accelerated Computing with CUDA C/C++.
- Ubuntu Command Line for Beginners (sections 1 through 5).
- Makefile Tutorial (through Simple Examples).
Tools, Libraries, and Frameworks Used
- CUDA C++
- nvcc
- Nsight Systems