TL;DR: Singularity is used routinely with containers provided via the NVIDIA GPU Cloud (NGC). In this SUG talk, best practices and lessons learned are shared for the benefit of anyone running containers on one or more GPUs.
SUG Series Introduction
The inaugural meeting of the Singularity User Group (SUG) was held March 12-13, 2019, at the San Diego Supercomputer Center (SDSC). The event attracted diverse representation from the international advanced computing community as conveyed through the post-event press release issued jointly by SDSC and Sylabs.
Across the two-day event, more than 20 talks were presented by members of the Singularity user, developer, and provider community. Because SUG generated significant interest, even among those who were unable to attend, we are sharing each of the talks online.
SUG Talk Introduction
Singularity favors integration over isolation; that critical differentiator among container platforms keeps GPU use simple. In this presentation, NVIDIA Systems Software Engineer Adam Simpson shares some of NVIDIA’s own experiences; these best practices and lessons learned are certain to be of interest.
The abstract for Adam’s contributed SUG talk NVIDIA HPC Container Efforts: An Overview is as follows:
The NVIDIA GPU Cloud, NGC, is a hub providing performance-optimized application containers which can be deployed on NVIDIA GPU-powered desktops, data center servers, and cloud services. This talk will cover engineering challenges that NVIDIA has faced in deploying such containers to NGC, and their solutions. Focus areas will include NVIDIA GPU access within containers, multi-node distributed containers, cluster integration, and performance portable optimization strategies.
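The GPU-access workflow the abstract refers to can be sketched in a few Singularity commands. This is an illustrative sketch, not part of Adam's talk: the image name, tag, and `train.py` script are assumptions chosen for the example, while `singularity pull`, `singularity exec`, and the `--nv` flag are standard Singularity CLI features.

```shell
# Pull a performance-optimized application container from NGC.
# (nvcr.io/nvidia/tensorflow:19.03-py3 is an example image; substitute
# any NGC container and tag appropriate for your workload.)
singularity pull docker://nvcr.io/nvidia/tensorflow:19.03-py3

# Run a command in the resulting image. The --nv flag binds the host's
# NVIDIA driver libraries and GPU device files into the container, so
# the containerized application sees the GPUs directly.
singularity exec --nv tensorflow_19.03-py3.sif nvidia-smi

# Multi-node distributed runs typically launch one container instance
# per rank via the host MPI stack (train.py is a hypothetical script):
mpirun -n 4 singularity exec --nv tensorflow_19.03-py3.sif python train.py
```

Because `--nv` integrates with the host driver rather than isolating from it, the same container image can run across desktops, data center servers, and cloud instances with differing driver versions.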
Adam’s talk from SUG can be found below and here. Enjoy!