Singularity Container Services Workflow Readiness
This is part two of the Singularity Container Services (SCS) demonstration.
Singularity is a standalone command line tool that integrates with SCS to augment your container workflows.
The first part covered the background of SCS; this second part covers installation readiness and a container workflow on Windows with the following components:
- Windows Subsystem for Linux (WSL2) with Ubuntu 20.04
- SingularityCE (Community Edition), the open source container runtime
- Sylabs' Singularity Container Services
- TensorFlow and a sample data set
- NVIDIA GPU
The Singularity open source container project was created in 2016 and has received improvements along the way, thanks to the dedication of the open source community and the creation of SIF, the Singularity Image Format. SIF adoption continues to grow, as shown by Red Hat's support for SIF in Podman and by support for SIF through various registries and runtimes, such as Amazon ECR, NVIDIA NGC, and Azure Batch Shipyard. Here are a few benefits of the SIF container format:
- SIF has everything built into a single file, rather than a set of layers.
- Layers work well in microservices containers, where storage can be de-duplicated.
- Layers have a higher startup cost, which matters in an HPC environment.
- SIF is immutable by design. This is key to knowing your application will run months or years into the future.
The content of this video is based on a blog post titled CUDA GPU Containers on Windows with WSL2, and includes several steps:
- Update the Linux environment
- Install the latest NVIDIA drivers and container tools
- Install and test SingularityCE on WSL2
By now WSL2 should be installed along with a distribution; in our case we are using Ubuntu 20.04. We have followed the Microsoft blog post titled Install Linux on Windows with WSL. The next step is to start the Ubuntu environment for the first time and install SingularityCE, described on GitHub under What is SingularityCE.
Let’s begin by getting the WSL2 environment ready.
Open up your ‘Ubuntu’ app to drop into a WSL2 session.
For the WSL2 default of Ubuntu 20.04 you can just use the commands below:
First we’ll update the installation of Ubuntu in WSL2.
$ sudo apt update && sudo apt upgrade
Singularity natively supports running application containers that use NVIDIA’s CUDA GPU compute framework, so let’s install the NVIDIA utilities.
Fetch and add the signing key and the repository file:
$ curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/libnvidia-container.list
Get the NVIDIA metadata from the new repositories
$ sudo apt update
Install the needed packages
$ sudo apt install libnvidia-container-tools
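If you want to confirm the install before moving on, a small sketch like the one below checks that the CLI landed on the PATH. The `check_tool` helper is hypothetical, not part of the NVIDIA tooling:

```shell
# Report whether a binary is reachable on the PATH.
# (check_tool is a hypothetical helper for this walkthrough.)
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "missing: $1"
    fi
}

# After the apt install above, this should report "found"
check_tool nvidia-container-cli
```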
In this demonstration we will be installing open source SingularityCE (community edition). If you are a SingularityPRO customer and would like to follow along, your customer-specific installation instructions can be found in the Sylabs SingularityPRO welcome email or via the SingularityPRO Administration guide.
The SingularityCE packages are available at the GitHub releases page, or you can quickly install the 3.10.2 package with:
$ wget https://github.com/sylabs/singularity/releases/download/v3.10.2/singularity-ce_3.10.2-focal_amd64.deb
$ sudo apt install ./singularity-ce_3.10.2-focal_amd64.deb
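Since newer releases will supersede 3.10.2, a small sketch like this derives the download URL from a single version variable. The focal/amd64 package-name pattern is taken from the 3.10.2 link above and is an assumption for other releases, so check the releases page:

```shell
# Build the release URL from one version variable, following the
# package-name pattern of the 3.10.2 .deb used above (an assumption
# for other versions - verify against the GitHub releases page).
VERSION=3.10.2
PKG="singularity-ce_${VERSION}-focal_amd64.deb"
URL="https://github.com/sylabs/singularity/releases/download/v${VERSION}/${PKG}"
echo "$URL"
# then: wget "$URL" && sudo apt install "./${PKG}"
```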
*The latest SingularityCE version is noted in the changelog at this link
Quick Test the Singularity Installation
We will test the installation with a quick Singularity version check, followed by an “exec” operation on a base container stored in the Singularity Container Services Library, Alpine in this scenario:
$ singularity --version
singularity-ce version 3.10.2-focal
$ singularity exec library://alpine cat /etc/alpine-release
INFO:    Downloading library image
2.7MiB / 2.7MiB [===================================================================================] 100 % 1.4 MiB/s 0s
3.15.5
Add NVIDIA GPU Configuration
Because the nvidia-container-cli functionality is a new and still experimental feature, it is not enabled by default in the pre-built packages. We will enable it by editing the Singularity configuration file:
$ sudo nano /etc/singularity/singularity.conf
Find the line that begins with “nvidia-container-cli path =”, uncomment it, and add the path to the NVIDIA utility:
nvidia-container-cli path = /usr/bin/nvidia-container-cli
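If you prefer a scripted edit over nano, the same change can be made with sed. The sketch below runs against a local sample file so you can verify the substitution first; point it at /etc/singularity/singularity.conf (with sudo) to apply it for real. The exact commented-out form of the line can vary between versions, so check your file before relying on the pattern:

```shell
# Reproduce the relevant commented-out line in a local sample file
# (the real file is /etc/singularity/singularity.conf).
printf '# nvidia-container-cli path =\n' > singularity.conf.sample

# Uncomment the line and fill in the binary path, mirroring the nano edit
sed -i 's|^# *nvidia-container-cli path =.*|nvidia-container-cli path = /usr/bin/nvidia-container-cli|' singularity.conf.sample

cat singularity.conf.sample
```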
Finally, we will run an NVIDIA command to detect GPU availability:
$ nvidia-smi -L
GPU 0: NVIDIA GeForce GTX 950 (UUID: GPU-f2ec52a2-0f12-a0c6-1eb5-7aad5c2f7ed5)
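With the configuration in place, GPU containers are started with Singularity’s --nv flag (and, when using the experimental nvidia-container-cli setup configured above, the --nvccli flag). A hypothetical invocation, assuming the tensorflow/tensorflow:latest-gpu image tag and a CUDA-capable machine:

$ singularity exec --nv docker://tensorflow/tensorflow:latest-gpu nvidia-smi -L

The container should report the same GPU as the host nvidia-smi output above.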
We are now at the point where Windows has the NVIDIA drivers installed, the Ubuntu 20.04.4 distribution is running inside WSL2, and SingularityCE is installed with the NVIDIA library packages.
Coming Next Week
In the next video we’ll create a Singularity Container Services account, configure SingularityCE to use Singularity Container Services, and then play with some containers. We are so excited, we can’t contain ourselves…
Thank you for joining us!