Singularity Container Workflow Part 5: Running the Tensorflow container in WSL2

By Staff

Sep 1, 2022 | Blog, How To Guides


Now that we have readied the WSL2 environment with Singularity and the relevant CUDA libraries, it’s time to run the sample Keras workflow.

Let’s run the GPU-enabled version of TensorFlow from the SCS Library, using the “singularity run” command. We’ll also use the command line options --nv and --nvccli to check that TensorFlow can see our GPU. This can take a minute or two on a slower internet connection, as the container image is very large.

$ singularity run --nv --nvccli library://sylabsdemo/containers/tensorflow:latest-gpu-signed
INFO:    Downloading library image
2.6GiB / 2.6GiB [===============================================================================================] 100 % 5.8 MiB/s 0s
INFO:    Setting 'NVIDIA_VISIBLE_DEVICES=all' to emulate legacy GPU binding.
INFO:    Setting --writable-tmpfs (required by nvidia-container-cli)

________                               _______________
___  __/__________________________________  ____/__  /________      __
__  /  _  _ \_  __ \_  ___/  __ \_  ___/_  /_   __  /_  __ \_ | /| / /
_  /   /  __/  / / /(__  )/ /_/ /  /   _  __/   _  / / /_/ /_ |/ |/ /
/_/    \___//_/ /_//____/ \____//_/    /_/      /_/  \____/____/|__/

You are running this container as user with ID 1000 and group 1000,
which should map to the ID and group for your user on the Docker host. Great!


Singularity has been launched; let’s start Python.

Singularity> python
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
From the Python prompt we’ll run two commands: first import the tensorflow package, then perform a device query to look for physical_device:GPU:0.
>>> import tensorflow as tf
>>> tf.config.list_physical_devices('GPU')
2022-08-22 13:36:02.589647: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 13:36:02.655163: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 13:36:02.655699: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
You can safely ignore the warnings about NUMA support. They are a side effect of the WSL2 session and won’t harm our ability to use TensorFlow. Now we’ll create and train a model based on the TensorFlow beginner quickstart, loading some data and creating the Keras model:
>>> mnist = tf.keras.datasets.mnist
>>> (x_train, y_train), (x_test, y_test) = mnist.load_data()
>>> x_train, x_test = x_train / 255.0, x_test / 255.0
>>> model = tf.keras.models.Sequential([
...   tf.keras.layers.Flatten(input_shape=(28, 28)),
...   tf.keras.layers.Dense(128, activation='relu'),
...   tf.keras.layers.Dropout(0.2),
...   tf.keras.layers.Dense(10)
... ])
2022-08-22 14:11:28.582693: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 14:11:28.583124: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 14:11:28.583538: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 14:11:30.640140: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 14:11:30.640488: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 14:11:30.640544: I tensorflow/core/common_runtime/gpu/] Could not identify NUMA node of platform GPU id 0, defaulting to 0.  Your kernel may not have been built with NUMA support.
2022-08-22 14:11:30.640862: I tensorflow/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-08-22 14:11:30.640967: I tensorflow/core/common_runtime/gpu/] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1327 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 950, pci bus id: 0000:01:00.0, compute capability: 5.2

The “model=” lines should output information about the GPU; on the test system, the log shows an NVIDIA GeForce GTX 950.

The next steps continue preparing the model:

>>> loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer='adam',
...               loss=loss_fn,
...               metrics=['accuracy'])
And now run the training task:
>>>, y_train, epochs=5)
Epoch 1/5
1875/1875 [==============================] - 11s 5ms/step - loss: 0.2988 - accuracy: 0.9129
Epoch 2/5
1875/1875 [==============================] - 9s 5ms/step - loss: 0.1499 - accuracy: 0.9554
Epoch 3/5
1875/1875 [==============================] - 9s 5ms/step - loss: 0.1107 - accuracy: 0.9660
Epoch 4/5
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0905 - accuracy: 0.9719
Epoch 5/5
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0748 - accuracy: 0.9768
<keras.callbacks.History object at 0x7f4d5437eb80>

We can see above that the training successfully used our GPU via the CUDA libraries. If you want to continue with more steps, please see the TensorFlow beginner quickstart.

Singularity containers for CUDA applications can now be developed, tested, and used on a Windows laptop or desktop, using the Remote Build, Library, and Keystore functions of Singularity Container Services. All of the standard Singularity features work well under WSL2, so it’s a really powerful development environment. When you need to run on more powerful GPU nodes, just take your SIF file to your HPC environment.

In future versions of SingularityCE and SingularityPRO we’ll be aiming to make the --nvccli method of GPU setup the default, simplifying this process further. If you have any questions or comments, or hit any trouble, reach out via the Singularity community spaces.

We’ve had some questions about this video series and have decided to demonstrate a few more features of Singularity and Singularity Container Services.

Encryption of SIF

Encryption is a well-known technology for protecting sensitive information, and it can also be used to encrypt the contents of a container across a variety of use cases.

The Singularity client can locally build an encrypted container from a definition file. Encryption can be performed using either a passphrase or, asymmetrically, an RSA key pair in Privacy Enhanced Mail (PEM/PKCS1) format. This allows the rootfs within a container to be encrypted at rest, in transit, and even while running: decryption occurs at runtime entirely within kernel space, so there is never an intermediate, decrypted rootfs lying around, even after the container is terminated.

Passphrase encryption is less secure than encrypting containers using an RSA key pair. This feature is provided as a convenience, and as a way for users to familiarize themselves with the encrypted container workflow, but users running encrypted containers in production are encouraged to use asymmetric keys.
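For reference, the asymmetric workflow might look like the following sketch. It assumes the "--pem-path" option described in the Singularity encryption documentation; the singularity invocations are left commented out, since they require a working installation and a definition file:

```shell
# Generate a 2048-bit RSA key pair in PEM format for container encryption.
openssl genrsa -out rsa_pri.pem 2048
openssl rsa -in rsa_pri.pem -pubout -out rsa_pub.pem

# Build the encrypted container with the PUBLIC key...
# sudo singularity build --pem-path=rsa_pub.pem wttr.sif wttr.def
# ...and decrypt it at runtime with the PRIVATE key:
# singularity run --pem-path=rsa_pri.pem wttr.sif
```

Only the holder of the private key can run the container, which is why this approach is preferred for production over a shared passphrase.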

The container we are going to build from a definition file uses “curl” to call out to a console-oriented weather service. Within the Ubuntu WSL2 environment, you can use the nano editor to copy this text and save it to a file called “wttr.def”. The definition file pulls a source Debian container from the Singularity Container Services Library, then adds “%post” and “%runscript” sections. For more information on how to use definition files and on choosing different bootstrap sources, here is the documentation.

As with our previous demonstration of signing a container, an encrypted container must be locally created since Singularity Container Services does not store your private keys. What’s nice about the WSL2 environment is that users have sudo privileges and can build containers using the sudo command.

Launch the nano editor and paste the contents into the definition file:

$ nano wttr.def
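The definition file contents are not reproduced in this transcript. Based on the “singularity inspect -d” output shown later in this post, a definition along these lines should work; note that the wttr URLs are truncated in that output, so the hostname is assumed here to be the public console weather service:

Bootstrap: library
From: debian:buster

    apt-get update
    apt-get -y install curl

    if [ $# -eq 0 ]; then
        curl -s    # host assumed; truncated in the inspect output
        curl -s "$1"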
With the definition file created, let’s build the encrypted container “wttr.sif”, using the simplified passphrase method (not intended for production purposes). For more information on Singularity’s encryption, here is a link to the documentation.
$ sudo singularity build --passphrase wttr.sif wttr.def
[sudo] password for demo:
INFO:    Starting build...
Getting image source signatures
Copying blob 83a1cd2c9d18 done
Copying blob 4a56a430b2ba done
Copying blob 82177f28f446 done
Copying config 51ee328dee done
Writing manifest to image destination
Storing signatures
2022/08/23 17:56:52  info unpack layer: sha256:4a56a430b2bac33260d6449e162017e2b23076c6411a17b46db67f5b84dde2bd
2022/08/23 17:56:53  info unpack layer: sha256:82177f28f44660a30bd8a207c5b88b83c4be2329e71f92ad01f50febe1cf9caf
2022/08/23 17:56:54  info unpack layer: sha256:83a1cd2c9d1841681590aa66db329ade62a159dc0fa8f88534d8400861c6695d
INFO:    Creating SIF file...
INFO:    Build complete: wttr.sif

Now that we have built an encrypted container, let’s look at its contents using the Singularity client. We can see that descriptor ID 4 is encrypted. For more information about this detail, please see the Singularity encryption documentation.

$ singularity sif list wttr.sif
ID   |GROUP   |LINK    |SIF POSITION (start-end)  |TYPE
1    |1       |NONE    |32768-33028               |Def.FILE
2    |1       |NONE    |36864-37994               |JSON.Generic
3    |1       |NONE    |40960-41052               |JSON.Generic
4    |1       |NONE    |45056-82952192            |FS (Encrypted squashfs/*System/amd64)
We can also see that the container image includes the definition file used during the build process.
$ singularity inspect -d wttr.sif
Bootstrap: library
From: debian:buster

        apt-get update
        apt-get -y install curl

        if [ $# -eq 0 ]; then
                curl -s https://wttr.i
                curl -s "$1"

Because we are encrypting the container, we may want to remove the definition information, as this section will not be encrypted. We can remove the definition file information with the Singularity client command:

$ singularity sif del 1 wttr.sif

And if we now look back at the container contents again, we can see the definition section is now deleted.

$  singularity sif list wttr.sif
ID   |GROUP   |LINK    |SIF POSITION (start-end)  |TYPE
2    |1       |NONE    |36864-37994               |JSON.Generic
3    |1       |NONE    |40960-41052               |JSON.Generic
4    |1       |NONE    |45056-82952192            |FS (Encrypted squashfs/*System/amd64)
$ singularity inspect -d wttr.sif
WARNING: No SIF metadata partition, searching in container..

With the container encrypted and the definition file removed, we will now sign this container before pushing it to our SCS Library.

$ singularity sign wttr.sif
Signing image: wttr.sif
Enter key passphrase :
Signature created and applied to wttr.sif

With an encrypted and signed container, let’s look again at the container’s contents with the Singularity client, and notice ID 1, which carries the SHA-256 signature.

$ singularity sif list wttr.sif
ID   |GROUP   |LINK    |SIF POSITION (start-end)  |TYPE
1    |NONE    |1   (G) |82952192-82953790         |Signature (SHA-256)
2    |1       |NONE    |36864-37994               |JSON.Generic
3    |1       |NONE    |40960-41052               |JSON.Generic
4    |1       |NONE    |45056-82952192            |FS (Encrypted squashfs/*System/amd64)
$ singularity push wttr.sif library://sylabsdemo/containers/wttr:esign
78.6MiB / 78.6MiB [=================================================================================] 100 % 2.4 MiB/s 0s

We’d like to show what “pull” and “run” operations on an encrypted container look like from another Singularity installation. If you want to replicate this configuration, go into the Microsoft Marketplace and download the Debian app for WSL2. We set the screen background to blue for a differentiated view from Ubuntu. For Debian, we installed Singularity from source, which is not covered here; however, you can follow the instructions to install Singularity on Debian from source code, which will ensure all the dependencies are installed.

If we run the “singularity remote status” command, we will see that this environment is “logged out”, and not using any authentication tokens.

$ singularity remote status
INFO:    Checking status of default remote.
Builder    OK      v1.6.6-0-gadbc4fe4
Consent    OK      v1.6.6-0-gd8f171e
Keyserver  OK      v1.18.9-0-g76cbd56
Library    OK      v0.3.6-0-g15dfbe8
Token      OK      v1.6.6-0-gd8f171e

No authentication token set (logged out).
Ok, we should be ready. Pull the encrypted/signed image from the Library:

$ singularity pull library://sylabsdemo/containers/wttr:esign
INFO:    Downloading library image
79.1MiB / 79.1MiB [==============================================================================================] 100 % 4.1 MiB/s 0s

$ singularity sif list wttr_esign.sif
ID   |GROUP   |LINK    |SIF POSITION (start-end)  |TYPE
1    |NONE    |1   (G) |82952192-82953790         |Signature (SHA-256)
2    |1       |NONE    |36864-37994               |JSON.Generic
3    |1       |NONE    |40960-41052               |JSON.Generic
4    |1       |NONE    |45056-82952192            |FS (Encrypted squashfs/*System/amd64)

As the new local user, a quick “sif list” of the “wttr_esign.sif” container shows that it is encrypted (ID 4) and signed (ID 1).

$ singularity verify wttr_esign.sif
Verifying image: wttr_esign.sif
[REMOTE]  Signing entity: Sylabs Demo (sylabs demo keys) <>
[REMOTE]  Fingerprint: B7C3D2DF4C055EA0930714387A168BA6BB54B028
Objects verified:
2   |1       |NONE    |JSON.Generic
3   |1       |NONE    |JSON.Generic
4   |1       |NONE    |FS
Container verified: wttr_esign.sif

Before doing anything with the container, it’s best to verify where it came from, which can be done with the verify command. We mentioned earlier in the video series that when the Singularity client is not the author of a container, the verify command performs a [REMOTE] verification of the container, displaying the author’s email and key fingerprint.

Success, the container has been verified. Upon running this command, the Singularity client has reached out to the SCS Key Service, and retrieved the appropriate public key material.

$ singularity run wttr_esign.sif
FATAL:   Unable to use container encryption. Must supply encryption material through environment variables or flags.

While we have the container, attempting to run it displays an error because this user in this environment does not have a passphrase to decrypt the image.

$ singularity run --passphrase wttr_esign.sif sf

Now, trying that again with the passphrase, the image can be decrypted, and runs as expected. The “sf” part passes a location into the curl command.
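The earlier FATAL message noted that encryption material can be supplied “through environment variables or flags”. For scripted, non-interactive use, the passphrase can be exported instead of typed at a prompt; this is a sketch assuming the SINGULARITY_ENCRYPTION_PASSPHRASE variable described in the Singularity encryption documentation:

```shell
# Supply decryption material via the environment instead of an interactive
# prompt (variable name per the Singularity encryption documentation).
export SINGULARITY_ENCRYPTION_PASSPHRASE='your-passphrase-here'

# The run itself needs the encrypted SIF, so it is shown commented out:
# singularity run --passphrase wttr_esign.sif sf
echo "passphrase set: ${SINGULARITY_ENCRYPTION_PASSPHRASE:+yes}"
# prints "passphrase set: yes"
```

Keep in mind that exported passphrases can leak via shell history or process environments, which is another reason the RSA key pair method is preferred in production.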

Browsing the Library on the Web

The largest collection of SIF containers made available for the community, by the community, can be browsed using the SCS Library search function. Consider it a SIF registry: if you need a base SIF container, or are working with a fellow researcher who has published some work and included a container to verify their efforts, you can find it with the Search function in the web interface.

Since we created a TensorFlow container for our WSL2 example, let’s start by looking for other TensorFlow containers. Our search shows multiple TensorFlow containers, including the one demonstrated in this series.

Next, we’ll do a quick search for others sharing their Alpine containers, which of course you can always select from the Sylabs base images. We created an Alpine container called MyAlpine earlier in our demonstration series and can see the example “pull command” to retrieve this container.

Let’s try searching for another popular container, RStudio.

And finally, we can look for a WTTR containerized weather forecasting container, like the one we created, encrypted, and signed.

Because this container was encrypted and signed, we can see the “lock” icon next to the container name and tag, along with the “signed” icon, with additional metadata below them.

We can also see the example pull command that the Singularity client can use for this container too.


We hope you’ve enjoyed this five-part series demonstrating Sylabs’ Container Services platform. It included a containerized workflow in WSL2 with TensorFlow and Keras, and some advanced functions in SingularityCE, with signed and encrypted containers stored in the SCS Library, and the simplified search function. Follow us at @Sylabs and @Singularity_CE, or sign up for our newsletter to stay updated on the latest news and videos. Please feel free to try out the services and let us know about your experience and what features you’d like to see.
