Singularity Container Workflow: Part 3 – Create an Account & Authentication Token

By Staff

Aug 17, 2022 | Blog, How To Guides

Create an Account & Authentication Token

Now that we have SingularityCE installed in WSL2 with NVIDIA GPU support enabled, we will create a Singularity Container Services account and configure the local Singularity client, followed by building a remote container.

This demonstration includes the following steps:

  • Sign up for a Singularity Container Services account
  • Remotely build a SIF container from a Docker source
  • Publish the resulting container to the SCS Library

An account may be created using SSO linked to a Google, GitHub, GitLab, or Microsoft ID. To get started, browse to https://cloud.sylabs.io.

Registration

Select the blue sign-up button; registration begins with selecting a preferred authentication provider.

You’ll then be asked to choose a Sylabs username. The username must be all lowercase letters, and it will identify you in the container library and other Singularity Container Services. In this scenario we are choosing “sylabsdemo”.

The Dashboard is laid out as follows, starting from the My Projects section on the left.

My Projects Section

  • My Projects: Clicking the My Projects section on the left displays your current projects in the center. Clicking a highlighted project name displays an overview of the project, such as whether it is public or private.
  • My Remote Builds: View remote builds in progress, as well as past remote builds and their associated definition files.
  • My Keys: Lists the public keys created within the Singularity CLI and pushed to Singularity Container Services.
  • Access Tokens: Allows access to Singularity Container Services by way of the SingularityCE CLI, or other endpoints when available.
  • Quotas: Lists the quotas associated with Build Minutes and Storage.

Official Base Images

On the right side of the screen is a list of curated OS SIF containers that are available to all SCS registrations. These images can be pulled via the Singularity command line, but more on that in a later section of this demonstration.

Selecting Alpine brings us to the repository artifacts, which include the latest signed version (3.15) for Power, Intel, and ARM architectures. Scrolling down displays older versions that have not been signed. Scroll back to the top and select Dashboard for the next steps.
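Although pulling images from the CLI is covered later in this demonstration, as a quick preview, pulling the signed Alpine image shown above would look something like this (the short library:// path resolves to the official base image collection):

$ singularity pull library://alpine:3.15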

Access Token

Next we will connect the Singularity CLI client to SCS. Back at the Dashboard, click the Access Tokens button, enter a name in “Create new access token”, and click the green “+” button. In this scenario, it is called “sdkey2”.

You’ll be presented with the token in a copy-paste text box to enter into your password manager, or to download as a ‘sylabs-token’ file. Choose the method that works best for your workflow. Important: Tokens are the keys to the Singularity Container Services, and they may be revoked prior to the automatic 30-day expiration period. Before leaving this screen, copy the displayed access token into your favorite password manager or your clipboard. Once you move away from this screen, you will no longer be able to view the token, which is why you should securely store the “sylabs-token” file.
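If you downloaded the token as a “sylabs-token” file, recent SingularityCE releases can also log in non-interactively by pointing the CLI at that file with the --tokenfile flag (a sketch; adjust the path to wherever you stored the file):

$ singularity remote login --tokenfile ~/Downloads/sylabs-token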

Singularity CLI remote login

The Singularity command line interacts with Singularity Container Services to build, push, and verify container images. To authorize the Singularity client (CLI) to access the library, the access token is required. Hop back into your WSL2 terminal to first check the Singularity client status of the default endpoint. By default, Singularity will have “SylabsCloud” configured. *If you are using a customer-deployed version of Singularity Enterprise, your endpoint may differ.
$ singularity remote status
INFO:    Checking status of default remote.
SERVICE    STATUS  VERSION             URI
Builder    OK      v1.6.3-0-gc98a236a  https://build.sylabs.io
Consent    OK      v1.6.4-0-g3e6c61e   https://auth.sylabs.io/consent
Keyserver  OK      v1.18.8-0-gb1b58f9  https://keys.sylabs.io
Library    OK      v0.3.5-0-g66fbae8   https://library.sylabs.io
Token      OK      v1.6.4-0-g3e6c61e   https://auth.sylabs.io/token

No authentication token set (logged out).

You will notice that the Singularity client is “(logged out)” of the remote services.
Let’s display the default Singularity Container Services endpoint. This indicates the Singularity client is configured to use the Remote Builder, Library, and Keystore at cloud.sylabs.io:

$ singularity remote list
Cloud Services Endpoints
========================

NAME         URI              ACTIVE  GLOBAL  EXCLUSIVE  INSECURE
SylabsCloud  cloud.sylabs.io  YES     YES     NO         NO

Keyservers
==========

URI                     GLOBAL  INSECURE  ORDER
https://keys.sylabs.io  YES     NO        1*

* Active cloud services keyserver

Now we will log in to those remote services. We’ll take the content of the “sylabs-token” Access Token file that was previously created and paste it at the singularity remote login command’s “Access Token:” prompt. The contents of the paste are hidden. After pressing Enter, the singularity remote login command will verify the token, followed by an “INFO: Access Token Verified!” message.

$ singularity remote login
Generate an access token at https://cloud.sylabs.io/auth/tokens, and paste it here.
Token entered will be hidden for security.
Access Token: <paste here>
INFO:    Access Token Verified!
INFO:    Token stored in /home/demo/.singularity/remote.yaml

With verification of the access token, the next step is to check the status of the services.

$ singularity remote status 
INFO:    Checking status of default remote.
SERVICE    STATUS  VERSION             URI
Builder    OK      v1.6.3-0-gc98a236a  https://build.sylabs.io
Consent    OK      v1.6.4-0-g3e6c61e   https://auth.sylabs.io/consent
Keyserver  OK      v1.18.8-0-gb1b58f9  https://keys.sylabs.io
Library    OK      v0.3.5-0-g66fbae8   https://library.sylabs.io
Token      OK      v1.6.4-0-g3e6c61e   https://auth.sylabs.io/token
INFO:    Access Token Verified!

Valid authentication token set (logged in).

Verification of the authentication token and the “(logged in)” message indicate success.

A Container Build 

You can interact with the remote build services through the SCS Recipe Editor, the Singularity CLI, or the open-source Singularity Remote Build Client.

The SCS Remote Build function provides a valuable feature for anyone who:

  • Does not have elevated privileges on their workstation
  • Wants to perform a container build request from a tablet or web interface
  • Needs to include a SIF image build directly in a CI/CD workflow
  • Would like to build for an architecture different from that of their client system (x86, ARM, Power)

*Singularity Enterprise can support multiple architectures; the free version of Singularity Container Services currently supports x86 only.
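For endpoints that do support additional architectures, the target can be requested with the build command’s --arch flag. A sketch, assuming an ARM64-capable builder and placeholder file names:

$ singularity build --remote --arch arm64 container.sif recipe.def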

The remote build function of SCS is secure. Each build request creates a new compute environment specifically for your build; it then builds the container, securely returns it to the SCS Library or to your workstation, and tears down that session’s environment.

Today we are demonstrating how to build a container from the definition editor within SCS, using a DockerHub source container and directly storing it in the SCS Library. This container will be 2.56GB, but you may find yourself building much smaller containers more suitable to your needs.

*The freemium version of Singularity Container Services comes with 11GB of storage.

As mentioned earlier, SCS has a feature to build a SIF container directly from the web GUI without the need to use the Singularity CLI.

Let’s go to the Dashboard and then into Remote Builder.

Below, you’ll see a previously created container, and that the account has used 57 minutes of its build-minute allocation.

Through the Build Identifier and Recipe links, we have the option to view the container build logs and the associated build recipe file.

There is also an option to delete the container.

Now we will scroll back up to the sample definition file listed in the definition file editor and quickly build that container. The “Bootstrap” keyword tells the builder to pull from DockerHub, and the “From” keyword points to the Alpine container name and the “latest” version tag.
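The sample in the editor follows the standard definition file layout; a minimal sketch (the editor’s actual sample contents may differ):

Bootstrap: docker
From: alpine:latest

%help
    A minimal Alpine container built with the SCS Remote Builder.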

All you need to do here is enter a name in the “Repository” field, following the format described in the text box.

We’ll put this in our “container” collection, followed by a container name of “myalpine” and a tag named “test”.
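Assuming the “sylabsdemo” username chosen earlier, the resulting library path for this container would be:

library://sylabsdemo/container/myalpine:test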

Clicking “Submit Build” will quickly generate the container, and we can watch the progress in the “Build Output” window. The build starts by pulling the blob sources from DockerHub, writing the manifest, and storing the internal signatures. Layers will be unpacked, the %help detail from the recipe file will be included along with the labels, and then the SIF file will be created. The build will complete and the container will be stored in the SCS Library. Since the container has not been signed, container verification is skipped and the upload to the SCS Library completes. Your container is built, as indicated by the green success status indicator.

From here, the recipe file can be viewed, as well as the container image and the overall build time.

Let’s take a look at the container we just built. The container is identified as an AMD64 architecture, with the “test” tag, the creation date, a unique identifier, and the image size. We can also see the image is yet to be signed, which we will get to later.

OK, now that we’ve created a quick container, let’s create the TensorFlow container needed to run the sample workload. We’ll repeat a process similar to the Alpine container build.

We have a definition file already created and will paste its content into the recipe editor. It takes the source container from Docker and adds %labels and %help information describing the container. We’ll put it in the container repository and name it accordingly. This is a sizable container (2.56GB) and will take a little more time to build, so we’ll fast-forward.
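Such a definition file would look something like the following sketch (the %labels and %help contents here are illustrative, not the exact file used):

Bootstrap: docker
From: tensorflow/tensorflow:latest-gpu

%labels
    Author sylabsdemo

%help
    TensorFlow with GPU support, built remotely on SCS for the sample workload.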

The container is now complete, and we’ll take a look at the details. As with all containers, this one can be downloaded from the web interface, and we can see the sample pull commands and the commands needed to sign the container (a pull sketch follows the list below). Scrolling back through the logged output of the build, we can see that:

  • The Remote Builder was selected and the access token was authenticated
  • cloud.sylabs.io is the target service
  • The build gathers the blob sources from DockerHub
  • The layers are unpacked
  • The SIF image is created and the build completes
  • A warning notes that the container does not yet have a signature; this is informational only
  • The completed container is placed into the SCS Library
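As a quick reference, the sample pull command shown in the web interface should follow this pattern (a sketch assuming the “sylabsdemo” username used throughout this demonstration):

$ singularity pull library://sylabsdemo/containers/tensorflow:latest-gpu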

While we have not covered performing a remote build with the Singularity CLI or with the Singularity Remote Build Client, here is an example command to build the same TensorFlow container as previously demonstrated, from within the WSL2 environment. If you are interested in seeing a video of the Singularity Remote Build Client, or have another suggestion, let us know!

Singularity CLI

 $ singularity build --remote library://{project}/containers/tensorflow:latest-gpu docker://tensorflow/tensorflow:latest-gpu
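Here, {project} is a placeholder for your SCS username (“sylabsdemo” in this demonstration); the collection, container, and tag names follow the same repository format used in the Remote Builder web interface.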

Summary

Next week we will cover signing the container we just made, and more! Thank you for joining us…
