Workstations & Containers
Users of the Strong Compute ISC develop and launch experiments from inside containers, which facilitate access to project files and data and allow users to install any necessary project dependencies. Users can create and start their containers by visiting the "Workstations" page on Control Plane (https://cp.strongcompute.ai).
Important things to note about containers on Strong Compute:
Containers are associated jointly with both users and organisations. Containers can be thought of as workspaces belonging to a user which are each associated with a specific organisation.
Users can create multiple containers associated with each organisation they belong to.
Containers have no size limit, but the larger the container (packages installed, files saved inside) the longer the time required to prepare the container to run, so it is recommended to try to make containers no bigger than necessary.
User container data is stored in cloud storage when the container is Offline, downloaded to a burst workstation or a cluster when the user requests to start the container, and shipped back to cloud storage when the container is stopped.
Containers are run in interactive mode on burst workstations or cluster workstations when a user clicks "Start". When the container is Running the user is able to create and make changes to files in the container via SSH.
Users are billed for time that the container is Running and for cloud storage of container data. Container data is shipped to cloud storage when an experiment with compute_mode="interruptible" or compute_mode="burst" is launched, and when the container is stopped. See Billing for more information about billing.
Containers can be deleted when the container status is Offline, which has the effect of permanently deleting all container data from cloud storage and stopping billing for container storage.
Containers are run in non-interactive mode on training nodes (e.g. a "burst cluster") when the user launches an experiment. A copy of the container is run on each compute node, and the user's project files and data are made available read-only to the container.
Strong Compute bases all containers on a set of pre-prepared images. User container data is stored as a sequence of diffs from these base images.
Container diffs are created each time an experiment is launched (except for experiments with compute_mode="cycle") and when containers are stopped on a burst workstation or a cluster.
Containers do not come with python! We recommend installing a handful of helpful packages immediately after creating and starting your container.
apt update && apt install python3-dev python3-pip python3-virtualenv git nano
Users are free to install anything they like to (almost) anywhere they like within their container. This means pip installing to the container itself (though we would still strongly recommend using virtual environments for python dependencies, as sketched after this list), and apt installing much more to the container.
Containers do not allow saving or installing anything at /. Attempts to save or install anything to / will result in an error. This is not a problem for most apt packages as these will install to pre-existing OS directories.
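For example, a minimal setup after the apt install above might create a virtual environment under /root for python dependencies. The environment name and package below are illustrative:
# Create and activate a python virtual environment under /root (name is illustrative)
python3 -m virtualenv /root/venv
source /root/venv/bin/activate
# Install project dependencies inside the environment rather than system-wide
pip install numpy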
This functionality includes significant differences from the nature and behaviour of containers on Strong Compute prior to 11 July 2025. Containers that were created prior to 11 July 2025 are no longer functional but can be converted to the new container format. Users with containers created prior to 11 July 2025 who wish to have their containers converted to the new container format should contact Strong Compute.
Container orientation
The internal structure of containers includes the following directories and files.
/root: the primary intended destination for user project files and data (see the usage sketch after this list). This directory is available to all nodes when running an experiment.
/shared: a secondary storage volume made available to containers while running in interactive mode on a cluster workstation that is private to an organisation. This volume:
Is read-write accessible to all containers belonging to the same organisation running in interactive mode on a cluster.
Is intended for use as a shared space for organisation members to securely share files and data.
Fundamentally belongs to a cluster, is not shipped to cloud storage, and is not available to compute nodes while running experiments.
Is limited to 100GB in size on a given cluster.
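As a brief sketch of how these two locations are typically used while working interactively (the directory and file names below are illustrative):
# Save project work under /root so it is shipped back to cloud storage on stop
mkdir -p /root/myproject
echo "results" > /root/myproject/results.txt
# On a cluster workstation, place files in /shared for other organisation members
cp /root/myproject/results.txt /shared/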
Creating your container
Users can create containers from the Containers section of the Workstations page in Control Plane.

Starting your container on a Workstation
A Workstation is a computer in a Constellation Cluster that runs user Containers in interactive mode. Workstations can either be "burst workstations" (cloud machines provisioned on-demand for the sole purpose of running a container in interactive mode) or "cluster workstations" (machines designated as workstation machines within a running cluster).
Burst workstations
Users can launch their container on a burst workstation by clicking "Start" on the container, selecting a burst shape from the "Select Burst Shapes" menu for the container, and clicking "Burst".
The menu of available burst workstation machine types, with their details and attributes, is available on the "Burst" page in Control Plane.
Starting the burst workstation will take 5-10 minutes, so please be patient. When you start your burst workstation, the ISC will provision the on-demand instance for you and ship your container (as well as any requested mounted dataset) to that workstation machine.
When you Stop your container on a burst workstation, the ISC will ship your container back from the burst workstation to cloud storage. Stopping can also take 5-10 minutes.
When started, your container's SSH command will be displayed and the container status will show "Running".
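The exact command is generated per container; a connection will look something like the following, where the address and port are placeholders rather than real values:
# Connect to the running container using the command shown in Control Plane
ssh -p 2222 root@203.0.113.10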
Containers running on a burst workstation will have a volume at /shared which is not available to other members of the user's organisation (their containers will be running on machines in a different cluster). This volume is not shipped from the container's home cluster but created empty on the burst workstation. Similarly, the /shared volume is not shipped back to cloud storage when the container is stopped on the burst workstation.
Users should save data in /root when working in a container in interactive mode to ensure that data is shipped back to cloud storage when stopped.
Containers running on a burst workstation can launch experiments in compute_mode = "burst" but cannot launch experiments in compute_mode = "cycle" or compute_mode = "interruptible".
Cluster workstations
Users can launch their container on a cluster workstation by clicking "Start" on the container, selecting a cluster from the "Select Cluster" menu for the container, and clicking "Start".
Containers can be started with access to one or more GPUs, subject to the availability of GPUs on cluster workstations, by selecting from the "Select GPUs" menu.
Containers running with zero GPUs share the system resources of the machine, such as CPU cores and memory, with other user containers running on that machine. Performance of containers running with zero GPUs may therefore be impacted at times by the number of other user containers also running on the same workstation.
Containers running with one or more GPUs will have dedicated system resources such as CPU cores and memory. Performance of Containers running with one or more GPUs will not be impacted by other user activity.
When your container has started, the container status will show "Running" and an SSH command will be shown with which you can connect to your running container.
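If the container was started with one or more GPUs, a quick way to confirm they are visible after connecting is the standard NVIDIA utility (this assumes the cluster workstation uses NVIDIA GPUs, which this page does not state explicitly):
# List the GPUs visible inside the container
nvidia-smi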
Mounting Datasets
Users can also optionally select one or more Datasets to mount to their container when it is started from the "Select Datasets" menu. Datasets mounted to the Container are accessible within the Container at /data.
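Once the container is running you can confirm what was mounted by listing that directory:
# List datasets mounted at /data
ls /data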
Timeout
Containers are automatically stopped after a period of 3 months without an active SSH connection.
Member containers
Organisation Admin and Finance members can stop running member containers associated with that organisation by visiting the Member Containers section of the Workstations page. This allows organisation Admin and Finance members to manage the utilisation of organisation containers, including those running with one or more GPUs.
Where to save data
Containers provide secure access to your data, which is stored in external volumes mounted within your Container, including /root.
To share files easily across an organisation, use the /shared directory, which is the organisation shared storage. This has a 100GB allocation, which is separate from container dedicated storage.
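To check current utilisation against the 100GB allocation from inside a running container, standard disk-usage commands work:
# Summarise total usage of the organisation shared volume
du -sh /shared
# Break down usage by top-level entry to find large items
du -sh /shared/*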
When you SSH into your container from a bare terminal (Terminal on macOS and Linux, cmd or powershell on Windows), a report estimating your organisation shared storage utilisation is displayed as follows.
organisation shared storage utilisation: Y% (Y/100G)
Tip: run source ~/.bashrc to prompt the above report to display at any time. We recommend deactivating any python virtual environments first to avoid sourcing conflicts.
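For example:
# Leave any active python virtual environment, then re-run .bashrc to print the report
deactivate
source ~/.bashrc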