Containers
Users of the Strong Compute ISC develop and launch experiments from inside Containers, which facilitate access to their project data and allow the flexibility to install any necessary project dependencies. Users can generate and start their Containers by visiting the "Workstations" page on Control Plane.
We have heard from many users wanting more freedom to install necessary dependencies without compromising the viability of their Containers, so we have changed the way we generate and serve containers to provide exactly that.
From 6 February 2025, all newly generated Containers will be "Freedom" containers (the new Container type). All Containers generated prior to 6 February 2025 are now designated "Legacy" containers and will continue to work the same way.
Important things to note about the new Freedom type Containers:
Users are free to install anything they like to (almost) anywhere they like within their Freedom Container. This means pip installing to the Container itself (though we would still strongly recommend using virtual environments for python dependencies), and apt installing much more to the Container.
The new Freedom Container is still limited in size to a maximum of 75GB. This means both pip and apt installs are now limited only by the 75GB size of your Container. Your Freedom Container will still launch on a Workstation and on training nodes just fine.
The new Freedom Container does not come with python! We recommend installing a handful of helpful packages immediately after generating your new Freedom Container (see the example after these notes).
The one trade-off for all of this extra freedom is that Users may not save or install anything to / within the container. If you (or any software you use) attempts to create directories or save anything else at /, an error will be returned. This is not a problem for most apt packages as these will install to pre-existing OS directories.
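For example, a minimal first-run setup for a new Freedom Container might look like the following. This is a sketch only: the package selection is illustrative, and it assumes an Ubuntu-style apt environment inside the Container.

# Illustrative first-run setup for a fresh Freedom Container
apt update
apt install -y python3 python3-pip python3-venv git curl
# Still strongly recommended: keep python dependencies in a virtual environment
python3 -m venv /root/.venv
source /root/.venv/bin/activate
pip install numpy    # example dependency only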
Please contact Strong Compute if you have a Legacy Container that you would like to convert to a Freedom Container.
Legacy Containers:
/ - shipped for burst, accessible to the user only, writeable but not recommended, as over-inflating / will break the container.
/root - shipped for burst, accessible to the user only, 75GB quota, recommended for user data storage.
/shared - not shipped for burst, accessible to the whole Organisation, 100GB quota, fast for small files, eg. millions of KB-sized files.
/scratch - not shipped for burst, accessible to the whole Organisation, suitable for multi-TB data storage, recommended file size >100KB-5MB+
Freedom Containers:
/ - not writeable; an error will be returned if you attempt to create a new file or folder at /.
/root - shipped for burst, accessible to the user only, 75GB quota, recommended for user data storage.
/shared - not shipped for burst, accessible to the whole Organisation, 100GB quota, fast for small files, eg. millions of KB-sized files.
/scratch - not shipped for burst, accessible to the whole Organisation, suitable for multi-TB data storage, recommended file size >100KB-5MB+
The only limitations on installing packages with apt are that they must not require creating files / folders at /, /dev, /sys, or /proc, and must fit within the 75GB user storage quota.
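If you are unsure whether a package will touch the restricted paths, one option (a sketch, assuming apt-get and dpkg are available; htop is a stand-in package name) is to download the .deb and list its contents before installing:

# Inspect where a package would install before committing to it
apt-get download htop
dpkg -c htop_*.deb    # archive paths are relative to /; check for entries under /dev, /sys, or /proc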
Not sure which kind of Container you have? Try running touch /foo.txt. If this works without throwing an error, you're on Legacy. If an error is returned, you're on Freedom.
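As a one-line version of this check (which also cleans up after itself on a Legacy Container):

touch /foo.txt 2>/dev/null && { echo "Legacy"; rm /foo.txt; } || echo "Freedom"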
To make it clearer what travels with your container to burst: the mounted folders that do not go to burst will be moved to /mnt, eg. /mnt/shared and /mnt/scratch. These will be temporarily symlinked during a transition period, however the symlinks will be deprecated in time.
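During the transition period you can confirm where a path actually lives, for example:

readlink -f /shared    # prints /mnt/shared while the temporary symlink is in place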
Users can generate one Container per Organisation that they are a member of, and each Container is generated on one and only one Constellation Cluster ("Cluster"). Once a Container has been generated on a particular Cluster, it cannot be started on any other Cluster. Users can select the Cluster to generate their Container on using the Constellation Cluster menu associated with their desired Container, then click "Generate" to generate the Container.
A Workstation is a computer in a Constellation Cluster that runs User Containers.
Containers can be started with access to one or more GPUs subject to the availability of GPUs on Workstation machines within the Cluster. Users can start their Container with zero or more GPUs by selecting from the GPUs menu for the Container before clicking "Start".
GPUs are made available within Containers this way for development and testing purposes, and are not intended to be used for training. Each Organisation is limited to four (4) "running" Containers with one or more GPUs attached at any given time. If 4 other members of your Organisation already have Containers associated with that Organisation running with 1+ GPUs, Control Plane will return an error when you try to start your Container with 1+ GPUs.
Users can also optionally select a Dataset to be mounted to their Container when it is started from the "Mounted Dataset" menu for the Container. Datasets mounted to the Container are accessible within the Container at /data.
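Once the Container is started, you can confirm the Dataset is available, for example:

ls /data | head    # list the first few entries of the mounted Dataset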
Containers running with zero GPUs share the system resources of the machine such as CPU cores and memory with other user containers running on that machine. Performance of Containers running with zero GPUs may therefore be impacted at times by the number of other User Containers also running on the same workstation.
Containers running with one or more GPUs will have dedicated system resources such as CPU cores and memory. Performance of Containers running with one or more GPUs will not be impacted by other User activity.
Containers are automatically stopped after a period of 3 months without an active SSH connection.
Organisation Admin and Finance members can stop running member containers associated with that Organisation by visiting the Member Containers section at the bottom of the User Credentials page. This allows Organisation Admin and Finance members to manage the utilisation of Organisation Containers, including those running with one or more GPUs.
Containers provide secure access to your data, which is stored in external volumes mounted within your Container, including /root. Containers by default have a maximum dedicated storage volume of 75GB (subject to change). Attempts to save more than 75GB of data, such as datasets or model checkpoints, to your container dedicated storage will be refused by the system.
If your container dedicated storage is over-utilised, you will no longer be able to access your container from any IDE such as VSCode or Cursor. If this is the case, you will still be able to access your container by SSH-ing from a bare terminal.
When you SSH into your container from a bare terminal (Terminal on MacOS and Linux, cmd or powershell on Windows), a report estimating your container dedicated storage utilisation is displayed.
Users are strongly encouraged to delete any unnecessary data from their container dedicated storage, which can be done from a bare terminal, to avoid running out of storage space and to restore access via their preferred IDE.
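Standard tools work for this from a bare terminal; for example, to find the largest directories in container dedicated storage (assuming GNU coreutils):

du -h --max-depth=2 /root 2>/dev/null | sort -hr | head -20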
Tip: run source ~/.bashrc to prompt the above report to display at any time. We recommend deactivating any python virtual environments first to avoid sourcing conflicts.
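For example:

deactivate 2>/dev/null; source ~/.bashrc    # refresh the storage report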
To share files across an organisation easily, use the /shared directory, which is the organisation shared storage. This has a 100GB allocation, which is separate from container dedicated storage.
There is also /scratch, which is a larger allocation (useful for model weights, etc).
Note: burst can only access container dedicated storage, so make sure to copy needed files from /scratch or /shared!
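For example, before launching a burst experiment, you might stage model weights into container dedicated storage (paths here are illustrative only):

cp -r /scratch/my-model-weights /root/weights    # hypothetical paths; copy before bursting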