Containers

Users of the Strong Compute ISC develop and launch experiments from inside Containers, which provide access to their project data and the flexibility to install any necessary project dependencies. Users can generate and start their Containers by visiting the "Workstations" page on Control Plane (https://cp.strongcompute.ai).

Update 6 February 2025: Legacy -> Freedom Containers

We have heard from many users wanting more freedom to install necessary dependencies without compromising the viability of their Containers, so we have changed the way we generate and serve containers to provide exactly that.

From 6 February 2025, all newly generated Containers will be "Freedom" containers (the new Container type). All Containers generated prior to 6 February 2025 are now designated "Legacy" containers and will continue to work the same way.

Important things to note about the new Freedom type Containers:

  • Users are free to install anything they like to (almost) anywhere they like within their Freedom Container. This means you can pip install to the Container itself (though we still strongly recommend using virtual environments for Python dependencies; see the sketch after this list) and apt install much more to the Container.

  • The new Freedom Container is still limited in size to a maximum of 75 GB. This means both pip and apt installs are now limited only by the 75 GB size of your Container. Your Freedom Container will still launch on a Workstation and on training nodes just fine.

  • The new Freedom Container does not come with Python! We recommend installing a handful of helpful packages immediately after generating your new Freedom Container.

apt update && apt install python3-dev python3-pip python3-virtualenv git nano
  • The one trade-off for all of this extra freedom is that Users may not save or install anything directly at / within the container. If you (or any software you use) attempt to create directories or save anything else at /, an error will be returned. This is not a problem for most apt packages, as these install to pre-existing OS directories.
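
For example, after running the apt command above, a minimal Python setup in a new Freedom Container might look like the following (a sketch only; the virtual environment path and example package are illustrative):

python3 -m virtualenv /root/venv   # create a virtual environment in user storage
source /root/venv/bin/activate     # activate it
pip install numpy                  # dependencies now install into the venv, not the Container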

Please contact Strong Compute if you have a Legacy Container that you would like to convert to a Freedom Container.

Legacy Containers (generated prior to 30 January 2025)

  • / - shipped for burst, accessible to the user only, writeable, but not recommended for data storage as over-inflating / will break the container.

  • /root - shipped for burst, accessible to the user only, 75GB quota, recommended for user data storage.

  • /shared - not shipped for burst, accessible to the whole Organisation, 100 GB quota, fast for small files, e.g. millions of KB-sized files.

  • /scratch - not shipped for burst, accessible to the whole Organisation, suitable for multi-TB data storage, recommended for files larger than 100 KB, ideally 5 MB or more.

Freedom Containers (generated after 30 January 2025)

  • / - not writeable; attempting to create a new file or folder at / will return an error.

  • /root - shipped for burst, accessible to the user only, 75GB quota, recommended for user data storage.

  • /shared - not shipped for burst, accessible to the whole Organisation, 100 GB quota, fast for small files, e.g. millions of KB-sized files.

  • /scratch - not shipped for burst, accessible to the whole Organisation, suitable for multi-TB data storage, recommended for files larger than 100 KB, ideally 5 MB or more.

The only limitations on installing packages with apt are that they must not require creating files or folders at /, /dev, /sys, or /proc, and must fit within the 75 GB user storage quota.
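
If you are unsure whether a package would write outside these permitted locations, one option is to inspect its file list with apt-file before installing (a sketch; htop is just an example package):

apt update && apt install apt-file   # apt-file itself must be installed first
apt-file update                      # fetch the package file-list index
apt-file list htop                   # show every path the package would install to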

Not sure which kind of Container you have? Try running touch /foo.txt. If this works without throwing an error, you have a Legacy Container. If an error is returned, you have a Freedom Container.
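
Assuming a bash shell, the same test can be run as a one-liner that also cleans up after itself on Legacy Containers:

touch /foo.txt 2>/dev/null && rm /foo.txt && echo "Legacy Container" || echo "Freedom Container"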

Updates coming soon

To make it clearer what travels with your container to burst: the mounted folders that do not go to burst will be moved to /mnt, e.g. /mnt/shared and /mnt/scratch.

These will be temporarily symlinked during a transition period; however, the symlinks will be deprecated in time.

Generating your container

Users can generate one Container per Organisation that they are a member of, and each Container is generated on one and only one Constellation Cluster ("Cluster"). Once a Container has been generated on a particular Cluster, it cannot be started on any other Cluster. Users can select the Cluster to generate their Container on using the Constellation Cluster menu associated with their desired Container, then click "Generate" to generate the Container.

Starting your container on a Workstation

A Workstation is a computer in a Constellation Cluster that runs User Containers.

Containers can be started with access to one or more GPUs subject to the availability of GPUs on Workstation machines within the Cluster. Users can start their Container with zero or more GPUs by selecting from the GPUs menu for the Container (leaving the "Burst Shape" as "None", more on this below) before clicking "Start".

GPUs are made available within Containers this way for development and testing purposes, and are not intended to be used for training. Each Organisation is limited to four (4) "running" Containers with one or more GPUs attached at any given time. If 4 other members of your Organisation already have Containers associated with that Organisation running with 1+ GPUs, Control Plane will return an error when you try to start your own Container for that Organisation with 1+ GPUs.

Users can also optionally select a Dataset to have mounted to their Container when it is started, from the "Mounted Dataset" menu for the Container. Datasets mounted to the Container are accessible within the Container at /data.
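
Once your Container is running, you can confirm the Dataset is available by listing the mount point:

ls /data   # contents of the mounted Dataset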

Containers running with zero GPUs share the system resources of the machine, such as CPU cores and memory, with other User Containers running on that machine. Performance of Containers running with zero GPUs may therefore be impacted at times by the number of other User Containers also running on the same Workstation.

Containers running with one or more GPUs will have dedicated system resources such as CPU cores and memory. Performance of Containers running with one or more GPUs will not be impacted by other User activity.

When your container has started, the container status will show "running" and an SSH command will be displayed with which you can connect to your running container.
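
The exact SSH command to use is the one displayed in Control Plane; it will look something like the sketch below (user, hostname, and port are purely illustrative). Once connected, nvidia-smi is a quick way to confirm that any requested GPUs are visible:

ssh root@workstation.example.com -p 2222   # copy the real command from Control Plane
nvidia-smi                                 # lists the GPUs attached to the Container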

Burst containers

Users can also start their containers on a "burst" workstation, which means a cloud machine spun up for the purpose of running the User's container.

Users can launch their container on a burst workstation by selecting a shape from the "Burst Shapes" menu for the container. The "Action" button will then change to show "Burst". Clicking "Burst" will start the process of initialising the burst workstation.

The menu of available burst workstation machine types is available on the "Burst" page in Control Plane.

Starting the burst workstation will take 5-10 minutes, so please be patient. When you start your burst workstation, the ISC will provision the on-demand instance for you and ship your container (as well as any requested mounted dataset) to that workstation machine.

When started, your container SSH command will be displayed and the container status will show "running".

When you Stop your container on a burst workstation, the ISC will ship your container back from the burst workstation to your container's home cluster. Stopping can also take 5-10 minutes.

Containers running on a burst workstation will have volumes at /shared and /scratch which are not available to other members of the User's Organisation (their containers will be running on machines in a different cluster). These volumes are not shipped from the container's home cluster but are created empty on the burst workstation. Similarly, the /shared and /scratch volumes are not shipped back to the container's home cluster when the container is stopped on the burst workstation.

Users should save data in /root when working in a container on a burst workstation to ensure that data is shipped back to the container's home cluster when stopped.
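
For example, if a tool has written output under /scratch on the burst workstation, move it into /root before stopping (paths are illustrative):

mv /scratch/outputs /root/outputs   # /root ships back to the home cluster; /scratch does not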

Containers running on a burst workstation can launch experiments in compute_mode = "burst" but cannot launch experiments in compute_mode = "cycle" or compute_mode = "interruptible".
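
The compute_mode value is set in your experiment launch file; a hypothetical sketch of the relevant line (see the Launching Experiments and ISC Commands pages for the full schema):

compute_mode = "burst"   # "cycle" and "interruptible" cannot be launched from a burst container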

Timeout

Containers are automatically stopped after a period of 3 months without an active SSH connection.

Member containers

Organisation Admin and Finance members can stop running member containers associated with that Organisation by visiting the Member Containers section at the bottom of the User Credentials page. This allows Organisation Admin and Finance members to manage the utilisation of Organisation Containers, including those running with one or more GPUs.

Where to save data

Containers provide secure access to your data, which is stored in external volumes mounted within your Container, including /root. Containers by default have a maximum dedicated storage volume of 75 GB (subject to change). Attempts to save more than 75 GB of data, such as datasets or model checkpoints, to container dedicated storage will be refused by the system.

If your container dedicated storage is over-utilised, you will no longer be able to access your container from any IDE such as VSCode or Cursor. If this is the case, you will still be able to access your container by SSH-ing from a bare terminal.

When you SSH into your container from a bare terminal (Terminal on macOS and Linux, cmd or PowerShell on Windows), a report estimating your container dedicated storage utilisation is displayed as follows.

container dedicated storage utilisation: X% (X/75G)
organisation shared storage utilisation: Y% (Y/100G)

Users are strongly encouraged to delete any unnecessary data from their container dedicated storage, which can be done from a bare terminal, to avoid running out of storage space and to restore access via their preferred IDE.
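
To find what is consuming space, a standard disk-usage check from the bare terminal works well (a sketch):

du -h --max-depth=1 /root | sort -h   # largest directories under /root, sorted by size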

Tip: run source ~/.bashrc to display the above report at any time. We recommend deactivating any Python virtual environments first to avoid sourcing conflicts.

To share files across an Organisation easily, use the /shared directory, which is the organisation shared storage. This has a 100 GB allocation, which is separate from container dedicated storage.
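
For example, to make a file available to everyone in your Organisation (the path and filename are illustrative):

cp /root/results.csv /shared/results.csv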

There is also /scratch, which is a larger allocation (useful for model weights, etc).

Note: burst workstations can only access container dedicated storage, so make sure to copy any needed files from /scratch or /shared into /root before bursting!
