
System Requirements

Host requirements for running Gigantum Client

Operating Systems

On Windows, only Docker Desktop using Hyper-V is currently supported. This restricts use to systems running the Professional, Education, or Enterprise editions of Windows 10.

In the near future, Docker Desktop using WSL 2 will be supported, which will add support for Windows 10 Home as well.

Mac hardware must be a 2010 or newer model with Intel's hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. macOS must be version 10.14 or newer.
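On macOS you can confirm that the hardware virtualization support described above is present by querying the Hypervisor framework flag, which Docker Desktop relies on. A minimal sketch (the non-macOS branch is just a placeholder so the snippet runs anywhere):

```shell
# Prints 1 when macOS hardware virtualization (EPT/Unrestricted Mode via
# the Hypervisor framework) is available, 0 otherwise.
check_hv() {
  if [ "$(uname)" = "Darwin" ]; then
    sysctl -n kern.hv_support
  else
    echo "not macOS"
  fi
}
check_hv
```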

Most popular Linux distributions support running Docker. Ubuntu and CentOS are the preferred and best-supported distributions. The Quick-start Script supports various distros, including Ubuntu, Amazon Linux, and CentOS.
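Before installing the Client, it can be useful to confirm that Docker is installed and the daemon is reachable. A hedged sketch using standard Docker CLI commands (not Gigantum-specific):

```shell
# Report whether the docker CLI exists and the daemon answers.
docker_status() {
  if command -v docker >/dev/null 2>&1; then
    # Prints the daemon version if reachable.
    docker info --format '{{.ServerVersion}}' 2>/dev/null \
      || echo "docker installed, but daemon not reachable"
  else
    echo "docker not installed"
  fi
}
docker_status
```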

System Resources

The required amount of CPU, RAM, and disk space depends on the intended computation workload, but at a minimum a host should have:

  • 2 cores
  • 4 GB of RAM
  • 30 GB of available disk space (workload dependent)

Note that for macOS, and for Windows running on Hyper-V, these values are minimums for what is allocated to the Docker VM. These settings are available in the Docker Desktop app and, if you use Gigantum Desktop to run the Client, can be set automatically.

It is recommended that a host has at least:

  • 4 cores
  • 8 GB of RAM
  • 50 GB of available disk space (workload dependent)
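A minimal sketch (Linux only) that compares the current host against the recommended values above; swap in the bare minimums (2 cores / 4 GB) if those are your target instead:

```shell
# Recommended values from this guide.
REC_CORES=4
REC_RAM_GB=8

cores=$(nproc)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))

[ "$cores" -ge "$REC_CORES" ] && echo "CPU: OK ($cores cores)" \
  || echo "CPU: below recommendation ($cores cores)"
[ "$mem_gb" -ge "$REC_RAM_GB" ] && echo "RAM: OK (${mem_gb} GB)" \
  || echo "RAM: below recommendation (${mem_gb} GB)"
```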

A warning is displayed in the Client if less than 2.5 GB of storage is available. Container builds and other operations can consume significant disk space, and the Client can become unstable if Docker runs out of available disk space.
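A sketch of an equivalent check you could run yourself (Linux, GNU coreutils). It inspects the filesystem holding `/`; note that Docker's data directory may live on a different filesystem on your host:

```shell
# Warn when free space falls below the Client's ~2.5 GB threshold.
avail_kb=$(df -k --output=avail / | tail -1 | tr -d ' ')
avail_mb=$((avail_kb / 1024))
if [ "$avail_mb" -lt 2560 ]; then
  echo "Low disk: ${avail_mb} MB free; consider 'docker system prune'"
else
  echo "Disk OK: ${avail_mb} MB free"
fi
```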


GPU Support

The Client currently supports GPU acceleration via NVIDIA GPUs only. NVIDIA drivers must be installed on the host, as described in the Using NVIDIA GPUs with CUDA section. If you use the Quick-start Script to install the Client, it can automatically install and configure drivers for you.
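You can verify that a working NVIDIA driver is present with the standard `nvidia-smi` tool, which ships with the driver. A hedged sketch (the fallback branch just keeps the snippet runnable on hosts without a GPU):

```shell
# Print the installed NVIDIA driver version, if any.
check_nvidia() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=driver_version --format=csv,noheader
  else
    echo "NVIDIA driver not found"
  fi
}
check_nvidia
```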

If a Project Base contains CUDA drivers that are compatible with the host driver version, the GPUs will automatically be configured in the Project container at launch. Shared memory allocated to the container is also set automatically, to 2 GB by default, but is configurable via the Client config file.
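The 2 GB default corresponds to Docker's standard `--shm-size` option. As an illustration only (the image name is a placeholder, and the Client sets this for you), a helper that prints the equivalent manual invocation:

```shell
# Show the docker run flag the Client's shared-memory setting maps to.
# "<project-image>" is a placeholder, not a real image name.
show_shm_cmd() {
  shm="${1:-2g}"
  echo "docker run --shm-size=${shm} <project-image>"
}
show_shm_cmd      # default 2 GB
show_shm_cmd 4g   # a larger allocation
```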

If the host contains multiple GPUs, all GPUs are currently passed through to a compatible Project container. If multiple Projects are running, or the Client is run in multi-tenant mode, care must be taken to select which GPU each workload accesses.
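One common way to restrict which GPU a CUDA workload sees is the standard `CUDA_VISIBLE_DEVICES` environment variable. This is a CUDA runtime convention, not a Gigantum-specific setting; you would typically set it inside the Project's environment:

```shell
# Expose only the first GPU (index 0) to subsequent CUDA processes.
export CUDA_VISIBLE_DEVICES=0
echo "Visible GPUs: ${CUDA_VISIBLE_DEVICES}"
```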