Defining Cloud Resources
When a workflow is executed and tasks are scheduled, the machines needed to run each task are provisioned automatically and managed until the task completes. Tasks can be annotated with the resources they are expected to consume at runtime (e.g. CPU, RAM, GPU), and these requests are fulfilled during scheduling.
Prespecified Task Resource
The Latch SDK currently supports a set of prespecified task resource requests represented as decorators:
small_task
: 2 CPUs, 4 GiB of memory, 0 GPUs

medium_task
: 32 CPUs, 128 GiB of memory, 0 GPUs

large_task
: 96 CPUs, 192 GiB of memory, 0 GPUs

small_gpu_task
: 8 CPUs, 32 GiB of memory, 1 GPU (24 GiB of VRAM, 9,216 CUDA cores)

large_gpu_task
: 31 CPUs, 120 GiB of memory, 1 GPU (24 GiB of VRAM, 9,216 CUDA cores)

v100_x1_task
: 16 CPUs, 64 GiB of memory, 1 V100 GPU (16 GiB of VRAM, 5,120 CUDA cores)

v100_x4_task
: 64 CPUs, 256 GiB of memory, 4 V100 GPUs (64 GiB of VRAM, 20,480 CUDA cores)

v100_x8_task
: 128 CPUs, 512 GiB of memory, 8 V100 GPUs (128 GiB of VRAM, 40,960 CUDA cores)
We use the tasks as follows:
Custom Task Resource
You can also specify arbitrary task resources using the `@custom_task` decorator:
Dynamic Task Resource
You can dynamically define task resources based on the task's input parameters by passing functions as arguments to the `custom_task` decorator. The provided functions execute at runtime, and the task launches with the resulting resource values:
In the provided example, the `allocate_cpu` function processes the input parameter `files`. Upon execution, it returns an integer representing the total number of CPU cores that should be allocated to the task, based on the size of the input files. Note that the `my_task` function and the `allocate_cpu` function both accept a parameter named `files` of type `List[LatchFile]`.