By default, all Snakemake jobs run on a machine with 4 CPUs available. To modify the number of CPUs allocated to a job, use the `resources` directive of the Snakefile rule as follows:
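A minimal sketch of such a rule. The `cpus` resource key, the rule name, and the file paths are assumptions for illustration; check which resource names your execution backend recognizes (standard Snakemake also offers the `threads` directive).

```python
# Snakefile fragment (sketch) -- "cpus" resource key is an assumption.
rule align_reads:
    input:
        "data/sample.fastq",
    output:
        "results/sample.bam",
    resources:
        cpus=8,  # request 8 CPUs instead of the default 4
    shell:
        # the shell command is a hypothetical example
        "bwa mem -t {resources.cpus} ref.fa {input} > {output}"
```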
Snakemake requires that the user specify the number of cores available to the workflow via the `--cores` command line argument. To define the number of cores available to the job, set the `cores` keyword in your `SnakemakeMetadata`. The `cores` field defaults to 4 if no value is provided.
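A sketch of setting `cores` in the metadata object. The import path and the surrounding fields (`display_name`, `author`) are assumptions based on typical Latch SDK metadata definitions; consult your SDK version for the exact signature.

```python
# latch_metadata.py (sketch) -- import path and field names are assumptions.
from latch.types.metadata import SnakemakeMetadata, LatchAuthor

SnakemakeMetadata(
    display_name="My Workflow",
    author=LatchAuthor(name="Jane Doe"),
    cores=8,  # forwarded to snakemake as --cores 8; defaults to 4 if omitted
)
```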
By default, all Snakemake jobs run on a machine with 8 GB of RAM. To modify the amount of memory allocated to the job, use the `resources` directive of the Snakefile rule. For example, to allocate 32 GB of RAM to a task:
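A sketch of a memory request. The `mem_gb` key is an assumption; many Snakemake setups use the standard `mem_mb` resource instead (32 GB = 32000 MB), so match whichever key your backend expects.

```python
# Snakefile fragment (sketch) -- "mem_gb" resource key is an assumption.
rule assemble:
    input:
        "data/reads.fastq",
    output:
        "results/assembly.fa",
    resources:
        mem_gb=32,  # request 32 GB of RAM instead of the default 8 GB
    shell:
        # hypothetical command; pass the request through to the tool
        "run_assembler --memory {resources.mem_gb}G {input} {output}"
```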
To run a Snakemake job on a GPU instance, modify the `resources` directive of the Snakefile rule. For example:
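A sketch of a GPU request. The `nvidia_gpu` key is an assumption; some backends use `gpu` or `gpus` as the resource name.

```python
# Snakefile fragment (sketch) -- "nvidia_gpu" resource key is an assumption.
rule train_model:
    input:
        "data/train.csv",
    output:
        "results/model.pt",
    resources:
        nvidia_gpu=1,  # schedule this rule on a GPU instance
    shell:
        # hypothetical training command
        "python train.py {input} {output}"
```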
GPU tasks will execute as either a `small_gpu_task` or a `large_gpu_task`, as defined here. To request a large GPU instance, add CPU and memory requirements as follows:
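A sketch of combining GPU, CPU, and memory requests so the job is scheduled onto the larger instance type. The resource keys and the specific CPU/memory values that tip a job from `small_gpu_task` to `large_gpu_task` are assumptions; check the instance definitions linked above for the real thresholds.

```python
# Snakefile fragment (sketch) -- keys and threshold values are assumptions.
rule train_large_model:
    input:
        "data/train.csv",
    output:
        "results/model.pt",
    resources:
        nvidia_gpu=1,
        cpus=31,     # CPU request sized for the large GPU instance
        mem_gb=120,  # memory request sized for the large GPU instance
    shell:
        # hypothetical training command
        "python train.py {input} {output}"
```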
Limitations:
The `container` directive is currently not supported inside GPU instances. Use conda or add runtime dependencies to your Dockerfile to use GPUs.