The `VALID_RESOURCES_MAPPING` derived from the AWS documentation on `ResourceRequirement` is only required for Fargate compute environments, yet it forces job definitions to be very specific about how much memory and how many vCPUs to request. When you request an amount of memory that isn't a key in this dictionary, or a vCPU count that doesn't match the requested memory value, things don't behave as expected: the former silently selects the minimum memory for the vCPUs provided, and the latter errors out. (This happens in the `_validate_resources` method of the `BatchJobBuilder` class.)
It would be great if this handling were more flexible for EC2 compute environments, which can use any instance type and are far less constrained than the Fargate-specific table. I may get the chance to implement an improvement here, but I figured I'd document the problem in case others encounter the same.
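To illustrate the idea, here is a minimal sketch of what the relaxed handling could look like. The mapping values follow the Fargate vCPU/memory table in the AWS Batch `ResourceRequirement` documentation; the function name and signature are hypothetical, not the library's actual `_validate_resources` implementation:

```python
"""Sketch: enforce the Fargate vCPU/memory table only for Fargate jobs.

The mapping below follows the Fargate combinations in the AWS Batch
ResourceRequirement documentation (vCPU -> allowed memory values in MiB).
The function is a hypothetical stand-in for _validate_resources.
"""

VALID_RESOURCES_MAPPING = {
    0.25: [512, 1024, 2048],
    0.5: [1024, 2048, 3072, 4096],
    1: list(range(2048, 8193, 1024)),
    2: list(range(4096, 16385, 1024)),
    4: list(range(8192, 30721, 1024)),
}

def validate_resources(vcpus, memory, is_fargate):
    """Return (vcpus, memory), applying the strict table only on Fargate."""
    if not is_fargate:
        # EC2 compute environments accept arbitrary positive values,
        # since any instance type may back the environment.
        if vcpus <= 0 or memory <= 0:
            raise ValueError("vcpus and memory must be positive")
        return vcpus, memory
    if vcpus not in VALID_RESOURCES_MAPPING:
        raise ValueError(f"unsupported Fargate vCPU count: {vcpus}")
    allowed = VALID_RESOURCES_MAPPING[vcpus]
    if memory not in allowed:
        raise ValueError(
            f"memory {memory} MiB is not valid for {vcpus} vCPU on Fargate; "
            f"choose one of {allowed}"
        )
    return vcpus, memory
```

With a check like this, an EC2 job could request, say, 3 vCPUs and 7000 MiB without being forced onto the Fargate grid, while Fargate jobs still fail fast on invalid combinations.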