Add support for batched tasks.#668

Open
bosilca wants to merge 3 commits into ICLDisco:master from bosilca:topic/batched_tasks

Conversation

@bosilca
Contributor

@bosilca bosilca commented Sep 10, 2024

The idea is the following:

  • task incarnations (aka BODY) can be marked with the "batch" property, allowing the runtime to provide the task with the entire list of ready tasks of the execution stream instead of just extracting the head.
  • this list of ready tasks is in fact a ring, which the kernel can trim and divide into the tasks to be batched and the rest. While the batch group is submitted for execution (the user's responsibility), the remaining tasks are added back to the stream's pending list, in the order in which they were provided in the ring. This mechanism also allows the user to reorder the tasks based on some user-level criteria.
  • the kernel also needs to provide a callback in the gpu_task complete_stage, such that the runtime can call the specialized function able to complete all batched tasks.

Comment thread on parsec/mca/device/device_gpu.c:

     if( NULL != type_property) {
    -    if (!strcasecmp(type_property->expr->jdf_var, "cuda")
    +    if (!strncasecmp(type_property->expr->jdf_var, "cuda", 4) /* for batched */
Contributor


Looks like a leftover from a prior iteration of this patchset, which used type=cuda_batched instead of adding a new batched property.

I assume the expectation is that we can have batched and non-batched CUDA bodies simultaneously. Did you test that this works?

bosilca added 3 commits May 9, 2026 02:24
The idea is the following:
- task incarnations (aka BODY) can be marked with the "batch" property,
  allowing the runtime to provide the task with the entire list of ready
  tasks of the execution stream instead of just extracting the head.
- this list of ready tasks is in fact a ring, which the kernel can trim
  and divide into the batch and the rest. The rest of the tasks are left
  in the ring, while the batch group is submitted for execution.
- the kernel also needs to provide a callback in the gpu_task
  complete_stage, such that the runtime can call the specialized
  function able to complete all batched tasks.

Signed-off-by: George Bosilca <gbosilca@nvidia.com>
Replace the CUDA-specific batch build switch with PARSEC_HAVE_DEV_CAPABILITY_BATCH so batching is a runtime capability shared by all supported device types. Export the new option through parsec_options and PaRSECConfig.

Add per-device MCA parameters to disable batching for CPU, recursive, CUDA, HIP, and Level Zero devices. Use shared helpers to sanitize batch chore types in DTD and to gate GPU task-ring batching on the selected device.

Teach PTG to accept batch=true for CPU/default bodies as well as typed device bodies, and add CPU batch examples for both PTG and DTD with ctest coverage for the enabled and CPU-disabled DTD paths.

Signed-off-by: George Bosilca <gbosilca@nvidia.com>
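Per the commit message, PTG now accepts batch=true on both typed and CPU/default bodies. A hedged sketch of what such a JDF body might look like (the property name comes from this PR's description; the exact spelling and placement in the bracket list may differ from the merged grammar):

```
BODY [type=CUDA batch=true]
{
    /* the runtime hands this body the full ring of ready tasks,
       not just the head of the execution stream */
}
END
```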
@bosilca bosilca force-pushed the topic/batched_tasks branch from 88bbbd3 to 41fa201 Compare May 9, 2026 06:32
@bosilca bosilca requested a review from abouteiller May 9, 2026 06:42