Commit 084b358

Authored by champion254, danko, and sekyondaMeta
Fix typo "handles" to "handled" in pinmem_nonblock.py (#3807)
## Description

This PR fixes a small grammatical typo in `pinmem_nonblock.py`.

- **Original:** "...cuda streams can be handles using"
- **Fixed:** "...cuda streams can be handled using"

## Checklist

- [x] The issue that is being fixed is referred to in the description (see above "Fixes #ISSUE_NUMBER") (N/A - typo fix)
- [x] Only one issue is addressed in this pull request
- [x] Labels from the issue that this PR is fixing are added to this pull request
- [x] No unnecessary issues are included in this pull request

Co-authored-by: danko <danko.pistek@gmail.com>
Co-authored-by: sekyondaMeta <127536312+sekyondaMeta@users.noreply.github.com>
1 parent f79c3d9 commit 084b358

File tree: 1 file changed, +1 -1 lines changed


intermediate_source/pinmem_nonblock.py

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@
 # 1. The device must have at least one free DMA (Direct Memory Access) engine. Modern GPU architectures such as Volterra,
 # Tesla, or H100 devices have more than one DMA engine.
 #
-# 2. The transfer must be done on a separate, non-default cuda stream. In PyTorch, cuda streams can be handles using
+# 2. The transfer must be done on a separate, non-default cuda stream. In PyTorch, cuda streams can be handled using
 # :class:`~torch.cuda.Stream`.
 #
 # 3. The source data must be in pinned memory.
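The context lines in this diff list the conditions under which a host-to-device copy can overlap with other work: a free DMA engine, a non-default CUDA stream, and a source tensor in pinned memory. As a minimal sketch (not part of this commit, and assuming PyTorch is installed; the GPU path only runs when a CUDA device is available), the `torch.cuda.Stream` usage the corrected sentence refers to might look like:

```python
# Hypothetical illustration of the conditions described in pinmem_nonblock.py;
# degrades gracefully when PyTorch or a CUDA device is unavailable.
try:
    import torch
    HAS_CUDA = torch.cuda.is_available()
except ImportError:
    torch = None
    HAS_CUDA = False

def async_copy(t):
    """Copy a pinned CPU tensor to the GPU on a separate, non-default stream."""
    stream = torch.cuda.Stream()            # condition 2: non-default cuda stream
    with torch.cuda.stream(stream):
        d = t.to("cuda", non_blocking=True)  # asynchronous host-to-device copy
    stream.synchronize()                     # wait for the copy before using `d`
    return d

if HAS_CUDA:
    src = torch.ones(1024, pin_memory=True)  # condition 3: source in pinned memory
    dst = async_copy(src)
    assert torch.equal(dst.cpu(), src)
```

Without the `stream.synchronize()` call (or an equivalent event wait), reading `d` from another stream would race with the in-flight copy, which is why the tutorial stresses that non-blocking transfers must be handled via `torch.cuda.Stream`.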

0 commit comments
