Commit 93231cc

Gasoonjia authored and facebook-github-bot committed
bring cuda export ci back by using A100 as target GPU
Summary: Currently CI keeps crashing with the error message `Not enough SMs to use max_autotune_gemm mode`, caused by GPU resource limitations on the default runners. Make CI always run on an A100 to bring CI back.

Differential Revision: D104504782
1 parent a49171d commit 93231cc

1 file changed: 6 additions & 0 deletions

backends/cuda/tests/TARGETS
@@ -1,6 +1,7 @@
 load("@fbsource//xplat/executorch/build:runtime_wrapper.bzl", "runtime")
 load("@fbcode_macros//build_defs:python_unittest.bzl", "python_unittest")
 load("@fbcode_macros//build_defs:python_unittest_remote_gpu.bzl", "python_unittest_remote_gpu")
+load("@fbcode_macros//build_defs/lib:re_test_utils.bzl", "re_test_utils")

 oncall("executorch")

@@ -22,6 +23,11 @@ python_unittest_remote_gpu(
         "//executorch/examples/models/toy_model:toy_model",
     ],
     keep_gpu_sections = True,
+    remote_execution = re_test_utils.remote_execution(
+        subplatform = "A100",
+        mig = "false",
+        platform = "gpu-remote-execution",
+    ),
 )

 python_unittest(
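For context, the failure mode this commit works around can be sketched as follows. Inductor's `max_autotune_gemm` mode refuses to autotune GEMMs on GPUs that expose too few streaming multiprocessors (SMs), which is what produced the error on the small default CI GPUs. The threshold constant and helper below are illustrative assumptions, not the exact check in any particular PyTorch release; the A100 SM count (108) is a hardware fact.

```python
# Sketch of the SM-count gate behind the
# "Not enough SMs to use max_autotune_gemm mode" error.
# MIN_SMS_FOR_MAX_AUTOTUNE is an assumed threshold for illustration only.
MIN_SMS_FOR_MAX_AUTOTUNE = 68


def has_enough_sms(sm_count: int, min_sms: int = MIN_SMS_FOR_MAX_AUTOTUNE) -> bool:
    """Return True if a GPU with `sm_count` SMs passes the assumed gate."""
    return sm_count >= min_sms


# An A100 exposes 108 SMs, comfortably above any such threshold, so pinning
# the remote-execution subplatform to "A100" clears the gate; a smaller GPU
# such as a T4 (40 SMs) would trip the error and crash CI.
```

On a real machine the SM count can be read with `torch.cuda.get_device_properties(0).multi_processor_count`, which is why forcing the test to an A100 target (as the TARGETS change above does) makes the mode usable again.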
