
Commit f08200f

morgolock authored and gunes-arm committed
fix: Do not mutate shared _gemm_output_3d in CpuGemmConv2d::run()
CpuGemmConv2d::run() was mutating the shared member _gemm_output_3d by extending its padding before soft_init()/import_memory(). When the same operator instance is reused from multiple threads via the experimental Operator API, this can in rare cases cause later extend_padding() calls to fail. Multiple sequential runs of the operator do not trigger the problem, because the TensorAllocator destructor marks the TensorInfo object resizable again. This makes the issue difficult to test reliably; since it affects an experimental feature intended to make ACL easier to integrate into certain frameworks, we will not aim to test it until the feature has been fully developed and used.

The fix uses a local TensorInfo copy in run() for the padding extension and for soft_init()/import_memory(), leaving _gemm_output_3d unchanged.

Change-Id: I3e4e2d25cabf85724ecf126b1c93df6733ee7d48
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
1 parent 7d0d25f commit f08200f

File tree: 1 file changed (+4, -2)


src/cpu/operators/CpuGemmConv2d.cpp

Lines changed: 4 additions & 2 deletions

```diff
@@ -939,8 +939,10 @@ void CpuGemmConv2d::run(ITensorPack &tensors)
     // Handle the case where output has top/bottom padding
     const ITensor *out_to_use = out_has_padding ? gemm_output.get() : dst;
     Tensor gemm3d;
-    _gemm_output_3d.extend_padding(out_to_use->info()->padding());
-    gemm3d.allocator()->soft_init(_gemm_output_3d);
+    TensorInfo gemm3d_info(_gemm_output_3d);
+    gemm3d_info.set_is_resizable(true);
+    gemm3d_info.extend_padding(out_to_use->info()->padding());
+    gemm3d.allocator()->soft_init(gemm3d_info);
     gemm3d.allocator()->import_memory(out_to_use->buffer());
     auto gemm_output_to_use = gemm_output.get();
```
