11 changes: 10 additions & 1 deletion runtime/core/portable_type/tensor_impl.cpp
@@ -9,9 +9,11 @@
#include <executorch/runtime/core/portable_type/tensor_impl.h>

#include <algorithm>
#include <climits>
Comment on #include <climits>
Copilot AI, Apr 22, 2026:

#include <climits> was added but doesn't appear to be used in this file. Consider removing it to avoid unnecessary dependencies.

Suggested change:
- #include <climits>
#include <cstdint>

#include <c10/util/irange.h>
#include <c10/util/safe_numerics.h>
Comment on lines 11 to +16
Copilot AI, Apr 22, 2026:

<climits> appears unused in this translation unit. If you need bounds for ssize_t, prefer <limits> + std::numeric_limits<ssize_t>; otherwise, drop the include to reduce churn.

#include <executorch/runtime/core/exec_aten/util/dim_order_util.h>
#include <executorch/runtime/core/exec_aten/util/scalar_type_util.h>
@@ -38,7 +40,14 @@ ssize_t compute_numel(const TensorImpl::SizesType* sizes, ssize_t dim) {
"Size must be non-negative, got %zd at dimension %zd",
static_cast<ssize_t>(sizes[i]),
i);
-      numel *= sizes[i];
+      ssize_t next_numel;
+      ET_CHECK_MSG(
+          !c10::mul_overflows(numel, static_cast<ssize_t>(sizes[i]), &next_numel),
+          "Overflow computing numel: %zd * %zd would overflow ssize_t at dimension %zd",
+          numel,
+          static_cast<ssize_t>(sizes[i]),
+          i);
(lucylq marked this conversation as resolved.)
Contributor Author:

We should remove ET_CHECK_MSG, this has a large blast radius here though (see PR summary).
Comment on lines +44 to +49
Copilot AI, Apr 22, 2026:

The overflow ET_CHECK_MSG uses %zd for the dimension index, but i comes from c10::irange(dim) (typically an int64_t). With printf-style formatting this mismatch is undefined behavior on some platforms. Cast i to ssize_t (or switch to the correct format macro/specifier) when passing it to the format string.
numel = next_numel;
}
return numel;
}
18 changes: 14 additions & 4 deletions runtime/executor/tensor_parser_portable.cpp
@@ -8,6 +8,8 @@

#include <executorch/runtime/executor/tensor_parser.h>

#include <c10/util/safe_numerics.h>

#include <executorch/runtime/core/exec_aten/exec_aten.h>
#include <executorch/runtime/core/exec_aten/util/dim_order_util.h>
#include <executorch/runtime/core/exec_aten/util/scalar_type_util.h>
@@ -118,17 +120,25 @@ Result<Tensor> parseTensor(
    dim_order =
        const_cast<executorch::aten::DimOrderType*>(serialized_dim_order);
  }
-  // Validate sizes before using them in case the PTE data is bad. We can't
-  // detect bad positive values, but we can reject negative values, which would
-  // otherwise panic in the TensorImpl ctor. dim_order_to_stride() will validate
-  // dim_order.
+  // Validate sizes before using them in case the PTE data is bad. Reject
+  // negative values and check that the product of all dimensions doesn't
+  // overflow ssize_t, which would otherwise abort in the TensorImpl ctor.
+  // dim_order_to_stride() will validate dim_order.
+  ssize_t numel = 1;
  for (flatbuffers::uoffset_t i = 0; i < dim; i++) {
    ET_CHECK_OR_RETURN_ERROR(
        sizes[i] >= 0,
        InvalidProgram,
        "Negative size[%zu] %" PRId32,
        static_cast<size_t>(i),
        sizes[i]);
+    ssize_t next_numel;
+    ET_CHECK_OR_RETURN_ERROR(
+        !c10::mul_overflows(numel, static_cast<ssize_t>(sizes[i]), &next_numel),
+        InvalidProgram,
+        "Overflow computing numel at dim %zu",
+        static_cast<size_t>(i));
+    numel = next_numel;
(lucylq marked this conversation as resolved.)
Comment on lines +135 to +141
Copilot AI, Apr 22, 2026:

Similar to TensorImpl::compute_numel(): this calls c10::mul_overflows with signed ssize_t. On MSVC the fallback implementation multiplies signed values directly, which can be UB when overflow occurs. Since tensor sizes are validated as non-negative here, use an unsigned accumulator for the overflow check (and separately ensure the final product fits in ssize_t) to make the check well-defined across compilers.
}
Copilot AI, Apr 1, 2026:

This guards overflow for numel, but tensor_impl->nbytes() later multiplies numel by elementSize(scalar_type); that multiplication can still overflow size_t and wrap, leading to undersized allocation/reads. Consider adding a byte-size overflow check here (or in TensorImpl::nbytes()) using SIZE_MAX / elementSize(scalar_type) to ensure the computed nbytes is safe.

Suggested change:
  }
+  // Also ensure that the total number of bytes fits in size_t when
+  // multiplied by the element size for this scalar type.
+  const size_t element_size = elementSize(scalar_type);
+  ET_CHECK_OR_RETURN_ERROR(
+      numel == 0 ||
+          static_cast<size_t>(numel) <= SIZE_MAX / element_size,
+      InvalidProgram,
+      "Overflow computing tensor nbytes");

// We will remove strides from schema.