49 changes: 44 additions & 5 deletions runtime/executor/method_meta.cpp
@@ -6,6 +6,8 @@
* LICENSE file in the root directory of this source tree.
*/

#include <cinttypes> // @donotremove

#include <c10/util/safe_numerics.h>
#include <executorch/runtime/core/error.h>
#include <executorch/runtime/core/exec_aten/exec_aten.h>
@@ -58,13 +60,20 @@ Result<size_t> calculate_nbytes(
executorch::aten::ScalarType scalar_type) {
size_t n = 1;
for (size_t i = 0; i < sizes.size(); i++) {
ET_CHECK_OR_RETURN_ERROR(
sizes[i] >= 0,
InvalidProgram,
"Invalid size[%zu]: %" PRId32 ". Size must not be negative",
i,
sizes[i]);
Comment on lines +63 to +68 (Copilot AI, Apr 6, 2026):
Format string uses %d for int32_t (sizes[i]). In this codebase, PRId32 is used for int32_t to avoid type/format mismatches across platforms; update this (and the other %d occurrences in calculate_nbytes) to use "%" PRId32.
Comment on lines +63 to +68 (Copilot AI, Apr 8, 2026):
The new negative-dimension check returns Error::InvalidProgram from calculate_nbytes(), but TensorInfo::create() currently converts any calculate_nbytes() failure into Error::InvalidArgument and logs a second generic message. If callers are expected to distinguish invalid programs from invalid arguments (and to avoid double-logging), propagate the underlying error from calculate_nbytes() instead of overriding it in TensorInfo::create().
Comment on lines +63 to +68 (Copilot AI, Apr 8, 2026):
sizes[i] is an int32_t, but this log uses %d for printing it. That’s not guaranteed to be correct for int32_t across platforms (printf type mismatch is UB if int32_t isn’t int). Prefer using the <cinttypes> macros (e.g., PRId32) and cast to int32_t to make the formatting portable (similar to other runtime/executor code).
size_t next_n;
bool overflow =
c10::mul_overflows(n, static_cast<size_t>(sizes[i]), &next_n);
ET_CHECK_OR_RETURN_ERROR(
!overflow,
InvalidArgument,
-"Invalid size[%zu]: %d. Potentially overflowed, expect to be 0 or n: %zu",
+"Invalid size[%zu]: %" PRId32
+". Potentially overflowed, expect to be 0 or n: %zu",
i,
sizes[i],
n);
@@ -186,6 +195,12 @@ Result<TensorInfo> MethodMeta::input_tensor_meta(size_t index) const {
auto input_index = s_plan_->inputs()->Get(index);
// input_index was already validated by input_tag().
auto tensor_value = s_plan_->values()->Get(input_index)->val_as_Tensor();
ET_CHECK_OR_RETURN_ERROR(
tensor_value != nullptr && tensor_value->sizes() != nullptr &&
tensor_value->dim_order() != nullptr,
InvalidProgram,
"Null tensor metadata for input %zu",
index);

Comment on lines 195 to +199 (Copilot AI, Apr 8, 2026):
input_tensor_meta() still dereferences s_plan_->values()->Get(input_index) before the new null checks. Flatbuffer vectors can contain null table offsets, so an invalid program could still crash here. Add a null check for the EValue pointer (and return InvalidProgram) before calling ->val_as_Tensor().
Comment (Contributor Author):
Note: I'd like to move these to program_validation later. Currently program_validation is only run when Verification::InternalConsistency is set, though.

lucylq marked this conversation as resolved.
return TensorInfo::create(
Span<const int32_t>(
tensor_value->sizes()->data(), tensor_value->sizes()->size()),
@@ -237,7 +252,12 @@ Result<TensorInfo> MethodMeta::output_tensor_meta(size_t index) const {
auto output_index = s_plan_->outputs()->Get(index);
// output_index was already validated by output_tag().
auto tensor_value = s_plan_->values()->Get(output_index)->val_as_Tensor();

ET_CHECK_OR_RETURN_ERROR(
tensor_value != nullptr && tensor_value->sizes() != nullptr &&
tensor_value->dim_order() != nullptr,
InvalidProgram,
"Null tensor metadata for output %zu",
index);

Comment on lines 252 to +256 (Copilot AI, Apr 8, 2026):
output_tensor_meta() still dereferences s_plan_->values()->Get(output_index) before the new null checks. If the values vector contains a null EValue offset, this will crash before we can return InvalidProgram. Add a null check for the EValue pointer prior to calling ->val_as_Tensor().

lucylq marked this conversation as resolved.
return TensorInfo::create(
Span<const int32_t>(
tensor_value->sizes()->data(), tensor_value->sizes()->size()),
@@ -257,7 +277,10 @@ size_t MethodMeta::num_attributes() const {
auto value = values->Get(i);
if (value->val_type() == executorch_flatbuffer::KernelTypes::Tensor) {
auto tensor_value = value->val_as_Tensor();
-if (tensor_value->extra_tensor_info() != nullptr &&
+if (tensor_value != nullptr &&
+tensor_value->extra_tensor_info() != nullptr &&
tensor_value->extra_tensor_info()->fully_qualified_name() !=
nullptr &&
tensor_value->extra_tensor_info()->fully_qualified_name()->c_str() !=
nullptr) {
++counter;

Comment on lines 277 to +281 (Copilot AI, Apr 8, 2026):
values->Get(i) can be nullptr (other code paths explicitly validate serialization_value != nullptr before dereferencing). In both num_attributes() and attribute_tensor_meta(), value->val_type() is called without checking value, which can still crash on corrupt/partially-validated flatbuffers. Consider guarding with if (value == nullptr) continue; (or an ET_CHECK_OR_RETURN_ERROR in the Result-returning function) before accessing val_type()/val_as_Tensor().
@@ -274,10 +297,19 @@ Result<TensorInfo> MethodMeta::attribute_tensor_meta(size_t index) const {
auto value = values->Get(i);
if (value->val_type() == executorch_flatbuffer::KernelTypes::Tensor) {
auto tensor_value = value->val_as_Tensor();
-if (tensor_value->extra_tensor_info() != nullptr &&
+if (tensor_value != nullptr &&
+tensor_value->extra_tensor_info() != nullptr &&
tensor_value->extra_tensor_info()->fully_qualified_name() !=
nullptr &&
tensor_value->extra_tensor_info()->fully_qualified_name()->c_str() !=
nullptr) {
if (counter == index) {
ET_CHECK_OR_RETURN_ERROR(
tensor_value->sizes() != nullptr &&
tensor_value->dim_order() != nullptr,
InvalidProgram,
"Null tensor metadata for attribute %zu",
index);
auto t_name =
tensor_value->extra_tensor_info()->fully_qualified_name();
// Count constant returns as memory planned
@@ -322,7 +354,14 @@ Result<int64_t> MethodMeta::memory_planned_buffer_size(size_t index) const {
num_buffers);
// Index zero is reserved internally, and we hide it from users. Adjust the
// provided index to point to one of the actual buffers.
-return s_plan_->non_const_buffer_sizes()->Get(index + 1);
+int64_t size = s_plan_->non_const_buffer_sizes()->Get(index + 1);
+ET_CHECK_OR_RETURN_ERROR(
+size >= 0,
+InvalidProgram,
+"memory_planned_buffer_size(%zu) has invalid negative size: %" PRId64,
+index,
+size);
+return size;

Comment on lines +357 to +361 (Copilot AI, Apr 8, 2026):
These new validation paths (negative tensor sizes, null tensor metadata, and negative non_const_buffer_sizes entries) are not currently covered by tests. Please add unit tests that construct/patch an invalid ExecutionPlan and assert the expected Error codes for input/output/attribute meta access and memory_planned_buffer_size().

Comment on lines +357 to +361 (Copilot AI, Apr 9, 2026):
Add test coverage for the new negative-size rejection in memory_planned_buffer_size(): a unit test can mutate non_const_buffer_sizes[index+1] to a negative value and verify the API returns InvalidProgram (and does not crash).

Comment on lines +357 to +364 (Copilot AI, Apr 6, 2026):
New validation paths (negative dimension sizes, null tensor_value, negative memory-planned buffer sizes) don’t appear to have dedicated tests. Since there are already gtests for MethodMeta/TensorInfo, please add coverage that asserts the expected Error codes (and that we don’t crash) for these invalid metadata cases.

lucylq marked this conversation as resolved.
}

bool MethodMeta::uses_backend(const char* backend_name) const {
Expand Down