Commit 8246093

hsharma35 authored and facebook-github-bot committed
Fix optional bias and batch handling in cadence::fully_connected
Summary: Fixes two bugs in the generic and HiFi cadence::fully_connected implementations. First, the optional bias was dereferenced without a has_value() guard, causing a crash for bias-free inputs. Second, only the first input row was computed because the batch loop was missing; a loop over leading_dims (the product of all non-channel input dimensions) is now added to correctly process batched and multi-sequence inputs.

Differential Revision: D102821213
1 parent 03d6db3 commit 8246093

1 file changed

backends/cadence/generic/operators/op_fully_connected.cpp

Lines changed: 3 additions & 2 deletions
@@ -27,7 +27,8 @@ void linear(
     Tensor& output) {
   const float* __restrict__ input_data = input.const_data_ptr<float>();
   const float* __restrict__ weight_data = weight.const_data_ptr<float>();
-  const float* __restrict__ bias_data = bias.value().const_data_ptr<float>();
+  const float* __restrict__ bias_data =
+      bias.has_value() ? bias.value().const_data_ptr<float>() : nullptr;
   float* __restrict__ output_data = output.mutable_data_ptr<float>();
 
   // input comes in shape [batch_size, in_dim]
@@ -43,7 +44,7 @@ void linear(
 
   for (int i = 0; i < leading_dims; ++i) {
     for (int j = 0; j < M; ++j) {
-      float sum = bias_data[j];
+      float sum = bias_data != nullptr ? bias_data[j] : 0.0f;
       for (int k = 0; k < N; ++k) {
         sum += input_data[i * N + k] * weight_data[j * N + k];
       }
