
Commit ff1641a

deps: Tensors 0.50.0 -> 0.50.1 + adapt to Func<Tensor<T>> trace lambdas
Tensors 0.50.1 is the release the open compile-tracer bugs point to; 0.50.0, which I initially pinned, is strictly a prerequisite step.

The API change that matters for us: GetOrCompileInference / GetOrCompileTraining accept Func<Tensor<T>> (return = plan output tensor) instead of Action (tail op inferred). The explicit return fixes the shape-conditional truncation that forced PR #1167's Predict-bypass workaround: the tracer no longer guesses which op produced the plan's output, so rank-3 -> rank-4 promotions and post-forward reshapes no longer get silently dropped.

Three call sites updated to return the last tensor explicitly:

- src/AiModelBuilder.cs:1344 — return nnModel.Predict(input)
- src/Training/CompiledTapeTrainingStep.cs:139 — return computeLoss(...)
- src/Training/CompiledTapeTrainingStep.cs:269 — return computeLoss(...)

Follow-up (separate PR): revert the 11 Predict-override-restore sites from PR #1167 back to PredictEager, now that the tracer actually captures their forward behaviour correctly.

Build: dotnet build -f net10.0 and net471 both green, 0 errors.
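The delegate-shape change can be illustrated with a minimal mock. The `MockCache` and `Tensor<T>` types below are illustrative stand-ins, not the real AiDotNet.Tensors API: the 0.50.0-style overload ran an `Action` and inferred the plan output from the tail op, while the 0.50.1-style overload binds the plan output to whatever the `Func<Tensor<T>>` lambda returns.

```csharp
using System;

// Illustrative stand-in for a tensor; only the shape matters here.
public sealed class Tensor<T>
{
    public int[] Shape { get; }
    public Tensor(params int[] shape) => Shape = shape;
}

// Hypothetical mock of the compile-tracer cache, showing the two
// delegate shapes side by side. Not the real AiDotNet.Tensors API.
public static class MockCache
{
    // 0.50.0-style shape: the tracer ran an Action and guessed that the
    // last recorded op produced the plan's output.
    public static void GetOrCompileInference_Old<T>(string key, Action trace)
        => trace();

    // 0.50.1-style shape: the trace lambda returns the plan's output
    // tensor explicitly, so a trailing reshape or rank promotion cannot
    // be dropped by tail-op inference.
    public static Tensor<T> GetOrCompileInference<T>(string key, Func<Tensor<T>> trace)
        => trace();
}

public static class Demo
{
    public static void Main()
    {
        // Old call site: output inferred, return value discarded.
        MockCache.GetOrCompileInference_Old<float>("plan",
            () => { new Tensor<float>(1, 3); });

        // New call site: the returned tensor is the plan output.
        var output = MockCache.GetOrCompileInference<float>("plan",
            () => new Tensor<float>(1, 3, 4));
        Console.WriteLine(string.Join("x", output.Shape)); // prints "1x3x4"
    }
}
```

The call sites in this commit follow the same pattern: instead of invoking `Predict`/`computeLoss` for its side effects inside the trace, they return its result.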
1 parent 4edcccf commit ff1641a

3 files changed

Lines changed: 16 additions & 6 deletions


Directory.Packages.props

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@
   <ItemGroup>
     <!-- AiDotNet ecosystem -->
     <PackageVersion Include="AiDotNet" Version="0.113.0" />
-    <PackageVersion Include="AiDotNet.Tensors" Version="0.50.0" />
+    <PackageVersion Include="AiDotNet.Tensors" Version="0.50.1" />
     <PackageVersion Include="AiDotNet.Native.OneDNN" Version="0.50.0" />
     <PackageVersion Include="AiDotNet.Native.OpenBLAS" Version="0.50.0" />
     <PackageVersion Include="AiDotNet.Native.CLBlast" Version="0.50.0" />
```

src/AiModelBuilder.cs

Lines changed: 5 additions & 3 deletions
```diff
@@ -1371,9 +1371,11 @@ private void ApplyGradientCheckpointingFromMemoryConfig()
         try
         {
             using var noGrad = new AiDotNet.Tensors.Engines.Autodiff.NoGradScope<T>();
-            // Discard return: CompiledModelCache treats the last recorded
-            // op's output tensor as the plan's output.
-            nnModel.Predict(input);
+            // Tensors 0.50.1 changed GetOrCompileInference from Action to
+            // Func<Tensor<T>> — the tracer now binds the plan output to
+            // whatever the lambda returns, rather than inferring it from
+            // the last recorded op. Return the Predict result explicitly.
+            return nnModel.Predict(input);
         }
         finally
         {
```

src/Training/CompiledTapeTrainingStep.cs

Lines changed: 10 additions & 2 deletions
```diff
@@ -138,8 +138,12 @@ public static T Step(
             compositeKey,
             () =>
             {
+                // Tensors 0.50.1 changed GetOrCompileTraining to take a
+                // Func<Tensor<T>> — the trace lambda must return the
+                // scalar output (loss) so the compile-graph has a single
+                // terminal node to differentiate from.
                 var predicted = forward(input);
-                computeLoss(predicted, target);
+                return computeLoss(predicted, target);
             },
             parameters);

@@ -268,8 +272,12 @@ or AiDotNet.Tensors.Engines.Compilation.OptimizerType.Adam
             compositeKey,
             () =>
             {
+                // Tensors 0.50.1 changed GetOrCompileTraining to take a
+                // Func<Tensor<T>> — the trace lambda must return the
+                // scalar output (loss) so the compile-graph has a single
+                // terminal node to differentiate from.
                 var predicted = forward(input);
-                computeLoss(predicted, target);
+                return computeLoss(predicted, target);
             },
             parameters);
```
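The training-side requirement (a single terminal node for the backward pass) can be sketched with a hypothetical mock. `MockTrainingCache` and the toy `Tensor<T>` graph below are illustrative assumptions, not the real AiDotNet.Tensors API: the point is that the `Func<Tensor<T>>` lambda's return value is the node differentiation starts from.

```csharp
using System;
using System.Collections.Generic;

// Toy graph node standing in for a traced tensor; illustrative only.
public sealed class Tensor<T>
{
    public string Op { get; }
    public List<Tensor<T>> Inputs { get; } = new();
    public Tensor(string op, params Tensor<T>[] inputs)
    {
        Op = op;
        Inputs.AddRange(inputs);
    }
}

// Hypothetical mock of the 0.50.1-style training cache: the trace
// lambda's return value is the terminal node of the compile-graph.
public static class MockTrainingCache
{
    public static Tensor<T> GetOrCompileTraining<T>(string key, Func<Tensor<T>> trace)
        => trace();
}

public static class Demo
{
    public static void Main()
    {
        var input = new Tensor<float>("input");
        var target = new Tensor<float>("target");

        var loss = MockTrainingCache.GetOrCompileTraining<float>("step", () =>
        {
            var predicted = new Tensor<float>("forward", input);
            // Returning the loss gives the backward pass exactly one
            // terminal node to differentiate from.
            return new Tensor<float>("loss", predicted, target);
        });

        Console.WriteLine(loss.Op); // prints "loss"
    }
}
```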
