
Add disc injection-moulding cooling simulation notebook#1279

Closed
Tuesdaythe13th wants to merge 8 commits into NVIDIA:main from Tuesdaythe13th:claude/disc-config-struct-ndXr3

Conversation


@Tuesdaythe13th Tuesdaythe13th commented Mar 10, 2026

Description

This PR adds a comprehensive GPU-accelerated physics simulation notebook for disc cooling after injection moulding, built with NVIDIA Warp. The notebook demonstrates:

  • 2-D axisymmetric heat diffusion in cylindrical coordinates with explicit finite-difference time stepping
  • Avrami-based crystallinity evolution for a PET + CSR + PTFE polymer blend
  • Warp-risk scoring that combines thermal gradients and crystallinity asymmetry to predict deflection risk
  • Interactive parameter exploration including a mould-temperature sweep to optimize cooling conditions
  • Comprehensive visualizations of temperature fields, crystallinity profiles, and warp-risk distributions

The simulation uses two Warp structs (DiscConfig and CoolingParams) to keep kernel signatures concise, and includes an ArtifexCoolingSim class that manages GPU memory and orchestrates the time-stepping loop. The notebook is designed for Colab with GPU acceleration and includes stability analysis for the explicit time-stepping scheme.

Key features:

  • Dirichlet boundary conditions on mould walls (top/bottom) and Neumann (zero-flux) on axis and outer radius
  • Fourier stability checking with automatic dt calculation
  • Quality gates based on thermal gradients and crystallinity thresholds
  • Parameter sweep demonstrating how mould temperature affects warp risk
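The scheme in the bullets above can be sketched in plain NumPy as a minimal, loop-based illustration (the notebook's actual Warp kernels are not reproduced here; the grid layout, argument names, and the mirror-node boundary treatment are assumptions):

```python
import numpy as np

def step_temperature(T, alpha, dr, dz, dt, T_mold):
    """One explicit FD step of 2-D axisymmetric heat diffusion.

    T has shape (nr, nz): the axis r = 0 sits at i = 0, the mould walls
    at j = 0 and j = nz - 1 (Dirichlet), and the axis/outer radius use
    zero-flux (Neumann) mirror nodes.
    """
    nr, nz = T.shape
    T_new = T.copy()
    for i in range(nr):
        for j in range(1, nz - 1):
            # Mirror neighbours enforce zero flux at the axis and outer rim.
            T_rp = T[i + 1, j] if i < nr - 1 else T[i - 1, j]
            T_rm = T[i - 1, j] if i > 0 else T[i + 1, j]
            d2T_dr2 = (T_rp - 2.0 * T[i, j] + T_rm) / dr**2
            # The (1/r) dT/dr term of the cylindrical Laplacian; by symmetry
            # the centred difference vanishes on the axis itself.
            conv = 0.0 if i == 0 else (T_rp - T_rm) / (2.0 * dr * (i * dr))
            d2T_dz2 = (T[i, j + 1] - 2.0 * T[i, j] + T[i, j - 1]) / dz**2
            T_new[i, j] = T[i, j] + alpha * dt * (d2T_dr2 + conv + d2T_dz2)
    # Dirichlet mould walls (top/bottom)
    T_new[:, 0] = T_mold
    T_new[:, -1] = T_mold
    return T_new
```

For this explicit scheme the step size must satisfy alpha * dt * (1/dr**2 + 1/dz**2) <= 1/2, which is what the Fourier stability check below guards against.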

Future enhancements (documented in the notebook):

  • Replace placeholder Avrami kinetics with Nakamura model fitted to DSC data
  • Add thermoelastic plate solver to convert warp-risk score to actual deflection (mm)
  • Support asymmetric mould cooling channels
  • Adaptive time-stepping
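For context on the first enhancement: the standard isothermal Avrami relation, and the differential (Nakamura-style) rate form it could take for non-isothermal histories, can be sketched as follows (k, n, and chi_max are illustrative parameters, not values from the notebook):

```python
import numpy as np

def avrami_chi(t, k, n, chi_max=1.0):
    """Isothermal Avrami: chi(t) = chi_max * (1 - exp(-(k*t)**n))."""
    return chi_max * (1.0 - np.exp(-(k * t) ** n))

def avrami_rate(chi, k, n, chi_max=1.0):
    """Differential form, usable for non-isothermal stepping when k = k(T):
    dchi/dt = n * k * (chi_max - chi) * (-ln(1 - chi/chi_max))**((n-1)/n).
    """
    x = np.clip(chi / chi_max, 1e-12, 1.0 - 1e-12)
    return n * k * chi_max * (1.0 - x) * (-np.log(1.0 - x)) ** ((n - 1.0) / n)
```

Note that the rate form has an induction period (rate is zero at chi = 0 for n > 1), which is why practical integrators seed it with a tiny initial crystallinity.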

Checklist

  • I am familiar with the Contributing Guidelines.
  • New notebook includes self-contained example with clear documentation.
  • Code follows Warp conventions (kernel definitions, struct usage, device management).

Test plan

The notebook is self-contained and executable end-to-end in Google Colab or any Jupyter environment with Warp installed. Verification includes:

  • Successful kernel compilation and execution on both CPU and CUDA devices
  • Numerical stability of the explicit finite-difference scheme (Fourier number < 0.4)
  • Reasonable physical outputs (temperature decay from melt to mould temperature, crystallinity growth in the crystallization window)
  • Parameter sweep producing expected trends (higher mould temperature → lower warp risk)

https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB

Summary by CodeRabbit

Release Notes

  • New Features
    • Added GPU-accelerated disc cooling simulation with temperature distribution, crystallinity evolution, and warp-risk assessment.
    • Parameter sweep functionality to explore mould temperature effects on final warp risk.
    • Comprehensive visualization suite: temperature, crystallinity, and warp-risk field plots plus profile analysis.
    • Automated pass/fail evaluation for cooling scenarios.

Adds notebooks/disc_cooling_sim.ipynb with an Open-in-Colab badge,
covering 2-D axisymmetric heat diffusion, Avrami crystallinity
kinetics, warp-risk scoring, 2-D field visualisations, radial
profile plots, and a mould-temperature parameter sweep.

https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB
Signed-off-by: Claude <noreply@anthropic.com>

copy-pr-bot Bot commented Mar 10, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai Bot commented Mar 10, 2026

📝 Walkthrough


A new GPU-accelerated disc cooling simulation notebook built on Warp is introduced, featuring configurable geometry and material properties, explicit finite-difference heat diffusion in cylindrical coordinates, Avrami-style crystallisation updates, warp-risk scoring, and parameter sweep visualization capabilities.

Changes

Cohort: GPU-Accelerated Cooling Simulation
File(s): notebooks/disc_cooling_sim.ipynb
Summary: Adds the complete notebook pipeline: DiscConfig and CoolingParams data structures; Warp kernels for temperature initialization, heat diffusion with boundary conditions, crystallinity updates, and warp-risk computation; the ArtifexCoolingSim orchestration class managing GPU arrays; setup cells for geometry/material/process parameters; simulation execution; visualization of temperature, crystallinity, and warp-risk fields and radial/axial profiles; and a parameter sweep section for mould temperature vs warp risk analysis with pass/fail coloring.

Sequence Diagram(s)

sequenceDiagram
    participant User as User/Notebook
    participant Setup as Setup Phase
    participant GPU as GPU Memory
    participant Kernels as Warp Kernels
    participant Sim as ArtifexCoolingSim
    participant Viz as Visualization

    User->>Setup: Define DiscConfig, CoolingParams
    Setup->>GPU: Allocate temperature, crystallinity, warp_risk arrays
    GPU-->>Sim: Initialize arrays
    
    Sim->>Kernels: Call init_temperature
    Kernels->>GPU: Set initial T field
    Kernels->>GPU: Return initialized state
    
    loop Time-stepping loop
        Sim->>Kernels: Call step_temperature
        Kernels->>GPU: Compute heat diffusion (FD in cylindrical coords)
        Kernels->>GPU: Apply boundary conditions
        Kernels->>GPU: Update temperature field
        
        Sim->>Kernels: Call update_crystallinity
        Kernels->>GPU: Compute Avrami crystallisation update
        Kernels->>GPU: Update crystallinity field
        
        Sim->>Kernels: Call compute_warp_risk
        Kernels->>GPU: Score warp risk from gradients
        Kernels->>GPU: Update warp_risk field (mid-plane only)
    end
    
    Sim->>GPU: Fetch results (metrics, fields)
    GPU-->>Sim: Return results dictionary
    Sim->>Viz: Pass final fields and metrics
    Viz->>User: Display temperature, crystallinity, warp-risk plots and profiles
    User->>User: Execute parameter sweep (mould temp vs warp risk)
    User->>Viz: Generate bar plots and pass/fail analysis

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 3 passed
  • Description Check: Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: Passed. The pull request title 'Add disc injection-moulding cooling simulation notebook' accurately and concisely describes the main change: adding a new notebook for GPU-accelerated disc cooling simulation.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.






@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
notebooks/disc_cooling_sim.ipynb (5)

638-660: Parameter sweep creates new ArtifexCoolingSim instance per iteration.

Each sweep iteration allocates new GPU arrays. For memory efficiency and speed, consider reusing the simulation instance and just re-initializing the arrays.

♻️ Suggested improvement
 T_mold_range = np.linspace(288, 353, 12)  # 15 °C – 80 °C in K
 sweep_results = []
+sim = ArtifexCoolingSim(config, device=DEVICE)  # Reuse instance
 
 for T_mold_K in T_mold_range:
     p                = CoolingParams()
     # ... parameter setup ...
 
-    s   = ArtifexCoolingSim(config, device=DEVICE)
-    res = s.simulate_cooling(p)
+    res = sim.simulate_cooling(p)
     sweep_results.append({...})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 638 - 660, The loop creates a
new ArtifexCoolingSim on each T_mold_K which allocates GPU arrays each
iteration; instead, instantiate one ArtifexCoolingSim before the loop and reuse
it by reinitializing per-run state via CoolingParams (set p.T_mold, p.T_init,
etc.) and calling simulate_cooling on the same simulator instance, ensuring any
simulator-level buffers or CUDA arrays are reset or reallocated only when shape
changes; update the code to move "s = ArtifexCoolingSim(config, device=DEVICE)"
outside the for-loop and keep the existing per-iteration creation of
CoolingParams and call s.simulate_cooling(p), reusing sweep_results as before.

686-689: Late import and fragile legend merging.

The Patch import on line 686 should be at the top of the notebook with other imports. The legend handle concatenation assumes ax.legend() was previously called; consider building the complete legend handles list upfront.

📝 Suggested improvement
+# Move to imports cell (line ~78)
+from matplotlib.patches import Patch
+
 # In the sweep cell:
-# Legend patch
-from matplotlib.patches import Patch
 legend_els = [Patch(facecolor="green", label="Pass"), Patch(facecolor="red", label="Fail")]
-for ax in axes:
-    ax.legend(handles=ax.get_legend().legend_handles + legend_els)
+for ax in axes:
+    handles, labels = ax.get_legend_handles_labels()
+    ax.legend(handles=handles + legend_els)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 686 - 689, The Patch import
and legend construction are fragile: move "from matplotlib.patches import Patch"
into the notebook's top imports, then stop relying on ax.get_legend() being
present; instead for each axis (axes) build the full handles list by collecting
existing handles via ax.get_legend_handles_labels() (or
ax.get_legend_handles_labels()[0]) and concatenating your legend_els =
[Patch(facecolor="green", label="Pass"), Patch(facecolor="red", label="Fail")]
before calling ax.legend(handles=full_handles) so the legend is created
deterministically even if no prior legend call exists.

403-403: Hardcoded quality thresholds reduce reusability.

The is_ok check uses hardcoded values 0.15 and 15.0. Consider making these configurable via CoolingParams or as method arguments for different quality requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` at line 403, The quality check currently
hardcodes thresholds in the expression that sets is_ok (avg_chi_groove < 0.15
and max_warp_risk < 15.0); make these thresholds configurable by adding fields
to CoolingParams (e.g., max_avg_chi_groove and max_warp_risk) or by accepting
them as arguments to the function/method that computes is_ok, then replace the
literals with references to those fields/arguments (e.g., use
cooling_params.max_avg_chi_groove and cooling_params.max_warp_risk or the
passed-in parameters) so different quality requirements can be supplied without
changing the code.

266-291: Crystallinity update formula differs from standard Avrami.

Line 287: chi = chi + params.dt * params.avrami_n * rate doesn't match standard Avrami kinetics (which involves time explicitly). The docstring correctly notes this is a placeholder, but consider renaming avrami_k0/avrami_n to avoid confusion with the actual Avrami equation when real kinetics are implemented.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 266 - 291, The
update_crystallinity kernel uses a simplified heuristic but keeps Avrami-like
names that are misleading; rename the parameters avrami_k0 and avrami_n (and
their uses in update_crystallinity) to explicit names like growth_prefactor and
growth_exponent (or similar) in the CoolingParams dataclass, update the kernel
to use those new names (e.g., rate = growth_prefactor * x * (1.0 - chi /
params.chi_max) and chi += params.dt * growth_exponent * rate), and update the
docstring to state these are heuristic growth prefactor/exponent rather than
true Avrami kinetics so future readers won’t assume standard Avrami behavior.

463-467: Stability comment is slightly misleading but implementation is correct.

The comment references 1D stability criteria (dt < dr²/(2α)), but for 2D diffusion the combined Fourier number matters. The implementation correctly uses min(dr, dz)² with a 0.4 safety factor, which ensures Fo < 0.5 in the limiting direction.

📝 Suggested comment clarification
-# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)
+# Fourier stability for 2D explicit diffusion: Fo = α·dt/Δx² < 0.5
+# Using min(dr,dz)² ensures stability in the limiting direction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@notebooks/disc_cooling_sim.ipynb` around lines 463 - 467, The comment above
the stability calculation is misleadingly framed as 1D (dt < dr²/(2α)); update
the comment to state the 2D diffusion stability context and clarify that you
enforce the limiting direction by using dt_max = 0.4 * min(dr, dz)**2 / alpha
(variables alpha, dt_max, dr, dz, config) so the Fourier number in the smallest
grid spacing remains below 0.5 with a safety factor; keep the implementation
(alpha = config.k/(config.rho*config.cp) and the dt_max computation) unchanged
but replace the 1D formula text with a short note about using the minimum grid
spacing for multi-dimensional stability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@notebooks/disc_cooling_sim.ipynb`:
- Around line 396-403: max_delta_t currently compares Dirichlet boundary slices
T_np[:, 0] and T_np[:, -1] (both set to T_mold) so it is ~0; change the
computation in the block using T_np (and related variables) to use the first
interior cells adjacent to the boundaries (e.g., T_np[:, 1] and T_np[:, -2])
instead of indices 0 and -1 so the through-thickness thermal gradient uses
interior values; update the use of T_np, the expression computing max_delta_t,
and any dependent logic (is_ok thresholding) to reflect the new interior-indexed
gradient measurement.
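A minimal NumPy illustration of the interior-indexed metric the prompt describes (function and field names are hypothetical):

```python
import numpy as np

def max_delta_t(T_np):
    """Through-thickness |ΔT| between the first interior cells adjacent to
    the mould walls, avoiding the Dirichlet nodes at j = 0 and j = nz - 1.
    """
    return float(np.max(np.abs(T_np[:, 1] - T_np[:, -2])))
```

Comparing columns 0 and -1 instead would always return ~0, since both boundary slices are pinned to T_mold every step.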


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: 43722938-c610-4e52-8b7e-eeadd8243506

📥 Commits

Reviewing files that changed from the base of the PR and between 3af8dfa and d5b8b8c.

📒 Files selected for processing (1)
  • notebooks/disc_cooling_sim.ipynb


greptile-apps Bot commented Mar 10, 2026

Greptile Summary

This PR adds a new Jupyter notebook demonstrating GPU-accelerated 2-D axisymmetric heat diffusion, Avrami crystallinity evolution, and warp-risk scoring for disc injection-moulding cooling, using NVIDIA Warp kernels and structs.

Several correctness issues remain across the notebook (some flagged in prior review rounds, one new here):

  • dT_thickness is always 0: compute_warp_risk reads j=0/j=nz-1 (Dirichlet nodes pinned to T_mold), so the through-thickness thermal gradient never contributes to the risk score and max_delta_t always prints 0.00 K.
  • Section 9 "Top/Bottom surface" temperature plots are flat lines at T_mold: T_field[:, 0] and T_field[:, -1] are also Dirichlet nodes; the crystallinity panel correctly uses interior nodes [:, 1]/[:, -2] but the temperature panel does not.
  • The 2-D stability limit is under-constrained: it applies the 1-D criterion to min(dr, dz) individually rather than using the combined 2-D von Neumann criterion, so the scheme can diverge on square-ish grids.
  • Boundary nodes accumulate crystallinity at high mould temperatures (T_mold > T_g) during the parameter sweep.
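The combined criterion referenced in the stability bullet can be sketched as a small helper (a generic illustration, not code from the notebook; the 0.9 safety factor is arbitrary):

```python
def dt_stable_2d(alpha, dr, dz, safety=0.9):
    """Von Neumann limit for explicit 2-D diffusion on a uniform grid:
    alpha * dt * (1/dr**2 + 1/dz**2) <= 1/2.  Stricter than applying the
    1-D limit dt <= dx**2 / (2 * alpha) to min(dr, dz) alone.
    """
    return safety * 0.5 / (alpha * (1.0 / dr**2 + 1.0 / dz**2))
```

On a square grid (dr = dz) the combined limit is exactly half the 1-D limit, which is why the min(dr, dz)-based check can admit unstable step sizes.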

Confidence Score: 2/5

Not ready to merge — multiple metrics are silently wrong, undermining the notebook's educational and diagnostic value.

Several P1 bugs remain: dT_thickness and max_delta_t are always zero due to Dirichlet-node indexing; the Section 9 temperature visualization shows misleadingly flat surface lines; the 2-D stability formula is under-constrained for non-default grid aspect ratios; and boundary nodes accumulate crystallinity during high-temperature sweep points.

notebooks/disc_cooling_sim.ipynb — the compute_warp_risk kernel, simulate_cooling max_delta_t computation, Section 9 radial temperature plot, and stability formula in Cell 6.

Important Files Changed

Filename: notebooks/disc_cooling_sim.ipynb
Overview: New notebook adding a GPU-accelerated disc cooling simulation with several correctness issues: dT_thickness always 0, max_delta_t always 0, Section 9 temperature surface plots misleadingly flat at T_mold, boundary-node crystallinity contamination at high sweep temperatures, and an incorrect 2-D stability formula.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Set geometry and DiscConfig] --> B[Compute dt_max stability limit]
    B --> C[ArtifexCoolingSim init\nAllocate GPU arrays]
    C --> D[simulate_cooling\nInitialise T_a and chi_a]
    D --> E{Time loop}
    E --> F[step_temperature\nDirichlet BC at j=0 and j=nz-1]
    F --> G[update_crystallinity\nNo boundary skip]
    G --> E
    E -->|done| H[compute_warp_risk\nreads j=0 and j=nz-1\ndT_thickness always zero]
    H --> I[Copy arrays to numpy]
    I --> J[max_delta_t from boundary nodes\nalways 0.00 K]
    I --> K[avg_chi_groove and max_warp_risk]
    J --> M[Return results dict]
    K --> M
    M --> N[Cell 8 - 2D field plots]
    M --> O[Cell 9 - T_field boundary slices\nflat lines at T_mold]
    M --> P[Cell 10 - Parameter sweep\nnew sim object per iteration]

Reviews (6). Last reviewed commit: "Merge branch 'main' into claude/disc-con..."

@Tuesdaythe13th Tuesdaythe13th marked this pull request as draft March 10, 2026 18:07
@Tuesdaythe13th Tuesdaythe13th marked this pull request as ready for review March 10, 2026 18:07
Author

@Tuesdaythe13th Tuesdaythe13th left a comment


check

@Tuesdaythe13th Tuesdaythe13th marked this pull request as draft March 10, 2026 18:09
@Tuesdaythe13th Tuesdaythe13th marked this pull request as ready for review March 10, 2026 18:09
Comment on lines +573 to +575
"plt.tight_layout()\n",
"plt.show()"
]

P1 "Top surface" / "Bottom surface" temperature lines are always flat at T_mold

T_field[:, 0] and T_field[:, -1] are the Dirichlet boundary nodes hard-pinned to params.T_mold by step_temperature on every timestep. Both curves will be a constant horizontal line at the mold temperature (e.g. 25 °C) for every run, which looks like a formatting artefact rather than a meaningful surface-polymer temperature.

The crystallinity plot in the same cell correctly uses the first interior nodes (chi_field[:, 1] and chi_field[:, -2]). The temperature plot should do the same:

Suggested change (plot the first interior nodes instead of the Dirichlet walls):

ax.plot(r_mm, T_field[:, mid] - 273.15, label="Mid-plane", color="tab:orange")
ax.plot(r_mm, T_field[:, 1] - 273.15, label="Near top wall", color="tab:blue", linestyle="--")
ax.plot(r_mm, T_field[:, -2] - 273.15, label="Near bot wall", color="tab:green", linestyle=":")
Contributor

shi-eric commented May 5, 2026

Thanks for the contribution. I do not think this is the right fit for notebooks/ or warp/examples.

Our notebooks are focused on core Warp concepts and interoperability patterns, not domain-specific simulation workflows. This contribution is closer to an application prototype: it uses a simple explicit finite-difference cooling model, placeholder crystallization kinetics, and a heuristic warp-risk score. That may be useful as a standalone project, but it is not something we would want to maintain as an official Warp notebook/example.

For warp/examples, we generally look for examples that demonstrate a reusable Warp feature, API pattern, or computational technique that is not already covered elsewhere. I do not think this clears that bar as-is.

I would suggest publishing this in a separate repository and sharing it in the Show and Tell discussions:
https://github.com/NVIDIA/warp/discussions/categories/show-and-tell

Tagging the repository with nvidia-warp will also make it discoverable here:
https://github.com/topics/nvidia-warp

Closing this PR, but thanks again for sharing the idea.

@shi-eric shi-eric closed this May 5, 2026


3 participants