Migrate RMM usage to CCCL MR design #1035

Open

bdice wants to merge 7 commits into NVIDIA:main from bdice:rmm-cccl-migration

Conversation

bdice (Contributor) commented Apr 3, 2026

Summary

  • Remove dependency on rmm::mr::device_memory_resource base class; resources now satisfy the cuda::mr::resource concept directly
  • Replace shared_ptr<device_memory_resource> with value types and cuda::mr::any_resource<cuda::mr::device_accessible> for type-erased storage
  • Replace set_current_device_resource(ptr) / set_per_device_resource(id, ptr) with set_current_device_resource_ref / set_per_device_resource_ref
  • Remove make_owning_wrapper usage and dynamic_cast on memory resources (no common base class)
  • Add missing thrust/iterator/transform_output_iterator.h include (no longer transitively included via CCCL)

Depends on rapidsai/rmm#2361.
Depends on rapidsai/ucxx#636.
Depends on rapidsai/raft#2996.

Remove dependency on rmm::mr::device_memory_resource base class. Resources
now satisfy the cuda::mr::resource concept directly.

- Replace shared_ptr<device_memory_resource> with value types and
  cuda::mr::any_resource<cuda::mr::device_accessible> for type-erased storage
- Replace set_current_device_resource(ptr) with set_current_device_resource_ref
- Replace set_per_device_resource(id, ptr) with set_per_device_resource_ref
- Remove make_owning_wrapper usage
- Remove dynamic_cast on memory resources (no common base class)
- Remove owning_wrapper.hpp and device_memory_resource.hpp includes
- Add missing thrust/iterator/transform_output_iterator.h include
  (no longer transitively included via CCCL)
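
Taken together, these bullets amount to the following before/after shape. This is a minimal illustrative sketch, assuming an RMM build that ships the `*_resource_ref` APIs from rapidsai/rmm#2361; it is not code from this PR and is not compiled as part of it.

```cpp
#include <cuda/memory_resource>
#include <rmm/mr/device/cuda_async_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

// Before: resources were heap-allocated behind the common base class and
// installed as raw pointers:
//   auto mr = std::make_shared<rmm::mr::cuda_async_memory_resource>();
//   rmm::mr::set_current_device_resource(mr.get());

// After: resources are value types satisfying cuda::mr::resource, and
// type-erased storage uses any_resource instead of
// shared_ptr<device_memory_resource>.
inline cuda::mr::any_resource<cuda::mr::device_accessible> create_memory_resource()
{
  return rmm::mr::cuda_async_memory_resource{};
}

inline void install_resource()
{
  // set_current_device_resource_ref stores a non-owning reference, so the
  // caller must keep the resource alive for the program's lifetime.
  static auto mr = create_memory_resource();
  rmm::mr::set_current_device_resource_ref(mr);
}
```

Note the ownership shift: with no common base class there is no `make_owning_wrapper` and no `dynamic_cast`; lifetime is managed by holding the value (or an `any_resource`) directly.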
copy-pr-bot bot commented Apr 3, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


bdice force-pushed the rmm-cccl-migration branch 4 times, most recently from a624829 to 8fe66e5 on April 17, 2026 07:09
bdice added labels: breaking (Introduces a breaking change), Feature improvement (Improves an existing functionality) on Apr 17, 2026
bdice force-pushed the rmm-cccl-migration branch 6 times, most recently from 717e877 to b42797f on April 18, 2026 15:52
bdice (Contributor, Author) commented Apr 19, 2026

/ok to test

bdice marked this pull request as ready for review April 19, 2026 07:12
bdice requested review from a team as code owners April 19, 2026 07:12
Comment on lines +250 to +251
mem_handler.device_alloc = cudss_device_alloc<void>;
mem_handler.device_free = cudss_device_dealloc<void>;
bdice (Contributor, Author) commented:
It doesn't seem like this template parameter is being used. The rmm::mr::device_memory_resource class has been removed, but since the template parameter is unused, this change should have no effect.

A contributor replied:
I confirm, it is not used.

Comment thread cpp/src/utilities/cuda_helpers.cuh
coderabbitai bot commented Apr 19, 2026

📝 Walkthrough

CI scripts now source helpers to consume PR-provided conda channels and wheels; dependency-generator calls include extra channel args. C++ replaces shared_ptr-wrapped RMM memory resources with concrete resource objects, updates tests and cuDSS allocation handlers, and simplifies device-memory reporting. Two new CI helper scripts were added.

Changes

Cohort / File(s) Summary
CI: Conda package integration
ci/build_cpp.sh, ci/build_docs.sh, ci/build_python.sh, ci/test_cpp.sh, ci/test_cpp_memcheck.sh, ci/test_notebooks.sh, ci/test_python.sh, ci/test_skills_assets.sh
Added source ./ci/use_conda_packages_from_prs.sh (some SPDX year updates); where applicable, pass "${RAPIDS_EXTRA_CONDA_CHANNEL_ARGS[@]}" into rapids-dependency-file-generator and add discovered channels to conda config.
CI: Wheel package integration
ci/build_wheel_cuopt.sh, ci/build_wheel_cuopt_mps_parser.sh, ci/build_wheel_cuopt_server.sh, ci/build_wheel_cuopt_sh_client.sh, ci/build_wheel_libcuopt.sh, ci/test_self_hosted_service.sh, ci/test_wheel_cuopt.sh, ci/test_wheel_cuopt_server.sh
Added source ./ci/use_wheels_from_prs.sh (after rapids-init-pip/pip init) to incorporate PR-provided wheelhouse configuration into wheel/download/install steps.
CI: New helper scripts
ci/use_conda_packages_from_prs.sh, ci/use_wheels_from_prs.sh
Added scripts: one resolves PR artifact conda channels, updates system conda channels, and builds RAPIDS_EXTRA_CONDA_CHANNEL_ARGS; the other resolves PR wheelhouse dirs and appends local file:// wheel constraints to pip constraints.
C++: Memory-resource refactor (runtime & tests)
cpp/cuopt_cli.cpp, cpp/tests/mip/load_balancing_test.cu, cpp/tests/mip/multi_probe_test.cu, cpp/tests/utilities/base_fixture.hpp
Replaced std::shared_ptr<rmm::mr::...> usage with concrete rmm::mr::* resource objects; factories now return resource objects (and create_memory_resource now returns cuda::mr::any_resource<cuda::mr::device_accessible>); updated calls to set_per_device_resource/set_current_device_resource to pass resources directly.
C++: cuDSS handlers & CUDA helpers
cpp/src/barrier/sparse_cholesky.cuh, cpp/src/utilities/cuda_helpers.cuh
Changed cuDSS device allocation/deallocation callbacks to use cudss_device_alloc/dealloc<void> template specializations; removed RMM limiting-adaptor inspection and simplified get_device_memory_size() to return total_mem from cudaMemGetInfo.
C++: Misc. includes & cleanup
cpp/src/pdlp/termination_strategy/infeasibility_information.cu, cpp/src/routing/ges_solver.cu
Added thrust/iterator/transform_output_iterator.h include; removed an unused rmm/mr/device_memory_resource.hpp include.
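
The conda-channel plumbing summarized above can be sketched as follows. This is a hypothetical illustration of how a helper might accumulate extra channel arguments into an array for callers to splice into `rapids-dependency-file-generator` invocations; the channel paths are made up, and `--prepend-channel` is assumed to be the generator's flag name.

```shell
# Hypothetical sketch: collect PR-artifact conda channels into an array.
# Callers then expand it as "${RAPIDS_EXTRA_CONDA_CHANNEL_ARGS[@]}".
RAPIDS_EXTRA_CONDA_CHANNEL_ARGS=()
for channel in "file:///tmp/librmm_channel" "file:///tmp/librmm_python_channel"; do
  # Each channel becomes a flag/value pair for the dependency generator.
  RAPIDS_EXTRA_CONDA_CHANNEL_ARGS+=(--prepend-channel "${channel}")
done
printf '%s\n' "${RAPIDS_EXTRA_CONDA_CHANNEL_ARGS[@]}"
```

Using an array (rather than a space-joined string) keeps channel URLs with unusual characters intact when the arguments are expanded.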

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage — ⚠️ Warning: Docstring coverage is 23.08%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Title check — ✅ Passed: The PR title 'Migrate RMM usage to CCCL MR design' directly and clearly summarizes the main objective of the changeset, which is to migrate from RMM's memory resource design to CCCL's.
Description check — ✅ Passed: The PR description clearly relates to the changeset, documenting the migration from rmm::mr::device_memory_resource to the CCCL MR design with specific technical details.


coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Nitpick comments (2)
cpp/cuopt_cli.cpp (1)

69-72: Dead code: make_async() is now unused.

The make_async() helper function at line 71 is no longer called anywhere in this file after the migration to value-type memory resources. Consider removing it.

🧹 Proposed removal
-/**
- * @brief Make an async memory resource for RMM
- * @return std::shared_ptr<rmm::mr::cuda_async_memory_resource>
- */
-inline auto make_async() { return std::make_shared<rmm::mr::cuda_async_memory_resource>(); }
-
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/cuopt_cli.cpp` around lines 69 - 72, The helper function make_async() is
now unused and should be removed: delete the inline auto make_async() { return
std::make_shared<rmm::mr::cuda_async_memory_resource>(); }
declaration/definition to eliminate dead code and any associated includes or
comments that only served it; ensure no remaining references to make_async exist
and run a build to confirm no usages remain.
cpp/src/pdlp/termination_strategy/infeasibility_information.cu (1)

18-19: Remove the duplicate Thrust include.

Line 18 adds <thrust/iterator/transform_output_iterator.h>, which is already included at Line 32. Keeping only one include improves clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/pdlp/termination_strategy/infeasibility_information.cu` around lines
18 - 19, Remove the duplicate Thrust include by deleting the redundant `#include`
for <thrust/iterator/transform_output_iterator.h> so only the single existing
include remains; locate the duplicate include line (the
<thrust/iterator/transform_output_iterator.h> directive) in
infeasibility_information.cu and remove it, leaving the original include at the
later occurrence intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ci/build_wheel_libcuopt.sh`:
- Around line 8-9: The script currently sources ci/use_wheels_from_prs.sh which
appends to PIP_CONSTRAINT, but later reassigns PIP_CONSTRAINT to
/tmp/constraints.txt, dropping those entries; update the script so the
PR-sourced constraints are merged into the final /tmp/constraints.txt instead of
being overwritten—e.g., after sourcing ci/use_wheels_from_prs.sh, if
PIP_CONSTRAINT contains extra constraints, append or merge them into the file
referenced by PIP_CONSTRAINT (/tmp/constraints.txt) or change the later
assignment to preserve existing values; refer to the PIP_CONSTRAINT variable and
the ci/use_wheels_from_prs.sh source to locate where to merge rather than
overwrite.

In `@ci/test_wheel_cuopt.sh`:
- Line 11: The script sources wheel pins from use_wheels_from_prs.sh into
${PIP_CONSTRAINT} but then clobbers them when recreating ${PIP_CONSTRAINT};
update the creation of ${PIP_CONSTRAINT} so it preserves existing pins (from
use_wheels_from_prs.sh) instead of overwriting—e.g., append the new constraints
to ${PIP_CONSTRAINT} (or merge existing content) rather than using a destructive
write; key symbols: use_wheels_from_prs.sh and the ${PIP_CONSTRAINT} write
(currently using cat >) should be changed to an append/merge approach (cat >> or
equivalent).

In `@ci/use_conda_packages_from_prs.sh`:
- Around line 6-11: The script hard-codes PR artifact IDs in the calls that
populate LIBRMM_CHANNEL, RMM_CHANNEL, LIBUCXX_CHANNEL, UCXX_CHANNEL,
LIBRAFT_CHANNEL and RAFT_CHANNEL; change these invocations of
rapids-get-pr-artifact to take PR IDs from environment variables or script
arguments (e.g. PR_RMM, PR_UCXX, PR_RAFT) instead of literal numbers, and
implement a clear fallback behavior: if a PR ID var is unset or empty, skip
requesting the PR artifact and fall back to the stable channel (or exit with a
clear error if that is preferred for your CI policy). Ensure the logic that
constructs those six variables checks the env vars before calling
rapids-get-pr-artifact and documents the new environment variable names.
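
A minimal sketch of the suggested pattern follows. The variable name PR_RMM is hypothetical, and `rapids-get-pr-artifact` is only invoked when a PR ID is actually provided, so the stable-channel fallback path is the default:

```shell
# Hypothetical: read the PR ID from the environment; if unset or empty, skip
# the PR-artifact lookup and fall back to the stable channel.
PR_RMM="${PR_RMM:-}"
if [[ -n "${PR_RMM}" ]]; then
  # Argument shape assumed from common rapids-get-pr-artifact usage.
  LIBRMM_CHANNEL="$(rapids-get-pr-artifact rmm "${PR_RMM}" cpp conda)"
else
  LIBRMM_CHANNEL=""  # empty means: use the stable channel configured elsewhere
fi
echo "LIBRMM_CHANNEL=${LIBRMM_CHANNEL}"
```

The same pattern would repeat for PR_UCXX and PR_RAFT, with the six channel variables documented next to the env-var names.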

In `@ci/use_wheels_from_prs.sh`:
- Around line 7-24: Replace the hardcoded external PR IDs used in the
rapids-get-pr-artifact invocations (the numeric IDs in the calls that populate
LIBRMM_WHEELHOUSE, RMM_WHEELHOUSE, LIBUCXX_WHEELHOUSE, UCXX_WHEELHOUSE,
LIBRAFT_WHEELHOUSE, RAFT_WHEELHOUSE) with configurable inputs: read PR IDs from
environment variables (e.g., RMM_PR_ID, UCXX_PR_ID, RAFT_PR_ID) with sensible
fallbacks (empty/none meaning "use stable/latest" or resolve dynamically), and
update the rapids-get-pr-artifact calls to use those variables instead of
literal numbers; also ensure the script logs or skips gracefully if an artifact
cannot be found so CI jobs aren’t tightly coupled to fixed external PR
artifacts.

In `@cpp/src/utilities/cuda_helpers.cuh`:
- Around line 241-246: The cudaMemGetInfo call in get_device_memory_size lacks
CUDA error checking; wrap the call with the RAFT_CUDA_TRY macro (or equivalent
from <raft/util/cudart_utils.hpp>) so any cudaMemGetInfo failure is
propagated/logged, e.g., call RAFT_CUDA_TRY(cudaMemGetInfo(&free_mem,
&total_mem)); then return total_mem as before; update get_device_memory_size to
use RAFT_CUDA_TRY to satisfy the coding guideline and existing header utilities.

In `@cpp/tests/utilities/base_fixture.hpp`:
- Around line 29-47: make_pool() and make_binning() currently return
memory_resource adaptors that hold non-owning references to temporaries (from
make_async() and the local pool), causing dangling references; change the
factory functions to return owning wrappers that keep upstream resources alive
(e.g., std::shared_ptr or rmm::mr::any_resource) and construct the pool/bins
from those owned resources instead of temporaries—update
make_async()/make_cuda()/make_managed() if needed to return owned resources or
wrappers, then have make_pool() create and retain the async resource in an
owning container and return the pool as an owning object, and have
make_binning() accept/hold the pool owner (or build and return a single owning
wrapper representing the whole chain) so downstream users never reference
destroyed temporaries.

---

Nitpick comments:
In `@cpp/cuopt_cli.cpp`:
- Around line 69-72: The helper function make_async() is now unused and should
be removed: delete the inline auto make_async() { return
std::make_shared<rmm::mr::cuda_async_memory_resource>(); }
declaration/definition to eliminate dead code and any associated includes or
comments that only served it; ensure no remaining references to make_async exist
and run a build to confirm no usages remain.

In `@cpp/src/pdlp/termination_strategy/infeasibility_information.cu`:
- Around line 18-19: Remove the duplicate Thrust include by deleting the
redundant `#include` for <thrust/iterator/transform_output_iterator.h> so only the
single existing include remains; locate the duplicate include line (the
<thrust/iterator/transform_output_iterator.h> directive) in
infeasibility_information.cu and remove it, leaving the original include at the
later occurrence intact.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 466802a0-542e-4646-b49c-c1f6235231c7

📥 Commits

Reviewing files that changed from the base of the PR and between 6b0d8a4 and b42797f.

📒 Files selected for processing (26)
  • ci/build_cpp.sh
  • ci/build_docs.sh
  • ci/build_python.sh
  • ci/build_wheel_cuopt.sh
  • ci/build_wheel_cuopt_mps_parser.sh
  • ci/build_wheel_cuopt_server.sh
  • ci/build_wheel_cuopt_sh_client.sh
  • ci/build_wheel_libcuopt.sh
  • ci/test_cpp.sh
  • ci/test_cpp_memcheck.sh
  • ci/test_notebooks.sh
  • ci/test_python.sh
  • ci/test_self_hosted_service.sh
  • ci/test_skills_assets.sh
  • ci/test_wheel_cuopt.sh
  • ci/test_wheel_cuopt_server.sh
  • ci/use_conda_packages_from_prs.sh
  • ci/use_wheels_from_prs.sh
  • cpp/cuopt_cli.cpp
  • cpp/src/barrier/sparse_cholesky.cuh
  • cpp/src/pdlp/termination_strategy/infeasibility_information.cu
  • cpp/src/routing/ges_solver.cu
  • cpp/src/utilities/cuda_helpers.cuh
  • cpp/tests/mip/load_balancing_test.cu
  • cpp/tests/mip/multi_probe_test.cu
  • cpp/tests/utilities/base_fixture.hpp
💤 Files with no reviewable changes (1)
  • cpp/src/routing/ges_solver.cu

Comment thread ci/build_wheel_libcuopt.sh
Comment thread ci/test_wheel_cuopt.sh
Comment thread ci/use_conda_packages_from_prs.sh
Comment thread ci/use_wheels_from_prs.sh
Comment thread cpp/src/utilities/cuda_helpers.cuh
Comment thread cpp/tests/utilities/base_fixture.hpp
coderabbitai bot left a comment

♻️ Duplicate comments (1)
cpp/src/utilities/cuda_helpers.cuh (1)

241-246: ⚠️ Potential issue | 🟠 Major

Add cuopt-native CUDA error checking for cudaMemGetInfo.

At Line 244, cudaMemGetInfo is unchecked. On failure, total_mem can be invalid and the CUDA error state handling is inconsistent. Please wrap it with the cuopt-native CUDA check macro (not RAFT).

Suggested patch
 inline size_t get_device_memory_size()
 {
-  size_t free_mem, total_mem;
-  cudaMemGetInfo(&free_mem, &total_mem);
+  size_t free_mem{0}, total_mem{0};
+  CUDA_CHECK(cudaMemGetInfo(&free_mem, &total_mem));  // or cuopt-native equivalent from utilities/macros.cuh
   // TODO (bdice): Restore limiting adaptor check after updating CCCL to support resource_cast
   return total_mem;
 }
#!/bin/bash
# Verify the exact cuopt-native CUDA check macro and current usage patterns.

set -euo pipefail

echo "== locate macros header =="
fd -i 'macros.cuh'

echo "== CUDA-related macro definitions in macros header(s) =="
fd -i 'macros.cuh' | xargs -r rg -n 'define\s+.*(CUDA|cuda|CHECK)'

echo "== current get_device_memory_size implementation =="
rg -n -C3 'get_device_memory_size|cudaMemGetInfo' cpp/src/utilities/cuda_helpers.cuh

echo "== existing CUDA check macro usage in utilities =="
rg -n 'CUDA_CHECK|CHECK_CUDA|CUOPT_.*CUDA|cudaGetLastError' cpp/src/utilities

As per coding guidelines: "**/*.{cu,cuh}: Every CUDA kernel launch and memory operation must have error checking with CUDA_CHECK or equivalent verification." Based on learnings: "cuopt code should use cuopt-native error-checking macros instead of depending on RAFT-specific ones."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/utilities/cuda_helpers.cuh` around lines 241 - 246, The
cudaMemGetInfo call inside get_device_memory_size lacks cuopt-native error
checking; wrap the cudaMemGetInfo(&free_mem, &total_mem) invocation with the
project’s cuopt-native CUDA-check macro (e.g., CUDA_CHECK or the equivalent
cuopt macro used across the codebase) so errors are asserted/handled
consistently (preserve the free_mem/total_mem variables and return total_mem as
before), and ensure the header that defines the macro is included if not
already.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@cpp/src/utilities/cuda_helpers.cuh`:
- Around line 241-246: The cudaMemGetInfo call inside get_device_memory_size
lacks cuopt-native error checking; wrap the cudaMemGetInfo(&free_mem,
&total_mem) invocation with the project’s cuopt-native CUDA-check macro (e.g.,
CUDA_CHECK or the equivalent cuopt macro used across the codebase) so errors are
asserted/handled consistently (preserve the free_mem/total_mem variables and
return total_mem as before), and ensure the header that defines the macro is
included if not already.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: dd1a577e-69e7-4bcb-9642-c0245c0990b5

📥 Commits

Reviewing files that changed from the base of the PR and between b42797f and fcb5027.

📒 Files selected for processing (1)
  • cpp/src/utilities/cuda_helpers.cuh

bdice force-pushed the rmm-cccl-migration branch from 8d387c2 to 8506b5d on April 19, 2026 16:54
coderabbitai bot left a comment

♻️ Duplicate comments (2)
ci/build_wheel_libcuopt.sh (1)

8-49: ⚠️ Potential issue | 🟠 Major

Sourced PR constraints are likely discarded by later PIP_CONSTRAINT reassignment.

After Line 8 sources use_wheels_from_prs.sh, Line 48 points PIP_CONSTRAINT at /tmp/constraints.txt, discarding the previously appended entries.

Minimal fix
 CUOPT_MPS_PARSER_WHEELHOUSE=$(RAPIDS_PY_WHEEL_NAME="cuopt_mps_parser" rapids-download-wheels-from-github python)
-echo "cuopt-mps-parser @ file://$(echo ${CUOPT_MPS_PARSER_WHEELHOUSE}/cuopt_mps_parser*.whl)" >> /tmp/constraints.txt
-export PIP_CONSTRAINT="/tmp/constraints.txt"
+echo "cuopt-mps-parser @ file://$(echo "${CUOPT_MPS_PARSER_WHEELHOUSE}"/cuopt_mps_parser*.whl)" >> "${PIP_CONSTRAINT}"
#!/bin/bash
set -euo pipefail
rg -n 'use_wheels_from_prs\.sh|/tmp/constraints\.txt|export PIP_CONSTRAINT=' ci/build_wheel_libcuopt.sh

Expected: helper sourcing appears before reassignment to /tmp/constraints.txt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ci/build_wheel_libcuopt.sh` around lines 8 - 49, The script currently
overwrites any constraints set by sourcing use_wheels_from_prs.sh because it
later unconditionally exports PIP_CONSTRAINT="/tmp/constraints.txt"; fix by
preserving or merging prior constraints: either move the source
./ci/use_wheels_from_prs.sh line to after the block that writes
/tmp/constraints.txt (so the PR helper runs last) or modify the logic around
CUOPT_MPS_PARSER_WHEELHOUSE and the export PIP_CONSTRAINT to append to an
existing PIP_CONSTRAINT if present (check $PIP_CONSTRAINT and include it when
writing /tmp/constraints.txt or export a combined value); update references in
this script to PIP_CONSTRAINT and CUOPT_MPS_PARSER_WHEELHOUSE accordingly.
ci/test_wheel_cuopt.sh (1)

11-27: ⚠️ Potential issue | 🟠 Major

PIP_CONSTRAINT entries from the sourced helper are being clobbered.

Line 11 appends PR wheel pins via use_wheels_from_prs.sh, but Line 22 recreates the file with cat >, dropping those earlier constraints.

Minimal fix
-cat > "${PIP_CONSTRAINT}" <<EOF
+cat >> "${PIP_CONSTRAINT}" <<EOF
 cuopt-${RAPIDS_PY_CUDA_SUFFIX} @ file://$(echo ${CUOPT_WHEELHOUSE}/cuopt_${RAPIDS_PY_CUDA_SUFFIX}-*.whl)
 cuopt-mps-parser @ file://$(echo ${CUOPT_MPS_PARSER_WHEELHOUSE}/cuopt_mps_parser-*.whl)
 cuopt-sh-client @ file://$(echo ${CUOPT_SH_CLIENT_WHEELHOUSE}/cuopt_sh_client-*.whl)
 libcuopt-${RAPIDS_PY_CUDA_SUFFIX} @ file://$(echo ${LIBCUOPT_WHEELHOUSE}/libcuopt_${RAPIDS_PY_CUDA_SUFFIX}-*.whl)
 EOF
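
The difference is easy to demonstrate in isolation. This is a standalone sketch with made-up constraint entries, not the actual CI files:

```shell
# A helper (like use_wheels_from_prs.sh) appends a pin first:
PIP_CONSTRAINT="$(mktemp)"
echo "rmm @ file:///wheelhouse/rmm.whl" >> "${PIP_CONSTRAINT}"

# 'cat >' would truncate the file here and silently drop that pin;
# 'cat >>' preserves it and appends the new entries:
cat >> "${PIP_CONSTRAINT}" <<EOF
cuopt @ file:///wheelhouse/cuopt.whl
EOF

cat "${PIP_CONSTRAINT}"
```

With `>>`, both the helper's pin and the script's own pins survive in the final constraints file.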
#!/bin/bash
set -euo pipefail
rg -n 'use_wheels_from_prs\.sh|cat > "\$\{PIP_CONSTRAINT\}"|cat >> "\$\{PIP_CONSTRAINT\}"' ci/test_wheel_cuopt.sh

Expected: shows helper sourcing plus a truncating cat > (current behavior).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ci/test_wheel_cuopt.sh` around lines 11 - 27, The PIP_CONSTRAINT file is
being overwritten after sourcing use_wheels_from_prs.sh because the here-doc
uses "cat > \"${PIP_CONSTRAINT}\"", which drops entries added by the helper;
change the redirection to append (use "cat >> \"${PIP_CONSTRAINT}\"") so the
block that writes cuopt wheel pins (the here-doc that writes
cuopt-${RAPIDS_PY_CUDA_SUFFIX}, cuopt-mps-parser, cuopt-sh-client,
libcuopt-${RAPIDS_PY_CUDA_SUFFIX}) is appended to the existing ${PIP_CONSTRAINT}
written/modified by use_wheels_from_prs.sh instead of truncating it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@ci/build_wheel_libcuopt.sh`:
- Around line 8-49: The script currently overwrites any constraints set by
sourcing use_wheels_from_prs.sh because it later unconditionally exports
PIP_CONSTRAINT="/tmp/constraints.txt"; fix by preserving or merging prior
constraints: either move the source ./ci/use_wheels_from_prs.sh line to after
the block that writes /tmp/constraints.txt (so the PR helper runs last) or
modify the logic around CUOPT_MPS_PARSER_WHEELHOUSE and the export
PIP_CONSTRAINT to append to an existing PIP_CONSTRAINT if present (check
$PIP_CONSTRAINT and include it when writing /tmp/constraints.txt or export a
combined value); update references in this script to PIP_CONSTRAINT and
CUOPT_MPS_PARSER_WHEELHOUSE accordingly.

In `@ci/test_wheel_cuopt.sh`:
- Around line 11-27: The PIP_CONSTRAINT file is being overwritten after sourcing
use_wheels_from_prs.sh because the here-doc uses "cat > \"${PIP_CONSTRAINT}\"",
which drops entries added by the helper; change the redirection to append (use
"cat >> \"${PIP_CONSTRAINT}\"") so the block that writes cuopt wheel pins (the
here-doc that writes cuopt-${RAPIDS_PY_CUDA_SUFFIX}, cuopt-mps-parser,
cuopt-sh-client, libcuopt-${RAPIDS_PY_CUDA_SUFFIX}) is appended to the existing
${PIP_CONSTRAINT} written/modified by use_wheels_from_prs.sh instead of
truncating it.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: cbcee9aa-472f-478f-9c5a-070f7b3d1fcb

📥 Commits

Reviewing files that changed from the base of the PR and between 8d387c2 and 8506b5d.

📒 Files selected for processing (19)
  • ci/build_cpp.sh
  • ci/build_docs.sh
  • ci/build_python.sh
  • ci/build_wheel_cuopt.sh
  • ci/build_wheel_cuopt_mps_parser.sh
  • ci/build_wheel_cuopt_server.sh
  • ci/build_wheel_cuopt_sh_client.sh
  • ci/build_wheel_libcuopt.sh
  • ci/test_cpp.sh
  • ci/test_cpp_memcheck.sh
  • ci/test_notebooks.sh
  • ci/test_python.sh
  • ci/test_self_hosted_service.sh
  • ci/test_skills_assets.sh
  • ci/test_wheel_cuopt.sh
  • ci/test_wheel_cuopt_server.sh
  • ci/use_conda_packages_from_prs.sh
  • ci/use_wheels_from_prs.sh
  • cpp/src/utilities/cuda_helpers.cuh
✅ Files skipped from review due to trivial changes (2)
  • ci/use_wheels_from_prs.sh
  • ci/use_conda_packages_from_prs.sh
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/src/utilities/cuda_helpers.cuh

akifcorduk (Contributor) left a comment:

Approving, on the understanding that the limiting adaptor will be restored. Thanks!


#include <mip_heuristics/mip_constants.hpp>

#include <thrust/iterator/transform_output_iterator.h>
A contributor commented:

Is it an intentional change?


Labels

breaking (Introduces a breaking change), Feature improvement (Improves an existing functionality)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants