ENH: PCA CPU SPMD support #3585
Conversation
/intelci: run
Fixes the ABI breakage introduced by the CPU SPMD algorithms in PRs #3507 and #3585. Makes the binding type of the destructors (`~compute_input()`, `~partial_compute_input()`, `~train_input()`, etc.) stably "WEAK" and independent of the compiler's logic by adding their non-default implementations. Also fixes the `set_responses` method in KNN.
(cherry picked from commit 0a5f4cb)
Co-authored-by: Victoriya Fedotova <victoriya.s.fedotova@intel.com>
/intelci: run
Pull request overview
Adds CPU-side distributed (SPMD) PCA training support (covariance method) and enables the existing SPMD PCA integration test on CPU, alongside new MPI/oneCCL sample programs demonstrating the feature.
Changes:
- Enable PCA SPMD integration testing on CPU by removing the CPU skip in the SPMD test.
- Route PCA CPU training dispatch through a universal (single-node + SPMD) kernel path.
- Implement CPU SPMD covariance-path aggregation (allreduce of partials) and add MPI/CCL PCA distributed samples.
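The partial-aggregate-finalize flow described above can be sketched outside oneDAL as plain C++. The names below (`Partials`, `compute_partials`, `allreduce`) are illustrative stand-ins for the real kernels, and the allreduce is simulated in-process rather than over MPI/oneCCL:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the covariance-path SPMD aggregation: each rank
// computes partial sums, crossproducts, and an observation count; an
// allreduce (simulated here by in-process summation) combines them before
// the finalize step computes eigenvectors from the aggregated covariance.
struct Partials {
    double nobs = 0.0;
    std::vector<double> sums;          // size p
    std::vector<double> crossproduct;  // size p*p, row-major
};

Partials compute_partials(const std::vector<double>& data, std::int64_t p) {
    const std::int64_t n = static_cast<std::int64_t>(data.size()) / p;
    Partials out;
    out.nobs = static_cast<double>(n);
    out.sums.assign(p, 0.0);
    out.crossproduct.assign(p * p, 0.0);
    for (std::int64_t r = 0; r < n; ++r) {
        for (std::int64_t i = 0; i < p; ++i) {
            out.sums[i] += data[r * p + i];
            for (std::int64_t j = 0; j < p; ++j)
                out.crossproduct[i * p + j] += data[r * p + i] * data[r * p + j];
        }
    }
    return out;
}

// "Allreduce": element-wise sum of all ranks' partials. By linearity this
// equals the partials a single node would compute over the concatenated data.
Partials allreduce(const std::vector<Partials>& ranks, std::int64_t p) {
    Partials agg;
    agg.sums.assign(p, 0.0);
    agg.crossproduct.assign(p * p, 0.0);
    for (const auto& r : ranks) {
        agg.nobs += r.nobs;
        for (std::int64_t i = 0; i < p; ++i)
            agg.sums[i] += r.sums[i];
        for (std::int64_t k = 0; k < p * p; ++k)
            agg.crossproduct[k] += r.crossproduct[k];
    }
    return agg;
}
```

In the real kernels the summation is performed by the communicator's allreduce collective; the sketch only shows why summing partials is equivalent to the single-node computation.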
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| samples/oneapi/cpp/mpi/sources/pca_distr_mpi.cpp | New MPI sample demonstrating distributed PCA training on CPU. |
| samples/oneapi/cpp/ccl/sources/pca_distr_ccl.cpp | New oneCCL sample demonstrating distributed PCA training on CPU. |
| cpp/oneapi/dal/algo/pca/test/spmd.cpp | Enables running PCA SPMD integration test on CPU. |
| cpp/oneapi/dal/algo/pca/detail/train_ops.cpp | Switches PCA CPU dispatch to a universal SPMD-capable kernel dispatcher. |
| cpp/oneapi/dal/algo/pca/backend/cpu/train_kernel_cov.cpp | Adds CPU SPMD aggregation logic for covariance-based PCA training. |
| cpp/oneapi/dal/algo/pca/backend/cpu/finalize_train_kernel_cov.cpp | Adjusts covariance/correlation finalize behavior and variance extraction logic. |
```cpp
/// Use a local (non-SPMD) context so the finalize kernel does NOT
/// dispatch to the SPMD path — the data is already aggregated.
const context_cpu local_ctx;
return pca::backend::finalize_train_kernel_cpu<Float, method::cov, task_t>{}(local_ctx,
```
`context_cpu local_ctx;` drops the CPU-extension configuration carried by the incoming `ctx` (e.g., from `spmd_host_policy::get_local()`), so CPU dispatch in `finalize_train_kernel_cpu` may use different ISA settings than the rest of the training call. Consider calling the finalize kernel with the original `ctx` (it does not need additional collectives), or otherwise ensure the same enabled CPU extensions are preserved when creating a local context.
Suggested change:
```diff
-/// Use a local (non-SPMD) context so the finalize kernel does NOT
-/// dispatch to the SPMD path — the data is already aggregated.
-const context_cpu local_ctx;
-return pca::backend::finalize_train_kernel_cpu<Float, method::cov, task_t>{}(local_ctx,
+/// Finalize on the already aggregated local data while preserving
+/// the CPU-extension configuration carried by the original context.
+return pca::backend::finalize_train_kernel_cpu<Float, method::cov, task_t>{}(ctx,
```
```cpp
const auto cp =
    row_accessor<const Float>(input.get_partial_crossproduct()).pull({ 0, -1 });
const Float inv_nm1 = Float(1) / (row_count - 1);
Float* vars = arr_vars.get_mutable_data();
```
In the zscore branch, `inv_nm1` is computed as `1 / (row_count - 1)`. For `row_count == 1` this will divide by zero and propagate inf/NaN into the variances. Please add an explicit guard (e.g., require `row_count > 1` for z-score normalization, or define the intended behavior for the single-row case and handle it here).
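A minimal sketch of the suggested guard, using plain `double` rather than the templated `Float` and assuming a centered, row-major crossproduct buffer:

```cpp
#include <cstdint>
#include <vector>

// Guarded variance extraction: with a single row the (n - 1) denominator of
// the sample variance is zero, so fall back to zero variances instead of
// dividing by zero. `crossproduct` is assumed to be a centered, row-major
// p*p buffer (an illustrative stand-in for the partial crossproduct table).
std::vector<double> diag_variances(const std::vector<double>& crossproduct,
                                   std::int64_t column_count,
                                   std::int64_t row_count) {
    const double inv_nm1 =
        (row_count > 1) ? 1.0 / static_cast<double>(row_count - 1) : 0.0;
    std::vector<double> vars(column_count);
    for (std::int64_t i = 0; i < column_count; ++i)
        vars[i] = crossproduct[i * column_count + i] * inv_nm1;
    return vars;
}
```

Whether zero variances are the right single-row semantics (versus rejecting the input) is a design decision the review asks to be made explicit.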
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
/intelci: run
```cpp
auto nobs_nd = pr::table2ndarray<Float>(nobs_local);
auto crossproduct_nd = pr::table2ndarray<Float>(crossproduct_local);
auto sums_nd = pr::table2ndarray<Float>(sums_local);

auto nobs_ary = dal::array<Float>::wrap(nobs_nd.get_mutable_data(), nobs_nd.get_count());
auto crossproduct_ary =
    dal::array<Float>::wrap(crossproduct_nd.get_mutable_data(), crossproduct_nd.get_count());
auto sums_ary = dal::array<Float>::wrap(sums_nd.get_mutable_data(), sums_nd.get_count());
```
The arrays created with `dal::array<Float>::wrap(...get_mutable_data(), ...)` are non-owning (`empty_delete`). Since `pr::table2ndarray()` allocates and owns a temporary buffer, those buffers are freed when `nobs_nd`/`crossproduct_nd`/`sums_nd` go out of scope, leaving `nobs_table`/`crossproduct_table`/`sums_table` (and the returned aggregated partial result) with dangling pointers. Use an owning `dal::array` for the reduced data (e.g., pull into `dal::array` variables that outlive the tables, or allocate/copy into `dal::array::empty`/`zeros` and reduce those), and wrap the owning arrays into tables.
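The lifetime hazard can be illustrated without the oneDAL types; `View` and `Owning` below are hypothetical stand-ins for a non-owning wrap versus an owning copy:

```cpp
#include <memory>
#include <vector>

// A non-owning view (like dal::array wrapping a raw pointer with an empty
// deleter) dangles once the buffer's owner is destroyed. An owning handle
// keeps the data alive past the original owner's scope, which is what the
// review comment asks for.
struct View {
    const double* data = nullptr;  // non-owning: valid only while the source lives
};

struct Owning {
    std::shared_ptr<std::vector<double>> data;  // shares ownership of the buffer
};

Owning make_owning_copy(const std::vector<double>& src) {
    return Owning{ std::make_shared<std::vector<double>>(src) };
}
```

A `View` taken over a temporary buffer would be left dangling at the end of the temporary's scope, exactly as described for the wrapped `table2ndarray` buffers above, while the `Owning` copy remains valid.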
```cpp
/// Compute partial crossproduct, sums, and nobs on each rank
partial_train_input<task_t> partial_input({}, data);
auto partial_result =
    pca::backend::partial_train_kernel_cpu<Float, method::cov, task_t>{}(ctx,
                                                                         desc,
```
`params` is passed into `call_daal_spmd_kernel()` but never used. Beyond the unused-parameter warning risk, this means SPMD CPU training ignores `train_parameters` hyperparameters that are applied in the single-node path via `convert_parameters(params)`. Either plumb `params` through the partial/finalize flow (so hyperparameters affect SPMD as well) or explicitly mark/justify it as unused and ensure behavior is consistent by design.
Suggested change:
```diff
-/// Compute partial crossproduct, sums, and nobs on each rank
-partial_train_input<task_t> partial_input({}, data);
-auto partial_result =
-    pca::backend::partial_train_kernel_cpu<Float, method::cov, task_t>{}(ctx,
-                                                                         desc,
+/// Compute partial crossproduct, sums, and nobs on each rank.
+/// Forward train parameters to keep SPMD behavior consistent with
+/// the single-node path.
+partial_train_input<task_t> partial_input({}, data);
+auto partial_result =
+    pca::backend::partial_train_kernel_cpu<Float, method::cov, task_t>{}(ctx,
+                                                                         desc,
+                                                                         params,
```
```cpp
const auto cp =
    row_accessor<const Float>(input.get_partial_crossproduct()).pull({ 0, -1 });
const Float inv_nm1 = (row_count > 1) ? Float(1) / (row_count - 1) : Float(0);
Float* vars = arr_vars.get_mutable_data();
for (std::int64_t i = 0; i < column_count; ++i) {
    vars[i] = cp[i * column_count + i] * inv_nm1;
}
```
In the zscore branch this pulls the entire crossproduct table into a temporary contiguous array just to read the diagonal (`row_accessor<...>().pull({ 0, -1 })`). For large feature counts this is an O(p^2) host copy and may offset the intended performance gain. Since the partial crossproduct produced by the CPU kernels is a homogeneous table, consider reading the diagonal directly from its underlying buffer (or otherwise avoid materializing the full matrix) when computing variances.
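A sketch of the strided diagonal read the comment suggests, assuming a contiguous row-major buffer (`diagonal` is an illustrative helper, not a oneDAL API):

```cpp
#include <cstdint>
#include <vector>

// Read only the diagonal of a row-major p*p buffer: an O(p) strided walk
// instead of pulling the full O(p^2) matrix into a temporary copy. This
// assumes the crossproduct table is homogeneous and contiguous, as the
// review comment notes for the CPU kernels.
std::vector<double> diagonal(const double* crossproduct, std::int64_t p) {
    std::vector<double> d(p);
    for (std::int64_t i = 0; i < p; ++i)
        d[i] = crossproduct[i * p + i];  // consecutive diagonal elements are p + 1 apart
    return d;
}
```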
Description
This PR adds distributed (SPMD) training support for PCA on CPU using the covariance method. Previously, the PCA SPMD path was only available on GPU (via SYCL/DPC++); this change enables it for CPU-only environments using MPI or oneCCL communicators.
Checklist:
Completeness and readability
Testing
CI: http://intel-ci.intel.com/f143b984-b8cb-f1f5-949b-a4bf010d0e2d
The CI failure in the LinuxSklearnex Python311 step is in an IncrementalLinearRegression test and should not be related to the PCA changes.
New examples are added
Performance
not applicable