
[shap-waterfall] SHAP Waterfall Plot for Feature Attribution #5237

@MarkusNeusinger

Description

A waterfall-style chart showing how each feature contributes to pushing a model prediction from a base value (expected output) to the final predicted value. Bars extend left (negative SHAP value) or right (positive SHAP value), stacking cumulatively. This is a core ML explainability visualization complementing the existing SHAP summary plot.

Applications

  • Explaining individual predictions in credit scoring models
  • Debugging unexpected model outputs in healthcare ML
  • Communicating feature impact to non-technical stakeholders
  • Regulatory compliance (model explainability requirements)

Data

  • feature (str) — feature names
  • shap_value (float) — SHAP contribution per feature
  • base_value (float) — expected model output
  • final_value (float) — actual prediction
  • Size: 10–20 features typical
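As a rough sketch, a record set matching this schema might look like the following (feature names and values are purely illustrative, not from the spec). The key invariant is that the SHAP values bridge the gap between the base value and the final prediction:

```python
# Hypothetical example matching the data schema above (all values made up)
data = {
    "feature": ["income", "age", "debt_ratio", "num_accounts"],
    "shap_value": [0.42, -0.18, -0.35, 0.07],
    "base_value": 0.55,   # expected model output
    "final_value": 0.51,  # base_value + sum of shap_values
}

# Invariant: contributions account for the full distance from base to prediction
assert abs(data["base_value"] + sum(data["shap_value"]) - data["final_value"]) < 1e-9
```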

Notes

  • Features ordered by absolute SHAP value magnitude
  • Cumulative bar segments from base_value to final_value
  • Color: red for positive, blue for negative contributions
  • Show base value and final prediction as reference lines
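The notes above (ordering by |SHAP|, cumulative segments, red/blue coloring, reference lines) can be mocked up with matplotlib as a sketch of the intended rendering. This is a hypothetical illustration, not the library's implementation; feature names and values are made up:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Illustrative inputs (made up for this sketch)
features = ["income", "age", "debt_ratio", "num_accounts"]
shap_values = np.array([0.42, -0.18, -0.35, 0.07])
base_value = 0.55
final_value = base_value + shap_values.sum()

# Order features by absolute SHAP magnitude, largest first
order = np.argsort(-np.abs(shap_values))
features = [features[i] for i in order]
shap_values = shap_values[order]

# Cumulative segments: each bar starts where the previous one ended
starts = base_value + np.concatenate([[0.0], np.cumsum(shap_values[:-1])])
# Red for positive contributions, blue for negative
colors = ["red" if v > 0 else "blue" for v in shap_values]

fig, ax = plt.subplots()
ax.barh(features, shap_values, left=starts, color=colors)
ax.axvline(base_value, linestyle="--", color="gray", label="base value")
ax.axvline(final_value, linestyle="-", color="black", label="prediction")
ax.invert_yaxis()  # largest |SHAP| at the top
ax.set_xlabel("model output")
ax.legend()
fig.savefig("waterfall.png")
```

Note that the last segment necessarily ends exactly at `final_value`, which is what makes the reference lines meaningful anchors for the chart.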

Metadata

  • Labels: approved (Approved for implementation), spec-ready (Specification merged to main)
  • Status: Todo
