Original Repository: https://github.com/KaHIP/KaHIP
KaHIP (Karlsruhe High Quality Partitioning) is a family of graph partitioning programs that tackle the balanced graph partitioning problem: given an undirected graph G = (V, E) with optional node and edge weights, partition the node set into k disjoint blocks of roughly equal weight while minimizing the total weight of edges crossing block boundaries (the edge cut).
The balance constraint ensures each block's weight does not exceed (1 + imbalance) * ceil(total_weight / k).
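To make the edge-cut and balance definitions concrete, here is a small self-contained sketch on a toy graph with unit node and edge weights; the variable names are our own and not part of the library API:

```python
import math

# Toy example: 6 unit-weight nodes, unit-weight edges, k = 2 blocks.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
assignment = [0, 0, 0, 1, 1, 1]  # block ID per node
k = 2
imbalance = 0.03

# Edge cut: total weight of edges whose endpoints lie in different blocks.
edgecut = sum(1 for u, v in edges if assignment[u] != assignment[v])
print(edgecut)  # 1 -- only edge (2, 3) crosses the boundary

# Balance constraint: each block's weight must not exceed
# (1 + imbalance) * ceil(total_weight / k).
total_weight = len(assignment)  # unit node weights
limit = (1 + imbalance) * math.ceil(total_weight / k)
block_weights = [assignment.count(b) for b in range(k)]
print(all(w <= limit for w in block_weights))  # True
```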
KaHIP uses a multilevel approach with coarsening, initial partitioning, and local search refinement. CHSZLabLib exposes three KaHIP routines:
| Method | Purpose |
|---|---|
| `Decomposition.partition()` | Balanced k-way graph partitioning |
| `Decomposition.node_separator()` | Balanced node separator computation |
| `Decomposition.node_ordering()` | Fill-reducing nested dissection ordering |
Partition a graph into k balanced blocks minimizing the edge cut.
```python
Decomposition.partition(
    g: Graph,
    num_parts: int = 2,
    mode: str = "eco",
    imbalance: float = 0.03,
    seed: int = 0,
    suppress_output: bool = True,
) -> PartitionResult
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `g` | `Graph` | required | Input graph (undirected, optionally weighted). Must be finalized. |
| `num_parts` | `int` | `2` | Number of blocks (must be >= 2). |
| `mode` | `str` | `"eco"` | Quality/speed trade-off (see table below). |
| `imbalance` | `float` | `0.03` | Allowed weight imbalance as a fraction (0.03 = 3%). |
| `seed` | `int` | `0` | Random seed for reproducibility. |
| `suppress_output` | `bool` | `True` | Suppress C++ stdout/stderr output. |
| Mode | Speed | Quality | Best For |
|---|---|---|---|
| `"fast"` | Fastest | Good | Large-scale exploration |
| `"eco"` | Balanced | Very good | Default choice |
| `"strong"` | Slowest | Best | Final production partitions |
| `"fastsocial"` | Fastest | Good | Social / power-law networks |
| `"ecosocial"` | Balanced | Very good | Social / power-law networks |
| `"strongsocial"` | Slowest | Best | Social / power-law networks |
**PartitionResult**

| Field | Type | Description |
|---|---|---|
| `edgecut` | `int` | Total weight of edges crossing block boundaries. |
| `assignment` | `np.ndarray` (int32) | Block ID for each node (0-indexed). |
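As a quick sanity check on a returned partition, block weights and the achieved imbalance can be read off the `assignment` array with NumPy. The array below is a synthetic stand-in for `result.assignment` (unit node weights assumed), not output from the library:

```python
import numpy as np

# Synthetic stand-in for PartitionResult.assignment: 8 nodes, k = 2 blocks.
assignment = np.array([0, 0, 1, 0, 1, 1, 0, 1], dtype=np.int32)
k = 2

# Weight of each block (unit node weights), and achieved imbalance
# relative to the ideal block weight ceil(n / k).
block_weights = np.bincount(assignment, minlength=k)
achieved = block_weights.max() / np.ceil(len(assignment) / k) - 1.0
print(block_weights)  # [4 4]
print(achieved)       # 0.0 -> perfectly balanced
```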
| Exception | Condition |
|---|---|
| `ValueError` | `num_parts < 2` or `imbalance < 0`. |
| `InvalidModeError` | `mode` is not one of the valid choices. |
```python
from chszlablib import Graph, Decomposition

g = Graph.from_metis("mesh.graph")
result = Decomposition.partition(g, num_parts=8, mode="strong", imbalance=0.01)
print(f"Edge cut: {result.edgecut}")
print(f"Block assignments: {result.assignment}")
```

Compute a balanced node separator that splits the graph into two components with no edges between them.
```python
Decomposition.node_separator(
    g: Graph,
    num_parts: int = 2,
    mode: str = "eco",
    imbalance: float = 0.03,
    seed: int = 0,
    suppress_output: bool = True,
) -> SeparatorResult
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `g` | `Graph` | required | Input graph (undirected, optionally weighted). |
| `num_parts` | `int` | `2` | Number of blocks (must be >= 2). |
| `mode` | `str` | `"eco"` | Quality/speed trade-off (same modes as `partition()`). |
| `imbalance` | `float` | `0.03` | Allowed weight imbalance (0.03 = 3%). |
| `seed` | `int` | `0` | Random seed for reproducibility. |
| `suppress_output` | `bool` | `True` | Suppress C++ stdout/stderr output. |
**SeparatorResult**

| Field | Type | Description |
|---|---|---|
| `num_separator_vertices` | `int` | Number of nodes in the separator. |
| `separator` | `np.ndarray` (int32) | Array where each entry is 0 (block A), 1 (block B), or 2 (separator). |
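The defining property of a node separator is that, once separator nodes are removed, no edge joins block A directly to block B. A minimal sketch of that check on a toy graph, with a hand-made `separator` array standing in for `result.separator`:

```python
import numpy as np

# Toy path graph 0-1-2-3-4 with node 2 as separator: removing it
# disconnects {0, 1} (block A) from {3, 4} (block B).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
separator = np.array([0, 0, 2, 1, 1], dtype=np.int32)  # 0 = A, 1 = B, 2 = sep

# Separator property: no edge may join block A directly to block B.
ok = all({separator[u], separator[v]} != {0, 1} for u, v in edges)
print(ok)  # True
print(int((separator == 2).sum()))  # 1 separator vertex
```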
```python
from chszlablib import Graph, Decomposition

g = Graph.from_metis("mesh.graph")
result = Decomposition.node_separator(g, mode="strong")
print(f"Separator size: {result.num_separator_vertices}")
separator_nodes = [i for i, v in enumerate(result.separator) if v == 2]
```

Compute a fill-reducing nested dissection ordering for sparse matrix factorization.
```python
Decomposition.node_ordering(
    g: Graph,
    mode: str = "eco",
    seed: int = 0,
    suppress_output: bool = True,
) -> OrderingResult
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `g` | `Graph` | required | Input graph representing the sparsity pattern. |
| `mode` | `str` | `"eco"` | Quality/speed trade-off (same modes as `partition()`). |
| `seed` | `int` | `0` | Random seed for reproducibility. |
| `suppress_output` | `bool` | `True` | Suppress C++ stdout/stderr output. |
**OrderingResult**

| Field | Type | Description |
|---|---|---|
| `ordering` | `np.ndarray` (int32) | Permutation array. Node `i` should be placed at position `ordering[i]`. |
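Under the documented convention (node `i` moves to position `ordering[i]`), a symmetric matrix is permuted by first inverting the permutation. A sketch with plain NumPy; the 4x4 pattern and the `ordering` array are illustrative stand-ins, not KaHIP output:

```python
import numpy as np

# Hypothetical 4x4 symmetric sparsity pattern (tridiagonal).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
# Stand-in ordering: node i moves to row/column ordering[i].
ordering = np.array([3, 2, 1, 0], dtype=np.int32)

# inv[j] = the original node placed at position j, so the permuted
# matrix is A with rows and columns reindexed by inv.
inv = np.argsort(ordering)
A_perm = A[np.ix_(inv, inv)]
print(A_perm)  # symmetry and sparsity are preserved
```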
```python
from chszlablib import Graph, Decomposition

g = Graph.from_metis("sparse_matrix.graph")
result = Decomposition.node_ordering(g, mode="strong")
perm = result.ordering
```

This Python interface wraps the KaHIP C++ library via pybind11. While convenient for prototyping and integration into Python workflows, there is inherent overhead from the Python/C++ boundary and data conversion. For maximum performance on large-scale instances or in production HPC pipelines, use the original C++ implementation directly from the KaHIP repository.
```bibtex
@inproceedings{DBLP:conf/wea/SandersS13,
  author    = {Peter Sanders and Christian Schulz},
  title     = {Think Locally, Act Globally: Highly Balanced Graph Partitioning},
  booktitle = {Experimental Algorithms, 12th International Symposium, {SEA} 2013},
  series    = {Lecture Notes in Computer Science},
  volume    = {7933},
  pages     = {164--175},
  publisher = {Springer},
  year      = {2013},
  doi       = {10.1007/978-3-642-38527-8\_16}
}

@article{DBLP:journals/tpds/MeyerhenkeSS17,
  author  = {Henning Meyerhenke and Peter Sanders and Christian Schulz},
  title   = {Parallel Graph Partitioning for Complex Networks},
  journal = {{IEEE} Trans. Parallel Distributed Syst.},
  volume  = {28},
  number  = {9},
  pages   = {2625--2638},
  year    = {2017},
  doi     = {10.1109/TPDS.2017.2671868}
}
```