This document describes how to use the public API provided by the qml package.
All notebooks in this repository act as thin clients of these APIs.
The design goals are:
• reusable workflows
• deterministic experiments
• consistent outputs
• minimal configuration
Install the package in editable mode:

```bash
pip install -e .
```

Install development tools:

```bash
pip install -e ".[dev]"
```

Run the default fast local test pass:

```bash
pytest -m "not slow"
```

Run the slower end-to-end coverage as well:

```bash
pytest
```

Tests marked with `@pytest.mark.slow` are used for heavier CLI and determinism checks. CI mirrors this split:
• fast tests run across the Python version matrix
• the full suite runs on Python 3.12
Linting remains separate:

```bash
ruff check .
```

Train a minimal variational quantum classifier on a synthetic dataset:
```python
from qml.classifiers import run_vqc

result = run_vqc(
    n_samples=200,
    noise=0.1,
    test_size=0.25,
    seed=123,
    n_layers=2,
    steps=50,
    step_size=0.1,
    plot=True,
    save=False,
)
```

| parameter | description | default |
|---|---|---|
| n_samples | dataset size | 200 |
| noise | dataset noise level | 0.1 |
| test_size | test fraction | 0.25 |
| seed | random seed | 123 |
| n_layers | ansatz depth | 2 |
| steps | optimisation steps | 50 |
| step_size | Adam learning rate | 0.1 |
| shots | finite-shot sampling | None |
| plot | show plots | False |
| save | save JSON + plots | False |
Typical fields:
```
{
  "model",
  "dataset",
  "seed",
  "n_qubits",
  "n_layers",
  "steps",
  "step_size",
  "loss_history",
  "train_accuracy",
  "test_accuracy",
  "params",
  "y_train",
  "y_test",
  "y_train_pred",
  "y_test_pred",
  "train_probabilities",
  "test_probabilities",
}
```

Train a variational quantum regressor:
```python
from qml.regression import run_vqr

result = run_vqr(
    n_samples=200,
    seed=123,
    n_layers=2,
    steps=50,
    plot=True,
)
```

| parameter | description | default |
|---|---|---|
| n_samples | dataset size | 200 |
| noise | dataset noise | 0.1 |
| test_size | test fraction | 0.25 |
| seed | random seed | 123 |
| n_layers | ansatz depth | 2 |
| steps | optimisation steps | 50 |
| step_size | Adam learning rate | 0.1 |
| shots | finite-shot sampling | None |
| plot | show plots | False |
| save | save outputs | False |
Typical fields:

```
{
  "train_mse",
  "test_mse",
  "train_mae",
  "test_mae",
  "loss_history",
}
```

Train a hierarchical quantum classifier on a synthetic dataset:
```python
from qml.qcnn import run_qcnn

result = run_qcnn(
    n_samples=200,
    noise=0.1,
    test_size=0.25,
    seed=123,
    steps=50,
    step_size=0.1,
    plot=True,
    save=False,
)
```

| parameter | description | default |
|---|---|---|
| n_samples | dataset size | 200 |
| noise | dataset noise level | 0.1 |
| test_size | test fraction | 0.25 |
| seed | random seed | 123 |
| steps | optimisation steps | 50 |
| step_size | Adam learning rate | 0.1 |
| shots | finite-shot sampling | None |
| plot | show plots | False |
| save | save JSON + plots | False |
Typical fields:
```
{
  "model",
  "dataset",
  "seed",
  "n_qubits",
  "steps",
  "step_size",
  "loss_history",
  "train_accuracy",
  "test_accuracy",
  "params",
  "embedding_params",
  "conv1_params",
  "conv2_params",
  "dense_params",
  "y_train",
  "y_test",
  "y_train_pred",
  "y_test_pred",
  "train_probabilities",
  "test_probabilities",
}
```

Train a quantum autoencoder on a structured family of four-qubit states:
```python
from qml.autoencoder import run_quantum_autoencoder

result = run_quantum_autoencoder(
    n_samples=200,
    noise=0.05,
    test_size=0.25,
    seed=123,
    n_layers=2,
    latent_qubits=2,
    steps=50,
    step_size=0.1,
    family="correlated",
    plot=True,
    save=False,
)
```

| parameter | description | default |
|---|---|---|
| n_samples | dataset size | 200 |
| noise | family perturbation level | 0.05 |
| test_size | test fraction | 0.25 |
| seed | random seed | 123 |
| n_layers | autoencoder ansatz depth | 2 |
| latent_qubits | retained latent qubits | 2 |
| steps | optimisation steps | 50 |
| step_size | Adam learning rate | 0.1 |
| family | state family | "correlated" |
| plot | show plots | False |
| save | save JSON + plots | False |
Typical fields:
```
{
  "model",
  "family",
  "seed",
  "n_qubits",
  "latent_qubits",
  "trash_qubits",
  "n_layers",
  "steps",
  "step_size",
  "loss_history",
  "train_compression_fidelity",
  "test_compression_fidelity",
  "train_reconstruction_fidelity",
  "test_reconstruction_fidelity",
  "params",
}
```

Compute a quantum kernel matrix and train an SVM:
```python
from qml.kernel_methods import run_quantum_kernel_classifier

result = run_quantum_kernel_classifier(
    n_samples=200,
    seed=123,
    plot=True,
)
```

| parameter | description | default |
|---|---|---|
| n_samples | dataset size | 200 |
| noise | dataset noise | 0.1 |
| test_size | test fraction | 0.25 |
| seed | random seed | 123 |
| shots | finite-shot kernel estimation | None |
| plot | show kernel plots | False |
| save | save outputs | False |
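The general shape of a fidelity-style quantum kernel can be illustrated with a small NumPy sketch. This is an assumption about the construction, not the package's exact circuit: for a hypothetical single-qubit angle embedding `RY(x)|0⟩`, the kernel is the squared state overlap, `k(x, x') = |⟨φ(x)|φ(x')⟩|²`, giving a symmetric Gram matrix with unit diagonal.

```python
import numpy as np

def embed(x):
    # Hypothetical single-qubit angle embedding: RY(x)|0> has real amplitudes
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def fidelity_kernel(xs):
    # k(x, x') = |<phi(x)|phi(x')>|^2: symmetric, unit diagonal
    states = np.stack([embed(x) for x in xs])
    overlaps = states @ states.T
    return overlaps ** 2

K = fidelity_kernel(np.array([0.0, np.pi / 2, np.pi]))
print(np.round(K, 3))
```

Orthogonal embeddings (here `x = 0` vs `x = π`) give a kernel entry of zero, which is what makes the Gram matrix informative for an SVM.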
Typical fields:

```
{
  "train_accuracy",
  "test_accuracy",
  "kernel_matrix_train",
  "kernel_matrix_test",
  "y_train_pred",
  "y_test_pred",
}
```

Optimise feature-map parameters using kernel-target alignment:
```python
from qml.trainable_kernels import run_trainable_quantum_kernel_classifier

result = run_trainable_quantum_kernel_classifier(
    n_samples=200,
    steps=50,
    plot=True,
)
```

| parameter | description | default |
|---|---|---|
| embedding | feature map type | "angle" |
| n_layers | circuit depth | 2 |
| steps | optimisation steps | 50 |
| step_size | learning rate | 0.1 |
| shots_train | shots used during optimisation | None |
| shots_kernel | shots used for kernel evaluation | None |
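Kernel-target alignment scores how well the kernel Gram matrix K matches the ideal label kernel yyᵀ. A minimal NumPy sketch of the alignment score (illustrative, not the package's implementation):

```python
import numpy as np

def target_alignment(K, y):
    # A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F)
    T = np.outer(y, y)
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

y = np.array([1, 1, -1, -1])
K_ideal = np.outer(y, y).astype(float)  # perfectly aligned kernel
K_flat = np.eye(4)                      # uninformative identity kernel
print(target_alignment(K_ideal, y))
print(target_alignment(K_flat, y))
```

Training maximises this score (equivalently, minimises its negative), pushing the feature map towards a kernel that separates the classes.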
Typical fields:

```
{
  "train_accuracy",
  "test_accuracy",
  "final_alignment",
  "loss_history",
  "kernel_matrix_train",
}
```

Train a supervised quantum embedding model using contrastive loss:
```python
from qml.metric_learning import run_quantum_metric_learner

result = run_quantum_metric_learner(
    samples=200,
    test_size=0.25,
    seed=123,
    layers=2,
    steps=50,
    stepsize=0.05,
    margin=0.5,
    pairs_per_step=32,
    plot=True,
)
```

The model learns an embedding geometry such that:
• samples from the same class are mapped closer together
• samples from different classes are separated by a margin
Classification is performed using nearest-centroid prediction in the learned embedding space.
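Nearest-centroid prediction itself is straightforward. A NumPy sketch, under the assumption that the embeddings are real vectors (the actual embedding geometry is produced by the quantum circuit):

```python
import numpy as np

def nearest_centroid_predict(train_emb, y_train, test_emb):
    # One centroid per class, then assign each test point to the closest one
    classes = np.unique(y_train)
    centroids = np.stack([train_emb[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_emb[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

train_emb = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
test_emb = np.array([[0.1, 0.0], [1.0, 0.9]])
preds = nearest_centroid_predict(train_emb, y_train, test_emb)
print(preds)
```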
| parameter | description | default |
|---|---|---|
| dataset | dataset name ("moons", "circles", "blobs") | "moons" |
| samples | dataset size | 120 |
| test_size | test fraction | 0.25 |
| seed | random seed | 42 |
| layers | number of trainable embedding layers | 2 |
| steps | optimisation steps | 100 |
| stepsize | Adam learning rate | 0.05 |
| margin | contrastive separation margin | 0.5 |
| pairs_per_step | number of sampled training pairs per step | 32 |
| log_every | logging frequency | 10 |
| scale_data | standardise features before encoding | True |
| plot | display training loss | False |
| save | save JSON + plots | False |
| results_dir | override results output directory | None |
| images_dir | override images output directory | None |
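The contrastive objective behind `margin` can be sketched in NumPy: same-class pairs are penalised by their squared distance, while different-class pairs are only penalised when they fall inside the margin. This is an illustrative standard form; the package's exact loss may differ:

```python
import numpy as np

def contrastive_pair_loss(emb_a, emb_b, same_class, margin=0.5):
    # Same-class pairs: pull together (penalise distance).
    # Different-class pairs: push apart (penalise margin shortfall).
    d = np.linalg.norm(emb_a - emb_b)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Identical same-class embeddings incur zero loss
print(contrastive_pair_loss(np.zeros(2), np.zeros(2), True))
# Different-class embeddings beyond the margin also incur zero loss
print(contrastive_pair_loss(np.zeros(2), np.array([1.0, 0.0]), False, margin=0.5))
```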
Returns a `QuantumMetricLearningResult` dataclass. Key attributes:

```python
result.train_accuracy
result.test_accuracy
result.loss_history
result.params
result.train_embeddings
result.test_embeddings
result.train_centroids
```

When `save=True`, the workflow writes JSON results and generated figures. By default these are stored under:

```
results/metric_learning/
images/metric_learning/
```
```bash
python -m qml metric-learning \
    --samples 200 \
    --layers 2 \
    --steps 50 \
    --plot \
    --save
```

Optional arguments:

```
--margin 0.5
--pairs-per-step 32
--log-every 10
--no-scale-data
--save
```

Finite-shot simulation is supported across all quantum workflows.
Internally implemented via:

```python
qml.set_shots(qnode, shots)
```

Example:

```python
run_vqc(shots=128)
run_quantum_kernel_classifier(shots=256)
run_trainable_quantum_kernel_classifier(
    shots_train=64,
    shots_kernel=256,
)
```

When a seed is provided, runs remain deterministic.
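Finite-shot estimates fluctuate around the analytic expectation and converge as shots grow. A NumPy sketch of estimating ⟨Z⟩ from simulated measurements (illustrative only, not how the simulator draws samples internally):

```python
import numpy as np

def estimate_expval_z(p0, shots, seed=0):
    # <Z> = p(0) - p(1); estimate it from `shots` simulated measurements
    rng = np.random.default_rng(seed)
    outcomes = rng.random(shots) < p0  # True -> measured |0>
    return 2 * outcomes.mean() - 1

exact = 2 * 0.8 - 1  # analytic <Z> = 0.6 when p(0) = 0.8
for shots in (128, 100_000):
    est = estimate_expval_z(0.8, shots)
    print(shots, est, abs(est - exact))
```

The standard error shrinks roughly as 1/√shots, which is why small `shots` values make training noisier while `shots=None` (analytic mode) stays exact.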
Classical reference models:
```python
from qml.classical_baselines import (
    run_logistic_classifier,
    run_svm_classifier,
    run_mlp_classifier,
    run_ridge_regression,
    run_mlp_regressor,
)
```

Example:

```python
result = run_logistic_classifier(
    n_samples=200,
    seed=123,
)
```

Compare models across multiple seeds:
```python
from qml.benchmarks import compare_classification_models

result = compare_classification_models(
    models=[
        "vqc",
        "qcnn",
        "quantum_kernel",
        "trainable_quantum_kernel",
        "logistic_regression",
        "svm_classifier",
        "mlp_classifier",
    ],
    seeds=[123, 456, 789],
    n_samples=200,
)
```

```python
from qml.benchmarks import compare_regression_models

result = compare_regression_models(
    models=[
        "vqr",
        "ridge_regression",
        "mlp_regressor",
    ],
    seeds=[123, 456],
)
```

Per-model configuration can be passed via:
```python
result = compare_classification_models(
    models=[
        "vqc",
        "qcnn",
        "quantum_kernel",
        "trainable_quantum_kernel",
    ],
    model_kwargs={
        "vqc": {
            "shots": 128,
            "n_layers": 2,
        },
        "quantum_kernel": {
            "shots": 256,
        },
        "trainable_quantum_kernel": {
            "shots_train": 64,
            "shots_kernel": 256,
            "steps": 25,
        },
    },
)
```

Benchmark results remain consistent in structure across models.
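Because the per-model structure is consistent, aggregating across seeds is a one-liner. A sketch assuming each model maps to a list of per-seed test accuracies (a hypothetical shape for illustration, not the exact return type of `compare_classification_models`):

```python
import numpy as np

# Hypothetical per-seed test accuracies keyed by model name
per_seed_accuracy = {
    "vqc": [0.84, 0.88, 0.86],
    "logistic_regression": [0.90, 0.89, 0.91],
}

# Summarise each model as (mean, std) over seeds
summary = {
    model: (float(np.mean(acc)), float(np.std(acc)))
    for model, acc in per_seed_accuracy.items()
}
for model, (mean, std) in summary.items():
    print(f"{model}: {mean:.3f} +/- {std:.3f}")
```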
Run workflows directly:
```bash
python -m qml vqc --steps 50 --plot
python -m qml qcnn --steps 50 --plot
python -m qml autoencoder --steps 50 --plot
python -m qml regression --steps 50 --plot
python -m qml kernel --plot
python -m qml trainable-kernel --steps 50 --plot
python -m qml metric-learning --steps 50 --plot
```

Classification:

```bash
python -m qml benchmark classification \
    --models vqc qcnn quantum_kernel logistic_regression svm_classifier \
    --seeds 123 456
```

Regression:

```bash
python -m qml benchmark regression \
    --models vqr ridge_regression mlp_regressor \
    --seeds 123 456
```

All workflows support deterministic execution via `seed`.
Reproducibility applies to:
• dataset generation
• parameter initialisation
• optimisation trajectories
• finite-shot sampling
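The practical consequence is that two runs with the same seed produce identical numbers end to end. A NumPy sketch of the seeding pattern (the package's internal RNG handling is assumed to follow this idea):

```python
import numpy as np

def make_dataset(seed, n_samples=8):
    # All randomness flows from one seeded generator
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, 2))
    y = rng.integers(0, 2, size=n_samples)
    return X, y

X1, y1 = make_dataset(seed=123)
X2, y2 = make_dataset(seed=123)
# Same seed -> bit-for-bit identical data
assert np.array_equal(X1, X2) and np.array_equal(y1, y2)
```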
Outputs can optionally be saved:
```
results/
images/
```
These directories are gitignored.
Notebooks should import from the package:
```python
from qml.classifiers import run_vqc
```

rather than defining circuits inline.
This ensures:
• consistent behaviour
• reproducible outputs
• shared infrastructure
• minimal duplication
Execute:

```bash
pytest
```

Format code:

```bash
black .
ruff check .
```

Run module:

```bash
python -m qml
```

Sid Richards
LinkedIn: https://www.linkedin.com/in/sid-richards-21374b30b/
GitHub: https://github.com/SidRichardsQuantum
MIT License — see LICENSE