Hi,
I’m building a verification system with a strict separation between:
- normalized verification interface (stable output shape)
- pluggable cryptographic backend (including ML-DSA experimental path)
Current properties:
- deterministic outputs (valid + reason_code + fail_path)
- backend-agnostic interface (can swap mock → real PQ backend)
- no leakage of raw backend errors into public interface
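To make the contract concrete, here is a minimal sketch of what I mean (all names here — `VerificationResult`, `Backend`, the reason codes — are illustrative, not my actual API):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class VerificationResult:
    """Stable output shape, independent of the backend in use."""
    valid: bool
    reason_code: str   # stable, enumerable code (e.g. "OK", "BAD_SIGNATURE")
    fail_path: str     # which stage failed; "" on success

class Backend(Protocol):
    """Pluggable backend: mock today, real PQ implementation later."""
    def verify(self, message: bytes, signature: bytes, public_key: bytes) -> bool: ...

class MockBackend:
    """Transitional stand-in used to exercise the interface."""
    def verify(self, message: bytes, signature: bytes, public_key: bytes) -> bool:
        if len(signature) == 0:
            raise ValueError("empty signature")  # raw backend error
        return signature == b"VALID"

def verify(backend: Backend, message: bytes,
           signature: bytes, public_key: bytes) -> VerificationResult:
    """Normalized entry point: deterministic output, no raw backend errors leak."""
    try:
        ok = backend.verify(message, signature, public_key)
    except Exception:
        # Any backend exception is mapped to a stable code, never re-raised.
        return VerificationResult(False, "BACKEND_ERROR", "backend.verify")
    if ok:
        return VerificationResult(True, "OK", "")
    return VerificationResult(False, "BAD_SIGNATURE", "signature_check")
```

Swapping `MockBackend` for a real PQ backend should leave every `VerificationResult` shape and reason code unchanged — that is the invariant I want to protect.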
Question:
From your experience with PQ integrations (e.g. liboqs):
what usually breaks first when moving from mock / transitional
implementations to real PQ verification backends?
Specifically:
- is it interface shape assumptions?
- failure semantics?
- signature/input normalization?
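On the normalization point, the kind of check I have in mind is strict input validation before the backend ever sees the bytes — e.g. ML-DSA signatures have fixed lengths per parameter set under FIPS 204. A rough sketch (function name is hypothetical):

```python
# Fixed ML-DSA signature lengths in bytes, per FIPS 204 parameter set.
ML_DSA_SIG_LEN = {"ML-DSA-44": 2420, "ML-DSA-65": 3309, "ML-DSA-87": 4627}

def normalize_signature(sig: bytes, param_set: str) -> bytes:
    """Reject malformed input before it reaches the cryptographic backend."""
    expected = ML_DSA_SIG_LEN.get(param_set)
    if expected is None:
        raise ValueError(f"unknown parameter set: {param_set}")
    if len(sig) != expected:
        raise ValueError(f"expected {expected}-byte signature, got {len(sig)}")
    return sig
```

A mock backend tends to tolerate arbitrary-length inputs, so it is exactly this layer I worry silently diverges once a real backend enforces sizes.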
The goal is to keep the integration contract stable while evolving the backend.
Would appreciate any guidance or pitfalls to watch for.