Every risk score, every migration playbook, every runtime block ships with the source evidence that produced it. We do not ship uncited model output.
Applied to our own AI first.
Quanterios sells AI governance. We have to live the discipline ourselves. The principles below shape how the AI Decision Engine is built, deployed, and audited.
Risk scoring combines deterministic rules with an XGBoost model: auditable, reproducible, never hallucinated. LLM reasoning sits on top of those scores, not underneath them.
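A minimal sketch of what "rules plus model, LLM on top" can mean in practice. Everything here is illustrative, not the Quanterios API: the rule weights, field names, and the placeholder standing in for the trained XGBoost booster are all assumptions.

```python
# Hypothetical hybrid risk scorer: deterministic rules are auditable and
# reproducible; a learned model (XGBoost in production) refines the score.
# The LLM would only narrate the result, never produce the number itself.

def rule_score(asset: dict) -> float:
    """Deterministic rules: each hit adds a fixed, documented weight."""
    score = 0.0
    if asset.get("algorithm") in {"RSA-2048", "ECDSA-P256"}:
        score += 0.4  # quantum-vulnerable primitive
    if asset.get("internet_facing"):
        score += 0.3  # exposed attack surface
    if asset.get("data_lifetime_years", 0) > 10:
        score += 0.3  # harvest-now-decrypt-later exposure
    return min(score, 1.0)

def model_score(asset: dict) -> float:
    """Placeholder for the trained booster's calibrated probability."""
    return 0.5  # a real deployment would call the pinned XGBoost model here

def risk_score(asset: dict) -> dict:
    """Blend rules and model; keep both inputs visible for audit."""
    rules = rule_score(asset)
    model = model_score(asset)
    return {
        "score": round(0.6 * rules + 0.4 * model, 2),
        "rule_score": rules,
        "model_score": model,
    }
```

Keeping `rule_score` and `model_score` in the output alongside the blended `score` is what makes the number reproducible: an auditor can recompute each component independently.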
We do not fine-tune our models on customer content. Our migration-outcomes corpus uses anonymised aggregate signals only.
Crypto Agility API algorithm swaps require explicit policy authorisation. AI Runtime denials are auditable and customer-overridable.
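The authorisation gate described above can be sketched as follows. This is a hypothetical model, not the Crypto Agility API itself: the class names, the swap-pair policy representation, and the override flag are assumptions chosen to show the pattern of explicit allow-lists plus auditable, overridable denials.

```python
# Hypothetical sketch: an algorithm swap succeeds only if the customer's
# policy explicitly authorises that (from, to) pair. Every decision,
# including denials and customer overrides, lands in an audit log.

from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_swaps: set  # explicitly authorised (from_alg, to_alg) pairs

@dataclass
class Runtime:
    policy: Policy
    audit_log: list = field(default_factory=list)

    def request_swap(self, from_alg: str, to_alg: str, override: bool = False) -> bool:
        allowed = (from_alg, to_alg) in self.policy.allowed_swaps or override
        self.audit_log.append({
            "from": from_alg,
            "to": to_alg,
            "decision": "allow" if allowed else "deny",
            "override": override,
        })
        return allowed
```

For example, a policy authorising only `("RSA-2048", "ML-KEM-768")` would deny a swap to any other target, record the denial, and still let the customer force it through with `override=True`, which is itself logged.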
We classified the platform's own AI components against the EU AI Act risk tiers and meet the obligations for the tier we sit in.
Third-party LLMs (Claude, with OpenAI as fallback) run under signed contracts with no-training agreements and EU data residency where available. Model versions are pinned in our internal AIBOM.
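An AIBOM entry of the kind referred to above might look like the fragment below. This is purely illustrative: the schema, field names, and placeholder values are assumptions, not our actual AIBOM format.

```yaml
# Illustrative AIBOM fragment (hypothetical schema; values are placeholders)
components:
  - name: claude
    provider: Anthropic
    model_version: "<pinned-version>"
    role: primary
    residency: eu
    no_training_agreement: true
  - name: openai-fallback
    provider: OpenAI
    model_version: "<pinned-version>"
    role: fallback
    no_training_agreement: true
```

Pinning exact versions is what lets an audit tie a given output back to the specific model build that produced it.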
We document Quanterios's own AI components: what each one does, the risk-tier classification we hold ourselves to, and the obligations we meet for that tier.
Need the responsible-AI package?
Email trust@quanterios.com for the technical documentation, classification rationale, and post-market monitoring summary.