Principles translated into operational design rules
Fairness, accountability, transparency, robustness, privacy, human oversight, and environmental responsibility translated into concrete design rules engineering teams apply at the point of decision, not into a slide deck. Each principle is paired with the artefacts that evidence it, the gates that enforce it, and the metrics that demonstrate it. Principles without an enforcement mechanism are out of scope.
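As an illustration of what "paired with artefacts, gates, and metrics" can look like when made machine-checkable, the sketch below models each design rule as a record carrying its evidence, its enforcing gate, and a pass/fail metric. The DesignRule type, the example rules, and the thresholds are hypothetical, not the firm's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignRule:
    """One operational rule derived from a principle, with its evidence and enforcement."""
    principle: str                 # e.g. "fairness", "robustness"
    artefacts: tuple[str, ...]     # evidence the rule produces at its gate
    gate: str                      # lifecycle gate at which the rule is enforced
    metric: str                    # metric that demonstrates the rule is met
    threshold: float               # pass/fail bound applied at the gate

# Hypothetical rules; real ones come out of the firm's own policy work.
RULES = [
    DesignRule("fairness", ("fairness_report",), "pre_deployment",
               "demographic_parity_gap", threshold=0.05),
    DesignRule("robustness", ("evaluation_report",), "pre_deployment",
               "accuracy_drop_under_perturbation", threshold=0.10),
]

def unenforceable(rules: list[DesignRule]) -> list[DesignRule]:
    """Per the rule above: a principle with no gate or no metric is out of scope."""
    return [r for r in rules if not r.gate or not r.metric]
```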
Risk classification and intake
An intake process that classifies every new AI system, or material change to an existing one, against the firm's risk taxonomy, drawing on the EU AI Act categorisation, the NIST AI RMF profile, sectoral risk frameworks, and the firm's own internal model risk policy. The classification drives the depth of the controls, the gates that apply, and the seniority of the review. Most programmes lack this front door, and its absence is the failure pattern Saiph engagements address first.
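A minimal sketch of what such a front door might look like, assuming a deliberately simplified three-question intake. The tier names loosely echo the EU AI Act categories, but the questions, classification logic, and review mappings are illustrative placeholders, not a real taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical intake questions; a real taxonomy would draw on the EU AI Act,
# NIST AI RMF, sectoral frameworks, and internal model risk policy.
def classify(affects_individuals: bool, automated_decision: bool,
             regulated_domain: bool) -> RiskTier:
    if automated_decision and regulated_domain:
        return RiskTier.HIGH
    if affects_individuals:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The tier drives the depth of controls and the seniority of review.
REVIEW_BY_TIER = {
    RiskTier.HIGH:    ("design_review", "pre_deployment_gate", "board_signoff"),
    RiskTier.LIMITED: ("design_review", "pre_deployment_gate"),
    RiskTier.MINIMAL: ("self_assessment",),
}
```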
Lifecycle gates and the artefacts they produce
Defined gates across the AI lifecycle — use-case intake, design review, pre-deployment, post-deployment, and decommission — each with the artefacts the gate requires: model cards, system cards, data sheets, AI impact assessments, evaluation reports, fairness reports, privacy assessments, security reviews, and operational runbooks. The artefacts are version-controlled, queryable, and produced as part of delivery rather than assembled retrospectively.
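One way to make gates enforceable rather than retrospective is to encode the required artefacts per gate and check them automatically in the delivery pipeline. The gate names below follow the lifecycle above; the artefact sets and the gate_check helper are hypothetical.

```python
REQUIRED_ARTEFACTS = {
    "use_case_intake": {"ai_impact_assessment"},
    "design_review":   {"data_sheet", "privacy_assessment"},
    "pre_deployment":  {"model_card", "evaluation_report", "fairness_report",
                        "security_review"},
    "post_deployment": {"operational_runbook"},
    "decommission":    {"decommission_record"},
}

def gate_check(gate: str, produced: set[str]) -> list[str]:
    """Return the artefacts still missing at this gate; an empty list means it passes."""
    return sorted(REQUIRED_ARTEFACTS[gate] - produced)

# Run in CI so artefacts are produced as part of delivery, not assembled later:
missing = gate_check("pre_deployment", {"model_card", "evaluation_report"})
assert missing == ["fairness_report", "security_review"]
```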
Explainability and transparency engineering
Explainability designed in at the model level — interpretable models where the workflow demands it, post-hoc explanation (SHAP, LIME, integrated gradients, counterfactuals, attention attribution) where the model class requires it, retrieval citations and chain-of-evidence for generative outputs, and the user-facing transparency mechanisms (notices, disclosures, right-to-explanation responses) regulators and customers expect. Transparency is treated as an engineered surface, not as a product communication.
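For the post-hoc methods named above, SHAP is one concrete option. The sketch below, assuming a scikit-learn tree model trained on synthetic data, produces per-feature attributions that can be stored as the evidence trail behind an individual decision; the model and features are placeholders, not a recommended setup.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.Explainer(model, X)   # dispatches to a suitable algorithm
shap_values = explainer(X[:10])        # per-feature attributions for 10 cases

# Each explanation carries a base value plus per-feature contributions, which
# can be stored with the decision to back a right-to-explanation response.
print(shap_values[0].values, shap_values[0].base_values)
```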
Human oversight, accountability, and redress
Human oversight engineered as a real control rather than as a sentence in the policy — defined human-in-the-loop checkpoints, override mechanisms, escalation paths, named owners on every system, redress processes for affected individuals, and the documentation regulators specifically examine when they ask whether oversight is meaningful. Saiph engagements reject the failure pattern in which "human oversight" is satisfied by an unread alert sent to a shared mailbox.
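A sketch of oversight as a real control rather than an unread alert: low-confidence decisions are held in a worked review queue for a named owner, and overrides are logged with who acted and when. The Decision fields, the threshold, and both helpers are hypothetical illustrations of the checkpoint, escalation, and audit pattern.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    case_id: str
    model_output: str
    confidence: float
    system_owner: str              # every system carries a named owner
    reviewed_by: str | None = None
    overridden: bool = False

REVIEW_THRESHOLD = 0.90  # hypothetical; set per system at the design-review gate

def route(decision: Decision, review_queue: list[Decision]) -> Decision | None:
    """Hold low-confidence decisions at a human checkpoint instead of auto-acting."""
    if decision.confidence < REVIEW_THRESHOLD:
        review_queue.append(decision)  # a worked queue, not a shared mailbox
        return None                    # no action until a reviewer signs off
    return decision

def override(decision: Decision, reviewer: str, audit_log: list[dict]) -> None:
    """Record a human override with who and when: the evidence regulators examine."""
    decision.reviewed_by, decision.overridden = reviewer, True
    audit_log.append({"case": decision.case_id, "reviewer": reviewer,
                      "at": datetime.now(timezone.utc).isoformat()})
```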
Lifecycle governance and operating cadence
The committees, working groups, review boards, RACI, lifecycle gates, evidence repository, model registry, escalation procedures, and reporting cadence that turn the framework into a running organisational practice. The cadence is itself the deliverable: a framework with no operating rhythm decays within two quarters. Saiph engagements design the rhythm in alongside the controls and exercise it before handover.
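A minimal sketch of how a model registry can drive the cadence rather than merely record it: each entry carries a review interval set by its risk tier, and the review board's agenda is whatever has lapsed. The RegistryEntry shape and the interval logic are assumptions for illustration, not a prescribed registry design.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RegistryEntry:
    system_id: str
    risk_tier: str
    owner: str
    last_review: date
    review_interval_days: int  # cadence set by tier, e.g. quarterly for high risk

    def next_review(self) -> date:
        return self.last_review + timedelta(days=self.review_interval_days)

def overdue(registry: list[RegistryEntry], today: date) -> list[RegistryEntry]:
    """Feed the review board's agenda: systems whose operating rhythm has lapsed."""
    return [e for e in registry if e.next_review() < today]
```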