This is a field notes paper: a structured conceptual contribution grounded in direct practitioner observation, prior to the formal development of a working paper. It is the second article in a series on AI Integrity Management as an emerging enterprise discipline, produced by the AI Integrity Management working group at The Integral Management Society — a Swiss non-profit association bringing together senior specialists from adaptive systems, complex systems, artificial intelligence, mission-critical operations and governance. The operational and research arm of the working group is Tegrity.ai.
This series is addressed to enterprise architects, CIOs, CDOs, AI governance leads, risk and compliance officers, and transformation executives who are navigating the practical challenge of deploying AI systems that remain reliable, controllable and aligned under real operational conditions.
Artificial intelligence is generating a proliferation of oversight disciplines — AI ethics, AI safety, AI compliance, AI governance, AI cybersecurity, operational AI reliability — each with its own frameworks, its own practitioners, and its own claim on the enterprise agenda. The question this article asks is a simple one: are these disciplines converging, and if so, is there a single function that can hold them together? From the perspective of the Tegrity.ai working group at The Integral Management Society, the answer to the first question is probably yes. The answer to the second is what we are trying to work out.
The landscape: too many disciplines, not enough integration
Anyone working inside a large organisation on AI-related risk will recognise the fragmentation. Compliance teams are busy mapping AI systems against the EU AI Act and ISO/IEC 42001. Ethics boards are reviewing algorithmic fairness and human-in-the-loop policies. Cybersecurity functions are expanding their remit to cover model poisoning and adversarial attacks under what is now being called AI Security Posture Management (AISPM). Site reliability engineers are concerned with model drift, catastrophic forgetting, and the kind of operational consistency that mission-critical systems demand. Researchers at Harvard Law School’s Program on Corporate Governance have noted that board oversight of AI is fast becoming a non-delegable responsibility, comparable to financial reporting or cybersecurity risk.
These disciplines are not yet talking to each other fluently. Each has its own language, its own KPIs, and its own reporting line. A survey cited by Domino Data Lab found that while 97 % of organisations now set responsible AI goals and 95 % plan to revise their governance frameworks, nearly half acknowledge they lack the resources to implement those frameworks fully. Only around one in four organisations has a formal AI model governance programme in place. The ambition is there. The integration is not.
What each discipline brings — and where it falls short
It is worth being precise about what each domain contributes and where its limits lie, because any case for integration has to be honest about what would be gained and what would be risked.
AI ethics and responsible AI have produced more than 160 published guidelines from governments, international bodies, and corporations, according to a review of the governance landscape. The principles — fairness, transparency, accountability, non-maleficence — are broadly shared. The persistent problem is operationalisation: translating «be fair» into a measurable engineering constraint is genuinely hard, and ethics teams frequently find themselves isolated from the engineers actually building the systems they are supposed to govern.
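To make the operationalisation gap concrete, consider one common (and contested) translation of «be fair»: a demographic parity constraint, which can be checked in a few lines of code. The sketch below is purely illustrative; the function name, sample data, and 0.05 tolerance are our own assumptions, not part of any published guideline.

```python
# Minimal illustrative sketch: one way to turn "be fair" into a checkable
# engineering constraint. Names, data, and the 0.05 tolerance are assumptions.

def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rates across groups.

    `outcomes` maps a group label to a list of binary decisions (1 = positive).
    """
    rates = [sum(v) / len(v) for v in outcomes.values() if v]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5 % positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5 % positive
}

gap = demographic_parity_gap(decisions)
if gap > 0.05:  # the tolerance is a policy decision, not an engineering constant
    print(f"Fairness constraint violated: parity gap = {gap:.3f}")
```

Even this toy version makes the difficulty visible: the metric is trivial to compute, but choosing the groups, the decision variable, and the tolerance is exactly the work that cannot be delegated to engineering alone.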
AI compliance and governance, anchored in frameworks such as the NIST AI Risk Management Framework (AI RMF) and the EU AI Act, offer the most institutionalised structures. The NIST AI RMF organises trustworthy AI around four functions — Govern, Map, Measure, Manage — and provides explicit expectations around robustness, safety, fairness, privacy, and transparency. ISO/IEC 42001, the first international management-system standard for AI, goes further by offering a certification scheme that mirrors what ISO 27001 did for information security. These frameworks are becoming de facto requirements in regulated supply chains. Their limitation, as practitioners note, is the risk of «governance theatre»: organisations that produce compliant documentation without changing operational behaviour.
AI cybersecurity is moving fast. The category of AI Security Posture Management (AISPM) has emerged to continuously scan AI workloads for misconfigurations, insecure integrations, and compliance gaps. Cybersecurity firms such as DigiCert have forecast that «model integrity is the new data integrity» for 2026, predicting that securing AI models against poisoning, extraction, and provenance loss will become a top boardroom priority. What cybersecurity struggles with is that AI failures are often probabilistic rather than adversarial: a model that hallucinates a policy during a live customer interaction is not a hacked endpoint, but neither is it purely an ethics failure — it is an integrity failure in a more technical sense.
Operational AI integrity — the domain of MLOps, site reliability engineering, and AI platform teams — focuses on uptime, consistency, and behavioural predictability: the 99.9x availability that mission-critical deployments assume. Maintaining data integrity throughout the AI pipeline demands robust controls on training data, model versioning, drift monitoring, and incident response tailored to AI failures rather than generic IT outages. This domain tends to be technically fluent but sometimes disconnected from the ethical and regulatory dimensions of the very systems it operates.
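To make «drift monitoring» concrete, the sketch below computes one widely used drift signal, the population stability index (PSI), comparing a feature's live distribution against its training baseline. Everything here is illustrative: the bucket count, the epsilon, and the 0.2 alert threshold are conventional rules of thumb, not requirements of any framework cited in this article.

```python
import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    """Population stability index between two samples of one feature.

    Bins are defined on the baseline's range; a small epsilon avoids log(0).
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    eps = 1e-6

    def bin_fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1  # clamp values outside the baseline range
        return [c / len(sample) + eps for c in counts]

    b, l = bin_fractions(baseline), bin_fractions(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))

baseline = [0.1 * i for i in range(100)]    # training-time feature sample
live = [0.1 * i + 3.0 for i in range(100)]  # shifted live sample
print(f"PSI = {psi(baseline, live):.2f}")   # PSI > 0.2 is a common alert rule of thumb
```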
Is integration actually happening?
There are visible signs that these domains are pulling towards each other, even without a common name for where they are heading.
Enterprise AI governance guides published in 2025 already describe multi-tier operating models that look like integrated oversight functions: an AI Governance Committee at executive level, AI review boards at operational level, and technical working groups covering bias testing, security, and monitoring. Implementation guides emphasise that policy, risk assessment, compliance alignment, technical controls, ethical guidelines, and runtime monitoring must be orchestrated together rather than managed as separate checklists. This is not, in those guides, described as AI Integrity Management. But structurally, it is the same thing.
At the conceptual level, a 2025 paper published on arXiv («AI Integrity: Defining and Measuring the Consistency and Verifiability of AI Reasoning,» arXiv:2604.11065) proposed AI integrity as a distinct governance paradigm — not a subset of ethics, safety, or alignment, but a complementary concept focused on whether the reasoning process from evidence to conclusion is transparent, consistent, and auditable. The paper introduces the notion of an «Authority Stack» (the values, epistemic standards, source preferences, and data-selection criteria that an AI system applies) and proposes PRISM-style metrics for measuring consistency across repeated scenarios and value hierarchies. A related piece (arXiv:2604.11216) explores what the authors call «integrity hallucination» — cases where a model produces responses that are internally inconsistent with its own stated values, distinct from factual hallucination. These are early-stage contributions, not established standards. But they point in an interesting direction: towards measurable, auditable integrity as a technical foundation on which the other disciplines can anchor.
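We do not reproduce the paper's PRISM metrics here, but the underlying idea — scoring agreement across repeated runs of the same scenario — can be sketched in a few lines. What follows is our own illustrative construction under that reading, not the paper's method; the example responses are hypothetical.

```python
from collections import Counter

def consistency_score(responses: list[str]) -> float:
    """Fraction of repeated runs that agree with the modal answer.

    1.0 means the system answered identically every time; lower values
    indicate the run-to-run inconsistency an integrity audit would probe.
    """
    if not responses:
        return 0.0
    modal_count = Counter(responses).most_common(1)[0][1]
    return modal_count / len(responses)

# Hypothetical responses from five runs of the same scenario against the same
# system; a real harness would also normalise semantically equivalent answers.
runs = ["approve", "approve", "deny", "approve", "approve"]
print(f"Consistency: {consistency_score(runs):.2f}")  # 0.80
```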
Why «integrity» may be the right conceptual anchor
The word «integrity» carries useful connotations across all the domains in question, and this is not accidental.
In operational reliability, integrity means the system does what it is supposed to do, consistently and verifiably. In data and information systems, data integrity is already a well-established concept with measurable properties. In cybersecurity, integrity is one of the three pillars of the CIA triad — the property that data and systems have not been altered in unauthorised ways. In ethics and governance, integrity refers to the coherence between stated values and actual behaviour — the same coherence that the Authority Stack concept attempts to make technically verifiable. In asset management, the concept of asset integrity management — managing the condition and reliability of critical physical assets — offers an established precedent for a discipline that integrates technical monitoring, risk management, and governance under a single function.
The word does the work across all these domains without privileging any single one of them. That is a meaningful property for a term meant to serve as an integrating concept.
What AI Integrity Management would actually contain
If we take the integration argument seriously, a coherent AI Integrity Management function in an enterprise would need to cover at least four domains (a minimal illustrative sketch follows the list):
- Operational integrity: the technical reliability layer — 99.9x availability, data consistency, model versioning, drift detection, incident response tailored to AI failures, and protection against catastrophic forgetting and hallucination.
- Security integrity: defence of the AI asset against adversarial manipulation, model poisoning, unauthorised extraction, and provenance loss; continuous posture management across AI workloads (AISPM).
- Compliance and governance integrity: alignment with regulatory frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001, DORA), policy enforcement, risk tiering, documentation, and audit readiness.
- Ethical integrity: consistency between the system’s stated values and its actual behaviour; fairness, explainability, bias management, human-in-the-loop controls, and the kind of reasoning-process verification that the Authority Stack concept describes.
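For illustration only, the sketch below shows one way a single function might hold these four domains in one machine-readable record per AI system, so that cross-domain interactions surface in one place. All field names and thresholds are our own assumptions, not a proposed standard; the thresholds echo the earlier sketches.

```python
from dataclasses import dataclass

@dataclass
class IntegrityRecord:
    """Illustrative per-system record spanning the four domains.

    Field names and thresholds are assumptions for this sketch,
    not a proposed standard.
    """
    system_id: str
    availability_pct: float      # operational: e.g. 99.95
    drift_psi: float             # operational: latest drift signal
    open_posture_findings: int   # security: unresolved AISPM-style findings
    risk_tier: str               # compliance: e.g. "high" under EU AI Act-style tiering
    audit_ready: bool            # compliance: documentation and evidence in order
    parity_gap: float            # ethical: see the fairness sketch above

    def flags(self) -> list[str]:
        """Cross-domain view: any failing domain flags the whole system."""
        out = []
        if self.drift_psi > 0.2:
            out.append("operational: material drift")
        if self.open_posture_findings > 0:
            out.append("security: open posture findings")
        if not self.audit_ready:
            out.append("compliance: not audit-ready")
        if self.parity_gap > 0.05:
            out.append("ethical: parity gap above tolerance")
        return out
```

The point of the sketch is not the fields themselves but the single reporting surface: the interactions described below become visible only when the four domains are read together.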
These four domains are not additive — they interact. A model that drifts operationally may produce ethically problematic outputs. A system that is compliant on paper but opaque in its reasoning is not, in any meaningful sense, ethically sound. A cybersecurity breach that corrupts training data affects all three other domains simultaneously. Managing them in silos means the connections between them go unmanaged.
The argument for integration is not primarily conceptual — it is managerial. When an AI system fails in a regulated environment, the question from the board or the regulator is not «which team was responsible?» It is «did you have adequate oversight of this system?» A fragmented answer across four different functions is a weak answer.
The obstacles are real
This is a reasonable point at which to be honest about what stands in the way.
First, the terminological landscape is crowded. Responsible AI, trustworthy AI, AI assurance, AI governance, AI safety — each of these labels already has institutional sponsors, frameworks, training programmes, and budget allocations behind it. Introducing a new term implies a reorganisation of authority. The compliance function, which has invested heavily in AI governance under the EU AI Act, is unlikely to surrender that territory voluntarily. Cybersecurity vendors who have built product categories around AISPM have commercial reasons to keep that category distinct. The risk is that existing functions «compete for ownership, making it politically easier to relabel existing structures than to create a new discipline with its own mandate.»
Second, there is a measurement problem. A discipline called AI Integrity Management needs integrity metrics — and those metrics do not yet exist in standardised, auditor-friendly form. The PRISM-style metrics proposed in arXiv:2604.11065 are promising but preliminary. Without accepted measurement instruments, organisations may hesitate to create departments named after a property they cannot yet routinely quantify.
Third, there is a talent and culture challenge. Bringing together compliance officers, MLOps engineers, ethicists, and operational resilience specialists into a coherent function requires not just an organisational chart but a shared working language, reconciled incentives, and sustained leadership attention. Research on cross-functional AI governance consistently finds that silo breakdown requires deliberate design — governance boards with explicit mandates to reconcile competing priorities, shared platforms that give all teams visibility into the same data, and conflict-resolution protocols for when regulatory, ethical, and operational requirements pull in different directions. None of that happens automatically.
A measured assessment of probability
Where does this leave the probability assessment? The Tegrity.ai working group would put it roughly as follows.
In heavily regulated, AI-intensive sectors — finance, healthcare, critical infrastructure, government — some form of integrated AI integrity oversight function looks close to inevitable within three to five years. The combination of regulatory pressure (EU AI Act, DORA, sector-specific AI rules), board-level scrutiny, and the operational stakes of mission-critical AI deployment creates strong incentives to consolidate accountability. The function may initially be labelled AI governance or model risk management, but its scope will expand to cover operational reliability, security, and ethics in a way that is effectively integrated.
In the broader enterprise market, the picture is more contingent. Integration is likely to happen where AI use cases are sufficiently material to justify a dedicated function, where existing GRC and risk platforms can be extended to cover AI-specific controls, and where leadership has both the appetite and the authority to force cross-functional collaboration.
Whether the resulting function will be called AI Integrity Management specifically is a separate and more uncertain question. The case for it rests on whether «integrity» proves to be the conceptual anchor that professionals across compliance, security, ethics, and operations can all work with — or whether it remains one term among several competing for the same institutional space.
Our institutional position
The Tegrity.ai working group at The Integral Management Society is not in a position to know how this will resolve. What we bring is a combined background in decision-support systems, operational intelligence, complex systems governance, and multi-jurisdictional practice that has, over decades, encountered each of these disciplines separately — and has seen, repeatedly, the cost of managing them without integration.
Our institutional bet is that AI Integrity Management is the right name for the integrated function: it works across operational, security, compliance, and ethical dimensions without privileging any one of them; it connects to established concepts in systems engineering and asset management; and it suggests something that can be measured, audited, and reported on — which is what boards and regulators ultimately need.
We hold that view with appropriate humility. It is a bet on a direction, not a prediction of certainty. The discipline may emerge under a different name. The integration may prove harder to sustain than the arguments for it suggest. But the question of whether to manage AI integrity as a coherent function or as a collection of siloed sub-disciplines is, in our view, one of the more consequential organisational design choices that enterprises will make in the next decade. It is worth thinking carefully about.
The Tegrity.ai working group is an initiative of The Integral Management Society, a Swiss non-profit association bringing together senior specialists from adaptive systems, complex systems, artificial intelligence, mission-critical operations, and governance. The group’s current focus is Regime Awareness for Operational Integrity in Adaptive Systems as a foundational capability for AI Integrity Management.