This is a field notes paper: a structured conceptual contribution grounded in direct practitioner observation, prior to the formal development of a working paper. It is the fourth article in a series on AI Integrity Management as an emerging enterprise discipline, produced by the AI Integrity Management working group at The Integral Management Society — a Swiss non-profit association bringing together senior specialists from adaptive systems, complex systems, artificial intelligence, mission-critical operations, and governance. The operational and research arm of the working group is Tegrity.ai.
This article builds on three prior contributions in this series: AI Integrity: a Critical Frontier (Paper 1), which establishes the convergence thesis and the case for AI Integrity as a cross-cutting enterprise condition; The Case for AI Integrity Management as a Formal Enterprise Function (Paper 2), which examines the organisational and institutional implications of that convergence; and One Function or Many? The Case For and Against Unified AI Integrity Management (Paper 3), which interrogates the structural question of whether a unified function is feasible or desirable.
This series is addressed to enterprise architects, CIOs, CDOs, AI governance leads, risk and compliance officers, and transformation executives who are navigating the practical challenge of deploying AI systems that remain reliable, controllable and aligned under real operational conditions.
The previous articles in this series have laid progressively deeper foundations. AI Integrity: a Critical Frontier established why AI integrity is becoming to intelligent systems what cybersecurity became to software. The Case for AI Integrity Management as a Formal Enterprise Function mapped the landscape of existing disciplines — ethics, compliance, cybersecurity, operational reliability — and argued that the structural logic of convergence is real. One Function or Many? examined whether unification would produce genuine operational benefits or merely cosmetic reorganisation. This fourth article takes a step back and examines the question that precedes organisational design: in a landscape already crowded with competing frameworks, toolchains, and institutional sponsors, whose language wins? And how likely is it that the winning term turns out to be AI Integrity Management?
The question is not merely semantic. In enterprise contexts, the name of a function shapes its budget, its reporting line, its talent pipeline, and its regulatory interface. When cybersecurity won the naming contest over «information assurance» in the early 2000s, it also won the CISO role, the board committee, and the vendor ecosystem. The same dynamic is playing out now in the AI oversight space — and the outcome is genuinely uncertain.
The factions: who is competing, and for what
To understand the semantic contest, it helps to be precise about what each discipline is actually claiming — and where its framing runs out.
The legal and compliance vanguard: AI Governance and Model Risk
Driven by the EU AI Act, the NIST AI Risk Management Framework, and ISO/IEC 42001, corporate legal and compliance teams have moved quickly to expand their remit. Their terminology — AI Governance, AI Compliance, Model Risk Management — frames AI primarily as a regulatory and liability problem. Their tooling is documentation-heavy: risk registers, use-case inventories, impact assessments, conformity declarations. This faction has significant institutional momentum: the Big Four consulting firms have built AI governance practices around these terms, and regulatory bodies have invested in the associated vocabulary.
The limitation is well-known to anyone who has tried to run an AI governance programme: compliance frameworks tend to ossify into documentation exercises. They answer the question «does this system have a policy?» far more readily than the question «is this system behaving consistently with that policy right now?» A model that drifts, hallucinates, or degrades operationally may remain perfectly compliant on paper. The gap between policy and runtime behaviour is precisely where the compliance framing runs out.
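To make the gap concrete, consider a minimal sketch of the difference between the two questions. All names below are illustrative rather than a reference to any real compliance toolchain: the static check is what a documentation-driven programme can verify, and the runtime check is what it typically cannot.

```python
# Minimal sketch of the paper-vs-runtime gap: a static check confirms a
# policy document exists, while a runtime check samples live outputs and
# scores them against machine-readable policy assertions. All names are
# illustrative assumptions, not a real compliance toolchain.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class PolicyAssertion:
    name: str
    check: Callable[[str], bool]  # True if the output satisfies the rule

def static_compliance(policy_register: dict, use_case: str) -> bool:
    # The question compliance frameworks answer well:
    # "does this system have a policy?"
    return use_case in policy_register

def runtime_compliance(outputs: Iterable[str],
                       assertions: list[PolicyAssertion]) -> dict[str, float]:
    # The question they answer poorly: "is the system behaving
    # consistently with that policy right now?"
    outputs = list(outputs)
    return {
        a.name: sum(a.check(o) for o in outputs) / len(outputs)
        for a in assertions
    }

# A system can pass the static check while failing the runtime one.
assertions = [
    PolicyAssertion("no_unhedged_financial_advice",
                    lambda o: "guaranteed return" not in o.lower()),
]
print(static_compliance({"loan-triage": "policy-v3.pdf"}, "loan-triage"))  # True
print(runtime_compliance(["A guaranteed return of 12%...", "Rates vary."],
                         assertions))  # {'no_unhedged_financial_advice': 0.5}
```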
The defenders: AI Security and AISPM
Chief Information Security Officers and cybersecurity teams argue that AI is fundamentally a software and data system vulnerable to adversarial attack. Their terminology — AI Security Posture Management (AISPM), adversarial defence, model provenance — frames AI as another attack surface. Security vendors are investing heavily in this framing: DigiCert explicitly forecast that «model integrity is the new data integrity» for 2026, and AISPM platforms are being positioned as essential infrastructure for any serious AI deployment.
The limitation is structural. Traditional cybersecurity tools are designed to detect and block intentional malicious activity. They are not designed to measure or manage probabilistic failures: a model that produces inconsistent outputs under adversarial prompting, hallucinates a policy clause, or exhibits value drift over time is not a hacked endpoint. The security framing handles the adversarial dimension well and the probabilistic-reliability dimension poorly — a significant blind spot given that most AI failures are not malicious breaches but operational inconsistencies.
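A short sketch illustrates the kind of signal involved. The consistency check below assumes a hypothetical `ask_model` call and a deliberately naive equivalence test (exact match after normalisation); the point is that a low score is an integrity finding rather than a security incident, and no perimeter tool would raise it.

```python
# Illustrative sketch of a probabilistic-failure signal that a perimeter
# security tool would never flag: no breach occurs, yet the model gives
# divergent answers to semantically equivalent prompts. `ask_model` is a
# stand-in for any model call; the equivalence test is deliberately naive.
from collections import Counter

def self_consistency(ask_model, paraphrases: list[str]) -> float:
    """Fraction of paraphrased prompts yielding the modal answer."""
    answers = [ask_model(p).strip().lower() for p in paraphrases]
    _, modal_count = Counter(answers).most_common(1)[0]
    return modal_count / len(answers)

paraphrases = [
    "Is a customer refund allowed after 30 days?",
    "Can we refund a customer more than 30 days after purchase?",
    "Are refunds permitted beyond the 30-day window?",
]
# A score well below 1.0 is an integrity finding, not a security incident.
score = self_consistency(lambda p: "yes" if "window" in p else "no", paraphrases)
print(f"consistency: {score:.2f}")  # consistency: 0.67
```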
The moral compass: AI Ethics and Responsible AI
Ethics boards, ESG teams, and responsible AI specialists frame AI as a problem of values, fairness, and societal impact. More than 160 published guidelines and frameworks from governments, international bodies, and corporations have emerged from this tradition. The principles are broadly shared: fairness, transparency, accountability, non-maleficence. The persistent limitation — acknowledged even by ethics specialists themselves — is operationalisation. Translating «be fair» into a measurable engineering constraint or a runtime monitoring signal is genuinely difficult, and ethics teams frequently find themselves isolated from the engineers building the systems they are supposed to govern.
The engine room: Operational Reliability and MLOps
Site reliability engineers, MLOps practitioners, and AI platform teams frame AI as a performance and uptime problem. Their terminology — continuous validation, model drift detection, data pipeline integrity, operational resilience — is technically precise and directly measurable. This faction has built real tooling — model registries, drift monitoring pipelines, canary deployment frameworks — but it tends to operate in isolation from the governance, ethics, and security layers. Operational reliability teams are expert at measuring whether the system works but less equipped to evaluate whether the system is behaving in accordance with its stated values, or whether it is compliant under regulatory scrutiny.
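Drift monitoring is a good example of how concrete this faction's measurements already are. The sketch below computes the Population Stability Index (PSI), a standard drift statistic comparing a live score distribution against a training-time baseline; the thresholds in the comments are common industry rules of thumb, not anything specific to this series.

```python
# Hedged sketch of a directly measurable reliability signal: Population
# Stability Index (PSI) between a baseline and a live score distribution.
# Rules of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift
# significant enough to trigger review.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # fold outliers into end bins
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at validation time
live = rng.normal(0.4, 1.2, 10_000)      # scores in production
print(f"PSI = {psi(baseline, live):.3f}")  # > 0.25 would typically trigger review
```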
The structural problem these framings share
Each of these four factions frames the AI oversight problem in a way that captures something real and misses something important. The compliance framing captures regulatory exposure but misses runtime behaviour. The security framing captures adversarial risk but misses probabilistic failure. The ethics framing captures value alignment but misses operational tractability. The reliability framing captures technical performance but misses normative and regulatory dimensions.
The deeper problem is that real-world AI failures do not respect these boundaries. When an enterprise AI agent begins producing erratic financial recommendations, is that a compliance violation, a cybersecurity breach, an ethical failure, or an operational breakdown? In most cases, the honest answer is: it depends on the cause, and identifying the cause requires expertise from all four domains simultaneously. When no single function owns the full integrity picture, the spaces between the functions go unmanaged.
The proposition: why «integrity» may be the integrating term
The Tegrity.ai working group at The Integral Management Society is testing the hypothesis that AI Integrity Management can serve as an integrating umbrella for these four domains — not by replacing their specialist methodologies, but by providing a common accountability structure, a shared measurement vocabulary, and a unified reporting line to the board.
The word «integrity» is central to this hypothesis. In systems engineering, integrity means the state of being whole, undivided, and uncorrupted. In the CIA triad of cybersecurity, integrity is a named technical property. In corporate governance, integrity describes coherence between stated values and actual behaviour. In asset management, asset integrity management is an established discipline covering the reliability and condition of critical physical assets. The working group’s contention is that this word does the conceptual work across all four domains without privileging any one of them.
Under this proposed nomenclature, AI Integrity Management would integrate: operational integrity (system reliability, consistency, freedom from catastrophic forgetting and hallucination); security integrity (defence against adversarial manipulation, model poisoning, and provenance loss); governance and compliance integrity (auditable policy enforcement, alignment with the EU AI Act, NIST AI RMF, ISO/IEC 42001); ethical integrity (consistency between stated values and actual behaviour, fairness, explainability, and human-in-the-loop controls); and organisational integrity — the cultural and change-management dimension of human-AI coexistence, which is often the least discussed and perhaps the most consequential domain for long-term success.
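What such integration might look like as a data structure is easy to sketch, though the schema below is purely illustrative and not a published Tegrity.ai artefact: one record per AI system, one score per domain, aggregated on a weakest-link basis, on the reasoning that a system is only as whole as its weakest domain.

```python
# A hypothetical shape for the "common accountability structure" described
# above: one record per AI system, one score per integrity domain. Field
# names and the min() aggregation are illustrative assumptions, not a
# published Tegrity.ai schema.
from dataclasses import dataclass, asdict

@dataclass
class IntegrityRecord:
    system_id: str
    operational: float     # reliability, drift, hallucination signals
    security: float        # adversarial robustness, provenance
    compliance: float      # policy enforcement, EU AI Act / ISO/IEC 42001
    ethical: float         # value consistency, fairness, explainability
    organisational: float  # change-management and adoption health

    def integrity_score(self) -> float:
        domains = {k: v for k, v in asdict(self).items() if k != "system_id"}
        return min(domains.values())  # weakest link, not the average

record = IntegrityRecord("credit-agent-v2", operational=0.93, security=0.88,
                         compliance=0.97, ethical=0.71, organisational=0.80)
print(record.integrity_score())  # 0.71: the ethical domain gates the score
```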
An honest assessment of naming probability
The case for adoption
The strongest advantage of «AI Integrity Management» is its pragmatic neutrality. «AI Ethics» reads as academic to board members. «AISPM» is too narrow and too vendor-specific. «AI Governance» is already heavily colonised by the compliance framing. «Integrity», by contrast, resonates across corporate governance contexts — data integrity, financial integrity, business integrity are established concepts in every major company. It implies something that can be measured, audited, and reported on. If the term can be attached to concrete, auditor-friendly metrics — integrity hallucination rate, model-provenance latency, Authority-Stack consistency score — it has a credible path to becoming the vocabulary that C-suites reach for when they want a single, comprehensible indicator of AI system health.
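As an illustration of what «auditor-friendly» could mean in practice, the sketch below reports a hallucination rate together with a Wilson confidence interval, so that a board sees both the point estimate and the strength of evidence behind it. The grading step that flags individual outputs against source documents is assumed rather than specified here.

```python
# Sketch of an auditor-friendly metric: a hallucination rate reported with
# a 95% Wilson score interval. The upstream grading step (sampled outputs
# checked against source documents) is an assumption, not specified here.
import math

def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial rate."""
    p = failures / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 14 unsupported claims found in 400 audited responses:
low, high = wilson_interval(14, 400)
print(f"hallucination rate: {14/400:.1%} (95% CI {low:.1%}-{high:.1%})")
```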
The case against adoption
The primary structural risk is the institutional inertia behind «AI Governance». The EU AI Act, the NIST AI RMF, ISO/IEC 42001, and the Big Four consulting firms have all committed to governance vocabulary. Corporate inertia may simply absorb all technical and operational disciplines under «AI Governance», with «integrity» relegated to a sub-category rather than recognised as the overarching organising concept.
A second risk is vendor capture. Cybersecurity vendors are spending significant marketing budgets to establish «AI Security» and «AISPM» as the dominant operational framing. If they succeed, the space that AI Integrity Management would need to occupy may already be branded and budgeted before the concept gains traction. A third risk is cosmetic adoption: enterprises rename a governance committee «AI Integrity Committee» but continue operating in the same silos, discrediting the concept rather than advancing it.
What genuine adoption would require
The cybersecurity precedent is instructive. «Cybersecurity» won not simply because it was a good term, but because catalysing incidents made the cost of fragmented ownership undeniable, because a standards infrastructure (ISO 27001, PCI-DSS) created external mandates requiring a single accountable function, and because a vendor ecosystem built products explicitly positioned around the unified concept. The CISO role emerged from this combination — not from a single organisation’s design decision.
For AI Integrity Management, the analogous conditions would be: a high-profile AI failure in a regulated sector that cannot be cleanly attributed to any single existing function; regulatory guidance explicitly requiring an integrated AI oversight function rather than siloed programmes; and a tools ecosystem positioning itself as «AI Integrity Management infrastructure» rather than as a compliance, security, or ethics tool. None of these conditions is implausible. None is certain.
The working group’s position
The Tegrity.ai working group at The Integral Management Society does not claim to know which term will prevail. What the group does claim is that the structural argument for an integrated AI oversight function — whatever it is eventually called — is sound, and that «AI Integrity Management» has stronger conceptual properties as an integrating term than the current alternatives. It is neutral between domains, it carries existing resonance in governance and engineering vocabulary, and it points towards a measurable, auditable property rather than a set of principles or a vendor category.
Whether the broader industry adopts this exact terminology or defaults to an expanded «Enterprise AI Governance» label, the structural hypothesis holds: the silos must eventually collapse, because the failures that result from managing AI ethics, compliance, security, and operational reliability in isolation will become too costly to ignore. The name matters less than the function. The function is coming.
The Tegrity.ai working group is an initiative of The Integral Management Society, a Swiss non-profit association bringing together senior specialists from adaptive systems, complex systems, artificial intelligence, mission-critical operations, and governance. This article is part of a series examining AI Integrity Management as an emerging enterprise discipline.