Governing AI in Motion

Fixed Intent, Adaptive Execution, and Human‑Centred Transformation

by

Bolutito Ayobami Iyanda
Tolulope Adegbemile

Artificial intelligence did not knock politely before entering our lives. It slipped in quietly, rearranged the furniture, learned our habits, and by the time we noticed, it had already made itself comfortable. The law, meanwhile, is still standing at the doorway, checking its notes. AI governance is now being asked to govern motion, not a static machine.

This is not another alarmist piece about how AI is “here to stay” or how regulation is “struggling to keep up.” We already know that. What is more interesting, and more urgent, is what happens when speed becomes a governance problem, and when organisations must choose between rigid control and adaptive responsibility.

Executive map (read this, then decide if you want the rest):

  • What’s broken: static rules are being applied to systems that change through use, feedback, and iteration.
  • What we’re arguing: governance must move from one-time rulemaking to an operating capability.
  • What the solution requires: fixed intent (purpose, risk appetite, non‑negotiables) plus adaptive execution (a rhythm for review and change).
  • What it looks like in practice: decision rights, a cadence, triggers for escalation, and versioned governance updates.
  • What leaders track: faster, clearer decisions; fewer post‑deployment surprises; and higher trust from users and regulators.

There is a quiet assumption embedded in many conversations about AI governance: that if we write the right rules, we will regain control. With enough foresight, structure, and regulation, the uncertainty surrounding AI can be neatly contained.

But governance was never meant to eliminate uncertainty. It exists to help societies and organisations live responsibly within it.

The trap you recognise: rigidity creates risk; flexibility creates drift

Most organisations do not fail because they ignore governance. They fail because their governance cannot move at the speed of their product decisions.

You can usually tell you are in trouble by the same symptoms:

  • Decisions stall because no one owns escalation; every edge case becomes a committee meeting.
  • Reviews happen after deployment; the only tool left is damage control.
  • Compliance writes rules that engineers cannot implement; workarounds become the real policy.
  • “Ethics theatre” fills calendars; paperwork grows; systems don’t change.
  • Incidents repeat because nobody updates the playbook; they only update the slide deck.

This is why the popular binary of control versus innovation is unhelpful. The real choice is between governing motion deliberately and governing it accidentally.

The model: fixed intent plus adaptive execution

If governance is going to move without losing legitimacy, you need a clean separation between what stays fixed and what can adapt.

Fixed intent is the anchor; adaptive execution is the mechanism. One without the other produces either rigidity or drift.

Fixed Intent (what must not move):

  • Purpose: what the system exists to do, and for whom.
  • Risk appetite: the risks you accept; the risks you do not.
  • Non‑negotiables: privacy boundaries, fairness commitments, safety limits, auditability requirements.
  • Accountability: who owns outcomes, who can approve exceptions, who answers when harm happens.

Adaptive Execution (what must move, on purpose):

  • Thresholds and controls: what triggers review, pause, rollback, or human override.
  • Workflows: how teams design, test, deploy, monitor, and respond.
  • Oversight design: which forums exist, who sits in them, and what evidence they require.
  • Escalation rules: what goes where, how fast, and with what decision authority.
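One way to make the split concrete is as a data contract. The sketch below is illustrative, not a standard; every name and field is an assumption layered on the model above. Fixed intent is modelled as immutable, adaptive execution as mutable and versioned:

```python
from dataclasses import dataclass, field

# Fixed intent: frozen=True makes the record immutable after creation,
# mirroring the rule that purpose and non-negotiables do not move.
@dataclass(frozen=True)
class FixedIntent:
    purpose: str                      # what the system exists to do, and for whom
    risk_appetite: tuple[str, ...]    # risks explicitly accepted (all others rejected)
    non_negotiables: tuple[str, ...]  # privacy, fairness, safety, auditability limits
    outcome_owner: str                # who answers when harm happens

# Adaptive execution: a plain, mutable dataclass, because thresholds,
# workflows, and escalation rules are expected to change on purpose.
@dataclass
class AdaptiveExecution:
    review_triggers: dict[str, float] = field(default_factory=dict)  # e.g. {"drift": 0.05}
    workflows: list[str] = field(default_factory=list)               # design, test, deploy, monitor
    escalation_rules: dict[str, str] = field(default_factory=dict)   # event -> decision authority
    version: str = "v1"  # governance itself is versioned (see the operating kit below)
```

The frozen/mutable split is the design choice: changing a FixedIntent field raises an error, which is exactly the friction you want; changing AdaptiveExecution is routine, provided the version moves with it.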

A practical way to think about grey areas

Ambiguity in AI governance is not one thing. It comes in recognisable patterns, and each pattern needs a different response. Here is a lightweight map you can use:

Regulatory gaps: the law does not clearly cover a use case.

Response: provisional classification, conservative guardrails, and clear review triggers.

Interpretive uncertainty: rules exist, but the words are broad.

Response: operationalise terms into testable standards, document your rationale, then iterate based on evidence.

Jurisdictional confusion: it is unclear who owns the decision.

Response: decision-rights mapping, escalation paths, and time-boxed temporary ownership to avoid vacuums.

Value conflicts: legitimate goals collide (accuracy vs fairness; transparency vs security).

Response: make the trade‑off explicit, set a provisional balance, and revisit on a schedule.

This turns “grey areas” from paralysis into disciplined, accountable action.
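If you want the map to be executable rather than aspirational, it can live in code as a routing table. A minimal sketch, which simply restates the four patterns and their default responses above (the enum names and checklist strings are illustrative):

```python
from enum import Enum, auto

class Ambiguity(Enum):
    REGULATORY_GAP = auto()   # the law does not clearly cover the use case
    INTERPRETIVE = auto()     # rules exist, but the words are broad
    JURISDICTIONAL = auto()   # unclear who owns the decision
    VALUE_CONFLICT = auto()   # legitimate goals collide

# Each pattern maps to a disciplined default response rather than to paralysis.
RESPONSES: dict[Ambiguity, list[str]] = {
    Ambiguity.REGULATORY_GAP: [
        "provisional classification", "conservative guardrails", "clear review triggers"],
    Ambiguity.INTERPRETIVE: [
        "operationalise terms into testable standards", "document rationale", "iterate on evidence"],
    Ambiguity.JURISDICTIONAL: [
        "map decision rights", "define escalation paths", "time-box temporary ownership"],
    Ambiguity.VALUE_CONFLICT: [
        "make the trade-off explicit", "set a provisional balance", "schedule a revisit"],
}

def route(pattern: Ambiguity) -> list[str]:
    """Return the default action checklist for a recognised ambiguity pattern."""
    return RESPONSES[pattern]
```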

The mechanism: governance as an operating rhythm

Adaptive execution is simple to define: it is the discipline of changing how you govern, based on evidence, without changing what you stand for.

Treat governance like product management: versioning, release notes, triggers, and measurable outcomes. Here is the operating kit:

  1. Decision rights, written down: who can approve deployment, who can approve exceptions, who can stop the line.
  2. A cadence, not a crisis response: fortnightly reviews for active models; monthly control reviews; quarterly intent checks.
  3. Triggers that force attention: drift beyond a set band, complaint spikes, new data sources, new user groups, or regulatory signals.
  4. Versioning and release notes: when thresholds or workflows change, publish a short change log (what changed, why, who approved, what you will measure next).
  5. Evidence that matches the risk: higher stakes require stronger proof, deeper monitoring, and a clear human override path.

One-page accountability: for each system, capture owner, purpose, affected users, risks, controls, metrics, and escalation contacts.
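Here is a sketch of how the kit and the one-pager can sit side by side in a repository. Everything below is invented for illustration: the record fields restate the one-pager, needs_review restates item 3, and the change-log entry restates item 4.

```python
from dataclasses import dataclass

# One-page accountability record: one per system, kept current.
@dataclass
class SystemRecord:
    owner: str
    purpose: str
    affected_users: list[str]
    risks: list[str]
    controls: list[str]
    metrics: list[str]
    escalation_contacts: list[str]

def needs_review(drift: float, drift_band: float,
                 complaints: int, complaint_baseline: int) -> bool:
    """Item 3: force attention when drift leaves its band or complaints spike.
    The 2x spike rule is an illustrative placeholder; set it per risk tier."""
    return drift > drift_band or complaints > 2 * complaint_baseline

# Item 4: a governance release note, published whenever a threshold or workflow changes.
change_log_entry = {
    "version": "v1.1",
    "what_changed": "drift band tightened from 0.08 to 0.05",
    "why": "two near-miss incidents last quarter",
    "approved_by": "model risk owner",  # the decision right written down in item 1
    "measure_next": "false-positive rate of the tightened trigger",
}
```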

A proof point: what adaptation looks like when done well

Consider a bank scaling a conversational AI assistant across multiple markets. The first hurdle is rarely the model itself; it is governance at scale. Language, customer expectations, edge cases, and operational risk all rise with adoption.

Nordea’s chatbot, Nova, illustrates the direction of travel. The bank has described both the challenge of scaling and a shift toward using AI to make banking safer, easier, and more personal; it has reported that 76% of chat sessions are handled by chatbots and that AI has contributed to a reduction in fraud losses. The headline numbers are not the point; the pattern is. The organisation treats the AI capability as something that evolves through monitoring, iteration, and change management, while keeping human support for complex needs.

What makes this a governance story is not that AI exists in the channel. It is that the organisation builds a repeatable way to review, adjust, and scale without losing accountability.

What success looks like

Leaders often ask for a dashboard. That is sensible, but only if you track outcomes that reveal learning and control, not theatre.

Track a small set of signals that force clarity:

  1. Decision cycle time: how long it takes to approve changes or pause use, by risk tier.
  2. Pre‑deployment completion: the share of releases meeting agreed testing and documentation before launch.
  3. Post‑deployment surprises: incident rates, complaint themes, and repeat failures.
  4. Override health: whether humans can meaningfully intervene, and how often they do.
  5. Trust indicators: regulator questions answered on first pass, fewer escalations driven by confusion, higher user confidence.

This is the upgrade: governance stops being a blocker and becomes a capability that lets good systems move faster and bad systems stop sooner.
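Most of these signals reduce to simple aggregations once decisions and incidents are logged. A minimal sketch for signal 1, decision cycle time by risk tier, with invented numbers:

```python
from collections import defaultdict
from statistics import median

# Each record: (risk_tier, days from change request to decision). Values are invented.
decisions = [("high", 12), ("high", 9), ("medium", 5),
             ("medium", 4), ("low", 2), ("low", 1)]

def cycle_time_by_tier(records: list[tuple[str, int]]) -> dict[str, float]:
    """Median days to approve a change or pause use, per risk tier (signal 1)."""
    by_tier: dict[str, list[int]] = defaultdict(list)
    for tier, days in records:
        by_tier[tier].append(days)
    return {tier: median(days) for tier, days in by_tier.items()}

print(cycle_time_by_tier(decisions))  # {'high': 10.5, 'medium': 4.5, 'low': 1.5}
```

If high-tier decisions take weeks while low-tier ones take days, that is not necessarily a problem; it is the trend within each tier that reveals whether governance is learning.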

From control to capability

The future of AI governance does not lie in rigid control or unchecked flexibility. It lies in building the capability to adapt responsibly while remaining anchored in fixed intent.

Fixed intent plus adaptive execution offers a practical path: the anchor stays steady; the mechanism learns in motion. That combination is how organisations govern technology in ways that serve people rather than displace them.

As AI continues to reshape institutions and societies, the question is no longer whether we can govern it. It is whether we can govern it with clarity, adaptability, and humanity at the core.
