A recent discussion with the Queensland University of Technology (QUT) Data Governance Board reinforced a simple point. The discussion was thoughtful and well informed, and a strong example of the kind of governance engagement institutions need as AI adoption expands.

AI governance is not primarily a technology question.

It is an accountability question.

When an AI-related incident hits the news, the story is usually framed as an AI failure. More often, the failure is familiar: an AI system accessing production data without restriction, a service account with excessive permissions, no separation between development and production, or no audit trail for decisions.

That is not an AI failure.

It is a control failure.

The Wrong Lesson From AI Incidents

Recent reporting, such as Codewall’s write-up on McKinsey’s AI platform, makes the point clearly. The problem is not that AI is impossible to secure. The problem is that poor security hygiene becomes more dangerous when attached to a system that is fast, connected, and influential.

AI amplifies existing weaknesses.

It does not invent them.

If an organisation runs weak access controls, blurred approval boundaries, or rushed deployments, adding AI will not create a new class of governance problem. It will expose the weakness faster and at greater scale.

The controls required are not exotic:

  • Clear ownership of systems and data
  • Least-privilege access
  • Strong authentication and secrets management
  • Separation between development, testing, and production
  • Logging, traceability, and reviewable audit evidence
  • Formal change control and risk acceptance

When those basics are weak, AI increases the blast radius.
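Two of the controls above, least-privilege access and reviewable audit evidence, can be sketched in a few lines. This is a minimal illustration, not any specific platform's implementation; the identities, resources, and actions are invented for the example, and a real system would back the policy table and audit log with a directory service and an append-only store.

```python
# Sketch: default-deny access control with an audit trail.
# All identities, resources, and actions here are illustrative assumptions.
import json
from datetime import datetime, timezone

# Explicit allow-list: anything not listed is denied by default.
POLICY = {
    ("report-agent", "customer_db"): {"read"},
    ("report-agent", "report_store"): {"read", "write"},
}

AUDIT_LOG = []  # in practice: an append-only, independently reviewable store


def authorize(identity: str, resource: str, action: str) -> bool:
    """Check the allow-list and record the attempt, allowed or not."""
    allowed = action in POLICY.get((identity, resource), set())
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "identity": identity,
        "resource": resource,
        "action": action,
        "allowed": allowed,
    }))
    return allowed


# The agent can read customer data but cannot write to it,
# and every attempt leaves evidence.
assert authorize("report-agent", "customer_db", "read") is True
assert authorize("report-agent", "customer_db", "write") is False
assert len(AUDIT_LOG) == 2
```

The point of the sketch is the shape, not the code: access is denied unless explicitly granted, and denials are logged with the same rigour as grants, so audit evidence exists for decisions that never happened as well as for those that did.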

Explainability Without Accountability Is Not Enough

One of the strongest themes in the QUT discussion was explainability. That focus was well placed. But explainability is not useful without accountability.

Knowing how a decision was made is not enough.

You also need to know who owns it.

If an AI-enabled system influences a decision, an organisation should be able to answer immediately:

  • What system made the decision?
  • What data did it use?
  • Who approved it?
  • Who owns the outcome?

Those are not technical questions.

They are governance questions.

If an organisation cannot answer them, the issue is not that AI is opaque. The issue is that governance has not kept pace with deployment.
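The four questions above map naturally onto a decision record that is captured at the moment a decision is made, not reconstructed afterwards. The schema below is a hypothetical sketch, not a standard; field names and example values are assumptions for illustration.

```python
# Sketch: a decision record answering the four governance questions.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class AIDecisionRecord:
    system: str                # What system made the decision?
    data_sources: List[str]    # What data did it use?
    approved_by: str           # Who approved it?
    outcome_owner: str         # Who owns the outcome?


record = AIDecisionRecord(
    system="credit-scoring-model-v3",
    data_sources=["applications_db", "bureau_feed"],
    approved_by="model-risk-committee",
    outcome_owner="head-of-lending",
)

assert record.outcome_owner == "head-of-lending"
```

Note that `outcome_owner` names a role, not a model. If that field cannot be filled in for a given system, the gap is a governance finding in itself.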

Governance Has To Be Enforced

AI governance is not created by publishing a policy.

It has to be enforced through identity, security, architecture, and oversight.

That means knowing which identities, agents, applications, and services can act. It means defining what they can access, what they can trigger, and what evidence remains afterward. It means approval paths, review points, and meaningful human oversight where risk justifies it.
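One way to picture "meaningful human oversight where risk justifies it" is a risk-tiered gate: low-risk actions proceed automatically, high-risk actions are held for human review. The sketch below is an assumption-laden illustration; the action names and the binary risk tiering are invented for the example, and a real deployment would draw its risk classification from a formal assessment.

```python
# Sketch: a risk-tiered approval gate for agent actions.
# Action names and the risk classification are illustrative assumptions.

HIGH_RISK_ACTIONS = {"delete_records", "send_external_email", "change_permissions"}


def dispatch(action: str, approved_by_human: bool = False) -> str:
    """Execute low-risk actions; hold high-risk actions pending human approval."""
    if action in HIGH_RISK_ACTIONS and not approved_by_human:
        return "held_for_review"
    return "executed"


assert dispatch("summarise_document") == "executed"
assert dispatch("delete_records") == "held_for_review"
assert dispatch("delete_records", approved_by_human=True) == "executed"
```

The design choice that matters is the default: high-risk actions are held unless approval is present, rather than executed unless someone objects.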

Governance without controls is theatre.

Security without governance is reactive.

The goal is not to slow AI adoption. The goal is to make it defensible.

The Board-Level Test

The question for leadership teams is no longer whether AI is in use. It already is, through formal programs, vendor platforms, automation tools, and shadow adoption.

The real question is whether the organisation can prove that AI-enabled systems are operating inside clear, enforceable, reviewable boundaries.

If the answer is unclear, the priority is not another strategy session. It is governance uplift, identity discipline, and security hygiene.

That was one of the practical strengths of the QUT discussion. It also reflects the broader lesson from every so-called AI breach that turns out to be a familiar control weakness in new packaging.

AI does not create governance problems.

It exposes the ones you already have.
