Newsletter

I write about Responsible AI, governance frameworks, sustainability, and the systems shaping our collective future.
This is a space for practitioners, policymakers, and researchers working through complex decisions and looking for grounded insight they can use.

Expect original analysis, practical frameworks, and occasional book reviews that help make sense of what matters now.

📬 Subscribe via Substack

Receive new articles, frameworks, and book reviews directly.

AI Agent Governance

Governance is not optional in AI; it's structural.
Most AI agents are built to act. Few are built to be accountable. This article breaks down why governance must be embedded from the ground up, not retrofitted, and how to design agentic systems that don't drift over time. If you want AI that holds up under scrutiny, governance isn't a layer; it's the spine.
👉 Read the full article

MCP AI Governance

Control in AI means nothing without structure.
We talk a lot about managing AI, but without embedded design patterns like MCP (Model Context Protocol), control becomes an illusion. This article shows how MCP enables agentic AI systems to retain memory, follow policy, and remain grounded over time, not just optimised for the next token.
👉 Read the full article

Biocomputing

What if our future computers don’t just simulate the brain, but are the brain?
This article explores how biological neurons integrated with silicon chips could reshape AI. Beyond metaphor, we're entering an era where computing is no longer just digital; it's alive.
👉 Read the full article

NIST AI RMF vs ISO/IEC 42001

From Risk Principles to Auditable Practice
Most teams use frameworks. Few know how to align them.
This article breaks down where the NIST AI Risk Management Framework and ISO/IEC 42001 overlap, diverge, and complement each other. It’s not theory, it’s a crosswalk for those who need to move from governance goals to operational evidence.
Includes a downloadable table mapping every NIST function to ISO clauses.
👉 Read the full article

5 Uncomfortable Truths About Responsible AI in 2025

Public trust in AI is collapsing. In Australia, 71% of consumers now say they distrust AI systems, triple the rate of just two years ago. Behind the headlines, new scandals and case studies reveal why most “Responsible AI” efforts remain performative. This article explores five uncomfortable truths, from billion-euro fines to transparency turning into competitive advantage, and outlines the governance models that are setting the new gold standard.
👉 Read the full article
