As I think about how best to scale GenAI across our organisation, I find myself navigating a familiar tension: the duality of risk and opportunity.
Traditionally, most organisations I have been involved in take a risk-averse approach, introducing new technologies with heavy-handed access controls, long-running policy programmes, and rigid governance models. While these efforts stem from a well-meaning place, they often create bottlenecks that stall momentum and stifle innovation.
So I find myself asking: what if we flipped the script?
What if, instead of deferring the introduction of GenAI out of fear, we engaged the people in our organisation who are positive and curious about it? What if we treated early, low-risk “missteps” not as failures, but as learning opportunities? Opportunities not just to educate people, but to identify and remediate exposures.
This mindset unlocks something powerful and often ignored: trust.
When we empower our people to use tools like Copilot, we’re not handing over the keys to the castle. We’re inviting them to become co-designers in how we roll AI out safely and sensibly. We’re encouraging them to closely interrogate outputs, check sources, and decide for themselves whether to share what Copilot returns, as well as to flag gaps to IT that may need to be filled.
This isn’t about ignoring risk. It’s about shifting from control to collaboration. Instead of spending months building theoretical fences, we let real-world use highlight where the true issues are and then fix them organically, together.
Scaling AI isn’t a technical challenge. It’s a cultural one. Could trust be the bridge between risk and opportunity?