UK AI Leader Carl Dalby: How to Harness AI Responsibly 

Aug 28, 2025 - 11:00

Carl Dalby is a leading voice in the field of generative AI, currently serving as Head of AI/Digital for a UK Government agency. 

With over three decades of experience across sectors ranging from defence to digital start-ups, Carl is also a respected Guest Lecturer at the University of Aberdeen and the visionary founder of Istimor. 

As a seasoned speaker for The Cyber Security Speakers Agency, he brings a wealth of practical insight and forward-thinking strategy to every stage he steps on.

In this interview, Carl shares his expert advice on successfully implementing AI in business, explores the ethical considerations that must guide its use, and offers strategic guidance on how organisations can stay competitive in an increasingly digital world.


Q1: As AI adoption increases across businesses, what advice would you give for implementing it successfully?

Carl Dalby: “Now, the concept of AI is still being defined and, in my opinion, will continue to be defined for many decades to come. At this stage, AI needs to be treated more as a discovery: what is it, and what could it be capable of within an organisation?

As with any new discovery, start small. Choose a specific use case, whether it’s a function, a process, or somewhere with a good amount of data, and focus on that. That’s the fundamental and critical piece when beginning the journey.

Also, accept that AI has been running across many organisations, whether we like it or not, for decades. It isn’t new. It’s been around a long time, whether defined as machine learning or other deep learning capabilities around data. So, AI isn’t necessarily the new kid on the block, but it has captured the imagination of organisations, employees, groups, and, consequently, leadership. A useful tip is to reaffirm understanding of what AI can and can’t do across leadership teams and to get buy-in. That’s an important step.

From my perspective, with a background in risk management, it’s vital to evaluate the risks of using AI carefully. Look at issues such as security, data bias, errors, and legal compliance, and work out how to manage these within your organisation. Integration of AI with tools and processes is also a major challenge, but one that’s worth addressing. Start small, focus on specific use cases, and begin with a proof of concept or a pilot – however you choose to define it.

In my experience, starting with a proof of concept based on a real business challenge pays dividends. If that challenge is common across many functions, it becomes a valuable tool, even as a kind of sales pitch, to spread the AI message.

Talent is another challenge. Start by identifying people internally who have a genuine passion and interest – set skills aside for a moment. Are they evangelising AI techniques today? Are they deeply interested in the journey? Are they willing to challenge some of the misconceptions about AI in the public domain? That’s important for building the right team.

Lastly, it’s a never-ending journey. As with all good software and technology, it needs continual iteration. It feels like the major tech players are releasing phenomenal leaps in AI techniques daily.

So, a final tip: stay on top of trends. Not as a single person, but by involving multiple people in sharing their experiences and insights into where AI might go next in the organisation. That’s a powerful step.

In summary: start small, get buy-in, set realistic expectations about what AI can and can’t do – and keep iterating.”


Q2: What ethical concerns should businesses be aware of when using AI?

Carl Dalby: “Big question. My personal opinion, and it is a personal opinion, is that this is a fascinating challenge. I was talking to someone recently about how we’re deeply engaged in ethical debates around AI, yet we never paused to consider the ethical challenges of social media. It’s an interesting place to be.

Many organisations, including mine, are creating AI ethics committees and interest groups. That’s critical for governance. But it’s worth noting that we never applied the same scrutiny to social media, including platforms like LinkedIn, despite their massive impact across businesses, democracies, and society at large.

To answer the question directly, here are some key areas to consider:

  • Bias: Most people are aware of the potential for bias in data. AI systems can amplify this, bringing it to the forefront of people’s minds. It’s a challenge that must be addressed.
  • Explainability: I strongly believe that AI must be explainable in simple terms to all end users. If you can’t explain what the algorithm is trying to do, that’s a problem.
  • Accountability: While I won’t bore anyone with RACI charts, they do work. When an AI system generates an output, someone needs to be accountable. This is especially important with large language models, which often make mistakes. Who takes responsibility for those outputs?
  • Data privacy: This is always a priority. The data used to train AI models needs to be examined from a privacy standpoint. Ask those questions early on.
  • Security: AI systems remain vulnerable to cyber-attacks and will be for a long time. Robust security measures, and ethical use of them, are essential.
  • Legal compliance: UK AI law is currently being written. I’ve been part of the government AI group for nearly seven years, and we’re still forming advisory positions. The EU AI Act provides a good template. Recent developments around AI standards and safety institutes in the UK and US are also positive steps forward.
  • Unrealistic expectations: I spend a lot of time educating colleagues on what AI cannot do. Trust in AI is a real dilemma. People tend to believe what they see, especially when it looks and feels believable. But AI does not always produce factual or correct outputs.

Managing expectations and avoiding over-trust in AI systems is a major ethical hurdle. It’s something we must tackle head-on.”
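To make Carl’s point on bias concrete: one common first check is to compare a model’s rate of positive decisions across groups, sometimes called a demographic-parity check. The short Python sketch below is purely illustrative – the loan-approval framing, the toy data, and the positive_rate_by_group helper are assumptions made for this example, not anything drawn from Carl’s organisation or a specific toolkit.

  # Illustrative sketch only: compare positive-prediction rates across groups.
  # The data and the loan-approval framing are assumptions for this example.
  from collections import defaultdict

  def positive_rate_by_group(predictions, groups):
      """Return the share of positive predictions for each group."""
      counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
      for pred, group in zip(predictions, groups):
          counts[group][0] += int(pred == 1)
          counts[group][1] += 1
      return {g: pos / total for g, (pos, total) in counts.items()}

  # Hypothetical loan-approval outputs (1 = approve) and applicant groups.
  preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
  groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

  rates = positive_rate_by_group(preds, groups)
  gap = max(rates.values()) - min(rates.values())
  print(rates)                      # {'A': 0.6, 'B': 0.4}
  print(f"parity gap: {gap:.0%}")   # 20%

A gap like this doesn’t prove bias on its own, but it gives the ethics committees and interest groups Carl describes a simple, explainable number to investigate – which also speaks directly to his point on explainability.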


Q3: How can organisations stay ahead in the digital era?

Carl Dalby: “Firstly, more than ever, organisations need to encourage, support, and incentivise a culture of innovation and experimentation. Create an environment where business users can build proof-of-concepts. That’s not just about having the platforms – it’s about providing easy access to a sandbox environment where they can try out ideas. Encourage innovation and the freedom to experiment. It’s more important now than it’s ever been.

Continual training is also vital. Provide access to new digital skills for employees at all levels – from graduates to veterans. Do it in a way that’s sensitive and non-judgemental, so people feel comfortable learning without fear of exposing a lack of knowledge.

Then, focus on tech trends. Double down on identifying which trends are relevant to your organisation’s mission and functions. There are thousands of brilliant companies creating fantastic solutions, and others producing nonsense. Awareness of both is important. A trend today may not be one tomorrow, so remain realistic.

A good example is the hype around the metaverse during the pandemic. Many companies invested billions, but now the conversation has shifted. ‘AI’ seems to have replaced ‘metaverse’ in the spotlight, but AI isn’t the metaverse, nor is it NFTs or cryptocurrency. AI is different. It’s not one thing, it’s trillions of things, and it’s going to be embedded in organisations for decades to come.

Running proof-of-concepts is the only way to stay ahead. Try, test, fail fast – at low cost. Get into low-code, no-code, or even now-code thinking. It’s all here now. 

Build partnerships. Work with large and small tech partners and build an ecosystem of collaboration. Engage with academia; they’re far ahead on many AI and machine learning techniques. Partner with them. It’s vital.

Finally, share the story – the good, bad, and ugly. Create communities within the business. These should include people who are curious, critical, even fearful. Avoid rooms full of ‘yes people.’ The best insights come from diverse viewpoints and honest conversations.”

This exclusive interview with Carl Dalby was conducted by Mark Matthews, Senior Keynote Speaker and Entertainment Manager.
