Cutting Through the AI Hype: Henry Ajder on Strategy, Ethics & the Risks CIOs Can’t Ignore


Henry Ajder is a globally recognised expert on generative AI, deepfakes, and digital ethics. He has advised major organisations, including Meta, the BBC, and the UK Government, on the safe and ethical use of emerging technologies.
Formerly Head of Policy at Deeptrace, Henry’s insights have been featured on BBC News, CNN, and The New York Times. He is one of the UK’s leading artificial intelligence speakers, known for making complex AI issues accessible and actionable.
In this exclusive interview with The Champions Speakers Agency, Henry explores the future of generative AI—from ethics and regulation to cybersecurity and authenticity in the digital age.
Q: With generative AI adoption accelerating, what’s the most important advice you’d offer to organisations looking to implement it effectively?
Henry Ajder: “Many organisations are trying to figure out how to embed and utilise generative AI in their businesses. They’ve heard of the power of ChatGPT, and they’ve seen the amount of people who are using it.
“I really feel that we need to resist that knee-jerk reaction to use it for the sake of using it, and instead build a comprehensive generative AI strategy across your business—to make sure your data is being used effectively, that your customers are getting real value from potential applications, and that your employees are using it in a way which genuinely benefits them and makes them more productive, rather than as a novelty that is a solution looking for a problem.
“So my top tip would be to interrogate your organisational structures and resist that knee-jerk urge to just put something in place for the sake of putting it in place.”
Q: As the hype around generative AI grows, what steps should businesses take to ensure their use of AI remains ethical and responsible?
Henry Ajder: “At the moment, we’re seeing this rush of excitement and hype around generative AI and synthetic content. This has really led, in some respects, to a bit of a Wild West dynamic about how people are using these tools. Are they using them ethically? Are they being used in a way where consumers are being protected, and their customers are being protected?
“And so for me, as someone who’s been working on responsible AI—and responsible generative AI in particular—for about six years now, I’m used to working with organisations, helping them understand how they can do this in the right way and how they can make sure that both reputationally and also legally they’re protected for the long term.
“So my top tip, again, when it comes to incorporating responsible AI in your business, would be to look at the end-to-end pipeline of how that’s going to impact your data, your employees, your customers and indeed your compliance with certain legal frameworks.
“This is such a fast-moving space, it can be hard to understand what the legislative landscape might look like in two months’ time or what consumer attitudes might look like in two months’ time. But if we look at these key, core responsible AI practices and messages, we can go a long way to future-proofing AI in your business.”
Q: Across industries, generative AI is enabling new use cases—what applications have you found most innovative or impactful so far?
Henry Ajder: “One of the most exciting use cases I’ve seen is the ability for generative AI to be used to really open up a world of content for audiences in a way that feels personal and specific to them. So often, people who don’t speak English—or maybe Spanish or Mandarin—have to deal with dubbing on content that they watch: clumsy lip-syncing, or not even lip-syncing at all, just having their language dubbed over the top.
“Whereas now, we’re seeing tools being developed which can automatically generate the right lip movements for a person speaking in that language. That means that content no longer feels like it’s kind of being bodged together just for that person—but actually, this is really sort of for them. It makes it feel more authentic.
“And indeed, this is an area that I find really fascinating with generative AI. It’s something that a lot of businesses and organisations, I think, are trying to understand: what does authenticity mean in the age of generative media and AI-generated content?
“And indeed, for me, authenticity no longer is opposed to the synthetic. In fact, we’re seeing experiences like these—like being able to create much more personalised content for people—helping people engage with that content in a way that they weren’t able to before.
“Or people interacting with, for example, virtual influencers and building these kind of slightly strange but interesting relationships with these characters. Or indeed, living in kind of virtual worlds, as we’re seeing with certain extended reality and virtual reality applications—and generative AI will be the engine for a lot of those applications moving forward.
“So for me, there are so many exciting use cases in art and entertainment, in accessibility, and in communications, advertising and marketing as well. But I think the ones that really interest me—and I think are really exciting—are the ones where we can make experiences that we have every day better for the individual, more authentic and more relatable.”
Q: How is generative AI reshaping the cybersecurity landscape, and what emerging risks should organisations be preparing for?
Henry Ajder: “Generative AI is a term that really exploded onto the scene about a year or so ago. But before then, many of my colleagues and I were referring to very similar technologies—ones that clone voices or generate very realistic images of non-existent people—as deepfakes.
“And indeed, deepfakes, for a long time now, have been causing quite significant challenges for businesses, for governments, and for other organisations. One of the key points of concern is cybersecurity.
“We’ve seen deepfakes open up a new way of not just hacking humans, but also machines—when it comes to, for example, using synthetic voice cloning to clone the voices of CEOs to get financial controllers to send money to a bank account, or using voice cloning to clone a loved one’s voice to make it sound like they’re in danger and need help or financial assistance.
“These technologies can both fool the individual—they can hack the human mind, so to speak—into believing that it’s that person, but they can also compromise biometric authentication. We’ve seen reports, for example, of people being able to access bank accounts using a synthetically cloned voice of another person.
“So when it comes to cybersecurity, we’re so used to trusting audio-visual media in particular as something that can’t be faked or can’t be manipulated. But we need to radically reassess the landscape of cyber threats with the advance of generative AI and deepfakes to make sure that we understand what it is we’re up against now.
“And this is not theoretical. This is something that’s happening right now as we speak. Security is a huge new frontier—and one of the most challenging ones for biometrics and indeed for cybersecurity and business security procedures, which now might be outdated based on these new developments.”
Q: How are governments balancing AI innovation with regulation—and what global trends are shaping the future of AI governance?
Henry Ajder: “Legislation is one of the big questions when it comes to AI. How do we go about making sure that this technology’s benefits are felt by all—and they’re felt fairly—but also that we counter the harms and make it as difficult as possible to misuse these technologies?
“From the US to China, the UK to France to India—all over the world, countries are trying to build AI strategies to embrace innovation and build both economic success and the power of AI within their own governments. But they’re also looking to make sure that those harms aren’t proliferating, and that the companies developing these technologies are doing so responsibly.
“That balancing act is really hard—particularly as we see the geopolitical dimension of an AI arms race between the big powers emerging, where everyone is rushing to build the most powerful systems they can. The harms are indisputable, and they’re something we should be really worried about.
“At the same time, we also want to be careful that we aren’t completely squashing innovation—and that we’re not just allowing other organisations or other businesses in other countries to get ahead.
“So the regulation question right now is occupying much of the brain space of governments around the world, trying to understand: how can we make AI work for us, without it also coming back to bite us?”
This exclusive interview with Henry Ajder was conducted by Mark Matthews of The Motivational Speakers Agency.
The post Cutting Through the AI Hype: Henry Ajder on Strategy, Ethics & the Risks CIOs Can’t Ignore appeared first on European Business & Finance Magazine.