Shaping a Wasteless Future: Richard Foster-Fletcher on AI, Ethics and the New Business Imperative

Jun 11, 2025 - 05:00

Richard Foster-Fletcher is a leading voice in the world of responsible AI and digital ethics. As the Founder of NeuralPath and Chair of MKAI (Milton Keynes Artificial Intelligence), he has advised governments, startups, and major corporations on the ethical deployment of emerging technologies.

Recognised among the UK’s top artificial intelligence speakers, Richard brings a powerful combination of technical insight and social foresight—making him a sought-after commentator on the future of work, AI inclusivity, and digital transformation.

In this exclusive interview with The Champions Speakers Agency, Richard shares his candid views on job displacement, global AI equity, and why businesses must urgently rethink their approach to ethical innovation.

Q: With so much hype surrounding AI’s potential, what realistic outcomes should businesses anticipate when it comes to the future of work and employment?

Richard Foster-Fletcher: “This is such a hot topic for me, and if I can extend it to the future of jobs, I’m really concerned about—let’s be honest—a crap narrative that we’re seeing from our leaders, both in civil society and in business. 

“It wasn’t that long ago I heard the CEO of LinkedIn speak at a conference that I attended at their offices, and he said, “Of course AI will create more jobs—the internet did, therefore AI will.” And that was all he had to say on this. And it’s become this narrative where people say, “Of course it will create new jobs.” 

“Well, let’s take a step back from that. Let’s do something very simple. Let’s go into ChatGPT and ask, “What jobs are at risk?” And let’s have a look at that list. Then let’s ask, “Great, what jobs are going to be created?” 

“Now, I would encourage anybody to go and do this. Look at those two lists and ask yourselves: are they the same level of jobs? And here’s the answer: no, they’re not. Jobs will go and jobs will be created. In my opinion, there will be far more jobs at risk—likely to go—than there will be created. 

“Second of all, the jobs that are being created, I think, have got a much higher level of skills and education needed to do those jobs. They’re, in effect, niche jobs and they’re technical jobs. So should colleges and universities and governments be upskilling? Absolutely. But that does not mean that the majority of people will cross the chasm. 

“HR managers are not going to retrain to be AI model verifiers or AI ethicists. So we’ve got to think very carefully about that. 

“What I think we can see in terms of the future of work and AI is that it’s not going to create more jobs in the existing companies. 

“Logically, it can’t—because the generative AI that we’re talking about makes companies more productive. What does that mean? It means they can do more with less. Less being people. 

“So we cannot rely on these companies needing more people—apart from a few niche, highly skilled technical people. What we can expect is that AI creates opportunities—not just opportunities in AI, but opportunities in new platforms and businesses that it makes possible. 

“So our focus, I believe, as a society—particularly at the governmental level—should be: how do we therefore create more businesses? 

“If we want more jobs from AI, we’ve simply got to create more businesses from AI. And that means incentivising people, incentivising companies to go out and innovate—sponsoring that innovation, sponsoring companies to allow their employees to spend some of their time innovating on things that have nothing to do with the business that they’re in. 

“You’ve got to find a way to do that.”

 

Q: How do you see the adoption of AI technologies playing out in emerging global markets—and what challenges might these regions face when aligning with Western-built platforms?

Richard Foster-Fletcher: “I’ve been travelling quite a lot recently, presenting and working with governments in places like Tunisia and Turkey. 

“In Tunisia, it was quite interesting to see that not only have they established an AI university from one of the management schools, but they’re actually launching it in English rather than their usual French, which is an indication of how they want to connect more with the global market and the work that they’re doing. 

“The worst thing they can do is get left behind on these sorts of technologies. 

“If we look at the US, apparently 70% of businesses are now using ChatGPT. But let’s pause that thought for a second, because a lot of the talk that we hear in places like Tunisia and Turkey and others is about the cutting edge. They’re excited about the sorts of breakthroughs that they can be a part of in areas like health, agriculture, climate change, industry and manufacturing. 

“But my message to those leaders is, let’s not forget that when we talk about the majority of AI implementations, the overwhelming majority is going to be everyday companies—small companies—using platforms like ChatGPT, along with Gemini, along with other options like Claude and Perplexity, just to mention those as well. 

“And so what are the issues around that? There’s a tremendous potential uplift in productivity from those organisations jumping in and using those low-cost and no-cost tools. But let’s look at some of the data behind that: 55% of websites are in English. 50% of all internet traffic goes to US companies. 

“So, it’s not just asking how do we deliver cutting-edge research in AI? It’s not just asking how do we get companies empowered to be using these tremendously useful platforms like ChatGPT? 

“But asking, hold up—if it’s been built on websites and on traffic that’s got nothing to do with Tunisia, Turkey, other places—how relevant is it? How useful is it? And what are the risks? 

“How could it impact our culture, our sovereignty, our morality, our customers in this country if we’re using platforms that were built on data that is simply not aligned to the way that we think and the way that we work? 

“So, can they leapfrog? Absolutely. Can they be a big part of the AI story? Absolutely. But I think to some extent, it needs to be on their terms, and we’ve got to work out how to do that.”

 

Q: As AI becomes integral to organisational strategy, how do you foresee frameworks for inclusivity and digital ethics evolving across sectors—from cutting-edge tech firms and regulated industries to everyday businesses?

Richard Foster-Fletcher: “I see three distinct categories of businesses actually working around AI and ethics. 

“The first is the tech companies. They’re moving at the speed of light and their challenge is to harness the latest and greatest hardware and people. So I think it’s interesting for them to try and incorporate the ethics into that too, but to some extent they’re working at the absolute cutting edge of what’s happening in the sector. So I think that’s a great challenge for them. 

“The second group are the regulated industries—think about finance and health and so on—and for them I think the main focus is staying legal. It’s understanding the regulation that’s coming through and changing, and how do they run their models and manage their data in terms of privacy and security and ethics around that. 

“And the third is this bucket that’s everybody else. I want to talk about that specifically because that’s most businesses. And they’re not at the bleeding edge, they’re not in regulated industries, so why do they care? 

“Well, they care because we’re moving into an age now where leaders need to understand digital ethics and they need to lead in the age of AI. The ethical use of AI is something you simply cannot ignore anymore; it’s not a ‘nice to have’. 

“So these leaders need to be able to look at the decisions and the outcomes in a business and be able to have the kind of processes that can reverse-engineer that and say, “Wow, we didn’t get what we were thinking we would get there,” or, “We got something that was harmful or damaging to people or our brand.” 

“So how do we go back and change that? They’ve got to understand what’s happened in terms of the data and the people and the processes and the models to an extent that they can say, “We need to modify the way we did that to get the output that we want.” 

“Finally, I think that as inclusivity and data ethics evolve in business, leaders need to understand that the public’s perception of trust has changed. 

“If we go back a decade or so, everybody put everything on social media. It’s almost like we went into that with our eyes closed. But people are not going into AI with their eyes closed. They’re very concerned about the data that’s being shared into platforms like ChatGPT. 

“So, we’ve got a very different narrative now. People used to say, and I’ve heard people of high standing say this to me in the past, “I am absolutely fine with sharing my data with large technology companies as long as it benefits me.” But they’re not saying that anymore. 

“They’re now asking questions like, “Can I trust these autonomous systems not to exploit me?” So the rules have changed. People are much more wary about what you’re doing with their data because we’ve seen what happened in social media. We’ve seen the harms, we’ve seen the damage, and we don’t want to live through that again—or have an extrapolation of that where it’s potentially even worse with AI. 

“So, leaders have got a lot on their plate. They need to think very carefully about that.”

This exclusive interview with Richard Foster-Fletcher was conducted by Mark Matthews of The Motivational Speakers Agency.

