Following the release of the KPMG / University of Melbourne report Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025, automation strategist Raihan Islam has warned that organisations relying too heavily on AI for their communications risk weakening their credibility and public trust.
The research, covering more than 48,000 respondents in 47 nations, shows that while adoption of AI is accelerating, confidence in its outputs is not keeping pace. Worldwide, only 46% of respondents say they are prepared to trust AI systems.
“AI-First must mean Human-First,” said Raihan Islam, founder of High-Velocity X and architect of the Velocity OS™ operations platform. “If what you publish doesn’t sound like you, your credibility drops instantly. Use AI to accelerate work to 60%, then rely on human judgement, context and expertise to take you to 80% and beyond. That is what the phrase ‘AI-first means human-first’ really means.”
The UK-specific findings are even more pointed. Just 42% of UK participants expressed trust in AI, and 72% said they are uncertain whether the online content they encounter is genuine or machine-generated. That signals a reputational risk for companies whose messaging fails to sound recognisably human.
Raihan Islam, a Certified Chair™ of advisory boards, argues that overuse of AI-generated content is producing uniformity and eroding distinctive voice. “When ChatGPT arrived, everyone tried it. Businesses and LinkedIn influencers rushed to use it for quick, polished text. Three years later the hype has faded. The recycled phrases, the ‘not X but Y’ clichés, the sterile, packaged tone – it all adds up. I used long dashes for years. It was my trademark. Now I have to delete them so people don’t think I’m a bot.”
According to Islam, organisations that deploy AI for content or analysis without human oversight expose themselves to reputational, legal and operational risk. “If people allow AI to speak for them without checking the statements, they open themselves up to potential defamation claims, or to fines for relying on AI-generated analysis when they shouldn’t have.”