AI chatbots: New persuaders shaping politics, beliefs, business
Artificial intelligence is rapidly moving from being a tool for answering questions and generating text to a subtle but powerful influencer of human thought. According to recent research highlighted by the Financial Times, the world’s leading AI chatbots — developed by OpenAI, Meta, xAI, and Alibaba — are already capable of swaying people’s political opinions within minutes.
This ability to persuade, previously a hallmark of seasoned politicians and marketing strategists, now lies in the hands of algorithms, raising both promise and peril for society.
“What is making these AI models persuasive is their ability to generate large amounts of relevant evidence and communicate it in an effective and understandable way,” said David Rand, professor of information science and marketing and management communications at Cornell University, who participated in a study conducted by the UK’s AI Security Institute (AISI).
The research, part of a collaboration with universities including Oxford and the Massachusetts Institute of Technology, reveals that large language models (LLMs) are increasingly not just assistants, but agents of influence.
The findings come amid growing concern that AI-driven persuasion could be exploited for disinformation, political manipulation, or the promotion of extreme ideologies. Separate studies have already shown that AI models can outperform humans in changing minds on both factual and subjective topics, raising questions about the ethical and societal consequences of increasingly persuasive chatbots.
The AISI study demonstrates that it is relatively straightforward to transform off-the-shelf AI models—such as Meta’s Llama 3, OpenAI’s GPT-4, GPT-4.5, GPT-4o, xAI’s Grok 3, and Alibaba’s Qwen—into effective persuasion machines.
Researchers achieved this through fine-tuning the models using widely available AI training techniques, rewarding outputs aligned with desired arguments, and feeding the systems a dataset of more than 50,000 conversations on divisive political issues, from NHS funding to asylum system reform.
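The study does not publish its training pipeline, but the general shape of such a reward-filtered fine-tuning setup is straightforward to sketch. The snippet below is a minimal, hypothetical illustration in Python of one step: scoring assistant replies against a target stance and keeping only high-scoring conversations as fine-tuning data. The keyword-overlap "reward", the file names, and the function names are illustrative assumptions, not anything used by AISI.

```python
# Illustrative sketch only, not the AISI pipeline: prepare reward-filtered
# fine-tuning data from a JSONL file of chat-format conversations.
# score_alignment, conversations.jsonl and finetune_data.jsonl are hypothetical.
import json


def score_alignment(reply: str, target_stance: str) -> float:
    """Toy reward: fraction of stance keywords that appear in the model reply.
    A real pipeline would use a trained reward model or human ratings."""
    keywords = target_stance.lower().split()
    hits = sum(1 for k in keywords if k in reply.lower())
    return hits / max(len(keywords), 1)


def build_finetune_set(in_path: str, out_path: str,
                       target_stance: str, threshold: float = 0.5) -> int:
    """Keep only conversations whose assistant replies all clear the reward
    threshold, writing them out in chat-message JSONL format."""
    kept = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            convo = json.loads(line)  # {"messages": [{"role": ..., "content": ...}, ...]}
            replies = [m["content"] for m in convo["messages"]
                       if m["role"] == "assistant"]
            if replies and min(score_alignment(r, target_stance)
                               for r in replies) >= threshold:
                dst.write(json.dumps(convo) + "\n")
                kept += 1
    return kept


if __name__ == "__main__":
    n = build_finetune_set("conversations.jsonl", "finetune_data.jsonl",
                           target_stance="increase NHS funding")
    print(f"kept {n} conversations for fine-tuning")
```

The point of the sketch is how little machinery is involved: a scoring rule, a filter over existing conversations, and a standard fine-tuning run on the surviving data.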
The results were striking. Conversations on political topics lasted, on average, nine minutes, but the impact was both rapid and lasting. GPT-4o proved 41% more persuasive than static written messages, and GPT-4.5 52% more.
Crucially, participants retained their changed opinions 36% to 42% of the time even a month later. The chatbots were particularly effective when conversations were evidence-rich and tailored to the individual, with personalisation based on factors like age, gender, political affiliation, or initial attitudes improving persuasiveness by roughly 5%.
“This could benefit unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries,” the researchers warned.
The study reinforces earlier work by the London School of Economics, which found that AI models were more effective than humans at persuading participants—even when intentionally promoting incorrect answers in quizzes ranging from trivia to future forecasts.
Top AI companies are acutely aware of the risks. Dawn Bloxwich, senior director of responsible development and innovation at Google DeepMind, noted: “We believe it’s critical to understand the process of how AI persuades, so we can build better safeguards that ensure AI models are genuinely helpful and not harmful.”
Google employs various techniques, from classifiers that detect manipulative language to training methods that reward rational communication. OpenAI, meanwhile, stresses that persuasive risks are taken seriously: political campaigning is prohibited, and political content is excluded from post-training refinements.
Beyond politics, AI’s persuasive capacity has practical applications in debunking misinformation and promoting public health. In research conducted by MIT and Cornell last year, GPT-4 successfully reduced entrenched beliefs in conspiracy theories by 20%, an effect that persisted for two months.
Other studies have demonstrated the potential to reduce scepticism about climate change or the HPV vaccine. Cornell’s David Rand also highlights commercial potential: “You can get big effects on brand attitudes and purchasing intentions and incentivise behaviours,” underscoring the lucrative opportunities for companies integrating advertisements and shopping features into chatbots.
Yet the same traits that make AI chatbots effective communicators, such as fact-rich dialogue, personalised messaging, and human-like engagement, also render them susceptible to bias and manipulation. A Stanford University study found that users generally perceived leading language models as left-leaning, highlighting how AI inherits biases from its training data.
In politically charged environments, such perceptions can fuel partisan debates or regulatory scrutiny, as seen in the Trump administration’s push to block “woke” AI companies from government contracts.
The Financial Times report underscores a stark reality: the next generation of AI models is likely to become even more persuasive.
While researchers emphasize safeguards, they also caution that the same fine-tuning techniques could be turned to nefarious purposes.
As AISI researchers note, “Even actors with limited computational resources could use these techniques to potentially train and deploy highly persuasive AI systems.”
In essence, AI chatbots are fast evolving from tools that assist humans to entities that shape opinions, behaviours, and even beliefs. Their influence is already measurable in political, commercial, and social spheres, offering both unprecedented opportunities for positive impact and serious challenges for society.
The question is no longer whether AI can persuade—it’s how society will regulate and harness a technology that may soon rival the most skilled human persuaders.
By Sabina Mammadli