The Idea In 60 Seconds

  • AI-powered conversational interfaces are likely to become central to how we interact with technology soon.
  • An MIT Sloan study (details and link below) reveals that personalized AI conversations can significantly alter even deeply held beliefs in just 10 minutes.
  • I have personally worked on similar AI prototypes.
  • While AI offers valuable tools for efficiency and productivity, it also carries the risk of influencing people's courses of action or even, in extreme cases, radicalizing users through conversational manipulation.
  • Yuval Noah Harari has warned that AI could be used to hack human behaviour and control individuals’ decisions, presenting an existential challenge to human autonomy.
  • Solutions might include a focus on regulation, ethical AI development, critical thinking, and non-violent responses to disagreement.
  • There’s no substitute for critical thinking.

The MIT Example: AI’s Power to Change Minds

A recent MIT Sloan study showed that even conspiracy theorists, whose views are notoriously difficult to sway, experienced significant attitude shifts after just 10 minutes of personalized, persuasive conversation with an AI.

The change in their thinking was substantial and lasting, highlighting AI’s ability to subtly and effectively influence beliefs. In this case, the tool was used for good, helping to deprogram individuals from harmful conspiracy theories.

Which raises a question: if AI can persuade someone to abandon deeply held beliefs in mere minutes, can it also radicalize or manipulate them just as quickly? Given how rapidly conversational interfaces are approaching, in my view it's a question worth asking.

The Context: Conversational Interfaces & AI Assistants

As I’ve written about before, we are on the brink of a shift from the old WIMP interfaces (Windows, Icons, Menus, Pointers) to AI-driven conversational interfaces.

Separately, AI assistants like Siri, Alexa, and Google Assistant, built on these now much better conversational interfaces, will become increasingly integrated into our lives, handling tasks for us. Our AI assistants will help us with everything from shopping to remembering things, and will even act as a friend or a therapist for us.

These interfaces and AI assistants won't just respond to commands. To personalise the services we want, they will engage in conversations, learn about us, and adapt to our needs as our feelings evolve. Very soon we will talk to them more than to anyone else in our lives.

While this may seem like a leap forward in convenience and efficiency, it also opens new avenues for influence.

A Small Part Of The History Of Data and Influence

Social media platforms like Facebook, YouTube, and TikTok have already demonstrated how AI-driven algorithms can manipulate us by amplifying polarising content.

Famously, Facebook was involved in a data privacy scandal with Cambridge Analytica. Cambridge Analytica harvested the data of millions of users on the social media platform and used it to personalise persuasive ads intended to influence an election.

The danger is that similar algorithms, operating through ubiquitous conversational interfaces, could serve commercial interests (make us buy things) or even push people toward harmful beliefs at scale (make us hate, and even act against, people or governments).

There are many other examples, including QAnon and its contribution to the January 6th riots in Washington D.C.

My Personal Experience With Conversational Interfaces Designed To Influence People

I recently built a simple AI prototype: ostensibly a chatbot that tried to sell a fictitious product, 'Widgets'. The hypothesis behind the prototype was that in a world of conversational interfaces, charm (adapting conversational responses to influence customers based on their personality needs) would be a powerful way to sell.

While a user discussed the widgets they wanted to buy with the chatbot, the AI would analyse the user's word choice, grammar, focus, questions, speed of response, and other variables to construct a psychological profile of the customer.
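
To make that concrete, below is a minimal sketch of the kind of shallow feature extraction involved. This is not the prototype's actual code; the feature names, thresholds, and example data are illustrative assumptions:

```python
import re
from dataclasses import dataclass

@dataclass
class ChatTurn:
    text: str
    seconds_to_respond: float  # how long the user took to reply

def extract_features(turns: list[ChatTurn]) -> dict[str, float]:
    """Build a crude linguistic/behavioural feature vector from chat turns."""
    all_text = " ".join(t.text for t in turns)
    words = re.findall(r"[a-z']+", all_text.lower())
    n_words = max(len(words), 1)
    return {
        # Signals loosely associated with emotional engagement
        "exclamation_rate": all_text.count("!") / n_words,
        "first_person_rate": sum(w in {"i", "me", "my", "mine"} for w in words) / n_words,
        # Signals loosely associated with analytical focus
        "question_rate": all_text.count("?") / n_words,
        "number_rate": len(re.findall(r"\d+", all_text)) / n_words,
        "avg_word_length": sum(map(len, words)) / n_words,
        # Behavioural signal: reply speed
        "avg_response_seconds": sum(t.seconds_to_respond for t in turns) / len(turns),
    }

turns = [
    ChatTurn("I love how these widgets look! My friends will be so jealous!", 2.1),
    ChatTurn("What's the unit price at 500 units? What alloy is the casing?", 9.7),
]
print(extract_features(turns))
```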

Each user was then put into a segment. A simplified way to explain it is to polarise customer types into two groups: an emotional buyer and a rational buyer. (The segmentation model actually had 7 segments.)

The chatbot would tell the emotional buyer how it would 'feel' to own the product. To the rational buyer, the chatbot would provide specifications, volume discounts, engineering information, and delivery time frames.
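
Continuing the sketch above, here is an illustrative two-segment version of that adaptation: score the feature vector, pick a segment, and answer in that segment's register. The thresholds and sales copy are invented for this example:

```python
def classify(features: dict[str, float]) -> str:
    """Toy segmentation: compare crude 'emotional' vs 'rational' scores."""
    emotional = features.get("exclamation_rate", 0) + features.get("first_person_rate", 0)
    rational = features.get("question_rate", 0) + features.get("number_rate", 0)
    return "emotional" if emotional >= rational else "rational"

# Segment-specific sales copy (invented for illustration)
PITCHES = {
    "emotional": "Imagine unboxing your Widget tonight. Owners tell us it just feels right.",
    "rational": "Widget v2: 14% lighter, 30-day returns, 5% off at 100+ units, ships in 3 days.",
}

def respond(features: dict[str, float]) -> str:
    """Answer the same underlying question in the segment's preferred register."""
    return PITCHES[classify(features)]

features = {"exclamation_rate": 0.02, "first_person_rate": 0.06,
            "question_rate": 0.01, "number_rate": 0.00}
print(respond(features))  # -> the emotional pitch
```

A real system would replace these hand-rolled heuristics with a trained classifier, but the pipeline (profile, segment, tailor the reply) is the same shape.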

What surprised me most, working on the prototype, was the quality of the psychological profile I could build from a small amount of information. Just 200-300 words could give me a Myers-Briggs breakdown, an OCEAN personality diagnosis, and a segment assignment for the sales model I've described. I've done some research into this, and that number, around 250 words, is a fairly standard benchmark for this sort of work.

Imagine if, instead of 250 words, the AI assistant you were working with had 3 years of conversation with you.

Yuval Harari’s Prescient Thoughts

Few thinkers have articulated the dangers of conversational AI as clearly as Yuval Noah Harari. In his writings (and his more accessible YouTube videos), Harari warns in detail about precisely this.

His concern is that AI will know individuals better than they know themselves, predicting and shaping their decisions in ways that undermine personal autonomy.

Harari goes on to generalise at scale, envisaging a future where AI could create digital dictatorships, in which the entities controlling AI systems—whether governments or corporations—could exert control over entire populations.

His warning about AI challenging human free will is important in a world where algorithms are already influencing what we buy, how we vote, and what we believe.

As AI assistants become more widely distributed, Harari’s concern that humans may lose their ability to make independent decisions seems realistic.

What Can We Do?

1. Regulation and Transparency:

    • AI systems must be more transparent. Users should know when they are interacting with AI and understand how these systems profile and influence them.
    • Governments and tech companies should implement ethical guidelines for AI, ensuring that systems are not designed to manipulate users without their informed consent.
    • Stronger data ownership laws are essential. Individuals should have control over their personal data and how it is used by AI systems. Limiting the power of data-hungry AI systems could reduce the risk of manipulation.

2. Ethical AI Development:

    • AI developers should adopt frameworks that prioritize ethical behaviour over profit-driven engagement metrics. (And be required by law to do that.)
    • AI systems should be regularly audited to ensure they aren’t being used for harmful purposes.
    • Companies should invest in de-radicalization efforts, using AI as a tool for good by fostering dialogue that promotes understanding and counters harmful ideologies.

There is No Substitute For Critical Thinking

We are already subject to influence all day. It's the purpose of brands and of successful salespeople. It's why we sleep with one person and not another. Successful Instagram posters are literally called influencers.

TV stations (CNN, Fox) hold ideological positions they fervently deny. Politicians lie and obfuscate. Companies put values on the wall and act in ways diametrically opposed to them.

In such a world, there's no substitute for critical thinking. And therein lies the problem.

What proportion of the people you know critically interpret the information they're provided, go to the source data, and form their own views?

What proportion are self-aware enough to consider their own biases, know their own subconscious desires, define strongly held personal values, and act in accordance with them?

It is the people who don't who will be most subject to manipulation, by AI as by the other sources of influence I've mentioned. (Just to be clear, I am well aware I could be better at all of these things.)

At a more grassroots level, while paying for an AI assistant may not solve all these problems, it could offer a way to avoid the most dangerous manipulations that come with ad-funded, free AI tools.

After all, whoever controls the AI controls the future, and I'd rather pay for an assistant that serves my best interests than one that seeks to manipulate them.