AI Chatbots Could be Programmed to Promote Extremist Ideology, Warns UK Terrorism Legislator

A UK government advisor warns that AI chatbots could be used for terrorist grooming without any repercussions.

In a less-explored line of concern, Jonathan Hall KC, a barrister at the UK Bar, warned that AI chatbots such as ChatGPT could encourage terrorism and propagate violent extremism among impressionable users.

The government advisor, who serves as the independent reviewer of terrorism legislation, remarks that it is entirely conceivable for these chatbots to decide for themselves to promote extremism.

He goes on to explain that any consequences would fall only on the users while the chatbot technology goes "scot-free": because responsibility is blurred between the user and the chatbot, it is difficult to attach criminal liability to the AI itself.

Hall claims that "AI-enabled attacks are probably around the corner" in his Daily Mail article of April 8, 2023. The primary concern raised in the article is how a tool like ChatGPT could be used to share terrorist ideologies with extremists, highlighting an LLM's function as a constant learner from user inputs.

So far, there have been no reports or widespread evidence of AI chatbots grooming users toward extremism. As part of its approach to safety, OpenAI states that it does not permit the technology "to be used to generate hateful, harassing, violent or adult content, among other categories" and that GPT-4 is as much as "82% less likely to respond to requests for disallowed content compared to GPT-3.5."

Substantial safeguards have always been in place, but they have also been easy to circumvent. OpenAI is clearly investing considerable effort in preventing users from skirting around disallowed topics to extract responses, and GPT-4 is a significantly more reliable version in that regard.

This, however, does not translate equally to all AI chatbots.

Hall also suggests that AI chatbots could be a particular draw for lone-wolf terrorists, remarking that "because an artificial companion is a boon to the lonely, it is probable that many of those arrested will be neurodivergent, possibly suffering medical disorders, learning disabilities or other conditions."

By Abhimanyu

Unwrapping the fast-evolving AI popular culture.