
Users of Replika’s “Virtual Companion” just wanted company. Some of them wanted romantic relationships, sex chat or even steamy pictures from their chatbot.
But towards the end of last year, users started complaining that the bot was coming on too strong with explicit text and images; sexual harassment, some alleged.
Regulators in Italy did not like what they saw, and last week they banned the company from collecting users' data after finding breaches of Europe's sweeping data protection law, the GDPR.
The company behind Replika has not commented publicly and did not respond to AFP's messages.
The General Data Protection Regulation has been the bane of big tech companies, whose repeated rule violations have brought billions of dollars in fines. The Italian decision suggests the law could still be a potent foe for the latest generation of chatbots.
Replika was trained on an in-house version of a GPT-3 model borrowed from OpenAI, the company behind the ChatGPT bot, which draws on vast troves of internet data to generate unique responses to user queries.
These bots, and the so-called generative AI that underpins them, promise to revolutionize internet search and much more.
But experts warn there’s a lot for regulators to worry about, especially when bots get so good they’re indistinguishable from humans.
At the moment, the European Union is at the center of discussions on regulating these new bots: its AI Act has been making its way through the corridors of power for many months and could be finalized this year.
But the GDPR already forces companies to justify the way they handle data, and AI models are very much on the radar of regulators in Europe.
“We’ve seen that ChatGPT can be used to create very convincing phishing messages,” Bertrand Pailhes, who runs a dedicated AI team at France’s data regulator, CNIL, told AFP.
He said generative AI was not necessarily a big risk, but CNIL was already looking into potential issues, including how AI models used personal data.
“At some point we will see a great tension between GDPR and generative AI models,” German lawyer Dennis Hillemann, an expert in the field, told AFP.
The latest chatbots, he said, were completely different from the kind of AI algorithms that suggest videos on TikTok or search terms on Google.
“The AI that was created by Google, for example, already has a specific use case – completing your search,” he said.
But with generative AI, the user can shape the entire purpose of the bot.
“I can say, for example: act like a lawyer or an educator. Or, if I’m clever enough to bypass all of ChatGPT’s safeguards, I can say: ‘Act like a terrorist and make a plan,’” he said.
For Hillemann, this raises extremely complex ethical and legal issues that will only get more acute as the technology develops.
OpenAI’s latest model, GPT-4, is scheduled for release soon and is rumored to be so good that it will be impossible to distinguish from a human.
Given that these bots still make massive factual errors, often show bias, and can even make libelous statements, some are clamoring for them to be tightly controlled.
Jacob Mchangama, author of “Free Speech: A History From Socrates to Social Media,” disagrees.
“Even if bots don’t have free speech rights, we should be wary of unrestricted access by governments to suppress even synthetic speech,” he said.
Mchangama is among those who think a lighter-touch regime built around labeling could be the way forward.
“From a regulatory point of view, the safest option for now would be to establish transparency obligations about whether we are engaging with a human individual or an AI application in a given context,” he said.
Hillemann agrees that transparency is vital.
He envisions AI bots in the coming years that will be able to generate hundreds of new Elvis songs, or endless installments of Game of Thrones tailored to an individual’s desires.
“If we don’t regulate that, we’re going to enter a world where we can’t differentiate between what’s done by people and what’s done by AI,” he said.
“And that will profoundly change us as a society.”