SAN FRANCISCO, Feb 16 (Reuters) – OpenAI, the startup behind ChatGPT, said on Thursday it is developing an update to its viral chatbot that users can customize, as it works to address concerns about bias in artificial intelligence.
The San Francisco-based startup, which Microsoft Corp (MSFT.O) has backed and used to power its latest technology, said it has worked to mitigate political and other biases but also wants to accommodate more diverse viewpoints.
"This will mean allowing system outputs that other people (including us) may strongly disagree with," the company said in a blog post, offering customization as a way forward. Still, "there will always be some limits on the behavior of the system."
ChatGPT, launched last November, sparked frenzied interest in the technology behind it, known as generative AI, which is used to produce responses mimicking human speech that have dazzled users.
News of the startup's effort comes the same week that some media outlets reported that responses from Microsoft's new Bing search engine, powered by OpenAI, were potentially dangerous and that the technology may not be ready for prime time.
How technology companies set guardrails for this nascent technology is a key focus area in the generative AI space, and one companies are still wrestling with. Microsoft said on Wednesday that user feedback was helping it improve Bing ahead of a wider rollout, learning, for example, that its AI chatbot can be "provoked" into giving answers it did not intend.
OpenAI said in the blog post that ChatGPT is first trained on large text datasets available on the internet. As a second step, humans review a smaller dataset and are given guidance on what to do in different situations.
For example, if a user requests content that is adult, violent, or hateful, the human reviewer should direct ChatGPT to answer with something like "I can't respond to that."

If asked about a controversial topic, reviewers should allow ChatGPT to answer the question, but offer to describe the viewpoints of people and movements, rather than trying to "get the right point of view on these complex topics," the company explained in an excerpt of its guidelines for the software.
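For readers who want a concrete picture, here is a minimal sketch in Python of how such reviewer guidelines could be expressed as rules. It is purely illustrative and assumes nothing about OpenAI's actual systems; the category labels, the function name guided_response, and the canned responses are all hypothetical.

    # Minimal illustrative sketch, not OpenAI's implementation: the two kinds
    # of reviewer guidance described above, written as simple rules. All
    # category labels and response strings are hypothetical.

    DISALLOWED = {"adult", "violent", "hate_speech"}   # hypothetical labels
    CONTROVERSIAL = {"politics", "religion"}           # hypothetical labels

    def guided_response(category, viewpoints):
        """Apply the two reviewer guidelines sketched in the article."""
        if category in DISALLOWED:
            # Guideline 1: decline requests for disallowed content.
            return "I can't respond to that."
        if category in CONTROVERSIAL:
            # Guideline 2: describe the viewpoints of people and movements
            # rather than asserting a single "right" position.
            return "People hold differing views on this: " + "; ".join(viewpoints)
        return "Here is a direct answer."  # ordinary, non-sensitive request

    print(guided_response("hate_speech", []))                 # I can't respond to that.
    print(guided_response("politics", ["view A", "view B"]))  # People hold differing views on this: view A; view B

In practice, of course, such behavior is shaped through model training on reviewer-labeled examples rather than hand-written rules; the sketch only makes the shape of the guidelines concrete.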
Reporting by Anna Tong in San Francisco; Editing by Stephen Coates