ChatGPT: New AI chatbot has everyone talking to it

A new chatbot has passed one million users in less than a week, the project behind it says.

ChatGPT was publicly released on Wednesday by OpenAI, an artificial intelligence research firm whose founders included Elon Musk.

But the company warns it can produce problematic answers and exhibit biased behaviour.

OpenAI says it's "eager to collect user feedback to aid our ongoing work to improve this system".

ChatGPT is the latest in a series of AIs which the firm refers to as GPTs, an acronym which stands for Generative Pre-trained Transformer.

To develop the system, an early version was fine-tuned through conversations with human trainers.
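OpenAI has not published the training code behind ChatGPT, and the firm has said the full process also involved reinforcement learning from human feedback, which the sketch below leaves out. As a rough, hypothetical illustration of what "fine-tuning on conversations with human trainers" can look like in general, here is a minimal example using the open-source Hugging Face transformers library with a small stand-in model; the transcripts, model choice and hyperparameters are placeholders, not OpenAI's actual setup.

```python
# Illustrative sketch only: supervised fine-tuning of a small stand-in model
# (gpt2) on toy conversation transcripts. This is NOT OpenAI's training code.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical human-trainer dialogues, written as plain text with role tags.
CONVERSATIONS = [
    "User: What is the capital of France?\n"
    "Assistant: The capital of France is Paris.",
    "User: Can you summarise photosynthesis in one sentence?\n"
    "Assistant: Photosynthesis is the process by which plants turn light, "
    "water and carbon dioxide into sugars and oxygen.",
]

class ChatDataset(Dataset):
    """Tokenises each transcript so the model learns to continue dialogues."""
    def __init__(self, texts, tokenizer, max_length=128):
        self.examples = [
            tokenizer(t, truncation=True, max_length=max_length) for t in texts
        ]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, idx):
        return self.examples[idx]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chat-finetune", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ChatDataset(CONVERSATIONS, tokenizer),
    # mlm=False -> standard next-token (causal) language-modelling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```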

The system also learned from access to Twitter data, according to a tweet from Elon Musk, who is no longer part of OpenAI's board. The Twitter boss wrote that he had paused access "for now".

The results have impressed many who've tried out the chatbot. OpenAI chief executive Sam Altman revealed the level of interest in the artificial conversationalist in a tweet.

The project says the chat format allows the AI to answer "follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests".
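ChatGPT itself is used through a web page rather than code, but the reason a chat format can handle follow-up questions is simple: the model is shown the whole conversation so far, so each new answer is generated with the earlier turns as context. The sketch below illustrates that idea with OpenAI's public Python client; the model name and example questions are assumptions for illustration only, not details from the article.

```python
# Illustrative sketch of a chat format: the full conversation history is sent
# with every request, so follow-up questions are answered in context.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The running transcript: every user turn and assistant reply is appended.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    """Send the whole history plus the new question, then record the reply."""
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice for illustration
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Who directed the film 2001: A Space Odyssey?"))
# The follow-up only makes sense because the earlier turns are resent.
print(ask("And when was it released?"))
```

Because every call resends the accumulated message list, the second question can be understood even though it never names the film.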

A journalist for technology news site Mashable who tried out ChatGPT reported that it was hard to provoke the model into saying offensive things.

Mike Pearl wrote that in his own tests "its taboo avoidance system is pretty comprehensive".

However, OpenAI warns that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers".

Training the model to be more cautious, says the firm, causes it to decline to answer questions that it can answer correctly.

Briefly questioned by the BBC for this article, ChatGPT revealed itself to be a cautious interviewee capable of expressing itself clearly and accurately in English.

Did it think AI would take the jobs of human writers? No – it argued that "AI systems like myself can help writers by providing suggestions and ideas, but ultimately it is up to the human writer to create the final product".

Asked what would be the social impact of AI systems such as itself, it said this was "hard to predict".

Had it been trained on Twitter data? It said it did not know.

Only when the BBC asked a question about HAL, the malevolent fictional AI from the film 2001: A Space Odyssey, did it seem troubled.

Image caption: A question ChatGPT declined to answer – or maybe just a glitch (source: OpenAI/BBC)

That was most likely just a random error – unsurprising, perhaps, given the volume of interest.

Its master's voice

Other firms which have opened conversational AIs to general use have found they could be persuaded to say offensive or disparaging things.

Many are trained on vast databases of text scraped from the internet, and consequently they learn from the worst as well as the best of human expression.

Meta's BlenderBot3 was highly critical of Mark Zuckerberg in a conversation with a BBC journalist.

In 2016, Microsoft apologised after an experimental AI Twitter bot called "Tay" said offensive things on the platform.

And others have found that sometimes success in creating a convincing computer conversationalist brings unexpected problems.

Google's LaMDA was so plausible that a now-former employee concluded it was sentient and deserving of the rights due to a thinking, feeling being, including the right not to be used in experiments against its will.

Jobs threat

ChatGPT's ability to answer questions caused some users to wonder if it might replace Google.

Others asked if journalists' jobs were at risk. Emily Bell of the Tow Center for Digital Journalism worried that readers might be deluged with "bilge".

One question-and-answer site has already had to curb a flood of AI-generated answers.

Others invited ChatGPT to speculate on AI's impact on the media.

General purpose AI systems, like ChatGPT and others, raise a number of ethical and societal risks, according to Carly Kind of the Ada Lovelace Institute.

Among the potential problems of concern to Ms Kind are that AI might perpetuate disinformation, or "disrupt existing institutions and services – ChatGPT might be able to write a passable job application, school essay or grant application, for example".

There are also, she said, questions around copyright infringement "and there are also privacy concerns, given that these systems often incorporate data that is unethically collected from internet users".

However, she said they may also deliver "interesting and as-yet-unknown societal benefits".

ChatGPT learns from human interactions, and OpenAI chief executive Sam Altman tweeted that those working in the field also have much to learn.

AI has a "long way to go, and big ideas yet to discover. We will stumble along the way, and learn a lot from contact with reality.

"It will sometimes be messy. We will sometimes make really bad decisions, we will sometimes have moments of transcendent progress and value," he wrote.
