OpenAI, The Company Behind ChatGPT, Claims It Is Intensifying Efforts To Stop AI From “Going Rogue”

OpenAI, the company that developed ChatGPT, announced on Wednesday that it will dedicate significant resources and create a new research team to ensure its artificial intelligence remains safe for humans, eventually using AI to supervise itself.

“The vast power of superintelligence could … lead to the disempowerment of humanity or even human extinction,” OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

The authors of the blog post projected that superintelligent AI, meaning systems smarter than humans, could arrive this decade. Keeping such systems under human control, they argue, will require breakthroughs in so-called "alignment research," which focuses on ensuring AI remains beneficial to humans.

Microsoft-backed OpenAI will dedicate 20% of the computing power it has secured over the next four years to solving this problem, they stated. In addition, the company is forming a new group, dubbed the Superalignment team, to organise around this effort.

The team aims to build a "human-level" AI alignment researcher and then scale it up using vast amounts of computing power.

According to OpenAI, this means it will train AI systems using human feedback, train AI systems to assist in human evaluation, and finally train AI systems to carry out alignment research themselves.
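The first step, training with human feedback, is commonly done by fitting a reward model to pairwise human preferences. The following is a minimal toy sketch of that idea, not OpenAI's actual method: a linear reward model trained with the Bradley-Terry preference loss, where all feature vectors and data are hypothetical illustrations.

```python
import math

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_reward_model(preferences, dim, lr=0.1, epochs=200):
    """Fit a linear reward model r(x) = w . x so that, for each pair
    (preferred, rejected) labelled by a human rater, r(preferred) ends up
    above r(rejected). Minimises the Bradley-Terry preference loss
    -log sigmoid(r_preferred - r_rejected) by gradient descent."""
    w = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in preferences:
            p = sigmoid(dot(w, preferred) - dot(w, rejected))
            # Gradient of the loss w.r.t. w is -(1 - p) * (preferred - rejected),
            # so the descent step pushes w toward the preferred response's features.
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return w

# Hypothetical feature vectors for two model responses
# (dimensions could stand for, say, helpfulness and verbosity):
helpful = [1.0, 0.2]
unhelpful = [0.1, 0.9]
w = train_reward_model([(helpful, unhelpful)], dim=2)
assert dot(w, helpful) > dot(w, unhelpful)  # model now scores the human-preferred response higher
```

In a real system the reward model is a large neural network and the learned reward then steers the policy via reinforcement learning; this sketch only illustrates the preference-learning step.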

Connor Leahy, an AI safety advocate, said the plan was fundamentally flawed, since an early human-level AI could run amok and wreak havoc before it could be made to solve AI safety problems.

“You have to solve alignment before you build human-level intelligence, otherwise by default you won’t control it,” he said in an interview. “I personally do not think this is a particularly good or safe plan.”

The potential dangers of AI have been a pressing concern for both AI experts and the general public. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause on developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. According to a May Reuters/Ipsos poll, more than two-thirds of Americans are worried about the possible negative effects of AI, and 61% believe it could threaten civilisation.

(Adapted from MarketScreener.com)


