“There’s a very common misconception, even in the A.I. community, that there are only a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.
But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and that it will soon surpass it in others. They say the technology has shown signs of advanced abilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.
In a blog post last week, Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to prevent the spread of nuclear weapons.
Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.