Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it.
In time, those limitations could be fixed.
“People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align A.I. technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
Where do A.I. systems learn to misbehave?
A.I. systems like ChatGPT are built on neural networks, mathematical systems that can learn skills by analyzing data.
Around 2018, companies like Google and OpenAI began building neural networks that learned from massive amounts of digital text culled from the internet. By pinpointing patterns in all that data, these systems learned to generate writing on their own, including news articles, poems, computer programs, even humanlike conversation. The result: chatbots like ChatGPT.