
Geoffrey Hinton tells us why he now fears the technology he helped build

It took until the 2010s for the power of neural networks trained with back-propagation to truly take effect. Working with several graduate students, Hinton showed that his technique was better than any other at getting a computer to identify objects in images. They also trained a neural network to predict the next letter in a sentence, a precursor to today's large language models.
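The next-letter prediction mentioned above can be illustrated at a toy scale. The sketch below is not the network Hinton's group trained; it is a hypothetical bigram frequency model that predicts the most common successor of each character, just to make the task concrete.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it."""
    follows = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, ch):
    """Return the most frequent successor of ch seen in training."""
    if ch not in follows:
        return None
    return follows[ch].most_common(1)[0][0]

model = train_bigram("the theory then thrived")
print(predict_next(model, "t"))  # 'h' follows 't' most often here
```

A real language model replaces these raw counts with a learned neural network and a much longer context, but the prediction task is the same.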

One of these graduate students was Ilya Sutskever, who went on to co-found OpenAI and lead the development of ChatGPT. "We got the first hints that these things could be amazing," Hinton says. "But it's taken a long time to sink in because it has to be done on a massive scale to be good." Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols such as words or numbers.

But Hinton was not convinced. He worked on neural networks, software abstractions of the brain in which neurons and the connections between them are represented in code. By changing the numbers used to represent those connections, the neural network can be rewired. In other words, it can be made to learn.

“My dad was a biologist, so I thought in biological terms,” ​​Hinton says. “And symbolic reasoning is clearly not at the core of biological intelligence.

“Crows can solve puzzles, but they don’t have a language. They don’t do this by saving strings of characters and manipulating them. They do this by changing the strength of connections between neurons in their brains. And so it should be possible to learn complex things by changing the strengths of connections in an artificial neural network.”

New intelligence

For 40 years, Hinton viewed artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that has changed: by trying to mimic what biological brains do, we have come up with something better. "It's scary when you see it," he says. "It's a sudden flip."

Hinton’s fears will strike many as the stuff of science fiction. But here is his case.

As their name suggests, large language models are made of massive neural networks with a huge number of connections. But they are small compared to the brain. “Our brain has 100 trillion connections,” Hinton says. “Large language models have up to half a trillion, a trillion at most. However, GPT-4 knows hundreds of times more than a single human. So maybe it actually has a much better learning algorithm than we do.”
