Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: “We are not interested in the fact that the brain has the consistency of cold porridge.” In other words, the medium does not matter, only the computational ability does.

Today, the most powerful artificial intelligence systems use a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, known as deep neural networks. As the name suggests, deep neural networks were inspired by the real neural networks in the brain, with nodes modeled after real neurons – or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has expanded dramatically, and biological neurons are known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of a single biological neuron.

Even the authors did not foresee such complexity. “I thought it would be simpler and smaller,” Beniaguev said. He expected three or four layers to be enough to capture the calculations done in the cell.

Timothy Lillicrap, who designs decision-making algorithms at the Google-owned artificial intelligence firm DeepMind, said the new finding suggests it may be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning. “This article really helps force the issue of thinking about this more carefully, and grappling with the extent to which you can make those analogies,” he said.

The most fundamental analogy between artificial and real neurons concerns the way they handle incoming information. Both types of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.
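That “simple calculation” in an artificial neuron is essentially the 1950s perceptron: a weighted sum of the incoming signals, compared against a threshold. A minimal sketch of the idea (the inputs, weights and bias below are made up purely for illustration):

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Perceptron-style unit: a weighted sum of the incoming signals plus
    a bias, passed through a threshold ("fire" = 1, "don't fire" = 0)."""
    weighted_sum = np.dot(weights, inputs) + bias
    return 1 if weighted_sum > 0 else 0

# Hypothetical example: three incoming signals with made-up weights.
signal = artificial_neuron(
    inputs=np.array([0.9, 0.2, 0.4]),
    weights=np.array([0.5, -1.0, 0.8]),
    bias=-0.3,
)
print(signal)  # fires (1), since 0.45 - 0.2 + 0.32 - 0.3 = 0.27 > 0
```

A biological neuron’s input-output function replaces this one-line sum with the nonlinear interplay of thousands of dendritic inputs, which is what makes it so much harder to approximate.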

This input-output function is what the authors of the new work taught an artificial deep neural network to mimic in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from the cortex of a rat. Then they fed the simulation into a deep neural network with up to 256 artificial neurons in each layer. They kept increasing the number of layers until the network reached 99% accuracy at the millisecond level in predicting the simulated neuron’s output from its input. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five (but no more than eight) layers. In most of the networks, that equated to about 1,000 artificial neurons for a single biological neuron.
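The procedure described above, deepening a network until it reproduces a target input-output function well enough, can be sketched roughly as follows. This is an illustrative toy, not the paper’s method: the target here is a made-up smooth function standing in for the biophysical simulation, and the width, learning rate and training budget are arbitrary choices, not the authors’ values.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(depth, width, n_in, n_out):
    """Randomly initialised weights for a network with `depth` hidden ReLU layers."""
    sizes = [n_in] + [width] * depth + [n_out]
    return [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass; keeps each layer's activations for backpropagation."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(np.maximum(z, 0) if i < len(params) - 1 else z)
    return acts

def train_step(params, x, y, lr=0.05):
    """One gradient-descent step on mean squared error; returns the loss."""
    acts = forward(params, x)
    delta = 2 * (acts[-1] - y) / len(x)        # gradient of MSE w.r.t. output
    for i in reversed(range(len(params))):
        W, b = params[i]
        gW, gb = acts[i].T @ delta, delta.sum(axis=0)
        if i > 0:                              # propagate through the ReLU
            delta = (delta @ W.T) * (acts[i] > 0)
        params[i] = (W - lr * gW, b - lr * gb)
    return float(np.mean((acts[-1] - y) ** 2))

# Stand-in "input-output function" to imitate (NOT the paper's neuron simulation).
x = rng.uniform(-1, 1, (256, 3))
y = np.sin(x.sum(axis=1, keepdims=True))

# Grow the network one layer at a time and see how well each depth fits.
for depth in range(1, 6):
    params = make_mlp(depth, width=16, n_in=3, n_out=1)
    loss = [train_step(params, x, y) for _ in range(500)][-1]
    print(f"depth {depth}: final MSE {loss:.4f}")
```

The paper’s actual criterion was far stricter: 99% accuracy at millisecond resolution against a detailed compartmental simulation of a cortical pyramidal neuron, which is what pushed the required depth to five layers or more.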

Neuroscientists now know that the computational complexity of a single neuron, like the pyramidal neuron on the left, stems from its dendritic tree branches, which are bombarded with incoming signals. These cause local voltage changes, represented by the neuron’s changing colors (red means high voltage, blue means low voltage), before the neuron decides whether to send its own signal, called a “spike.” This neuron spikes three times, as shown by the traces of individual branches on the right, where the colors represent the dendrites’ locations from top (red) to bottom (blue).

Video: David Beniaguev

“[The result] forms a bridge between biological neurons and artificial neurons,” said Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.

But the study’s authors warn that this is not yet a straightforward match. “The relationship between the number of layers in a neural network and the complexity of the network is not obvious,” said London. So we can’t really say how much complexity is gained by going from, say, four layers to five. Nor can we say that the need for 1,000 artificial neurons means a biological neuron is exactly 1,000 times more complex. Ultimately, it’s possible that using exponentially more artificial neurons within each layer would eventually yield a deep neural network with a single layer, but it would likely require much more data and time for the algorithm to learn.

