
http://www.nature.com/nature/journal/v521/n7550/full/521037a.html
Computer science: Nanoscale connections for brain-like circuits
Robert Legenstein
Nature 521, 37–38 (07 May 2015) doi:10.1038/521037a
Published online 06 May 2015
The human brain is a network of billions of neurons that communicate through some 10^15 synaptic connections. Our cognitive abilities result from computations performed in this vast network, which is shaped by experience as learning drives changes in the strengths of synaptic coupling. Synthetic neuromorphic circuits use the same massively parallel architecture in complementary metal-oxide-semiconductor (CMOS) technology, which underpins much of the circuitry in conventional computers. But designing neuromorphic chips that approach the connectivity of the human brain remains challenging. On page 61 of this issue, Prezioso et al. [1] report a major advance in the field: an artificial neural network that learns to solve a visual-recognition task on the basis of artificial synapses formed from devices called memristors.
Nearly all contemporary computational devices are based on a design known as the von Neumann architecture. The Achilles heel of this incredibly successful approach is the separation of computation and memory: although data are manipulated in the central processing unit, they are stored in a separate random-access memory. Any operation therefore involves the transfer of data between these components. Known as the von Neumann bottleneck [2], this constant shuttling of data renders computation inefficient.
An alternative model is offered by the architecture of the brain, in which computation and memory are highly intermingled. The 'program' (which includes previously observed data and memories) is stored in the strengths of synaptic connections directly adjacent to the neuronal processing units. Derivatives of this architecture, known as artificial neural networks, have been investigated since the inception of computer science [3, 4].
Artificial neural networks are not programmed like conventional computers. Just as humans learn from experience, they acquire their function from data during a training phase. Human-like performance has recently been obtained for several tasks [5] by using huge data sets to train large networks containing hundreds of millions of connections. This research has further fuelled interest in brain-inspired neuromorphic hardware that emulates neuronal computation more directly than conventional hardware does, in a massively parallel design. But communication between emulated neurons is a crucial factor, and so most of the chip area and power budget of neuromorphic hardware is consumed by the CMOS circuits that act as artificial synapses.
Memristors seem to offer an ideal solution to this problem [6]. These devices are resistors that have an analog memory conceptually similar to that of biological synapses. Memristor arrays can be fabricated at extremely high density, operate at ultra-low power, and capture key aspects of biological synaptic plasticity (the ability of synaptic connections to strengthen or weaken as a function of the connected neurons' activity). But using memristors as artificial synapses has proved difficult because of high device-to-device variability: even when two devices are fabricated with identical parameters, their actual behaviour can be quite different.
Enter Prezioso and colleagues. They have fabricated a memristive crossbar array consisting of a grid formed by 12 horizontal and 12 vertical metal wires, with connectors made of layers of aluminium oxide and titanium dioxide between the wires at the crosspoints. A memristor is thereby formed at every intersection of wires. The authors used this array to implement artificial synaptic connections for a simple neural network (Fig. 1).
Figure 1: A memristive neural network.
The cartoon depicts a fragment of Prezioso and colleagues' artificial neural network [1], which consists of crossing horizontal and vertical wires that have memristor devices (yellow) at the junctions. Input voltages V1 to V3 (the network inputs) drive currents through the memristors, and these currents are summed in the vertical wires. Artificial neurons (triangles) process the difference between currents in neighbouring wires to produce outputs f1 and f2. The plus and minus symbols on the neurons indicate that the output depends on current differences.
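To make the figure's arithmetic concrete, here is a minimal sketch in Python of the dot-product operation such a crossbar performs. The conductance range, input voltages and tanh nonlinearity are illustrative assumptions, not values from the paper; the point is that each column current is the conductance-weighted sum of the input voltages, and each artificial neuron responds to the difference between a neighbouring pair of columns.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 3     # horizontal wires driven by voltages V1..V3
n_columns = 4    # vertical wires; each (+, -) pair feeds one artificial neuron

# Memristor conductances at the crosspoints (siemens); the range is assumed
# purely for illustration.
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_columns))

# Input voltages applied to the horizontal wires.
V = np.array([0.10, -0.20, 0.05])

# Ohm's law plus Kirchhoff's current law give the column currents:
# I_j = sum_i V_i * G[i, j] -- a vector-matrix product computed in analog.
I = V @ G

# Each neuron takes the difference of a neighbouring (+, -) column pair
# and applies a nonlinearity (tanh, chosen here only for illustration).
f = np.tanh(I[0::2] - I[1::2])
print(f)   # the two network outputs, f1 and f2
```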
Prezioso et al. used junction parameters (such as the layer thicknesses) that they had previously determined in exhaustive tests to minimize memristor variability. This allowed the authors to produce a crossbar without the need for additional transistors at crosspoints to compensate for variability: avoiding the use of compensatory transistors is a prerequisite for high network connectivity.
Because neural networks adapt their synaptic strengths during training, it is important that the conductance of memristors can also change during operation. Prezioso and colleagues demonstrated that their system has this capability in an experiment in which the network quickly learned to report which letter of the alphabet was shown in a 'noisy' image. Although a simple task, this achievement is remarkable because the continuous conductance changes of memristors are notoriously noisy and non-symmetric — that is, increases in conductances often have different amplitudes from analogous decreases, which causes problems for learning algorithms. The authors used two memristors for each synaptic connection, such that the strength of a synapse was given by the difference between two memristor conductances. This differential implementation of synaptic strengths has several benefits — for example, it reduces the impact of non-symmetric conductance changes because each synaptic update involves the change of two conductances in opposite directions.
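A toy model makes the benefit of the differential scheme explicit. The sketch below is not the authors' update rule; the step gains are assumed numbers that merely mimic the asymmetry between conductance increases and decreases. It shows that when every update moves the two devices of a pair in opposite directions, positive and negative weight changes end up with the same magnitude, whereas a weight stored in a single device steps by different amounts depending on the sign of the update.

```python
# Assumed, illustrative step gains: an increase changes conductance with
# gain 1.0, a decrease only with gain 0.6 (a toy stand-in for non-symmetric
# memristor behaviour; real amplitudes are device-dependent).
INC, DEC = 1.0, 0.6

def step_single(w, delta):
    """Weight stored in one device: the step size depends on the sign of delta."""
    return w + (INC if delta > 0 else DEC) * delta

def step_differential(g_plus, g_minus, delta):
    """Weight stored as w = g_plus - g_minus: one device is increased and the
    other decreased, so +delta and -delta requests move w by the same amount."""
    if delta > 0:
        return g_plus + INC * delta, g_minus - DEC * delta
    return g_plus + DEC * delta, g_minus - INC * delta

d = 1e-5
print(step_single(0.0, +d), step_single(0.0, -d))   # ~1.0e-05 vs ~-6.0e-06: unequal

gp, gm = step_differential(5e-5, 5e-5, +d)
print(gp - gm)                                       # ~ +1.6e-05
gp, gm = step_differential(5e-5, 5e-5, -d)
print(gp - gm)                                       # ~ -1.6e-05: same magnitude
```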
Prezioso and co-workers' result is a proof-of-concept for hybrid CMOS–memristor neuromorphic circuits. If this design can be scaled up to large network sizes, it will affect the future of computing. Computer scientists have struggled to design algorithms for jobs that humans perform easily, such as visual tasks (distinguishing objects in a scene, for example), speech recognition and coordinating muscles and limbs to perform a motor task. Large neural networks can learn such tasks from massive data sets [5]. Brain-inspired hardware would therefore complement the strengths of conventional computers. In the future, laptops, mobile phones and robots could include ultra-low-power neuromorphic chips that process visual, auditory and other types of sensory information.
Of course, more research is necessary to achieve these goals. With an area of 200 × 200 nanometres, the memristive devices used by Prezioso et al. are still relatively large compared with other state-of-the-art memristors, and the network described is quite simple. Much larger networks will need to be created, with higher numbers of memristors per unit area, for applications to be realized. Also, the researchers used a batch-learning set-up, in which the whole training data set had to be processed for each update of memristive conductances. This training set-up would therefore require many extra circuit components to provide large amounts of memory outside the memristive crossbar array. Future research must explore how efficient learning procedures for memristive crossbar arrays can be achieved without the need for external memory.
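Where intermediate results have to live is what separates the two set-ups. The sketch below uses a simple, assumed delta-rule training loop for a single sigmoid neuron (it is not the authors' procedure) to contrast batch learning, in which error terms for the whole data set must be accumulated in memory outside the array before the conductances are written once, with online learning, in which every example updates the array immediately and no external buffer is needed.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 9))            # toy data set (e.g. 3x3-pixel images)
y = (X.sum(axis=1) > 0).astype(float)   # toy binary labels
lr = 0.05                               # learning rate (assumed)

def predict(W, x):
    return 1.0 / (1.0 + np.exp(-x @ W))  # single artificial neuron

# Batch learning: the conductances are updated only after all examples have
# been seen, so every per-example error term must be accumulated off-array.
W_batch = np.zeros(9)
for _ in range(20):
    grad = np.zeros_like(W_batch)        # accumulator held outside the array
    for x, t in zip(X, y):
        grad += (predict(W_batch, x) - t) * x
    W_batch -= lr * grad / len(X)        # one write to the array per pass

# Online learning: the array is written after every example, in place,
# with no buffer outside the crossbar.
W_online = np.zeros(9)
for _ in range(20):
    for x, t in zip(X, y):
        W_online -= lr * (predict(W_online, x) - t) * x
```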
