Hardware Annealing in Analog VLSI Neurocomputing, 1991
The Springer International Series in Engineering and Computer Science, Vol. 127

Authors:

Language: English

Approximate price: 105.49 €

In Print (Delivery period: 15 days).

Publication date: 1991
234 p. · 15.5x23.5 cm · Paperback
Rapid advances in neural sciences and VLSI design technologies have provided an excellent means to boost the computational capability and efficiency of data and signal processing tasks by several orders of magnitude. With massively parallel processing capabilities, artificial neural networks can be used to solve many engineering and scientific problems. Due to its data communication structure, which is optimized for artificial intelligence applications, the neurocomputer is considered the most promising sixth-generation computing machine. Typical applications of artificial neural networks include associative memory, pattern classification, early vision processing, speech recognition, image data compression, and intelligent robot control.

VLSI neural circuits play an important role in exploring and exploiting the rich properties of artificial neural networks by using programmable synapses and gain-adjustable neurons. The basic building blocks of analog VLSI neural networks are operational amplifiers serving as electronic neurons and synthesized resistors serving as electronic synapses. The synapse weight information can be stored in dynamically refreshed capacitors for medium-term storage or in the floating gate of an EEPROM cell for long-term storage.

The feedback path in the amplifier can continuously change the output neuron operation from a unity-gain configuration to a high-gain configuration. This adjustability of the voltage gain in the output neurons allows hardware annealing to be implemented in analog VLSI neural chips, so that optimal solutions can be found very efficiently. Both supervised and unsupervised learning can be implemented using the programmable neural chips.
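The gain-ramp mechanism described above can be illustrated with a small software sketch. The code below is a hypothetical, simplified analogue of hardware annealing, not the book's circuits or any code from it: it integrates the dynamics of a tiny Hopfield-style network whose neurons are gain-adjustable tanh amplifiers, and slowly raises the gain from a low, nearly linear setting toward a high-gain, nearly binary setting. All weights, biases, and schedule parameters are illustrative assumptions.

    # Minimal software sketch of a gain-ramp ("hardware annealing") run on a
    # small Hopfield-style network. Weights, biases, gain schedule, and step
    # size are illustrative assumptions, not values from the book.
    import numpy as np

    def energy(W, b, v):
        # Hopfield energy E = -1/2 v^T W v - b^T v
        return -0.5 * v @ W @ v - b @ v

    def anneal(W, b, g_start=0.5, g_end=50.0, steps=2000, dt=0.05, seed=0):
        # Integrate du/dt = -u + W v + b with v = tanh(g * u),
        # while ramping the neuron gain g from low to high.
        rng = np.random.default_rng(seed)
        u = 0.01 * rng.standard_normal(W.shape[0])   # small random initial state
        for g in np.geomspace(g_start, g_end, steps):
            v = np.tanh(g * u)                       # gain-adjustable neuron
            u += dt * (-u + W @ v + b)               # network dynamics
        return np.sign(np.tanh(g_end * u))           # near-binary final outputs

    if __name__ == "__main__":
        # Toy symmetric weight matrix with zero diagonal (illustrative only).
        W = np.array([[ 0., -1.,  2.],
                      [-1.,  0., -1.],
                      [ 2., -1.,  0.]])
        b = np.array([0.5, -0.2, 0.1])
        v = anneal(W, b)
        print("final state:", v, " energy:", energy(W, b, v))

Ramping the gain geometrically plays the role of a cooling schedule in this sketch: at low gain the neurons respond almost linearly and the effective energy surface is smooth, and gradually sharpening them lets the state settle toward a low-energy, near-digital solution.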
Contents:
1. Introduction
   1.1 Overview of Neural Architectures
   1.2 VLSI Neural Network Design Methodology
2. VLSI Hopfield Networks
   2.1 Circuit Dynamics of Hopfield Networks
   2.2 Existence of Local Minima
   2.3 Elimination of Local Minima
   2.4 Neural-Based A/D Converter Without Local Minima
   2.5 Traveling Salesman Problem
3. Hardware Annealing Theory
   3.1 Simulated Annealing in Software Computation
   3.2 Hardware Annealing
   3.3 Application to the Neural-Based A/D Converter
4. Programmable Synapses and Gain-Adjustable Neurons
   4.1 Compact and Programmable Neural Chips
   4.2 Medium-Term and Long-Term Storage of Synapse Weight
5. System Integration for VLSI Neurocomputing
   5.1 System Module Using Programmable Neural Chip
   5.2 Application Examples
6. Alternative VLSI Neural Chips
   6.1 Neural Sensory Chips
   6.2 Various Analog Neural Chips
   6.3 Various Digital Neural Chips
7. Conclusions and Future Work
Appendixes