Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/autre/theoretical-advances-in-neural-computation-and-learning/roychowdhury/descriptif_1602309
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=1602309

Theoretical Advances in Neural Computation and Learning, Softcover reprint of the original 1st ed. 1994

Language: English

Editors: Vwani Roychowdhury, Kai-Yeung Siu, Alon Orlitsky

For any research field to have a lasting impact, there must be a firm theoretical foundation. Neural networks research is no exception. Some of the foundational concepts, established several decades ago, led to the early promise of developing machines exhibiting intelligence. The motivation for studying such machines comes from the fact that the brain is far more efficient in visual processing and speech recognition than existing computers. Undoubtedly, neurobiological systems employ very different computational principles. The study of artificial neural networks aims at understanding these computational principles and applying them in the solutions of engineering problems. Due to the recent advances in both device technology and computational science, we are currently witnessing an explosive growth in the studies of neural networks and their applications.

It may take many years before we have a complete understanding of the mechanisms of neural systems. Before this ultimate goal can be achieved, answers are needed to important fundamental questions such as (a) what can neural networks do that traditional computing techniques cannot, (b) how does the complexity of the network for an application relate to the complexity of that problem, and (c) how much training data are required for the resulting network to learn properly? Everyone working in the field has attempted to answer these questions, but general solutions remain elusive. However, encouraging progress in studying specific neural models has been made by researchers from various disciplines.
Contents:
Foreword; B. Widrow.
Foreword; D.E. Rumelhart.
Preface.
Part I: Computational Complexity of Neural Networks.
1. Neural Models and Spectral Methods; V. Roychowdhury, Kai-Yeung Siu, A. Orlitsky.
2. Depth-Efficient Threshold Circuits for Arithmetic Functions; T. Hofmeister.
3. Communication Complexity and Lower Bounds for Threshold Circuits; M. Goldmann.
4. A Comparison of the Computational Power of Sigmoid and Boolean Threshold Circuits; W. Maass, G. Schnitger, E.D. Sontag.
5. Computing on Analog Neural Nets with Arbitrary Real Weights; W. Maass.
6. Connectivity versus Capacity in the Hebb Rule; S.S. Venkatesh.
Part II: Learning and Neural Networks.
7. Computational Learning Theory and Neural Networks: a Survey of Selected Topics; G. Turán.
8. Perspectives of Current Research about the Complexity of Learning on Neural Nets; W. Maass.
9. Learning an Intersection of K Halfspaces over a Uniform Distribution; A.L. Blum, R. Kannan.
10. On the Intractability of Loading Neural Networks; B. DasGupta, H.T. Siegelmann, E. Sontag.
11. Learning Boolean Functions via the Fourier Transform; Y. Mansour.
12. LMS and Backpropagation are Minimax Filters; B. Hassibi, A.H. Sayed, T. Kailath.
13. Supervised Learning: Can it Escape its Local Minimum? P.J. Werbos.
Index.
This volume emphasizes the computational issues in artificial neural networks and compiles a set of pioneering research works, which together establish a general framework for studying the complexity of neural networks and their learning capabilities.

Publication date:

468 pages

15.5 × 23.5 cm

Available from the publisher (lead time: 15 days).

Indicative price: €158.24

