Abstract
Neural network models for artificial intelligence offer an approach fundamentally different from conventional symbolic approaches, but the merits of the two paradigms cannot be fairly compared until neural network models with large numbers of "neurons" are implemented. Despite the attractiveness of neural networks for computing applications that involve adaptation and learning, most published demonstrations of neural network technology have involved relatively small numbers of "neurons". One reason for this is the poor match between conventional electronic serial or coarse-grained multiple-processor computers and the massive parallelism and communication requirements of neural network models. The self-pumped optical neural network (SPONN) described here is a fine-grained optical architecture that features massive parallelism and a much greater degree of interconnectivity than bus-oriented or hypercube electronic architectures. SPONN is potentially capable of implementing neural networks consisting of 10^5-10^6 neurons with 10^9-10^10 interconnections. The mapping of neural network models onto the architecture occurs naturally, without the need for multiplexing neurons or dealing with contention, routing, and communication-bottleneck problems. This simplifies the programming involved compared to electronic implementations.
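As a back-of-envelope check (not part of the paper), the quoted capacities imply an average fan-in on the order of 10^4 connections per neuron, since fan-in is simply the interconnection count divided by the neuron count:

```python
# Hedged illustrative arithmetic only: derive the average fan-in
# implied by the abstract's capacity figures.
def average_fanin(neurons: int, interconnections: int) -> float:
    """Average number of input connections per neuron."""
    return interconnections / neurons

# 10^5 neurons with 10^9 interconnections, and
# 10^6 neurons with 10^10 interconnections, both give 10^4.
print(average_fanin(10**5, 10**9))   # prints 10000.0
print(average_fanin(10**6, 10**10))  # prints 10000.0
```

Note that this is far short of full interconnectivity (which would require 10^12 weights at 10^6 neurons), so the quoted figures correspond to dense but not all-to-all connectivity.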
© 1989 Optical Society of America