Abstract
Extensions of a neural network model for human motion perception are described [Sereno, J. Opt. Soc. Am. A 3(13), p 72 (1986)]. The task is to calculate the true pattern velocity of a group of elements in a moving image from ambiguous local motion measurements; that is, to solve the aperture problem for motion. A parallel network is used with two layers of units patterned after the primate visual cortical areas V1 and MT. Units in the first layer extract components of motion perpendicular to the orientation of a moving edge using V1-like tuning curves. A second layer contains units tuned to different pattern velocities. The network learns to estimate pattern velocity from image data by example. The fundamental computation performed by the model is a rapid disambiguation given simultaneous input information from adjacent portions of the velocity field. To evaluate the performance of the model, a weighted average of the activity produced by units in the output layer determines values of speed and direction for individual translating patterns. When the model is tested with a set of patterns translating with different velocities, it produces results comparable to human performance.
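The two computations named in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: the function names and the unit-vector convention are assumptions. The first function shows why local measurements are ambiguous (a moving edge seen through an aperture only reveals the velocity component perpendicular to its orientation); the second shows the weighted-average readout over velocity-tuned output units used to evaluate the model.

```python
import numpy as np

def normal_component(true_velocity, edge_orientation_rad):
    """Component of the true velocity perpendicular to a moving edge.

    This is the only motion signal a local, V1-like detector can measure
    through a small aperture (the aperture problem). The edge's unit
    normal is taken at 90 degrees to its orientation.
    """
    n = np.array([np.cos(edge_orientation_rad + np.pi / 2),
                  np.sin(edge_orientation_rad + np.pi / 2)])
    return np.dot(true_velocity, n) * n

def readout(preferred_velocities, activities):
    """Activity-weighted average over velocity-tuned output units.

    Each output unit has a preferred pattern velocity; the estimated
    speed and direction come from the average of those preferences,
    weighted by each unit's activity.
    """
    w = np.asarray(activities, dtype=float)
    v = np.asarray(preferred_velocities, dtype=float)
    return (w[:, None] * v).sum(axis=0) / w.sum()
```

For example, a pattern translating rightward behind a vertical edge (orientation π/2) yields the full rightward component, while the same motion behind a horizontal edge (orientation 0) yields no measurable component at all, which is the ambiguity the second, MT-like layer must resolve.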
© 1989 Optical Society of America