Real-Time Active Robotic Vision using Biologically Inspired Neural Models

Tyler Garaas
Visual Attention Lab
University of Massachusetts Boston

The discipline known as computer vision strives to extract meaning from the patterns of light falling across a sensor. Many difficulties hinder progress toward this goal, ranging from ambiguity about what form best encapsulates the structure of the information to the intractable computational demands of processing sensor images at sufficiently high resolution. In this talk, I propose a system of neurons, inspired by a subset of the visual areas of the primate brain, that directs the “gaze” of a pan/tilt/zoom camera in real time to areas of interest demarcated by one or more low-level visual features known to form the basis of visual abilities in primates. Results of the visual search performed by this neural vision system are compared directly with those of human subjects performing the same task. Finally, to overcome the high computational cost of modeling millions of neurons in real time, I exploit the highly parallel nature of the neural model, as it exists in biological vision systems, to develop a software implementation in which thousands of neurons are simulated simultaneously on a recently available parallel computing architecture: the graphics processing unit (GPU).
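To make the gaze-direction step concrete: in saliency-based models of this kind, a handful of low-level feature maps (e.g., intensity contrast, color opponency, orientation) are combined into a single saliency map, and the camera is driven toward its peak. The CUDA sketch below is a minimal illustration under those assumptions, not the model presented in the talk; the three feature maps, their weights, and the host-side peak search (standing in for a neural winner-take-all stage) are all hypothetical.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Combine nMaps feature maps (stored contiguously, map-major) into one
// saliency map with a weighted sum; one thread per pixel.
__global__ void combineFeatures(const float* maps, const float* weights,
                                int nMaps, int nPixels, float* saliency)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPixels) return;
    float s = 0.0f;
    for (int m = 0; m < nMaps; ++m)
        s += weights[m] * maps[m * nPixels + i];
    saliency[i] = s;
}

int main()
{
    const int W = 320, H = 240, nPixels = W * H, nMaps = 3;

    // Hypothetical precomputed feature maps (intensity, color, orientation);
    // here a single artificial "hot spot" is planted in the color map.
    std::vector<float> hMaps(nMaps * nPixels, 0.0f);
    hMaps[1 * nPixels + 120 * W + 200] = 1.0f;
    float hWeights[nMaps] = {0.4f, 0.4f, 0.2f};

    float *dMaps, *dWeights, *dSal;
    cudaMalloc(&dMaps, nMaps * nPixels * sizeof(float));
    cudaMalloc(&dWeights, nMaps * sizeof(float));
    cudaMalloc(&dSal, nPixels * sizeof(float));
    cudaMemcpy(dMaps, hMaps.data(), nMaps * nPixels * sizeof(float),
               cudaMemcpyHostToDevice);
    cudaMemcpy(dWeights, hWeights, nMaps * sizeof(float),
               cudaMemcpyHostToDevice);

    int threads = 256, blocks = (nPixels + threads - 1) / threads;
    combineFeatures<<<blocks, threads>>>(dMaps, dWeights, nMaps, nPixels, dSal);

    std::vector<float> hSal(nPixels);
    cudaMemcpy(hSal.data(), dSal, nPixels * sizeof(float),
               cudaMemcpyDeviceToHost);

    // Host-side peak search stands in for a neural winner-take-all stage;
    // the winning pixel would then be converted to a pan/tilt command.
    int best = 0;
    for (int i = 1; i < nPixels; ++i)
        if (hSal[i] > hSal[best]) best = i;
    printf("gaze target: x=%d, y=%d\n", best % W, best / W);

    cudaFree(dMaps); cudaFree(dWeights); cudaFree(dSal);
    return 0;
}

In a full neural system the weighted sum would be replaced by layers of model neurons, but the data-parallel structure (one thread per pixel) stays the same.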
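The closing claim, that thousands of neurons can be simulated simultaneously on a GPU, rests on the fact that each neuron's state update is independent within a time step and so maps naturally onto one thread per neuron. The sketch below illustrates that mapping with a generic leaky integrate-and-fire update; the neuron model, parameter values, and input current are assumptions made for the example, not the talk's actual equations.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One thread advances one neuron by one time step: leaky integration of an
// input current, with a threshold spike and reset. Parameters are illustrative.
__global__ void stepNeurons(float* v, const float* input, unsigned char* spiked,
                            int nNeurons, float dt, float tau,
                            float vThresh, float vReset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nNeurons) return;
    float vi = v[i] + dt * (-v[i] / tau + input[i]);  // dv/dt = -v/tau + I
    spiked[i] = (vi >= vThresh);
    v[i] = spiked[i] ? vReset : vi;
}

int main()
{
    const int N = 100000;  // tens of thousands of neurons, updated in parallel
    std::vector<float> hV(N, 0.0f), hIn(N, 60.0f);
    float *dV, *dIn; unsigned char *dSp;
    cudaMalloc(&dV, N * sizeof(float));
    cudaMalloc(&dIn, N * sizeof(float));
    cudaMalloc(&dSp, N * sizeof(unsigned char));
    cudaMemcpy(dV, hV.data(), N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dIn, hIn.data(), N * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (N + threads - 1) / threads;
    for (int step = 0; step < 100; ++step)  // 100 time steps of dt = 1 ms
        stepNeurons<<<blocks, threads>>>(dV, dIn, dSp, N,
                                         0.001f, 0.020f, 1.0f, 0.0f);

    cudaMemcpy(hV.data(), dV, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("v[0] after 100 steps: %f\n", hV[0]);

    cudaFree(dV); cudaFree(dIn); cudaFree(dSp);
    return 0;
}

Because each neuron reads only its own state, the kernel scales linearly with neuron count; lateral interactions such as winner-take-all competition would require an additional pass or a shared-memory reduction.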