Modeling the Control of Attention by Visual and Semantic Factors in Real-World Scenes

Alex D. Hwang
Department of Computer Science, UMass Boston

Recently, there has been great interest among vision researchers in developing computational models that predict the distribution of saccadic eye movements in various visual tasks. While low-level visual features have been shown to guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations is still unknown. Deriving a comprehensive model of human eye movements during real-world scene perception therefore requires more flexible ways of defining visual saliency and corresponding analytic methods for handling eye-movement data. I propose a top-down model of visual attention based on the similarity between the target and regions of the search scene, weighted by the informativeness of each visual feature. Furthermore, using an interdisciplinary approach that combines empirical eye-movement data with Latent Semantic Analysis (LSA) of object descriptions, I analyze the effect of semantic guidance on eye movements during real-world scene perception, laying the foundation for an improved, comprehensive computational model of attentional control.
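As a rough illustration of the proposed top-down component, the sketch below computes a target-similarity map for each visual feature and combines the maps weighted by per-feature informativeness. It is a minimal sketch under assumed definitions: the feature set, the similarity measure, and all function and variable names here are hypothetical, and the actual model may define similarity and informativeness quite differently.

```python
import numpy as np

def topdown_saliency(scene_feats, target_feats, informativeness):
    """Hedged sketch of a similarity-based top-down saliency map.

    scene_feats:     dict mapping feature name -> (H, W) array of
                     feature values at each scene location
    target_feats:    dict mapping feature name -> scalar feature value
                     measured on the search target
    informativeness: dict mapping feature name -> weight reflecting how
                     diagnostic that feature is for finding the target
    """
    h, w = next(iter(scene_feats.values())).shape
    saliency = np.zeros((h, w))
    for name, fmap in scene_feats.items():
        # Similarity is 1 where the scene matches the target feature,
        # falling off with the absolute feature difference (assumed form).
        sim = 1.0 - np.abs(fmap - target_feats[name])
        # Weight each feature's similarity map by its informativeness.
        saliency += informativeness[name] * sim
    # Normalize so the map can be read as a fixation probability.
    return saliency / saliency.sum()

# Toy usage with two hypothetical features on an 8x8 scene.
rng = np.random.default_rng(0)
scene = {"color": rng.random((8, 8)), "orientation": rng.random((8, 8))}
target = {"color": 0.7, "orientation": 0.2}
weights = {"color": 0.8, "orientation": 0.2}
print(topdown_saliency(scene, target, weights).round(3))
```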
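The LSA-based semantic analysis might be sketched in a similar spirit: build a term-document matrix from object descriptions, reduce it with a truncated SVD, and score the semantic relatedness of two objects as the cosine of their low-dimensional vectors. The toy corpus, dimensionality `k`, and helper names below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def lsa_vectors(term_doc, k=2):
    """Project documents (object descriptions) into a k-dimensional
    LSA space via truncated SVD of the term-document matrix."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    # Each column of vt, scaled by its singular value, is a document
    # vector in the reduced semantic space; return them as rows.
    return (np.diag(s[:k]) @ vt[:k]).T

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy term-document matrix: rows are terms, columns are object
# descriptions (say, "mug", "kettle", "pillow") from a tiny corpus.
term_doc = np.array([
    [2, 1, 0],   # "kitchen"
    [1, 2, 0],   # "drink"
    [0, 0, 3],   # "bed"
])
vecs = lsa_vectors(term_doc, k=2)
# Semantically related objects get higher cosine similarity; such
# pairwise scores could then be compared against observed gaze
# transitions between objects to quantify semantic guidance.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```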