Improving the Motion Processing Hierarchy for Attending to Visual Motion
Abstract
Visual motion has been studied for decades. Attention to motion using Selective Tuning involves a top-down selection mechanism operating within a feed-forward motion hierarchy, and researchers have proposed various models for that hierarchy. In this thesis, we introduce ST-Motion-Net, a learnable motion hierarchy based on fully convolutional networks. We demonstrate the Selective Tuning model of visual attention on ST-Motion-Net to localize motion patterns and segment moving objects. We create two datasets, Blender-MP and Blender-Complex, to evaluate ST-Motion-Net on motion pattern detection, localization, and motion segmentation. ST-Motion-Net achieves strong detection and localization performance in each of its areas. For motion segmentation, we evaluate the 2-Frame-Area-V1 component of ST-Motion-Net, whose neurons respond to translational motion given the two most recent frames of a temporal sequence. 2-Frame-Area-V1 achieves 86.84% IoU on Blender-MP-Test, surpassing several state-of-the-art models. On Blender-Complex-Test, it reaches 52.61% IoU, which is also state-of-the-art performance.
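
The segmentation results above are reported as intersection-over-union (IoU). As a minimal sketch of how such a score is typically computed for binary segmentation masks (the function and variable names below are illustrative and not taken from the thesis's evaluation code):

import numpy as np

def binary_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Standard intersection-over-union between two binary segmentation masks.

    Illustrative helper only; the thesis's own evaluation pipeline is not shown here.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Both masks empty: treat agreement on "no motion" as a perfect score.
        return 1.0
    return float(intersection) / float(union)

# Example: a predicted moving-object mask compared with ground truth.
pred = np.zeros((4, 4), dtype=bool)
gt = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True   # predicted moving region (4 pixels)
gt[1:4, 1:3] = True     # ground-truth moving region (6 pixels)
print(binary_iou(pred, gt))  # 4 / 6, approximately 0.667

A dataset-level IoU such as the 86.84% on Blender-MP-Test would then be an aggregate of per-frame (or per-sequence) scores like the one computed here.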