Two-Stream Convolutional Networks for Dynamic Texture Synthesis

Type: Electronic Thesis or Dissertation
Brubaker, Marcus
Dates: 2018-08-27; 2018-11-21
Handle: http://hdl.handle.net/10315/35588
Language: English
Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
Subjects: Artificial intelligence; Computer science; Computer vision; Deep learning; Machine learning; Texture synthesis; Dynamic texture synthesis; Texture; Neural art; Style transfer

Abstract:
This thesis introduces a two-stream model for dynamic texture synthesis. The model is based on pre-trained convolutional networks (ConvNets) that target two independent tasks: (i) object recognition, and (ii) optical flow regression. Given an input dynamic texture, statistics of filter responses from the object recognition and optical flow ConvNets encapsulate the per-frame appearance and dynamics of the input texture, respectively. To synthesize a dynamic texture, a randomly initialized input sequence is optimized to match the feature statistics from each stream of an example texture. In addition, the synthesis approach is applied to combine the appearance of one texture with the dynamics of another, generating entirely novel dynamic textures. Overall, the proposed approach generates high-quality samples that match both the framewise appearance and the temporal evolution of the input texture. Finally, a quantitative evaluation of the proposed dynamic texture synthesis approach is performed via a large-scale user study.
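To make the optimization described in the abstract concrete, below is a minimal PyTorch sketch of the appearance stream only, written under stated assumptions: VGG-19 stands in for the pre-trained object-recognition ConvNet, the captured layer indices, image size, frame count, and optimizer settings are illustrative choices rather than the thesis's configuration, and the exemplar frame is placeholder data. The dynamics stream, which the thesis implements as an analogous statistics-matching loss over optical-flow ConvNet features, is omitted here.

```python
import torch
from torchvision.models import vgg19, VGG19_Weights

def gram_matrix(feat):
    # feat: (batch, channels, height, width) filter responses.
    # Returns the channel-correlation (Gram) statistic, normalized
    # by the feature map size.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# VGG-19 stands in for the object-recognition ConvNet (assumption);
# the layer indices below are illustrative, not the thesis's choice.
cnn = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

capture = {1, 6, 11, 20}  # a few ReLU outputs, chosen for illustration

def appearance_stats(x):
    # Collect Gram matrices at the chosen layers.
    stats = []
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i in capture:
            stats.append(gram_matrix(x))
        if i >= max(capture):
            break
    return stats

# Placeholder exemplar; a real use would load a frame of the input texture.
exemplar = torch.rand(1, 3, 128, 128)
with torch.no_grad():
    targets = appearance_stats(exemplar)

# Randomly initialized short sequence, optimized so each frame matches
# the exemplar's per-frame appearance statistics.
frames = torch.rand(4, 3, 128, 128, requires_grad=True)
opt = torch.optim.LBFGS([frames])

def closure():
    opt.zero_grad()
    loss = sum(((g - t) ** 2).sum()
               for g, t in zip(appearance_stats(frames), targets))
    loss.backward()
    return loss

for _ in range(20):
    opt.step(closure)
```

Matching only these statistics constrains each frame's appearance; per the abstract, the second stream adds an analogous matching loss over optical-flow ConvNet features to constrain the temporal evolution, and swapping the appearance targets for those of a different texture yields the appearance-dynamics combination described above.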