Show simple item record

dc.contributor.advisor	Brubaker, Marcus
dc.creator	Tesfaldet, Matthew
dc.description.abstract	This thesis introduces a two-stream model for dynamic texture synthesis. The model is built on pre-trained convolutional networks (ConvNets) that target two independent tasks: (i) object recognition, and (ii) optical flow regression. Given an input dynamic texture, statistics of filter responses from the object recognition and optical flow ConvNets encapsulate the per-frame appearance and dynamics of the input texture, respectively. To synthesize a dynamic texture, a randomly initialized input sequence is optimized to match the feature statistics of an example texture in each stream. In addition, the synthesis approach is applied to combine the texture appearance of one texture with the dynamics of another, generating entirely novel dynamic textures. Overall, the proposed approach generates high-quality samples that match both the frame-wise appearance and the temporal evolution of the input texture. Finally, the proposed dynamic texture synthesis approach is evaluated quantitatively via a large-scale user study.
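The optimization the abstract describes — driving a randomly initialized input to match feature statistics of an example texture — can be sketched in a toy form. This is a minimal single-stream, single-frame illustration: the random linear `filters` below are a hypothetical stand-in for a layer of one of the pre-trained ConvNet streams (the thesis uses two streams, object recognition for appearance and optical flow for dynamics), and the Gram matrix of filter responses stands in for the matched feature statistics. All names and shapes are illustrative, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gram(responses):
    # responses: (channels, positions). The position-averaged channel
    # co-activation (Gram) matrix summarizes texture statistics.
    return responses @ responses.T / responses.shape[1]

def gram_loss_and_grad(x, filters, target):
    # Squared Frobenius distance between the candidate's Gram matrix and
    # the example texture's, plus its gradient with respect to x.
    n = x.shape[1]
    resp = filters @ x                    # filter responses, (channels, positions)
    diff = gram(resp) - target            # symmetric residual
    loss = float(np.sum(diff ** 2))
    grad = (4.0 / n) * filters.T @ diff @ resp
    return loss, grad

# Hypothetical "stream": fixed random linear filters in place of a
# pre-trained ConvNet layer.
filters = rng.standard_normal((8, 16))
example = rng.standard_normal((16, 64))   # stand-in for an example texture frame
target = gram(filters @ example)          # its texture statistics

x = rng.standard_normal((16, 64))         # randomly initialized input
losses = []
for _ in range(1000):
    loss, grad = gram_loss_and_grad(x, filters, target)
    losses.append(loss)
    x -= 1e-3 * grad                      # plain gradient descent
```

In the full method, one such Gram-matching loss per stream (appearance and dynamics) is summed and minimized over a whole sequence; the "texture appearance of one texture with the dynamics of another" result corresponds to taking the two targets from two different example textures.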
dc.rights	Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject	Artificial intelligence
dc.title	Two-Stream Convolutional Networks for Dynamic Texture Synthesis
dc.type	Electronic Thesis or Dissertation
dc.degree.name	Master of Science
dc.subject.keywords	Computer science
dc.subject.keywords	Computer vision
dc.subject.keywords	Artificial intelligence
dc.subject.keywords	Deep learning
dc.subject.keywords	Machine learning
dc.subject.keywords	Texture synthesis
dc.subject.keywords	Dynamic texture synthesis
dc.subject.keywords	Neural art
dc.subject.keywords	Style transfer


All items in the YorkSpace institutional repository are protected by copyright, with all rights reserved except where explicitly noted.