Evolving Network Representation Learning Based on Random Walks

dc.contributor.advisor: Papangelis, Emmanouil
dc.contributor.author: Heidari, Farzaneh
dc.date.accessioned: 2019-11-22T18:56:56Z
dc.date.available: 2019-11-22T18:56:56Z
dc.date.copyright: 2019-07
dc.date.issued: 2019-11-22
dc.date.updated: 2019-11-22T18:56:56Z
dc.degree.discipline: Computer Science
dc.degree.level: Master's
dc.degree.name: MSc - Master of Science
dc.description.abstract: Large-scale network mining and analysis is key to revealing underlying network dynamics that were not easily observable before. Recently, there has been fast-growing interest in learning low-dimensional continuous representations of networks that can be utilized to perform highly accurate and scalable graph mining tasks. One family of these methods performs random walks on a network to learn its structural features, then feeds the sequences of random walks into a deep learning architecture to learn a network embedding. While these methods perform well, they can only operate on static networks. In the real world, however, networks evolve as nodes and edges are continuously added or deleted. As a result, any previously obtained network representation becomes outdated, with an adverse effect on the accuracy of the network mining task at stake. The naive approach to this problem is to re-apply the embedding method of choice every time the network is updated, but this approach has serious drawbacks. First, it is inefficient, because the embedding method itself is computationally expensive. Second, the network mining results obtained from successive network representations are not directly comparable to each other, due to the randomness introduced by the new set of random walks generated each time. In this research, we propose EvoNRL, a random-walk-based method for learning representations of evolving networks. The key idea of our approach is to maintain a set of random walks that remain consistently valid with respect to the updated network topology. In this way, we are able to continuously learn a new mapping from the evolving network to a low-dimensional representation by updating only the small number of random walks required to re-obtain an embedding. Moreover, we present an analytical method for determining the right time to obtain a new representation of the evolving network that balances accuracy and time performance. A thorough experimental evaluation demonstrates the effectiveness of our method against sensible baselines under a varying range of conditions.
dc.identifier.uri: http://hdl.handle.net/10315/36788
dc.language: en
dc.rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject: Computer science
dc.subject.keywords: Machine learning
dc.subject.keywords: Network representation
dc.subject.keywords: Dynamic networks
dc.subject.keywords: Random walks
dc.title: Evolving Network Representation Learning Based on Random Walks
dc.type: Electronic Thesis or Dissertation
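
The key idea outlined in the abstract above, keeping a maintained corpus of random walks consistent with the evolving topology so that only a small number of walks need refreshing per update, can be illustrated with a minimal sketch. This is not the thesis's EvoNRL implementation: for simplicity, the sketch below re-simulates each affected walk from its original start node when an edge is added, and all names and parameters (WALKS_PER_NODE, WALK_LENGTH, the karate club test graph) are illustrative assumptions.

# Minimal sketch (not the thesis implementation): maintain a corpus of random
# walks plus a node-to-walk index, and on an edge addition re-simulate only
# the walks that visit the affected nodes, leaving the rest of the corpus
# (and hence its randomness) unchanged.
import random
from collections import defaultdict

import networkx as nx

WALKS_PER_NODE = 10   # illustrative value
WALK_LENGTH = 40      # illustrative value


def simulate_walk(graph, start, length, rng):
    """Uniform random walk of fixed length starting at `start`."""
    walk = [start]
    while len(walk) < length:
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk


def initial_walks(graph, rng):
    """Build the walk corpus and an index mapping each node to the walks it appears in."""
    walks, index = [], defaultdict(set)
    for node in graph.nodes():
        for _ in range(WALKS_PER_NODE):
            walk = simulate_walk(graph, node, WALK_LENGTH, rng)
            walk_id = len(walks)
            walks.append(walk)
            for v in walk:
                index[v].add(walk_id)
    return walks, index


def add_edge_and_update(graph, walks, index, u, v, rng):
    """Add edge (u, v) and refresh only the walks that visit u or v."""
    graph.add_edge(u, v)
    affected = index[u] | index[v]
    for walk_id in affected:
        old_walk = walks[walk_id]
        new_walk = simulate_walk(graph, old_walk[0], WALK_LENGTH, rng)
        # Keep the node -> walk index consistent with the replaced walk.
        for node in set(old_walk):
            index[node].discard(walk_id)
        for node in set(new_walk):
            index[node].add(walk_id)
        walks[walk_id] = new_walk
    return len(affected)


if __name__ == "__main__":
    rng = random.Random(42)
    g = nx.karate_club_graph()
    walks, index = initial_walks(g, rng)
    changed = add_edge_and_update(g, walks, index, 0, 9, rng)
    print(f"{changed} of {len(walks)} walks updated after adding one edge")

The updated walk corpus could then be fed to a skip-gram model (for example, gensim's Word2Vec) to refresh the embedding, which is the general random-walk-plus-deep-learning pipeline the abstract describes; the thesis's actual update and retraining strategy should be consulted in the full text.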

Files

Original bundle
Name: Heidari_Farzaneh_2019_Masters.pdf
Size: 2.14 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.83 KB
Format: Plain Text

Name: YorkU_ETDlicense.txt
Size: 3.38 KB
Format: Plain Text