Enriching Word Representation Learning for Affect Detection and Affect-Aware Recommendations

Date

2021-03-08

Authors

Dehaki, Nastaran Babanejad

Abstract

Affect detection from text aims to identify affective states such as mood, sentiment, and emotion in textual data. The main affective tasks, including sentiment analysis, emotion classification, and sarcasm detection, have grown popular in recent years owing to a broad range of applications across domains. Traditionally, recommender systems deal with only two types of entities, users and items, and do not place them in a context when providing recommendations. Recently, contextual recommendation has improved accuracy by taking additional contextual information into account. However, little attention has been paid to affective context and its relation to recommendations. In this dissertation, we first investigate the impact of using affective information on the quality of recommendations, and then seek to improve affect detection in text by enhancing word representation learning. We enrich word representations in two ways: first, by effective pre-processing when training word embeddings, and second, by incorporating both affective and contextual features deeply into text representations. We demonstrate the benefits of enriched word representations in both affect detection and affect-aware recommendation tasks. This dissertation comprises five contributions. First, we investigate whether, and to what extent, emotion features can improve recommendations. Toward that end, we derive a number of emotion features that can be attributed to both items and users in the news and music domains. We then devise state-of-the-art emotion-aware recommendation models by systematically leveraging these features. Second, we study the problem of pre-processing in word representation learning for affective tasks. Most early models for affective tasks employed pre-trained word embeddings; while pre-processing in affective systems is well studied, text pre-processing for training word embeddings is not.
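The distinction matters because pre-processing applied only at the downstream task cannot change the embedding space the model trains on. A minimal sketch of the idea, where the specific techniques (lowercasing, punctuation stripping, stopword removal), their order, and the toy corpus are illustrative assumptions rather than the dissertation's actual pipeline:

```python
import re

# Hypothetical pre-processing steps of the kind studied here; the
# techniques chosen and their composition are assumptions for illustration.
STOPWORDS = {"the", "a", "an", "is", "are", "to"}

def lowercase(tokens):
    return [t.lower() for t in tokens]

def strip_punctuation(tokens):
    # Drop non-word characters; discard tokens that become empty.
    cleaned = [re.sub(r"[^\w']", "", t) for t in tokens]
    return [t for t in cleaned if t]

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def preprocess(sentence, steps):
    tokens = sentence.split()
    for step in steps:
        tokens = step(tokens)
    return tokens

# The cleaned corpus would then feed the embedding-training phase itself,
# not merely the downstream classifier.
corpus = ["The movie was NOT good!", "An absolutely thrilling ride."]
steps = [lowercase, strip_punctuation, remove_stopwords]
cleaned = [preprocess(s, steps) for s in corpus]
print(cleaned)  # → [['movie', 'was', 'not', 'good'], ['absolutely', 'thrilling', 'ride']]
```

Which steps help, and at which phase, is exactly the empirical question the analysis addresses.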
To address this limitation, we conduct a comprehensive analysis of the role of text pre-processing techniques in word representation learning for affective analysis, applying each pre-processing technique first at the embedding-training phase, which is commonly ignored in pre-trained word vector models, and then at the downstream-task phase. Third, we investigate the usefulness of customized pre-processing for word representation learning in affective tasks. We argue that applying numerous text pre-processing techniques at once, as a single general combination for all affective tasks, degrades affect detection performance. We therefore conduct extensive experiments showing that an appropriate combination of text pre-processing methods for each affective task can significantly enhance classifier performance. The fourth contribution studies the role of affective and contextual embeddings with deep neural network models for affect detection. Early word embedding methods, such as Word2vec, are non-contextual: a word has the same embedding vector regardless of its context and sense. Contextual embedding techniques such as BERT solve this problem but do not incorporate affect information into their word representations. We propose two novel deep neural network models that extend BERT to incorporate both affective and contextual features in text representations. Lastly, we demonstrate the usefulness of our proposed affective and contextual embedding models by applying them to affect-aware recommendations.
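One simple way to combine the two signal types is to append lexicon-based affect scores to each token's contextual vector. The sketch below illustrates that general idea only; the toy lexicon, the (valence, arousal) features, and plain concatenation are assumptions for illustration, not the architecture of the proposed BERT extensions:

```python
# Toy affect lexicon mapping words to (valence, arousal) scores; a real
# system would draw on an established affect lexicon (assumption).
AFFECT_LEXICON = {
    "joyful": (0.9, 0.7),
    "terrible": (0.1, 0.8),
}
NEUTRAL = (0.5, 0.0)  # fallback for out-of-lexicon words

def affect_features(token):
    return AFFECT_LEXICON.get(token, NEUTRAL)

def fuse(contextual_vec, token):
    # Concatenate a contextual embedding (e.g., a BERT hidden state)
    # with lexicon-based affect scores, yielding an affect-enriched
    # representation for a downstream classifier or recommender.
    return list(contextual_vec) + list(affect_features(token))

# Stand-in for a contextual encoder's output vector for one token.
ctx = [0.12, -0.34, 0.56]
print(fuse(ctx, "joyful"))  # → [0.12, -0.34, 0.56, 0.9, 0.7]
```

The same enriched token representations can then serve both affect detection and affect-aware recommendation, which is the thread connecting the final two contributions.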

Keywords

artificial intelligence
