Context-FOFE Based Deep Learning Models for Text Classification and Modeling
Text classification is a fundamental task in natural language processing. Many recently proposed deep learning models leverage context information in documents and have achieved great success. However, most of these models use complicated recurrent structures, which are hard to train, to handle variable-length text and to record context information. To address this, we propose a simple and efficient encoding scheme called context-FOFE that encodes the context of variable-length documents into fixed-size representations. Our encoding is unique and reversible for any text sequence. Based on the encoded representations of documents, we further use two feed-forward neural network models and a generative HOPE model for text classification and modeling. We tested the models on the 20 Newsgroups text classification dataset and the IMDB sentiment analysis dataset. Experimental results show that our models achieve performance competitive with the existing best models while using a much simpler context encoding mechanism and network structure.
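The underlying FOFE (Fixed-size Ordinally-Forgetting Encoding) mechanism can be sketched briefly. This is a minimal illustration, not the context-FOFE variant described above: it assumes the standard FOFE recursion z_t = α·z_{t-1} + e_t, where e_t is the one-hot vector of the t-th token and the forgetting factor α ∈ (0, 0.5) guarantees that the fixed-size code is unique for any token sequence. The function name and parameters are illustrative, not taken from the work itself.

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha=0.25):
    """Encode a variable-length token sequence into a fixed-size vector.

    Applies the FOFE recursion z_t = alpha * z_{t-1} + e_t, where e_t is
    the one-hot vector of token t. For alpha in (0, 0.5) the mapping is
    unique, so the original sequence can in principle be recovered.
    (Illustrative sketch; not the paper's context-FOFE implementation.)
    """
    z = np.zeros(vocab_size)
    for t in token_ids:
        z = alpha * z      # exponentially discount earlier tokens
        z[t] += 1.0        # add the one-hot vector of the current token
    return z

# The same tokens in a different order yield a different fixed-size code,
# so word order is preserved, unlike a plain bag-of-words vector.
forward = fofe_encode([0, 1, 2], vocab_size=3)
reverse = fofe_encode([2, 1, 0], vocab_size=3)
```

Because the output dimension depends only on the vocabulary size, such fixed-size codes can be fed directly to feed-forward networks, avoiding the recurrent structures the abstract mentions.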