Dual Fixed-Size Ordinally Forgetting Encoding (FOFE) For Natural Language Processing
dc.contributor.advisor | Jiang, Hui | |
dc.contributor.author | Watcharawittayakul, Sedtawut | |
dc.date.accessioned | 2020-05-11T12:53:27Z | |
dc.date.available | 2020-05-11T12:53:27Z | |
dc.date.copyright | 2019-11 | |
dc.date.issued | 2020-05-11 | |
dc.date.updated | 2020-05-11T12:53:27Z | |
dc.degree.discipline | Computer Science | |
dc.degree.level | Master's | |
dc.degree.name | MSc - Master of Science | |
dc.description.abstract | In this thesis, we propose a new approach to employing fixed-size ordinally-forgetting encoding (FOFE) on Natural Language Processing (NLP) tasks, called dual-FOFE. The main idea behind dual-FOFE is that it allows the encoding to be done with two different forgetting factors; this resolves the original FOFE's dilemma of choosing between the benefits offered by either small or large values of its single forgetting factor. For this research, we conducted experiments on two prominent NLP tasks, namely language modelling and machine reading comprehension. Our experimental results show that dual-FOFE provides a definite improvement over the original FOFE: approximately 11% in perplexity (PPL) on the language modelling task and 8% in Exact Match (EM) score on the machine reading comprehension task. | |
dc.identifier.uri | https://hdl.handle.net/10315/37464 | |
dc.language | en | |
dc.rights | Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests. | |
dc.subject | Computer science | |
dc.subject.keywords | Natural language processing | |
dc.subject.keywords | Deep learning | |
dc.title | Dual Fixed-Size Ordinally Forgetting Encoding (FOFE) For Natural Language Processing | |
dc.type | Electronic Thesis or Dissertation |
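The dual-FOFE idea described in the abstract can be sketched as follows. FOFE encodes a variable-length word sequence into a fixed-size vector by the recursion z_t = α·z_{t−1} + e_t, where e_t is the one-hot vector of the t-th word and α is the forgetting factor; dual-FOFE concatenates two such codes computed with two different forgetting factors. This is a minimal illustrative sketch, not the thesis's implementation: the function names and the forgetting-factor values (0.5 and 0.9) are hypothetical choices for demonstration.

```python
import numpy as np

def fofe(word_ids, vocab_size, alpha):
    """Fixed-size ordinally-forgetting encoding of a word-ID sequence.

    Implements the recursion z_t = alpha * z_{t-1} + e_t, where e_t is
    the one-hot vector of the t-th word. The result has a fixed size
    (vocab_size) regardless of sequence length.
    """
    z = np.zeros(vocab_size)
    for wid in word_ids:
        z = alpha * z          # decay earlier context
        z[wid] += 1.0          # add the current word's one-hot vector
    return z

def dual_fofe(word_ids, vocab_size, alpha_small=0.5, alpha_large=0.9):
    """Dual-FOFE: concatenate two FOFE codes with different forgetting
    factors, so the model sees both a short-range view (small alpha)
    and a long-range view (large alpha) of the same sequence.

    (The alpha values here are illustrative, not the thesis's settings.)
    """
    return np.concatenate([
        fofe(word_ids, vocab_size, alpha_small),
        fofe(word_ids, vocab_size, alpha_large),
    ])

# Example: vocabulary {0: "A", 1: "B", 2: "C"}, sequence "A B C"
code = dual_fofe([0, 1, 2], vocab_size=3)
# first half (alpha=0.5):  [0.25, 0.5, 1.0]
# second half (alpha=0.9): [0.81, 0.9, 1.0]
```

Because each prefix of a sequence maps to a unique code (for a suitable α), the fixed-size vector can be fed to an ordinary feed-forward network in place of a recurrent state.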
Files
Original bundle
1 - 1 of 1
- Name: Watcharawittayakul_Sedtawut_2019_Masters.pdf
- Size: 15.73 MB
- Format: Adobe Portable Document Format