Large amounts of publicly available textual data have enabled the rapid development of neural natural language processing (NLP) models. One NLP task that has particularly benefited from this abundance of text, and which at the same time holds promise for managing information overload, is automated text summarization. The goal of extractive text summarization in particular is to produce a short but informationally rich, condensed subset of the original text. The topic of this thesis is to research and (re)implement a new approach within the extractive summarization field that frames extractive summarization as a semantic text-matching problem.

Most current neural extractive summarization models follow the same paradigm: extract sentences, score them, and pick the most salient ones. However, choosing the most salient sentences individually often yields a summary whose sentences are highly redundant. Zhong et al. (2020) proposed a novel paradigm in which, instead of selecting salient sentences one by one (sentence-level summarization), the model simultaneously generates candidate summaries and picks the most salient one (summary-level summarization). The objective of this thesis is to reimplement this paradigm, investigate the shortcomings of previous models, and potentially improve the capabilities of this new summarization approach.
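To make the contrast between the two paradigms concrete, here is a minimal Python sketch. The `score_sentence` and `score_summary` functions are hypothetical stand-ins for a trained scoring model; only the selection logic is illustrated.

```python
from itertools import combinations

def sentence_level_extract(sentences, score_sentence, k=3):
    # Sentence-level paradigm: score each sentence independently and
    # keep the k most salient ones; redundancy between them is ignored.
    return sorted(sentences, key=score_sentence, reverse=True)[:k]

def summary_level_extract(sentences, score_summary, k=3):
    # Summary-level paradigm: enumerate candidate summaries and score
    # each candidate as a whole, so redundant candidates score lower.
    return max(combinations(sentences, k), key=score_summary)
```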
The model consists of two modules: a candidate-generation module and a semantic text-matching module.
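The sketch below shows one way the two modules could fit together. The bag-of-words `bow_embed` function is a toy stand-in for the trained encoder (Zhong et al. use a Siamese-BERT architecture), and the pruning size and salience function are illustrative assumptions; only the overall flow (prune, enumerate candidates, match each candidate against the document) follows the description above.

```python
from itertools import combinations
import numpy as np

def bow_embed(text, vocab):
    # Toy bag-of-words embedding over a fixed vocabulary; a stand-in
    # for the trained sentence encoder (e.g. Siamese-BERT).
    vec = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def generate_candidates(sentences, salience, prune_to=5, summary_len=2):
    # Candidate-generation module: prune the document to its most salient
    # sentences, then enumerate all fixed-length combinations of them.
    pruned = sorted(sentences, key=salience, reverse=True)[:prune_to]
    return [" ".join(cand) for cand in combinations(pruned, summary_len)]

def best_match(document, candidates, vocab):
    # Semantic text-matching module: embed the document and each candidate
    # in a shared space and return the semantically closest candidate.
    doc_vec = bow_embed(document, vocab)
    return max(candidates, key=lambda c: cosine(bow_embed(c, vocab), doc_vec))
```

Pruning before enumeration is what keeps this tractable: the number of candidate summaries grows combinatorially with the number of sentences, so only the most salient sentences are allowed into the candidate pool.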
Instructions for running the model are given in main.py.
This project is licensed under the MIT License - see the LICENSE.md file for details.