SYMBOLIC AND NEURAL APPROACHES TO NATURAL LANGUAGE INFERENCE

Date
2021-06
Authors
Publisher
[Bloomington, Ind.] : Indiana University
Abstract
Natural Language Inference (NLI) is the task of predicting whether a hypothesis can be inferred from (is entailed by) a given premise. For example, given the premise that two dogs are chasing a cat, it follows that some animals are moving, but it does not follow that every animal is sleeping. Previous studies have proposed both logic-based symbolic models and neural network models for this task. However, in the symbolic tradition relatively few systems are built around monotonicity and natural logic rules, and in the neural network tradition most work focuses exclusively on English. The first part of the dissertation therefore asks how far a symbolic inference system can go relying only on monotonicity and natural logic. I first designed and implemented a system that automatically annotates input sentences with monotonicity information. I then built a system that uses this monotonicity annotation, in combination with hand-crafted natural logic rules, to perform inference. Experimental results on two NLI datasets show that my system performs competitively with other logic-based models, with the unique feature of generating inferences that can serve as augmentation data for neural-network models.

The second part of the dissertation asks how to collect NLI data that are challenging for neural models, and examines the cross-lingual transfer ability of state-of-the-art multilingual neural models, focusing on Chinese. I collected the first large-scale NLI corpus for Chinese, using a collection procedure that improves on the one used for English, along with four types of linguistically oriented probing datasets in Chinese. Results show surprisingly strong cross-lingual transfer by multilingual models; overall, however, even the best neural models still struggle on Chinese NLI, exposing the weaknesses of these models.
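Since the abstract describes the monotonicity-plus-natural-logic mechanism only at a high level, the following minimal Python sketch illustrates the general idea under simplified assumptions: each token carries a monotonicity polarity, and a single-word substitution is licensed upward (toward a hypernym) in upward-monotone positions and downward (toward a hyponym) in downward-monotone positions. The toy lexicon, the polarity marks, and the function infer_by_substitution are hypothetical and illustrative; they are not the dissertation's implementation.

# Illustrative sketch of monotonicity-based natural logic substitution.
# The lexicon, polarity marks, and function below are hypothetical
# simplifications, not the system described in the dissertation.

# Toy hypernym lexicon: word -> a more general word.
HYPERNYMS = {
    "dog": "animal",
    "cat": "animal",
}

def infer_by_substitution(tokens, polarities, index, replacement):
    """Replace the token at `index` and report whether the result is entailed.

    In an upward-monotone ("+") position, replacing a word with a more
    general one (a hypernym) preserves truth; in a downward-monotone ("-")
    position, only a more specific word (a hyponym) preserves truth.
    """
    original = tokens[index]
    polarity = polarities[index]

    goes_up = HYPERNYMS.get(original) == replacement      # e.g. dog -> animal
    goes_down = HYPERNYMS.get(replacement) == original     # e.g. animal -> dog

    entailed = (polarity == "+" and goes_up) or (polarity == "-" and goes_down)
    hypothesis = tokens[:index] + [replacement] + tokens[index + 1:]
    return " ".join(hypothesis), entailed

if __name__ == "__main__":
    # "Two dogs are chasing a cat": the content words sit in
    # upward-monotone positions, so generalizing "dog" is licensed.
    premise = ["two", "dog", "are", "chasing", "a", "cat"]
    polarity = ["+", "+", "+", "+", "+", "+"]
    print(infer_by_substitution(premise, polarity, 1, "animal"))
    # -> ('two animal are chasing a cat', True)

    # Under "no", the restrictor is downward-monotone, so the same
    # generalization is not licensed.
    premise2 = ["no", "dog", "is", "sleeping"]
    polarity2 = ["+", "-", "-", "-"]
    print(infer_by_substitution(premise2, polarity2, 1, "animal"))
    # -> ('no animal is sleeping', False)

In this toy setup the polarity marks are supplied by hand; in a full system they would come from an automatic monotonicity annotator of the kind the first part of the dissertation describes.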
Description
Thesis (Ph.D.) - Indiana University, Department of Linguistics, 2021
Keywords
natural language inference, symbolic reasoning, neural modeling, monotonicity, natural language understanding
Type
Doctoral Dissertation