dc.contributor.advisor Moss, Lawrence
dc.contributor.author Hu, Hai
dc.date.accessioned 2021-07-13T06:47:44Z
dc.date.available 2021-07-13T06:47:44Z
dc.date.issued 2021-06
dc.identifier.uri http://hdl.handle.net/2022/26642
dc.description Thesis (Ph.D.) - Indiana University, Department of Linguistics, 2021 en
dc.description.abstract Natural Language Inference (NLI) is the task of predicting whether a hypothesis is entailed by (i.e., can be inferred from) a given premise. For example, given the premise that two dogs are chasing a cat, it follows that some animals are moving, but it does not follow that every animal is sleeping. Previous studies have proposed logic-based symbolic models and neural network models to perform inference. However, in the symbolic tradition, relatively few systems are designed around monotonicity and natural logic rules; in the neural network tradition, most work focuses exclusively on English. Thus, the first part of the dissertation asks how far a symbolic inference system can go relying only on monotonicity and natural logic. I first designed and implemented a system that automatically annotates monotonicity information on input sentences. I then built a system that uses the monotonicity annotation, in combination with hand-crafted natural logic rules, to perform inference. Experimental results on two NLI datasets show that my system performs competitively with other logic-based models, with the unique feature of generating inferences as augmented data for neural network models. The second part of the dissertation asks how to collect NLI data that are challenging for neural models, and examines the cross-lingual transfer ability of state-of-the-art multilingual neural models, focusing on Chinese. I collected the first large-scale NLI corpus for Chinese, using a procedure that improves on what has been done with English, along with four types of linguistically oriented probing datasets in Chinese. Results show the surprising transfer ability of multilingual models, but overall, even the best neural models still struggle on Chinese NLI, exposing the weaknesses of these models. en
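As an illustration of the monotonicity-and-natural-logic approach the abstract describes, the following is a minimal, self-contained Python sketch, not the dissertation's actual system. The toy hypernym lexicon, the pre-annotated token polarities, and the function names (replaceable, infer) are illustrative assumptions for exposition only.

    # Sketch of natural logic inference via monotonicity, assuming a toy
    # hypernym ordering and pre-annotated token polarities. This is an
    # illustration of the general idea, not the dissertation's system.

    HYPERNYMS = {           # word -> immediate hypernym (toy lexicon)
        "dogs": "animals",
        "chasing": "moving",
    }

    def replaceable(word, new_word, polarity):
        """Monotonicity rule: an upward-entailing ('up') position licenses
        replacement by a more general word; a downward-entailing ('down')
        position licenses replacement by a more specific word."""
        if polarity == "up":
            return HYPERNYMS.get(word) == new_word
        return HYPERNYMS.get(new_word) == word

    def infer(premise, hypothesis, polarities):
        """True if the hypothesis differs from the premise only by
        word replacements that the token polarities license."""
        if len(premise) != len(hypothesis):
            return False
        return all(
            p == h or replaceable(p, h, pol)
            for p, h, pol in zip(premise, hypothesis, polarities)
        )

    # "two dogs are chasing a cat": every token here sits in an
    # upward-entailing position (toy annotation), so replacing "dogs"
    # with the more general "animals" yields an entailed hypothesis.
    premise    = ["two", "dogs", "are", "chasing", "a", "cat"]
    hypothesis = ["two", "animals", "are", "chasing", "a", "cat"]
    polarities = ["up"] * 6

    print(infer(premise, hypothesis, polarities))  # True: entailed

A real system must compute the polarity annotations automatically (the first component the abstract describes) and handle structural rewrites beyond single-word replacement; the sketch hard-codes both for brevity.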
dc.language.iso en en
dc.publisher [Bloomington, Ind.] : Indiana University en
dc.rights.uri https://creativecommons.org/licenses/by-nc/4.0/ en
dc.subject natural language inference en
dc.subject symbolic reasoning en
dc.subject neural modeling en
dc.subject monotonicity en
dc.subject natural language understanding en
dc.title Symbolic and Neural Approaches to Natural Language Inference en
dc.type Doctoral Dissertation en

