Terminology extraction using contextual information

Pao Yue-kong Library Electronic Theses Database

Author: Ji, Luning
Title: Terminology extraction using contextual information
Degree: M.Phil.
Year: 2007
Subject: Hong Kong Polytechnic University -- Dissertations.
Chinese language -- Terms and phrases -- Data processing.
Natural language processing (Computer science)
Chinese language -- Data processing.
Department: Dept. of Computing
Pages: xii, 146 leaves : ill. ; 30 cm.
Language: English
InnoPac Record: http://library.polyu.edu.hk/record=b2145953
URI: http://theses.lib.polyu.edu.hk/handle/200/2233
Abstract: This work investigates algorithms for automatic terminology extraction. The investigation considers two characteristics of terminology, unithood and termhood, corresponding to the two steps of terminology extraction: term extraction and terminology verification. In the first step, term extraction, two statistic-based measures considering internal and contextual relationships are used to estimate the likelihood that an extracted string pattern is a valid term. In the second step, terminology verification, window-based contextual information within a logical sentence is used. Two window-based approaches using domain knowledge and the syntax of the contextual information are proposed. After evaluating the merits and problems of each approach, a hybrid approach is designed that combines syntactic information and domain-specific knowledge to verify whether the extracted candidate terms are terminology. Furthermore, a component-based composition algorithm is proposed to help verify the extracted terms as valid terminology. Experiments show that the hybrid approach achieves a significant improvement with the best F-measure, maintaining both good precision and good recall. Because of the special nature of Chinese, this work also investigates the effect of word segmentation on terminology extraction by comparing two preprocessing models: a character-based model and a word-based model. Limitations of segmentation and feasible suggestions for dealing with them are also provided. Finally, this work investigates methods to construct a core lexicon for a specific domain from an existing domain lexicon. The core lexicon contains the most fundamental terms of a domain, from which other terms in the domain can be constructed. Three approaches, considering four characteristics of a core lexicon, are proposed and implemented.
Evaluations show that the automatically extracted core lexicon provides good coverage of the domain lexicon while remaining minimal, with no redundant terms. The use of a core lexicon can reduce program runtime and memory usage in real applications.
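The abstract does not specify which statistic-based unithood measures the thesis uses; as an illustration only, the sketch below scores candidate bigrams with pointwise mutual information (PMI), a common statistic-based measure of how cohesively two adjacent tokens bind into a unit. The corpus and all names here are invented for the example.

```python
from collections import Counter
from math import log2

def pmi_unithood(bigram_counts, unigram_counts, total):
    """Score each bigram by pointwise mutual information:
    PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ).
    Higher PMI suggests the pair forms a cohesive unit (a term
    candidate). Note that raw PMI inflates rare pairs, so in practice
    a minimum-frequency threshold is usually applied first."""
    scores = {}
    for (x, y), n_xy in bigram_counts.items():
        p_xy = n_xy / total
        p_x = unigram_counts[x] / total
        p_y = unigram_counts[y] / total
        scores[(x, y)] = log2(p_xy / (p_x * p_y))
    return scores

# Toy token stream; tokens could be words or characters, mirroring the
# thesis's word-based vs character-based preprocessing models.
tokens = ["natural", "language", "processing", "of", "natural",
          "language", "data", "in", "natural", "language", "processing"]
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
scores = pmi_unithood(bigrams, unigrams, len(tokens))
```

A second, contextual measure (the abstract mentions measures of both internal and contextual relationships) would look at the diversity of tokens appearing around a candidate, e.g. left/right entropy, before the window-based verification step decides whether a candidate is domain terminology.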

Files in this item

File            Size      Format
b21459538.pdf   8.016 MB  PDF
