Full metadata record
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Ji, Luning | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/2233 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | -
dc.rights | All rights reserved | en_US
dc.title | Terminology extraction using contextual information | en_US
dcterms.abstract | This work investigates different algorithms for automatic terminology extraction. The investigation considers two characteristics of terminology, unithood and termhood, corresponding to two steps in terminology extraction: term extraction and terminology verification. In the first step, term extraction, two statistics-based measures considering internal and contextual relationships are used to estimate the soundness of an extracted string pattern as a valid term. In the second step, terminology verification, window-based contextual information within a logical sentence is used. Two window-based approaches, one based on domain knowledge and one on the syntax of the contextual information, are proposed. After evaluating the merits and problems of each approach, a hybrid approach is designed that combines syntactic information and domain-specific knowledge to verify whether an extracted candidate term is terminology. Furthermore, a component-based composition algorithm is proposed to help verify the extracted terms as valid terminology. Experiments show that the hybrid approach achieves a significant improvement with the best F-measure, maintaining both good precision and good recall. Due to the special nature of Chinese, this work investigates in detail the effect of word segmentation on terminology extraction through a comparison of two preprocessing models: a character-based model and a word-based model. Limitations of segmentation and some feasible suggestions for dealing with these limitations are also provided. Furthermore, this work investigates methods to construct a core lexicon for a specific domain from an existing domain lexicon. The core lexicon contains the most fundamental terms used in a domain, from which other terms in the domain can be constructed. Three different approaches considering four characteristics of a core lexicon are proposed and implemented. Evaluations show that the automatically extracted core lexicon has good coverage of the domain lexicon while remaining minimal, with no redundant terms. The use of a core lexicon can reduce program runtime and memory usage in real applications. | en_US
dcterms.extent | xii, 146 leaves : ill. ; 30 cm. | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2007 | en_US
dcterms.educationalLevel | All Master | en_US
dcterms.educationalLevel | M.Phil. | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations. | en_US
dcterms.LCSH | Chinese language -- Terms and phrases -- Data processing. | en_US
dcterms.LCSH | Natural language processing (Computer science) | en_US
dcterms.LCSH | Chinese language -- Data processing. | en_US
dcterms.accessRights | open access | en_US
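
To make the two-step pipeline described in the abstract concrete, the sketch below is an illustrative assumption, not the method actually used in the thesis: the record does not give the exact measures, so pointwise mutual information stands in for the statistics-based unithood measures, and a simple keyword-in-window check stands in for the window-based, domain-knowledge verification. The corpus, keyword set, and function names are invented for illustration.

import math
from collections import Counter

def unithood_pmi(bigram, unigram_counts, bigram_counts, total_tokens):
    # Internal association of a two-word candidate: a higher score means the
    # words co-occur more often than chance, suggesting a sounder term unit.
    w1, w2 = bigram
    p_w1 = unigram_counts[w1] / total_tokens
    p_w2 = unigram_counts[w2] / total_tokens
    p_bigram = bigram_counts[bigram] / total_tokens
    return math.log(p_bigram / (p_w1 * p_w2))

def verify_in_window(candidate_tokens, sentence_tokens, domain_keywords, window=5):
    # Window-based verification: accept the candidate if a domain keyword
    # appears within +/- `window` tokens of it inside the same sentence.
    n = len(candidate_tokens)
    for i in range(len(sentence_tokens) - n + 1):
        if sentence_tokens[i:i + n] == candidate_tokens:
            lo = max(0, i - window)
            hi = min(len(sentence_tokens), i + n + window)
            context = set(sentence_tokens[lo:i]) | set(sentence_tokens[i + n:hi])
            if context & domain_keywords:
                return True
    return False

if __name__ == "__main__":
    # Tiny invented corpus for illustration only; not data from the thesis.
    sentences = [
        "the neural network model improves terminology extraction accuracy",
        "a neural network learns contextual features for term extraction",
    ]
    tokenised = [s.split() for s in sentences]
    unigrams = Counter(w for toks in tokenised for w in toks)
    bigrams = Counter(b for toks in tokenised for b in zip(toks, toks[1:]))
    total = sum(unigrams.values())

    candidate = ("neural", "network")
    score = unithood_pmi(candidate, unigrams, bigrams, total)
    print(f"unithood score for 'neural network': {score:.2f}")

    domain_keywords = {"terminology", "extraction", "contextual"}
    verified = verify_in_window(list(candidate), tokenised[0], domain_keywords)
    print(f"verified against domain keywords: {verified}")

The thesis additionally combines syntactic cues with domain knowledge in a hybrid verification scheme and adds a component-based composition step; the sketch keeps only the domain-knowledge window check for brevity.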

Files in This Item:
File | Description | Size | Format
b21459538.pdf | For All Users | 7.83 MB | Adobe PDF



