<?xml version="1.0" standalone="yes"?> <Paper uid="W04-2610"> <Title>Support Vector Machines Applied to the Classification of Semantic Relations in Nominalized Noun Phrases</Title> <Section position="7" start_page="0" end_page="0" type="concl"> <SectionTitle> 4 Overview of Results </SectionTitle> <Paragraph position="0"> The f-measure results obtained so far are summarized in Table 4. They are divided into two categories, nominalizations and adjective clauses, since the feature vectors differ between the two. We compared the performance of SVM with three other learning algorithms: (1) semantic scattering (Moldovan et al. 2004), (2) decision trees (a C4.5 implementation), and (3) Naive Bayes.</Paragraph> <Paragraph position="1"> As our baseline we used semantic scattering, a learning model recently developed in-house (Moldovan et al. 2004) for the semantic classification of noun-noun pairs in NP constructions. The semantic relation is derived from the WordNet semantic classes of the two nouns participating in the construction, as well as from the surrounding context provided by the WSD module.</Paragraph> <Paragraph position="2"> As expected, the results vary from pattern to pattern.</Paragraph> <Paragraph position="3"> SVM and Naive Bayes seem to perform better than the other models for both the nominalizations and the adjective clauses.</Paragraph> <Paragraph position="4"> Overall, these results are very encouraging given the complexity of the problem. Compared with the baseline, the feature vector presented here gives better results. [Table 5 caption: impact of each feature on the overall performance; H-high (over 8%), M-medium (between 2% and 8%), and L-low (below 2%). Empty boxes indicate the absence of features.]
This partly confirms our initial intuition that nominalization constructions at the NP level exhibit a different semantic behavior than non-nominalization NP patterns.</Paragraph> <Paragraph position="5"> We also studied the influence of each feature on performance; since there are too many cases to discuss individually, Table 5 shows only the average impact of each feature, rated as High, Medium, or Low. The table also lists the features used in each case.</Paragraph> </Section></Paper>