<?xml version="1.0" standalone="yes"?>
<Paper uid="W03-1012">
  <Title>Using LTAG Based Features in Parse Reranking</Title>
  <Section position="7" start_page="0" end_page="0" type="concl">
    <SectionTitle>6 Conclusions and Future Work</SectionTitle>
    <Paragraph position="0">In this paper, we have proposed methods for using LTAG based features in the parse reranking task.</Paragraph>
    <Paragraph position="1">The experimental results show that the use of LTAG based features gives rise to improvement over already finely tuned results. We used LTAG based features for the parse reranking task and obtained labeled recall and precision of 89.7%/90.0% on WSJ section 23 of the Penn Treebank for sentences of at most 100 words. Our results show that the use of the LTAG based tree kernel gives rise to a 17% relative difference in f-score improvement over the use of a linear kernel without LTAG based features.</Paragraph>
    <Paragraph position="2">In future work, we will use some light-weight machine learning algorithms for which training is faster, such as variants of the Perceptron algorithm. This will allow us to use larger training data chunks and take advantage of global optimization in the search for relevant features.</Paragraph>
    <Paragraph position="3">Footnote 4: In Model 1, we implicitly take every sub-tree of the derivation trees as a feature, but in Model 2, we only consider a small set of sub-trees in a linear kernel.</Paragraph>
  </Section>
</Paper>
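
The contrast drawn in footnote 4 can be made concrete with a small sketch. The Python fragment below is illustrative only and is not the paper's implementation: the toy derivation trees, their labels, and the simplified notion of "sub-tree" (a node together with all of its descendants) are assumptions. It shows how a tree kernel implicitly scores every shared sub-tree (the Model 1 idea), whereas a linear kernel operates on an explicit, small set of sub-tree features (the Model 2 idea).

```python
from itertools import product

# Toy LTAG derivation trees as (label, [children]) tuples.  The trees, labels,
# and the simplified definition of "sub-tree" (a node plus all descendants)
# are illustrative assumptions, not the paper's actual representation.
def subtrees(tree):
    """Yield every sub-tree rooted at a node of the given tree."""
    label, children = tree
    yield tree
    for child in children:
        yield from subtrees(child)

def tree_kernel(t1, t2):
    """Model 1 style: the implicit feature space contains every sub-tree.
    Counting identical sub-tree pairs equals the dot product of the
    (never materialized) all-sub-trees count vectors."""
    return sum(1 for a, b in product(subtrees(t1), subtrees(t2)) if a == b)

def explicit_features(tree, feature_set):
    """Model 2 style: count occurrences of a small, hand-picked set of sub-trees."""
    subs = list(subtrees(tree))
    return [subs.count(f) for f in feature_set]

def linear_kernel(x1, x2):
    """Plain dot product over the explicit feature vectors."""
    return sum(a * b for a, b in zip(x1, x2))

# Two tiny derivation trees that share the sub-tree ("beta_join", []).
t1 = ("alpha_root", [("alpha_subst", []), ("beta_join", [])])
t2 = ("alpha_root", [("beta_join", [])])

print(tree_kernel(t1, t2))  # 1: one pair of identical sub-trees

feature_set = [("beta_join", []), ("alpha_subst", [])]  # small explicit feature set
v1 = explicit_features(t1, feature_set)
v2 = explicit_features(t2, feature_set)
print(linear_kernel(v1, v2))  # 1: only the shared feature contributes
```

In this toy setting the two scores coincide, but in general the tree kernel credits every shared sub-tree without ever enumerating the feature space, while the explicit linear-kernel representation only sees whatever sub-trees were chosen in advance.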