<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-1518">
  <Title>Using LTAG-Based Features for Semantic Role Labeling</Title>
  <Section position="7" start_page="130" end_page="131" type="relat">
    <SectionTitle>
5 Related Work
</SectionTitle>
    <Paragraph position="0"> In the community of SRL researchers (cf. (Gildea and Jurafsky, 2002; Punyakanok, Roth and Yih, 2005; Pradhan et al, 2005; Toutanova et al., 2005)), the focus has been on two different aspects of the SRL task: (a) finding appropriate features, and (b) resolving the parsing accuracy problem by combining multiple parsers/predictions. Systems that use parse trees as a source of feature functions for their models have typically outperformed shallow parsing models on the SRL task. Typical features extracted from a parse tree is the path from the predicate to the constituent and various generalizations based on this path (such as phrase type, position, etc.). Notably the voice (passive or  active) of the verb is often used and recovered using a heuristic rule. We also use the passive/active voice by labeling this information into the parse tree. However, in contrast with other work, in this paper we do not focus on the problem of parse accuracy: where the parser output may not contain the constituent that is required for recovering all SRLs.</Paragraph>
    <Paragraph position="1"> There has been some previous work in SRL that uses LTAG-based decomposition of the parse tree and we compare our work to this more closely. (Chen and Rambow, 2003) discuss a model for SRL that uses LTAG-based decomposition of parse trees (as is typically done for statistical LTAG parsing). Instead of using the typical parse tree features used in typical SRL models, (Chen and Rambow, 2003) uses the path within the elementary tree from the predicate to the constituent argument. They only recover semantic roles for those constituents that are localized within a single elementary tree for the predicate, ignoring cases that occur outside the elementary tree. In contrast, we recover all SRLs regardless of locality within the elementary tree. As a result, if we do not compare the machine learning methods involved in the two approaches, but rather the features used in learning, our features are a natural generalization of (Chen and Rambow, 2003).</Paragraph>
    <Paragraph position="2"> Our approach is also very akin to the approach in (Shen and Joshi, 2005) which uses PropBank information to recover an LTAG treebank as if it were hidden data underlying the Penn Treebank.</Paragraph>
    <Paragraph position="3"> This is similar to our approach of having several possible LTAG derivations representing recovery of SRLs. However, (Shen and Joshi, 2005) do not focus on the SRL task, and in both of these instances of previous work using LTAG for SRL, we cannot directly compare our performance with theirs due to differing assumptions about the task.</Paragraph>
  </Section>
class="xml-element"></Paper>