<?xml version="1.0" standalone="yes"?>
<Paper uid="C02-1151">
  <Title>Probabilistic Reasoning for Entity &amp; Relation Recognition</Title>
  <Section position="6" start_page="13" end_page="13" type="concl">
    <SectionTitle>
5 Discussion
</SectionTitle>
    <Paragraph position="0"> The promising results of our preliminary experiments demonstrate the feasibility of our probabilistic framework. For the future work, we plan to extend this research in the following directions.</Paragraph>
    <Paragraph position="1"> The first direction we would like to explore is to apply our framework in a boot-strapping manner. The main difficulty in applying learning on NLP problems is not lack of text corpus, but lack of labeled data. Boot-strapping, applying the classifiers to autonomously annotate the  data and using the new data to train and improve existing classifiers, is a promising approach. Since the precision of our framework is pretty high, it seems possible to use the global inference to annotate new data. Based on this property, we can derive an EM-like approach for labelling and inferring the types of entities and relations simultaneously. The basic idea is to use the global inference output as a means to annotate entities and relations. The new annotated data can then be used to train classifiers, and the whole process is repeated again.</Paragraph>
    <Paragraph position="2"> The second direction is to improve our probabilistic inference model in several ways. First, since the results of the inference procedure we use, the loopy belief propagation algorithm, produces approximate values, some of the results can be wrong. Although the computational time of the exact inference algorithm for loopy network is exponential, we may still be able to run it given the small number of variables that are of interest each time in our case. Therefore, we can further check if the performance suffers from the approximation. Second, the belief network model may not be expressive enough since it allows no cycles. To fully model the problem, cycles may be needed. For example, the class labels of R12 and R21 actually depend on each other. (e.g. If R12 is born in, then R21 will not be born in or kill.) Similarly, the class labels of E1 and E2 can depend on the labels of R12. To fully represent the mutual dependencies, we would like to explore other probabilistic models that are more expressive than the belief network.</Paragraph>
  </Section>
class="xml-element"></Paper>