<?xml version="1.0" standalone="yes"?> <Paper uid="C96-1010"> <Title>Parsing spoken language without syntax</Title> <Section position="3" start_page="47" end_page="49" type="intro"> <SectionTitle> 3. Semantic Priming </SectionTitle> <Paragraph position="0"> Any speech recognition system involves high perplexity, which requires the definition of top-down parsing constraints. This is why we based the microsemantic parsing on a priming process.</Paragraph> <Section position="1" start_page="47" end_page="47" type="sub_section"> <SectionTitle> 3.1. Priming process </SectionTitle> <Paragraph position="0"> The semantic priming is a predictive process in which some already uttered words (priming words) call some other ones (primed words) through various meaning associations. It serves a double goal : * It constrains the speech recognition.</Paragraph> <Paragraph position="1"> * It characterizes the meaning dependencies inside the sentence.</Paragraph> <Paragraph position="2"> Each priming step involves two successive processes. First, the contextual adaptation favors the priming words which are consistent with the semantic context. The latter is roughly modeled by two semantic fields: the task domain and the computing domain. Then, the relational priming identifies the lexemes which share a microsemantic relation with one of the already uttered words. These relations issue directly from the subcategorization frames of these priming words.</Paragraph> </Section> <Section position="2" start_page="47" end_page="48" type="sub_section"> <SectionTitle> 3.2. Priming network </SectionTitle> <Paragraph position="0"> The priming process is carried out by an associative multi-layered network (figure 2) which results from the compilation of the lexicon. Each cell of the network corresponds to a specific lexeme. The inputs represent the priming words. Their activities are propagated up to the output layer, which corresponds to the primed words.
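To make the propagation concrete, here is a minimal Python sketch of the first two steps of one priming cycle. The constants (a_max, the decay factor), the dictionary-based layers and the field names are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative sketch of one priming step (hypothetical constants).
A_MAX = 1.0   # activity assigned to the current word
DECAY = 0.8   # temporal forgetting factor (assumed value)

def temporal_forgetting(activities, current_word):
    """Slightly modulate the input activities: the current word keeps
    a_max, older priming words fade but may still prime at a distance."""
    return {word: (A_MAX if word == current_word else DECAY * act)
            for word, act in activities.items()}

def contextual_adaptation(activities, field_of, field_activity):
    """Favor priming words whose semantic field is active in the context."""
    return {word: act * field_activity[field_of[word]]
            for word, act in activities.items()}

inputs = {"open": 0.9, "window": 0.7}
faded = temporal_forgetting(inputs, "window")
primed = contextual_adaptation(
    faded,
    field_of={"open": "computing", "window": "computing"},
    field_activity={"computing": 1.0, "task": 0.5},
)
```

Note that the decay never zeroes an activity, which mirrors the paper's remark that temporal forgetting favors recent lexemes without preventing long-distance primings.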
An additional layer (structural layer S) furthermore handles the coordinations and the prepositions.</Paragraph> <Paragraph position="1"> We will now describe the propagation of the priming activities. Let us consider : * t : current step of analysis * a_i^j(t) : activity of the cell j of the layer i at step t (i ∈ {1, 2, 3, 4, 5, 6, S}) * ω_{i,j}^{k,l}(t) : synaptic weight between the cell k of the layer i and the cell l of the layer j.</Paragraph> <Paragraph position="2"> Temporal forgetting -- At first, the input activities are slightly modulated by a process of temporal forgetting : a_1^i(t) = a_max if i corresponds to the current word.</Paragraph> <Paragraph position="4"> Although it favors the most recent lexemes, this process does not prevent long distance primings.</Paragraph> <Paragraph position="5"> Contextual adaptation -- Each cell of the second layer represents a particular semantic field.</Paragraph> <Paragraph position="6"> Its activity depends on the semantic affiliations of the priming words :</Paragraph> <Paragraph position="8"> Then, these contextual cells modulate the initial priming activities :</Paragraph> <Paragraph position="10"> The priming words which are consistent with the current semantic context are therefore favored.</Paragraph> <Paragraph position="11"> Relational priming -- The priming activities are then dispatched among several sub-networks which perform parallel analyses on distinct cases (fig. 3).
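The outputs of these case-based sub-networks are later recombined through a maximum heuristic (described below). A minimal sketch of that collection step, with invented case names and activity values:

```python
def collect_primings(case_outputs):
    """Final priming excitation of each word: the maximum of its
    activities over all case-based sub-networks (maximum heuristic)."""
    primed = {}
    for case, outputs in case_outputs.items():
        for word, act in outputs.items():
            if act > primed.get(word, 0.0):
                primed[word] = act
    return primed

# Hypothetical sub-network outputs for two candidate words:
excitations = collect_primings({
    "AGT": {"window": 0.9, "icon": 0.2},
    "OBJ": {"window": 0.3, "icon": 0.6},
})
```

Taking the maximum rather than, say, the sum keeps a word primable as soon as a single case supports it strongly.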
The dispatched activities therefore represent the priming power of the priming lexemes on each microsemantic case :</Paragraph> <Paragraph position="13"> The dispatching weights are dynamically adapted during the parsing (see section 4). Their initial values issue from the compilation of the lexical subcategorization frames :</Paragraph> <Paragraph position="14"> The outputs of the case-based sub-networks, as well as the final priming excitations, are then calculated through a maximum heuristic :</Paragraph> <Paragraph position="16"> [Figure 2 -- Structure of the priming network : specification, focusing, dispatching, priming and collection layers ; contextual adaptation ; relational priming] ω_{4,5α}^{i,i}(t) = ω_max if the case α corresponds to a compulsory argument of the lexeme i, or if the latter should fulfill α alone.</Paragraph> <Paragraph position="17"> ω_{4,5α}^{i,i}(t) takes an intermediate value if the case α corresponds to an optional argument of i, or if the latter should fulfill α thanks to a preposition.</Paragraph> <Paragraph position="18"> ω_{4,5α}^{i,i}(t) = ω_min otherwise.</Paragraph> <Paragraph position="19"> The inner synaptic weights of the case-based sub-networks represent the relations between the priming and the primed words : ω_{5α,6α}^{i,j}(t) = ω_max if i and j share a microsemantic relation which corresponds to the case α.</Paragraph> <Paragraph position="21"> [Figure 3 -- primed words, primable words, rejected words] The primed words aim at constraining the speech recognition, thereby guaranteeing the semantic coherence of the analysis. These constraints can be relaxed by considering the primable words. Every recognized word is finally handled by the parsing process with its priming relation (see section 4).</Paragraph> </Section> <Section position="3" start_page="48" end_page="49" type="sub_section"> <SectionTitle> 3.3. Prepositions </SectionTitle> <Paragraph position="0"> Prepositions restrict the microsemantic assignment of the objects they introduce.
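This restriction can be pictured as a masking of the case-based dispatching weights. The following sketch only illustrates the idea; the preposition-to-case table and the case names are invented for the example:

```python
# Hypothetical mapping from prepositions to the microsemantic cases
# they are compatible with (illustrative, not the paper's lexicon).
PREP_CASES = {"on": {"LOC"}, "with": {"INS", "COO"}}

def modulate_dispatching(weights, preposition):
    """Prepositional cells zero out dispatching weights toward any case
    that is inconsistent with the introducing preposition."""
    allowed = PREP_CASES[preposition]
    return {case: (w if case in allowed else 0.0)
            for case, w in weights.items()}

masked = modulate_dispatching({"LOC": 0.8, "AGT": 0.5, "INS": 0.4}, "on")
```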
As a result, the prepositional cells of the structural layer dynamically modulate the case-based dispatching weights to prohibit any inconsistent priming. The rule (3') therefore stands for (3) :</Paragraph> <Paragraph position="2"> Finally, the preposition is assigned the TAG argument of the introduced object.</Paragraph> </Section> <Section position="4" start_page="49" end_page="49" type="sub_section"> <SectionTitle> 3.4. Coordinations </SectionTitle> <Paragraph position="0"> For the time being, the parser only deals with logical coordinations (and, or, but...). In such cases, the coordinated elements must share the same microsemantic case. This constraint is worked out by the recall of the already fulfilled microsemantic relations, which were all previously stacked. The dispatching is thus restricted to the recalled relations every time a conjunction occurs :</Paragraph> <Paragraph position="2"> The coordinate words are finally considered the coo arguments of the conjunction, which is assigned to the shared microsemantic case.</Paragraph> <Paragraph position="3"> 3.5. Back priming Generally speaking, the priming process provides a set of words that should follow the already uttered lexemes. In some cases, a lexeme might however occur before its priming word : (a) I want to enlarge the small window Back priming situations are handled through the following algorithm : Every time a new word occurs : 1. If this word was not primed, it is pushed onto a back priming stack.</Paragraph> <Paragraph position="4"> 2. Otherwise, one checks whether this word back primes some stacked ones. Back primed words are then popped out.</Paragraph> <Paragraph position="5"> 4. Microsemantic parsing 4.1. Unification The microsemantic parsing relies on the unification of the subcategorization frames of the lexemes that are progressively recognized. This unification must respect four principles : Unicity -- Any argument must be fulfilled by at most a unique lexeme or a coordination.
Coherence -- Any lexeme must fulfil at most a unique argument.</Paragraph> <Paragraph position="6"> Coordination -- Coordinate lexemes must fulfil the same subcategorized argument. Relative completeness -- Any argument might remain unfulfilled, although the parser must always favor the most complete analysis. The principle of relative completeness is motivated by the high frequency of incomplete utterances (ellipses, interruptions...) that spontaneous speech involves. The parser aims only at extracting an unfinished microsemantic structure that pragmatics should then complete. As noticed previously with the coordinations, these principles preventively govern the contextual adaptation of the network weights, so that any incoherent priming is excluded.</Paragraph> </Section> </Section> </Paper>