<?xml version="1.0" standalone="yes"?>
<Paper uid="H92-1038">
  <Title>Recognition Using Classification and Segmentation Scoring*</Title>
  <Section position="4" start_page="198" end_page="200" type="evalu">
    <SectionTitle>
3. EXPERIMENTS
</SectionTitle>
    <Paragraph position="0"> Experiments have been conducted to determine the feasibility of the recognition approach described here. First, we wished to determine whether fixed-length measurements could be as effective in recognition as using the complete observation sequence, as is normally done in other SSM work and in HMMs. This test would tell whether the segmentation score can compensate for the use of fixed-length measurements. Second, we investigated the comparative performance of the two segmentation scoring mechanisms outlined in the previous section. null</Paragraph>
    <Section position="1" start_page="198" end_page="199" type="sub_section">
      <SectionTitle>
3.1. CIR Feasibility
</SectionTitle>
      <Paragraph position="0"> The feasibility of fixed-length measurements was investigated first in a phoneme classification framework.</Paragraph>
      <Paragraph position="1"> Since we planned to eventually test our algorithms in word recognition on the Resource Management (RM) database, our phone classification experiments were also run on this database. Since the RM database is not phonetically labeled, we used an automatic labeling scheme to determine the reference phoneme sequence and segmentation for each sentence in the database. The labeler, a context-dependent SSM, took the correct orthographic transcription, a pronunciation dictionary, and the speech for a sentence and used a dynamic programming algorithm to find the best phonetic alignment. The procedure used an initial labeling produced by the BBN BYBLOS system \[8\] as a guide, but allowed some variation in pronunciations, according to the dictionary, as well as in segmentation. The resulting alignment is flawed in comparison with carefully hand transcribed speech, as in the TIMIT database. However, our experience has shown that using comparable models and  analysis, there is only about a 4-6% loss in classification performance (e.g., from 72% to 68% correct for context-independent models) between the two databases, and the RM labeling is adequate for making preliminary comparisons of classification algorithms. The final test of any classification algorithm is made under the CIR formalism in word recognition experiments, for which the RM database is well suited.</Paragraph>
      <Paragraph position="2"> In classification, the observation vectors in each segment were linearly sampled to obtain a fixed number of vectors per segment, m = 5 frames. For observed segments of length less than five frames, the transformation repeated some vectors more than once. The feature vector for each frame consisted of 14 Mel-warped cepstral coefficients and their first differences as well as differenced energy. Each of the rn distributions of each segment were modeled as independent full covariance Gaussian distributions. Separate models were trained for males and females by iteratively segmenting and estimating the models using the algorithm described in \[4\]. The testing material came from the standard &amp;quot;Feb89&amp;quot; and &amp;quot;Oct89&amp;quot; test sets. In classification experiments using the Feb89 test set, the percent correct is reported over the complete set of phoneme instances, 11752 for our transcription.</Paragraph>
      <Paragraph position="3"> Several simplifying assumptions were made to facilitate implementation. Only context-independent models were estimated, and the labels and segments of the observation sequence were considered independent.</Paragraph>
      <Paragraph position="4"> On the Feb89 test set the classification results were 65.8% correct when the entire observation sequence was used and 66.4% correct when a fixed number of observations was used for each segment. This result indicates that, in classification, using fixed length measurements can work as well as using the entire observation.</Paragraph>
      <Paragraph position="5"> Having verified that fixed-length features are useful in classification, the next step was to evaluate their use in recognition with the CIR formalism. In recognition, we make use of the N-best formalism. Although originally developed as an interface between the speech and natural language components of a spoken language system \[9\], this mechanism can also be used to rescore hypotheses with a variety of knowledge sources \[10\]. Each knowledge source produces its own score for every hypothesis, and the decision as to the most likely hypothesis is determined according to a weighted combination of scores from all knowledge sources. The algorithm reduces the search of more computationally expensive models, like the SSM, by eliminating very unlikely sentences in the first pass, performed with a less expensive model, such as the HMM. In this work, the BBN BYBLOS system \[8\] is used to generate 20 hypotheses per sentence.</Paragraph>
      <Paragraph position="6"> Using the N-best formalism, an experiment was run comparing the CIR recognizer to an SSM recognizer that uses all observations. The classifier for the CIR system was the same as that used in the previous experiment.</Paragraph>
      <Paragraph position="7"> The joint probability of segmentation and observations, p(X, S), was computed as in Equation (3), using a version of the SSM that considered the complete observation sequence for a segment. That is, not just m, but all observation vectors in a segment were mapped to the distributions and used in finding the score. The weights for combining scores in the N-best formalism were trained on the Feb89 test set. In this case the scores to be combined were simply the SSM score, the number of words and the number of phonemes in a sentence.</Paragraph>
      <Paragraph position="8"> In evaluating performance using the N-best formalism, the percent word error is computed from the highest-ranked of the rescored hypotheses. On the Feb89 test set the word error for both the classification-in-recognition method and the original recognition approach was 9.1%.</Paragraph>
      <Paragraph position="9"> To determine if these results were biased due to training the weights for combining scores on the same test data, this experiment was repeated on the Oct89 test set using the weights developed on the Feb89 test set.</Paragraph>
      <Paragraph position="10"> The performance for the CIR recognizer was 9.4% word error (252 errors in a set of 2684 reference words) and the performance for the original approach using the complete observation sequence was 9.1% word error (244 errors). The performance of the new recognition formalism is thus very close to that of the original scheme, and in fact the difference between them could be attributed to differences associated with suboptimal N-best weight estimation techniques \[11\].</Paragraph>
    </Section>
    <Section position="2" start_page="199" end_page="200" type="sub_section">
      <SectionTitle>
3.2. Segmentation Score
</SectionTitle>
      <Paragraph position="0"> As mentioned previously, some current systems use a classification scheme with no explicit probability of segmentation. We attempted to simulate this effect with the classification recognizer by simply suppressing the score for the joint probability of segmentation and observations. This is equivalent to assuming that the segmentation probabilities are equally likely for all hypotheses considered. Scores were computed for the utterance with and without the p(X, S) term on the Feb89 test set. When just the classification scores were used, word error went from from 9.1% to 10.8%, an 18% degradation in performance. Apparently, the joint probability of segmentation and observations has a significant effect in normalizing the posterior probability for better recognition. null Experiments were also run to compare the two methods of segmentation scoring described above. In the first method, based on equation (3), the same analysis de- null scribed earlier was used at each frame (cepstra plus differenced cepstra and differenced energy) and the summation was over the set of context independent phones.</Paragraph>
      <Paragraph position="1"> In the second method, which computes p(S IX) using equations (4)- (7), we modeled each of the conditional densities in (6) and (7) as the joint, full covariance, Gaussian distribution of the cepstral parameters of the two frames adjoining the hypothesized boundary. In order to reduce the number of free parameters to estimate in the Gaussian model, we used only the cepstral coefficients as features for each frame. On the Feb89 test set the first method had 9.1% combined word error for male and female speakers, while the second method had 11.0% word error. Using the best weights for the N-best combination from this test set, the segmentation algorithms were also run on the Oct89 test set. In this case, the word error rates for the two methods were 9.4% and 11.9%, respectively.</Paragraph>
      <Paragraph position="2"> This result suggests that the boundary-based segmentation score yields performance that is worse than no segmentation score. However, the &amp;quot;no segmentation&amp;quot; case actually uses an implicit segmentation score in that the N hypotheses are assumed to have equally likely segmentations (while all other segmentations have probability zero) and in that phoneme and word counts are used in the combined score. Although we suspect that the marginal distribution model for segmentation scores may still be preferable, clearly more experiments are needed with a larger number of sentence hypotheses to better understand the characteristics of the different approaches. null</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>