<?xml version="1.0" standalone="yes"?> <Paper uid="W04-2610"> <Title>Support Vector Machines Applied to the Classification of Semantic Relations in Nominalized Noun Phrases</Title> <Section position="3" start_page="0" end_page="0" type="metho"> <SectionTitle> 2 Semantic Relations in Nominalized Noun Phrases </SectionTitle> <Paragraph position="0"> In this paper we study the behavior of semantic relations at the noun phrase level when one of the nouns is nominalized. The following NP-level constructions are considered: complex nominals, genitives, adjective phrases, and adjective clauses.</Paragraph> <Section position="1" start_page="0" end_page="0" type="sub_section"> <SectionTitle> Complex Nominals </SectionTitle> <Paragraph position="0"> Levi (Levi 1979) defines complex nominals (CNs) as expressions that have a head noun preceded by one or more modifying nouns, or by adjectives derived from nouns (usually called denominal adjectives). Each sequence of nouns, or possibly adjectives and nouns, has a particular meaning as a whole, carrying an implicit semantic relation; for example, &quot;parental refusal&quot; (AGENT). The main tasks are the recognition and the interpretation of complex nominals. The recognition task deals with the identification of CN constructions in text, while the interpretation of CNs focuses on the detection and classification of a comprehensive set of semantic relations between the noun constituents.</Paragraph> </Section> <Section position="2" start_page="0" end_page="0" type="sub_section"> <SectionTitle> Genitives </SectionTitle> <Paragraph position="0"> In English there are two kinds of genitives: in one, the modifier is morphologically linked to the possessive clitic 's and precedes the head noun (s-genitive, e.g. &quot;John's conclusion&quot;); in the second, the modifier is syntactically marked by the preposition of and follows the head noun (of-genitive, e.g. &quot;declaration of independence&quot;). Adjective Phrases are prepositional phrases attached to nouns that act as adjectives (cf. (Semmelmeyer and Bolander 1992)). Prepositions play an important role both syntactically and semantically (Dorr 1997). Prepositional constructions can encode various semantic relations, their interpretation being provided most of the time by the underlying context. For instance, the preposition &quot;with&quot; can encode different semantic relations: (1) It was the girl with blue eyes (MERONYMY), (2) The baby with the red ribbon is cute (POSSESSION), (3) The woman with triplets received a lot of attention (KINSHIP).</Paragraph> <Paragraph position="1"> The conclusion for us is that, in addition to the nouns' semantic classes, the preposition and the context play important roles here.</Paragraph> <Paragraph position="2"> Adjective Clauses are subordinate clauses attached to nouns (cf. (Semmelmeyer and Bolander 1992)). Often they are introduced by a relative pronoun/adverb (i.e., that, which, who, whom, whose, where), as in the following examples: (1) Here is the book which I am reading (book is the THEME of reading); (2) The man who was driving the car was a spy (man is the AGENT of driving). Adjective clauses are inherently verb-argument structures, thus their interpretation consists of detecting the semantic role between the head noun and the main verb in the relative clause.
This is addressed below.</Paragraph> </Section> </Section> <Section position="4" start_page="0" end_page="0" type="metho"> <SectionTitle> 3 Nominalizations and Mapping of NPs into Grammatical Role Structures </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.1 Nominalizations </SectionTitle> <Paragraph position="0"> A further analysis of various examples of noun - noun pairs encoded by the first three major types of NP-level constructions shows the need for a different taxonomy based on the syntactic and grammatical roles the constituents have in relation to each other. The criterion in this classification splits the noun - noun examples (respectively, the adjective - noun examples in complex nominals) into nominalizations and non-nominalizations.</Paragraph> <Paragraph position="1"> Nominalizations represent a particular subclass of NP constructions that in general have &quot;a systematic correspondence with a clause structure&quot; (Quirk et al. 1985). The head or modifier noun is derived from a verb, while the other noun (the modifier or, respectively, the head) is interpreted as an argument of this verb. For example, the noun phrase &quot;car owner&quot; corresponds to &quot;he owns a car&quot;. The head noun owner is morphologically related to the verb own. In other words, the interpretation of this class of NPs reduces to the automatic detection and interpretation of semantic roles mapped on the corresponding verb-argument structure.</Paragraph> <Paragraph position="2"> As in (Hull and Gomez 1996), in this paper we use the term nominalization to refer only to those senses of the nominalized nouns which are derived from verbs.</Paragraph> <Paragraph position="3"> For example, the noun &quot;decoration&quot; has three senses in WordNet 2.0: an ornament (#1), a medal (#2), and the act of decorating (#3). Only the last sense is a nominalization. However, there are more complex situations in which the underlying verb has more than one sense that refers to an action/event. This is the case of &quot;examination&quot;, which has five senses, of which four are action-related. In this case, the selection of the correct sense is provided by the context.</Paragraph> <Paragraph position="4"> We are interested in answering the following questions: (1) What is the best set of features that can capture the meaning of noun - noun nominalization pairs for each NP-level construction? and (2) What is the semantic behavior of nominalization constructions across NP levels?</Paragraph> </Section> <Section position="2" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.2 Taxonomy of nominalizations </SectionTitle> <Paragraph position="0"> Deverbal vs. verbal noun.</Paragraph> <Paragraph position="1"> (Quirk et al. 1985) generally classify nominalizations based on the morphological formation of the nominalized noun. They distinguish between deverbal nouns, i.e. those derived from the underlying verb through word formation, e.g., &quot;student examination&quot;, and verbal nouns, i.e. those derived from the verb by adding the gerund suffix &quot;-ing&quot;, e.g., &quot;cleaning woman&quot;. Most of the time, verbal nouns are derived from verbs which don't have a deverbal correspondent.</Paragraph> <Paragraph position="2"> Table 1 shows the mapping of the first three major syntactic NP constructions to the grammatical role level.
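As a toy illustration of the deverbal/verbal distinction above, the following Python sketch guesses the class of a nominalized noun from its suffix. The suffix list and the heuristic itself are illustrative assumptions, not the detection procedure used in the paper (which relies on lexical resources such as NomLex and WordNet).

# Naive sketch: guess whether a nominalized noun is a verbal (gerundive)
# or a deverbal nominalization. The suffix rules are assumptions made for
# illustration; a real system would consult NomLex/WordNet instead.
DEVERBAL_SUFFIXES = ("tion", "sion", "ment", "ance", "ence", "al", "er", "or")

def nominalization_kind(noun: str) -> str:
    if noun.endswith("ing"):
        return "verbal"     # gerundive, e.g. "cleaning" in "cleaning woman"
    if noun.endswith(DEVERBAL_SUFFIXES):
        return "deverbal"   # e.g. "examination" in "student examination"
    return "unknown"

for w in ("examination", "cleaning", "refusal", "owner"):
    print(w, "->", nominalization_kind(w))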
By analyzing a large corpus, we have observed that Quirk's grammatical roles shown in Table 1 are not uniformly distributed over the types of NP constructions. For example, one of the patterns in Table 1, illustrated by &quot;language teacher&quot; and &quot;teacher of language&quot;, cannot be encoded by s-genitives. Some of the non-nominalization NP constructions can also capture the arguments of a particular verb that is not explicitly present (e.g., subject - object, subject - complement).</Paragraph> <Paragraph position="3"> The &quot;General&quot; subclass refers to all other types of noun - noun constructions that cannot be mapped on verb-argument relations (e.g., &quot;hundreds of dollars&quot;). Adjective clauses are not part of Table 1 as they describe by default verb-argument relations (semantic roles). Thus they cannot be classified as nominalizations or non-nominalizations. Two other useful classifications for nominalizations are: paraphrased vs. non-paraphrased, and the classification according to the nominalized noun's underlying verb-argument structures as provided by the NomLex dictionary of English nominalizations (Macleod et al. 1998), discussed in more detail later.</Paragraph> <Paragraph position="5"> Paraphrased vs. non-paraphrased.</Paragraph> <Paragraph position="6"> In most cases, the relation between the nominalized noun and the other noun argument can be captured from the subcategorization properties of the underlying verb. In other words, most of the time there is a systematic correspondence between the nominalized NP construction and the predicate-argument structure of the corresponding verb in a clausal paraphrase (paraphrased nominalization). The predicate-argument structure can be captured by three grammatical roles: verb-subject, verb-object, and verb-complement. We call the arguments of the verb that appear more frequently or are obligatory frame arguments. From this point of view, the non-nominalized noun can be mapped on the verb-argument frame or not. Thus we can classify paraphrased nominalizations into framed and non-framed according to the presence or absence of the non-nominalized noun in the frame of the verb. The semantic classification of nominalizations involves first the detection of a nominalization, then the selection of the correct sense of the root verb, and finally the detection of the semantic relationship with the other noun.</Paragraph> <Paragraph position="7"> Besides the paraphrased nominalization, there is another type which occurs less frequently. We call this type non-paraphrased nominalization, as its meaning is different from that of its most closely related paraphrase clause. Examples: research budget, design contract, preparation booklet, publishing sub-industry, and editing error. An important observation is that the nominalized noun occurs most of the time in the first position in an NP construction.</Paragraph> <Paragraph position="8"> The criteria presented here also consider nominalizations with adjectival modifiers such as &quot;parental refusal&quot;. These adjectives are derived from nouns, so the construction is just a special case of nominalization between nouns.</Paragraph> <Paragraph position="9"> NomLex classification. The NomLex dictionary of nominalizations (Macleod et al. 1998) contains 1025 lexical entries and lists the verbs from which the nouns are derived. This dictionary specifies the complements allowed for a nominalization.
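To make the role of the NomLex dictionary concrete, here is a minimal sketch of a NomLex-style lookup that maps a candidate noun to a root verb and a nominalization class (the classes are described in the next paragraph). The entries and the flat format are simplified stand-ins for illustration only, not NomLex's actual representation.

# Toy NomLex-style lexicon: nominalized noun -> (root verb, class).
# Entries and format are illustrative assumptions only; the real NomLex
# (Macleod et al. 1998) stores much richer complement information.
NOMLEX_TOY = {
    "maker":       ("make",      "subj-nom"),
    "teacher":     ("teach",     "subj-nom"),
    "employee":    ("employ",    "obj-nom"),
    "acquisition": ("acquire",   "verb-nom"),
    "takeover":    ("take over", "verb-part"),
}

def detect_nominalization(noun):
    """First-phase check: is this noun a possible nominalization?"""
    entry = NOMLEX_TOY.get(noun.lower())
    if entry is None:
        return None
    root_verb, nom_class = entry
    return {"noun": noun, "root_verb": root_verb, "class": nom_class}

print(detect_nominalization("maker"))   # found: root verb "make", class subj-nom
print(detect_nominalization("budget"))  # None: not in the toy lexicon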
The mapping is done at a syntactic level only. NomLex is used in the first phase of our algorithm in order to detect a possible nominalization and the corresponding root verb. The criterion of the NomLex classification is based on the verb-argument correspondence: a. Verb-nom: the nominalized noun represents the action/state of the verb (e.g., &quot;acquisition challenge&quot;, &quot;depositary receipt&quot;); b. Subj-nom: the nominalized noun refers to the subject of the verb (e.g., &quot;auto maker&quot;, &quot;math teacher&quot;). This type is also called agential nominalization (Quirk et al. 1985), as the nominalized noun captures information about both the subject and the verb.</Paragraph> <Paragraph position="10"> c. Obj-nom: the nominalized noun refers to the object of the verb (e.g., &quot;court order&quot;, &quot;company employee&quot;); d. Verb-part: the nominalized noun is derived from a compositional verb (e.g., &quot;takeover target&quot;).</Paragraph> <Paragraph position="11"> 3.3 Corpus Analysis at NP level. The data. We have assembled a corpus of Wall Street Journal articles from TREC-9. Table 2 shows, for each syntactic category, the number of randomly selected sentences, the number of instances found in these sentences, and finally the number of nominalized instances our group managed to annotate by hand. (Table caption: used for semantic role detection; the classification is the result of our observations of nominalization patterns at the noun phrase level.) The annotation of each example consisted of specifying its feature vector and the most appropriate semantic relation as defined in (Moldovan et al. 2004).</Paragraph> </Section> <Section position="3" start_page="0" end_page="0" type="sub_section"> <SectionTitle> Inter-annotator Agreement </SectionTitle> <Paragraph position="0"> The annotators, four PhD students in Computational Semantics, worked in groups of two, each group focusing on one half of the corpus to annotate. Besides the type of relation, the annotators were asked to provide information about the order of the modifier and the head nouns in the syntactic constructions, if applicable; for example, &quot;owner of the car&quot; vs. &quot;car of the owner&quot;.</Paragraph> <Paragraph position="1"> The annotators were also asked to indicate if the instance was a nominalization and, if so, which of the noun constituents was derived from a verb (e.g., the head noun nominalization &quot;student protest&quot;, or the modifier noun nominalization &quot;working woman&quot;; cf. (Quirk et al. 1985)).</Paragraph> <Paragraph position="2"> The annotators' agreement was measured using the Kappa statistic (Siegel and Castellan 1988), one of the most frequently used measures of inter-annotator agreement for classification tasks: K = (P(A) - P(E)) / (1 - P(E)), where P(A) is the proportion of times the raters agree and P(E) is the probability of agreement by chance. The K coefficient is 1 if there is total agreement among the annotators, and 0 if there is no agreement other than that expected to occur by chance.</Paragraph> <Paragraph position="5"> For each construction, the corpus was split after agreement with an 80/20 training/testing ratio. For each pattern, we computed the K coefficient only for those instances tagged with one of the 35 semantic relations (K values: NN (0.64), AdjN (0.70), s-genitive (0.69), of-genitive (0.73), adjective phrases (0.67), and adjective clauses (0.71)).
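The K statistic above can be computed directly from two annotators' label sequences. The sketch below implements Cohen's variant of the coefficient (the chance-agreement estimate in (Siegel and Castellan 1988) differs slightly), with made-up labels purely to exercise the function.

from collections import Counter

def kappa(labels_a, labels_b):
    """K = (P(A) - P(E)) / (1 - P(E)) for two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_agree = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement estimated from each annotator's label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_agree - p_chance) / (1 - p_chance)

ann1 = ["AGENT", "THEME", "THEME", "LOCATION", "AGENT", "TEMPORAL"]
ann2 = ["AGENT", "THEME", "AGENT", "LOCATION", "AGENT", "TEMPORAL"]
print(round(kappa(ann1, ann2), 2))   # about 0.77 for these made-up labels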
For each pattern, we also calculated the number of pairs that were tagged with OTHERS by both annotators, over the number of examples classified in this category by at least one of the judges, averaged over the number of patterns considered (agreement for OTHERS: 75%).</Paragraph> <Paragraph position="6"> The K coefficient shows a good level of agreement for the training and testing data on the set of 35 relations, taking into consideration the difficulty of the task. This can be explained by the instructions the annotators received prior to annotation and by their expertise in lexical semantics.</Paragraph> </Section> <Section position="4" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.4 Distribution of Semantic Relations </SectionTitle> <Paragraph position="0"> Even though noun phrase constructions are very productive, allowing for a large number of possible interpretations, Table 3 shows that a relatively small set of 35 semantic relations covers a significant part of the semantic distribution of these constructions on a large open-domain corpus. Moreover, the distribution of these relations is dependent on the type of NP construction, each type encoding a particular subset. For example, in the case of s-genitives, 13 of the 35 relations considered were found. The most frequently occurring relations were AGENT, TEMPORAL, LOCATION, and THEME. By comparing the subsets of semantic relations in each column we can notice that these semantic spaces (the set of semantic relations an NP construction can encode) are not identical, confirming our initial intuition that the NP constructions are not merely alternative ways of packing the same information. Table 3 also shows that there is a subset of semantic relations that can be fully encoded by all types of NP constructions. The statistics about the annotated nominalized examples are as follows (lines 3 and 4 in Table 2): N-N (32.30%), Adj-N (30.80%), s-genitive (21.09%), of-genitive (21.8%), adjective phrase (40.5%).</Paragraph> <Paragraph position="1"> 80% of the examples in adjective phrases (respectively, 94% in s-genitives) had the nominalized noun in the head position.</Paragraph> <Paragraph position="2"> This simple analysis leads to the important conclusion that the NP constructions must be treated separately, as their semantic content is different. We can draw from here the following conclusions: 1. Not all semantic relations can be encoded by all NP constructions; 2. There are semantic relations that have preferences for particular syntactic constructions.</Paragraph> </Section> <Section position="5" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.5 Model </SectionTitle> <Paragraph position="0"/> <SectionTitle> 3.6 Support Vector Machines </SectionTitle> <Paragraph position="0"> Support Vector Machines (SVM) have a strong mathematical foundation (Vapnik 1982) and have been applied successfully to text classification (Tong and Koller 2001), speech recognition, and other applications. We applied SVM to the semantic classification problem and obtained encouraging results.</Paragraph> <Paragraph position="1"> SVM algorithms are a special class of hyperplane classifiers that use the information encoded in the dot-products of the transformed feature vectors as a similarity measure.
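For illustration, a minimal sketch of such a classifier follows, using scikit-learn's SVC, which wraps the LIBSVM package used in these experiments and implements the pairwise (one-vs-one) voting scheme described below. The feature encodings and relation labels are made-up placeholders, not the actual feature vectors of Section 3.7.

# Minimal sketch of multi-class SVM classification with pairwise voting.
# SVC wraps LIBSVM: it trains one binary classifier per pair of classes
# (n*(n-1)/2 of them) and predicts by voting. The numeric feature vectors
# and relation labels below are made up for illustration only.
from sklearn.svm import SVC

X_train = [
    [3, 12, 1, 0],   # hypothetical encoding of (noun class, verb class, nom. type, gerundive)
    [3, 12, 1, 0],
    [7,  5, 2, 1],
    [7,  5, 2, 1],
    [1,  9, 0, 0],
    [1,  9, 0, 0],
]
y_train = ["THEME", "THEME", "AGENT", "AGENT", "LOCATION", "LOCATION"]

clf = SVC(kernel="linear", decision_function_shape="ovo")
clf.fit(X_train, y_train)
print(clf.predict([[3, 12, 1, 0]]))   # expected: ['THEME']

The kernel parameter of SVC also exposes the four kernel families discussed below (linear, polynomial, radial basis function, and sigmoid).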
The similarity between two instances x_i and x_j is given by a kernel function K(x_i, x_j) = Phi(x_i) · Phi(x_j), the inner product of the non-linear function Phi that maps the original feature vectors into a real feature space.</Paragraph> <Paragraph position="4"> The function that provides the best classification is of the form f(x) = sign(sum_i alpha_i y_i K(x_i, x) + b). The training vectors x_i for which the Lagrange multipliers alpha_i are non-zero are called support vectors. Intuitively, they are the vectors closest to the separating hyperplane. SVM provide good classifiers with few, well-chosen training examples.</Paragraph> <Paragraph position="7"> In order to achieve classification into n classes, n > 2, a binary classifier is built for each pair of classes (a total of n(n-1)/2 classifiers). A voting procedure is then used to establish the class of a new example. For the experiments with semantic relations, the simplest voting scheme has been chosen: each binary classifier has one vote, which is assigned to the class it chooses when it is run. Then the class with the largest number of votes is considered to be the answer. Using the specific nature of the semantic relation detection problem, new voting schemes can be designed, with good prospects of improving the overall precision.</Paragraph> <Paragraph position="8"> The software used in these experiments is the package LIBSVM, http://www.csie.ntu.edu.tw/~cjlin/libsvm/, which implements the SVM algorithm described above.</Paragraph> <Paragraph position="9"> The choice of the kernel is the most difficult part of applying SVM algorithms, as the performance of the classifier might be enhanced by a judicious choice of kernel. We used in our experiments 4 types of general kernels (linear, polynomial, radial basis function, and sigmoid), with good results. All of them had nearly the same performance, with slight deviations between 2% and 4% on a reduced testing set. Remarkably, all classifiers, regardless of the kernel used, made the same mistakes, i.e., misclassified the same examples (e.g., a classifier with 58% precision makes the same mistakes as one with 62% precision, plus some of its own, and this situation occurred even when the two classifiers had different kernels), while the overall precision stayed around the same value during coefficient tuning.</Paragraph> <Paragraph position="10"> This shows that the limitation is imposed by the classification task rather than by the kernel type.</Paragraph> </Section> <Section position="6" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.7 Feature space </SectionTitle> <Paragraph position="0"> The key to a successful semantic classification of NP constructions is the identification of their most specific lexical, syntactic, semantic, and contextual features. We developed algorithms for finding their values automatically.</Paragraph> <Paragraph position="1"> The values of these features are determined with the help of some important resources mentioned below.</Paragraph> <Paragraph position="2"> ComLex (Grishman et al. 1994) is a computational lexicon providing syntactic information for more than 38,000 English headwords.
It contains detailed syntactic information about the attributes of each lexical item, as well as subcategorization frames for words that take arguments.</Paragraph> <Paragraph position="3"> This last feature is the most useful for our task, as the senses of verbs are clustered by their syntactic frames. We will use ComLex in combination with VerbLeX to map the syntactic behaviors to verb semantic classes.</Paragraph> <Paragraph position="4"> VerbLeX is an in-house verb lexicon built by enriching VerbNet (Kipper et al. 2000) with verb synsets from WordNet and verbs extracted from the semantic frames of FrameNet. It contains information about the semantic roles that can appear within a class of verbs, together with the selectional restrictions for their lexical realizations, syntactic subcategorization, and WordNet verb senses. The syntactic information is less detailed than in ComLex, but a mapping between these two resources provides both the semantic and the syntactic information needed for the task. Of the total of 13,213 verbs in the extended VerbNet, 6,077 were distinct. It also provides a mapping from the FrameNet deep semantic roles to general thematic roles (the list defined in (Moldovan et al. 2004)), and use cases for VerbNet.</Paragraph> <Paragraph position="6"> (Table caption: the set of semantic relations was presented in (Moldovan et al. 2004); the percentages represent the number of examples that encode a semantic relation for a particular pattern; the last row shows the number of examples covered by each pattern in the entire annotated corpus of 1,502 pairs.)</Paragraph> <Paragraph position="7"> An essential aspect of our approach below is the word sense disambiguation (WSD) of the content words (nouns, verbs, adjectives, and adverbs). Using a state-of-the-art open-text WSD system, each word is mapped into its corresponding WordNet 2.0 sense. When disambiguating each word, the WSD algorithm takes into account the surrounding words, and this is one important way through which context gets to play a role in the semantic classification of NPs.</Paragraph> <Paragraph position="8"> So far, we have identified and experimented with the following NP features: 1. Semantic class of the non-nominalized noun. The non-nominalized noun is classified into one of the 39 EuroWordNet noun semantic classes. VerbNet classes extended in VerbLeX contain selectional restrictions for the different semantic roles inside the verb frame. These restrictions are expressed in terms of the EuroWordNet noun semantic classes. Example: &quot;computer maker&quot;, where &quot;computer&quot; is mapped to the ABSTRACT noun category in EuroWordNet. We intend to map the EuroWordNet top noun semantic classes onto their WordNet correspondents.</Paragraph> <Paragraph position="9"> 2. Verb class for the nominalized noun (or the verb in adjective clauses) maps the nominalizing verb into its VerbLeX class. The intuition behind this feature is that semantic relations cluster around specific VerbLeX verb classes.</Paragraph> <Paragraph position="10"> 3. Type of nominalization indicates the NomLex nominalization class. For this experiment we considered only examples that could be found in NomLex. By specifying the subj-nom, obj-nom, and verb-nom types of nominalization, we reduce the list of possible semantic relations the verb can have with the non-nominalized noun.
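A minimal sketch of how the first three features might be assembled for one noun - noun pair is given below; the three lookup tables are toy stand-ins for EuroWordNet, VerbLeX, and NomLex, and every class label in them is an illustrative assumption rather than the resources' real content.

# Toy sketch of assembling features 1-3 for a pair such as "auto maker".
# NOUN_CLASS stands in for the EuroWordNet noun classes, VERBLEX_TOY for
# the VerbLeX verb classes, and NOMLEX_TOY for NomLex; all values are
# illustrative assumptions.
NOUN_CLASS  = {"auto": "OBJECT", "fetal-tissue": "body-part"}
VERBLEX_TOY = {"make": "build-26.1", "transplant": "fill-9.8"}
NOMLEX_TOY  = {"maker": ("make", "subj-nom"), "transplant": ("transplant", "verb-nom")}

def features_1_to_3(modifier, head):
    root_verb, nom_type = NOMLEX_TOY[head]
    feats = {
        "sem_class_non_nominalized": NOUN_CLASS[modifier],   # feature 1
        "verb_class":                VERBLEX_TOY[root_verb], # feature 2
        "nominalization_type":       nom_type,               # feature 3
    }
    # A subj-nom head already expresses the AGENT, so the candidate
    # relations for the modifier shrink (here, toward THEME/object).
    if nom_type == "subj-nom":
        feats["candidate_relations"] = ["THEME"]
    return feats

print(features_1_to_3("auto", "maker"))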
Example: &quot;computer maker&quot;, where &quot;maker&quot; is an agential deverbal noun that captures both the subject (respectively, AGENT) and the verb.</Paragraph> <Paragraph position="12"> Thus, the noun &quot;computer&quot; can only map to the object (respectively, THEME).</Paragraph> <Paragraph position="13"> 4. Verbal nominalization is a binary feature indicating whether the nominalized noun is gerundive or not. Chomsky (Chomsky 1970) showed that gerundive nominalizations behave differently from derived nominalizations. Example: &quot;woman worker&quot; vs. &quot;working woman&quot;; here &quot;working&quot; is a verbal nominal.</Paragraph> </Section> </Section> <Section position="5" start_page="0" end_page="0" type="metho"> <SectionTitle> 5. Semantic class of the coordinating word </SectionTitle> <Paragraph position="0"> This is a contextual feature and can be either a noun (if the phrase that contains the nominalization is attached to a noun) or a verb (if the phrase is an argument of the verb in the sentence). The feature value is either the VerbLeX class of the verb or the root of the noun in the WordNet hierarchy. The coordinating word captures some properties present in the noun phrase, properties that help to discriminate between various competing semantic relations. Example: &quot;Foreigners complain that they have limited access to [government procurement] in Japan.&quot; - the coordinating word is &quot;access&quot;, which is a psychological feature.</Paragraph> <Paragraph position="1"> 6. Position of the nominalized noun depicts the position of the nominalized noun in the compound, i.e., either head or modifier. Example: &quot;working woman&quot;, where the nominalized noun is the modifier, and &quot;computer maker&quot;, where the nominalized noun is the head noun.</Paragraph> <Paragraph position="2"> 7. In frame is a three-valued feature indicating whether the compound has a paraphrase and, if so, whether the peer in the compound is framed or not. If the peer in the NP noun-noun pair is in the corresponding VerbLeX predicate-argument frame, then the relation is captured in the predicate-argument structure (framed). If it is not in the VerbLeX frame, but is an external argument (e.g., LOCATION, TEMPORAL, MANNER, etc.), then it is non-framed. Otherwise, there is no paraphrase that keeps the meaning, so the relation is not defined by the predicate-argument frame. Example: &quot;computer maker&quot; is framed, whereas &quot;backyard composting&quot; is non-framed, and &quot;editing error&quot; is no-paraphrase (it has no paraphrase of the verb-argument type).</Paragraph> <Paragraph position="3"> 8. Relative pronoun/adverb applies only to adjective clauses and embeds information about the grammatical and/or semantic role of the head noun in the subordinate clause. Example: &quot;the room where the meeting took place&quot; - the word where implies location.</Paragraph> </Section> <Section position="6" start_page="0" end_page="0" type="metho"> <SectionTitle> 9. Grammatical role of relative pronoun/adverb </SectionTitle> <Paragraph position="0"> This feature applies only to adjective clauses and specifies the grammatical role of the relative pronoun/adverb, if one exists. It depicts more precisely the grammatical role played in the sentence by the head noun. We used for this purpose an in-house rule-based grammatical role detection module, which annotates the following roles (cf.
Quirk et al. 1985): subject, direct object, indirect object, subject complement (the argument of copular verbs), object complement (the second argument of complex transitive verbs), object oblique, and free predicative; it also approximates extent and temporal semantic roles. Example: in &quot;the man who gave Mary the book&quot;, Mary and the book are the indirect object and, respectively, the direct object, so man cannot be THEME or RECIPIENT.</Paragraph> <Paragraph position="1"> 10. Voice. This feature applies only to adjective clauses and indicates the voice of the verb in the relative clause. The voice plays an important role in the correlation between grammatical roles and semantic roles in a sentence. Example: &quot;the child that was taken to the zoo&quot; - the verb is in the passive voice, so the child is in a THEME relation with the verb take.</Paragraph> <Paragraph position="2"> Let's consider an example of a nominalization with its features.</Paragraph> <Paragraph position="3"> &quot;Several candidates have withdrawn their names from consideration after administration officials asked them for their views on abortion and fetal-tissue transplants.&quot; The noun compound &quot;fetal-tissue#1 transplant#1&quot; is detected as a nominalization, as the noun &quot;transplant&quot; is derived from the verb &quot;to transplant#3&quot;. The features and their values are: Feature 1: semantic class for fetal-tissue: body-part; Feature 2: verb class for transplant: fill-9.8; Feature 3: type of nominalization: verb-nom; Feature 4: gerundive: no (0); Feature 5: semantic class for the coordinating word (&quot;view&quot;) = psychological feature#1; Feature 6: position of the nominalized noun = second; Feature 7: in frame = yes.</Paragraph> <Paragraph position="4"> The in-house extended verb lexicon VerbLeX shows the following semantic frame for the verb class fill-9.8: Agent[+animate] Destination[+location -region] Theme[+concrete]. Body-part is a subcategory of concrete. Thus, for this example the semantic relation is THEME.</Paragraph> </Section> </Paper>