**Abstract**

We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the Scholastic Aptitude Test (SAT). A verbal analogy has the form *A:B::C:D*, meaning '*A* is to *B* as *C* is to *D*'; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, *A:B*, and the problem is to select the most analogous word pair, *C:D*, from a set of five choices. The VSM algorithm correctly answers 47% of a collection of 374 college-level analogy questions (random guessing would yield 20% correct). We motivate this research by relating it to work in cognitive science and linguistics, and by applying it to a difficult problem in natural language processing: determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as 'laser printer', according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbor algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5% (random guessing: 3.3%). With 5 classes of semantic relations, the F value is 43.2% (random: 20%). The performance is state-of-the-art for these challenging problems.
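The abstract leaves the feature construction unspecified, so the sketch below is only a minimal illustration of the two ideas it describes, under the assumption that a word pair can be mapped to a sparse vector of corpus statistics characterizing how the words relate. The helper `relation_vector`, the toy pattern counts, and the class labels are all hypothetical stand-ins, not the paper's implementation: `answer_analogy` scores SAT choices by the cosine similarity of their relation vectors to the stem pair's vector, and `classify_nn` labels a noun-modifier pair with the class of its most analogous training pair.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dict: feature -> weight)."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def answer_analogy(stem, choices, relation_vector):
    """Return the index of the choice pair most analogous to the stem pair.

    relation_vector(a, b) is a hypothetical stand-in for the corpus step:
    it should map a word pair to a feature vector characterizing the
    semantic relation between the words.
    """
    stem_vec = relation_vector(*stem)
    return max(range(len(choices)),
               key=lambda i: cosine(stem_vec, relation_vector(*choices[i])))

def classify_nn(pair, training, relation_vector):
    """Supervised nearest-neighbor classification of a noun-modifier pair:
    return the semantic-relation label of the most analogous training pair."""
    vec = relation_vector(*pair)
    _, label = max(training,
                   key=lambda item: cosine(vec, relation_vector(*item[0])))
    return label

# Toy illustration with hand-built pattern-count vectors (not real corpus data).
toy_vectors = {
    ("mason", "stone"):        {"X works Y": 5, "X carves Y": 3},
    ("carpenter", "wood"):     {"X works Y": 4, "X carves Y": 2},
    ("doctor", "stethoscope"): {"X listens with Y": 6},
    ("laser", "printer"):      {"Y uses X": 7},
    ("steam", "engine"):       {"Y uses X": 5},
    ("flu", "virus"):          {"Y causes X": 4},
}
lookup = lambda a, b: toy_vectors[(a, b)]

best = answer_analogy(("mason", "stone"),
                      [("carpenter", "wood"), ("doctor", "stethoscope")],
                      lookup)
print(best)   # 0 -> carpenter:wood shares the stem pair's relation features

label = classify_nn(("laser", "printer"),
                    [(("steam", "engine"), "instrument"),
                     (("flu", "virus"), "cause")],
                    lookup)
print(label)  # 'instrument' -> steam:engine is the nearest training pair
```

Hand-built counts stand in for corpus statistics here; the point is only that both tasks reduce to the same operation, a single pairwise similarity between relation vectors.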