TY - GEN
T1 - Gradable adjective embedding for commonsense knowledge
AU - Lee, Kyungjae
AU - Cho, Hyunsouk
AU - Hwang, Seung won
N1 - Publisher Copyright:
© 2017, Springer International Publishing AG.
PY - 2017
Y1 - 2017
N2 - Adjective understanding is crucial for answering qualitative or subjective questions, such as “is New York a big city?”, yet it has not been studied as thoroughly as answering factoid questions. Our goal is to project adjectives into a continuous distributional space, which enables answering not only the qualitative question above but also comparative ones, such as “is New York bigger than San Francisco?”. As a basis, we build on the probabilities P(New York | big city) and P(Boston | big city) observed in Hearst patterns from a large Web corpus (as captured in a probabilistic knowledge base such as Probase). From this base model, we observe that this probability predicts the graded adjective score well, but only for “head entities” with sufficient observations. However, the observations of a city are scattered across many adjectives: cities are described with 194 adjectives in Probase, and, on average, only 2% of cities are sufficiently observed in adjective-modified concepts. Our goal is to train a distributional model such that any entity can be associated with any adjective by its distance from the vector of the ‘big city’ concept. To overcome sparsity, we learn highly synonymous adjectives, such as big and huge for cities, to improve prediction accuracy. We validate our findings with real-world knowledge bases.
AB - Adjective understanding is crucial for answering qualitative or subjective questions, such as “is New York a big city?”, yet it has not been studied as thoroughly as answering factoid questions. Our goal is to project adjectives into a continuous distributional space, which enables answering not only the qualitative question above but also comparative ones, such as “is New York bigger than San Francisco?”. As a basis, we build on the probabilities P(New York | big city) and P(Boston | big city) observed in Hearst patterns from a large Web corpus (as captured in a probabilistic knowledge base such as Probase). From this base model, we observe that this probability predicts the graded adjective score well, but only for “head entities” with sufficient observations. However, the observations of a city are scattered across many adjectives: cities are described with 194 adjectives in Probase, and, on average, only 2% of cities are sufficiently observed in adjective-modified concepts. Our goal is to train a distributional model such that any entity can be associated with any adjective by its distance from the vector of the ‘big city’ concept. To overcome sparsity, we learn highly synonymous adjectives, such as big and huge for cities, to improve prediction accuracy. We validate our findings with real-world knowledge bases.
KW - Adjective understanding
KW - Commonsense knowledge
KW - Word embedding
UR - http://www.scopus.com/inward/record.url?scp=85018409841&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-57529-2_63
DO - 10.1007/978-3-319-57529-2_63
M3 - Conference contribution
AN - SCOPUS:85018409841
SN - 9783319575285
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 814
EP - 827
BT - Advances in Knowledge Discovery and Data Mining - 21st Pacific-Asia Conference, PAKDD 2017, Proceedings
A2 - Cao, Longbing
A2 - Shim, Kyuseok
A2 - Lee, Jae-Gil
A2 - Kim, Jinho
A2 - Moon, Yang-Sae
A2 - Lin, Xuemin
PB - Springer Verlag
T2 - 21st Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2017
Y2 - 23 May 2017 through 26 May 2017
ER -