About:
“A picture is worth a thousand words,” the adage reads. However, pictures cannot replace words when it comes to conveying clear, (mostly) unambiguous, and concise knowledge. Images and text indeed reveal different and complementary information that, when combined, yields more than the sum of what either medium contains on its own. Visual and textual information can be combined by linking the entities mentioned in the text with those shown in the pictures. To further integrate this with an agent's background knowledge, an additional step is necessary: either finding the entities in the agent's knowledge base that correspond to those mentioned in the text or shown in the picture, or extending the knowledge base with the newly discovered entities. We call this complex task Visual-Textual-Knowledge Entity Linking (VTKEL). In this paper, after providing a precise definition of the VTKEL task, we present a dataset composed of about 30K commented pictures, annotated with visual and textual entities and linked to the YAGO ontology. We then develop a purely unsupervised algorithm for solving the VTKEL task. The evaluation on the VTKEL dataset shows promising results.
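The task described above can be made concrete with a small sketch. The following Python fragment is a minimal, hypothetical illustration of a VTKEL-style annotation (a picture, its caption, and the entity mentions found in each medium) together with a naive type-based linking step; the class names, the placeholder knowledge-base IRIs, and the matching heuristic are illustrative assumptions, not the dataset format or the authors' algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class TextualMention:
    surface: str   # span of the caption, e.g. "a dog"
    span: tuple    # character offsets in the caption
    kb_type: str   # coarse type predicted by an NLP pipeline

@dataclass
class VisualMention:
    bbox: tuple    # (x, y, width, height) in pixels
    kb_type: str   # coarse type predicted by an object detector

@dataclass
class VTKELExample:
    image_id: str
    caption: str
    textual_mentions: list = field(default_factory=list)
    visual_mentions: list = field(default_factory=list)

# Toy background knowledge base: entity IRI -> type.
# The real task links to YAGO; these IRIs are placeholders.
KB = {
    "http://example.org/kb/Dog_1": "Dog",
    "http://example.org/kb/Frisbee_1": "Frisbee",
}

def link(example, kb):
    """Link textual and visual mentions to each other and to the KB.

    A crude, purely type-based alignment: mentions of the same type are
    assumed to denote the same entity; if the KB has no entity of that
    type, a new entity IRI is minted (knowledge-base extension).
    """
    links = []
    for tm in example.textual_mentions:
        # Cross-modal step: find a visual mention with a compatible type.
        vm = next((v for v in example.visual_mentions if v.kb_type == tm.kb_type), None)
        # Knowledge step: find a KB entity of that type, or mint a new one.
        iri = next((e for e, t in kb.items() if t == tm.kb_type), None)
        if iri is None:
            iri = f"http://example.org/kb/{tm.kb_type}_{len(kb) + 1}"
            kb[iri] = tm.kb_type  # extend the KB with the new entity
        links.append((tm.surface, vm.bbox if vm else None, iri))
    return links

example = VTKELExample(
    image_id="img_0001",
    caption="A dog catches a frisbee",
    textual_mentions=[
        TextualMention("A dog", (0, 5), "Dog"),
        TextualMention("a frisbee", (14, 23), "Frisbee"),
    ],
    visual_mentions=[
        VisualMention((34, 20, 120, 90), "Dog"),
        VisualMention((160, 15, 40, 40), "Frisbee"),
    ],
)

for surface, bbox, iri in link(example, KB):
    print(f"{surface!r} <-> box {bbox} <-> {iri}")
```

The sketch prints one line per textual mention, pairing it with a bounding box and a knowledge-base IRI; a real VTKEL system would replace the type-equality heuristic with the unsupervised alignment the paper describes.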
An entity is referenced as follows:
Subject of Sentences In Document
Object of Sentences In Document
Explicit Coreferences
Implicit Coreferences
Graph IRI                                          Count
http://ns.inria.fr/covid19/graph/entityfishing     5
http://ns.inria.fr/covid19/graph/articles          3