International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 40 - Number 17
Year of Publication: 2012
Authors: Amira Abd El-atey, Sherif El-etriby, Arabi S. Kishk
DOI: 10.5120/5072-7464
Amira Abd El-atey, Sherif El-etriby, Arabi S. Kishk. Semantic Data Extraction from Infobox Wikipedia Template. International Journal of Computer Applications. 40, 17 (February 2012), 18-23. DOI=10.5120/5072-7464
Wikis are an established means for the collaborative authoring, versioning and publishing of textual articles. Wikipedia, for example, succeeded in creating by far the largest encyclopedia on the basis of a wiki alone. Wikis are built with wiki software and are often used to produce collaborative works. One of the key challenges in computer science is answering rich queries. Several approaches have been proposed for extending wikis to allow the creation of structured and semantically enriched content. The Semantic Web enables the creation of such content, and Semantic Web data helps us answer rich queries. One of the newer applications of the Semantic Web is DBpedia. The DBpedia project focuses on creating semantically enriched, structured information from Wikipedia. In this article, we describe and clarify the DBpedia project. We test the project by extracting structured data as triples from several Wikipedia resources, and we illustrate the results with examples of the car resource and the Berlin resource. The output data is in RDF (Resource Description Framework) triple format, the basic technology used for building the Semantic Web. By making use of this semantic structure, we can answer rich queries.
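As a concrete illustration of the kind of rich query this structure enables, the sketch below retrieves a handful of RDF triples describing the Berlin resource. It is a minimal Python example, assuming the public DBpedia SPARQL endpoint at https://dbpedia.org/sparql and the requests library; the resource URI and the result limit are illustrative choices and are not taken from the paper itself.

    # Minimal sketch: query DBpedia for a few triples about the Berlin resource.
    # Assumes the public SPARQL endpoint is reachable; endpoint URL and LIMIT
    # are illustrative, not prescribed by the paper.
    import requests

    ENDPOINT = "https://dbpedia.org/sparql"

    # Select predicate/object pairs where the subject is the Berlin resource.
    QUERY = """
    SELECT ?property ?value
    WHERE {
      <http://dbpedia.org/resource/Berlin> ?property ?value .
    }
    LIMIT 10
    """

    def fetch_triples():
        # Request the results in the standard SPARQL JSON results format.
        response = requests.get(
            ENDPOINT,
            params={"query": QUERY, "format": "application/sparql-results+json"},
            timeout=30,
        )
        response.raise_for_status()
        bindings = response.json()["results"]["bindings"]
        # Each binding corresponds to one (subject, predicate, object) triple,
        # with the subject fixed to the Berlin resource.
        for row in bindings:
            print(row["property"]["value"], "->", row["value"]["value"])

    if __name__ == "__main__":
        fetch_triples()

Each printed line corresponds to one RDF triple extracted from the structured (infobox-derived) data for Berlin, which is the same triple format discussed in the article.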