International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 157 - Number 10
Year of Publication: 2017
Authors: B. S. Daga, A. A. Ghatol, Anuprita Daga
DOI: 10.5120/ijca2017912685
B. S. Daga, A. A. Ghatol, Anuprita Daga. Video Events Extraction based on Mixed Feature Modeling. International Journal of Computer Applications. 157, 10 (Jan 2017), 22-29. DOI=10.5120/ijca2017912685
Nowadays, the growing use of video-based applications has created a need for extracting the content of videos. Unprocessed data and low-level features alone cannot satisfy users' needs; a deeper understanding of the content at the semantic level is required. Currently, the gap between low-level representative features and high-level semantic content is bridged by manual techniques, which are inefficient, subjective, time-consuming, and limit querying capabilities. A system that allows the user to query and retrieve associated objects, events, and concepts automatically is proposed. Events may also include representative objects, actions, their impressions, and so on. An ontology-based video semantic content model that uses spatial/temporal relations in event and concept definitions is leveraged. A simple and efficient process is evaluated, considering main-object detection and its commonly associated mixed features: directly measurable features such as shape and texture, and derived features such as co-occurrence and topology. The ontology definition provides a rule-construction standard applicable across a wide range of domains. In addition to domain ontologies, extra rule definitions are used to lower the cost of computing spatial relations. This makes it possible to describe complicated events in a way that is close to human thinking. The proposed system has been implemented and tested on domains such as road accidents and sports, using precision and recall measures.
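The abstract reports evaluation by precision and recall. As a minimal illustrative sketch (not the paper's actual evaluation code), these measures can be computed by comparing the set of events the system retrieves against a ground-truth set; the event IDs below are hypothetical:

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for retrieved events.

    retrieved: set of event IDs returned by the system (hypothetical)
    relevant:  set of event IDs in the ground truth (hypothetical)
    """
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical events from a road-accident detection run
retrieved = {"crash_01", "crash_02", "overtake_03"}
relevant = {"crash_01", "crash_02", "crash_04"}
p, r = precision_recall(retrieved, relevant)
# Two of three retrieved events are correct, and two of three
# ground-truth events were found: precision = recall = 2/3.
```

Precision penalizes false detections, while recall penalizes missed events, so reporting both gives a balanced view of the extraction quality claimed for the accident and sports domains.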