Research Article Open Access

Multimodal Integration (Image and Text) Using Ontology Alignment

Ahmad Adel Abu Shareha, Mandava Rajeswari and Dhanesh Ramachandram


Problem statement: This study proposed a multimodal integration method at the concept level to investigate information from multiple modalities. The multimodal data were represented as two separate lists of concepts extracted from images and their related text. The concepts extracted from image analysis are often ambiguous, while the concepts extracted from text processing can be sense-ambiguous. The major problems facing the integration of the underlying modalities (image and text) were the difference in coverage and the difference in granularity level. Approach: This study proposed a novel application of ontology alignment to unify the underlying ontologies. The concept lists were represented in structured form within the corresponding ontologies; the two structured lists were then enriched and matched based on the alignment, and this matching represents the final knowledge. Results: The difference in coverage was resolved using the alignment process, and the difference in granularity level was resolved using the enrichment process. Thus, the proposed integration produced accurate integrated results. Conclusion: Integrating these concepts allows the totality of the knowledge to be expressed more precisely.
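The pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the toy ontologies, the hand-written alignment mapping, and all concept names below are hypothetical, chosen only to show how enrichment (adding ancestor concepts, addressing the granularity mismatch) and alignment (mapping between ontologies, addressing the coverage mismatch) combine to match two concept lists.

```python
# Illustrative sketch only -- the ontologies, alignment and concepts
# below are hypothetical, not from the paper.

# Toy ontologies: child concept -> parent concept
IMAGE_ONTOLOGY = {"tiger": "feline", "feline": "animal"}
TEXT_ONTOLOGY = {"big-cat": "mammal", "mammal": "living-thing"}

# Toy alignment: image-ontology concept -> equivalent text-ontology concept
ALIGNMENT = {"feline": "big-cat", "animal": "living-thing"}

def enrich(concepts, ontology):
    """Add every ancestor of each concept (handles the granularity mismatch)."""
    enriched = set(concepts)
    for c in concepts:
        while c in ontology:
            c = ontology[c]
            enriched.add(c)
    return enriched

def integrate(image_concepts, text_concepts):
    """Enrich both concept lists, map image concepts through the
    alignment (handles the coverage mismatch), and intersect the
    results to obtain the matched, integrated knowledge."""
    img = enrich(image_concepts, IMAGE_ONTOLOGY)
    txt = enrich(text_concepts, TEXT_ONTOLOGY)
    mapped = {ALIGNMENT.get(c, c) for c in img}
    return mapped & txt

print(integrate(["tiger"], ["big-cat"]))
# -> {'big-cat', 'living-thing'}
```

Without enrichment, "tiger" and "big-cat" would not match at all; with enrichment and alignment, the two lists agree on the shared concepts, which is the intuition behind matching at the concept level.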

American Journal of Applied Sciences
Volume 6 No. 6, 2009, 1217-1224


Submitted On: 30 April 2008; Published On: 30 June 2009

How to Cite: Shareha, A. A. A., Rajeswari, M. & Ramachandram, D. (2009). Multimodal Integration (Image and Text) Using Ontology Alignment. American Journal of Applied Sciences, 6(6), 1217-1224.




  • Concept-level multimodal integration
  • ontology alignment and semantic knowledge