1. Ph.D. Student of Electrical and Computer Engineering, Yazd University
2. Department of Electrical and Computer Engineering, Yazd University
Abstract
Refining image annotation is an effective approach to improving tag-based image retrieval. Many images on social networks and in search engines carry vague, incomplete, or content-irrelevant tags, and these unreliable tags reduce retrieval precision. Recently, several tag refinement (TR) algorithms have been proposed to remove label noise and enrich image annotations. To achieve optimal TR results, extracting features that describe the visual content of images well has a direct impact on the accuracy of the TR process; obtaining a description that is appropriate and relevant to image content is a major challenge in refining image annotation. Given the effectiveness of deep learning across research fields, in this paper we use a deep convolutional neural network (DCNN) to extract efficient features for computing the visual and semantic similarity of images. Employing transfer learning from the ImageNet database in the DCNN on the large-scale NUS-WIDE dataset demonstrates the effectiveness of this approach for refining image annotation.
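The abstract's pipeline — extract DCNN features, compute visual similarity, then refine tags from similar images — can be illustrated with a minimal sketch. The feature vectors, tag sets, and the simple nearest-neighbour tag-propagation rule below are illustrative assumptions, not the paper's actual method; in practice the features would be penultimate-layer activations of an ImageNet-pretrained DCNN.

```python
import numpy as np

# Hypothetical feature vectors standing in for DCNN activations
# (e.g. the penultimate layer of an ImageNet-pretrained network).
features = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 1.0, 0.9],
    [0.1, 0.9, 1.0],
])
# Noisy/incomplete user tags for the same four images (illustrative).
tags = [{"sky", "cloud"}, {"sky"}, {"cat"}, {"cat", "pet"}]

def cosine_similarity(x):
    """Pairwise cosine similarity between row vectors."""
    normed = x / np.linalg.norm(x, axis=1, keepdims=True)
    return normed @ normed.T

def refine_tags(features, tags, k=1):
    """Enrich each image's tag set with the tags of its k most
    visually similar neighbours (a toy stand-in for tag refinement)."""
    sim = cosine_similarity(features)
    np.fill_diagonal(sim, -1.0)  # exclude self-matches
    refined = []
    for i in range(len(tags)):
        neighbours = np.argsort(sim[i])[::-1][:k]
        merged = set(tags[i])
        for j in neighbours:
            merged |= tags[j]
        refined.append(merged)
    return refined

print(refine_tags(features, tags))
```

Here the second image inherits "cloud" from its nearest neighbour and the third inherits "pet", showing how visual similarity can complete sparse annotations; a real TR system would also remove tags unsupported by visually similar images.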
Javanmardi, S., & Zare Chahooki, M. A. (2018). Refining large scale image annotation via transfer learning in deep convolutional neural network. Journal of Machine Vision and Image Processing, 5(1), 39-52.