|Name:||Deep Learning for Visual Classification of Adjective Noun Pairs & Its Application in Image Captioning & Video Sentiment Analysis|
|Time:||Wednesday, June 22, 2016, 09:00 am - 09:30 am|
|Speaker:||Damian Borth, DFKI|
|Abstract:||Nowadays, the Web, as a major platform for communication and information exchange, is shifting towards visual content. Unfortunately, visual content in the form of images or videos is far less accessible than textual content. With recent advances in deep learning, we are able to analyze the content of images and videos as never before. This talk will present the first framework able to extract sentiment from visual content by introducing the Visual Sentiment Ontology (VSO). This ontology consists of thousands of Adjective Noun Pair (ANP) concepts that capture sentiment polarity. Further, the talk introduces SentiBank, the associated deep convolutional neural network (CNN) used to detect the presence of up to 2,089 ANPs in images. Originally designed to assess sentiment in visual content, SentiBank has already been shown to serve a broad spectrum of application domains, ranging from sentiment prediction and aesthetic assessment to image popularity prediction, filtering of explicit content, and image captioning. Finally, the talk will close with the Yahoo Flickr Creative Commons 100 Million (YFCC100M) dataset, the largest dataset available to the academic community.|