Can Machines Predict Sentiment Expressed via Images?
Quoc-Tuan Truong
Singapore
Project status: Published/In Market
Overview / Usage
With the popularity of smartphones, when people recount their experiences with a product, service, or venue, they frequently take images as photographic records. Those images may capture what users have experienced, how satisfied they were, and what they prefer.
In this project, we investigate the problem of detecting the sentiment expressed through photos included in online reviews.
Imagine that, given a photo taken by a user, a machine could infer the overall sentiment (positive/negative, like/dislike) associated with that photo. Such results could be applied to many real-world applications. For example, one application is improving recommender systems, which give recommendations after learning user preferences. Beyond that, determining whether a user has had a good or bad experience can help businesses improve their services.
The scope of this project is not limited to online review photos, but is also widely applicable to other forms of user-generated visual content.
Methodology / Approach
The problem can be formulated as image classification using Convolutional Neural Networks (CNNs). However, we observe that the sentiment captured within an image may be affected by three factors: the image itself, the user, and the item. Thus, we develop item-oriented and user-oriented CNNs that better capture the interaction of image features with the specific expressions of users or items.
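The exact architecture lives in the repository below; as a rough illustration of the general idea, the sketch here combines a CNN-derived image feature vector with a learned per-user embedding through an element-wise interaction before a sentiment classifier. All names, dimensions, and parameters are hypothetical stand-ins, not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # dimensionality of the CNN image features (illustrative only)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical learned parameters: one embedding per user (an item-oriented
# variant would index embeddings by item instead) and classifier weights.
user_embeddings = {"user_42": rng.normal(size=D)}
w = rng.normal(size=D)
b = 0.0

def predict_sentiment(image_features, user_id):
    """Gate the image features with a user-specific factor via an
    element-wise product, then score positive-sentiment probability."""
    interaction = image_features * user_embeddings[user_id]
    return sigmoid(w @ interaction + b)

# Usage: image_features would come from a CNN; random values stand in here.
p = predict_sentiment(rng.normal(size=D), "user_42")
print(0.0 <= p <= 1.0)  # a probability in [0, 1]
```

The element-wise product is just one simple way to let the same image score differently for different users; the repository's user-oriented and item-oriented CNNs realize this interaction inside the network itself.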
Technologies Used
Hardware: Intel Xeon Processor
Software: Python, Caffe, Tensorflow
Repository
https://github.com/PreferredAI/Tutorials/tree/master/1806-image-classification/vs-cnn