With the increasing use of online matching platforms, predicting the matching probability between users is crucial for efficient market design. Although previous studies have constructed various visual features to predict matching probability, facial features, which are important in online matching, have not been widely used. Through an individual-level rating prediction analysis in an online dating market, we find that deep-learning-enabled facial features can significantly enhance the accuracy of predicting a user’s partner preferences. We also build prediction models for each gender and use prior theories to explain the different contributing factors of the models. Furthermore, we propose a novel method to visually interpret facial features using a generative adversarial network (GAN). Our work contributes to the literature by providing a framework to develop and interpret facial features to investigate underlying mechanisms in online matching markets. Moreover, matching platforms can predict matching probability more accurately for better market design and recommender systems.
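As a hypothetical illustration of the kind of pipeline the abstract describes (the paper's actual models and features are not reproduced here), facial embeddings extracted by a pretrained network could feed a simple preference classifier. The embedding values, weights, and the `predict_like` helper below are invented for this sketch:

```python
import math

def predict_like(embedding, weights, bias):
    """Logistic model: probability that a rater 'likes' a profile,
    given a facial-feature embedding (e.g., from a pretrained CNN).
    All numbers here are illustrative, not from the paper."""
    z = bias + sum(w * x for w, x in zip(weights, embedding))
    return 1.0 / (1.0 + math.exp(-z))

# Toy 4-dimensional embedding and learned weights (invented values).
embedding = [0.8, -0.2, 0.5, 0.1]
weights = [1.2, 0.4, -0.7, 0.9]
prob = predict_like(embedding, weights, bias=-0.3)
```

In practice the embedding would come from a face-recognition network and the weights from fitting the model to observed ratings; the sketch only shows how the pieces connect.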
With the advent of social media and mobile platforms, visual and multimodal data are becoming first-class citizens in big data analytics research. Compared to textual data, which require significant cognitive effort to comprehend, visual data (such as images and videos) can easily convey a message from the content creator to a general audience. To conduct large-scale studies on such data types, researchers need to use machine learning and computer vision approaches. In this post, I organize studies in Information Systems, Marketing, and other management disciplines that leverage large-scale analysis of image and video datasets. The papers are ordered randomly:
Based on an industry collaboration with Yahoo! Research
The first MISQ methods article based on machine learning
Presented in WeB (Fort Worth, TX 2015), WITS (Dallas, TX 2015), UT Arlington (2016), Texas FreshAIR (San Antonio, TX 2016), SKKU (2016), Korea Univ. (2016), Hanyang (2016), Kyung Hee (2016), Chung-Ang (2016), Yonsei (2016), Seoul National Univ. (2016), Kyungpook National Univ. (2016), UKC (Dallas, TX 2016), UBC (2016), INFORMS CIST (Nashville, TN 2016), DSI (Austin, TX 2016), Univ. of North Texas (2017), Arizona State (2018), Simon Fraser (2019), Saarland (2021), Kyung Hee (2021), Tennessee Chattanooga (2021), Rochester (2021), KAIST (2021), Yonsei (2021), UBC (2022), Temple (2023)
This research methods article proposes a visual data analytics framework to enhance social media research using deep learning models. Drawing on the information systems and marketing literature, complemented with data-driven methods, we propose a number of visual and textual content features, including complexity, similarity, and consistency measures, that can play important roles in the persuasiveness of social media content. We then employ state-of-the-art machine learning approaches such as deep learning and text mining to operationalize these new content features in a scalable and systematic manner. We validate the newly developed features against human coders on Amazon Mechanical Turk. Furthermore, we conduct two case studies with a large social media dataset from Tumblr to show the effectiveness of the proposed content features. The first case study demonstrates that both theoretically motivated and data-driven features significantly improve the model’s power to predict the popularity of a post, and the second highlights the relationships between content features and consumer evaluations of the corresponding posts. The proposed research framework illustrates how deep learning methods can enhance the analysis of unstructured visual and textual data for social media research.
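To make one of the feature types concrete: a similarity measure between two posts' images is commonly operationalized as the cosine similarity of their deep-network embeddings. The sketch below uses invented three-dimensional vectors in place of real embeddings; it shows the computation, not the paper's actual feature definitions:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors,
    e.g., deep-network embeddings of two images."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings for two posts' images (illustrative values only).
post_a = [0.9, 0.1, 0.4]
post_b = [0.8, 0.3, 0.5]
sim = cosine_similarity(post_a, post_b)
```

Real embeddings would be high-dimensional vectors from a pretrained vision model, but the similarity score is computed the same way.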