Kwon, Soonjae, Sung-Hyuk Park, Gene Moo Lee, and Dongwon Lee (2021) “Learning Faces to Predict Matching Probability in an Online Dating Market”. Work-in-progress.
Under review for a conference presentation.
Based on an industry collaboration.
With the increasing use of online matching markets, predicting the matching probability among users is crucial for better market design. Although previous studies have constructed visual features to predict the matching probability, facial features extracted by deep learning have not been widely used. By predicting user attractiveness in an online dating market, we find that deep learning-enabled facial features can significantly enhance prediction accuracy. We also predict attractiveness for various evaluator groups and explain their different preferences based on the theory of evolutionary psychology. Furthermore, we propose a novel method to visually interpret deep learning-enabled facial features using a state-of-the-art deep learning-based generative model. Our work helps IS researchers extract facial features with deep learning and interpret them to investigate underlying mechanisms in online matching markets. From a practical perspective, matching platforms can use our approach to predict matching probability more accurately, enabling better market design and recommender systems that maximize matching outcomes.
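The paper's pipeline is not spelled out here, but the core idea (facial embeddings from a deep model feeding a matching-probability predictor) can be illustrated with a minimal sketch. The embeddings below are simulated random vectors standing in for features from a pretrained face network, and the predictor is a simple logistic regression fit by gradient descent; the authors' actual models and data are assumed, not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for deep-learning facial embeddings (e.g., the
# penultimate-layer activations of a pretrained face CNN). 32 dims here;
# real embeddings are typically 128-512 dims.
n, d = 200, 32
X = rng.normal(size=(n, d))

# Simulated binary match labels driven by a hidden linear preference.
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression by gradient descent: P(match | embedding).
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)          # predicted match probabilities
    grad = X.T @ (p - y) / n    # gradient of the log-loss
    w -= lr * grad

acc = float(((sigmoid(X @ w) > 0.5) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

Because the simulated labels are a deterministic function of the embeddings, the fitted model recovers the preference direction well; with real dating-market data the signal would of course be far noisier.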
With the advent of social media and mobile platforms, visual data are becoming a first-class citizen in big data analytics research. Compared to textual data, which require significant cognitive effort to comprehend, visual data (such as images and videos) can easily convey a message from the content creator to a general audience. To conduct large-scale studies on such data types, researchers need to use machine learning and computer vision approaches. In this post, I organize studies in Information Systems, Marketing, and other management disciplines that leverage large-scale analysis of image and video datasets. The papers are ordered randomly:
Based on an industry collaboration with Yahoo! Research
The first MISQ methods article based on machine learning
Presented in WeB (Fort Worth, TX 2015), WITS (Dallas, TX 2015), UT Arlington (2016), Texas FreshAIR (San Antonio, TX 2016), SKKU (2016), Korea Univ. (2016), Hanyang (2016), Kyung Hee (2016), Chung-Ang (2016), Yonsei (2016), Seoul National Univ. (2016), Kyungpook National Univ. (2016), UKC (Dallas, TX 2016), UBC (2016), INFORMS CIST (Nashville, TN 2016), DSI (Austin, TX 2016), Univ. of North Texas (2017), Arizona State (2018), Simon Fraser (2019), Saarland (2021), Kyung Hee (2021)
This research methods article proposes a visual data analytics framework to enhance social media research using deep learning models. Drawing on the information systems and marketing literatures, complemented with data-driven methods, we propose a number of visual and textual content features, including complexity, similarity, and consistency measures, that can play important roles in the persuasiveness of social media content. We then employ state-of-the-art machine learning approaches such as deep learning and text mining to operationalize these new content features in a scalable and systematic manner. We validate the newly developed features against human coders on Amazon Mechanical Turk. Furthermore, we conduct two case studies with a large social media dataset from Tumblr to show the effectiveness of the proposed content features. The first case study demonstrates that both theoretically motivated and data-driven features significantly improve the model’s power to predict the popularity of a post, and the second one highlights the relationships between content features and consumer evaluations of the corresponding posts. The proposed research framework illustrates how deep learning methods can enhance the analysis of unstructured visual and textual data for social media research.