Tag Archives: visual representation

Learning Faces to Predict Matching Probability in an Online Dating Market

Kwon, Soonjae, Sung-Hyuk Park, Gene Moo Lee, Dongwon Lee (2021) “Learning Faces to Predict Matching Probability in an Online Dating Market,” Work-in-progress.

  • Presentations: DS 2021, AIMLBA 2021, WITS 2021
  • Based on an industry collaboration

With the increasing use of online matching markets, predicting the matching probability among users is crucial for better market design. Although previous studies have constructed visual features to predict matching probability, facial features extracted by deep learning have not been widely used. By predicting user attractiveness in an online dating market, we find that deep learning-enabled facial features can significantly enhance prediction accuracy. We also predict attractiveness for different evaluator groups and explain their differing preferences based on evolutionary psychology theory. Furthermore, we propose a novel method to visually interpret deep learning-enabled facial features using a state-of-the-art deep generative model. Our work helps IS researchers extract facial features with deep learning and interpret them to investigate underlying mechanisms in online matching markets. From a practical perspective, matching platforms can predict matching probability more accurately, enabling better market design and recommender systems that maximize matching outcomes.
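As a rough illustration of the first stage of such a pipeline, the sketch below extracts deep facial features with a pretrained CNN and fits a simple classifier on top to score attractiveness/matching likelihood. The backbone (ResNet-18), the logistic-regression head, and the file names are assumptions for demonstration only; they are not the models or data used in the paper.

```python
# Illustrative sketch (not the paper's actual pipeline): extract deep facial
# features with a pretrained CNN, then fit a simple classifier on evaluator labels.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Pretrained CNN used as a generic facial feature extractor (assumption).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head, keep 512-d embeddings
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed_faces(image_paths):
    """Return an (N, 512) array of deep facial features for the given images."""
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
        return backbone(batch).numpy()

# Hypothetical data: profile photos and binary "liked" labels from evaluators.
X_train = embed_faces(["profile_001.jpg", "profile_002.jpg"])  # placeholder paths
y_train = [1, 0]

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
match_prob = clf.predict_proba(embed_faces(["profile_003.jpg"]))[:, 1]
```

In practice, separate classifiers could be fit per evaluator group to compare how predicted attractiveness varies across groups, which is the kind of comparison the abstract describes.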

Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach (MISQ 2020)

Shin, Donghyuk, Shu He, Gene Moo Lee, Andrew B. Whinston, Suleyman Cetintas, Kuang-Chih Lee (2020) “Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach,” MIS Quarterly, 44(4), pp. 1459-1492. [SSRN]

  • Based on an industry collaboration with Yahoo! Research
  • The first MISQ methods article based on machine learning
  • Presented in WeB (Fort Worth, TX 2015), WITS (Dallas, TX 2015), UT Arlington (2016), Texas FreshAIR (San Antonio, TX 2016), SKKU (2016), Korea Univ. (2016), Hanyang (2016), Kyung Hee (2016), Chung-Ang (2016), Yonsei (2016), Seoul National Univ. (2016), Kyungpook National Univ. (2016), UKC (Dallas, TX 2016), UBC (2016), INFORMS CIST (Nashville, TN 2016), DSI (Austin, TX 2016), Univ. of North Texas (2017), Arizona State (2018), Simon Fraser (2019), Saarland (2021), Kyung Hee (2021), Tennessee Chattanooga (2021), Rochester (2021), KAIST (2021), Yonsei (2021)

This research methods article proposes a visual data analytics framework to enhance social media research using deep learning models. Drawing on the information systems and marketing literature, complemented with data-driven methods, we propose a number of visual and textual content features, including complexity, similarity, and consistency measures, that can play important roles in the persuasiveness of social media content. We then employ state-of-the-art machine learning approaches such as deep learning and text mining to operationalize these new content features in a scalable and systematic manner. We validate the newly developed features against human coders on Amazon Mechanical Turk. Furthermore, we conduct two case studies with a large social media dataset from Tumblr to show the effectiveness of the proposed content features. The first case study demonstrates that both theoretically motivated and data-driven features significantly improve the model’s power to predict the popularity of a post, and the second highlights the relationships between content features and consumer evaluations of the corresponding posts. The proposed research framework illustrates how deep learning methods can enhance the analysis of unstructured visual and textual data for social media research.
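To make the feature-construction idea concrete, here is a minimal sketch of how a deep visual embedding could be used to compute an image-similarity measure, alongside a simple compression-based complexity proxy. The specific backbone (ResNet-18), the complexity proxy, and the placeholder file names are assumptions for illustration; they are not the article's own operationalizations.

```python
# Illustrative sketch (assumptions flagged): two kinds of visual content
# features in the spirit of the framework -- a complexity proxy and a
# deep-embedding similarity measure.
import io
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained CNN reused as a generic image embedder (assumption for this sketch).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep 512-d embeddings instead of class scores
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def visual_complexity(path):
    """Rough complexity proxy: JPEG-compressed bytes per pixel (illustrative only)."""
    img = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)
    return buf.tell() / (img.width * img.height)

def visual_similarity(path_a, path_b):
    """Cosine similarity between deep embeddings of two post images."""
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                             for p in (path_a, path_b)])
        a, b = backbone(batch).numpy()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage with placeholder file names:
# sim = visual_similarity("post_a.jpg", "post_b.jpg")
# cx  = visual_complexity("post_a.jpg")
```

Feature values computed this way could then be validated against human coders and entered as regressors in a post-popularity model, mirroring the workflow the abstract outlines.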