Tag Archives: visual representation

Learning Faces to Predict Matching Probability in an Online Dating Market (ICIS 2022)

Kwon, Soonjae, Sung-Hyuk Park, Gene Moo Lee, and Dongwon Lee (2022) “Learning Faces to Predict Matching Probability in an Online Dating Market,” in Proceedings of the International Conference on Information Systems (ICIS 2022).

  • Presentations: DS (2021), AIMLBA (2021), WITS (2021), ICIS (2022)
  • Based on an industry collaboration

With the increasing use of online matching platforms, predicting the matching probability between users is crucial for efficient market design. Although previous studies have constructed various visual features to predict matching probability, facial features, which are important in online matching, have not been widely used. Through an individual rating prediction analysis in an online dating market, we find that deep learning-enabled facial features can significantly improve the accuracy of predicting a user’s partner preferences. We also build separate prediction models for each gender and draw on prior theories to explain the different contributing factors of the models. Furthermore, we propose a novel method to visually interpret facial features using a generative adversarial network (GAN). Our work contributes to the literature by providing a framework to develop and interpret facial features for investigating the underlying mechanisms of online matching markets. Moreover, matching platforms can use our approach to predict matching probability more accurately, enabling better market design and recommender systems.
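The core modeling idea, predicting a user's rating of a profile from deep facial features, can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the synthetic vectors below stand in for embeddings that would, in practice, come from a pretrained face-recognition CNN applied to profile photos, and the simulated "like" labels stand in for observed ratings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for deep facial embeddings (e.g., 128-d vectors
# from a pretrained face-recognition CNN applied to profile photos).
n_users, dim = 500, 128
face_embeddings = rng.normal(size=(n_users, dim))

# Simulated binary "like" ratings driven by a latent preference vector;
# real labels would come from observed user ratings on the platform.
true_pref = rng.normal(size=dim)
logits = face_embeddings @ true_pref
ratings = (logits + rng.normal(scale=0.5, size=n_users) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    face_embeddings, ratings, test_size=0.2, random_state=0)

# A simple classifier maps facial features to matching probability.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
match_prob = model.predict_proba(X_test)[:, 1]  # per-profile probability
accuracy = model.score(X_test, y_test)
```

In the paper's setting, separate models would be fit per gender, and the learned feature effects could then be visualized by decoding perturbed embeddings through a GAN.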

Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach (MISQ 2020)

Shin, Donghyuk, Shu He, Gene Moo Lee, Andrew B. Whinston, Suleyman Cetintas, and Kuang-Chih Lee (2020) “Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach,” MIS Quarterly, 44(4), pp. 1459-1492. [SSRN]

  • Based on an industry collaboration with Yahoo! Research
  • The first MISQ methods article based on machine learning
  • Presented in WeB (Fort Worth, TX 2015), WITS (Dallas, TX 2015), UT Arlington (2016), Texas FreshAIR (San Antonio, TX 2016), SKKU (2016), Korea Univ. (2016), Hanyang (2016), Kyung Hee (2016), Chung-Ang (2016), Yonsei (2016), Seoul National Univ. (2016), Kyungpook National Univ. (2016), UKC (Dallas, TX 2016), UBC (2016), INFORMS CIST (Nashville, TN 2016), DSI (Austin, TX 2016), Univ. of North Texas (2017), Arizona State (2018), Simon Fraser (2019), Saarland (2021), Kyung Hee (2021), Tennessee Chattanooga (2021), Rochester (2021), KAIST (2021), Yonsei (2021), UBC (2022)

This research methods article proposes a visual data analytics framework to enhance social media research using deep learning models. Drawing on the information systems and marketing literature, complemented with data-driven methods, we propose a number of visual and textual content features, including complexity, similarity, and consistency measures, that can play important roles in the persuasiveness of social media content. We then employ state-of-the-art machine learning approaches such as deep learning and text mining to operationalize these new content features in a scalable and systematic manner. We validate the newly developed features against human coders on Amazon Mechanical Turk. Furthermore, we conduct two case studies with a large social media dataset from Tumblr to demonstrate the effectiveness of the proposed content features. The first case study shows that both the theoretically motivated and data-driven features significantly improve the model’s power to predict the popularity of a post, and the second highlights the relationships between content features and consumer evaluations of the corresponding posts. The proposed research framework illustrates how deep learning methods can enhance the analysis of unstructured visual and textual data for social media research.
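To make the feature categories concrete, here is a minimal sketch of two such measures under simplified assumptions: visual complexity proxied by the Shannon entropy of a grayscale pixel-intensity histogram, and text consistency proxied by TF-IDF cosine similarity between a caption and image tags. The paper operationalizes its features with deep learning models; these closed-form proxies are illustrative substitutes only.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def visual_complexity(gray_image: np.ndarray) -> float:
    """Shannon entropy of the pixel-intensity histogram -- a simple
    proxy for visual complexity, used here for illustration."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A flat image should score as less complex than a noisy one.
flat = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
complexity_flat = visual_complexity(flat)
complexity_noisy = visual_complexity(noisy)

# Text-image "consistency" proxied as TF-IDF cosine similarity between a
# post's caption and tags describing the image (hypothetical example data).
docs = ["a cute dog playing in the park", "dog park play outdoor"]
tfidf = TfidfVectorizer().fit_transform(docs)
consistency = float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])
```

Features of this kind, computed at scale over a corpus of posts, are what feed the popularity-prediction and consumer-evaluation analyses described above.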