Tag Archives: visual representation

Seeing the Unseen: The Effects of Implicit Representation in an Online Dating Platform

Kwon, Soonjae, Gene Moo Lee, Dongwon Lee, Sung-Hyuk Park (2024) “Seeing the Unseen: The Effects of Implicit Representation in an Online Dating Platform,” Working Paper.

  • Previous title: Learning Faces to Predict Matching Probability in an Online Dating Market
  • Presentations: DS (2021), AIMLBA (2021), WITS (2021), ICIS (2022)
  • Preliminary version in ICIS 2022 Proceedings
  • Based on an industry collaboration

This study investigates the effects of implicit preference-based representation on user engagement and matching outcomes in two-sided platforms, focusing on an online dating context. We develop a novel approach using explainable AI and generative AI to create personalized representations that reflect users’ implicit preferences. Through extensive matching simulations, we demonstrate that implicit representation significantly enhances both user engagement and matching outcomes across various recommendation algorithms. Our findings reveal heterogeneous effects driven by positive cross-side and same-side network effects, which vary depending on the gender distribution within the platform. This research contributes to understanding implicit representation in two-sided platforms and offers insights into the transformative potential of generative AI in digital ecosystems.
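The abstract does not spell out modeling details, so the snippet below is only a minimal, hypothetical sketch of the general idea: inferring a viewer's implicit preference weights from past like/pass decisions with a simple logistic model, then using the learned weights to score candidates in a toy matching simulation. It is not the paper's method; all variable names, data, and parameters are made up for illustration.

```python
# Illustrative sketch only (not the paper's approach): recover one viewer's
# implicit preference weights from synthetic swipe history, then rank candidates.
import numpy as np

rng = np.random.default_rng(42)
n_history, n_features = 500, 4            # e.g., 4 interpretable profile attributes

# Synthetic swipe history: profiles the viewer has seen and like/pass labels
seen_profiles = rng.normal(size=(n_history, n_features))
hidden_prefs = np.array([1.5, -0.5, 0.8, 0.0])    # ground-truth implicit preferences
labels = (rng.random(n_history) < 1 / (1 + np.exp(-seen_profiles @ hidden_prefs))).astype(float)

# Fit a logistic model by gradient descent to recover the implicit preferences
w = np.zeros(n_features)
for _ in range(2000):
    p = 1 / (1 + np.exp(-seen_profiles @ w))
    w -= 0.1 * seen_profiles.T @ (p - labels) / n_history

print("recovered preference weights:", np.round(w, 2))

# Toy matching step: surface the candidates the viewer is predicted to like most
candidates = rng.normal(size=(50, n_features))
scores = 1 / (1 + np.exp(-candidates @ w))
top_k = np.argsort(-scores)[:5]
print("top-5 candidates:", top_k)
print("predicted like probabilities:", np.round(scores[top_k], 2))
```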

Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach (MISQ 2020)

Shin, Donghyuk, Shu He, Gene Moo Lee, Andrew B. Whinston, Suleyman Cetintas, Kuang-Chih Lee (2020) “Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach,” MIS Quarterly, 44(4), pp. 1459-1492. [SSRN]

  • Based on an industry collaboration with Yahoo! Research
  • The first MISQ methods article based on machine learning
  • Presented in WeB (Fort Worth, TX 2015), WITS (Dallas, TX 2015), UT Arlington (2016), Texas FreshAIR (San Antonio, TX 2016), SKKU (2016), Korea Univ. (2016), Hanyang (2016), Kyung Hee (2016), Chung-Ang (2016), Yonsei (2016), Seoul National Univ. (2016), Kyungpook National Univ. (2016), UKC (Dallas, TX 2016), UBC (2016), INFORMS CIST (Nashville, TN 2016), DSI (Austin, TX 2016), Univ. of North Texas (2017), Arizona State (2018), Simon Fraser (2019), Saarland (2021), Kyung Hee (2021), Tennessee Chattanooga (2021), Rochester (2021), KAIST (2021), Yonsei (2021), UBC (2022), Temple (2023)

This research methods article proposes a visual data analytics framework to enhance social media research using deep learning models. Drawing on the information systems and marketing literatures, complemented with data-driven methods, we propose a set of visual and textual content features, including complexity, similarity, and consistency measures, that can play important roles in the persuasiveness of social media content. We then employ state-of-the-art machine learning approaches, such as deep learning and text mining, to operationalize these new content features in a scalable and systematic manner. We validate the newly developed features against human coders on Amazon Mechanical Turk. Furthermore, we conduct two case studies with a large social media dataset from Tumblr to show the effectiveness of the proposed content features. The first case study demonstrates that both theoretically motivated and data-driven features significantly improve the model’s power to predict the popularity of a post, and the second highlights the relationships between content features and consumers’ evaluations of the corresponding posts. The proposed research framework illustrates how deep learning methods can enhance the analysis of unstructured visual and textual data for social media research.
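As a rough illustration only, the sketch below shows how one visual content feature of the kind described above, image similarity, could be operationalized with off-the-shelf tools: cosine similarity between embeddings from a pretrained CNN. The choice of ResNet-50 via torchvision is an assumption for the sketch, not the article's exact model, and the image file names are placeholders.

```python
# Minimal sketch (not the article's exact pipeline): visual similarity between two
# images, measured as cosine similarity of pretrained CNN embeddings.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.fc = torch.nn.Identity()        # drop the classifier head to get 2048-d embeddings
model.eval()
preprocess = weights.transforms()     # standard resize/crop/normalize for this model

def embed(path: str) -> torch.Tensor:
    """Return a unit-length embedding for the image at `path`."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = model(img).squeeze(0)
    return vec / vec.norm()

# Placeholder file names; cosine similarity of unit vectors is just a dot product
sim = torch.dot(embed("post_a.jpg"), embed("post_b.jpg")).item()
print(f"visual similarity: {sim:.3f}")
```

The same embedding step could, in principle, feed other measures mentioned in the abstract (e.g., consistency across a post's images), but those extensions are left out of this sketch.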