Tag Archives: social media

Xiaoke Zhang’s Master’s Thesis

Xiaoke Zhang (2023). “How Does AI-Generated Voice Affect Online Video Creation? Evidence from TikTok”, Master’s Thesis, University of British Columbia.

Supervisors: Gene Moo Lee, Mi Zhou

The rising demand for online video content has fostered one of the fastest-growing markets, as evidenced by the popularity of platforms like TikTok. Because video content is often difficult to create, platforms have attempted to leverage recent advancements in artificial intelligence (AI) to help creators with their video creation process. However, surprisingly little is known about the effects of AI on content creators’ productivity and creative patterns in this emerging market. Our paper investigates the impact of adopting AI-generated voice – a generative AI technology that creates acoustic artifacts – on video creators by empirically analyzing a unique dataset of 4,021 creators and their 428,918 videos on TikTok. Utilizing multiple audio and video analytics algorithms, we detect the adoption of AI voice in this massive video dataset and generate rich measurements for each video to quantify its characteristics. We then estimate the effects of AI voice using a difference-in-differences model coupled with look-ahead propensity score matching. Our results suggest that the adoption of AI voice increases creators’ video production and induces creators to produce shorter videos with more negative words. Interestingly, creators produce more novel videos with less self-disclosure when using AI voice. We also find that, unintendedly, AI-voice videos receive less viewer engagement. Our paper provides the first empirical evidence of how generative AI reshapes video content creation on online platforms, offering important implications for creators, platforms, and policymakers in the digital economy.
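As a rough illustration of the estimation strategy described above, the sketch below fits a two-way fixed-effects difference-in-differences regression on a simulated creator-week panel. All variable names (creator_id, week, adopted, n_videos) and the simulated data are hypothetical placeholders, not the thesis’s actual code or data schema.

```python
# A minimal sketch of a two-way fixed-effects difference-in-differences setup,
# assuming a hypothetical creator-week panel (not the thesis's actual data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy panel: 100 creators over 10 weeks; half adopt AI voice at week 5.
panel = pd.DataFrame(
    [(c, t) for c in range(100) for t in range(10)],
    columns=["creator_id", "week"],
)
panel["treated"] = panel["creator_id"] < 50
panel["adopted"] = (panel["treated"] & (panel["week"] >= 5)).astype(int)
panel["n_videos"] = (
    3 + 0.8 * panel["adopted"] + rng.normal(0, 1, len(panel))  # true effect: +0.8
)

# DiD with creator and week fixed effects; standard errors clustered by creator.
model = smf.ols("n_videos ~ adopted + C(creator_id) + C(week)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["creator_id"]}
)
print(model.params["adopted"])  # estimated effect of adoption on video output
```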


AI Voice in Online Video Platforms: A Multimodal Perspective on Content Creation and Consumption

Zhang, Xiaoke, Mi Zhou, Gene Moo Lee. AI Voice in Online Video Platforms: A Multimodal Perspective on Content Creation and Consumption, Working Paper.

  • Previous title: How Does AI-Generated Voice Affect Online Video Creation? Evidence from TikTok
  • Presentations: INFORMS DS (2022), UBC (2022), WITS (2022), Yonsei (2023), POSTECH (2023), ISMS MKSC (2023), CSWIM (2023), KrAIS Summer (2023), Dalhousie (2023), CIST (2023), Temple (2024), Santa Clara U (2024), Wisconsin Milwaukee (2024)
  • Best Student Paper Nomination at CIST 2023; Best Paper Runner-Up Award at KrAIS 2023
  • Media coverage: [UBC News] [Global News]
  • API sponsored by Ensemble Data
  • SSRN version: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4676705

Major user-generated content (UGC) platforms like TikTok have introduced AI-generated voice to assist creators in complex multimodal video creation. AI voice in videos represents a novel form of partial AI assistance, in which AI augments one specific modality (audio) while creators maintain control over the other modalities (text and visuals). This study theorizes and empirically investigates the impacts of AI voice adoption on the creation, content characteristics, and consumption of videos on a video UGC platform. Using a unique dataset of 554,252 TikTok videos, we conduct multimodal analyses to detect AI voice adoption and quantify theoretically important video characteristics across modalities. Using a stacked difference-in-differences model with propensity score matching, we find that AI voice adoption increases creators’ video production by 21.8%. While reducing audio novelty, it enhances textual and visual novelty by freeing creators’ cognitive resources. Moreover, a heterogeneity analysis reveals that AI voice boosts engagement for less-experienced creators but reduces it for experienced creators and those with established identities. We conduct additional analyses and online randomized experiments to demonstrate two key mechanisms underlying these effects: partial AI process augmentation and partial AI content substitution. This study contributes to the UGC and human-AI collaboration literature and provides practical insights for video creators and UGC platforms.
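The matching step can be sketched in a few lines. The example below pairs each adopter with the nearest non-adopter on an estimated propensity score; it is a plain 1:1 nearest-neighbor match with replacement, simpler than the stacked design the paper uses, and the covariates (pre_videos, followers, tenure) are hypothetical placeholders, not the paper’s actual matching variables.

```python
# A hedged sketch of propensity-score matching: pair each adopter with the
# closest non-adopter on estimated adoption propensity. Covariates and data
# are simulated placeholders, not the paper's actual variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
creators = pd.DataFrame({
    "adopter": rng.integers(0, 2, 500),
    "pre_videos": rng.poisson(10, 500),
    "followers": rng.lognormal(8, 1, 500),
    "tenure": rng.integers(1, 60, 500),
})

# Estimate each creator's propensity to adopt from pre-adoption covariates.
X = creators[["pre_videos", "followers", "tenure"]]
creators["pscore"] = (
    LogisticRegression(max_iter=1000).fit(X, creators["adopter"]).predict_proba(X)[:, 1]
)

treated = creators[creators["adopter"] == 1]
control = creators[creators["adopter"] == 0]

# 1:1 nearest-neighbor matching (with replacement) on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = control.iloc[idx.ravel()]
```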

Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach (MISQ 2020)

Shin, Donghyuk, Shu He, Gene Moo Lee, Andrew B. Whinston, Suleyman Cetintas, Kuang-Chih Lee (2020) Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach, MIS Quarterly, 44(4), pp. 1459-1492. [SSRN]

  • Based on an industry collaboration with Yahoo! Research
  • The first MISQ methods article based on machine learning
  • Presented in WeB (Fort Worth, TX 2015), WITS (Dallas, TX 2015), UT Arlington (2016), Texas FreshAIR (San Antonio, TX 2016), SKKU (2016), Korea Univ. (2016), Hanyang (2016), Kyung Hee (2016), Chung-Ang (2016), Yonsei (2016), Seoul National Univ. (2016), Kyungpook National Univ. (2016), UKC (Dallas, TX 2016), UBC (2016), INFORMS CIST (Nashville, TN 2016), DSI (Austin, TX 2016), Univ. of North Texas (2017), Arizona State (2018), Simon Fraser (2019), Saarland (2021), Kyung Hee (2021), Tennessee Chattanooga (2021), Rochester (2021), KAIST (2021), Yonsei (2021), UBC (2022), Temple (2023)

This research methods article proposes a visual data analytics framework to enhance social media research using deep learning models. Drawing on the information systems and marketing literatures, complemented with data-driven methods, we propose a set of visual and textual content features – including complexity, similarity, and consistency measures – that can play important roles in the persuasiveness of social media content. We then employ state-of-the-art machine learning approaches such as deep learning and text mining to operationalize these new content features in a scalable and systematic manner. We validate the newly developed features against human coders on Amazon Mechanical Turk. Furthermore, we conduct two case studies with a large social media dataset from Tumblr to show the effectiveness of the proposed content features. The first case study demonstrates that both theoretically motivated and data-driven features significantly improve the model’s power to predict the popularity of a post, and the second highlights the relationships between content features and consumer evaluations of the corresponding posts. The proposed research framework illustrates how deep learning methods can enhance the analysis of unstructured visual and textual data for social media research.
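One building block of such a framework – embedding post images with a pretrained CNN and scoring their pairwise visual similarity – can be sketched as follows. This is an illustrative reimplementation assuming a ResNet-18 backbone from torchvision and placeholder image paths, not the authors’ exact pipeline.

```python
# A minimal sketch of visual similarity scoring with a pretrained CNN,
# assuming torchvision's ResNet-18 (not the paper's exact model or data).
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.IMAGENET1K_V1
cnn = models.resnet18(weights=weights)
cnn.fc = torch.nn.Identity()  # drop the classifier head; keep 512-d features
cnn.eval()
preprocess = weights.transforms()  # resize/normalize as the model expects

def embed(path: str) -> torch.Tensor:
    """Return an L2-normalized 512-d embedding for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.normalize(cnn(img), dim=1)

# Cosine similarity between two posts' images (paths are placeholders).
sim = (embed("post_a.jpg") * embed("post_b.jpg")).sum().item()
print(f"visual similarity: {sim:.3f}")
```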

Does Deceptive Marketing Pay? The Evolution of Consumer Sentiment Surrounding a Pseudo-Product-Harm Crisis (J. Business Ethics 2019)

Song, Reo, Ho Kim, Gene Moo Lee, and Sungha Jang (2019) Does Deceptive Marketing Pay? The Evolution of Consumer Sentiment Surrounding a Pseudo-Product-Harm Crisis, Journal of Business Ethics, 158(3), pp. 743-761.

The slandering of a firm’s products by competing firms poses significant threats to the victim firm, with the resulting damage often being as harmful as that from product-harm crises. In contrast to a true product-harm crisis, however, this disparagement is based on a false claim or fake news; thus, we call it a pseudo-product-harm crisis. Using a pseudo-product-harm crisis event that involved two competing firms, this research examines how consumer sentiments about the two firms evolved in response to the crisis. Our analyses show that while both firms suffered, the damage to the offending firm (which spread fake news to cause the crisis) was more detrimental, in terms of advertising effectiveness and negative news publicity, than that to the victim firm (which suffered from the false claim). Our study indicates that, even apart from ethical concerns, the false claim about the victim firm was not an effective business strategy to increase the offending firm’s performance.