Tag Archives: generative AI

Large Language Models in the Institutional Press: Investigating the Effects on Information Sourcing and News Production

Zhang, Xiaoke, Myunghwan Lee, Mi Zhou, Gene Moo Lee. “Large Language Models in the Institutional Press: Investigating the Effects on Information Sourcing and News Production,” 3rd round R&R, MIS Quarterly.

  • Presentations: UBC (2024), DS (2024), CIST (2024), BIGS (2024), JUSWIS (2025), UIUC (2025)
  • Industry partner: Muhayu

Large language models (LLMs) are transforming journalism by directly entering journalistic workflows, introducing new opportunities and challenges for the institutional press. This study investigates how LLM assistance affects journalists’ information sourcing in news production using a mixed-method approach. We begin with a qualitative study of 43 journalists to identify and theorize how LLM assistance affects three core journalistic values: publication promptness, information source quantity, and information source originality. We then compile a large-scale dataset of 1,073,742 news articles from 111 South Korean news outlets and collaborate with industry experts to detect undisclosed LLM-assisted articles. Our event-level analysis shows that LLM assistance accelerates publication but reduces the number of information sources used in news articles, with a larger decline in primary sources than in secondary sources. Heterogeneity analyses and a randomized experiment suggest that this reduction is driven by two mechanisms: an LLM generation mechanism that narrows the set of retrieved and represented sources, and a metacognitive regulation mechanism that reduces journalists’ active search and evaluation. We further show that these effects extend beyond individual articles. A journalist-level difference-in-differences analysis indicates that LLM adoption leads to persistent reductions in source usage over time. Our findings offer practical implications for LLM system design, newsroom practices, and institutional disclosure policy.

Xiaoke Zhang’s Master’s Thesis

Xiaoke Zhang (2023). “How Does AI-Generated Voice Affect Online Video Creation? Evidence from TikTok,” Master’s Thesis, University of British Columbia.

Supervisors: Gene Moo Lee, Mi Zhou

The rising demand for online video content has fostered one of the fastest-growing markets, as evidenced by the popularity of platforms like TikTok. Because video content is often difficult to create, platforms have attempted to leverage recent advancements in artificial intelligence (AI) to help creators with the video creation process. However, surprisingly little is known about the effects of AI on content creators’ productivity and creative patterns in this emerging market. Our paper investigates the impact of adopting AI-generated voice, a generative AI technology that creates acoustic artifacts, on video creators by empirically analyzing a unique dataset of 4,021 creators and their 428,918 videos on TikTok. Utilizing multiple audio and video analytics algorithms, we detect the adoption of AI voice in the massive video data and generate rich measurements for each video to quantify its characteristics. We then estimate the effects of AI voice using a difference-in-differences model coupled with look-ahead propensity score matching. Our results suggest that the adoption of AI voice increases creators’ video production and induces creators to produce shorter videos with more negative words. Interestingly, creators produce more novel videos with less self-disclosure when using AI voice. We also find that, as an unintended consequence, AI-voice videos receive less viewer engagement. Our paper provides the first empirical evidence of how generative AI reshapes video content creation on online platforms and offers important implications for creators, platforms, and policymakers in the digital economy.


AI Voice in Online Video Platforms: A Multimodal Perspective on Content Creation and Consumption

Zhang, Xiaoke, Mi Zhou, Gene Moo Lee. “AI Voice in Online Video Platforms: A Multimodal Perspective on Content Creation and Consumption,” 3rd round R&R at MIS Quarterly.

  • Best Student Paper Nomination at CIST 2023; Best Paper Runner-Up Award at KrAIS Summer Workshop 2023
  • Presentations: INFORMS DS (2022), UBC (2022), WITS (2022), Yonsei (2023), POSTECH (2023), ISMS MKSC (2023), CSWIM (2023), KrAIS Summer (2023), Dalhousie (2023), CIST (2023), Temple (2024), Santa Clara U (2024), Wisconsin Milwaukee (2024)
  • Media coverage: [UBC News] [Global News]
  • API sponsored by Ensemble Data
  • SSRN version: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4676705
  • Previous title: How Does AI-Generated Voice Affect Online Video Creation? Evidence from TikTok

Major user-generated content (UGC) platforms like TikTok have introduced AI-generated voice to assist creators in complex multimodal video creation. AI voice in videos represents a novel form of partial AI assistance, in which AI augments one specific modality (audio) while creators maintain control over the other modalities (text and visuals). This study theorizes and empirically investigates the impacts of AI voice adoption on the creation, content characteristics, and consumption of videos on a video UGC platform. Using a unique dataset of 554,252 TikTok videos, we conduct multimodal analyses to detect AI voice adoption and quantify theoretically important video characteristics in different modalities. Using a stacked difference-in-differences model with propensity score matching, we find that AI voice adoption increases creators’ video production by 21.8%. While reducing audio novelty, it enhances textual and visual novelty by freeing creators’ cognitive resources. Moreover, the heterogeneity analysis reveals that AI voice boosts engagement for less-experienced creators but reduces it for experienced creators and those with established identities. We conduct additional analyses and online randomized experiments to demonstrate two key mechanisms underlying these effects: partial AI process augmentation and partial AI content substitution. This study contributes to the UGC and human-AI collaboration literature and provides practical insights for video creators and UGC platforms.
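For intuition, the difference-in-differences design used in several of the studies above reduces, in its simplest 2×2 form, to comparing the pre/post change for adopters against the pre/post change for matched non-adopters. The sketch below illustrates only that estimator with hypothetical numbers; none of the figures come from the papers’ data:

```python
# Minimal 2x2 difference-in-differences sketch.
# All numbers below are hypothetical, for illustration only.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD estimator: (treated post-pre change) minus (control post-pre change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical weekly video counts for matched creators.
treated_pre  = [4, 5, 6, 5]   # adopters, before AI-voice adoption
treated_post = [7, 8, 8, 7]   # adopters, after adoption
control_pre  = [4, 5, 5, 6]   # matched non-adopters, same pre window
control_post = [5, 6, 5, 6]   # non-adopters, post window (common trend)

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(effect)  # prints 2.0: a +2 video/week effect net of the common trend
```

In the papers this comparison is run with unit and time fixed effects and a stacked event design, and propensity score matching is what makes the control group comparable to the adopters in the first place; the 2×2 version above only conveys the identification logic.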

Seeing the Unseen: The Effects of Implicit Representation in an Online Dating Platform

Kwon, Soonjae, Gene Moo Lee, Dongwon Lee, Sung-Hyuk Park (2024) “Seeing the Unseen: The Effects of Implicit Representation in an Online Dating Platform,” Working Paper.

  • Previous title: Learning Faces to Predict Matching Probability in an Online Dating Market
  • Presentations: DS (2021), AIMLBA (2021), WITS (2021), ICIS (2022)
  • Preliminary version in ICIS 2022 Proceedings
  • Based on an industry collaboration

This study investigates the effects of implicit preference-based representation on user engagement and matching outcomes in two-sided platforms, focusing on an online dating context. We develop a novel approach using explainable AI and generative AI to create personalized representations that reflect users’ implicit preferences. Through extensive matching simulations, we demonstrate that implicit representation significantly enhances both user engagement and matching outcomes across various recommendation algorithms. Our findings reveal heterogeneous effects driven by positive cross-side and same-side network effects, which vary depending on the gender distribution within the platform. This research contributes to understanding implicit representation in two-sided platforms and offers insights into the transformative potential of generative AI in digital ecosystems.