Generative artificial intelligence (AI) has the potential to revolutionize the creative industry by reshaping the human creative process. We explore the potential of generative AI in the creator economy by investigating the effects of AI-generated voice adoption on creators’ productivity and creative patterns on TikTok, one of the world’s largest video-sharing platforms. Using a unique dataset of 554,252 videos from 4,691 TikTok creators, we conduct multimodal analyses of the video data to detect the adoption of AI voice and to quantify video characteristics. We then estimate the adoption effects using a stacked difference-in-differences model coupled with propensity score matching. Our results suggest that AI voice adoption significantly increases creator productivity. This effect is larger among less experienced or less popular creators, suggesting an equalizing effect of generative AI. Moreover, we find that the use of AI voice enhances video novelty across image, audio, and text modalities, especially among experienced creators, suggesting its role in reducing routine workload and fostering creative exploration. Lastly, our study also uncovers a disinhibition effect, whereby creators conceal their identities with the AI voice and express more negative sentiment owing to diminished social-image concerns. Our paper provides the first empirical evidence of how generative AI reshapes online video creation.
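To illustrate the identification logic behind the estimation strategy described above, the following is a minimal sketch of a classic 2x2 difference-in-differences comparison, using entirely hypothetical numbers (the paper's actual specification is a stacked DiD with propensity-score-matched controls, which this sketch does not reproduce):

```python
# Hypothetical sketch: 2x2 difference-in-differences.
# The effect is the change in outcomes for adopters minus the
# change for matched non-adopters over the same window.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Classic 2x2 DiD: adopters' change minus non-adopters' change."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical weekly video counts per creator (a productivity proxy).
treated_pre  = [2, 3, 2, 3]   # adopters, before AI-voice adoption
treated_post = [4, 5, 4, 5]   # adopters, after adoption
control_pre  = [2, 3, 2, 3]   # matched non-adopters, same window
control_post = [3, 3, 2, 3]   # matched non-adopters, later window

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(effect)  # adoption effect net of the common time trend
```

With these made-up numbers, the common time trend (the control group's small increase) is differenced out, leaving only the adoption effect.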
Park, Sungho, Gene Moo Lee, Donghyuk Shin, and Sang-Pil Han. “When Does Congruence Matter for Pre-roll Video Ads? The Effect of Multimodal Ad-Content Congruence on Ad Completion,” Working Paper [Last update: Jan 29, 2023]
Previous title: Targeting Pre-Roll Ads using Video Analytics
Funded by Sauder Exploratory Research Grant 2020
Presented at Southern Methodist University (2020), University of Washington (2020), INFORMS (2020), AIMLBA (2020), WITS (2020), HKUST (2021), Maryland (2021), American University (2021), National University of Singapore (2021), Arizona (2022), George Mason (2022), KAIST (2022), Hanyang (2022), Kyung Hee (2022), McGill (2022)
Research assistants: Raymond Situ, Miguel Valarao
Pre-roll video ads are gaining industry traction because the audience may be willing to watch an ad for a few seconds, if not in its entirety, before the desired content video is shown. However, a popular skippable type of pre-roll video ad, which enables viewers to skip an ad after a few seconds, creates opportunity costs for advertisers and online video platforms when the ad is skipped. Against this backdrop, we employ a video analytics framework to extract multimodal features from ad and content videos, including auditory signals and thematic visual information, and probe the effect of ad-content congruence in each modality using a random matching experiment conducted by a major video advertising platform. The present study challenges the widely held view that ads that match their content are more likely to be viewed than those that do not, and investigates the conditions under which congruence may or may not work. Our results indicate that non-thematic auditory signal congruence between the ad and content is essential in explaining viewers’ ad completion, while thematic visual congruence is only effective if the viewer has sufficient attentional and cognitive capacity to recognize such congruence. The findings suggest that thematic visual information demands more cognitive processing than auditory signals for viewers to perceive ad-content congruence, leading to decreased ad viewing. Overall, these findings have significant theoretical and practical implications for understanding whether and when viewers construct congruence in the context of pre-roll video ads and how advertisers might target their pre-roll video ads successfully.
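A common way to quantify modality-level ad-content congruence of the kind described above is the cosine similarity between an ad's and a content video's feature vectors within a given modality. The sketch below uses entirely hypothetical feature vectors; the paper's actual feature extraction pipeline is not specified here:

```python
# Hypothetical sketch: modality-level ad-content congruence scored as
# cosine similarity between ad and content feature vectors.
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up feature vectors for one ad-content pair, per modality.
ad_audio,  content_audio  = [0.9, 0.1, 0.3], [0.8, 0.2, 0.4]
ad_visual, content_visual = [0.1, 0.9, 0.2], [0.7, 0.3, 0.5]

audio_congruence  = cosine_similarity(ad_audio, content_audio)
visual_congruence = cosine_similarity(ad_visual, content_visual)
print(round(audio_congruence, 3), round(visual_congruence, 3))
```

In this toy pair, the audio vectors align closely while the visual vectors diverge, so the pair scores high auditory congruence but low visual congruence.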