Tag Archives: machine learning

Jaecheol Park’s PhD Proposal: Strategic Roles of AI and Mobile Management on Performance: Evidence from U.S. Public Firms

Jaecheol Park (2024) “Strategic Roles of AI and Mobile Management on Performance: Evidence from U.S. Public Firms”, Ph.D. Dissertation Proposal, University of British Columbia. https://jaecheol-park.github.io/

Supervisor: Gene Moo Lee

Supervisory Committee Members: J. Frank Li, Jiyong Park (Georgia)

The integration of emerging technologies such as Artificial Intelligence (AI) and mobile IT into the workplace is transforming how businesses operate. Despite the increasing prevalence and importance of AI and mobile IT, there is limited research on how firms can strategically manage these technologies to achieve competitive advantage and enhance performance. This dissertation consists of two large-scale empirical studies on U.S. public firms, aiming to provide new theoretical and managerial insights into how firms can harness the power of these technologies to drive success.

The first chapter investigates the impact of mobile device management (MDM) on firm performance during the recent pandemic, highlighting the importance of MDM in digital resilience. Drawing on the resource-based view and a novel proprietary dataset from a global MDM provider for U.S. public firms, we find that firms with MDM have better financial performance during the pandemic, demonstrating greater resilience to the shock. Additionally, we explore the moderating role of external and internal factors, revealing that firms with high environmental munificence or those with low IT capabilities experience greater resilience effects from MDM. This study contributes to the work-from-home and hybrid work literature by emphasizing the business value of MDM and its crucial role in building digital resilience.

The second chapter investigates the effect of AI strategic orientation on firm performance with a dual lens on product and process orientation. We create a novel measure of AI orientation by employing a large language model to assess business descriptions in Form 10-K filings, and we identify an increasing trend of AI disclosure among U.S. public firms. By dissecting firms’ AI disclosure into AI washing and AI (product and process) orientation, our long-difference analyses show that AI orientation significantly affects costs, sales, and market value, whereas AI washing does not, underscoring the importance of strategically deploying AI to create business value. Moreover, we find heterogeneous effects of AI product and process orientation on performance. This study contributes to the recent AI management literature by demonstrating the strategic role of AI orientation in firm performance.

The findings of the dissertation offer valuable insights for academics, practitioners, and policymakers seeking to understand and leverage these emerging technologies’ full potential. From an academic perspective, this dissertation contributes to the literature on the business value of IT and AI by empirically demonstrating the business value of MDM and AI strategies. From an industry perspective, this research provides actionable guidance for businesses looking to leverage the power of MDM and AI to achieve strategic goals and drive success in the digital age.

News Speed vs. Quality: Investigating Large Language Models’ Impact on Modern Journalism

Zhang, Xiaoke, Myunghwan Lee, Mi Zhou, Gene Moo Lee “News Speed vs. Quality: Investigating Large Language Models’ Impact on Modern Journalism”, Work-in-progress.

  • Presentations: UBC (2024), DS (2024), CIST (2024)

With the advancement of generative artificial intelligence (AI), news outlets are increasingly incorporating large language models (LLMs) into their workflows to improve news productivity and quality. Utilizing a unique empirical setting in which two major news organizations in South Korea introduced LLM-based news writing assistants, this study examines how LLM assistance affects news production and consumption. We first develop a novel framework using GPT-4o to extract information sources from news articles. We then construct a unique dataset of 571 LLM-assisted news articles and 3,489 competing human-generated articles covering the same events. Using the DiNardo-Fortin-Lemieux reweighting method to ensure comparability between the LLM-assisted and human-generated news, our empirical analysis reveals that LLM assistance significantly increases news publication speed but reduces the diversity of information sources in news articles. Furthermore, LLM-assisted news is associated with decreased reader consumption, and the reduced source diversity exacerbates this decline despite the faster publication speed. Our findings contribute to the broader literature on generative AI’s role in professional content creation.
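For intuition about the reweighting step, here is a minimal sketch of the DiNardo-Fortin-Lemieux idea in Python. The file name, columns, and covariates are illustrative assumptions rather than the study’s actual specification: a propensity model predicts which articles are LLM-assisted, and human-written articles are reweighted by the propensity odds so their covariate distribution mirrors the LLM-assisted group.

```python
# Minimal DFL-style reweighting sketch (hypothetical file and column names).
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("articles.csv")                          # one row per article
covariates = ["topic_category", "outlet", "is_breaking"]  # illustrative only
X = pd.get_dummies(df[covariates], drop_first=True)
y = df["llm_assisted"]                                    # 1 = LLM-assisted article

# Propensity of being LLM-assisted given observable article characteristics
p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
odds = pd.Series(p / (1 - p), index=df.index)

# Reweight human-written articles so their covariates mimic LLM-assisted ones
df["dfl_weight"] = 1.0
df.loc[y == 0, "dfl_weight"] = odds[y == 0]
```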

Exploring the Influence of Machine Learning on Organizational Learning: An Empirical Analysis of Publicly Listed Organizations

Lee, Myunghwan, Timo Sturm, Gene Moo Lee “Exploring the Influence of Machine Learning on Organizational Learning: An Empirical Analysis of Publicly Listed Organizations”, Work-in-Progress.

  • Presentations: JUSWIS 2024, KrAIS Summer 2024

Organizational learning is a core process that governs organizations’ innovation and thus affects organizational performance and long-term survival. Given the learning capability of machine learning (ML), recent research has recognized that ML systems can contribute substantially to organizational learning. So far, however, the emerging discourse on the role of ML in organizational learning has remained largely theoretical, offering helpful initial insights but inconclusive predictions about ML’s impact. To resolve this tension with empirical evidence, we explore the innovations of 265 ML and 700 non-ML organizations from 2006 to 2017. Using a comprehensive ML measure constructed from data on employees, patents, and academic publications, our results suggest that ML primarily contributes to shifting organizational learning towards exploration. Our results further show that ML’s influence depends on external environmental factors: ML’s effect increases with higher levels of competitors’ strategic orientation towards ML. Lastly, we find that organizations using ML tend to survive longer due to increased performance and more balanced innovation. To the best of our knowledge, this is the first large-scale empirical study of the impact of ML on organizational learning outcomes, contributing to rethinking organizational learning in the era of ML.

Unpacking AI Transformation: The Impact of AI Strategies on Firm Performance with a Dual Lens on Product and Process Orientation

Park, Jaecheol, Myunghwan Lee, J. Frank Li, Gene Moo Lee “Unpacking AI Transformation: The Impact of AI Strategies on Firm Performance with a Dual Lens on Product and Process Orientation”, Work-in-Progress.

  • Presentations: UBC (2024), INFORMS (2024)

Artificial intelligence (AI) technologies hold great potential for large-scale economic impact. Aligned with this trend, recent studies explore the impact of AI adoption on firm performance. However, they predominantly measure AI capabilities with input measures (e.g., labor and job postings), without considering how such AI inputs are strategically used in business operations and value creation. In this paper, we empirically examine how firms’ strategic AI orientation affects firm performance with a dual lens on product and process orientation. We create a novel firm-year-level AI orientation measure by employing a large language model to analyze business descriptions in Form 10-K filings and identify an increasing trend of AI disclosure among U.S. public firms. By further dissecting firms’ AI disclosure into AI mention and AI (product and process) orientation, our long-difference analyses show that AI orientation significantly affects costs, sales, and market value, whereas AI mention does not, underscoring the importance of strategically deploying AI to create business value. Moreover, we find heterogeneous effects of AI product orientation and AI process orientation on performance. This study contributes to the recent AI management literature by demonstrating the strategic role of AI orientation in firm performance.
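As a rough illustration of how such an LLM-based measure can be constructed, the sketch below prompts a model to label a 10-K business description. It assumes the OpenAI Python client; the model name, prompt wording, and label set are illustrative assumptions, not the paper’s actual instrument.

```python
# Minimal sketch: labeling a 10-K business description with an LLM
# (illustrative prompt and labels; not the paper's actual measurement design).
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

PROMPT = (
    "You will read the business description from a firm's Form 10-K filing. "
    "Classify its AI disclosure as one of: NONE, MENTION_ONLY, "
    "PRODUCT_ORIENTATION, PROCESS_ORIENTATION. Reply with the label only.\n\n"
    "Business description:\n{text}"
)

def classify_ai_orientation(business_description: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": PROMPT.format(text=business_description)}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()
```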

Xiaoke Zhang’s Master’s Thesis

Xiaoke Zhang (2023). “How Does AI-Generated Voice Affect Online Video Creation? Evidence from TikTok”, Master’s Thesis, University of British Columbia

Supervisors: Gene Moo Lee, Mi Zhou

The rising demand for online video content has fostered one of the fastest-growing markets, as evidenced by the popularity of platforms like TikTok. Because video content is often difficult to create, platforms have attempted to leverage recent advancements in artificial intelligence (AI) to help creators with their video creation process. However, surprisingly little is known about the effects of AI on content creators’ productivity and creative patterns in this emerging market. Our paper investigates the adoption impact of AI-generated voice, a generative AI technology that creates acoustic artifacts, on video creators by empirically analyzing a unique dataset of 4,021 creators and their 428,918 videos on TikTok. Utilizing multiple audio and video analytics algorithms, we detect the adoption of AI voice from the massive video data and generate rich measurements for each video to quantify its characteristics. We then estimate the effects of AI voice using a difference-in-differences model coupled with look-ahead propensity score matching. Our results suggest that the adoption of AI voice increases creators’ video production and induces creators to produce shorter videos with more negative words. Interestingly, creators produce more novel videos with less self-disclosure when using AI voice. We also find an unintended consequence: AI-voice videos receive less viewer engagement. Our paper provides the first empirical evidence of how generative AI reshapes video content creation on online platforms, offering important implications for creators, platforms, and policymakers in the digital economy.


How Does AI-Generated Voice Affect Online Video Creation? Evidence from TikTok

Zhang, Xiaoke, Mi Zhou, Gene Moo Lee “How Does AI-Generated Voice Affect Online Video Creation? Evidence from TikTok”, Working Paper.

The rising demand for online video content has fostered one of the fastest-growing markets, as evidenced by the growing popularity of platforms like TikTok. In response to the challenges of video creation, these platforms are increasingly incorporating artificial intelligence (AI) to support creators in their video creation process. However, little is known about how AI integration influences online content creation. Our paper aims to address this gap by investigating the impact of AI-generated voice on video creators’ productivity and creative patterns. Using a comprehensive dataset of 554,252 videos from 4,691 TikTok creators, we conduct multimodal analyses of the video data to detect the adoption of AI voice and to quantify video characteristics. We then estimate the adoption effects using a stacked difference-in-differences model coupled with propensity score matching. Our results suggest that AI voice adoption significantly increases creator productivity. Moreover, we find that the use of AI voice enhances video novelty across image, audio, and text modalities, suggesting its role in reducing the workload on routine tasks and fostering creative exploration. Lastly, our study uncovers a disinhibition effect: creators tend to conceal their identities with the AI voice and express more negative sentiment because of diminished social image concerns. Our paper provides the first empirical evidence of how AI reshapes online video creation, providing important implications for creators, platforms, and policymakers in the creator economy.
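For readers less familiar with the estimation strategy, the sketch below shows a generic two-way fixed-effects difference-in-differences regression on a matched creator-week panel. The file and column names are hypothetical, and the propensity score matching and stacking steps used in the paper are omitted for brevity.

```python
# Minimal two-way fixed-effects DiD sketch (hypothetical panel and columns);
# matching and stacking steps are omitted here.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("matched_creator_week_panel.csv")

did = smf.ols(
    "videos_per_week ~ adopter:post + C(creator_id) + C(week)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["creator_id"]})

# The adopter:post coefficient is the difference-in-differences estimate
print(did.summary().tables[1])
```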

Ideas are Easy but Execution is Everything: Measuring the Impact of Stated AI Strategies and Capability on Firm Innovation Performance

Lee, Myunghwan, Gene Moo Lee (2022) “Ideas are Easy but Execution is Everything: Measuring the Impact of Stated AI Strategies and Capability on Firm Innovation Performance”, Work-in-Progress.

Contrary to the promise that AI will transform various industries, there are conflicting views on the impact of AI on firm performance. We argue that existing AI capability measures have two major limitations that constrain our understanding of AI’s impact in business. First, existing measures of AI capability do not distinguish between stated strategies and actual AI implementations. To distinguish stated AI strategy from actual AI capability, we collect various AI-related data sources, including AI conferences (e.g., NeurIPS, ICML, ICLR), patent filings (USPTO), inter-firm transactions related to AI adoption (FactSet), and AI strategies stated in 10-K annual reports. Second, while prior studies identified successful AI implementation factors (e.g., data integrity and intelligence augmentation) in a general context, little is known about the relationship between AI capabilities and in-depth innovation performance. We draw on neo-institutional theory to articulate firm-level AI strategies and construct a fine-grained AI capability measure that captures the unique characteristics of AI strategy. Using our newly proposed AI capability measure and a novel dataset, we will study the impact of AI on firm innovation, contributing to the nascent literature on managing AI.

Do Incentivized Reviews Poison the Well? Evidence from a Natural Experiment at Amazon.com

Park, Jaecheol, Arslan Aziz, Gene Moo Lee. “Do Incentivized Reviews Poison the Well? Evidence from a Natural Experiment at Amazon.com”, Working Paper.

  • Presentations: UBC (2021), KrAIS (2021), WISE (2021), PACIS (2022), SCECR (2022), BU Platform (2022), CIST (2022), BIGS (2022)
  • Preliminary version in PACIS 2022 Proceedings

The rapid growth in e-commerce has led to a concomitant increase in consumers’ reliance on digital word-of-mouth to inform their choices. As such, there is an increasing incentive for sellers to solicit reviews for their products. The literature has examined the direct and indirect effects of incentivized reviews on subsequent organic reviews among consumers who received incentives. However, since incentivized reviews and reviewers often constitute only a small proportion of a review platform (only 1.2% in our sample), it is important to understand whether their presence or absence affects organic reviews from reviewers who have not received incentives, who form the majority. We theorize two underlying effects that incentivized reviews can have on other organic reviews: a herding effect from imitating incentivized reviews and a disclosure effect from the increased trust or skepticism triggered by explicit incentive disclosure statements. These two effects lead organic reviews to either follow or deviate from incentivized reviews. Using Bidirectional Encoder Representations from Transformers (BERT) to identify incentivized reviews and a natural experiment caused by a policy change on Amazon.com in October 2016, we conduct difference-in-differences with propensity score matching analyses to identify the effects of banning incentivized reviews on organic reviews. Our results suggest that the disclosure effect is salient: banning incentivized reviews has positive effects on organic reviews in terms of frequency, sentiment, length, images, and helpfulness. Moreover, we find that the presence of incentivized reviews has poisoned the well for organic reviews regardless of the incentivized review ratio and that the effect is heterogeneous across levels of product quality uncertainty. Our findings contribute to the literature on online reviews and platform design and provide insights to platform managers.
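To make the text-classification step concrete, here is a minimal sketch of fine-tuning BERT to flag incentivized reviews with the Hugging Face libraries. The labeled file, column names, and hyperparameters are illustrative assumptions; the paper’s actual training data and setup are not shown.

```python
# Minimal sketch: fine-tuning BERT to flag incentivized reviews
# (hypothetical labeled data with columns: text, label).
import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

df = pd.read_csv("labeled_reviews.csv")              # columns: text, label (0/1)
ds = Dataset.from_pandas(df).train_test_split(test_size=0.2)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

ds = ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-incentivized",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=ds["train"], eval_dataset=ds["test"])
trainer.train()

# Predicted labels for the held-out split; the trained classifier would then
# score the full review corpus.
preds = trainer.predict(ds["test"]).predictions.argmax(axis=-1)
```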

Seeing the Unseen: The Effects of Implicit Representation in an Online Dating Platform

Kwon, Soonjae, Gene Moo Lee, Dongwon Lee, Sung-Hyuk Park (2024) “Seeing the Unseen: The Effects of Implicit Representation in an Online Dating Platform,” Working Paper.

  • Previous title: Learning Faces to Predict Matching Probability in an Online Dating Market
  • Presentations: DS (2021), AIMLBA (2021), WITS (2021), ICIS (2022)
  • Preliminary version in ICIS 2022 Proceedings
  • Based on an industry collaboration

This study investigates the effects of implicit preference-based representation on user engagement and matching outcomes in two-sided platforms, focusing on an online dating context. We develop a novel approach using explainable AI and generative AI to create personalized representations that reflect users’ implicit preferences. Through extensive matching simulations, we demonstrate that implicit representation significantly enhances both user engagement and matching outcomes across various recommendation algorithms. Our findings reveal heterogeneous effects driven by positive cross-side and same-side network effects, which vary depending on the gender distribution within the platform. This research contributes to understanding implicit representation in two-sided platforms and offers insights into the transformative potential of generative AI in digital ecosystems.

My thoughts on AI, Big Data, and IS Research

Last update: May 31, 2024

Back in 2021, I had a chance to share my thoughts on how Big Data Analytics and AI will impact Information Systems (IS) research. Thanks to ever-growing datasets (public and proprietary) and powerful computational resources (cloud APIs, open-source projects), AI and Big Data will be important in IS research for the foreseeable future. If you are an aspiring IS researcher, I believe you should embrace these developments and take advantage of them.

First, AI and Big Data are powerful “tools” for IS research. It can be intimidating to see all the fancy new AI techniques, but they are just tools to analyze your data. You don’t need to reinvent the wheel to use them. There are many open-source projects in Python and R that you can use to analyze your data. Also, many cloud services (e.g., Amazon Rekognition, Google Cloud ML, Microsoft Azure ML) allow you to use pre-trained AI models at a modest cost (that your professors can afford). What you need is some working knowledge of programming languages like Python and R, and a high-level understanding of the ideas behind the algorithms.
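To illustrate how low the barrier has become, here is a minimal sketch using the open-source Hugging Face transformers library in Python; the example texts and printed output are illustrative, but the pipeline call is the library’s standard usage for applying a pre-trained model.

```python
# Minimal sketch: applying a pre-trained sentiment model to text data.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pre-trained model
reviews = [
    "The product arrived late but works great.",
    "Terrible support experience.",
]
print(classifier(reviews))
# e.g., [{'label': 'POSITIVE', 'score': 0.99}, {'label': 'NEGATIVE', 'score': 0.99}]
```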

Don’t shy away from hands-on programming. Using AI and Big Data tools may not be a competitive advantage in the long run because of the democratization of AI tools, but I believe such skills will become the new baseline, so you need them in your research toolbox. Specifically, I believe that IS researchers should have a working knowledge of Python/R programming and the Linux environment. I recommend these online courses: AI Fundamentals, Data Science, Machine Learning, Linux, SQL, and NoSQL.

Second, AI and Big Data Analytics are creating a lot of interesting new “phenomena” in personal lives, firms, and societies. How will AI and robots be adopted in the workplace, and how will that affect the labor market? Are we losing our jobs? Or can we improve our productivity with AI tools? How will experts use AI in professional services? What are the unintended consequences (such as bias, security, privacy, and misinformation) of AI adoption in organizations and society? And how can we mitigate such issues? There are so many new and interesting research questions.

To stay relevant, I think IS researchers should closely follow emerging technologies. Again, it can be hard to keep up with all the advances. I try to stay up to date by reading industry reports (from McKinsey and Deloitte) and listening to podcasts (e.g., Freakonomics Radio, the a16z Podcast by Andreessen Horowitz, the Lex Fridman Podcast, Stanford’s Entrepreneurial Thought Leaders, and HBR’s Exponential View by Azeem Azhar).

For current and prospective UBC students, here are some resources:

For educators, I shared my experience of using AI in teaching in May 2024. You can find the slide deck here.

I hope this post may help people shape their research, teaching, and career strategies. I will try to keep updating this post. Cheers!