Tag Archives: LLM

Designing for Designers: A Multimodal Hypergraph RAG System To Enhance Automotive Design

Zhang, Xiaoke, Angela Kwon, Mi Zhou, Gene Moo Lee, “Designing for Designers: A Multimodal Hypergraph RAG System to Enhance Automotive Design,” Work-in-Progress.

The growing adoption of large language models (LLMs) across industries highlights the need for domain-specific systems that leverage an organization’s proprietary knowledge. In the automotive sector, general-purpose LLMs often lack specialized expertise and may produce irrelevant or misleading outputs, hindering vehicle designers’ creative processes. To address these challenges, we partner with Kia Motors, a leading automotive manufacturer in South Korea, to develop a designer-oriented, multimodal hypergraph retrieval-augmented generation (RAG) framework for vehicle concept ideation. Our framework consists of two core components. First, we construct a custom hypergraph knowledge base that captures complex relationships between customer feedback (text modality) and design assets (visual modality). Second, we design an interactive chatbot interface that accepts both free-form text and image inputs, retrieves relevant subgraphs from the knowledge base, and generates contextually grounded responses. We plan to evaluate the prototype through randomized online experiments and user studies involving Kia’s design teams. This work will contribute to design science by proposing a scalable method for multimodal knowledge representation and demonstrating how interactive AI tools can support domain-specific creative exploration.

Large Language Models in the Institutional Press: Investigating the Effects on Information Sourcing and News Production

Zhang, Xiaoke, Myunghwan Lee, Mi Zhou, Gene Moo Lee, “Large Language Models in the Institutional Press: Investigating the Effects on Information Sourcing and News Production,” 3rd round R&R, MIS Quarterly.

  • Presentations: UBC (2024), DS (2024), CIST (2024), BIGS (2024), JUSWIS (2025), UIUC (2025)
  • Industry partner: Muhayu

Large language models (LLMs) are transforming journalism by directly entering journalistic workflows, introducing new opportunities and challenges for the institutional press. This study investigates how LLM assistance affects journalists’ information sourcing in news production using a mixed-method approach. We begin with a qualitative study of 43 journalists to identify and theorize how LLM assistance affects three core journalistic values: publication promptness, information source quantity, and information source originality. We then compile a large-scale dataset of 1,073,742 news articles from 111 South Korean news outlets and collaborate with industry experts to detect undisclosed LLM-assisted articles. Our event-level analysis shows that LLM assistance accelerates publication but reduces the number of information sources used in news articles, with a larger decline in primary sources than in secondary sources. Heterogeneity analyses and a randomized experiment suggest that this reduction is driven by two mechanisms: an LLM generation mechanism that narrows the set of retrieved and represented sources, and a metacognitive regulation mechanism that reduces journalists’ active search and evaluation. We further show that these effects extend beyond individual articles. A journalist-level difference-in-differences analysis indicates that LLM adoption leads to persistent reductions in source usage over time. Our findings offer practical implications for LLM system design, newsroom practices, and institutional disclosure policy.

Unpacking AI Transformation: The Impact of AI Strategies on Firm Performance from the Dynamic Capabilities Perspective

Park, Jaecheol, Myunghwan Lee, J. Frank Li, Gene Moo Lee, “Unpacking AI Transformation: The Impact of AI Strategies on Firm Performance from the Dynamic Capabilities Perspective,” Work-in-Progress.

  • Presentations: UBC (2024), CIST (2024), INFORMS (2024), SNU (2024), UMass (2024), BIGS (2024), KrAIS (2024), CityU Hong Kong (2025), NTU (2025), AIM (2025), ISR-PDW (2025)
  • Best Paper Award at BIGS 2024
  • Best Student Paper Award at KrAIS 2024

Artificial intelligence (AI) technologies hold great potential for large-scale economic impact. Aligned with this trend, recent studies explore the impact of AI adoption on firm performance. However, they predominantly measure firms’ AI capabilities with inputs (e.g., labor/job postings) or outputs (e.g., patents), neglecting the strategic direction toward AI in business operations and value creation. In this paper, we empirically examine how firms’ AI strategic orientation affects firm performance from the dynamic capabilities perspective. We create a novel firm-year AI strategic orientation measure by employing a large language model to analyze business descriptions in Form 10-K filings, and we identify an increasing trend and changing status of AI strategies among U.S. public firms. Our long-difference analysis shows that AI strategic orientation is associated with greater operating costs, capital expenditures, and market value but not sales, underscoring the importance of strategic direction toward AI in creating business value. By further dissecting firms’ AI strategic orientation into AI awareness, AI product orientation, and AI process orientation, we find that AI awareness is generally not related to performance, that AI product orientation is associated with increased operating expenses in the short term and higher market value in the long term, and that AI process orientation is associated with increased costs and sales in the long term. Moreover, we find a negative moderating effect of environmental dynamism on AI process orientation. This study contributes to the recent AI strategy and management literature by demonstrating the strategic role of AI orientation in firm performance.