Tag Archives: risk

IT Risk and Stock Price Crash Risk

Song, Victor, Hasan Cavusoglu, Jaecheol Park, Mary L. Z. Ma, Gene Moo Lee (2026) “IT Risk and Stock Price Crash Risk,” Under review.

This study examines whether and how firm-level information technology (IT) risk contributes to stock price crash risk. We construct a novel measure of ex-ante IT risk from risk factor disclosures in Item 1A of firms’ 10-K filings using advanced machine learning approaches. We find that higher IT risk is associated with greater stock price crash risk. Mechanism analyses indicate that this effect operates primarily through increased downside operating risk, rather than through heightened exposure to data breach events. We further document heterogeneity in the relationship between IT risk and stock price crash risk: (1) cybersecurity risk has a stronger effect than noncybersecurity IT risk; (2) the effect is stronger for newly disclosed IT risk factors; and (3) higher readability amplifies the crash risk effect. Together, these findings highlight IT risk as a previously underexplored determinant of stock price crash risk and offer new insights into the capital market consequences of firms’ IT-related disclosures.
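As a rough illustration of this kind of text-based measure, the sketch below scores a firm's Item 1A text by the share of risk-factor sentences that mention IT-related terms. The keyword list and the scoring rule are hypothetical stand-ins for the paper's machine learning classifier, not its actual method:

```python
# Toy text-based IT risk measure: fraction of Item 1A sentences that
# mention IT-related terms. Keyword matching is an illustrative stand-in
# for the machine learning approach used in the paper.
import re

IT_TERMS = {"cybersecurity", "data breach", "information technology",
            "software", "network", "ransomware", "it systems"}

def it_risk_share(risk_factor_text: str) -> float:
    """Share of sentences flagged as IT-related (0.0 to 1.0)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", risk_factor_text)
                 if s.strip()]
    if not sentences:
        return 0.0
    flagged = sum(1 for s in sentences
                  if any(term in s.lower() for term in IT_TERMS))
    return flagged / len(sentences)

sample = ("Our operations depend on complex IT systems. "
          "A data breach could expose customer records. "
          "Commodity price swings may reduce margins.")
print(round(it_risk_share(sample), 2))  # 2 of 3 sentences flagged -> 0.67
```

A production measure would replace the keyword filter with a trained classifier, but the firm-level aggregation step would look much the same.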

Understanding Security Vulnerability Awareness, Firm Incentives, and ICT Development in Pan-Asia (JMIS 2020)

Zhuang, Yunhui, Yunsik Choi, Shu He, Alvin Chung Man Leung, Gene Moo Lee, Andrew B. Whinston (2020) Understanding Security Vulnerability Awareness, Firm Incentives, and ICT Development in Pan-Asia. Journal of Management Information Systems, 37(3): 668-693.

This paper investigates how awareness of a security vulnerability index affects firms' security protection strategies and how this awareness effect interacts with firm incentives and country-level IT development. The security index is constructed from outgoing spam and phishing website hosting, which may serve as indicators of a firm's security controls. To study whether security vulnerability awareness causes firms to improve their security, we conducted a randomized field experiment on 1,262 firms in six Pan-Asian countries and regions. We alerted 631 randomly selected treated firms to their security vulnerability index and their rankings relative to peers via advisory emails and websites. Difference-in-differences analyses show that, compared with the controls, the treated firms improved their security over time, with a statistically significant reduction in outgoing spam volume according to one of the data sources but not in phishing website hosting. However, a statistically significant reduction in phishing website hosting was observed among non-web-hosting firms, suggesting that firms' underlying incentives play an important role in the treatment effect. Lastly, exploiting the multi-country nature of the data, we found that firms in countries with high information and communications technology (ICT) development are more responsive to our intervention because they have greater IT capabilities and more resources to resolve security issues. Our study provides cybersecurity policymakers with useful insights into how firm incentives and ICT environments shape firms' adoption of security measures.
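The core difference-in-differences comparison can be illustrated on synthetic data. The group means, noise level, and simulated treatment effect below are invented for the sketch; the study's actual analysis is a regression on observed spam and phishing outcomes:

```python
# Difference-in-differences on synthetic data: a simulated -0.5 treatment
# effect on (log) spam volume for treated firms in the post period.
import random

random.seed(0)

def simulate(treated: bool, post: bool) -> float:
    """Synthetic outcome; treated firms drop by 0.5 after treatment."""
    base = 10.0 + (0.3 if treated else 0.0) - (0.2 if post else 0.0)
    effect = -0.5 if (treated and post) else 0.0
    return base + effect + random.gauss(0, 0.1)

def mean(xs):
    return sum(xs) / len(xs)

n = 2000
cells = {(t, p): [simulate(t, p) for _ in range(n)]
         for t in (False, True) for p in (False, True)}

# DiD = (treated post - treated pre) - (control post - control pre)
did = ((mean(cells[(True, True)]) - mean(cells[(True, False)]))
       - (mean(cells[(False, True)]) - mean(cells[(False, False)])))
print(round(did, 2))  # recovers roughly the simulated -0.5 effect
```

The double difference nets out both the fixed gap between treated and control firms and the common time trend, which is why the estimate lands near the simulated effect despite both confounds being present.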

Developing Cyber Risk Assessment Framework for Cyber Insurance: A Big Data Approach (KIRI Research Report 2018)

Lee, G. M. (2018). Developing Cyber Risk Assessment Framework for Cyber Insurance: A Big Data Approach (in Korean). KIRI Research Report 2018-15.

As our society has become heavily dependent on information and communication technology, the associated risk has also increased significantly. Cyber insurance has emerged as a possible means to better manage such cyber risk. However, the cyber insurance market is still at a nascent stage due to the lack of data sharing and of standards for cyber risk and cyber insurance. To address this issue, this research proposes a data-driven framework to assess cyber risk using externally observable cyber attack data sources such as outbound spam and phishing websites. We demonstrate the feasibility of this approach by building cyber risk assessment reports for Korean organizations. Then, by conducting a large-scale randomized field experiment, we measure the causal effect of cyber risk disclosure on organizational security levels. Finally, we develop machine learning models that predict data breach incidents, as one class of cyber incidents, using the developed cyber risk assessment data. We believe the proposed data-driven methods can be a stepping stone toward information transparency in the cyber insurance market.
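A minimal version of such a breach-prediction model is sketched below as a logistic regression fit by gradient descent on synthetic vulnerability signals. The features, data-generating process, and parameters are all illustrative assumptions, not the report's models or data:

```python
# Toy breach prediction from externally observable signals (standardized
# outbound-spam and phishing-hosting scores). All data are synthetic.
import math
import random

random.seed(1)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic firms: breach odds rise with the two vulnerability signals.
data = []
for _ in range(1000):
    spam = random.uniform(-1, 1)    # standardized outbound spam signal
    phish = random.uniform(-1, 1)   # standardized phishing-hosting signal
    p = sigmoid(2.0 * spam + 1.0 * phish)
    data.append(((spam, phish), 1 if random.random() < p else 0))

# Fit logistic regression by full-batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):
    gw0 = gw1 = gb = 0.0
    for (s, ph), y in data:
        err = sigmoid(w[0] * s + w[1] * ph + b) - y
        gw0 += err * s
        gw1 += err * ph
        gb += err
    m = len(data)
    w[0] -= lr * gw0 / m
    w[1] -= lr * gw1 / m
    b -= lr * gb / m

# In-sample accuracy of the fitted classifier.
acc = sum(((sigmoid(w[0] * s + w[1] * ph + b) >= 0.5) == (y == 1))
          for (s, ph), y in data) / len(data)
print(round(acc, 2))
```

Both fitted slopes come out positive, matching the intuition that higher observed spam and phishing activity predict a higher breach likelihood.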

Predicting Litigation Risk via Machine Learning

Lee, Gene Moo*, James Naughton*, Xin Zheng*, Dexin Zhou* (2020) “Predicting Litigation Risk via Machine Learning,” Working Paper. [SSRN] (* equal contribution)

This study examines whether and how machine learning techniques can improve the prediction of litigation risk relative to a traditional logistic regression model. The existing litigation literature has no consensus on a predictive model, and the evaluation of litigation model performance is ad hoc. We use five popular machine learning techniques to predict litigation risk and benchmark their performance against the logistic regression model in Kim and Skinner (2012). Our results show that machine learning techniques can significantly improve the prediction of litigation risk. We identify the two best-performing methods (random forest and convolutional neural networks) and rank the importance of predictors. Additionally, we show that models using economically motivated ratio variables perform better than models using raw variables. Overall, our results suggest that the joint consideration of economically meaningful predictors and machine learning techniques maximizes the improvement of predictive litigation models.
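On the evaluation point, one commonly used benchmark that is consistent across very different model families is ROC AUC: the probability that a model scores a randomly chosen positive (sued) firm above a randomly chosen negative one. The rank-based implementation below is a generic sketch, not the paper's code:

```python
# ROC AUC via the rank-sum (Mann-Whitney) formulation; ties get half credit.
def roc_auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 0, 1]
good_model = [0.9, 0.2, 0.8, 0.4, 0.3, 0.7]  # ranks all positives highest
random_model = [0.5] * 6                     # uninformative constant score
print(roc_auc(good_model, labels), roc_auc(random_model, labels))  # 1.0 0.5
```

Because AUC depends only on the ranking of scores, it lets a logistic regression, a random forest, and a neural network be compared on equal footing regardless of how each calibrates its probabilities.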

Security Defense against Long-term and Stealthy Cyberattacks (DSS 2023)

Han, Kookyoung, Jin Hyuk Choi, Yun-Sik Choi, Gene Moo Lee, Andrew B. Whinston (2023) Security Defense against Long-term and Stealthy Cyberattacks. Decision Support Systems, 166: 113912.

  • Funded by NSF (Award #1718600) and UNIST
  • Best Paper Award at KrAIS 2017
  • Presented at UT Austin (2017), UNIST (2017), INFORMS (Houston, TX 2017), CIST (Houston, TX 2017), WITS (Seoul, Korea 2017), and KrAIS (Seoul, Korea 2017)
  • Previous titles: “Misinformation and Optimal Time to Detect”, “Optimal Stopping and Strategic Espionage”, “To Disconnect or Not: A Cybersecurity Game”

Modern cyberattacks such as advanced persistent threats have become increasingly sophisticated: hackers can stay undetected for an extended time, and defenders lack sufficient countermeasures against such advanced attacks. Reflecting on this phenomenon, we propose a game-theoretic model to analyze the strategic decisions made by a hacker and a defender in equilibrium. In our game model, the hacker launches stealthy cyberattacks over a long horizon, and the defender decides when to disable a suspicious user based on noisy observations of the user's activities. Damages caused by the hacker can be enormous if the defender does not immediately ban a suspicious user under certain circumstances, which can explain the emergence of sophisticated cyberattacks with detrimental consequences. Our model also predicts that the hacker may opt to act as a behavioral (non-strategic) type to avoid worst-case outcomes, because behavioral cyberattacks are less threatening and the defender then chooses not to immediately block a suspicious user in order to reduce the cost of false detection.
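To give intuition for the defender's detection problem, the toy simulation below bans a user once the cumulative log-likelihood ratio of noisy activity observations crosses a threshold (a CUSUM-style rule). The rule and every parameter are generic illustrations of sequential detection, not the paper's equilibrium model:

```python
# Sequential detection sketch: ban a user when accumulated evidence that
# the activity stream comes from a hacker (mean-1 Gaussian) rather than a
# benign user (mean-0 Gaussian) crosses a threshold. Parameters are toys.
import random

random.seed(42)

def llr(x: float) -> float:
    """Log-likelihood ratio of one observation, hacker vs benign.
    For N(1,1) vs N(0,1) this simplifies to x - 0.5."""
    return x - 0.5

def time_to_ban(is_hacker: bool, threshold: float = 5.0,
                max_t: int = 2000) -> int:
    """Steps until the CUSUM statistic crosses the ban threshold."""
    mu = 1.0 if is_hacker else 0.0
    total = 0.0
    for t in range(1, max_t + 1):
        total = max(0.0, total + llr(random.gauss(mu, 1.0)))  # CUSUM reset
        if total >= threshold:
            return t
    return max_t  # never flagged within the horizon

hacker_delay = sum(time_to_ban(True) for _ in range(100)) / 100
benign_delay = sum(time_to_ban(False) for _ in range(100)) / 100
print(hacker_delay < benign_delay)  # hackers are flagged far sooner
```

Raising the threshold trades slower detection of real hackers for fewer false bans of benign users, which is exactly the cost-of-false-detection tension the abstract describes.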

How would information disclosure influence organizations’ outbound spam volume? Evidence from a field experiment (J. Cybersecurity 2016)

He, Shu*, Gene Moo Lee*, Sukjin Han, Andrew B. Whinston (2016) How Would Information Disclosure Influence Organizations’ Outbound Spam Volume? Evidence from a Field Experiment. Journal of Cybersecurity, 2(1): 99-118. (* equal contribution)

Cyber-insecurity is a serious threat in the digital world. In this paper, we argue that the suboptimal cybersecurity environment is partly due to organizations’ underinvestment in security and a lack of suitable policies. The motivation for the paper stems from a related policy question: how can governments and other organizations design policies that ensure a sufficient level of cybersecurity? We address this question by exploring a policy devised to alleviate information asymmetry and to achieve transparency in cybersecurity information sharing. We propose a cybersecurity evaluation agency along with regulations on information disclosure. To empirically evaluate the effectiveness of such an institution, we conduct a large-scale randomized field experiment on 7,919 US organizations. Specifically, we generate organizations’ security reports based on their outbound spam relative to industry peers and then share the reports with the subjects either privately or publicly. Using models of heterogeneous treatment effects and machine learning techniques, we find evidence from this experiment that security information sharing combined with the publicity treatment has a significant effect on spam reduction for firms that were originally large spammers. Moreover, significant peer effects are observed among industry peers after the experiment.