IT risk, especially cybersecurity risk, has increased rapidly and become a top concern for researchers, regulators, firm managers, and investors. This study creates a novel firm-level IT risk measure applicable to all US-listed firms by applying BERTopic topic modeling to the risk factors reported in Item 1A of 10-K annual reports. We validate the measure in multiple ways, including cross-validation, illustrative excerpts of IT risk factors, cross-sectional and over-time distribution analyses, and analysis of the firm characteristics associated with IT risk. The measure is heightened in IT-intensive industries and for larger, more profitable firms with better growth potential, and it predicts future data breaches. Using this ex-ante IT risk measure, we examine the relation between IT risk and stock price crash risk, which reflects a firm’s propensity for stock price crashes. Our findings suggest that IT risk is positively associated with crash risk, and we identify downward operating risk and the predictability of data breaches as two mechanisms behind this effect. Decomposing IT risk into cybersecurity risk and non-cybersecurity IT risk, we find that both types increase crash risk, but the effect of cybersecurity risk is stronger, consistent with their different risk natures. We further observe that the novelty and readability of IT risk factors strengthen the crash risk effects of IT risk, consistent with the notion that novelty signals updated and increased IT risk and readability improves the understanding of IT risk. Lastly, difference-in-differences analyses reveal that IT risk increases stock price crash risk, not the other way around. We conclude by discussing academic contributions and practical implications in the context of the SEC’s directives on reporting and managing IT risk and cybersecurity risk.
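The paper derives the measure with BERTopic over Item 1A risk factors. As a minimal illustrative stand-in (the keyword list, sentences, and function name below are hypothetical, not from the paper), a firm-level IT risk share can be computed as the fraction of risk-factor sentences flagged as IT-related:

```python
# Hedged sketch: the paper uses BERTopic topic modeling; here a simple
# keyword screen stands in for the topic model to illustrate how a
# firm-level IT risk share could be derived from Item 1A sentences.
IT_TERMS = {"cybersecurity", "data breach", "information technology",
            "hacking", "malware", "system failure"}

def it_risk_share(risk_sentences):
    """Fraction of risk-factor sentences mentioning an IT term."""
    if not risk_sentences:
        return 0.0
    flagged = sum(
        any(term in s.lower() for term in IT_TERMS) for s in risk_sentences
    )
    return flagged / len(risk_sentences)

sentences = [
    "A data breach could harm our reputation.",
    "Commodity price volatility may reduce margins.",
    "Failures of our information technology systems could disrupt operations.",
    "Litigation outcomes are uncertain.",
]
print(it_risk_share(sentences))  # 2 of 4 sentences flagged -> 0.5
```

A real implementation would replace the keyword screen with BERTopic's `fit_transform` over the full corpus and label IT-related topics, but the aggregation to a firm-level share is the same.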
Information Disclosure and Security Vulnerability Awareness: A Large-Scale Randomized Field Experiment in Pan-Asia
Previous title: “Information Disclosure and Security Policy Design: A Large-Scale Randomization Experiment in Pan-Asia”
This paper investigates how awareness of a security vulnerability index affects firms’ security protection strategies and how this awareness effect interacts with firm incentives and country-wide IT development. The security index is constructed from outgoing spam and phishing website hosting, which can serve as indicators of a firm’s security controls. To study whether security vulnerability awareness causes firms to improve their security, we conducted a randomized field experiment on 1,262 firms in six Pan-Asian countries and regions. We alerted 631 randomly selected treated firms to their security vulnerability index and their ranking relative to peers via advisory emails and websites. Difference-in-differences analyses show that, compared with controls, treated firms improve their security over time, with a statistically significant reduction in outgoing spam volume according to one of the data sources but not in phishing website hosting. However, a statistically significant reduction in phishing website hosting was observed among non-web-hosting firms, suggesting that firms’ underlying incentives play an important role in the treatment effect. Lastly, exploiting the multi-country nature of the data, we find that firms in countries with high information and communications technology (ICT) development are more responsive to our intervention because they have greater IT capabilities and more resources to resolve security issues. Our study offers cybersecurity policymakers useful insights into how firm incentives and ICT environments shape firms’ adoption of security measures.
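The difference-in-differences comparison underlying the experiment can be sketched numerically; the group means below are invented for illustration and are not the paper's estimates:

```python
# Hedged sketch of the difference-in-differences logic: the treatment
# effect is the change in the treated group's outcome minus the change
# in the control group's outcome over the same period.
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Mean log outgoing-spam volume per firm (hypothetical values).
effect = did_estimate(treat_pre=5.0, treat_post=4.2,
                      ctrl_pre=5.1, ctrl_post=5.0)
print(round(effect, 2))  # -0.7: treated firms cut spam relative to controls
```

The regression version adds firm and time fixed effects, but the identifying comparison is exactly this double difference.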
As our society depends heavily on information and communication technology, the associated risk has also increased significantly. Cyber insurance has emerged as a possible means to better manage such cyber risk. However, the cyber insurance market is still immature due to a lack of data sharing and of standards on cyber risk and cyber insurance. To address this issue, this research proposes a data-driven framework to assess cyber risk using externally observable cyberattack data sources such as outbound spam and phishing websites. We demonstrate the feasibility of this approach by building cyber risk assessment reports for Korean organizations. Then, by conducting a large-scale randomized field experiment, we measure the causal effect of cyber risk disclosure on organizational security levels. Finally, we develop machine-learning models that use the cyber risk assessment data to predict data breach incidents, as one type of cyber incident. We believe the proposed data-driven methods can be a stepping stone toward information transparency in the cyber insurance market.
Data: litigation risk scores for 6,134 firms, 1996–2015 [link]
Research assistant: Raymond Situ
This study examines whether and how machine learning techniques can improve the prediction of litigation risk relative to the traditional logistic regression model. The existing litigation literature has no consensus on a predictive model, and the evaluation of litigation model performance has been ad hoc. We use five popular machine learning techniques to predict litigation risk and benchmark their performance against the logistic regression model in Kim and Skinner (2012). Our results show that machine learning techniques significantly improve the prediction of litigation risk. We identify the two best-performing methods (random forest and convolutional neural networks) and rank the importance of predictors. Additionally, we show that models using economically motivated ratio variables outperform models using raw variables. Overall, our results suggest that jointly considering economically meaningful predictors and machine learning techniques maximizes the improvement of predictive litigation models.
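Benchmarking of the kind described rests on ranking metrics such as AUC. A minimal pure-Python AUC (the labels and scores below are invented for illustration, not the paper's data or results) shows how two models' litigation-risk predictions can be compared:

```python
# Hedged sketch: compare two models' predicted litigation-risk scores
# with the rank (Mann-Whitney) formulation of AUC -- the probability
# that a random positive case outranks a random negative one.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical sued-firm labels and two models' predicted scores.
y = [1, 0, 1, 0, 0, 1]
logit_scores = [0.6, 0.5, 0.4, 0.3, 0.7, 0.8]
rf_scores    = [0.9, 0.2, 0.7, 0.1, 0.3, 0.8]
print(auc(y, logit_scores))  # baseline model's ranking quality
print(auc(y, rf_scores))     # here the second model ranks perfectly
```

In practice one would compute these with `sklearn.metrics.roc_auc_score` on out-of-sample predictions, but the metric itself is just this pairwise comparison.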
Previous titles: “Misinformation and Optimal Time to Detect”, “Optimal Stopping and Strategic Espionage”, “To Disconnect or Not: A Cybersecurity Game”
Modern cyberattacks such as advanced persistent threats have become sophisticated: hackers can stay undetected for extended periods, and defenders lack sufficient countermeasures to prevent advanced cyberattacks. Reflecting this phenomenon, we propose a game-theoretic model to analyze the strategic decisions made by a hacker and a defender in equilibrium. In our game model, the hacker launches stealthy cyberattacks over a long horizon, and the defender decides when to disable a suspicious user based on noisy observations of the user’s activities. The damage caused by the hacker can be enormous if the defender does not immediately ban a suspicious user under certain circumstances, which can explain the emergence of sophisticated cyberattacks with detrimental consequences. Our model also predicts that the hacker may opt to be behavioral to avoid worst cases: behavioral cyberattacks are less threatening, so the defender chooses not to immediately block a suspicious user in order to reduce the cost of false detection.
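The defender's decision of when to disable a user from noisy activity signals can be illustrated, purely as a numerical stand-in for the paper's game model and with all probabilities invented, by a Bayesian belief update that blocks once the posterior probability of maliciousness crosses a threshold:

```python
# Hedged sketch of the defender's stopping decision: update the posterior
# that a user is malicious after each noisy signal, and block once it
# crosses a threshold. All probabilities are illustrative assumptions.
def posterior_update(prior, signal, p_sig_mal=0.7, p_sig_benign=0.2):
    """Bayes rule: P(malicious | signal). 'signal' is True if a
    suspicious event was observed in this period, else False."""
    if signal:
        like_mal, like_ben = p_sig_mal, p_sig_benign
    else:
        like_mal, like_ben = 1 - p_sig_mal, 1 - p_sig_benign
    num = like_mal * prior
    return num / (num + like_ben * (1 - prior))

def periods_until_block(signals, prior=0.1, threshold=0.9):
    """Number of observations before the defender disables the user."""
    belief = prior
    for t, s in enumerate(signals, start=1):
        belief = posterior_update(belief, s)
        if belief >= threshold:
            return t
    return None  # never confident enough to block

print(periods_until_block([True] * 5))  # blocks after the 4th signal
```

A higher threshold delays blocking and lowers false-detection costs but lets a genuine attacker persist longer, which is the trade-off at the heart of the equilibrium analysis.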
Research assistants: Yun-Sik Choi, Ying-Yu Chen, Mark Varga, Zeyuan Zhu, Niyati Parameswaran, Markus Iivonen
Cyber-insecurity is a serious threat in the digital world. In this paper, we argue that the suboptimal cybersecurity environment is partly due to organizations’ underinvestment in security and a lack of suitable policies. The motivation stems from a related policy question: how can governments and other organizations design policies that ensure a sufficient level of cybersecurity? We address this question by exploring a policy devised to alleviate information asymmetry and achieve transparency in cybersecurity information sharing. We propose a cybersecurity evaluation agency along with regulations on information disclosure. To empirically evaluate the effectiveness of such an institution, we conduct a large-scale randomized field experiment on 7,919 US organizations. Specifically, we generate organizations’ security reports based on their outbound spam relative to industry peers, then share the reports with the subjects either privately or publicly. Using models of heterogeneous treatment effects and machine learning techniques, we find that security information sharing combined with the publicity treatment significantly reduces spam among organizations that were originally large spammers. Moreover, significant peer effects are observed among industry peers after the experiment.
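The heterogeneous-treatment-effect finding can be sketched by estimating the treatment effect separately within subgroups; the firm records below are fabricated for illustration (the paper uses causal machine-learning models, not this simple split):

```python
# Hedged sketch of subgroup treatment effects: mean outcome change for
# treated vs control firms, split by a pre-treatment moderator (here,
# whether the firm was originally a large spammer). Data are fabricated.
firms = [
    # (treated, large_spammer, spam_change)  change = post - pre volume
    (True,  True,  -40), (True,  True,  -35), (False, True,  -5),
    (False, True,   0),  (True,  False,  -2), (True,  False,  -1),
    (False, False,  -1), (False, False,   0),
]

def subgroup_effect(records, large):
    """Treated-minus-control mean spam change within one subgroup."""
    treat = [c for t, g, c in records if t and g == large]
    ctrl  = [c for t, g, c in records if not t and g == large]
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

print(subgroup_effect(firms, large=True))   # large spammers respond strongly
print(subgroup_effect(firms, large=False))  # small spammers barely react
```

The qualitative pattern, a large effect concentrated among originally large spammers, mirrors the paper's conclusion; the magnitudes here carry no empirical meaning.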