Modern cyberattacks such as advanced persistent threats (APTs) have become highly sophisticated: attackers can remain undetected for extended periods, and defenders lack sufficient countermeasures to prevent advanced attacks. Motivated by this phenomenon, we propose a game-theoretic model in which a hacker launches stealthy cyberattacks over a long horizon and a defender chooses between monitoring user activities and disabling a suspicious user. The damage caused by the hacker can be enormous if the defender does not immediately ban a suspicious user under certain circumstances, which helps explain the emergence of sophisticated cyberattacks with detrimental consequences. Our model also predicts that the hacker may opt to behave like a normal, "behavioral" user to avoid the worst outcomes: behavioral cyberattacks are less threatening, so the defender chooses not to immediately block a suspicious user in order to reduce the cost of false detection.
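The strategic logic in the abstract can be illustrated with a toy two-player game. This is a minimal sketch with invented payoff numbers (none of the values, strategy names, or functions below come from the paper's model); it only shows how a behavioral attacker can avoid being blocked when blocking carries a false-detection cost.

```python
# Hypothetical illustration of the monitor-vs-block interaction described above.
# All payoff numbers are invented for this sketch; they are NOT from the paper.

# payoffs[(attacker, defender)] = (attacker_payoff, defender_payoff)
payoffs = {
    ("stealthy",   "block"):   (-5,  -1),  # attack stopped at a small false-detection cost
    ("stealthy",   "monitor"): (10, -10),  # stealthy attack succeeds; heavy damage
    ("behavioral", "block"):   (-2,  -4),  # mild attack stopped, but blocking normal-looking
                                           # users carries a high false-detection cost
    ("behavioral", "monitor"): ( 3,  -3),  # mild damage tolerated to avoid blocking costs
}

def defender_best_response(attacker_strategy):
    """Defender's best reply to a given attacker strategy."""
    return max(("block", "monitor"),
               key=lambda d: payoffs[(attacker_strategy, d)][1])

def attacker_choice():
    """Attacker picks the strategy maximizing its own payoff,
    anticipating the defender's best response to each choice."""
    return max(("stealthy", "behavioral"),
               key=lambda a: payoffs[(a, defender_best_response(a))][0])
```

With these illustrative numbers, the defender blocks a stealthy attacker but only monitors a behavioral one, so an attacker who anticipates this settles on the behavioral strategy, mirroring the prediction in the abstract.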
Research assistants: Yun-Sik Choi, Ying-Yu Chen, Mark Varga, Zeyuan Zhu, Niyati Parameswaran, Markus Iivonen
Cyber-insecurity is a serious threat in the digital world. In this paper, we argue that the suboptimal cybersecurity environment is partly due to organizations' underinvestment in security and a lack of suitable policies. The motivation for this paper stems from a related policy question: how can governments and other organizations design policies that ensure a sufficient level of cybersecurity? We address this question by exploring a policy devised to alleviate information asymmetry and to achieve transparency in cybersecurity information sharing practices. We propose a cybersecurity evaluation agency along with regulations on information disclosure. To empirically evaluate the effectiveness of such an institution, we conduct a large-scale randomized field experiment on 7,919 US organizations. Specifically, we generate each organization's security report based on its outbound spam relative to its industry peers, then share the reports with the subjects either privately or publicly. Using models for heterogeneous treatment effects and machine learning techniques, we find evidence from this experiment that security information sharing combined with the publicity treatment has significant effects on spam reduction for organizations that were originally large spammers. Moreover, significant peer effects are observed among industry peers after the experiment.
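The measurement step behind the security reports can be sketched as a peer-relative ranking. The data, field names, and the `peer_relative_scores` helper below are hypothetical illustrations (the paper does not specify this computation); the sketch only shows what "outbound spam relative to industry peers" could mean operationally.

```python
# Hypothetical sketch of a peer-relative spam score; toy data and names,
# not the paper's actual measurement procedure.
from statistics import mean

# (organization, industry, outbound spam volume) -- invented records
orgs = [
    ("org_a", "finance", 120), ("org_b", "finance", 30),
    ("org_c", "retail", 500), ("org_d", "retail", 80), ("org_e", "retail", 90),
]

def peer_relative_scores(records):
    """Score each organization's spam volume against its industry-peer average."""
    by_industry = {}
    for name, industry, spam in records:
        by_industry.setdefault(industry, []).append(spam)
    peer_avg = {ind: mean(vols) for ind, vols in by_industry.items()}
    # A ratio above 1 means the organization emits more spam than its peers.
    return {name: spam / peer_avg[industry] for name, industry, spam in records}

scores = peer_relative_scores(orgs)
```

A report for each organization would then compare its score with 1.0, and the randomized experiment would assign organizations to a private-sharing, public-sharing, or control condition.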