Tag Archives: information asymmetry

Security Defense against Long-term and Stealthy Cyberattacks (CIST 2017, WITS 2017, KrAIS 2017)

Kookyoung Han, Jin Hyuk Choi, Yun-Sik Choi, Gene Moo Lee, Andrew B. Whinston (2021) “Security Defense against Long-term and Stealthy Cyberattacks.” Under Review.

  • Latest version: May 2021
  • Funded by NSF (Award #1718600) and UNIST
  • Best Paper Award at KrAIS 2017
  • Presented at UT Austin (2017), UNIST (2017), INFORMS (Houston, TX 2017), CIST (Houston, TX 2017), WITS (Seoul, Korea 2017), and KrAIS (Seoul, Korea 2017)
  • Previous titles:
    • Misinformation and Optimal Time to Detect
    • Optimal Stopping and Strategic Espionage
    • To Disconnect or Not: A Cybersecurity Game

Modern cyberattacks such as advanced persistent threats have become increasingly sophisticated. Hackers can stay undetected for an extended time, and defenders lack sufficient countermeasures to prevent these advanced cyberattacks. Motivated by this phenomenon, we propose a game-theoretic model in which a hacker launches stealthy cyberattacks over a long horizon and a defender chooses between monitoring user activities and disabling a suspicious user. Focusing on cases in which the players sufficiently care about future payoffs, we find that if the defender does not immediately ban a suspicious user, the damage caused by the hacker can be enormous. Therefore, in equilibrium the defender bans every suspicious user to avoid large losses, resulting in the worst payoffs for both players. These results explain the emergence of sophisticated cyberattacks with detrimental consequences. Our model also predicts that the hacker may opt to be non-strategic: non-strategic cyberattacks are less threatening, so the defender chooses not to immediately block a suspicious user in order to reduce false detection, in which case both players are better off.

How Would Information Disclosure Influence Organizations’ Outbound Spam Volume? Evidence from a Field Experiment (J. Cybersecurity 2016)

Shu He*, Gene Moo Lee*, Sukjin Han, Andrew B. Whinston (2016) “How Would Information Disclosure Influence Organizations’ Outbound Spam Volume? Evidence from a Field Experiment.” Journal of Cybersecurity 2(1), pp. 99-118. (* equal contribution)

Cyber-insecurity is a serious threat in the digital world. In this paper, we argue that the suboptimal cybersecurity environment is partly due to organizations’ underinvestment in security and a lack of suitable policies. The motivation for the paper stems from a related policy question: how can governments and other organizations design policies that ensure a sufficient level of cybersecurity? We address this question by exploring a policy devised to alleviate information asymmetry and to achieve transparency in cybersecurity information-sharing practices. We propose a cybersecurity evaluation agency along with regulations on information disclosure. To empirically evaluate the effectiveness of such an institution, we conduct a large-scale randomized field experiment on 7,919 US organizations. Specifically, we generate security reports for organizations based on their outbound spam relative to industry peers, and then share the reports with the subjects either privately or publicly. Using models of heterogeneous treatment effects and machine-learning techniques, we find evidence that information sharing combined with the publicity treatment has a significant effect on spam reduction for originally large spammers. Moreover, significant peer effects are observed among industry peers after the experiment.