Nekara: Generalized Concurrency Testing

Udit Agarwal, Pantazis Deligiannis, Cheng Huang, Kumseok Jung, Akash Lal, Immad Naseer, Matthew Parkinson, Arun Thangamani, Jyothi Vedurada, Yunpeng Xiao. In Proceedings of the ACM/IEEE International Conference on Automated Software Engineering (ASE), 2021. [PDF | Talk Slides]

Abstract: Testing concurrent systems remains an uncomfortable problem for developers. The common industrial practice is to stress-test a system against large workloads, in the hope of triggering corner-case interleavings that reveal bugs. However, stress testing is often inefficient, and it offers no clear measure of how well it covers the space of interleavings. In response, the research community has proposed the idea of systematic testing, where a tool takes over the scheduling of concurrent actions so that it can explore the space of interleavings.
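
To make the contrast concrete, here is a minimal sketch (in Python, not tied to any of the tools discussed) of what systematic testing does: each operation is split into its atomic steps, and a harness enumerates every interleaving of those steps deterministically, checking the outcome under each one.

```python
from itertools import permutations

def run_interleaving(schedule):
    """Execute one interleaving of two non-atomic increments.

    Each operation has two atomic steps: read the shared counter,
    then write back the read value plus one. `schedule` lists which
    operation's next step runs at each point."""
    counter = 0
    local = {0: None, 1: None}   # value each operation read
    step = {0: 0, 1: 0}          # next step index per operation
    for op in schedule:
        if step[op] == 0:
            local[op] = counter          # step 0: read
        else:
            counter = local[op] + 1      # step 1: write
        step[op] += 1
    return counter

# Enumerate all distinct interleavings of the two operations' steps.
schedules = set(permutations([0, 0, 1, 1]))
buggy = [s for s in schedules if run_interleaving(s) != 2]
print(f"{len(buggy)} of {len(schedules)} interleavings lose an update")
```

Exhaustive enumeration like this works only for tiny examples; the algorithmic work this paper builds on concerns searching such spaces when they are astronomically large.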

We present an experience paper on the application of systematic testing to several case studies. We separate the algorithmic advancements of prior work (on searching the large space of interleavings) from the engineering of its tools. The latter was unsatisfactory: the tools were often limited to a narrow domain, hard to maintain, and hard to extend to other domains. In response, we designed Nekara, an open-source cross-platform library for easily building custom systematic testing solutions.
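
As a rough illustration of what such a library provides, the sketch below (Python, with invented names such as `Scheduler` and `create_operation`; Nekara's actual API differs) models operations as generators that yield at explicit scheduling points. A seeded scheduler decides which operation advances at each point, so any failing schedule can be replayed exactly from its seed.

```python
import random

class Scheduler:
    """Toy seeded scheduler: a stand-in for a Nekara-like library."""

    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.ops = []

    def create_operation(self, gen):
        self.ops.append(gen)

    def run(self):
        # At every scheduling point, pick which operation advances next.
        while self.ops:
            op = self.rng.choice(self.ops)
            try:
                next(op)                 # run op up to its next yield
            except StopIteration:
                self.ops.remove(op)      # op finished

state = {"counter": 0}

def increment():
    tmp = state["counter"]               # atomic step 1: read
    yield                                # explicit scheduling point
    state["counter"] = tmp + 1           # atomic step 2: write

for seed in range(100):
    state["counter"] = 0
    sched = Scheduler(seed)
    sched.create_operation(increment())
    sched.create_operation(increment())
    sched.run()
    if state["counter"] != 2:            # lost update: the reads raced
        print(f"bug found; replay deterministically with seed {seed}")
        break
```

The point of this decoupling is that the choice made in `run` can be swapped for a different exploration strategy without touching the instrumented code.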

We show that (1) Nekara can effectively encapsulate state-of-the-art exploration algorithms, which we evaluate on benchmarks from prior work, and (2) Nekara can be applied to a wide variety of scenarios, including existing open-source systems as well as cloud services of a major IT company. Nekara was easy to use, improved testing, and found multiple new bugs.