Tag Archives: conference

DoDOM: Leveraging DOM Invariants for Web 2.0 Application Robustness Testing

Karthik Pattabiraman and Benjamin Zorn, Proceedings of the International Symposium on Software Reliability Engineering (ISSRE), 2010.
[ PDF File | Talk slides ]
You can find the technical report version of the paper here.

Discovering Application-level Insider Attacks Using Symbolic Execution

Karthik Pattabiraman, Zbigniew Kalbarczyk and Ravishankar Iyer, Proceedings of the IFIP International Conference on Information Security (SEC), 2009.
[ PDF File | Talk ]
You can find the technical report version of the paper here.

An End-to-end Approach for the Automatic Derivation of Application-aware Error Detectors

Galen Lyle, Shelley Chen, Karthik Pattabiraman, Zbigniew Kalbarczyk and Ravishankar Iyer, Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2009.
[ PDF File | Talk ]

Detecting and Tolerating Asymmetric Races

Paruj Ratanaworabhan, Martin Burtscher, Darko Kirovski, Rahul Nagpal, Benjamin Zorn and Karthik Pattabiraman, Proceedings of the International Symposium on the Principles and Practice of Parallel Programming (PPoPP), 2009. [ PDF File | Talk ]
You can find the technical report version here.

Modeling Coordinated Checkpointing for Large-Scale Supercomputers

Long Wang, Karthik Pattabiraman, Lawrence Votta, Christopher Vick, Alan Wood, Zbigniew Kalbarczyk and Ravishankar Iyer, Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2005.
[ PDF File | Talk ]

Abstract: Current supercomputing systems consisting of thousands of nodes cannot meet the demands of emerging high-performance scientific applications. As a result, a new generation of supercomputing systems consisting of hundreds of thousands of nodes is being proposed. However, these systems are likely to experience far more frequent failures than today’s systems, and such failures must be tackled effectively. Coordinated checkpointing is a common technique to deal with failures in supercomputers. This paper presents a model of a coordinated checkpointing protocol for large-scale supercomputers, and studies its scalability by considering both the coordination overhead and the effect of failures. Unlike most of the existing checkpointing models, the proposed model takes into account failures during checkpointing and recovery, as well as correlated failures. Stochastic Activity Networks (SANs) are used to model the system, and the model is simulated to study the scalability, reliability, and performance of the system.
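The paper's analysis is built on Stochastic Activity Networks; as a much simpler back-of-the-envelope companion, the Python sketch below Monte Carlo simulates periodic coordinated checkpointing under exponentially distributed failures. The parameter values and the use of Young's approximation for the checkpoint interval are illustrative assumptions, and unlike the paper's model it does not capture correlated failures or failures during recovery:

```python
import math
import random

def useful_fraction(interval, ckpt_cost, restart_cost, node_mtbf, nodes,
                    horizon=2_000_000.0, seed=0):
    """Fraction of wall-clock time spent on useful computation under
    periodic coordinated checkpointing, estimated by simulation."""
    rng = random.Random(seed)
    system_mtbf = node_mtbf / nodes     # a failure anywhere stalls everyone
    clock = useful = 0.0
    next_failure = rng.expovariate(1.0 / system_mtbf)
    while clock < horizon:
        segment = interval + ckpt_cost           # compute, then checkpoint
        if clock + segment <= next_failure:
            useful += interval                   # segment committed
            clock += segment
        else:
            clock = next_failure + restart_cost  # partial work is lost
            next_failure = clock + rng.expovariate(1.0 / system_mtbf)
    return useful / clock

NODE_MTBF = 5 * 365 * 24 * 3600.0   # assume each node fails every 5 years
CKPT, RESTART = 600.0, 1800.0       # assumed checkpoint/restart costs (s)
for n in (1_000, 10_000, 100_000):
    interval = math.sqrt(2 * CKPT * NODE_MTBF / n)  # Young's approximation
    print(f"{n:>7} nodes: utilization "
          f"{useful_fraction(interval, CKPT, RESTART, NODE_MTBF, n):.2f}")
```

Even this crude model reproduces the trend the paper is concerned with: as the node count grows, the system MTBF shrinks toward the cost of a single checkpoint and the achievable utilization collapses.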

Formal Reasoning of Various Categories of Widely Exploited Security Vulnerabilities using Pointer Taintedness Semantics

Shuo Chen, Karthik Pattabiraman, Zbigniew Kalbarczyk and Ravishankar Iyer, Proceedings of the IFIP International Conference on Information Security (SEC), 2004.
[ PDF File | Talk ]

Abstract: This paper is motivated by a low level analysis of various categories of severe security vulnerabilities, which indicates that a common characteristic of many classes of vulnerabilities is pointer taintedness. A pointer is said to be tainted if a user input can directly or indirectly be used as a pointer value. In order to reason about pointer taintedness, a memory model is needed. The main contribution of this paper is the formal definition of a memory model using equational logic, which is used to reason about pointer taintedness. The reasoning is applied to several library functions to extract security preconditions, which must be satisfied to eliminate the possibility of pointer taintedness. The results show that pointer taintedness analysis can expose different classes of security vulnerabilities, such as format string, heap corruption and buffer overflow vulnerabilities, leading us to believe that pointer taintedness provides a unifying perspective for reasoning about security vulnerabilities.
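The paper's contribution is a formal memory model in equational logic; the toy Python interpreter below is only a dynamic-checking analogue of the same idea, assuming a flat memory of (value, taint) pairs. Taint flows through loads and stores, and any use of a tainted value as an address is flagged:

```python
class TaintedPtrError(Exception):
    pass

class Memory:
    """Toy memory in the spirit of the paper's model: every location holds
    a (value, tainted) pair, and addresses are (value, tainted) pairs too.
    This is a runtime check, not the paper's formal semantics."""

    def __init__(self, size=64):
        self.cells = [(0, False) for _ in range(size)]

    def store(self, addr, value, tainted):
        a_val, a_taint = addr
        if a_taint:
            # The core property: a user-derived value used as a pointer.
            raise TaintedPtrError(f"store through tainted pointer {a_val}")
        self.cells[a_val] = (value, tainted)

    def load(self, addr):
        a_val, a_taint = addr
        if a_taint:
            raise TaintedPtrError(f"load through tainted pointer {a_val}")
        return self.cells[a_val]

mem = Memory()
user_input = (37, True)              # anything from the user starts tainted
mem.store((10, False), *user_input)  # storing a tainted *value* is fine
v = mem.load((10, False))            # taint survives the round trip
try:
    mem.load(v)                      # ...but using it as an address is not
except TaintedPtrError as e:
    print("caught:", e)
```

Format string, heap corruption, and buffer overflow exploits all end up violating exactly this property, which is what makes taintedness a unifying lens.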

Samurai: Protecting Critical Data in Unsafe Languages

Karthik Pattabiraman, Vinod Grover and Benjamin G. Zorn, Proceedings of the European Conference on Computer Systems (EuroSys), 2008.
[ PDF File | Talk ]

SymPLFIED: Symbolic Program-Level Fault Injection and Error Detection Framework

Karthik Pattabiraman, Nithin Nakka, Zbigniew Kalbarczyk and Ravishankar Iyer, Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2008.
This paper won the William C. Carter Award for the best paper at the conference.
[ PDF File | Talk ]
You can find the tech report for the conference paper here.

Abstract: This paper introduces SymPLFIED, a program-level framework that allows the specification of arbitrary error detectors and the verification of their efficacy against hardware errors. SymPLFIED comprehensively enumerates all transient hardware errors in registers, memory, and computation (expressed as value errors) that potentially evade detection and cause program failure. The framework uses symbolic execution to abstract the state of erroneous values in the program and model checking to comprehensively find all errors that evade detection. We demonstrate the use of SymPLFIED on tcas, a widely deployed aircraft collision avoidance application. Our results show that the SymPLFIED framework can be used to uncover hard-to-detect corner cases caused by transient errors in programs that may not be exposed by random fault-injection-based validation.
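SymPLFIED reasons symbolically and model checks; the sketch below illustrates the goal with a brute-force concrete stand-in, assuming a toy three-register program, a single corrupted register read per run, and 8-bit error values. It enumerates every fault and reports the ones that slip past a range-check detector yet change the output:

```python
PROGRAM = [
    ("load", "r1", 7),           # r1 = 7
    ("load", "r2", 5),           # r2 = 5
    ("add",  "r0", "r1", "r2"),  # r0 = r1 + r2
    ("check", "r0", 0, 100),     # detector: assert 0 <= r0 <= 100
]

def run(fault_site=None, fault_value=None):
    """Execute PROGRAM, optionally corrupting one register read.
    Returns ('detected', None), ('ok', result) as appropriate."""
    regs = {"r0": 0, "r1": 0, "r2": 0}
    reads = 0

    def read(r):
        nonlocal reads
        reads += 1
        if reads == fault_site:        # inject on the k-th register read
            return fault_value
        return regs[r]

    for op in PROGRAM:
        if op[0] == "load":
            regs[op[1]] = op[2]
        elif op[0] == "add":
            regs[op[1]] = read(op[2]) + read(op[3])
        elif op[0] == "check":
            if not (op[2] <= read(op[1]) <= op[3]):
                return ("detected", None)
    return ("ok", regs["r0"])

golden = run()[1]
escaped = []
for site in (1, 2, 3):                 # every register read in the run
    for value in range(-128, 128):     # every 8-bit corrupted value
        status, result = run(site, value)
        if status == "ok" and result != golden:
            escaped.append((site, value, result))
print(f"{len(escaped)} corruptions evade the range check, e.g. {escaped[0]}")
```

The concrete enumeration above explodes combinatorially with program size; SymPLFIED's use of symbolic error values and model checking is precisely what makes the same exhaustive guarantee tractable for real programs like tcas.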

The Coordinated Science Lab at UIUC published an article about this paper.

Processor-Level Selective Replication

Nithin Nakka, Karthik Pattabiraman and Ravishankar Iyer, Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2007.
[ PDF File | Talk ]

Abstract: Full duplication of an entire application (through spatial or temporal redundancy) would detect many errors that are benign to the application from the perspective of the end-user. Duplication has also been seen to incur up to 30% performance overhead and to require significant additional hardware to synchronize the replicas. To overcome these drawbacks of performance overhead and detection of “benign” faults, we propose a processor-level technique called Selective Replication, which gives the application the capability to choose where in its application stream and to what degree it requires replication. Recent work on static analysis and fault-injection-based experiments on applications reveals that certain variables in the application are critical to its crash- and hang-free execution. If the computation of these variables can be ensured to be error-free, then a high degree of crash/hang coverage can be achieved at a low performance overhead to the application. The Selective Replication technique provides an ideal platform for validating this claim. The technique is compared against the complete duplication provided by current architectural-level techniques. The results show that, with about 59% less overhead than full duplication, selective replication detects 97% of the data errors and 87% of the instruction errors covered by full duplication. It also reduces the detection of errors benign to the final outcome of the application by 17.8% compared to full duplication.
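As a software-level caricature of the idea (the paper's mechanism lives in the processor, not in source code), the Python fragment below replicates only the computation of a crash-critical value and compares the replicas before use. The function names and the choice of which value counts as critical are hypothetical:

```python
def replicated(compute):
    """Run a critical computation twice and compare the results,
    mimicking spatial redundancy for one short instruction sequence.
    Under a real transient fault the two replicas could diverge."""
    a, b = compute(), compute()
    if a != b:                  # a transient error hit one replica
        raise RuntimeError("replica mismatch in critical computation")
    return a

def process(buf):
    out = 0
    for i in range(len(buf)):
        # The index arithmetic is critical: a corrupted idx crashes or
        # silently reads the wrong cell, so it alone is replicated.
        idx = replicated(lambda i=i: (i * 2 + 1) % len(buf))
        # The payload arithmetic is left unreplicated: an error here only
        # perturbs the output value, the kind of "benign" deviation that
        # full duplication would flag even if the user never notices.
        out += buf[idx] * 3
    return out

print(process([4, 8, 15, 16, 23, 42]))
```

The design choice mirrors the abstract: spend the redundancy budget on the values whose corruption causes crashes or hangs, and accept unprotected computation where an error merely perturbs the result.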

Application-based Metrics for Strategic Placement of Detectors

Karthik Pattabiraman, Zbigniew Kalbarczyk and Ravishankar K. Iyer, Proceedings of the International Symposium on Pacific-Rim Dependable Computing (PRDC), 2005.
[ PDF File | Talk ]

Abstract: The goal of this study is to provide low-latency detection and to prevent error propagation due to value errors. This paper introduces metrics to guide the strategic placement of detectors and evaluates (using fault injection) the coverage provided by ideal detectors embedded at program locations selected using the computed metrics. The computation is represented in the form of a Dynamic Dependence Graph (DDG), a directed acyclic graph that captures the dynamic dependencies among the values produced during the course of program execution. The DDG is employed to model error propagation in the program and to derive metrics (e.g., value fanout or lifetime) for detector placement. The coverage of the placed detectors is evaluated using fault injection in real programs, including two large SPEC95 integer benchmarks (gcc and perl). Results show that a small number of detectors, strategically placed, can achieve a high degree of detection coverage.
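A minimal sketch of the two metrics named above, assuming a pre-recorded execution trace of (step, destination, sources) tuples rather than an instrumented program; fanout counts a value's dependent uses and lifetime measures the span from its definition to its last use:

```python
from collections import defaultdict

def ddg_metrics(trace):
    """trace: list of (step, dest, sources) tuples, one per dynamic
    instruction, where dest is the value produced and sources are the
    values it reads. Returns per-value fanout and lifetime."""
    born = {}                   # value -> step at which it was defined
    last_use = {}               # value -> step of its last use
    fanout = defaultdict(int)   # value -> number of dependent values
    for step, dest, sources in trace:
        born[dest] = step
        for s in sources:
            fanout[s] += 1
            last_use[s] = step
    lifetime = {v: last_use.get(v, born[v]) - born[v] for v in born}
    return fanout, lifetime

# A toy trace: v3 feeds many later values (high fanout), so an error in
# v3 propagates widely and it is a good spot for a detector.
trace = [
    (0, "v1", []),
    (1, "v2", []),
    (2, "v3", ["v1", "v2"]),
    (3, "v4", ["v3"]),
    (4, "v5", ["v3"]),
    (5, "v6", ["v3", "v4"]),
]
fanout, lifetime = ddg_metrics(trace)
ranked = sorted(fanout, key=lambda v: (fanout[v], lifetime[v]), reverse=True)
print("place detectors at:", ranked[:2])   # v3 ranks first
```

Checking a value at the top of this ranking intercepts an error before it fans out through the rest of the graph, which is why a handful of well-placed detectors can achieve the high coverage the abstract reports.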