Tag Archives: Trusted Illiac

Processor-level Selective Replication

Nithin Nakka, Karthik Pattabiraman, Zbigniew Kalbarczyk and Ravishankar Iyer, Workshop on Silicon Errors in Logic- System Effects (SELSE), 2006.
[ PDF File | Talk ]

This paper is superseded by the following conference paper.

Abstract: Even though replication has been widely used to provide fault tolerance, the underlying hardware is unaware of the application executing on it. The application cannot choose to use redundancy for a specific code section and run in a normal, unreplicated mode for the rest of the code. In this paper we propose Processor-level Selective Replication, a mechanism to dynamically configure the degree of instruction-level replication according to the application's demands. The application can choose to replicate only code sections that are critical to its crash-free execution, which decreases the performance impact. It is also known that many processor-level faults do not lead to failures observable in the application outcome, so selective replication also decreases the number of false positives.

FPGA Hardware Implementation of Statically Derived Error Detectors

Peter Klemperer, Shelley Chen, Karthik Pattabiraman, Zbigniew Kalbarczyk, Ravishankar K. Iyer, Workshop on Dependable and Secure Nanocomputing (WDSN), 2007.
[ PDF File | Talk ]

This paper is superseded by the following conference paper.

Abstract: Previous software-only error detection techniques have provided high-coverage, low-latency detection, but they suffer significant performance overheads and a large percentage of benign detections. This paper presents an FPGA hardware implementation of application-aware data error detectors. The detectors are automatically derived at compile time and executed in hardware at runtime, minimizing the performance overhead. We implement the static detectors using the Reliability and Security Engine, which provides a standard interface for developing reliability and security hardware modules. An initial proof-of-concept model shows only a 2% performance penalty when the detectors are implemented in hardware.
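To make the idea of a statically derived data error detector concrete, here is a minimal software sketch. It is an analogy only: the names are hypothetical, the invariant here is a simple value range derived from sample values, and the paper derives its detectors via compile-time analysis and runs them in FPGA hardware rather than in software.

```python
# Illustrative sketch (hypothetical, not the paper's implementation): a derived
# detector checks that a critical variable satisfies a previously derived
# invariant each time it is updated; a violation flags a likely data error.

def derive_range_detector(observed_values):
    """Derive a min/max invariant for a variable (assumption: range-based
    detectors stand in for the paper's statically derived checks)."""
    lo, hi = min(observed_values), max(observed_values)

    def detector(value):
        # True means the value is consistent with the derived invariant.
        return lo <= value <= hi

    return detector

# Derive a detector for a loop index known to stay in [0, 99], then check
# a correct value and a value corrupted by a high-order bit flip.
detector = derive_range_detector(range(0, 100))
assert detector(42)            # in-range value passes
assert not detector(1 << 20)   # corrupted value is flagged
```

In the paper this check executes in a hardware module alongside the pipeline, which is why the runtime cost drops to about 2% instead of the overhead of an equivalent software check.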

Hardware Implementation of Information Flow Signatures Derived via Program Analysis

Paul Dabrowski, William Healey, Karthik Pattabiraman, Shelley Chen, Zbigniew Kalbarczyk, and Ravishankar K. Iyer, Workshop on Dependable and Secure Nanocomputing (WDSN), 2008.
[ PDF File | Talk Slides ]

Abstract: We present an architectural solution that provides trustworthy execution of C code that computes critical data, in spite of potential hardware and software vulnerabilities. The technique uses both static compiler-based analysis to generate a signature for an application, or operating system, and dynamic hardware/software signature checking. A prototype implementation of the hardware on a soft processor within an FPGA incurs no performance overhead and about 4% chip area overhead, while the software portion of the technique adds between 1% and 69% performance overhead in our test applications, depending on the selection of critical data.
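The core idea of an information flow signature can be sketched in a few lines of software. This is a hedged analogy with hypothetical names: the set of allowed writers below is assumed to come from static analysis, and the paper performs the runtime check in hardware on a soft processor, not with a Python wrapper.

```python
# Illustrative sketch (hypothetical names, not the paper's implementation):
# the signature records which program locations are statically allowed to
# compute a critical variable; a runtime monitor flags any write that comes
# from outside that set, catching faults or attacks that corrupt the data.

ALLOWED_WRITERS = {"init_balance", "apply_interest"}  # assumed output of static analysis

class MonitoredVar:
    """A critical variable whose writes are checked against its signature."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self.violations = []

    def write(self, value, writer):
        if writer not in ALLOWED_WRITERS:
            # Signature mismatch: record the violation, do not commit the value.
            self.violations.append(writer)
        else:
            self.value = value

balance = MonitoredVar("balance")
balance.write(100, "init_balance")    # legitimate write, committed
balance.write(0, "overflow_gadget")   # unauthorized write, flagged and blocked
assert balance.value == 100
assert balance.violations == ["overflow_gadget"]
```

Doing this check in hardware is what lets the prototype avoid any performance overhead for the checking itself; the 1% to 69% software cost in the abstract comes from instrumenting the computation of the selected critical data.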

Processor-Level Selective Replication

Nithin Nakka, Karthik Pattabiraman and Ravishankar Iyer, Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2007.
[ PDF File | Talk ]

Abstract: Full duplication of an entire application (through spatial or temporal redundancy) would detect many errors that are benign to the application from the perspective of the end-user. Duplication has also been seen to incur up to 30% performance overhead and to require significant additional hardware to synchronize the replicas. In order to overcome the drawbacks of performance overhead and detection of “benign” faults, we propose a processor-level technique called Selective Replication, which provides the application the capability to choose where in its instruction stream and to what degree it requires replication. Recent work on static analysis and fault-injection-based experiments on applications reveals that certain variables in the application are critical to its crash- and hang-free execution. If it can be ensured that the computation of these variables is error-free, then a high degree of crash/hang coverage can be achieved at a low performance overhead to the application. The Selective Replication technique provides an ideal platform for validating this claim. The technique is compared against complete duplication as provided in current architectural-level techniques. The results show that, with about 59% less overhead than full duplication, selective replication detects 97% of the data errors and 87% of the instruction errors that were covered by full duplication. It also reduces the detection of errors benign to the final outcome of the application by 17.8% compared with full duplication.
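The duplicate-and-compare idea behind selective replication can be illustrated with a short software analogy. This is a sketch under stated assumptions, not the paper's mechanism: the paper replicates individual instructions inside the processor, whereas here a whole function call stands in for a critical code section, and a second execution stands in for the hardware replica.

```python
# Illustrative sketch (software analogy; the paper implements replication at
# the processor level): run only sections marked critical twice and compare
# the results, leaving non-critical code unreplicated to save overhead.

def selectively_replicate(fn, critical=True):
    """Execute fn; if it is marked critical, execute it again and compare.
    A mismatch between replicas indicates a transient error."""
    result = fn()
    if critical:
        replica = fn()
        if result != replica:
            raise RuntimeError("replica mismatch: transient error detected")
    return result

# Critical section: pays the duplication cost and gets error detection.
assert selectively_replicate(lambda: 2 + 2, critical=True) == 4

# Non-critical section: runs once, unprotected, at full speed.
assert selectively_replicate(lambda: sum(range(10)), critical=False) == 45
```

The paper's result is that confining replication to the sections that feed crash- and hang-critical variables keeps most of full duplication's detection coverage while avoiding most of its cost.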