Introduction
BIST is a design-for-testability technique that places the testing functions physically with the circuit under test (CUT), as illustrated in Figure 40.1 [1]. The basic BIST architecture requires the addition of three hardware blocks to a digital circuit: a test pattern generator, a response analyzer, and a test controller. The test pattern generator generates the test patterns for the CUT.
Examples of pattern generators are a ROM with stored patterns, a counter, and a linear feedback shift register (LFSR). A typical response analyzer is a comparator with stored responses or an LFSR used as a signature analyzer; it compacts the test responses and analyzes them to determine the correctness of the CUT. A test control block is necessary to activate the test and analyze the responses, and, more generally, several test-related functions can be executed through a test controller circuit.
Fig. 40.1 A Typical BIST Architecture
As shown in Figure 40.1, the wires from the primary inputs (PIs) to the MUX and from the circuit outputs to the primary outputs (POs) cannot be tested by BIST. In normal operation, the CUT receives its inputs from other modules and performs the function for which it was designed. During test mode, a test pattern generator circuit applies a sequence of test patterns to the CUT, and the test responses are evaluated by an output response compactor. In the most common type of BIST, test responses are compacted in the output response compactor to form (fault) signatures. The response signatures are compared with reference golden signatures generated or stored on-chip, and the error signal indicates whether the chip is good or faulty. Four primary parameters must be considered in developing a BIST methodology for embedded systems; these correspond to the design parameters for the on-line testing techniques discussed in an earlier chapter [2].
Issues for BIST
Benefits of BIST
BIST can be used for non-concurrent, on-line testing of the logic and memory parts of a system [2]. It can readily be configured for event-triggered testing, in which case the BIST control can be tied to the system reset so that testing occurs during system start-up or shutdown. BIST can also be designed for periodic testing with low fault latency. This requires incorporating a testing process into the CUT that guarantees the detection of all target faults within a fixed time.
On-line BIST is usually implemented with the twin goals of complete fault coverage and low fault latency. Hence, the test generator (TG) and response monitor (RM) are generally designed to guarantee coverage of specific fault models, minimum hardware overhead, and a reasonable test set size. These goals are met by different techniques in different parts of the system.
TG and RM are often implemented by simple, counter-like circuits, especially linear feedback shift registers (LFSRs) [3]. The LFSR is simply a shift register formed from standard flip-flops, with the outputs of selected flip-flops being fed back (modulo-2) to the shift register's input. When used as a TG, an LFSR is set to cycle rapidly through a large number of its states. These states, whose choice and order depend on the design parameters of the LFSR, define the test patterns. In this mode of operation, an LFSR is a source of (pseudo) random tests that are, in principle, applicable to any fault and circuit types. An LFSR can also serve as an RM by counting (in a special sense) the responses produced by the tests. The LFSR RM's final contents after a sequence of test responses has been applied form a fault signature, which can be compared to a known or generated good signature to see whether a fault is present. Ensuring that the fault coverage is sufficiently high and the number of tests is sufficiently low are the main problems with random BIST methods. Two general approaches have been proposed to preserve the cost advantages of LFSRs while making the generated test sequence much shorter. Test points can be inserted in the CUT to improve controllability and observability; however, they can also result in performance loss. Alternatively, some determinism can be introduced into the generated test sequence, for example, by inserting specific "seed" tests that are known to detect hard faults.
A typical BIST architecture using an LFSR is shown in Figure 40.2 [4]. Since the output patterns of the LFSR are time-shifted and repeated, they become correlated; this reduces the effectiveness of the fault detection. Therefore, a phase shifter (a network of XOR gates) is often used to decorrelate the output patterns of the LFSR. The response of the CUT is usually compacted by a multiple-input signature register (MISR) into a small signature, which is compared with a known fault-free signature to determine whether the CUT is faulty.
Fig. 40.2 A generic BIST architecture based on an LFSR, an MISR, and a phase shifter
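As a concrete illustration of the compaction step, the following is a minimal software sketch of an external-XOR MISR. The tap positions and response values below are illustrative assumptions, not taken from the figure:

```python
# Minimal sketch of an external-XOR MISR (multiple-input signature register).
# Tap positions and response data are illustrative assumptions.

def misr_update(state, response_bits, taps, n):
    """One MISR clock: shift with modulo-2 feedback, then XOR one
    CUT output bit into each stage."""
    feedback = 0
    for t in taps:                          # modulo-2 sum of tapped stages
        feedback ^= (state >> t) & 1
    state = ((state << 1) | feedback) & ((1 << n) - 1)
    for i, bit in enumerate(response_bits): # one response bit per stage
        state ^= (bit & 1) << i
    return state

# Compact four clock cycles of an 8-bit CUT response into one signature.
n, taps = 8, (7, 5, 4, 3)
signature = 0
for response in [[1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0, 1],
                 [1, 1, 0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 1, 1, 1, 0]]:
    signature = misr_update(signature, response, taps, n)
print(f"signature = {signature:08b}")       # compared with the golden signature
```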
BIST Test Pattern Generation Techniques
Stored patterns
An automatic test pattern generation (ATPG) and fault simulation technique is used to generate the test patterns. A good test pattern set is stored in a ROM on the chip. When BIST is activated, the test patterns are applied to the CUT and the responses are compared with the corresponding stored fault-free responses. Although stored-pattern BIST can provide excellent fault coverage, it has limited applicability due to its high area overhead.
Exhaustive patterns
Exhaustive pattern BIST eliminates the test generation process and has very high fault coverage. To test an n-input block of combinational logic, it applies all 2^n possible input patterns to the block. Even with high clock speeds, the time required to apply the patterns may make exhaustive pattern BIST impractical for a circuit with n > 20.
Fig. 40.3 Exhaustive pattern generator
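A binary counter is the natural exhaustive pattern generator. The minimal sketch below, with an assumed test clock rate, shows both the generation scheme and why n > 20 becomes impractical:

```python
# Minimal sketch: a binary counter as an exhaustive pattern generator.

def exhaustive_patterns(n):
    """Yield all 2^n input patterns as n-bit lists, MSB first."""
    for value in range(2 ** n):
        yield [(value >> i) & 1 for i in reversed(range(n))]

for pattern in exhaustive_patterns(3):
    print(pattern)                  # 000, 001, ..., 111 (8 patterns)

# At an assumed 100 MHz test clock, n = 20 needs about 10 ms,
# while n = 40 would need over 3 hours -- hence the n > 20 caveat.
```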
Pseudo-exhaustive patterns
In pseudo-exhaustive pattern generation, the circuit is partitioned into several smaller sub-circuits based on the output cones of influence, possibly overlapping blocks with fewer than n inputs. Then all possible test patterns are exhaustively applied to each sub-circuit. The main goal of pseudo-exhaustive testing is to obtain the same fault coverage as exhaustive testing while minimizing the testing time. Since close to 100% fault coverage is guaranteed, there is no need for fault simulation in exhaustive or pseudo-exhaustive testing. However, such a method requires extra design effort to partition the circuit into pseudo-exhaustively testable sub-circuits. Moreover, the delivery of test patterns and test responses is also a major consideration. The added hardware may also increase the overhead and decrease the performance.
Fig. 40.4 Pseudo-exhaustive pattern generator
Circuit partitioning for pseudo-exhaustive pattern generation can be done by cone segmentation, as shown in Figure 40.4. Here, a cone is defined as the fan-in of an output pin. If the size of the largest cone is K, the generated patterns must guarantee that the bits applied to any K inputs contain all possible combinations. In Figure 40.4, the circuit is divided into two cones based on the cones of influence. For cone 1, the PO h is influenced by inputs X1, X2, X3, X4, and X5, while for cone 2 the PO f is influenced by inputs X4, X5, X6, X7, and X8. Therefore, the total number of test patterns needed for exhaustive testing of cone 1 and cone 2 is (2^5 + 2^5) = 64, whereas the original circuit with 8 inputs requires 2^8 = 256 test patterns for an exhaustive test.
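The cone arithmetic can be checked directly. The following sketch enumerates the exhaustive patterns per cone, using the input assignments given above for Figure 40.4:

```python
# Sketch of pseudo-exhaustive generation for the two cones of Figure 40.4.
from itertools import product

cone_1 = ["X1", "X2", "X3", "X4", "X5"]   # drives PO h
cone_2 = ["X4", "X5", "X6", "X7", "X8"]   # drives PO f

def cone_patterns(inputs):
    """All 2^k exhaustive patterns over one cone's k inputs."""
    return [dict(zip(inputs, bits))
            for bits in product((0, 1), repeat=len(inputs))]

total = len(cone_patterns(cone_1)) + len(cone_patterns(cone_2))
print(total)    # 2^5 + 2^5 = 64, versus 2^8 = 256 for the full circuit
```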
Pseudo-Random Pattern Generation
A string of 0s and 1s is called a pseudo-random binary sequence when the bits appear to be random in the local sense but are in some way repeatable. The linear feedback shift register (LFSR) is the pattern generator most commonly used for pseudo-random pattern generation. In general, this requires more patterns than deterministic ATPG but fewer than exhaustive testing. In contrast with the other methods, pseudo-random pattern BIST may require a long test time and necessitate evaluation of fault coverage by fault simulation. This pattern type, however, has the potential for lower hardware and performance overheads and less design effort than the preceding methods. In pseudo-random test patterns, each bit has an approximately equal probability of being a 0 or a 1. The number of patterns applied is typically of the order of 10^3 to 10^7 and is related to the circuit's testability and the fault coverage required.
Linear feedback shift register reseeding [5] is an example of a BIST technique that is based on controlling the LFSR state. LFSR reseeding may be static, that is, the LFSR stops generating patterns while loading seeds, or dynamic, that is, test generation and seed loading proceed simultaneously. The length of the seed can be either equal to the size of the LFSR (full reseeding) or less than that of the LFSR (partial reseeding). In [5], a dynamic reseeding technique that allows partial reseeding is proposed to encode test vectors. A set of linear equations is solved to obtain the seeds, and the test vectors are ordered to facilitate the solution of this set of linear equations.
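A minimal sketch of static reseeding is shown below: the LFSR is loaded with a stored seed, clocked for a fixed number of cycles, and then loaded with the next seed. The seeds, taps, and cycle count here are illustrative assumptions; in [5] the seeds are obtained by solving the linear equations mentioned above.

```python
# Sketch of static LFSR reseeding: run the LFSR from each stored seed
# for a fixed number of cycles. Seeds, taps, and cycle count are
# illustrative; in practice the seeds encode deterministic test cubes.

def lfsr_run(seed, taps, n, cycles):
    """Return `cycles` successive LFSR states starting from `seed`."""
    state, patterns = seed, []
    for _ in range(cycles):
        patterns.append(state)
        feedback = 0
        for t in taps:                      # modulo-2 feedback
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << n) - 1)
    return patterns

for seed in (0b1011, 0b0110):               # hypothetical stored seeds
    for p in lfsr_run(seed, taps=(3, 0), n=4, cycles=5):
        print(f"{p:04b}")
```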
Fig. 40.5 Standard Linear Feedback Shift Register
Figure 40.5 shows a standard, external exclusive-OR linear feedback shift register. There are n flip-flops (Xn-1, ..., X0), and this is called an n-stage LFSR. It can be a near-exhaustive test pattern generator, as it cycles through 2^n - 1 states, excluding the all-0 state. Such an LFSR is known as a maximal-length LFSR. Figure 40.6 shows the implementation of an n-stage LFSR with an actual digital circuit [1].
Fig. 40.6 n-stage LFSR implementation with actual digital circuit
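The behavior of such a maximal-length LFSR can be sketched in a few lines. The example below uses the primitive polynomial x^4 + x + 1 (an illustrative choice) and confirms that a 4-stage register cycles through 2^4 - 1 = 15 nonzero states:

```python
# Sketch of the external-XOR (Fibonacci) LFSR of Figure 40.5 using the
# primitive polynomial x^4 + x + 1 (feedback from stages 3 and 0).

def lfsr_cycle(seed, taps, n):
    """Collect successive states until the LFSR returns to its seed."""
    state, states = seed, []
    while True:
        states.append(state)
        feedback = 0
        for t in taps:                      # modulo-2 feedback
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << n) - 1)
        if state == seed:
            return states

states = lfsr_cycle(seed=0b0001, taps=(3, 0), n=4)
print(len(states))      # 15 = 2^4 - 1: maximal length, all-0 state excluded
```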
Pattern Generation by Counter
In a BIST pattern generator based on a folding counter, the properties of the folding counter are exploited to find the seeds needed to cover a given set of deterministic patterns. Width compression is combined with reseeding to reduce the hardware overhead. In a two-dimensional test data compression technique, an LFSR and a folding counter are combined for scan-based BIST: LFSR reseeding reduces the number of bits to be stored for each pattern (horizontal compression), and folding counter reseeding reduces the number of patterns (vertical compression).
Weighted Pseudo-random Pattern Generation
Bit-flipping [9], bit-fixing, and weighted random BIST [1,8] are examples of techniques that rely on altering the patterns generated by an LFSR to embed deterministic test cubes. A hybrid between pseudo-random and stored-pattern BIST, weighted pseudo-random pattern BIST is effective for dealing with hard-to-detect faults. In a pseudo-random test, each input bit has a probability of 1/2 of being either a 0 or a 1. In a weighted pseudo-random test, the probabilities, or input weights, can differ. The essence of weighted pseudo-random testing is to bias the probabilities of the input bits so that the tests needed for hard-to-detect faults are more likely to occur. One approach uses software that determines one or more weight sets based on a probabilistic analysis of the hard-to-detect faults. Another approach uses a heuristic-based initial weight set followed by additional weight sets produced with the help of an ATPG system. The weights are either realized by logic or stored in an on-chip ROM. With these techniques, researchers obtained fault coverage over 98% for 10 designs, the same as the coverage of deterministic test vectors.
In a hybrid BIST method based on weighted pseudo-random testing, a weight of 0, 1, or μ (unbiased) is assigned to each scan chain in the CUT. The weight sets are compressed and stored on the tester. During test application, an on-chip lookup table is used to decompress the data from the tester and generate the weight sets. To reduce the hardware overhead, scan cells are carefully reordered and a special ATPG approach is used to generate suitable test cubes.
Fig. 40.7 Weighted pseudo-random pattern generator
Fig. 40.8 Weighted pseudo-random patterns
Figure 40.7 shows a weighted pseudo-random pattern generator implemented with programmable probabilities of generating zeros and ones at the PIs. An LFSR generates patterns with equal probabilities of 1s and 0s. As shown in Figure 40.8(a), if a 3-input AND gate is placed on the LFSR outputs, the probability of a 1 becomes 0.125; if a 2-input OR gate is used, the probability becomes 0.75. Alternatively, one can use cellular automata to produce patterns of desired weights, as shown in Figure 40.8(b).
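The weighting logic of Figure 40.8(a) is easy to verify empirically. In the sketch below, Python's random bits stand in for equiprobable LFSR output bits; real hardware would tap LFSR stages instead:

```python
# Sketch of weighting as in Figure 40.8(a): combining equiprobable
# pseudo-random bits through AND/OR gates to bias the 1-probability.
import random

def unbiased():                 # stand-in for one LFSR output bit, P(1) = 1/2
    return random.getrandbits(1)

def weighted_and3():            # 3-input AND: P(1) = (1/2)^3 = 0.125
    return unbiased() & unbiased() & unbiased()

def weighted_or2():             # 2-input OR: P(1) = 1 - (1/2)^2 = 0.75
    return unbiased() | unbiased()

N = 100_000
print(sum(weighted_and3() for _ in range(N)) / N)   # ~0.125
print(sum(weighted_or2() for _ in range(N)) / N)    # ~0.75
```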
Cellular Automata for Pattern Generation
Cellular automata (CA) are excellent for pattern generation because they have a better randomness distribution than LFSRs, with no shift-induced bit-value correlation. A cellular automaton is a collection of cells with regular connections. Each pattern generator cell has a few logic gates and a flip-flop, and is connected only to its local neighbors. If Ci is the state of the current CA cell, and Ci-1 and Ci+1 are the states of its neighboring cells, the next state of cell Ci is determined by (Ci-1, Ci, Ci+1). The cell is replicated to produce the cellular automaton. The two commonly used CA structures are shown in Figure 40.9.
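A common realization is a one-dimensional hybrid CA built from rule 90 (next state = left XOR right) and rule 150 (next state = left XOR self XOR right). The sketch below assumes null boundary cells and an illustrative per-cell rule assignment, which need not match Figure 40.9:

```python
# Sketch of a one-dimensional hybrid 90/150 cellular automaton.
# Seed and per-cell rule assignment are illustrative assumptions.

def ca_step(cells, rules):
    """One CA clock: each cell updates from its left/right neighbors."""
    n, nxt = len(cells), []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0          # null boundary
        right = cells[i + 1] if i < n - 1 else 0     # null boundary
        if rules[i] == 90:
            nxt.append(left ^ right)                 # rule 90
        else:
            nxt.append(left ^ cells[i] ^ right)      # rule 150
    return nxt

cells, rules = [0, 0, 0, 1], [90, 150, 90, 150]
for _ in range(6):
    print(cells)
    cells = ca_step(cells, rules)
```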
In addition to an LFSR, a straightforward way to compress the test response data and produce a fault signature is to use an FSM or an accumulator. However, FSM hardware overhead and accumulator aliasing are difficult parameters to control. Keeping the hardware overhead acceptably low while reducing aliasing is the main difficulty in RM design.
Comparison of Test Generation Strategies
In implementing a BIST strategy, the main issues are fault coverage, hardware overhead, test time overhead, and design effort. These four issues have a complicated relationship. Table 40.1 summarizes the characteristics of the test strategies discussed above in terms of these four issues.
Table 40.1 Comparison of different test strategies
| Test Generation Methodology | Fault Coverage | Hardware Overhead | Test Time Overhead | Design Effort |
|---|---|---|---|---|
| Stored Pattern | High | High | Short | Large |
| Exhaustive | High | Low | Long | Small |
| Pseudo-exhaustive | High | High | Medium | Large |
| Pseudo-random | Low | Low | Long | Small |
| Weighted Pseudo-random | Medium | Medium | Long | Medium |
BIST Response Compression/Compaction Techniques
During BIST, a large amount of response data from the CUT is applied to the response monitor (RM). For example, for a circuit with 200 outputs and 5 million random patterns, the CUT delivers 1 billion response bits to the RM. This is not manageable in practice, so it is necessary to compact this enormous amount of circuit response into a manageable size that can be stored on the chip. The response analyzer compresses a very long test response into a single word called a signature. The signature is then compared with a prestored golden signature obtained from the fault-free response using the same compression mechanism. If the signature matches the golden copy, the CUT is regarded as fault-free; otherwise, it is faulty. Response analysis methods include ones count, transition count, syndrome count, and signature analysis.
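The simplest of these methods reduce the response stream to a single count. The sketch below illustrates ones count and transition count on an assumed 8-bit response stream:

```python
# Minimal sketches of two simple compaction schemes named above.

def ones_count(response_bits):
    """Signature = number of 1s in the response stream."""
    return sum(response_bits)

def transition_count(response_bits):
    """Signature = number of 0->1 and 1->0 transitions in the stream."""
    return sum(a != b for a, b in zip(response_bits, response_bits[1:]))

resp = [1, 0, 0, 1, 1, 1, 0, 1]    # assumed response stream
print(ones_count(resp))            # 5
print(transition_count(resp))      # 4
```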
Compression: A reversible (lossless) process used to reduce the size of the response. It is difficult to implement in hardware.
Compaction: An irreversible (lossy) process used to reduce the size of the response.
Signature analysis: Compaction of the good-machine response into a good-machine signature. The actual signature is generated during testing and compared with the good-machine signature.
Aliasing: Compression is like a function that maps a large input space (the responses) onto a small output space (the signatures); it is a many-to-one mapping. Errors may occur in the input bit stream, so a faulty response may have a signature that matches the golden signature, and the circuit is then reported as fault-free. Such a situation is referred to as aliasing or masking. The aliasing probability is the probability that a faulty response is treated as fault-free. It is defined as follows:
Let us assume that the possible input response streams are uniformly distributed over the possible signature values. For an m-bit response compacted into an r-bit signature, there are 2^m possible input streams, 2^r possible signatures, and 2^(m-r) input streams that map to any given signature. Then the aliasing or masking probability is

P(M) = (number of erroneous inputs that map to the golden signature) / (number of faulty input responses) = (2^(m-r) - 1) / (2^m - 1) ≈ 2^(-r), for m >> r.
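A quick numerical check of this formula (with assumed values m = 20 and r = 8) confirms that the aliasing probability is governed almost entirely by the signature length r:

```python
# Numerical check of the aliasing probability; m and r are assumed values.
m, r = 20, 8                                # m-bit response, r-bit signature
p_alias = (2 ** (m - r) - 1) / (2 ** m - 1)
print(p_alias)                              # ~0.003905
print(2 ** -r)                              # 0.00390625, the 2^-r approximation
```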
The aliasing probability is a major consideration in response analysis. Due to the many-to-one mapping property of compression, it is difficult to perform diagnosis after compression; the diagnostic resolution is therefore very poor. In addition to the aliasing probability, hardware overhead and hardware compatibility are also important issues. Here, hardware compatibility refers to how well the BIST hardware can be incorporated into the CUT or the existing DFT structure.