The usefulness and success of today's simple switch-level transistor models in TAs are based on extensive use and a rather limited focus on determining only time delay in digital circuits. For more detailed analyses requiring accurate knowledge of the signal shape, the authoritative comparison is still a SPICE run, which uses complicated SPICE equivalent circuits for transistors. Even if limited to digital circuits, transistor models will have to continue to evolve with shrinking minimum layout geometries, if for no other reason than to know when a new physical effect may start to affect DSM VLSI circuits in unexpected ways.
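As a point of reference (a standard first-order estimate, not a result from this chapter), the switch-level view reduces a conducting transistor to an effective on-resistance R_eff charging or discharging a load capacitance C_L, giving a propagation delay of roughly

t_pd ≈ 0.69 · R_eff · C_L   (0.69 = ln 2, the 50% point of an RC transient).

For delay estimation this single RC product is often sufficient; reproducing the actual signal shape, by contrast, requires the full nonlinear device equations that SPICE evaluates.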
Fortunately, current switch-level-based TAs work just fine for now for delay analysis. We have also determined that it is the interconnects that dominate the timing of DSM VLSI circuits. Does that mean we should look only at the interconnects when we optimize the layout geometry for minimum delay and power consumption?
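To see why the wires take over, consider the commonly used first-order estimate for a distributed RC line (again an illustration, not a result from this chapter): a wire of length L with resistance r and capacitance c per unit length has a 50% delay of roughly

t_wire ≈ 0.4 · r · c · L².

Since this delay grows with the square of the wire length while gate delays keep shrinking with the technology, long, thin DSM interconnects quickly become the dominant term in the total path delay.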
This question is prompted largely by the fact that the commercial solutions available at present achieve the above goals only by modifying transistors.
With the push towards increased layout density for all the previously mentioned reasons, the geometrical separation between some of the elements in a layout is often smaller than necessary for optimal performance. Needless to say, the other key fallout from such a layout is a lower yield than what could optimally be achieved for a VLSI chip. Lately, there have also been allusions to this in the literature. Optimizing yield is clearly a big financial issue.
We have reviewed some of the issues dealing with optimizing the physical layout of DSM VLSI circuits. The focus of this optimization was primarily performance optimization and managing runaway power consumption, since these VLSI chips pack more and more functionality into smaller areas. It is self-evident that for something as complicated as fabricating a multimillion-transistor VLSI chip, there have to be many process steps along the way that could be optimized.
Listening to one speaker after another in the EDA field gives the distinct impression that all design problems will be solved through more intelligent synthesis and place-and-route techniques. To put it diplomatically, this is tunnel vision. Just as doctors should examine the entire patient, the VLSI design community should examine the entire design process.
In this chapter, we have specified what we mean by front-end: everything up to and including place and route. The back-end addresses a VLSI chip design after everything has been put in place at the GDS2 level. Back-end layout manipulations literally amount to what could be called “massaging” the layout. Mathematicians know that “massaging” equations can do a lot of good, even in an exact science such as mathematics. The same is true for VLSI chips, especially if they have been fabricated with a DSM technology. Extensive research has already demonstrated that for an otherwise well laid out VLSI chip, time delay and power consumption reductions of over 50% can be achieved merely with back-end layout manipulations. This is substantial and cannot be ignored in the long or short run.
When we discuss design flows in Chapter 7, we talk about levels of abstraction in the design process. We suggest that, while a very high level of abstraction yields great benefits for the design process in terms of complexity management, its direct control over the physical aspects of a layout tends to be relatively weak. In the past, this presented few obstacles to the chance of first-time success for a VLSI chip design. Because of the importance of layout parameters, these issues have to be taken increasingly seriously in design disciplines such as synthesis and timing-driven layout, and especially during the floorplanning phase, in DSM technologies.
At present, there are some solutions in the industry that seek optimal dimensioning of transistors in a VLSI circuit as a post-layout optimization step. These tools focus only on transistor sizing. We discuss some of these tools in a separate section on available industrial solutions.
A key measure of successful DSM VLSI chip design and manufacturing is the percentage of defect-free chips at the end of the process line. No matter how well chips are designed to meet performance specifications, if the percentage of good chips coming off the processing line is too low in comparison to the bad, nonworking ones, it is a losing proposition.
Of course, we should try to increase manufacturing yield without sacrificing performance, if possible. We will see that some layout dimensions can, in fact, often be enlarged to improve yield without any loss in performance. Alternatively, a trade-off between yield and a minor sacrifice in performance might be acceptable.
With increased density, the probability that a given defect is large enough to cause a failure is considerably increased. Other important contributing factors are the large sizes of today's chips and the enormous number of devices now placed on a chip. The discussion here is not intended to be comprehensive. It is focused on just some of the design-related steps that can be taken to improve manufacturing yield.
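A standard first-order model (quoted here for illustration, not taken from this chapter) makes the effect quantitative: the Poisson yield formula

Y ≈ exp(−A · D0),

where A is the defect-sensitive (critical) area of the chip and D0 the average defect density of the process. Because the area sits in the exponent, a larger or more densely packed chip loses yield multiplicatively, which is exactly the trend described above.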
Depending on the nature of a chip, different approaches can be taken to increase its yield. As in other design disciplines, redundancy has often been viewed as a good way to overcome the debilitating effects of a failure of certain components. Redundancy is used to bring about “self-repair” of a failing system that is in use at the time of failure. For chips containing one or more defects after manufacturing, the defects can be bypassed by designing redundancy into the chip. Such redundancies are sometimes referred to as “swapping redundancies” [15]. As the name implies, the design must allow an operable part of a structure to be substituted, through exchange, for a failing one.
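To make the “swapping” idea concrete, the following short sketch (in Python, purely illustrative; the class and names are ours and do not refer to any specific tool or product) shows the address-remapping mechanism behind row-repair redundancy in a memory array: rows found defective at manufacturing test are transparently replaced by spare rows.

    # Minimal sketch of "swapping redundancy" for a memory array: rows found
    # defective at manufacturing test are remapped to spare rows, so every later
    # access to a bad row is transparently redirected to a good one.
    class RepairableArray:
        def __init__(self, num_rows, num_spares):
            # Spare rows live outside the normal address space.
            self.spares = list(range(num_rows, num_rows + num_spares))
            self.remap = {}  # defective row -> spare row

        def repair(self, defective_rows):
            """Assign each defective row to an unused spare; False if spares run out."""
            for row in defective_rows:
                if not self.spares:
                    return False  # more defects than spares: the chip cannot be repaired
                self.remap[row] = self.spares.pop(0)
            return True

        def physical_row(self, logical_row):
            """Address translation applied on every access after repair."""
            return self.remap.get(logical_row, logical_row)

    array = RepairableArray(num_rows=1024, num_spares=4)
    print(array.repair([17, 830]))    # True: both bad rows could be swapped out
    print(array.physical_row(17))     # 1024, the first spare row
    print(array.physical_row(18))     # 18, a good row is left untouched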
Redundancy can work well for highly repetitive structures. Any array-type structure could be a suitable candidate. A very good example is the often very large, very densely laid out block of embedded static/dynamic RAM and/or flash memory added in S-o-C designs.
After all, adding on the order of 256 Mbits of memory, which is possible in a 0.18 micron technology, represents a nonnegligible amount of defect-density exposure. We discuss the S-o-C approach in conjunction with Hard IP reuse in Chapter 5.
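To put the 256 Mbit figure into perspective, consider a rough and purely illustrative calculation (the numbers are assumptions, not data from this chapter): if such an embedded memory occupies on the order of 1 cm² in a 0.18 micron technology and the process has a defect density of, say, 0.5 defects per cm², the Poisson model quoted earlier gives a yield of about exp(−0.5) ≈ 60% for the memory block alone, before the logic is even counted. Numbers of this magnitude are precisely why spare rows and columns are designed into large embedded memories.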
What other design-related steps can be taken to increase manufacturing yield?
We will now discuss how compaction can be used for just that purpose, covering the following three techniques that improve manufacturing yield by using compaction: