HARD IP MIGRATION WITH A PROVEN SYSTEM AND METHODOLOGY

The discussions in this chapter provide an overview of what a typical retargeting system can do. While the discussion is based on an actual, commercially available system, it is kept as general as reasonably possible. Unlike some other VLSI chip design tools, such as simulation or synthesis, the choice of fully functional retargeting systems is rather limited, and permission to write about them could be obtained for only one. Nevertheless, what can traditionally be done when migrating Hard IP is largely covered here. This chapter does not cover some of the latest advances, such as emerging capabilities in hierarchy maintenance and some of the newest algorithms in layout optimization; these more specific subjects are covered separately in later chapters.

For now, the goal is to establish an understanding of the most important functions that should be included in a retargeting environment.

2.1 HARD IP REUSE, LINEAR SHRINK OR COMPACTION?

Effective and technically sophisticated retargeting and postlayout optimization methodologies are very much at the heart of reusing existing VLSI designs in the form of Hard IP. However, it is potentially misleading both to call retargeted Hard IP truly "hard" and to call "Hard IP reuse" a completely new idea.

While the existing data describes actual laid-out, hard silicon, we will show that the physical layout dimensions of Hard IP can be manipulated and optimized, emphasizing the features that matter most for delivering substantial improvements in performance and yield. Also, while "Hard IP reuse" may at first glance appear to be a completely new approach to reusing existing designs, it is not. A simplistic form of Hard IP reuse, the "linear shrink," has been used, though little discussed, since long before IP reuse became such a popular notion.

Linear shrink has always been, and still is, practiced extensively. Many highly desirable circuits that designers are unwilling to abandon are adjusted to newer processes by means of a linear shrink. The term linear shrink implies a reduction; however, for some applications there may be interest in a linear enlargement. Unless specifically stated, we will generally assume a reduction in layout dimensions, consistent with trying to push the limits of performance.

Linear shrink simply means adjusting all the layout geometries on all the layers according to some proportionality factor until one of the layout dimensions on the chip reaches the smallest allowable value. This is sometimes called an "optical shrink," for obvious reasons. The process is, of course, very straightforward; it can be done very quickly and with minimum risk. A linear shrink offers the advantage of minimally "disturbing" the geometrical proportions of a proven layout of a working chip. Maintaining the geometrical proportions of a physical layout implies the reasonable underlying assumption that the relative timing relationships of the shrunken chip are also maintained. Accordingly, the circuit should still work after a linear shrink, but faster.
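As a rough illustration, and not drawn from any particular retargeting tool, the following sketch (in Python) shows how the limiting factor of a uniform shrink might be computed: the factor is bounded by whichever feature first reaches its minimum allowed dimension, and that one factor is then applied to every geometry on every layer. The feature names and rule values are hypothetical.

    # Minimal sketch of a uniform ("optical") linear shrink.
    # Feature dimensions and minimum-rule values are hypothetical examples (microns).
    current_dims = {            # drawn dimensions in the proven layout
        "poly_width":   0.50,
        "metal1_width": 0.60,
        "contact_size": 0.45,
    }
    new_process_minimums = {    # smallest allowed values in the target process
        "poly_width":   0.35,
        "metal1_width": 0.50,
        "contact_size": 0.40,
    }

    # The shrink factor cannot go below the ratio required by any single feature;
    # the first feature to hit its minimum limits the entire chip.
    limiting_factor = max(
        new_process_minimums[f] / current_dims[f] for f in current_dims
    )
    shrunk_dims = {f: d * limiting_factor for f, d in current_dims.items()}

    print(f"limiting shrink factor: {limiting_factor:.3f}")   # 0.889 in this example
    print(shrunk_dims)

In this made-up example the contact limits the shrink at roughly 0.89, even though the poly width could legally have been reduced much further; that is exactly the limitation discussed next.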

Linear shrink has been very useful to the engineering community for a long time. However, for DSM technologies, a linear shrink lacks the required flexibility and generally yields improvements that are insufficient compared to the substantial investments in equipment necessary to improve processing capabilities. It is no longer adequate to perform a shrink until one of the chip's critical dimensions hits the allowable minimum, because once this first dimension reaches the minimum, no other dimension can be reduced any further either.

Although some companies push the limits of linear shrinks by using "creative linear shrinks," applying different proportionality factors to different features on the chip, this approach only somewhat delays the inevitable. The deeper processing technologies move into the DSM regime, the more inadequate a "creative linear shrink" becomes.
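A "creative linear shrink" can be sketched in the same hypothetical terms: each feature class gets its own proportionality factor, bounded by its own minimum rule, rather than one factor for the whole chip. The values are again purely illustrative.

    # Hypothetical per-feature ("creative") shrink: each feature class is scaled
    # by its own factor, limited only by its own minimum rule (values in microns).
    current_dims = {"poly_width": 0.50, "metal1_width": 0.60, "contact_size": 0.45}
    new_minimums = {"poly_width": 0.35, "metal1_width": 0.50, "contact_size": 0.40}

    desired_factor = 0.70   # aggressive overall shrink target
    per_feature_factor = {
        f: max(new_minimums[f] / current_dims[f], desired_factor)
        for f in current_dims
    }
    shrunk = {f: current_dims[f] * per_feature_factor[f] for f in current_dims}

    print(per_feature_factor)   # poly reaches 0.70; metal1 and contact stop earlier

Even so, each factor is still one number applied to a whole feature class; individual shapes and spacings are never examined on their own, which is why the approach only postpones the problem.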

A more powerful retargeting methodology is now needed: a polygon-by-polygon postlayout manipulation.

A polygon-by-polygon postlayout manipulation is done with the help of computers and the appropriate software. The underlying methodology of this software-driven retargeting, or migration, is polygon-based compaction: the capability of repositioning individual polygon edges according to new process rules. It is the basis for all of the Hard IP engineering discussed in this book. There are many reasons why an approach more sophisticated than a linear shrink is needed for retargeting a physical layout. We address many of these issues in the following discussions and in the remaining chapters.
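To make the idea concrete, here is a minimal one-dimensional sketch of polygon-based compaction, not the algorithm of any particular commercial tool: each polygon edge becomes a node in a constraint graph, minimum-width and minimum-spacing rules become difference constraints between edges, and a longest-path pass assigns each edge the smallest legal coordinate. The edge names and rule values are hypothetical.

    # Minimal 1-D compaction sketch: polygon edges are pushed to the smallest
    # x-coordinates that satisfy minimum width and spacing constraints.
    # A constraint (a, b, d) means edge b must lie at least d to the right of edge a.
    edges = ["left_A", "right_A", "left_B", "right_B"]
    constraints = [
        ("left_A",  "right_A", 0.35),   # minimum width of shape A (hypothetical rule)
        ("right_A", "left_B",  0.25),   # minimum spacing between shapes A and B
        ("left_B",  "right_B", 0.50),   # minimum width of shape B
    ]

    # Longest-path over the constraint graph; the constraints are already listed in
    # topological order here, so one pass suffices (a real tool would sort first).
    position = {e: 0.0 for e in edges}
    for a, b, d in constraints:
        position[b] = max(position[b], position[a] + d)

    print(position)
    # {'left_A': 0.0, 'right_A': 0.35, 'left_B': 0.6, 'right_B': 1.1}

Changing one rule value and repeating the pass moves only the edges that depend on it; this per-edge control is exactly what distinguishes compaction from a proportional shrink.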

The following observations will serve as basic guidelines and stress some of the benefits of polygon-based compaction:

  1. The more the performance of a VLSI circuit design depends on physical layout parameters, the more important it becomes for the retargeting methodology to allow a very high degree of control over layout geometries. Hard IP migration with compaction allows unprecedented control of layout geometries and the freedom to adjust any single layout feature, or any number of them, individually and at practically any time. This enables a concentration on the features that are most critical for DSM technologies.
  2. With the fast-moving evolution of processing technology, many process parameters discussed in the previous section are in a constant state of flux. If processing engineers recognize rules that significantly and negatively impact the yield, they may have to change those rules. On the other hand, there may be layout rules that are too conservative and that could be tightened up a bit. Using compaction for retargeting requires the circuit designer to work with processing engineers to find layout rules that optimally satisfy both the performance needs of the circuit and the yield requirements of processing, as practiced in DfM; a simple view of such rules as data is sketched after this list. This will become even more significant as technologies move deeper into DSM processing.
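As promised in point 2, here is a hypothetical sketch of process rules treated as data that designers and process engineers negotiate over: a small rule table is built, one rule is relaxed for yield and another tightened for performance, and the updated table is what a compaction rerun would then consume. The layer names and values are invented for illustration.

    # Hypothetical design-rule table: minimum width and spacing per layer (microns).
    # In practice such rules come from the foundry's process files, not from code.
    rules = {
        "poly":   {"min_width": 0.35, "min_spacing": 0.40},
        "metal1": {"min_width": 0.50, "min_spacing": 0.45},
    }

    # Processing finds that the metal1 spacing hurts yield and relaxes it,
    # while the poly width proves overly conservative and is tightened.
    rules["metal1"]["min_spacing"] = 0.50   # relaxed for yield
    rules["poly"]["min_width"]     = 0.30   # tightened for performance

    # A compaction rerun would simply be driven by the updated table.
    for layer, rule in rules.items():
        print(layer, rule)

Because the rules are plain data, a "last minute" rule change translates into a rerun rather than a redesign, which is the point of the next paragraph.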

Considering that the processes are constantly tuned, a "last minute" retargeting based on the absolute latest process parameters can produce significant benefits. Hard IP retargeting allows such changes as long as the user is willing and able to make some compaction reruns. It also depends on just how much he wants to "squeeze out" of his design, or how sensitively he depends on the last few percentage points of performance. With today's state-of-the-art migration software, most reruns can be done overnight or faster. Such rerun times can be predicted rather accurately because of the way migration projects can be organized.

Complex migration projects can be roughly organized into three phases: two setup phases and one run phase. The first is the setup of the process files. The second phase, and the effort it requires, depends on the layout to be migrated; it differs for migrating libraries as opposed to migrating memories or other layouts. Determining how best to migrate a layout is something of a trial-and-error exercise. We explore these phases in more detail later. The final phase consists of the computer runs, with everything already set up correctly. So, if process parameters are tuned, producing an updated layout based on the latest parameters requires only this last phase: some straightforward batch-type computer runs.
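The run phase can be pictured as a simple batch driver. The sketch below is purely hypothetical and does not reflect any particular tool's interface; it assumes the process files and layout-specific setup from the first two phases already exist, and it simply loops the compaction run over the blocks of a design, which is why rerun times are predictable enough to schedule overnight.

    # Hypothetical batch driver for the run phase (phase three) of a migration project.
    # Assumes phase 1 (process files) and phase 2 (layout-specific setup) are complete;
    # run_compaction is a stand-in for whatever the actual migration tool provides.
    import time

    def run_compaction(block, process_file, setup_file):
        # Placeholder for the real tool invocation; here it only simulates work.
        time.sleep(0.1)
        return f"{block}_migrated.gds"

    blocks = ["ram_64k", "datapath", "io_ring"]     # invented block names
    process_file = "target_process_rules.txt"       # result of phase 1
    setup_file = "block_migration_setup.txt"        # result of phase 2

    for block in blocks:
        start = time.time()
        result = run_compaction(block, process_file, setup_file)
        print(f"{block}: {result} ({time.time() - start:.1f}s)")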

structures, a class that covers any block ever encountered and, finally, chips. All of this retargeting was done in Hard IP. What about mixing Soft IP and Hard IP on one chip? And what about analog circuits?

What if one of the blocks in Figure 2.18 were an analog block? For now, the answer is that it can be done and is done routinely by some companies. We will show an example of this kind of migration in Chapter 5, when we examine some of the issues of analog Hard IP migration.

Another challenge: What if the chip migration depicted in Figure 2.18 is extended to take blocks from different sources, blocks that were not on the same chip before and not even processed by the same foundry? Such a truly S-o-C scenario is possible, but without a doubt challenging. Possible? "Probably," says the skeptic; "most certainly," says the optimist. We shall examine some of the arguments in Chapter 5.

Finally, how about mixing and matching Soft IP, Hard IP, analog and making all of this work well in a chip while keeping within the power budget and guaranteeing a highly testable circuit? Well??? Conceptually, even this is possible. Feasible and practical? A marketing department would respond with: Good question! We will address even this issue in Chapter 5.