
Chapter 3

Introduction to Coverage Analysis

 

The Origins of Coverage Analysis

Coverage analysis was originally developed when third generation software languages such as Pascal, Ada and C started to gain widespread use in the 1980s. These languages were fundamentally different from earlier languages like BASIC in that they were structured and required the programmer to be much more organised and disciplined. For example, before a variable could be used it had to be declared together with the type of information (e.g. bit, binary, string, integer or floating point) that it would hold. Placing this simple discipline on the programmer enabled the compiler to check and trap situations where the program tried to write the wrong type of information to a variable. Although this facility was useful, it only solved part of the problem: type checking was restricted to the data and did not help uncover problems associated with the logical behavior of the program. The emphasis was on checking the data rather than the control logic. For example, the reason why a program gets stuck in an infinite loop might simply be that a particular variable is never incremented and therefore fails to satisfy a logical or arithmetic test later in the program. Finding this type of logical design error can be time consuming, especially if the program consists of hundreds of lines of code rather than small self-contained procedures or functions.

The first generation of coverage analysis tools was fairly rudimentary and did little more than report the current state of the variables used in the program and the lines of code that had been executed. Coverage analysis techniques became more sophisticated as they started to address the problems associated with the control flow of a program as well as the data. For example, the values of the variables used in conditional statements like `if' and `case' constructs could be tracked so that the programmer could see why a particular branch had or had not been taken.

Consider the following line of code:

if carry==1 or line_count>=80 then

Using coverage analysis techniques a programmer could find out whether the branch was taken because:

… carry==1 was true while line_count>=80 was never true,
… line_count>=80 was true while carry==1 was never true, or
… both conditions were true at least once during the execution of the program.
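
As a concrete illustration, the same decision might be written in Verilog as shown below. This is only a sketch: the module name, signal widths and the reset action are invented for the example, and the comments indicate what a condition-level probe would record.

module line_ctrl (
    input  wire      clk,
    input  wire      carry,
    output reg [7:0] line_count
);
    // The decision from the example above, expressed in Verilog.
    // A condition-level probe records which operand made the test true:
    // carry alone, line_count alone, or both at least once.
    always @(posedge clk) begin
        if (carry == 1'b1 || line_count >= 8'd80)
            line_count <= 8'd0;               // branch taken: start a new line
        else
            line_count <= line_count + 8'd1;  // branch not taken
    end
endmodule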

The use of coverage analysis techniques gained momentum as programmers quickly realized that they could improve their productivity and the quality of their programs by adopting these tools. Finding innovative methods of testing software continues to be critical, because computer programs tend to grow in size as memory devices become larger and the unit cost of storage falls. This means that coverage analysis techniques need to be applied at the subroutine, procedure or function level as well as at the system programming level.

Applying Coverage Analysis Techniques to HDL

Because the two HDLs (hardware description languages), Verilog and VHDL, are fundamentally very similar to other structured languages, it was a natural progression to apply coverage analysis techniques to them. In fact the syntax and structure of Verilog are based closely on the `C' language, whilst the origins of VHDL can be traced to Pascal and Ada.

Applying coverage analysis techniques to hardware description languages improved productivity by enabling a designer to isolate areas of untested or inadequately covered HDL code. Because every piece of untested code can potentially contain a coding error, it is extremely important to identify these areas and exercise them with an appropriate set of test vectors. In this way a designer can quickly build confidence in the `correctness' of the design and show that it meets the original specification before committing excessive amounts of time and money to further development of the product.

Modern Hardware Design Tools

During the last ten years a significant number of changes have affected how most companies design products that contain hardware and software. Perhaps the most fundamental change has been the introduction of hardware description languages and logic synthesis tools, although the increase in the power and sophistication of logic simulators should not be overlooked. All of these changes have probably been dwarfed by the exponential increase in the packing density of electronic circuits and the number of transistors that can be integrated onto a silicon chip. Today the limiting factor in electronic design is not "Is there a chip that is large enough for my design?" but "Are there design tools with sufficient capacity to handle my design?"

Instead of producing a hardware prototype, a designer now has access to an HDL and a logic simulator that can be used to develop the product and prove that it operates correctly. At this stage the design could be completely abstract and bear little relationship to physical hardware. These modern development tools enable any number of iterative changes to be made to the design quickly and easily during the product design cycle. Once the design operates correctly the designer can move on and consider how the product should be implemented. Various factors may influence this step: the physical size of the final product, manufacturing cost, customer perception of how the product should look, product marketing requirements, and so on. This means that the design team has to decide which parts of the product should be implemented in software and which parts in hardware.

Translating the design from the HDL level into physical hardware is normally carried out automatically using a synthesis tool or may be done manually if the design is very small. As well as saving a considerable amount of time and effort, synthesis tools also enable a designer to target different types, sizes and speeds of FPGAs, and by applying a number of `what if' decisions find the most cost-effective solution for the product.

Many designers consider synthesis to be a milestone event in a product development cycle. This is because any changes that are introduced from this point onwards are costly in terms of time and money. It can also involve a lengthy iterative loop if changes need to be made to the original HDL code because re-simulation and re-synthesis will need to be performed. Showing that the design is functionally correct before the synthesis stage is reached is a goal that all designers aim to achieve.

Selecting the most appropriate coverage analysis tool for a particular project can obviously help to achieve the goals outlined above as well as providing the designer with a high degree of confidence that the design is functionally correct.

Typical Capabilities of Coverage Analysis Tools

The coverage analysis tools that are currently available can be divided into two main groups. There are the tools developed by independent software tool suppliers and those developed by logic simulator vendors. Which tool you should use depends on a number of factors, some of which are listed below.

… Is more than one HDL used within your organisation?

Some organisations, particularly multi-national companies, may use Verilog in one division and VHDL in another and share HDL code between project groups. If this is the case then you may want to consider using a coverage analysis tool that can support both HDLs. Some coverage analysis tools offer just single language support while others offer language neutral or dual language capabilities. Choose the tool that matches the current and future needs of your organisation.

… Is more than one type of simulator used within your organisation?

Again, some companies have standardised on the type of simulator that is used throughout the organisation, while others may use a Verilog simulator in one division and a VHDL simulator in another. This means that you need to ensure that the coverage analysis tool you select works with all the simulators in use and will satisfy the needs of all your designers.

… What are your coverage analysis needs?

The number and type of coverage analysis measurements can vary dramatically between tool vendors. Although statement and branch coverage are offered by most vendors, one or two tool suppliers offer a richer set of measurements that are particularly useful when tackling control or data problems at the sub-system and system integration level. Detailed descriptions of the various coverage analysis measurements appear in Chapter 6 of this manual.

How Coverage Analysis Tools Operate

The remainder of this chapter takes a brief look at how a typical coverage analysis tool is used to extract and present information to an HDL designer or verification engineer. A coverage analysis tool performs three basic functions:

… Analyzing the HDL source code
… Collecting coverage data from the simulation
… Presenting the results to the user

Analyzing the HDL source code

During this phase the coverage analysis tool inspects the HDL source code to determine where monitor points, known as probes, should be inserted in order to collect the maximum amount of information about simulation activity in the design. It is crucial that the original source code is not altered in any way, so this process (known as instrumenting) must be non-intrusive; it is normally carried out by making copies of the original source files and inserting the probes into those copies. Different types of probes are used depending on the type of coverage measurements selected by the user.

A set of control files is also created at the analysis stage. These files define the maximum number of possible ways the code can be executed. For example, a simple two-way branch for an if statement will have the value 2 stored in the control file, while a case construct with, say, six different decision paths will have the value 6 stored. At this stage the mappings to a designer's work area and component libraries will also be set up if any VHDL code is used in the design.
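
To make the branch counts concrete, the hypothetical Verilog decoder below has six decision paths (five explicit branches plus the default), so the analysis stage would store the value 6 in the control file for that case statement. The module and signal names are purely illustrative.

module opcode_decode (
    input  wire [2:0] opcode,
    output reg  [3:0] ctrl
);
    // Six decision paths in one case construct: five explicit branches
    // plus the default. The control file would record 6 as the maximum
    // branch count for this statement.
    always @* begin
        case (opcode)
            3'b000:  ctrl = 4'd1;
            3'b001:  ctrl = 4'd2;
            3'b010:  ctrl = 4'd4;
            3'b011:  ctrl = 4'd8;
            3'b100:  ctrl = 4'd12;
            default: ctrl = 4'd0;
        endcase
    end
endmodule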

Collecting coverage data from the simulation

Most coverage analysis tools automatically invoke the appropriate logic simulator (i.e. Verilog or VHDL) and run a normal simulation to collect information about activity in the design. The information collected from the various probes that were embedded in the source files is used to build a series of history files for each design unit or module in the design. Each history file records what actually happened during the simulation; comparing it with the values in the corresponding control file enables the percentage coverage to be computed. An example of how branch coverage is calculated for a case construct is given below.

Branch coverage=(Case_History/Case_Control)*100

where Case_History is the number of actual branches taken and Case_Control is the maximum number of branches that could have been taken.
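
For example, if a simulation run exercised only four of the six decision paths in the case construct described earlier, the tool would report a branch coverage of (4/6)*100, or approximately 67%, for that statement.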

In a number of situations all a designer or verification engineer really wants to know is that a certain piece of HDL code has been exercised a minimum number of times. For example, in a multi-way branch all that may be required during the early stages of verification is to know that every branch has been taken, so the absolute number of times each branch was taken may not be important. Some coverage analysis tools have sophisticated probe management facilities that enable a probe to be deactivated automatically once it has collected a predefined amount of information. This mechanism can help reduce the simulation overhead when many changes are made to the HDL code during the early phases of project development.
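
The fragment below is a minimal sketch of the idea behind such a probe, written as a free-standing Verilog monitor. Actual tools manage probes inside the simulator itself; the module name, port names and the LIMIT parameter are invented for illustration.

module branch_probe #(
    parameter integer LIMIT = 10   // number of hits to record before the probe is switched off
) (
    input wire clk,
    input wire branch_taken        // pulses whenever the monitored branch is executed
);
    integer hits   = 0;
    reg     active = 1'b1;

    always @(posedge clk) begin
        if (active && branch_taken) begin
            hits = hits + 1;       // record another hit for this branch
            if (hits >= LIMIT)
                active = 1'b0;     // enough information collected; deactivate the probe
        end
    end
endmodule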

Presenting the results to the user

The previous two stages have analyzed the HDL code and collected the results from the logic simulation. The last phase is to present the results to the user in such a way that the problem areas can be highlighted quickly so that effort can be directed accordingly.

One of the traps that a user can easily fall into is collecting too much information; more time is then spent wading through the results searching for the problem areas than actually fixing the problems. The temptation is to switch on all the different coverage measurements for the whole of the design just so that you do not miss anything! A better methodology, which will be described in greater detail later, is to partition the design into functional blocks, single design units or modules and to apply appropriate coverage measurements. This means starting with statement and branch coverage to find the obvious and simple coding problems, and then moving on to the more powerful measurements to find the obscure and difficult ones.

Most coverage analysis tools enable the results to be displayed graphically on the screen as well as generating textual printouts. Hierarchical views, color-coding and filtering techniques are all used to enable a user to quickly navigate to the problem areas. A selection of these techniques is described in detail in Chapter 6 of this manual together with screen shots that give examples of how this information is conveyed to a user.

Command Line and Batch Mode

Most coverage analysis tools include a command line interface that enables a user to drive the tools directly from the keyboard, a script or a batch file. This facility is especially useful in projects where there may be hundreds of HDL source files that need to be instrumented and simulated before the coverage results can be inspected.

Some form of automation, using a batch file, is also imperative when the regression test suite is run after changes are made to an HDL source file. More information on analyzing and optimizing the test suite can be found in Chapter 12 of this manual.

