 The Breker Trekker
Tom Anderson, VP of Marketing

Extending Verification Planning to Formal and Graphs

 
April 29th, 2014 by Tom Anderson, VP of Marketing

Last week I mentioned that I attended the third “Decoding Formal Club” meeting sponsored by formal consulting experts Oski Technology. I started out to write about this event but was distracted by the big news that Cadence had acquired formal leader Jasper Design Automation for $170M. As the meeting was winding up, a friend from Mentor picked up the news alert and showed it to me. I pulled up the news on my own smartphone and showed it to Vigyan Singhal, CEO of Oski and also the original founder of Jasper.

So I had the pleasure of informing Jasper’s founder that his old company had been acquired. But I don’t want that bit of fun, or the Jasper news in general, to make us forget about the Decoding Formal meeting. There were two primary segments: a presentation from Vigyan on verification planning and a panel of expert users on building a formal team. I’ll talk about the presentation today and cover the panel in a future post.

Verification planning has grown in importance as the verification process itself has become longer and more complex. In the good old days, which I remember well, simulation testbenches were minimally automated. Most tests were hand-written and designed to exercise very specific portions of the design. Data and addresses might have been randomized, and a few advanced verification teams built infrastructures to run randomized series of operations.

Verification planning consisted of identifying the design features that needed to be verified, listing the tests required, and checking off the tests as they were written, debugged, and running cleanly on the design. Periodic regressions were run to ensure that tests continued to pass as the design evolved. Code coverage was sometimes used as a cross-check to see if any important features were missed in the test list. Formal analysis, if it was run at all, was an entirely separate process with no shared metrics.
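The directed-test bookkeeping described above can be sketched in a few lines. This is an illustrative Python sketch, not any actual planning tool: the feature names and the simple verified/partial/unverified states are assumptions for the example.

```python
# Hypothetical sketch of directed-test verification planning: each design
# feature maps to a list of hand-written tests, and a feature counts as
# verified only when all of its tests pass in the latest regression.

def plan_status(plan, regression_results):
    """plan: {feature: [test names]}; regression_results: {test: passed?}.
    Returns a per-feature status for the verification plan checklist."""
    status = {}
    for feature, tests in plan.items():
        passes = [regression_results.get(t, False) for t in tests]
        if all(passes):
            status[feature] = "verified"
        elif any(passes):
            status[feature] = "partial"
        else:
            status[feature] = "unverified"
    return status

plan = {"fifo_overflow": ["test_fifo_full"],
        "reset": ["test_reset_sync", "test_reset_async"]}
results = {"test_fifo_full": True, "test_reset_sync": True,
           "test_reset_async": False}
print(plan_status(plan, results))  # fifo_overflow verified, reset partial
```

Periodic regressions simply re-run `plan_status` with fresh results; a feature that slips from "verified" back to "partial" flags a regression in the evolving design.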

With the emergence of constrained-random testbenches, there was no longer an explicit link between each test and the functionality it verified. The planning process evolved so that the verification team specified each feature to be verified and wrote a functional coverage point to ensure that the feature was actually verified over the course of all the constrained-random simulations. If the coverage point was hit by a passing test, the feature was deemed verified. Again, code coverage was useful to identify testing holes.
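The constrained-random rule above — a feature is verified only if its coverage point was hit during a passing simulation — can be sketched as follows. The feature names are illustrative assumptions, and real tools track bins and crosses with far more detail than this.

```python
# Sketch of constrained-random coverage bookkeeping: aggregate
# (feature, test_passed) observations across all simulations; a coverage
# hit only counts toward verification if the hitting test also passed.

def verified_features(cover_hits):
    """cover_hits: iterable of (feature, test_passed) pairs collected
    over the whole constrained-random regression."""
    return {feature for feature, passed in cover_hits if passed}

hits = [("burst_write", True),    # hit in a passing sim: verified
        ("underflow", False),     # hit only while the test was failing
        ("underflow", True)]      # later hit in a passing sim: verified
```

A feature that appears only with `False` was exercised but never checked cleanly, which is exactly the hole code coverage alone would not reveal.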

Vigyan’s talk focused on end-to-end formal, the use of formal analysis to verify a design block without block-level simulation. Clearly, identifying appropriate blocks early in the process is part of verification planning. The best candidate blocks are those that are heavy on control logic and those that transport data without transforming it (for example, via an arithmetic computation). In some cases, portions of the design may have to be abstracted to give formal a fighting chance for completion.

The verification plan for formal analysis of a block should include the list of assertions needed to check design correctness and the list of assertions needed to constrain the input space to legal boundaries. The results of assertion coverage from formal analysis (triggered, bounded proof, full proof, etc.) should be brought back into the test plan and combined with any coverage results from simulation. Most modern verification planning tools support formal metrics as well as traditional code and functional coverage metrics from simulation.
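Merging formal assertion results into the simulation coverage view, as described above, might look like the following sketch. The status ranking (full proof over bounded proof over triggered) and the threshold for counting a point as covered are assumptions for this example, not any tool’s actual policy or API.

```python
# Illustrative merge of formal assertion results with simulation
# functional coverage into one pass/fail-style summary per plan item.

# Assumed ranking of formal outcomes; a bounded proof or better is
# treated as covering the plan item in this sketch.
FORMAL_RANK = {"none": 0, "triggered": 1, "bounded_proof": 2, "full_proof": 3}

def merge_coverage(sim_covered, formal_status):
    """sim_covered: set of plan items hit in simulation.
    formal_status: {plan item: formal outcome string}.
    Returns {plan item: covered?} across both domains."""
    merged = {}
    for point in set(sim_covered) | set(formal_status):
        sim_ok = point in sim_covered
        rank = FORMAL_RANK.get(formal_status.get(point, "none"), 0)
        merged[point] = sim_ok or rank >= FORMAL_RANK["bounded_proof"]
    return merged

view = merge_coverage({"arb_fairness"},
                      {"fifo_no_overflow": "full_proof",
                       "handshake_order": "triggered"})
```

In this view `handshake_order` remains uncovered: the assertion fired during analysis but was never proven, so it still needs simulation or deeper formal work.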

The domains of simulation and formal can interact in interesting ways. For example, the assertions used as constraints for a block should be included in higher-level simulation runs that include that block. An assertion violation may point to a bug in the logic feeding the block, but may also flag over-constrained formal analysis. If there is legal input to the block that was constrained out, then the formal results are suspect. Block-level formal analysis must be re-run with the proper set of constraints.
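The over-constraint cross-check described above can be sketched simply: replay the block’s formal input constraints as checkers against stimulus observed in higher-level simulation. The address-range constraint here is a made-up example of such a constraint.

```python
# Sketch of detecting over-constrained formal analysis: any input seen
# in legal system-level simulation that a formal constraint rejects
# means the formal proof ran on too small an input space.

def constraint_violations(observed_inputs, constraint):
    """observed_inputs: values driven into the block during simulation.
    constraint: predicate used as a formal assumption for the block.
    Returns the inputs the constraint would have excluded."""
    return [x for x in observed_inputs if not constraint(x)]

# Example: formal assumed block addresses stay below 0x800, but
# system-level simulation legally drives 0x900 into the block.
suspect = constraint_violations([0x100, 0x7FF, 0x900],
                                lambda addr: addr < 0x800)
```

A non-empty `suspect` list means either a bug in the logic feeding the block or, if the stimulus is legal, that block-level formal must be re-run with relaxed constraints.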

I thought that Vigyan did a nice job of setting the sometimes alien world of formal in the context of mainstream verification planning. The project manager who must ultimately sign off the design needs to know what was accomplished by both testbench simulation and formal analysis. For system-on-chip (SoC) designs with embedded processors, there is the third technique of SoC verification using C test cases. Fortunately, the notions of planning and coverage exist for SoC verification as well.

As a quick reminder, Breker’s TrekSoC product automatically generates multi-threaded, multi-processor C test cases that run on the SoC’s embedded processors in simulation. All functional coverage points, and even code coverage, can be active during these simulation runs. So the verification planning process can begin by deciding which blocks will be verified by formal, which features will be verified by constrained-random testbenches, and which features will be verified by auto-generated C tests.

TrekSoC uses a graph-based scenario model as its input, and as we’ve discussed before, graphs automatically provide a high-level form of system coverage. This coverage corresponds to specific features in the verification plan but also to combinations of features into realistic user scenarios. Since the Unified Coverage Interoperability Standard (UCIS) supports the importation of external coverage types, compliant verification planning tools can read in system coverage and merge it with other forms of coverage for a comprehensive view.
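The idea that a graph automatically yields system coverage can be illustrated with a toy scenario graph: each root-to-leaf path is one user scenario, and coverage is the fraction of paths exercised. The node names and dictionary representation are assumptions for this sketch, not TrekSoC’s actual model format.

```python
# Minimal sketch of graph-based system coverage: enumerate every
# root-to-leaf path through a scenario DAG, then score which paths
# (i.e., which realistic user scenarios) the generated tests exercised.

def all_paths(graph, node):
    """graph: {node: [child nodes]} DAG. Returns every path from node
    down to a leaf, each path a list of node names."""
    children = graph.get(node, [])
    if not children:
        return [[node]]
    return [[node] + p for c in children for p in all_paths(graph, c)]

def path_coverage(graph, root, exercised):
    """exercised: set of tuples of node names seen in generated tests."""
    paths = {tuple(p) for p in all_paths(graph, root)}
    return len(paths & exercised) / len(paths)

scenario = {"start": ["dma_in", "cpu_calc"],
            "dma_in": ["check"], "cpu_calc": ["check"]}
cov = path_coverage(scenario, "start", {("start", "dma_in", "check")})
```

Here one of two scenarios has run, so `cov` is 0.5; exporting such path counts is the kind of external coverage type UCIS lets planning tools merge with simulation metrics.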

This single view of coverage results, when combined with other metrics such as bug-rate convergence, coverage convergence, and test suite completion, gives the verification team and project manager the information they need to decide when to tape out a complex SoC.

Tom A.

The truth is out there … sometimes it’s in a blog.

Please visit us today at www.brekersystems.com


2 Responses to “Extending Verification Planning to Formal and Graphs”

  1. Anand says:

    ” Clearly, identifying appropriate blocks early in the process is part of verification planning. The best candidate blocks are those that are heavy on control logic and those that transport data without transforming it (for example, via an arithmetic computation). ”

    Really? My understanding was that formal was good for computation-intensive blocks but not for control-intensive blocks.

    • Tom Anderson, VP of Marketing says:

      Anand,

      My past experience with formal matches Oski’s suggestions. Formal analysis is best applied on control-intensive blocks. Classic examples of functions amenable to formal include cache coherency, arbitration, flow control handshakes, bus protocols, and power domain control. If you have had positive experiences with formal on computational blocks I’d love to learn more.

      Tom A.






