Making sense of HDL Verification Methodologies

Posted on 2017-02-03 by Lieven Lemiengre
Tagged as: VHDL, SystemVerilog, Verification

In the last decade a lot has changed in the field of digital hardware design verification. To cope with the increasing complexity of designs, the industry needed better ways to test hardware, and many novel solutions were developed. Some of them have become very successful, while others are already obsolete. This can be very confusing for engineers who are new to the field.

In this article I sketch the verification landscape and give an overview of verification methodologies.

There are three broad categories of verification: directed testing, constrained random testing, and formal methods:

  • Directed testing: This is the simplest form of verification. Every test(bench) aims to verify one specific behavior of the design (a minimal sketch follows after this list).

  • Constrained random testing: With this methodology the design is driven by transactions generated by a constrained random generator. The design itself is augmented with assertions that detect illegal states and illegal behavior. The behavior that needs to be tested is captured in coverage points. A design is considered fully tested when the transaction generator can reach all coverage points without triggering any assertion failures. The advantage of this technique is that covering all coverage points is often less work than writing directed tests for the same behavior, and the random stimulus also exercises unanticipated scenarios (see the second sketch after this list).

  • Formal methods: Like constrained random testing, formal methods use assertions to validate your design, but you do not have to write a constrained random transaction generator or define coverage points. Formal tools analyze your code together with its assertions and search for scenarios in which an assertion can fail. They also detect code that can never be reached (dead code). This technique requires that the inputs of the design, or of the part of the design under analysis, are correctly constrained. When applied well, this methodology can be more effective than constrained random testing because the complete behavior of your design is explored (see the third sketch after this list).
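
To make these categories more concrete, here are three small SystemVerilog sketches, one per approach. First, directed testing: a minimal testbench for a hypothetical 8-bit adder (the module name and the ports a, b and sum are made up for illustration). Every stimulus/check pair targets one specific behavior of the design.

    // Minimal directed testbench for a hypothetical 8-bit adder DUT.
    // The module name and its ports (a, b, sum) are assumptions.
    module adder_directed_tb;
      logic [7:0] a, b;
      logic [8:0] sum;

      adder dut (.a(a), .b(b), .sum(sum));  // hypothetical DUT instance

      initial begin
        // Each stimulus/check pair verifies one specific behavior.
        a = 8'd10; b = 8'd20; #1;
        if (sum !== 9'd30) $error("10 + 20 failed, got %0d", sum);

        // Directed corner case: the carry out of the adder.
        a = 8'd255; b = 8'd1; #1;
        if (sum !== 9'd256) $error("carry-out case failed, got %0d", sum);

        $display("directed tests done");
        $finish;
      end
    endmodule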
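
Second, constrained random testing: a sketch of a transaction class with a constraint and a coverage group. The bus_txn class, its fields and the address window are assumptions for illustration; a real environment would also drive a DUT and rely on assertions to catch illegal behavior.

    // Constrained random stimulus with coverage, sketched around a
    // hypothetical bus transaction; all names here are assumptions.
    class bus_txn;
      rand bit [7:0]  addr;
      rand bit [31:0] data;
      rand bit        write;

      // Constraint: keep addresses inside the legal register window.
      constraint legal_addr_c { addr inside {[8'h00:8'h3F]}; }

      // Coverage points: the test is only "done" once reads and writes
      // have been observed in both halves of the address window.
      covergroup txn_cg;
        cp_write: coverpoint write;
        cp_addr : coverpoint addr { bins low  = {[8'h00:8'h1F]};
                                    bins high = {[8'h20:8'h3F]}; }
        cross cp_write, cp_addr;
      endgroup

      function new();
        txn_cg = new();
      endfunction
    endclass

    module crt_tb;
      bus_txn txn = new();

      initial begin
        repeat (100) begin
          if (!txn.randomize()) $error("randomization failed");
          // drive_txn(txn);  // driving a real DUT is omitted in this sketch
          txn.txn_cg.sample();
        end
        $display("coverage: %0.1f%%", txn.txn_cg.get_coverage());
      end
    endmodule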
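
Third, formal methods: a property module for a hypothetical FIFO (the signal names clk, rst, push, pop, full and empty are assumptions). The assume properties constrain the inputs, the assert property states what must always hold, and the cover property asks the tool to show that a state is reachable at all.

    // Formal property sketch for a hypothetical FIFO.
    module fifo_formal_props (
      input logic clk, rst,
      input logic push, pop,
      input logic full, empty
    );
      // Constrain the inputs: the environment never pushes when full
      // or pops when empty. The formal tool only explores legal stimulus.
      assume property (@(posedge clk) disable iff (rst) full  |-> !push);
      assume property (@(posedge clk) disable iff (rst) empty |-> !pop);

      // Assert the behavior that must always hold.
      assert property (@(posedge clk) disable iff (rst) !(full && empty));

      // Cover: ask the tool to show the FIFO can actually fill up.
      // An unreachable cover points at dead code or over-constrained inputs.
      cover property (@(posedge clk) disable iff (rst) full);
    endmodule

In practice a property module like this is typically attached to the design with a bind statement and handed to the formal tool together with the RTL.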

So what methodology should you choose to validate your design?

All of these techniques are valid and effective! The complexity of your design and its requirements will dictate which technique you should choose. The cost and time needed to train an engineer, and the cost of the tooling, also matter: they must be in proportion to the cost of a bug in the design.

For FPGA designs, directed tests are often good enough. Constrained random testing is a more powerful methodology, but it requires complex frameworks and engineers who specialize in this kind of testing. That is why it is rarely used for testing FPGAs: FPGAs are mainly developed by small teams that don’t have the luxury of dedicated verification engineers.

For ASIC designs and large FPGA designs, constrained random testing has become the standard test methodology. Verification engineers specialize in mastering complex frameworks like UVM to verify designs. These frameworks are free and often open source, but the simulators that support them tend to be more expensive. The learning curve of frameworks like UVM is steep, so adding a new engineer to your verification team may require expensive training.

Formal tools are the latest development. This methodology requires special tools and an even more specialized verification engineer: formal verification tools are expensive and the engineer needs to be very skilled, so the initial investment is even higher than for constrained random testing. But when used correctly, formal tools are more effective than constrained random testing. They are mainly used for high-end ASIC design.

In the near future, I will write some follow-up posts explaining the different methods in more detail. I will also go through some of the tools and frameworks that can be used.
