Unit testing: Check result of e.g. verifyEqual

In a unit test, I have many lines such as:
verifyEqual(testCase,1.5,x);
Next, I would like an 'if' statement that executes one of two code blocks depending on whether that verification succeeded or failed. Does verifyEqual store its result somewhere that I can check, maybe in testCase, or must I duplicate the test on my own in order to check the result?
Honestly, I find it surprising that this is not supported:
thisResult = verifyEqual(testCase,1.5,x);
if ~thisResult % MATLAB complains here because verifyEqual does not return an output
disp('problem here'); % My goal is to set a debugger breakpoint on this line.
end
Any suggestions? Thanks

Accepted Answer

Steven Lord on 20 Aug 2018

1 vote

For your specific use case of entering debug mode when a verification failure occurs, add the matlab.unittest.plugins.StopOnFailuresPlugin plugin to your test runner.
If you want to perform some other type of action when a verification failure occurs, you probably want to use a diagnostic in your verifyEqual call. Note that you can specify your diagnostic as a char vector or string and it will automatically be treated as a matlab.unittest.diagnostics.StringDiagnostic, and similarly for a function handle and a matlab.unittest.diagnostics.FunctionHandleDiagnostic.
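A minimal sketch of the plugin approach (MyTestClass is a placeholder for your own test class name):

```matlab
import matlab.unittest.TestRunner
import matlab.unittest.plugins.StopOnFailuresPlugin

suite = testsuite('MyTestClass');        % hypothetical test class
runner = TestRunner.withTextOutput;
runner.addPlugin(StopOnFailuresPlugin);  % enter debug mode at each failure
runner.run(suite);
```

And inside a test method, the diagnostic argument can be a char vector or a function handle; the function handle form is evaluated only when the verification fails:

```matlab
verifyEqual(testCase, x, 1.5, 'x should equal 1.5');
verifyEqual(testCase, x, 1.5, @() fprintf('x was %g\n', x));
```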

5 comments

Jeff Miller on 21 Aug 2018
Thanks for your answer, Steven.
I don't yet see how to use StopOnFailuresPlugin for my actual use case, unfortunately. Early in development I'd like to be able to stop and investigate certain failures but ignore others (to be dealt with later). As far as I can see, the plugin would stop on all failures.
Maybe I can use diagnostics to pause and debug selectively; I haven't figured those out yet. I must say, the overall architecture does seem a bit over-engineered relative to the trivial option of having the verify* functions return their results and letting the programmer use those however he or she likes.
Steven Lord on 21 Aug 2018
You are correct: the plugin stops each time a verification fails.
If the test itself were to decide when to enter debug mode, it would no longer be a Fully Automated Test. If you're running this test in a Continuous Integration (CI) system and you forget to remove this code before submitting it, you could stall your CI system overnight (or until it, or someone monitoring its progress, decides your MATLAB session has stalled or crashed and kills it), and that's a Bad Thing.
Your workflow is a bit different from the one I usually follow. Rather than writing a whole bunch of tests and then running them all at the end, I tend to write a test method, run that test, debug it, and move on to the next method once it's finished. [The ability introduced in release R2018a to run tests from the Editor toolstrip helps with this workflow.] But I think there's a way to achieve what you want without too much extra work and without potentially leaving debug code in the test itself.
  1. Run your test and obtain the matlab.unittest.TestResult array.
  2. Identify those tests in your suite that failed, as per this example in the documentation.
  3. Once you have your suite of failed tests (represented in that example as the failedTests variable) apply the StopOnFailuresPlugin plugin to that pared-down suite.
  4. Run that suite (or individual elements in the suite one at a time), investigating and fixing each failure in turn.
This has a couple other benefits:
  1. If a test that was failing stops failing when you run it in isolation, it may have been affected by some change in the state of the system under test made by a test method that ran earlier. If this happens, it means you don't have independent tests but interacting tests.
  2. If 99% of your tests run and pass, you don't have to spend the time running those passing tests before you reach the 1% of the tests that fail.
  3. The test that you run outside your CI system will be identical to the test you run inside your CI system, so ideally it should run identically in both places.
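The four steps above can be sketched as follows (MyTestClass stands in for your own test class):

```matlab
import matlab.unittest.TestRunner
import matlab.unittest.plugins.StopOnFailuresPlugin

% 1. Run the full suite and obtain the TestResult array
suite = testsuite('MyTestClass');
results = run(suite);

% 2-3. Pare the suite down to the failed tests and attach the plugin
failedTests = suite([results.Failed]);
runner = TestRunner.withTextOutput;
runner.addPlugin(StopOnFailuresPlugin);

% 4. Rerun just the failures; MATLAB enters debug mode at each one
runner.run(failedTests);
```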
Jeff Miller on 22 Aug 2018 (edited 22 Aug 2018)
Thanks again, Steven. Your suggestion of saving the failed tests and selectively rerunning individual ones of those was very helpful, so I will accept this answer.
In case you are interested, here is the reason I want to perform a bunch of tests all at once rather than doing them one by one for each new method: I am developing a collection of probability distribution classes, and each new class has to pass most of the same tests as all of the earlier classes (e.g., integrals of the density function must match the cumulative distribution function, other integrals must match moments, etc.). To make that easier, each of the unit test classes for these distributions is a descendant of a generic distribution test class (where most of the tests are actually done). In essence, I perform the complete generic set of tests when debugging each new class. I do need to fix errors in a particular order (e.g., if the density function is wrong, that has to be fixed first), and that is why I wanted to be able to stop at certain failed tests but not others. Hope that makes some sense...
Steven Lord on 22 Aug 2018
In the scenario you described, you might want to tag your tests if you're using release R2015a or later. Doing so would allow you to run first all the tests for the constructor of the probability distribution class you're currently developing (which could have TestTags = {'constructor'}), then the tests for the density function (TestTags = {'density'}), etc.
This would let you lock down each method in turn before you move on to the next in "importance" and/or development order. It would also let you more quickly qualify bug fixes or enhancements to a given method by running just the tests specifically intended to test that method.
You can use this test tagging and selective running technique with the StopOnFailuresPlugin as well to iterate through just the failed tests for the density method and debug the failures.
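A sketch of tagging with a hypothetical distribution test class (the class name and test bodies are illustrative, not from the original post):

```matlab
classdef NormalDistTest < matlab.unittest.TestCase  % hypothetical class
    methods (Test, TestTags = {'density'})
        function densityIntegratesToOne(testCase)
            pdf = @(t) exp(-t.^2/2)/sqrt(2*pi);     % standard normal density
            verifyEqual(testCase, integral(pdf, -Inf, Inf), 1, 'AbsTol', 1e-10);
        end
    end
    methods (Test, TestTags = {'moments'})
        function firstMomentIsZero(testCase)
            m = integral(@(t) t.*exp(-t.^2/2)/sqrt(2*pi), -Inf, Inf);
            verifyEqual(testCase, m, 0, 'AbsTol', 1e-10);
        end
    end
end
```

Then `suite = testsuite('NormalDistTest', 'Tag', 'density'); run(suite);` runs only the density tests, leaving the moment tests for later.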
Jeff Miller on 23 Aug 2018
Thanks! I did not know about tags, and you are right that those will be very helpful in my scenario.
