Test Exceptions and Mask Constraints within the Test Manager

Stefanie on 2 Jul 2024
Commented: Umar on 19 Sep 2025
Hi,
I want to test my module under test using a test harness and the Test Manager combined with a Test Assessment. In addition to "normal" test cases, I also want to run out-of-range tests, pass invalid mask inputs (protected by mask constraints), or set invalid values on signals.
I found the openExample('matlab/VerifyErrorTestInputValidationOfAFunctionExample') example, which is roughly what I want to do, but I want to integrate it into my Test Manager so I can automatically run several different tests.
An example: I have a Simulink subsystem with a mask parameter called a, which must be a positive number and is protected by a mask constraint. In my first test step I enter the value 5 for this mask; in the next test I want to enter -5. I expect the mask constraint exception to be thrown, and it is thrown, but I want to record this expected exception in my Test Manager as a valid test that passes when the correct exception occurs.
How can I do that?

Answers (2)

Umar on 2 Jul 2024
Hi Stefanie,
You can use the verifyError function within your test case. For example, after running the simulation with the negative value (-5) for the mask parameter, you can use verifyError to confirm that the expected exception was thrown. Here is an example of how to implement it:
testCase = sltest.TestCase('MyTest');
verifyError(testCase, @() yourSimulationFunction(-5), 'expectedExceptionIdentifier');
Please make sure to replace yourSimulationFunction with the function that runs your simulation with the negative value. Make sure to provide the correct expectedExceptionIdentifier that matches the exception you expect to be thrown when the constraint is violated.
For more information regarding verifyError function, please refer to
https://www.mathworks.com/help/matlab/ref/matlab.unittest.qualifications.verifiable.verifyerror.html
Hope this will help resolve your problem.
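As a more concrete sketch of the pattern above: the model name, block path, and parameter name below are hypothetical placeholders, and the error identifier is taken from later in this thread, so it should be verified against the actual MException caught in your setup. The idea is that setting an invalid mask value via set_param throws the constraint exception immediately, and verifyError passes the test exactly when that identifier is thrown.

```matlab
% Sketch only; model, block path, parameter name, and identifier are assumptions.
% Run inside a test method, where testCase is the matlab.unittest test object.
model = 'myModel';
load_system(model);

% Setting an invalid value on a constraint-protected mask parameter should
% throw; verifyError passes when exactly this identifier is raised.
verifyError(testCase, ...
    @() set_param([model '/MySubsystem'], 'a', '-5'), ...
    'Simulink:Masking:InvalidParameterSettingWithPrompt');
```

If no error is thrown, or a different identifier is thrown, verifyError marks the test as failed, which matches the "expected exception equals pass" behavior you are after.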

Kris Schweikert on 18 Sep 2025
Hello, I’d like to join in on the question. I have exactly the same use case.
The first sub-problem — evaluating the corresponding error — I was able to solve using the Custom Criteria field in the Test Manager as follows:
test.verifyEqual(test.sltest_simout.SimulationMetadata.ExecutionInfo.ErrorDiagnostic.Diagnostic.identifier,'Simulink:Masking:ConstraintErrorMessageHeader')
But I still have the issue that the test fails overall due to the (expected) error, even though the custom criterion is fulfilled.
My question therefore is: Is there a way to override the entire test result? I understand that TestResult is read-only — but maybe there’s a workaround?
Best regards
  3 comments
Kris Schweikert on 19 Sep 2025
Thank you very much for your reply. Inspired by your response, I ended up doing something quite similar, using a try-catch statement in the Custom Criteria field of my test. Instead of running the simulation, I just set the mask parameter to an invalid value and evaluate the thrown error:
try
    % inject an invalid value for the parameter
    set_param(strcat(test.sltest_bdroot{1},'/<myMaskedSubsystem>'), 'myParameter', 'myFalseValue');
    test.verifyTrue(false); % fail the test if no error was thrown after the invalid injection
catch ME
    % check the error identifier
    test.verifyEqual(ME.identifier,'Simulink:Masking:InvalidParameterSettingWithPrompt');
end
When added to a valid test (i.e. one that also contains some other logical and temporal assessments), this works without failing the whole test due to the thrown error.
Cheers
Kris
Umar on 19 Sep 2025

Hi @Kris Schweikert,

That’s a great solution — I’m glad my response helped spark the idea! Using set_param directly in the custom criteria to inject an invalid value and then catching the expected error is a smart way to bypass simulation-level failures while still validating error handling. I like how you explicitly fail the test when no error is thrown — that makes the intent very clear.

As you pointed out, wrapping this in a broader test with other logical or temporal assessments ensures that the expected error doesn’t inadvertently fail the entire test run. It's a clean workaround for the limitations of TestResult mutability.

Thanks for sharing your approach — I’m sure others will find it helpful too!


Version

R2023a