Using Model-Based System Engineering to comply with ARP4754A
Overview
This panel will offer a comprehensive overview of how Model-Based System Engineering and Model-Based Design can be utilized to meet the objectives of the ARP4754A Standard. Industry experts will dive into the standard's objectives, showing how the use of MathWorks tools can significantly improve your processes.
Highlights
During the panel you will learn about the following techniques in the context of the standard and how to use MathWorks products to accelerate your processes:
- Requirements capture and validation
- System- and subsystem-level architecture modeling and development
- System allocation, assessment, and decomposition
- System verification techniques
About the Presenters
Marco Bimbi holds an M.Sc. in Aerospace Engineering from the University of Pisa, specializing in control theory and flight mechanics. Before joining MathWorks, he worked for 10+ years in the aerospace and rail industries, at companies such as Rolls-Royce, Lilium, and Deutsche Bahn, focusing on systems engineering workflows for safety-critical applications. During his career he held various roles, including Control Systems Architect, Model-Based Systems Engineering Specialist, and Requirements Manager. At MathWorks he focuses on Model-Based Design and Systems Engineering workflows for safety-critical applications.
Ulrich Fräbel holds a Graduate Engineer degree in control theory from the University of Rostock and in aerospace engineering for engine and flight control systems from the Officers' College of the East German Air Force. He worked for 20+ years in the aerospace industry, at Airbus and Rolls-Royce, in the areas of systems engineering, flight testing, and certification. During this time, he used MATLAB and Simulink intensively. Currently he runs his own business as general manager of Hybrid Aerotech GmbH in Germany, developing hybrid-electric propulsion systems for small aircraft and UAVs, and of the consulting service Certification Office. In several projects he has led or actively supported the introduction of the ARP4754A design assurance standard into the design management systems of several design organizations.
Juan Valverde is the Aerospace and Defence Industry Manager for the EMEA region at MathWorks. His technical background is in the design of dependable embedded computing solutions for aerospace.
Prior to MathWorks, Juan was a Principal Investigator for Embedded Computing at the Advanced Technology Centre of Collins Aerospace - Raytheon Technologies in Ireland. Juan holds a PhD in Microelectronics and Computing Architectures from the Technical University of Madrid (Spain).
Recorded: 16 Jan 2024
Let's get going with the seminar today. Today, we have two presenters. First, we have Ulrich Fräbel from Wankel Aviation, and Marco Bimbi from MathWorks. So good afternoon to you both.
Good afternoon.
[INTERPOSING VOICES]
Thank you. So just to say a couple of words about them: Mr. Fräbel worked for more than 20 years at Airbus and Rolls-Royce, and he now works at Wankel Aviation as a certification expert. Mr. Bimbi worked for more than 10 years at Rolls-Royce, Lilium, and Deutsche Bahn and now works for MathWorks as a systems engineering specialist.
Great. So then let's start with the structure of the seminar today. We will have three blocks, and before that, an introduction. In each of these blocks, Mr. Fräbel will talk a bit about what the document is asking for, and then Mr. Bimbi will explain possible ways to do these things with the MathWorks tools. We will have a short Q and A session at the end of each block, so please, once again, drop your questions there.
In the first part, Mr. Fräbel will provide an introduction to the standard and its ecosystem. Then we will jump into the first block, about requirements capture and validation. In the second block, we will talk about system architecture development. And in the third block, we will cover implementation and verification.
At the end of these blocks, we will do a small recap and answer more questions. So I think that we can start with the interesting part. So please, Ulrich, the floor is yours.
Hello. Good morning, good afternoon, everybody, and welcome to this webinar. Before we start, I would like to mention that ARP4754B was published at the end of last year. So today we are talking about ARP4754A, and where ARP4754B differs from it, I will make some side notes.
What I can say is that the structure is retained and the principle is retained. ARP4754, both A and B, has been developed for designing complex aircraft systems. And the B version provides some nice clarifications compared with the A version. So for me, it's a good development in the right direction.
So let's start here with ARP4754A. As I said, it's for the development of complex systems. The concern arose over the years, as the complexity of aircraft and systems increased, that authorities could no longer be sure whether the systems are comprehensible and whether they contain design errors, because one of the characteristics of a complex system is that you cannot test it completely any more. So you need to use analytical methods.
You need to use analytical tools to ensure you have no omitted requirements in your system, and also no unintended added functions in your system.
So to start, a few words about complexity in general. You see here this chart of cost over time: when an error is introduced in requirements, design, or implementation but found late, the cost of fixing it increases significantly.
So the problem here is that our human brain cannot really handle complexity, and therefore, as I said, the ARP was introduced. What we typically do is chase problems in the late phases, because by then the product is physically there. You can see it, you can touch it, the product talks to you. It's easy to find the problem at the very end, but it's very difficult and expensive to fix.
The next one, please, Juan. Yes. So therefore, we need to pay attention to the early phases, where the level of abstraction of your product is still relatively high: the phase of setting your requirements, understanding your requirements, and starting the functional and architectural development.
So these are stages in your product life cycle where you are still relatively remote from your product. It's not there; it's on paper, it's somewhere in the computer, but it's not physically there. And our brain is, let me say, constructed in a way that we don't like to handle these issues at the very beginning.
And exactly this was realized over the years when the ARP was created: we must put the focus on the very beginning of the development life cycle in order to address this.
Thank you. It's good. OK. Juan, go to the next one. Yes. Thank you.
So this is a representation similar to what you find in the new version. It provides more information about the safety process itself. Today, we're going to talk about what's inside the dark blue circle: the system design and system development life cycle.
We will not talk today about ARP4761, where, by the way, the A revision has also been issued in the meantime. So that's the content we are going to cover today.
So what is the ARP asking for? The ARP is embedded in the regulatory landscape of the authorities, which asks that planned and systematic tasks be defined and performed from the very beginning of the design activities through to the continued airworthiness activities. And the ARP does exactly this, especially for complex systems.
Sorry about this, Juan. Here we have zoomed in on what we're going to talk about today. The ARP is here to support the compliance demonstration against your certification requirements and against your company's own requirements, so that you can achieve your business requirements and your customer requirements, but above all your certification and safety requirements.
So we end up with a set of different, sometimes conflicting requirements. And the ARP is there to help you arrive at a robust set of requirements, and later on, design and implementation. The key point is showing compliance with 25.1309, the equivalent paragraphs in Parts 23, 27, and 29, and the respective AMCs, which finally point to ARP4754A, especially for complex systems.
OK. The picture we saw before was the left side of the ARP process diagram; this one here is the right side. We will talk about the dark blue boxes: setting up your verification program, and how MATLAB and Simulink can support these activities.
So you see here that your test program is directed mainly against the requirements, taking design considerations into account, of course, but also looking at what your safety assessment tells you about the criticality of your functions, and at the safety requirements and safety features you have to take into consideration.
And it also shows, and this is the good thing I see in the new ARP, that the final safety assessment is made when you have completed your verification. The safety assessment evolves with the design, and when the design is implemented, you typically end up with a first stage of safety assessment.
However, later on, you need to analyze your test results and check whether the implemented system meets the requirements.
And you see here also the open problem reports. So finally, the ARP itself will help you verify whether the implemented system still meets the safety assessment, OK?
So we see here, again, an extract of what we are going to talk about today, and an example of how aircraft development can happen. You have a set of aircraft functions, and you see how those functions get allocated to different systems. Then, representative for one system, you have the continuation down to the software and hardware development processes.
And you see here the aircraft functional development, which is a useful grouping of aircraft requirements: allocating them to functions in a way that eases your development process, subdividing it into manageable subtasks, and then conducting these tasks in a sequential manner.
What are we talking about here today? As I said, we're talking about system development, so we are interested in the allocation of functions to aircraft systems. An important thing to mention, thank you, Juan: in this early phase, remember the chart at the beginning, you are still remote from the product, so the level of abstraction is still relatively high.
So finding the optimum solution, defining all required functions, and finding the right allocation of functions to subsystems is more a cyclic than a sequential process. And at this level of abstraction, you can see that complexity starts already at the top level, at a very high level.
You can lose a lot of effort here by not finding the optimum allocation of functions to aircraft systems. And you see here that a single function can be allocated across different systems, but also that different functions can be allocated to a single system.
The example is the engine. Marco and myself are coming from Rolls-Royce, an engine supplier, so we know what we're talking about. An engine is an extremely complex system, and it will definitely host several different functions, not only the main aircraft function. OK.
Great. Thanks a lot, Ulrich, for the introduction. I think we're going to launch a poll now, so if you have an answer, please respond. It would be great to know what you are doing.
And then we're going to start with the first block, where we will talk about requirements, OK? As I mentioned, the dynamic will be: Ulrich will present an introduction, and then Marco will explain how to do that in the tools. Yep. Ulrich, all yours.
So it's mine. OK, let's start with requirements. What do we need requirements for? Requirements express a need from a customer. The customer tells us what he wants us to do, in which quantity, in which time, and at which cost.
So keep in mind: customer requirements are never complete. Customer requirements are never correct. And customers very often change their minds over the development life cycle. So a robust validation process is very, very important.
So finally, requirements tell us what we have to deliver commercially, and as I said before, that's very important. At some point you need to start your design, and you can only cope with a limited level of continuously changing requirements. Therefore, you need to agree with your customer what you need to deliver; commercially, that's the very first step in the development.
Very important to realize: requirements are the beginning of any design. The design should always start with requirements, and I say "should" because the ARP is an AMC; we're coming to this later on. And finally, the requirements are the confirmation of whether we have done something right, or lead us to the conclusion that we have not. So these are the key points of why we need requirements. The next one, please.
So if we-- Juan, can you click the mouse? Yes. Thank you.
So when we have an incorrect, incomplete, or inconsistent set of requirements, we are at high risk. That's what we need to know, because designing these complex systems, or any systems, is a garbage-in, garbage-out process. When you do not define requirements correctly, you will not get what you want.
So remember again the slide we had before: at a high level of abstraction, you are still far away from your product. Therefore, you need to put effort and emphasis, amongst other things, into defining your requirements.
There are two things I've seen throughout my career that engineers don't like to do: write requirements and manage configuration. Writing requirements is a highly subjective task; it's very difficult to put on paper exactly what you have in your mind.
To help you do this in an efficient way, you should have requirements standards in your company: applying these standards by using, sometimes, standard phrases behind which a defined technical meaning stands, and thereby making very clear what you want to express when you write a requirement.
So when you're writing and validating your requirements, the first thing you need to do is validate the requirement itself for completeness, correctness, and unambiguity. Does the requirement follow the standards in your company? And is the requirement derived?
Then you need to check your traceability. Is the requirement traced to a parent requirement, or, if it is a derived requirement, does it trace to a rationale? ARP4754B is different here with regard to derived requirements; the best thing is to look into it and make up your own mind. The definition of a derived requirement is different in the B version than it was in the A.
So when you're validating, check: is the traceability established? That's one of the key elements in validating your requirements. And does the requirement itself take the higher-level design decisions sufficiently into consideration?
For example, when you make a design decision at aircraft level, say, are we going to develop a mechanical flight control system or an electronic flight control system? Of course, this is a design decision that will massively drive your lower-level requirements. So you have to validate for consistency.
Oh, Juan, you're driving me up.
OK. These are other validation cases. You also need to validate: does the design you finally develop meet the requirements, and can I develop a design for my requirements at all? Because you can easily write a requirement which cannot be transferred into a design. That's what I said here already. Juan, can you go to the next one?
And also, as we said: does the design comply with the requirements? Here you can, for example, also use testing. Validation testing is an absolutely valid approach. Very often what I hear in my consultancy work is that testing is verification. This is not true: testing is also a means of validating your requirements and validating your design, right? The next one, please, Juan? Yes.
And also: can I implement the design? OK, it's not directly linked to the requirement, but this is what the left side of the diagram in the ARP is asking for, so for completeness reasons I have put this one in. And you see here both the left and the right side of the diagram; you cannot really separate them from each other. It's a smooth transition from one to the next.
And the final one, Juan, please. Yeah. This is the gray area, which we'll talk about when we come to the verification of the implementation. So that's it so far from my side on why we need requirements.
The validation process itself ensures that the requirements are, as I said, correct and complete, and that the product will meet airworthiness, customer, and project requirements.
Correctness and completeness means: do we have ambiguous statements? We need to find them and eliminate them. Do we have incorrect statements? Do we have incomplete statements? Validation is about searching for requirements with these characteristics.
And completeness: are all requirements traced to an identified source? Are assumptions and constraints adequately defined? Because you cannot always be 100% sure of what you are using in the design process; then you need to declare it as an assumption. Assumptions are very critical, and therefore they need to be handled with exceptional care.
Typically, in the engine world, we have certain certification paragraphs relying on interface assumptions, where because of the high complexity, you need to make assumptions about what's going on on the other side of your interface. We see this typically in the engine world, but it can also happen inside systems. So you have to make assumptions, agree the assumptions, and stick to these assumptions. This is a very important part of determining whether a requirement is complete.
And correctness: of course, do all the requirements correctly reflect the input data from the safety and higher-level design processes? For this, you need to have a validation matrix; that's what the ARP is asking for. Most of you may recognize this picture. One of the key questions for the validation matrix is: did we achieve full validation coverage? And finally, you need to provide this in your validation summary.
The content of the validation matrix is typically the requirement or assumption, the source of the requirement, whether the requirement is derived, and its criticality, which comes from the higher-level safety process, in this case from the aircraft functional hazard assessment and the preliminary aircraft safety assessment: what safety requirements have been assigned to the given aircraft function.
Great. So thanks a lot, Ulrich. I think then we can jump into the next part, and Marco is going to explain a bit how to do these things in the tools. Thanks a lot.
Yeah. So let's quickly see how we can actually do some of the things that Ulrich just explained. The idea in this block of requirements capture and validation is that we start with stakeholder requirements coming into our environment, into our design, and we want to validate those.
At the end of the process you see here, what we want to have is a refined set of requirements, the interfaces being defined, and the reports, what was called before the validation matrix of the ARP, providing the traceability and the evidence that justifies all of that.
In terms of capturing the requirements, we can do different things. Of course, as many of you know, we have a Requirements Toolbox in which people can author their requirements directly. But we understand that customers might use other requirements management tools.
So what we provide is a means for people to ingest the requirements into our environment, where they can be taken further for analysis, linked with the model, and the traceability backfilled so that you can then generate the matrix. Because at the end of the day, you need to show as evidence that each and every requirement is verified and linked to the appropriate design, as we showed before.
We support many third-party requirements management tools, either through ReqIF or through a direct integration. We have direct integrations with DOORS, both Classic and Next Gen, with Polarion, and with Codebeamer, through plugins that we have built either ourselves or with the vendors. For the rest, we use ReqIF.
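As a minimal sketch of that ingestion step, assuming a hypothetical ReqIF export and requirement-set name, the Requirements Toolbox API can be driven from MATLAB like this:

```matlab
% Sketch: ingest a ReqIF export from a third-party tool into Requirements
% Toolbox. The file and requirement-set names are placeholders.
count = slreq.import('stakeholder_reqs.reqif', 'ReqSet', 'QuadcopterReqs');
fprintf('Imported %d requirements.\n', count);

% Load the resulting set for analysis and later linking to the design.
rs = slreq.load('QuadcopterReqs');
```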
Now, let's assume we have ingested those requirements into our environment. At this point, we have two major ways to validate them. The first and most obvious is to build a Simulink model that lets us understand our stakeholder expectations and refine them.
Take an example where we have requirements for a quadcopter that has to achieve a certain range and has to accelerate from zero to a certain velocity in, let's say, eight seconds. So we have range and timing as parameters to build to.
Well, we can build a virtual prototype of it, essentially a Simulink model, to the best understanding that we have at this point in time. We can then run an optimization analysis: we write merit functions, optimization functions, and try to optimize the design of the quadcopter to fulfill those parameters in terms of range and acceleration time.
And what's the outcome of this? A refined set of requirements. What we can try to understand from this process is whether, with current technology, we can actually achieve what the requirements are asking for. This is the first cutoff: can we achieve it, yes or no? Are those requirements incompatible in the first place, because of the range and the acceleration we are requiring?
We can also figure out whether we have any missing requirements, that is, whether we have to add any constraints. The result of this is effectively a report that summarizes how we've done the analysis and what the result is, plus a revised set of refined requirements that you agree with your stakeholders, and of course the validation matrix that you keep as evidence.
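A minimal sketch of that kind of feasibility check, using fmincon from Optimization Toolbox with a toy surrogate standing in for the Simulink prototype; all names and numbers are illustrative:

```matlab
% Hedged sketch of a requirements-feasibility check: can any design in the
% allowed space meet both the range and acceleration-time targets?
reqRange   = 10e3;   % [m]  stakeholder range requirement
reqAccTime = 8;      % [s]  zero-to-cruise-speed time requirement

x0 = [2.0; 0.5];     % design vector: [battery_kg; prop_radius_m]
lb = [0.5; 0.2];  ub = [5.0; 0.8];

% Minimize battery mass subject to both requirements being met.
cost = @(x) x(1);
cons = @(x) deal(requirementGap(x, reqRange, reqAccTime), []);
[xOpt, ~, flag] = fmincon(cost, x0, [], [], [], [], lb, ub, cons);

if flag > 0
    fprintf('Requirements achievable, e.g. with %.2f kg battery\n', xOpt(1));
else
    disp('No feasible design: requirements may be incompatible or incomplete.');
end

function c = requirementGap(x, reqRange, reqAccTime)
    % Placeholder for simulating the virtual prototype (e.g. via sim());
    % returns constraint slack, c <= 0 means requirement met.
    range   = 6e3 * x(1) / (1 + 4 * x(2));   % toy range model
    accTime = 4 + 2 * x(2) + 0.8 * x(1);     % toy acceleration-time model
    c = [reqRange - range; accTime - reqAccTime];
end
```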
Now, this requires you to create a Simulink model. The other way, which cannot be applied to every requirement, is to have a specification model that formalizes the requirements, with the aim of validating requirements completeness and consistency.
The idea, as we will see, is that this specification model can then be analyzed through formal methods, for example Simulink Design Verifier, which looks for corner cases in terms of incompleteness and inconsistency. The good thing is that you can then automatically generate tests from it, because you effectively have a specification model with defined semantics.
This is how it looks in our environment. We can formalize requirements through what is called a Requirements Table, where each row is a requirement, so the entire table constitutes your requirement set. What you're looking for is completeness and consistency: do you have the entire set, and are there requirements conflicting with each other? That's what you see on the right-hand side of the screen: inconsistency and incompleteness.
The table is analyzed by Simulink Design Verifier, a formal methods tool, as many of you know, which looks for inconsistency and incompleteness in the mathematical formulation of those requirements.
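A sketch of launching such an analysis programmatically; in the editor the same check is run from the Requirements Table's analysis button, and the model name and option choices here are assumptions:

```matlab
% Sketch: run a Simulink Design Verifier analysis on a model containing the
% Requirements Table. 'quadReqSpec' is a placeholder model name.
opts = sldvoptions;
opts.Mode = 'DesignErrorDetection';   % hunts inconsistency/incompleteness
[status, files] = sldvrun('quadReqSpec', opts, true);   % true = show UI
if status == 1
    fprintf('Analysis finished; results stored in %s\n', files.DataFile);
end
```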
If you look closely at the picture, you see that beside the Requirements tab there is a tab called Assumptions, which is what Ulrich was describing before. Assumptions, and especially assumption management, are very important in the ARP. Logging assumptions there, for example physical limitations of your model or generic assumptions you have about your system, restricts the design space that Simulink Design Verifier will analyze.
What this gives you at the end of the day is also your assumption management, kind of for free, because every assumption eventually needs to become either a requirement or needs to be cleared. This gives you a formal and structured way to manage your requirements and your assumptions.
The idea of how to use this table is to launch the analysis, find whether you have inconsistencies or incompletenesses, and then resolve them by, for example, adding or changing requirements.
So again, in this case, the output is the evidence that you have done the analysis for completeness and consistency: the report and the traceability matrix as evidence, plus the assumptions.
If you go to the next slide, this is an attempt to map the ARP objectives to our toolchain. What we can see at the moment is that through modeling we can capture interfaces, and we can use the Requirements Table to model and analyze the requirements thanks to Simulink Design Verifier.
The Requirements Table analysis gives us a verification result and, most importantly, corner cases we haven't considered. Because all of this is embedded in our Requirements Toolbox, we can generate all the traceability that we require as evidence in terms of the ARP.
Great. Thank you, Marco, again, and thank you, Ulrich. We are not doing great with time, so let's do a quick check on the Q and A. One of the questions is: for design and manufacturing purposes, which tool is used for DFMEA or PFMEA at the early stage rather than at a mature stage? So yes, you can support processes like DFMEA and PFMEA. We won't have time to look at this in full, but you should know that we have recently launched a new product called Simulink Fault Analyzer, whose name maybe won't tell you the answer for DFMEA right away.
But the idea of this tool is that you can do FMEA and FMECA, and you can also customize it to do DFMEA and PFMEA. So if you are interested, please feel free to contact us, and we can give you a demonstration of how this can be done.
Great. Then another very quick one: does this webinar have a live session in MATLAB? I'm afraid not; we don't have time for that.
No time for that.
Feel free to contact us, and we can do more. Another question: we sometimes use the term design verification; it sounds like this is synonymous with what you've called design validation. Is that correct, or is there a difference?
I think, Ulrich, you can correct me if I'm wrong, but basically the difference between the words "validation" and "verification" is this: validation is whether you are building what you should be building, what the customer really wants, and verification is whether you built what you said you would. So, have you implemented your requirements correctly?
Very quickly then: validation asks, did we build the right system? And verification asks, did we build the system right? Validation looks into the future, saying: we have the right requirements, we have a good design, and all the requirements are reflected in the design. And verification then looks at the implementation, where you can also make a lot of mistakes.
So as I said before, don't confuse testing with being only a verification matter; you also need to do a lot of tests to validate your requirements and your design. So again: did we build the right system? That's validation. Did we build the system right, looking for implementation errors? That's verification. That's the context in the ARP.
Great. Thanks a lot. Please, I see some people are sending questions in the chat; please send them in the Q and A, it's a lot easier for us to follow up. Another question, and I think it's going to be the last because we need to switch to the next block: can correctness be checked with a mix of Simulink and textual requirements?
So yes, you can check correctness by combining both techniques I showed you before: whatever you can model through formal methods goes into the Requirements Table, and the rest you model through an equivalent Simulink model that is representative of the requirements you can't express in the table. That's the short answer, very short.
Yeah. We may have time for more questions later. We will start with the next block. So please, Ulrich?
Yes. Thank you. So we go on now, after we have set a complete, robust set of requirements. Can you hit the next one, please, Juan? We are here in this area of the ARP now: we have requirements, and the next step is the allocation of system requirements to subsystem functions.
Same principle as before. When you have a novel design, it is strongly recommended to go through a conceptual design phase rather than jumping directly to the final solution. Then you go for the allocation of system functions to subsystems.
Your system architecture will evolve by dividing the top-level requirements into several subfunctions and checking them against the top-level requirements. This is the point where the system architecture starts to evolve. Next, please.
OK. This is, of course, rather a cyclic than a sequential process; you have to do a lot of turns before you find it. Typically, you also have in this area incremental baselines, which then also drive the software and hardware design.
Formulating the requirements before you can start your software and hardware processes is typically not a straightforward process either; you have to go through several rounds before you find your optimum solution. Next one, please.
So this is how architectural development typically starts: you begin with a generic model of the general function or the system you are going to build later on.
Simulink is very suitable for this, because one of its big advantages is its very powerful closed-loop simulation capability, with which you can start simulating the closed loop.
This is the area where you check your stability margins: do you have sufficient stability margin? You can look, in the first instance, for sufficient transient behavior, and at your steady-state behavior, OK? You find your timing requirements, which you will later apply when you start designing your system, allocating timing requirements to the software and the other components. This is typically what you do in this first stage.
The second one, the dark green boxes, is typically the starting point for forming the system requirements. From these, you start to design your system and derive what will get implemented, taking into account your safety requirements, of course.
This is the typical way you design a system: starting from the top level, where you are still working with a closed-loop model in the Laplace domain or in state space, and then doing your refinement in the next steps, the green boxes, realizing it in more precise requirements and design.
So you see here we have several functions, for example: measure parameters, calculate control parameters, make a final parameter selection, and create the system. Next one, please, Juan.
Important to say: the first one is recommended by EASA, but the second one will be required. And EASA has a certification memorandum about this in work.
A very frequent question is where the ARP is applicable and where it is not. The first one is a recommendation: if you use it, it will help you, but it is not required by EASA or FAA. The second one will be required in the future; as I said, a certification memorandum will be issued soon, and you will then have to follow ARP4754, A or B, I don't know yet, I think it will be B by then. That's what I can say about the system architectural design.
The next step, then, is the allocation of system requirements to the items. But before that, I would like to mention the principles: what are we doing within the architectural design?
In the architectural design, we are trying to subdivide the functions into architectural elements that can be developed in isolation as much as possible, emphasis on "as much as possible." Finally, the architectural components get integrated, under the architectural constraints of the architectural design, to form the whole system.
Simulink is also a very suitable tool for the analysis needed to do your architectural design in a manner that minimizes surprises and open problems later on during testing. Architectural design happens at the very beginning, and the mistakes you make there you will most probably not find while designing and developing your items; you will find them later, at the system or system integration stage.
So for minimizing design, requirement, and implementation errors, Simulink is a very powerful tool. Now, allocation of items to systems, yes.
This is then the next step: from the architectural elements, you design your items. Items are what we talk about when we talk about implementation; before, it was more about design and functional expressions, whereas item design includes the implementation.
And this here is the final step before we go to the software and hardware design processes, where we derive the requirements allocated to software and the requirements allocated to hardware. OK. On to the next, Juan. Yeah.
OK, thanks, Ulrich. Then let's see how we can actually do architecture development in our toolchain. Again, a similar process to the one we had before: coming from the requirements that we have previously validated, we start developing an architecture, allocating the requirements to the different parts of the architecture, updating the interfaces we defined before, and refining the traceability so the requirements are linked to the design.
Our tool for modeling architectures is called System Composer. In System Composer we describe an architecture using basic metamodel elements, like components, ports, interfaces, and connectors, and of course we can extend them with custom properties. We will see that those custom properties are used for different things, like analysis and filtering.
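A minimal System Composer sketch of those metamodel elements; the model, component, port, and interface names are illustrative:

```matlab
% Sketch: describe an architecture with System Composer's metamodel elements.
model = systemcomposer.createModel('QuadcopterArch');
arch  = model.Architecture;

fc  = addComponent(arch, 'FlightController');
mot = addComponent(arch, 'MotorAssembly');

% Typed interface shared by the connection.
intf = addInterface(model.InterfaceDictionary, 'MotorCmd');
addElement(intf, 'throttle', 'DataType', 'double');

cmdOut = addPort(fc.Architecture,  'cmdOut', 'out');
cmdIn  = addPort(mot.Architecture, 'cmdIn',  'in');
setInterface(cmdOut, intf);
setInterface(cmdIn,  intf);

connect(cmdOut, cmdIn);
save(model);
```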
As the architecture becomes more complicated, we have a specific technique called views to filter a complex architecture. One can have different viewpoints on the architecture to understand a specific problem; for example, if you are a function owner, you can filter the entire architecture end to end for that functionality.
When it comes to architecture, we also need to perform analysis. The idea is that those stereotypes can be leveraged to perform analysis using the power of MATLAB.
In this example here, we are trying to calculate the endurance, the mass, and the power draw of a quadcopter architecture. Later on, we will do another type of example, where we also include design variation to see how we can handle that.
The basic idea of developing an architecture, as we explained, is that it's not: start from the requirements, develop the architecture, job done. It's an iterative process.
What we're trying to do here is sketch the architecture and the interfaces and increment accordingly. You see here, for example, a sketch of the architecture in terms of blocks and components; I've defined the interfaces and then allocated them to the different parts of the architecture.
As I said before, we can use stereotypes to extend the basic elements and capture metadata. In this case we are capturing, for example, the endurance, power draw, mass, and cost of each and every component of the architecture, because we want to do this endurance analysis. But you can imagine capturing things like the function development assurance level at this point in time as metadata, as well as other metadata relevant for your analysis.
Ulrich talked to you about the allocation of system requirements to items. Once you have imported the requirements into our environment, those requirements can be linked, as shown here in the picture, to the different model elements. And you can track the requirement implementation as well as the verification, because let's not forget that at the beginning we wrote the verification cases for the tests.
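A sketch of both steps, defining a stereotype with metadata properties and then tracing a requirement to a component; the profile, property, path, and requirement ID names are placeholders:

```matlab
% Sketch: capture analysis metadata via a profile/stereotype, then trace a
% requirement to a component.
profile = systemcomposer.profile.Profile.createProfile('QuadProfile');
st = addStereotype(profile, 'HWComponent');
addProperty(st, 'Mass_kg', 'Type', 'double', 'DefaultValue', '0');
addProperty(st, 'Power_W', 'Type', 'double', 'DefaultValue', '0');

model = systemcomposer.loadModel('QuadcopterArch');
applyProfile(model, 'QuadProfile');
mot = lookup(model, 'Path', 'QuadcopterArch/MotorAssembly');
applyStereotype(mot, 'QuadProfile.HWComponent');
setProperty(mot, 'QuadProfile.HWComponent.Mass_kg', '0.35');

% Link an imported requirement to the component for the traceability matrix.
rs  = slreq.load('QuadcopterReqs');
req = find(rs, 'Type', 'Requirement', 'Id', 'R1');   % 'R1' is illustrative
slreq.createLink(req, mot);
```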
You can check whether the requirements are linked and verified and, once you have executed the tests for that verification, which we'll talk about in the next section, whether the test associated with each requirement has passed or failed.
Let's have a specific look at what we can do in terms of MATLAB analysis. One thing you can do, if you have an architecture with properties, is run an analysis to understand what endurance, mass, and power draw you have. That is one idea.
But today I want to talk about a different case. Let's assume, as in this architecture, that the objective is to design a quadcopter with 10 minutes of hover capacity. We have some design choices: three motors with low, medium, and high thrust, two GPS radios, and two position determination systems. Those are what we can play with.
And we want to optimize the battery capacity, so we want to calculate what the battery capacity should be. In total, we have 12 unique variant combinations plus one continuous choice, which is the battery.
Whenever we change the battery capacity, we have a cascade effect on weight, on the required motor power, on the altitude, et cetera, so we need to understand how those effects play together.
For that, we have a simple workflow. We create an architecture and extend its metadata: we assign the stereotypes and populate the values we want. We pick a variant combination out of all that we have, and we perform the analysis. This is where we use the MATLAB Optimization Toolbox to minimize the battery capacity while respecting the 10 minutes of hover.
That is for one combination; we iterate over all the combinations and capture the results. In practice, we have designed a MATLAB app that launches this analysis, and the result of the process is a set of curves describing the battery discharge for the different capacities. That's what we can do from an analysis perspective.
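A hedged sketch of that sweep, iterating the 12 combinations and minimizing capacity under the 10-minute hover constraint; the discharge model is a toy surrogate standing in for the real architecture analysis, and all numbers are illustrative:

```matlab
% Sketch: sweep the 12 variant combinations, minimizing battery capacity C
% (in Ah) subject to >= 10 min hover.
motors   = ["low", "medium", "high"];
gpsUnits = ["gpsA", "gpsB"];
posUnits = ["posA", "posB"];
reqHover = 600;                      % [s] 10-minute hover requirement
results  = [];

for m = motors
  for g = gpsUnits
    for p = posUnits
        cost = @(C) C;                                   % minimize capacity
        con  = @(C) deal(reqHover - hoverTime(C, m, g, p), []);
        [Copt, ~, flag] = fmincon(cost, 50, [], [], [], [], 10, 200, con);
        results = [results; table(m, g, p, Copt, flag)]; %#ok<AGROW>
    end
  end
end
disp(results)

function t = hoverTime(C, m, g, p)
    % Toy discharge model: hover time from pack energy over total draw.
    drawW = 2000 + 1000*(m == "high") + 400*(m == "medium") ...
            + 100*(g == "gpsB") + 50*(p == "posB");      % [W]
    t = 3600 * C * 11.1 / (drawW + 2.5*C);               % [s], 11.1 V pack
end
```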
Back again to the mapping to the objectives: we can use System Composer to capture, model, and analyze the architecture, using even the power of Simulink. We can then use the Requirements Toolbox and the links between the requirements and the different elements to keep an eye on the allocation between the requirements and the architecture.
And I think next we want to take your view with a poll.
Yes, please. The same. If you can share this with us, this would be great, and we can see how you're doing.
Great. So then, in the interest of time, I think we are going to switch quickly to the last block, and then let's see if we have time for the questions. So I would ask the presenters to speed up a bit.
Right. Thank you. So the last block is, as you said, verification: as we said before, did we build the system right? The key reason for verification is finally: for the implemented system, does the safety assessment remain relevant? That's the key point. So now we are looking for implementation errors; before, we looked for requirement and design errors in the validation phase.
This also includes the implementation design, of course, but now we are looking for implementation errors. We have a design, and we need to implement it. To do this, we need to define certain so-called verification cases. To develop them, we need data from the design, and we need implementation data as well. I'm putting the emphasis on the word designing: designing the verification cases.
This is the important thing. The questions for designing verification cases are: is the verification case correct and complete, and have we achieved full verification coverage? For this, we also have in the ARP the verification matrix, which helps you answer the question of whether we have achieved full verification coverage.
Then we need to run the verification cases and evaluate the verification results. Have all tests been conducted is one question, and are all the failed test cases explained? If you cannot explain a failed test case, you need to find the reason and define corrective actions against it. This then falls under configuration management and control, which I'm not going to talk about today.
This is the key task for verification. As I said, it's very important to ensure that your verification cases really stimulate the system in a way that you get valid verification results. So you need to place very high emphasis on designing the right verification cases.
And as I said before, we need to check whether all the failed test cases are explained. Explaining failed test cases can only go through the safety assessment, meaning the first safety assessment is done when you've completed your design; you have a safety assessment that says that's the optimum. Here, you have verification cases and verification results telling you where the system is not implemented as per the design intent.
Yes, then--
In this case, you finally have to explain them and provide the justification through the classification process.
Thanks a lot, Ulrich. So then I think let's wrap it up with Marco, and then we can say thank you.
OK, let's have a look, then, at the last part: implementation and verification. We've now got the architecture, after all the studies we've done before. Whenever the architecture is ready, we can add an implementation directly to it: we create a brand new Simulink model or link, for example, an existing one.
The important thing is that the interfaces are pushed down to the implementation level, so whatever you have defined at system level is pushed down, and you can refine it later on when you enter the software process. There is also a way to define a physical implementation, if you want to do so.
As for the implementation verification, here we have three ways of doing things. We can author test cases and manage them in the framework of Simulink Test, and execute them in what we call MIL and SIL, and eventually PIL if you are going into the DO-178 process: Model-in-the-Loop, Software-in-the-Loop, and Processor-in-the-Loop.
And of course, you can then use hardware-in-the-loop simulation, fully integrated with Simulink Test, to verify that your system actually works in the context of its environment.
Here is an example where we are verifying a system at the SIL level, checking that the system behaves as expected using the model we've been using before.
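A minimal sketch of driving such a run from the Simulink Test API, with a hypothetical test-file name:

```matlab
% Sketch: execute an authored test file via the Simulink Test API and pull
% pass/fail counts for the verification matrix.
sltest.testmanager.load('quad_SIL_tests.mldatx');
resultSet = sltest.testmanager.run;   % runs all loaded test files (MIL/SIL)
fprintf('Passed: %d, Failed: %d\n', resultSet.NumPassed, resultSet.NumFailed);
```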
Throughout the webinar, we have stressed the importance of generating evidence, matrices, and things like that. Clearly, we can generate reports out of everything. In this case I'm just showing the screenshots where we generate a test specification report and a test result report.
We can generate architecture reports; many of you know we can generate coverage reports, et cetera. In our toolchain we have the complete landscape that allows us to generate all the reports we need to provide evidence against the standard and to demonstrate that we've done the right job.
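A one-line sketch of producing that result report from the ResultSet of the run above; the output file name is a placeholder:

```matlab
% Sketch: generate the test result report as evidence.
sltest.testmanager.report(resultSet, 'quad_verification_report.pdf', ...
    'Author', 'Systems Team', 'IncludeErrorMessages', true);
```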
Lastly, again on this mapping of the objectives: we've seen how Simulink, Simulink Design Verifier, and System Composer can be used to manage the implementation, and Simulink Test to execute the tests. We didn't touch much on the last point, but we could use fault injection, FMEA, and FHA analyses to verify safety requirements with the new toolbox I briefly mentioned, Simulink Fault Analyzer. So this also plays a role for the safety requirements, as we mentioned before.
I would like to take the opportunity, again, to thank the presenters. So thanks a lot for your presentation.
Thank you, everyone, for attending. Thanks, Ulrich.
Thanks a lot.
Thank you. Thank you.
Bye.
And thank you, everyone, for coming. Bye-bye.
Bye-bye.