Digital Engineering for System of Systems - MATLAB & Simulink

    Digital Engineering for System of Systems

    Overview

    The added complexity in new aerospace and defence programs challenges traditional ways of design and collaboration. One of the main drivers of this complexity increase is the amount of data generated, not only during operation, but also at design time.

    Being able to guarantee digital continuity across the different design phases is crucial for coping with this data complexity while enabling collaboration among teams and companies. A good way to enable digital continuity is to work and communicate through models, using different fidelity levels where appropriate.

    Having a unified digital environment becomes crucial to deliver high-quality systems quickly and cost-effectively.

    Have you ever wondered what you could accomplish in a digital environment that offers a simulated, representative setting in which to:

    • define operational and mission-oriented requirements
    • define and allocate resources to achieve different missions
    • define and manage relationships with supplier organizations
    • anticipate integration challenges focusing on the definition of interfaces
    • ensure continuous evaluation of compliance with the operational requirements throughout the development process

    Highlights

    In this presentation, you will learn how MathWorks solutions can support your digital transformation and enable various ways of collaboration:

    • Identify mission objectives and the required assets
    • Anticipate objectives, contributions and performance needed
    • Validate assumptions and demonstrate early through regular MVPs
    • Accelerate the readiness of your requirements, models and testing environment

    About the Presenter

    Alex Shin is a Principal Application Engineer at MathWorks. He specialises in helping customers in the areas of simulation, verification, and automatic code generation, primarily for commercial/production projects. His recent work includes defining simulation and verification processes and implementing Model-Based Design tools in large organisations. Mr. Shin received his Bachelor’s degree in mechatronics engineering from the University of New South Wales.

    Recorded: 27 Oct 2022

    In this talk, we will provide an overview of how model-based design and model-based systems engineering offer the digital continuity that is crucial for the design of systems of systems, from missions down to individual systems and components. Welcome to the talk. My name is Alex Shin, a principal application engineer at MathWorks.

    Whether you are working on a land system, a space system, or an air system, civil or military, manned or unmanned, next-generation aerospace and defense programs will bring you some common challenges. New programs go beyond single-system design to focus on systems of systems that require a higher level of collaboration among teams and partner companies.

    The creation of common working environments that facilitate artifact exchange and tool compatibility is crucial for the success of these programs. In these environments, the possibility of providing early proofs of concept and enabling fast iteration saves an enormous amount of time and budget. Other technical challenges that are gaining importance are the collaboration between humans and machines, covering different levels of autonomy, and the increasing amount of data used, which is turning every design and design methodology into a data-centric challenge.

    And here, security is also crucial. All of this drives an increase in software and hardware content to manage complexity in more efficient ways. Today, we'll focus on the upper layers of the design lifecycle, including support for mission definition, model-based systems engineering, and how simulation can enrich your high-level analysis.

    When dealing with a system of systems, it is crucial to understand and differentiate the objectives of the different design levels and their relations among the different stakeholders, both per level and across levels in the development lifecycle. This can be seen as a pyramid: the mission engineering pyramid, which goes from the mission down to component design.

    When we look at this pyramid from the top, we can see the different development lifecycles at the different levels. Traditional methodologies follow very much a spiral process with little room for iteration. Model-based systems engineering and model-based design enable a digital continuity based on model exchange and full traceability of artifacts, as well as automated report generation and fast iterative loops.

    This way, it is possible to establish and manage a connection among different levels and perspectives and to identify conflicts and assess the behavior of the system of systems used in a variety of scenarios. For that, digital continuity becomes the enabler of the collaboration in several dimensions. This is what we will address today.

    Now, if we focus on the different stages of the design of a system, the purpose of enabling connectivity among the different phases and creating a digital thread is mainly to avoid working in silos. It is crucial that the exchange mechanisms across phases are agile and allow several iterative loops, as well as telling an accurate story of your design choices that can be traced to requirements and use cases. Working with models will enable this digital thread at different levels.

    Now, let's see how this thread can enable concrete advantages over the development cycle. After this, we will also look at an example of how this can be applied in practice. All right, here are some important objectives of digital engineering and how programs can manage the invisible before it becomes visible.

    It is important to detect early in the development cycle any functional issues or conflicting aspects of the mission, using simulation for functional assessment and performance allocation; to set up the engineering framework to enable high-quality production and a clear contractual decomposition through requirements-driven development; and to enable multi-dimensional collaboration that reduces the risk of delays in delivery and acceptance through incremental and regular integration capabilities.

    We'd like to start from this traditional engineering workflow, processing and refining the requirements to define the composition, architecture, scope, and dependencies between the items that constitute the considered systems. The traditional engineering workflow is usually based on textual requirements. Teams have established working groups for qualification reviews to reduce the textual ambiguities within the requirements at each level of engineering or mission design.

    This, of course, is time-consuming and inefficient. Many experts need to agree on what is required, requested, feasible, and ultimately performant for the mission objectives. But at that stage, no resulting assessment is visible. Everything is on paper or statically defined.

    Then the time comes to evaluate whether the implementations match the objectives and the requirements that have been defined and clarified. Often, the systems of systems are integrated together to evaluate compliance with the mission scenarios, based on the mission requirements, from the bottom to the top, assessing that each system is compliant, communicates with the others, and matches the mission purpose to do the right thing.

    This is where the major issues become visible as projects start to face integration challenges, and the mission scenarios are only partially met. And this is also the time where the teams realize that the textual requirements are either incomplete or inconsistent.

    It may also be the case that some requirements have ambiguity, resulting in different interpretations by the engineers involved. If we could remove the boundaries between the teams that are usually associated with each level of definition, we would reduce the risks and miscommunication.

    Models, combined with simulation, are the enabler of communication that makes all that is invisible in the textual requirements and static descriptions visible to the stakeholders and the teams from the very early phases of the mission design. Indeed, models and their capability to be executed in a virtual environment are a keystone and a powerful capability for clarification between the teams at each and every level of the engineering development lifecycle.

    Having models at each step of the way is an enabler for collaboration and communication. For example, with the stakeholders, models and their execution capability allow the actors of the phase to demonstrate, confirm, or validate that the system objectives are achieved. Domain experts can demonstrate the functional purpose and the precision ranges, and the system engineers can confirm or evaluate assumptions, run functional validation tests, and assess coverage. All these activities can be regularly conducted with the customers or even with the operators, to assess whether the system will match expectations or still needs some refinement.

    At the management level, when looking at the system activities to pave the way for an efficient implementation, discussions with the architects become a real must-have. The system engineers can confirm some of the design choices and allocations. Domain experts will be kept in the loop to prevent any side effects and assess the performance in terms of precision or reliability, using simulation or even analysis capabilities like Monte Carlo simulation or trade-off analysis. Models are also indicators of progress, for the teams to know how they are progressing with regard to the plan, which is of interest for the program managers.

    Now, during the implementation phases, teams can become larger and not necessarily have all the domain knowledge. This raises some functional and technical challenges. However, models can be very beneficial to the teams at this stage.

    The system engineers provide an executable functional reference for the software engineers and architects who need to design the most efficient architecture and implementation. Software engineers can highlight or clarify the software or technical constraints they could encounter, particularly when those may have a system-level impact. Models also make it possible to obtain appropriate indicators of progress, so the teams know how they are progressing and what the coverage of the implementations is.

    There is one last audience that we would like to highlight: the partners. Models and their ability to be executed in a simulated environment also give access to a much larger virtual environment. Each model acts as a digital twin within the system of systems, and this can expand the possibilities of integration in the virtual world.

    This is how models are the digital keystone to a collaborative and more efficient development. This is even more beneficial when the teams follow an agile methodology and need to perform regular demonstrations, or need to deliver a valuable intermediate version for contractual reasons or even for early integration.

    So far, we've discussed some of the challenges and how using models can benefit the teams' development workflow and the engineers and organizations involved. Now let's look at an example of the introduced workflow. In today's talk, we will use a high-level example to illustrate the workflow. We would like to look at some possible steps and how they fit into the overall design lifecycle at each level. Of course, there will be many iterations and intermediate steps in a real engineering process that we will not mention today.

    Let's imagine that we have a search and rescue mission where the main goal is identifying and rescuing certain targets in a given area. In this case, we assume all targets are on the ground. Then we'll have to define different conditions and assumptions. This is, for instance, being able to identify the target, given certain features.

    We'll have certain time constraints for the mission to be successful, or we might need both air and ground support to be able to cover a certain amount of terrain, depending on time limitations. Then we can start capturing mission requirements: the area to be covered, time constraints, the number of assets dedicated to the mission, the need to overcome certain types of obstacles due to terrain characteristics, which features of the target need to be known so that it can be identified, where the decision making is done, whether it is centralized, and so on.
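
    As a rough illustration, requirements like these could be captured programmatically with Requirements Toolbox; the requirement set name, IDs, and requirement texts below are illustrative assumptions, not the actual mission set from the talk.

        % Minimal sketch: create a requirement set and add mission-level requirements
        rs = slreq.new('SearchAndRescueMissionReqs');

        reqArea = add(rs, 'Id', 'MSN-001', ...
            'Summary', 'Coverage area', ...
            'Description', 'The mission shall cover the designated search area of 25 km^2.');

        add(rs, 'Id', 'MSN-002', ...
            'Summary', 'Mission duration', ...
            'Description', 'Targets shall be located within 2 hours of mission start.');

        % Refine MSN-001 with a child requirement on the aerial assets
        add(reqArea, 'Id', 'MSN-001.1', ...
            'Summary', 'Aerial coverage', ...
            'Description', 'Aerial assets shall cover at least 80% of the search area.');

        save(rs);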

    Based on that, we can start capturing information about the environments. We can create 3D environments and process them. In this case, we can see an artificial 3D environment where it is possible to add trees, buildings, and other parts of the terrain.
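
    One way to sketch such an environment, assuming UAV Toolbox is available, is with a uavScenario; the reference location, geometry, and colors below are arbitrary placeholders.

        % Minimal sketch of a synthetic 3D environment (all dimensions are placeholders)
        scene = uavScenario('ReferenceLocation', [46.0 8.0 0], 'UpdateRate', 10);

        % A building footprint extruded from 0 to 15 m, and a simple "tree" cylinder
        addMesh(scene, 'polygon', {[0 0; 30 0; 30 20; 0 20], [0 15]}, [0.5 0.5 0.5]);
        addMesh(scene, 'cylinder', {[40 10 2], [0 8]}, [0 0.6 0]);

        show3D(scene);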

    You can also process these environments and generate 2D rasterized maps, modify them as needed, and so on, to then use them for the definition of your mission as well as for later simulation. These scenarios can then be used to define your requirements, as I'll mention in the next step. We can also define use cases based on the requirements and the information from the previous scenarios.

    We can then define use cases and see how these use cases contribute to the different missions. You can define goals, preconditions, postconditions, and information about what needs to be done in each scenario, and start defining your use cases. Then you can get a traceability diagram for your mission, to observe how the different use cases contribute to the overall missions and how one use case extends others.

    Imagine you focus on one use case. You can then start identifying which assets you'll need to fulfill your mission. In this case, for instance, we have a main aircraft, a land vehicle, and several drones. And since we already had the graph linking the use case to the overall mission, now we can see how the different assets contribute to this use case.
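
    To make this concrete, a minimal System Composer sketch might capture those assets as components with a simple interface between them; the model, component, and port names are invented for illustration and are not the architecture shown in the talk.

        % Minimal sketch: capture the assets of one use case as architecture components
        model = systemcomposer.createModel('SearchAndRescueSoS');
        arch  = model.Architecture;

        assets = addComponent(arch, {'MainAircraft', 'GroundVehicle', 'SurveillanceDrone'});

        % A simple interface: the drone reports detected targets to the main aircraft
        srcPort = addPort(assets(3).Architecture, 'targetReport', 'out');
        dstPort = addPort(assets(1).Architecture, 'targetReport', 'in');
        connect(srcPort, dstPort);

        save(model);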

    Then we can also start defining functions, and we can start allocating these functions to our assets. You can still see that in the relationship map we saw before, where you can see the different functions and assets. But you can also see it in the allocation matrix, to define the different configurations, as we see now.

    By now, you know what the mission needs to do. You have divided that into different use cases. You know the assets required to fulfill them and what functions these assets will perform. However, it is important to validate these assumptions, refine these requirements, and conduct analysis. And for that, there are different types of analysis you can do at this level.

    So a useful thing to do is merging the information you already have about functions, assets, and scenarios to start refining mission requirements. This can be the number of assets required and their distribution between land and air, and so on. The idea will also be to define requirements for the different individual systems, like, for instance, the energy autonomy of electric drones used for surveillance, the latency needed for communication links, the communication bandwidth to share images, and so on.

    This can be done by, for instance, performing some mission-planning examples and some architecture trade-off analysis. In order to perform an architecture analysis, it is possible to capture your architecture and attach attributes in the form of stereotypes applied to your blocks, interfaces, and so on. And using the available APIs, it is possible to add different types of analysis using MATLAB functions; the result of the analysis in this case, for example, being that you need another drone to be able to cover the whole area that you are studying.
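
    As a simple illustration of that kind of check, the sketch below compares required versus available surveillance capacity; all numbers are invented, and in practice the per-asset values would be read from stereotype properties (for example with getProperty) rather than hard-coded.

        % Hypothetical coverage analysis: is the allocated set of drones sufficient?
        searchArea_km2   = 25;    % from the mission requirement (illustrative)
        missionTime_h    = 2;     % allowed mission duration
        coveragePerDrone = 4.5;   % km^2 per hour per drone, e.g. from a stereotype property
        numDrones        = 2;     % drones currently allocated in the architecture

        requiredRate  = searchArea_km2 / missionTime_h;   % km^2/h needed
        availableRate = numDrones * coveragePerDrone;

        if availableRate < requiredRate
            needed = ceil(requiredRate / coveragePerDrone);
            fprintf('Coverage shortfall: %d drones needed, %d allocated.\n', needed, numDrones);
        else
            fprintf('Allocated drones are sufficient for the required coverage.\n');
        end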

    Now, we can use some mission-planning examples to support the architecture and requirements refinement. At this stage, you don't need to have high-fidelity models of your systems; to understand the contracts among them, you can use state-of-the-art data for feasibility studies. Of course, if you or your partners already have behavioral models, it is very easy to use simulation to enhance the analysis at this level. This way, it is possible to identify potential conflicts or issues in requirements among systems early and evaluate the feasibility of your overall mission.

    For instance, let's say you need to bound the number of aerial assets you would use for surveillance, as we've mentioned before. Then you will be able to use the scenario information created earlier and some path-planning algorithms. In this case, the example uses a generic algorithm to identify typical routes, distances, and time responses. Of course, you can use higher-fidelity models later to refine and validate this analysis. This is actually one of the great advantages of using models in a highly connected and traceable environment: it is possible to run hundreds of scenarios.
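
    A sketch of that kind of route and timing estimate is shown below, using a simple grid-based planner from Navigation Toolbox as a stand-in for whatever algorithm a program actually uses; the map size, obstacle, waypoints, and drone speed are all placeholder values.

        % Minimal route/time estimate on a rasterized map (all values are placeholders)
        map = binaryOccupancyMap(40, 40, 1);                  % 40 x 40 m grid, 1 cell per metre
        setOccupancy(map, [10 10; 10 11; 11 10; 11 11], 1);   % a small obstacle

        planner = plannerAStarGrid(map);
        route   = plan(planner, [2 2], [38 35]);              % grid cells [row col]

        distance_m  = sum(vecnorm(diff(route), 2, 2));        % path length (1 m cells)
        cruiseSpeed = 5;                                       % m/s, assumed drone speed
        fprintf('Route length: %.0f m, estimated flight time: %.0f s\n', ...
            distance_m, distance_m / cruiseSpeed);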

    So by now, you know what the mission needs to do. You have divided that into different use cases. You know the assets required to fulfill them and the functions these assets need to perform. You've also validated these requirements by running use cases, and you have identified requirements for your individual assets.

    Then you can continue capturing requirements for your individual systems. These requirements can be linked to your system architecture models, as well as to your behavioral models. It is important to highlight that, apart from authoring requirements in text, it is also possible to model your requirements. This way, it is possible to formalize them so that they are mathematically rigorous, so that they can be used as assertions in simulation, as shown in this example, or even for formal property proving, using behavioral models.

    One way of formalizing requirements is using a tabular format, as we are showing in this example. In this way, you can specify preconditions, postconditions, comments, and assumptions. And this way, it is possible to look for inconsistencies such as data type mismatches, conflicting conditions, and so on. You can, of course, add new requirements, work on your assumptions, and, for instance, link them to the textual version of the requirements that you are working on. So these are all possibilities for your requirements analysis at this level.

    With these requirements, you can start capturing the architectural model of your individual systems. Once you have your system architecture done, functions mapped, requirements clear, you can continue running different types of analysis. This is similar to what we already did at a system of systems level, but focusing on one individual system.

    And in this case, just to see an example, we are focusing on one of the drones we saw before, to see the types of propulsion it could have. In this case, we are going to see a performance analysis of its battery. You can add different variants to your models and compare results based on physical characteristics. You can trade off different architectures and configurations. And as you can see, in this case, we have the battery discharging curves under different configurations.
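
    A simplified sketch of such a trade study is shown below, using a crude linear discharge model and invented pack parameters rather than real drone data.

        % Compare two illustrative battery variants under a constant hover load
        capacity_Ah  = [5.2 8.0];            % variant A, variant B
        packMass_kg  = [0.55 0.85];
        hoverPower_W = 180 + 60*packMass_kg; % crude assumption: heavier pack costs more power
        voltage_V    = 14.8;

        t_h = linspace(0, 1.5, 200);         % time in hours
        figure; hold on;
        for k = 1:2
            current_A = hoverPower_W(k) / voltage_V;
            soc = max(0, 1 - (current_A * t_h) / capacity_Ah(k));   % linear discharge
            plot(t_h*60, soc*100, 'DisplayName', sprintf('Variant %c', 'A' + k - 1));
        end
        xlabel('Time (min)'); ylabel('State of charge (%)');
        legend show; grid on;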

    Based on that, we can create reports with the different results captured from the different variants to provide evidence for the design choices. This will prove very helpful for other programs and for decision justification, and it becomes an important artifact. And then we can create a report that includes snapshots of all components in the architecture, the definition of interfaces, and the allocations.
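
    A minimal sketch of generating such a report with MATLAB Report Generator might look like this; the titles and content are placeholders, and the Figure reporter simply snapshots the current figure (for example, the discharge plot above).

        import mlreportgen.report.*
        import mlreportgen.dom.*

        rpt = Report('DronePropulsionTradeStudy', 'pdf');
        add(rpt, TitlePage('Title', 'Drone Propulsion Trade Study'));
        add(rpt, TableOfContents);

        ch = Chapter('Title', 'Battery Discharge Comparison');
        add(ch, Paragraph('Discharge curves for variants A and B under hover load.'));
        add(ch, Figure());                    % snapshot of the current figure
        add(rpt, ch);

        close(rpt);
        rptview(rpt);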

    While having high-fidelity models of your assets is not necessary for the initial trade-off analysis, bringing models into the picture can, of course, be very beneficial. The availability of models of the different parts of the system depends a lot on the phase of the project we are at. However, it is important to mention that the use of legacy models, even if they are not final, or of simple models, like state machines, that define behavior and contracts among different parts, can make a difference for the analysis that you are performing.

    It is also important to highlight that it is possible to share models in a protected way, so that you can protect your IP in case models are exchanged among different companies or organizations. This protection can be applied to different capabilities. For example, you can share models that can be visualized but not simulated, or models that can be simulated but from which you cannot generate code, and so on.
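
    One way this can look in practice, assuming a Simulink model named DroneFlightControl (an invented name), is the sketch below: Simulink.ModelReference.protect creates a protected model (.slxp) that a partner can reference and simulate without seeing its contents; the option values shown are one possible choice.

        % Minimal sketch: share a simulatable but unreadable version of a model
        Simulink.ModelReference.protect('DroneFlightControl', ...
            'Mode',   'Accelerator', ...   % partner can simulate the protected model
            'Report', true);               % document what is included in the package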

    But that was a quick example workflow demonstration. A question we might have now is, how can we continuously evaluate the system under design? Modeling and simulation make the engineering activities easier as the complexity grows. In addition, many companies are increasingly adopting a continuous integration approach to evaluate the overall design and development activities and to further improve overall development efficiency.

    OK, so we went through the different phases and levels of definition required to define the systems involved in a mission. We shared how simulation and different analyses can be executed and add value to the system development. Now let's look further at the value gained in the validation and integration process, to see how automation can support those activities, using the concept of DevOps for the system of systems. Our intention is to include the system of systems in the continuous integration process by expanding the continuity between development and operational tests.

    Let's dig into the system level and take the perspective of the team in charge of the UAV design. Here we are at the UAV design level, needing to validate the functional requirements and the interfaces of the system. On the left, you can see the phases of development using the Model-Based Design process, including the different tools with which MATLAB and Simulink can integrate to manage version management or the continuous integration process.
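
    As an illustration of the kind of step a CI pipeline might call, the sketch below runs a MATLAB/Simulink test suite and publishes an HTML report; the test file name is an invented placeholder.

        import matlab.unittest.TestRunner
        import matlab.unittest.plugins.TestReportPlugin

        suite  = testsuite('UAVControllerTests');        % placeholder test file or folder
        runner = TestRunner.withTextOutput;
        runner.addPlugin(TestReportPlugin.producingHTML('test-report'));

        results = runner.run(suite);
        assertSuccess(results);                          % fail the CI job on any test failure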

    The validation of the UAV is automated and continuously evaluated, based on the model, but also based on what would be integrated directly on the processor and executed on the processor. What if we could keep our system under test at this level, but validate it from a higher level and check it with other systems from a more operational scenario perspective? We'll take a new hypothesis, in this case, that the test benches are remotely accessible.

    Then you can evaluate your UAV design at a higher level and integrate the control part of the UAV with other parts of the design. That would enable you to perform software-to-software integration, but also integration of different parts of the software and hardware with other devices, or even with already available components.

    At the system level, going up a level, the environment would probably be partially virtualized with environment models and partially made of real equipment components, like communication equipment or any dedicated devices that would represent the redundancy of the system in real conditions. Note that the virtualized part of the system environment can be run either locally, on the bench, on a computer, or in a container, or even remotely.

    As long as the representation of your environment is sufficient for system validation, that is totally acceptable and feasible. Then you can run the tests that you have at the system level and still evaluate your UAV design from a larger perspective. Similarly, you can expand this up to the system-of-systems level or even to the mission scenarios, if that is part of your requirements.

    Systems are defined following an incremental development process, so having a way to perform incremental evaluation, incremental integration, and even demonstration can become crucial and really valuable for the system development. This makes it possible to demonstrate to any other collaborators or partners where you are in the development.

    But more importantly, you can check with them whether the system is still on track functionally and regarding the interfaces, so that in the end the mission can be achieved. And we like to think of the operations side, the right-hand side of that cycle, as being not only the end user or operator side, but also all the other systems that would interact with the system under development.

    OK, so in conclusion, using models for digital engineering of systems of systems can significantly reduce your development and communication efforts and also reduce risks. Models can be used to support operational engineering. Architecture designs can be evaluated early in the development cycle, as they are executable.

    Models have been used by many projects to facilitate communications for collaborative work. Models will also provide digital continuity through the engineering workflow. To learn more, please visit the web pages on Model-based system engineering, as well as Model-based design, at MathWorks. Thank you very much for your attention.