Validating Financial Models in a Complex Multi-Language Environment - MATLAB
    Video length is 24:16

    Model validators are expected to deal with an increasingly complex mix of models from a range of divisions, in a whole array of tooling, with code and documentation of varying quality.

    See how the MathWorks® model validation solution provides a single platform from which to manage the model validation workflow. Regardless of implementation language, the validator can interact with models and data directly from the platform—pulling in data sets, executing models locally or remotely via microservices, running sophisticated analytics, and finally, collating results and automatically updating documentation.

    Published: 9 May 2022

    Hello, and welcome to this webinar on validating financial models in a complex multi-language environment. I'm Lawrence, an application engineer at MathWorks. Let's imagine that you're a model validator in a large financial institution and you receive models from various teams in various programming languages. As a model validator, you want to verify a model's accuracy, robustness, and fairness.

    Some of the challenges that you might face as a validator are setting up the environments and getting the models to run. In some cases, you might not even have access to the models' source, just the documentation, and still have to run the models. Wouldn't it be great if you had a seamless experience validating models in multiple programming languages? That's exactly what we're going to see here today.

    Today, we'll see the MathWorks Model Validation Environment, which is a unified environment that you access via a web browser. From the Welcome page, model validators can see and access the models that they'd like to validate or have already validated. These models could be in any programming language-- let's say C, C++, Java, Python, et cetera. Associated with each of these models, there are two buttons.

    One is History, which keeps a trace of every action that has been conducted on that model in the past. The other button is Review, which you click to perform the review. Now, before submitting a review, you see this form to the right, which you fill out with your general comments, materiality, and your observations on model weaknesses, mitigating factors, and risk rating.

    Once you submit the review, you now have a new data point in your review environment on what you have done on this particular model. Before we go further, let's have a look at different stages of a model. So models go through the development phase followed by validation. Once they are validated, these models are then tested. Once they are tested, they are put into production or execution.

    And once they are in production, they are constantly monitored. Regulations on model risk stipulate that there be governance and control mechanisms. The MathWorks Model Risk Manager has different modules that correspond to each phase of the model lifecycle. Today, our focus is going to be just on validation.

    So here's our agenda for today. We'll start by having a quick look at the unified environment for validating models. Then, we'll explore a simple model validation scenario. From there, we'll look at how to validate a model by benchmarking or by building challenger models. And finally, we'll look at how to adapt the MathWorks Model Validation Environment to your own business activities and complexity.

    So let's start with the first one, which is the unified environment for validating models in different programming languages. Before we head to the environment, let's have a quick look at some of the challenges. So as a model validator working in a large financial institution, you have different teams working in a variety of different programming languages.

    Now, the way these model developers work can vary, and their environments can vary. So you need to be able to set these up and also keep track of the models you have validated, along with the dates, the versions, and your findings, in a convenient way. This is what we'll be seeing in the model validation environment.

    We'll access the MathWorks Model Validation Environment via a web page. We'll run models in different programming languages. And we'll document our findings. Then, we'll validate the model by clicking this button, Validate & Notify, which will send an email to the necessary stakeholders. Right, so let's head over to the MathWorks Model Validation Environment.

    For that, let me fire up the browser and go to the environment. So here's the MathWorks Model Validation Environment, where you can already see a couple of models that have undergone validation or are going to be validated. So let's Request Review here.

    Now, you can see that this is the model that we're going to review. There are two buttons here. History gives me a look at the reviews and activities that have been conducted on this particular model; over here, I can see the comments and thoughts left in the last validation. So let's head back and start with Review.

    So let's start with the first exercise. What you have over here are the models, and you've got the Submit Review button. You access and run the models from here, and once you're happy with the results, you go to the Submit button and submit your review. Before we get into the exercise, let's have a look at the objective of this validation exercise.

    So what we're going to be doing is price an up-and-out barrier option using different libraries. Let's say we've got a model in Python. We'll be checking that price against some internal libraries in C and perhaps something in MATLAB. And then there's a particular scenario where we'll be accessing data provided by a data vendor using a REST API. So let's set up these inputs and set our parameters.

    Now, let's get the price for this option over here. So that's the price there. What we have over here is the model that has been submitted, and we've got the Submit button. So let's start by accessing the model. The objective of this exercise is to price an up-and-out barrier option from different programming languages.
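
    The demo runs internal libraries behind the buttons, but the pricer being cross-checked is simple to reason about. Below is a minimal, self-contained Monte Carlo sketch of an up-and-out barrier call in Python; the contract parameters (spot 100, strike 100, barrier 130) are illustrative placeholders, not the webinar's actual inputs.

```python
import numpy as np

def price_up_and_out_call(s0, k, barrier, r, sigma, t,
                          n_paths=100_000, n_steps=252, seed=0):
    """Monte Carlo price of an up-and-out barrier call.

    The payoff is that of a vanilla call, but the option is knocked out
    (worth zero) if the simulated path ever touches the barrier.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    # Simulate log-price increments under the risk-neutral measure.
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(increments, axis=1))
    knocked_out = (paths >= barrier).any(axis=1)
    payoff = np.where(knocked_out, 0.0, np.maximum(paths[:, -1] - k, 0.0))
    return np.exp(-r * t) * payoff.mean()

price = price_up_and_out_call(s0=100, k=100, barrier=130, r=0.02, sigma=0.2, t=1.0)
```

    Each language's implementation would be run with the same inputs, and the validator checks that the resulting prices agree within a stated tolerance.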

    So let's start with that and set the parameters. Let's get the price here in Python. Let's compare it with the price in C and C++, and the same thing in MATLAB. And here, we have a special case where we are cross-checking the pricing from an external source using a REST API. So imagine this scenario: you have a subscription with a data provider where you can send in the characteristics of your barrier option.

    And in response, you get the price of the barrier option and the Greeks associated with it. So let's get that done as well. Now, I add some extra information, which is my validator ID and validator notes over here. Let's export that.
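
    The vendor cross-check described above can be sketched with the Python standard library alone. The endpoint URL and JSON field names below are hypothetical placeholders; a real data vendor's API will define its own schema and authentication.

```python
import json
import urllib.request

# Hypothetical vendor endpoint -- a real subscription would supply its own URL.
VENDOR_URL = "https://api.example-vendor.com/v1/barrier-option/price"

def fetch_vendor_price(contract, url=VENDOR_URL):
    """POST the barrier option's characteristics to the data vendor and
    return the parsed response (price plus Greeks, per the vendor's schema)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(contract).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def within_tolerance(internal_price, vendor_price, rel_tol=0.05):
    """Validator's consistency check: internal and vendor prices
    should agree within a relative tolerance (5% by default)."""
    return abs(internal_price - vendor_price) <= rel_tol * abs(vendor_price)
```

    The tolerance check is the part the validator actually records in the review: a pass/fail against the materiality threshold discussed later.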

    So let's view the PDF that we just created. You can see that you've got a neatly-formatted PDF that you can click through and get to that associated section. Note that this PDF could also be configured to your organizational format, as well.

    Now that we've done that, we can go to Review, submit a review, and approve this. The prices seem to be consistent within a range of 5%, with a materiality of USD 1 million.

    Let's leave that blank for the moment. Let's put that High, Submit Review. Now that we've completed the review, the Review button has disappeared. And if I go into the History, I've got a new review with the information that we have added in here.

    So let's head back to our presentation. So what did we see? We saw a unified validation experience via a web browser where the validator was able to access and run models in different programming languages and also keep a trace of what activities had been done on that model in the past.

    Now, for the second part, let's explore a simple model validation scenario. Now, let's go into the challenges. So prior to validating a model, as a model validator, you need to ensure access to the repository along with the data. Now, once you have done that, you need to test and document all trials and capture findings in a way that's easily accessible to everyone.

    So we'll see how to do that using the review environment. The example we're going to use is a calibration exercise that has been done by the model developers. The exercise is that the model developers have calibrated against this market data, for these strikes and these expiries, using these option prices. And they have calibrated a Heston model using this data.

    As a validator, we are going to see if the parameters found by the model developers can roughly reproduce the market prices. So you can see the market prices over here, and the prices in red are the prices predicted by the model. Let's head over to the review environment.

    So let's Request Review, then Review. This time, we are going to be looking at calibration.

    So if I want a quick look at the repository, I can click Repository, which takes me to where the model repository is stored. Here, I can see the model definition. Now, let's go ahead and read the Excel file that was used by the model developers.

    So I can see over here, for these strikes and these expiries, these are the option prices. I could then decide either to use a subset-- let's say just the first two expiries-- or the entire set, which I can then feed into my calibration exercise. So let's get the parameters for that. I feed that in, Get Params, and we get a set of parameters.

    We can use these parameters to fit onto a model and get the prices from the model. So what you see over here is that this is the data from the market, and this is the data from the model. Now, let's go ahead and plot these two to see how they match up.

    So as you can see over here, you've got the market data in here. And you can see the model data, which seems to superimpose nicely on the market data. So let's go ahead and put our findings in here. 1, Prices seem to match up.
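
    Eyeballing the superimposed curves is a good first check; a numeric version of the same comparison is easy to add. The sketch below, with made-up prices rather than the webinar's data, computes the worst relative mismatch between market and model prices across the strike/expiry grid.

```python
import numpy as np

def max_relative_error(market_prices, model_prices):
    """Largest relative gap between quoted and model-implied option prices."""
    market = np.asarray(market_prices, dtype=float)
    model = np.asarray(model_prices, dtype=float)
    return float(np.max(np.abs(model - market) / market))

# Illustrative numbers: one price per (strike, expiry) node, from the
# market quotes and from the calibrated Heston model respectively.
market = [12.1, 8.4, 5.6, 3.5]
model = [12.0, 8.5, 5.5, 3.6]
err = max_relative_error(market, model)
```

    A single worst-case number like this slots naturally into the review form's findings, alongside the plot.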

    Let's take note of the optimization algorithm that was used by the model developers. If I go here and look at Calibrate, I can see which library and which optimization algorithm were used. So let's copy that: "Optimizer used is," that one.

    At this point, I could save this or export it to a PDF document. And if I export it, I could quickly get a PDF document that I could then download, print, or email to any colleague or anyone interested in this document.

    So now that you've seen the results, let's go ahead and review this. Let's submit this. Approved, yes. Prices seem to match up. Materiality, let's put USD 1 million from earlier. Let's leave this blank for the moment. High, Submit Review.

    So now that we have completed the review, we can see that we no longer have the Review button. But if I go to the History, it has captured a new review in here. So what did we see? We saw that as a reviewer, you don't have to set up a repository or the environment. All you need is the link to the model's repository.

    And from there, we saw how easy it was to test and document your findings on the go. At the end of it, we saw how easy it was to create a PDF document or Word document and email it to all stakeholders. Now, let's move to the next section where we'll be looking at validating a model by benchmarking or by building challenger models.

    Now, when we talk about validating by building a challenger model, there are a couple of options for a model validator. You could validate the soundness of assumptions, where you'd ask: is a particular assumption good for this model? Or you could ask: is the model that has been implemented correct?

    Or you could do a full-blown challenge, which the Federal Reserve calls an Effective Challenge, where you challenge every aspect of the model. So you'd be asking: Is the data correct? Is the correct data source used? Is the data manipulation correct? How about the theory? Are the assumptions behind the model correct? Is the model addressing the complexity of the problem? Is the implementation correct?

    So you could look at all aspects of the model prior to validating it. Let's look at the first one, the soundness of assumptions. Let's say that you're pricing options using Monte Carlo simulations.

    You run through scenarios over time to calculate option prices. As a validator, you can verify that the prices match up, as we saw previously, or that the implied volatility is not far off from the implied volatility observed in the market.

    Now, in each of these plots, each vertex corresponds to a financial instrument, which you would need to price individually using Monte Carlo simulations. These calculations can take a lot of time, and MathWorks offers tools for running them in parallel, which you can use to shorten the calculation time.
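
    The webinar uses MathWorks' parallel tools for this; the same idea, expressed in Python with only the standard library, looks like the sketch below. Each instrument on the surface is an independent pricing task, so the tasks can be farmed out to a worker pool. Threads keep the sketch simple; for CPU-bound pricing in Python you would normally switch to a process pool, and all contract parameters here are illustrative.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def mc_vanilla_call(s0, k, r, sigma, t, n_paths=20_000, seed=0):
    """Monte Carlo price of one instrument (one vertex of the surface)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Terminal price under risk-neutral geometric Brownian motion.
        st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n_paths

strikes = [90.0, 100.0, 110.0]
# Independent tasks: one pricing job per strike, run concurrently.
with ThreadPoolExecutor() as pool:
    prices = list(pool.map(
        lambda job: mc_vanilla_call(100.0, job[1], 0.02, 0.2, 1.0, seed=job[0]),
        enumerate(strikes),
    ))
```

    Because the tasks share nothing, scaling out to more strikes and expiries is just a longer task list handed to the same pool.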

    Now, let's see other tools at the validator's disposal for challenging models. Let's say that you have received documentation from the model developers and you have identified the biggest contributing factor in the model. Now, you want to test this yourself.

    To do that, you could write a model yourself by programming, or you could use one of these interactive tools for building a model by pointing and clicking. As you can see, we've got two of these interactive tools here: one is the Classification Learner; the other is the Deep Network Designer.

    Let's say the problem at hand is a classification problem. You could write the different families of classification models yourself, or you could use the Classification Learner app to choose different models at the click of a button, fit them on your data set, and identify issues or challenge a particular model.

    Now, for the final bit, the effective challenge, you want to challenge the data, the theory, and the implementation. Let's revisit the example where the model validator was validating the calibration of the Heston model to market data. Let's say that you want to challenge this model from scratch. Perhaps you would want to construct a challenger, maybe approximating the Heston model via deep learning.

    For doing that, you've got documentation that's easily accessible online. If I click through here, we can see documentation on how to use deep learning to approximate barrier option prices under the Heston model. You can see over here some definitions of the barrier option and the Heston model, what the parameters are, how to define the parameters, how to use Monte Carlo simulation, and further below, how to interpret the results.

    So let's summarize what we just saw. As part of challenging models, you would perhaps want to make some sanity checks rapidly, and to facilitate that, you can run calculations in parallel using the tools at your disposal. If you want to build challenger models, you could also use the built-in apps, such as the Classification Learner and Regression Learner apps, to build models intuitively.

    And then, you have access to well-documented quantitative finance, statistics, and machine learning libraries, with examples that walk you through obtaining data, building a model, and interpreting the results. And finally, let's look at how to adapt the MathWorks Model Validation Environment to your own business activities and complexity.

    Financial institutions are unique in their characteristics. Based on the risk exposure of business activities or any other factors, what you have seen may or may not be directly suited to you, which is why you can customize it. Now, let's have a look at one of the customizations. When submitting a review, we saw a review form where we filled out the general comments, materiality, model weaknesses, et cetera.

    The implementation of the MathWorks Model Validation Environment is done together with you. This means that you can customize it to however you want the workflow to be. The solution is built on building blocks that talk to each other nicely, which ensures that the MathWorks Model Validation Environment will integrate seamlessly. What you've seen thus far is a web-based front end; underneath are user interfaces.

    You also have APIs, which help integrate the MathWorks Model Validation Environment smoothly. What you see below is an example of customization. In this case, let's say you have come across some adverse event, and you want to review models with a custom flag that says "Severe impact, trigger for review."

    Now, this form lets you do exactly that. And if you want to change the workflow-- let's say that instead of submitting a review and notifying the regular set of people, you want to notify all the stakeholders-- you could just swap out the Submit Review button with a Submit & Notify All button, which then submits and notifies everybody involved in this model.

    So let's summarize. Financial institutions differ in their business activities and risk exposure, so there's no one-size-fits-all solution. This is a solution that can be easily customized, and we work together with you to customize it.

    So with the customizable validation templates, we can easily capture the corporate workflow requirements. So let's conclude what we just saw together. We saw a unified web-based environment for validating models in different programming languages or modeling environments.

    For challenging models, we saw that we have access to pre-built libraries, interactive tools, and rich documentation that help build challenger models.

    And finally, we saw that it is possible to customize validation workflows and artifacts to your requirements. And with that, I'm happy to take your questions. Thank you.
