I am trying to run a Monte Carlo simulation of a fairly complex equation (19 input variables) for roughly 1,000-10,000 iterations. Some of the inputs remain constant throughout the computation, while the other 8 variables are randomised at each iteration using the norminv function with a random probability. As you might imagine, the simulation becomes very expensive as the number of iterations increases.
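To illustrate, the sampling step for each iteration looks roughly like this (mu and sigma here are just placeholders for the actual distribution parameters of the 8 random variables):

% Simplified sketch of the per-iteration sampling step
mu    = zeros(1, 8);   % placeholder: actual means of the 8 random inputs
sigma = ones(1, 8);    % placeholder: actual standard deviations
p = rand(1, 8);               % one random probability per variable
x = norminv(p, mu, sigma);    % inverse-CDF draw -> 8 normally distributed inputs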
Presently, I am using parfor on a 4-core laptop, and each run needs around 5 minutes per 1000 iterations. The 8 random inputs are recalculated at each iteration and then passed to a function that calculates the final answer (although I previously had this code in the body of the loop and it did not make much difference).
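Stripped down, the loop currently has roughly this structure (complexEquation stands in for my actual function, constants holds the 11 fixed inputs, and mu/sigma are the placeholders from above):

% Stripped-down structure of the current parfor loop
constants = zeros(1, 11);   % placeholder for the 11 fixed inputs
nIter     = 1000;
results   = zeros(nIter, 1);
parfor i = 1:nIter
    x = norminv(rand(1, 8), mu, sigma);          % redraw the 8 random inputs
    results(i) = complexEquation(x, constants);  % compute this iteration's answer
end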
My question is: how can I optimise this computation? I will need to run several of these simulations for my project and would like to trim down the run time.