The CISO’s toolkit for data-driven risk management

Taking a Chance with Monte Carlo

When evaluating opportunities for investment, businesses must undertake some form of cost-benefit analysis to ascertain potential ROI. While R&D, marketing and other well-established business functions can communicate the potential benefits of a £100,000 spend in financial terms, security professionals are left reporting on perceived risk, describing possible eventualities with no hard figures attached.

Allocating resources to a new DDoS mitigation tool might decrease the chances of a successful attack; then again, it might not. What is the ROI?

Monte Carlo simulation is a computerised mathematical technique used to provide estimates for complex problems where there is significant uncertainty. It is employed to aid decision-making across industries, from healthcare, where it is used to model radiation therapy procedures, to oil and gas, where it is used to estimate well reserves.

For a given model, Monte Carlo simulation works by assigning every unknown variable a probability distribution. We then sample values at random from those distributions and run the model. This process is repeated thousands of times until we are left with a distribution of potential outcomes, from which we can infer the most likely outcomes as well as extreme scenarios.

By performing Monte Carlo simulations we not only discover what might happen but also how likely it is to happen.
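The idea can be sketched in a few lines of Python with NumPy. The model and its figures here are purely illustrative, not taken from any real scenario: two uncertain inputs are each given a distribution, the model is run many thousands of times, and the resulting distribution of outcomes lets us ask "how likely?" questions directly.

```python
import numpy as np

rng = np.random.default_rng(7)
runs = 50_000

# Toy model with two uncertain inputs (figures purely illustrative):
unit_cost = rng.normal(500, 100, runs)   # cost per unit, £
units_needed = rng.poisson(20, runs)     # how many units we'll need

# Run the model once per trial to build a distribution of outcomes.
total_cost = unit_cost * units_needed

# The output is a distribution, so we can ask "how likely?" questions:
print(f"Median total cost:         £{np.median(total_cost):,.0f}")
print(f"Chance it exceeds £15,000: {(total_cost > 15_000).mean():.0%}")
```

A single point estimate (£500 × 20 = £10,000) hides the tail risk that the simulation makes visible.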

The main blocker to using this technique, particularly in the cyber industry, is choosing the correct distribution to sample from. If, on average, I am the victim of 10 DDoS attempts per year, this could be modelled by a Poisson distribution with a mean of 10. It is not so easily done when I need to model the cost of potential brand damage from a data breach.

In this instance, we could seek the help of subject matter experts to provide an estimate and then factor in any uncertainty using Monte Carlo. If the SME estimates that the cost of brand damage is £200,000, but that it could equally well lie anywhere between £150,000 and £250,000, we can use a uniform distribution to model the uncertainty.
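Sampling that SME estimate is a one-liner. A minimal sketch, using the £150,000–£250,000 range above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# SME estimate: brand damage equally likely anywhere in £150k-£250k.
brand_damage = rng.uniform(150_000, 250_000, n)

print(f"Mean:            £{brand_damage.mean():,.0f}")
print(f"5th percentile:  £{np.percentile(brand_damage, 5):,.0f}")
print(f"95th percentile: £{np.percentile(brand_damage, 95):,.0f}")
```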

The technique is an enhancement to, not a replacement for, existing frameworks. We can use Monte Carlo simulation alongside CyberVaR, Factor Analysis of Information Risk (FAIR), FIPS 65 and many more.


Consider a case where, having been subject to 10 DDoS attacks in the previous year, a firm must weigh up the purchase of a new tool to help prevent further attacks. Apart from the £100,000 cost of the tool, there are few hard figures with which to conduct a cost-benefit analysis. After enlisting the help of an SME, it is suggested that purchasing the new technology would decrease the number of successful attacks from 10 to between 3 and 6 per year, while downtime per attack is estimated at between 2 and 24 hours, with 6 hours being the most likely value.

A consultation with the wider business estimates that the cost of brand damage from each successful attack lies between £50,000 and £150,000, with the loss in revenue while the servers are down put at £1,000 per hour.


Using this additional information, we can build a model to estimate the cost savings from implementing the new software.


In this example, the annualised loss is driven by the cost of each DDoS attack, which is the sum of the brand damage and the revenue lost to downtime.

In order to model the SME estimate of downtime, we sample from the PERT distribution. The PERT distribution is a modified beta distribution designed for modelling expert estimates, where the expert provides minimum, most likely and maximum values. Its appeal over other distributions used to capture expert opinion is its similarity in shape to the normal and lognormal distributions: when real data is lacking, assuming that our variable of interest follows a roughly normal shape makes good sense.
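NumPy has no built-in PERT sampler, but one can be derived from the beta distribution. A sketch using the standard modified-PERT parameterisation (conventional shape factor of 4), applied to the downtime estimate above:

```python
import numpy as np

def sample_pert(rng, low, mode, high, size):
    """Sample from a PERT distribution via its underlying beta distribution."""
    # Standard PERT shape parameters (shape factor lambda = 4).
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + rng.beta(alpha, beta, size) * (high - low)

rng = np.random.default_rng(1)
# SME estimate of downtime per attack: min 2h, most likely 6h, max 24h.
downtime = sample_pert(rng, low=2, mode=6, high=24, size=100_000)

print(f"Mean downtime:   {downtime.mean():.2f} hours")
print(f"90th percentile: {np.percentile(downtime, 90):.1f} hours")
```

The PERT mean is (min + 4 × mode + max) / 6, which for these figures is 50/6 ≈ 8.3 hours: pulled above the 6-hour mode by the long right tail, but nowhere near as far as a naive midpoint of 13 hours.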

Attack frequency is assumed to follow a Poisson distribution. The Poisson distribution is intuitively a good fit in this case, as it expresses the probability that a certain number of events, N, happens in a given timeframe (with N being an integer greater than or equal to zero).
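For the baseline of 10 attacks per year, sampling annual attack counts looks like this (a small sketch; the threshold of 15 is just an example question one might ask):

```python
import numpy as np

rng = np.random.default_rng(2)

# Baseline: 10 successful DDoS attacks per year on average.
attacks = rng.poisson(lam=10, size=100_000)

print(f"Mean attacks/year:          {attacks.mean():.2f}")
print(f"P(more than 15 in a year):  {(attacks > 15).mean():.1%}")
```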

Variables that are thought to be equally likely between some maximum and minimum values are assumed to come from a uniform distribution.

Monte Carlo simulation also allows us to account for our expert’s uncertainty about the efficacy of the new technology. Instead of a fixed expected frequency, we sample it from a uniform distribution with limits set to the bounds of the estimate (3 to 6 attacks per year).

The expected loss from an attack is then multiplied by the simulated annual frequency in order to predict the loss per annum.
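Putting the pieces together, here is one possible sketch of the full simulation, using the figures from the example above (the PERT helper and the overall structure are illustrative choices, and the outputs will not exactly reproduce the article's table, which depends on its own assumptions and random draws):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000  # simulation runs, one per simulated year

def sample_pert(low, mode, high, size):
    # Standard PERT parameterisation via the beta distribution (lambda = 4).
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + rng.beta(alpha, beta, size) * (high - low)

def annual_loss(n_attacks):
    """Yearly loss: brand damage plus downtime revenue loss, per attack."""
    loss = np.zeros(len(n_attacks))
    for i, k in enumerate(n_attacks):
        brand = rng.uniform(50_000, 150_000, k).sum()       # per-attack brand damage
        downtime_cost = sample_pert(2, 6, 24, k).sum() * 1_000  # £1,000 per hour down
        loss[i] = brand + downtime_cost
    return loss

# Current state: 10 successful attacks per year on average.
baseline = annual_loss(rng.poisson(10, N))

# With the new tool: the expected attack rate is itself uncertain (3-6/year),
# so we sample the Poisson rate from a uniform distribution each run.
mitigated = annual_loss(rng.poisson(rng.uniform(3, 6, N)))

print(f"Baseline ALE:            £{baseline.mean():,.0f}")
print(f"Mitigated ALE:           £{mitigated.mean():,.0f}")
print(f"90th percentile, before: £{np.percentile(baseline, 90):,.0f}")
print(f"90th percentile, after:  £{np.percentile(mitigated, 90):,.0f}")
```

Comparing the two output distributions, rather than two single numbers, is what lets the £100,000 tool cost be weighed against the simulated reduction in loss expectancy.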


The table above displays the findings from running both models 1,000 times. The results show that the increased spend is offset by a nearly five-fold decline in ALE. Furthermore, the loss expectancy at the 90th percentile has fallen by more than £600,000, with a benefit-cost ratio of 1.65.

The results from this model should be taken with a pinch of salt! However, I hope that I’ve managed to convey the merits of using Monte Carlo simulation to model cyber-risk. Armed with these techniques, security professionals will be able to use concrete figures to communicate with the wider business.

