This article is part 2 in the series “How to speed up your models”, to see Part 1 click here. Many of the ideas discussed in this blog series are applications of the Guiding themes of the Mazars financial modelling methodology, which you can read more about here.
In the previous post in this series we examined the topic of function selection and how the way functions are coded can make dramatic differences to model performance and transparency. In this post we will take a step back and look at how models can be structured more efficiently to improve performance.
“Simplicity is the ultimate sophistication”
Leonardo Da Vinci
A model should do what is required of it and no more. Additional complexity and functionality just create clutter that interferes with the user experience and can significantly impact performance. From a structural perspective, a lot of what slows down models has to do with unnecessary functionality and complexity which doesn’t provide any benefit to the end user. Below we provide some examples of this and lay out our approach to “keeping it simple”.
Always keep the end user of the model firmly in mind and design the model to achieve this purpose using the least functionality possible. As Albert Einstein said, “everything should be made as simple as possible, but not simpler”. A good example of this is model periodicity; if you consider an asset with a 30-year life this can mean either 30, 120 or 360 columns of calculations depending on whether annual, quarterly or monthly calculations are being created, which will clearly have a huge impact on calculation speed. A monthly periodicity should only be selected if actually required by the end user. Similarly, the length of the timeline is another area to be considered: a longer forecast isn’t always better; if you aren’t confident in the longer-term forecast and it isn’t required by the model it may be better to not include it at all, to avoid giving a misleading impression of accuracy.
Some other common trouble areas are depreciation and tax: How many depreciation categories are required to achieve the level of accuracy required? How important is deferred tax to the accuracy of the model, and how granular will the forecasts of tax differences need to be? These calculations can become very extensive and complex so if they won’t have a material impact on model results, they can be simplified significantly. This is a fundamental principle which applies to every aspect of modelling: if you always keep the end user in mind it becomes a lot easier to identify and cut unnecessary complexity that they don’t want or need, and consequently to develop a cleaner and more efficient model.
Never perform the same calculation twice. This is a simple principle and yet it is routinely violated in most models. The more obvious examples are re-calculating dates and flags on each sheet, and escalation indices wherever they are applied; however, there are also more subtle instances, such as calculating pro-rated amounts, applied interest rates and days in period, and repeating calculation elements throughout a row whose results will never vary between cells. Below we offer some advice on how to avoid these types of unnecessary duplication.
Formulae should be clear and concise, with any complex calculations broken down into their component parts: trying to do too many calculations in one cell increases the risk of error, reduces transparency and often negatively impacts performance. Splitting out the calculation components also allows them to be utilised in other areas of the model, reducing the need to repeat these calculations.
Calculate timing items on a Timing sheet
Dates, flags, days in period, proportion of period, escalation indices and other timing-related items should be calculated on a separate Timing sheet and linked to any other area of the model as required. None of these items should be re-calculated elsewhere in the model.
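As an illustrative sketch of this principle (the sheet name, cell references and flag logic here are assumptions for illustration, not taken from any particular model), a period flag is calculated once on the Timing sheet and then only ever linked, never re-derived:

```
' Timing sheet – operations flag calculated once per column
' (C5 = period end date; F3/F4 = operations start/end dates – assumed layout)
=(C$5>=$F$3)*(C$5<=$F$4)

' Any other sheet – link to the Timing sheet only; do not rebuild the date test
=Timing!C9
```

Every downstream calculation that needs the flag multiplies by the link, so the date comparison is evaluated once per period rather than once per sheet.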
Use a single cash flow waterfall
Calculations of cash available at different levels of seniority in the cash flow waterfall/cascade are quite commonly used, especially in Project Finance models. Frequently modellers will re-calculate the entire waterfall to the required point for each calculation, creating a lot of unnecessary duplication. Instead, we recommend setting up a consolidated cash flow waterfall which calculates the entire cascade of cash flows, which can then be linked to the various calculations as required.
Use helper columns
Any element of a calculation whose results will not vary by cell should not be re-calculated for each cell. For example, dividing an annual fixed cost by 12 to calculate the monthly amount should instead utilise a “helper column” which performs this calculation just once, the result of which can then be referenced within each cell.
The same concept applies to all manner of more complex calculations as well. For instance, an INDEX-MATCH which matches the location of the row label within a list will always produce the same match number; model efficiency is therefore improved by moving the MATCH function to a helper column and using a simple INDEX function that references the match number calculated there. The images below show the difference between repeating the INDEX-MATCH in each cell and using a helper column.
MATCH calculated in helper column
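In formula terms, the contrast looks roughly like this (sheet name, ranges and column positions are assumptions for illustration):

```
' Repeated INDEX-MATCH in every column – MATCH re-searches the list each time
=INDEX(Data!C$5:C$50, MATCH($A10, Data!$A$5:$A$50, 0))

' MATCH hoisted into a helper column (e.g. column B), calculated once per row
=MATCH($A10, Data!$A$5:$A$50, 0)

' Each period then needs only a cheap INDEX against the stored match number
=INDEX(Data!C$5:C$50, $B10)
```

Over a 360-column monthly timeline this replaces 360 searches of the list with one.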
When building complex financial models, we are often required to build calculations that need to be repeated multiple times, for example multiple debt accounts for a number of active facilities. When creating these calculations, we tend to build out several copies of the same set of mechanics, potentially using “INDEX” or “CHOOSE” to make this replication easier, as per the example below.
Separate debt accounts
An alternative to this approach is to structure these mechanics using 2D blocks, which group the specific calculation elements together for all facilities. This can improve calculation speeds by reducing the number of unique formulae, is less prone to error, and is easier to review and update. It also removes the need for the INDEX function, as referenced items are typically listed sequentially and can be linked directly (see example below). Furthermore, 2D blocks can save on the number of rows required for more complex mechanics, as headers do not need to be repeated for each block and the balance b/f row can be removed for corkscrew accounts.
Debt accounts as 2D blocks
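A schematic of the contrast (the layout and ranges below are illustrative assumptions, not the model shown in the images): in the separate-block approach each facility's corkscrew picks out its own inputs with INDEX, whereas in a 2D block the inputs sit in aligned columns and can be linked directly.

```
' Separate blocks: the same mechanics copied per facility,
' with INDEX selecting that facility's input each time (facility 2 shown)
=INDEX($C$10:$E$10, 2)

' 2D block: one row per calculation element, one column per facility,
' so each facility's column links straight to the aligned input row
=C$10        ' Facility 1 column
=D$10        ' Facility 2 column – same formula copied across, no INDEX
```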
It is common when modelling to work from a template or precedent model. This will typically result in lots of "legacy calculations" which are retained due to fear of breaking the model or lack of time to review. These legacy calculations can create a significant drag on performance and reduce transparency in a model. We recommend thoroughly reviewing any calculations included in a model and removing any superfluous functionality. This process can be greatly assisted by Excel add-ins that help identify unused calculations and trace multiple dependents/precedents, making it quick to understand and manage the impact of deleting certain calculations. For navigation we would recommend Arixcel Explorer (paid), whilst Inquire (free) is very helpful for locating unused calculations.
Depending on how your data is being used within the model, simply sorting it in ascending/descending order can dramatically improve the performance of different lookup functions by moving the desired items closer to the start of the lookup range. Similarly, the data can be arranged to match the timeline of the model (removing the need for date lookups), or grouped or separated in different ways to reduce the reference range for different lookups. The application of these principles can be very particular to the situation, but it is worth considering in data-heavy models as it can have a significant impact on performance.
Additionally, for functions that reference ranges, such as SUMIFS, the calculation time is proportional to the number of used cells being referenced. Unused cells are not examined – but used and irrelevant cells are – so whole-column references can be relatively efficient if the column only contains relevant information, but often are not. We usually recommend not using column references, but instead setting up an Excel Table to hold the reference data. The Table functionality in Excel is quite dynamic and will automatically adjust the dimensions of the table as data is added or removed, maintaining the smallest possible reference range and therefore the fastest calculation time.
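The difference looks roughly like this (the table name `tblData` and its column names are assumptions for illustration):

```
' Whole-column reference – scans every used cell in columns A and B,
' relevant or not
=SUMIFS(Data!$B:$B, Data!$A:$A, $A10)

' Excel Table with structured references – the range grows and shrinks
' with the data, so only relevant cells are ever examined
=SUMIFS(tblData[Amount], tblData[Category], $A10)
```

Structured references also make the formula self-documenting, since the column names replace anonymous letter ranges.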
Thanks for reading through to the end of the article. There will be one more blog post in this series on model efficiency which will focus on Excel’s more computationally intensive functionality, so check back in the near future for that and other upcoming posts!