Aon: Calculation engine

Aon are a global leader in risk and reinsurance, health and wealth management solutions, and also specialise in corporate consultancy. One of these areas involves producing market-leading reports on reward and compensation packages.



Aon spend a significant amount of resource capturing and aggregating reward data from a large number of companies, by country and region. This data is then anonymised and obfuscated to produce various types of report for Aon's customers, enabling them to build competitive reward packages that attract and retain the best talent.

The volume of data, the complexity of the reports and computations involved, and the need for strict confidentiality make these reports difficult and time-consuming to generate.

Report data computations were implemented with SQL CLR on Microsoft SQL Server, and could take anything from 12h to 36h to run depending on the nature of the report, volume of data and filters invoked.

The implementation itself was also complicated and difficult to maintain; it frequently overloaded the database and suffered from instability. Report procedures would often fail many hours into the generation process and need to be restarted from scratch.

Aon were trying to implement a more robust system that could be used to generate the reports without overloading the database, and would be less error-prone. They were also looking for ways to speed up the generation of report documents in PDF or Excel format.


The main challenge was to reduce the load on SQL Server and make report generation faster and less likely to fail.


I worked with Aon's database and development teams to produce a .NET Core application that would pull data from the database, and process the reports on the application server, rather than the database.

This vastly reduced load on the database. By pulling data in configurable batches, the amount of data being drawn from the database could also be managed.
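The batched-pull idea can be sketched as keyset pagination: ask the database for a bounded slice at a time and resume from the last key seen. This is an illustrative Python sketch, not Aon's actual code; `fetch_after`, the row shape, and the in-memory table are all invented stand-ins for a real query.

```python
# Sketch of configurable batched reads via keyset pagination (hypothetical
# names throughout; fetch_after stands in for a real SQL query such as
# "WHERE Id > @last ORDER BY Id OFFSET 0 ROWS FETCH NEXT @limit ROWS ONLY").
from typing import Callable, Iterator, List, Tuple

Row = Tuple[int, str]  # (id, payload) - illustrative row shape

def batched_rows(fetch_after: Callable[[int, int], List[Row]],
                 batch_size: int = 500) -> Iterator[List[Row]]:
    """Yield rows in batches of at most batch_size, resuming after the
    last id seen, so each round trip draws a bounded amount of data."""
    last_id = 0
    while True:
        batch = fetch_after(last_id, batch_size)
        if not batch:
            break
        yield batch
        last_id = batch[-1][0]  # keyset: continue after the last id

# In-memory stand-in for the database call
table = [(i, f"row-{i}") for i in range(1, 1201)]
def fake_fetch(last_id: int, limit: int) -> List[Row]:
    return [r for r in table if r[0] > last_id][:limit]

batches = list(batched_rows(fake_fetch, batch_size=500))
```

Making `batch_size` configurable is what lets administrators tune how much data is drawn from the database per round trip.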

To reduce the chance of a process aborting late in the run, I implemented a system of request validation: if a report was likely to throw an error, it was flagged right at the start, or as early in the run as possible.
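A fail-fast validation pass of this kind can be sketched as a function that runs every cheap check up front and returns all the problems it finds. The request fields and checks below are invented for illustration; the real validation rules are not shown in the case study.

```python
# Hypothetical sketch of fail-fast request validation: all cheap checks run
# before any heavy processing, so a doomed report is rejected immediately.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ReportRequest:                 # illustrative fields only
    country: str
    year: int
    filters: List[str] = field(default_factory=list)

def validate(request: ReportRequest, known_countries: Set[str]) -> List[str]:
    """Return a list of problems; an empty list means safe to proceed."""
    errors = []
    if request.country not in known_countries:
        errors.append(f"unknown country: {request.country}")
    if not 2000 <= request.year <= 2100:
        errors.append(f"year out of range: {request.year}")
    if len(request.filters) != len(set(request.filters)):
        errors.append("duplicate filters in request")
    return errors

problems = validate(ReportRequest("Atlantis", 1999, ["a", "a"]), {"UK", "US"})
```

Returning every error at once, rather than failing on the first, means an analyst can fix the whole request in one pass.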

I also implemented asynchronous processing, allowing parts of the report to be generated concurrently, and then assembled into the anticipated format at the end. This was done by writing the report results into a temporary file during processing.
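The concurrent-sections-plus-temporary-files approach described above can be sketched as follows. The section names and the trivial "work" are invented; the point is the shape: generate parts in parallel, spool each to a temporary file, then assemble them in order.

```python
# Illustrative sketch (not the actual implementation): report sections are
# built concurrently, each spooled to a temporary file, then concatenated
# in the anticipated order at the end.
import concurrent.futures
import os
import tempfile

def build_section(name: str) -> str:
    """Stand-in for an expensive computation; writes its output to a
    temporary file and returns the path."""
    fd, path = tempfile.mkstemp(suffix=".part")
    with os.fdopen(fd, "w") as f:
        f.write(f"== {name} ==\n")
    return path

sections = ["summary", "by-country", "percentiles"]   # hypothetical names
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    paths = list(pool.map(build_section, sections))   # map preserves order

with tempfile.NamedTemporaryFile("w+", suffix=".report",
                                 delete=False) as out:
    for p in paths:                  # assemble in the anticipated order
        with open(p) as part:
            out.write(part.read())
        os.remove(p)                 # temp parts are no longer needed
    out.seek(0)
    report = out.read()
os.remove(out.name)
```

Because `pool.map` yields results in submission order, the final assembly step stays simple even though the sections finish in arbitrary order.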

I additionally introduced a report buffering algorithm to limit the amount of file I/O. Large report files were therefore written far more efficiently.
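The buffering idea can be sketched as a writer that accumulates rows in memory and flushes them to the file in large chunks, so the number of write calls drops from one per row to one per batch. This is a minimal illustration; the actual buffering algorithm is not shown in the case study.

```python
# Minimal sketch of output buffering: rows accumulate in memory and are
# written to the sink in large chunks, cutting the number of I/O calls.
import io

class BufferedReportWriter:
    def __init__(self, sink, flush_threshold: int = 1000):
        self.sink = sink
        self.flush_threshold = flush_threshold
        self.buffer = []
        self.flush_count = 0         # for illustration: count real writes

    def write_row(self, row: str) -> None:
        self.buffer.append(row)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.sink.write("\n".join(self.buffer) + "\n")
            self.buffer.clear()
            self.flush_count += 1

sink = io.StringIO()
writer = BufferedReportWriter(sink, flush_threshold=1000)
for i in range(2500):
    writer.write_row(f"row {i}")
writer.flush()                       # final partial flush
```

With a threshold of 1,000, writing 2,500 rows costs three write calls instead of 2,500 — the same effect, at sketch scale, as the buffering described above.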

Having moved most processing and computation onto .NET, I then risked overloading the application servers instead. To mitigate this, I introduced configurable parameters that let report administrators manage CPU and memory utilisation.
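One common way to implement an admin-tunable cap of this kind is a semaphore that bounds how many report sections run at once; the sketch below shows that pattern with an invented setting name, not the actual parameters used.

```python
# Hypothetical sketch of configurable throttling: a bounded semaphore caps
# concurrent sections, so one setting trades speed against CPU/memory load.
import concurrent.futures
import threading

MAX_CONCURRENT_SECTIONS = 2          # illustrative admin-tunable setting
throttle = threading.BoundedSemaphore(MAX_CONCURRENT_SECTIONS)
peak = 0
active = 0
lock = threading.Lock()

def process_section(n: int) -> int:
    global peak, active
    with throttle:                   # blocks when the cap is reached
        with lock:
            active += 1
            peak = max(peak, active)
        try:
            return n * n             # stand-in for real section work
        finally:
            with lock:
                active -= 1

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process_section, range(6)))
```

Even with eight pool threads available, the semaphore guarantees no more than two sections are ever in flight, which is the load-balancing behaviour the configurable parameters provided.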

I also introduced detailed logging and error handling to allow operations teams and analysts to pinpoint issues if and when they arose.
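Logging of this kind typically attaches report and stage context to every message, so a failure can be traced to an exact point in the run. The sketch below uses Python's standard `logging` module with invented report and stage names.

```python
# Illustrative only: per-stage logging with report context, so operations
# teams can pinpoint exactly where a run failed (names are invented).
import io
import logging

log_stream = io.StringIO()
logging.basicConfig(stream=log_stream, level=logging.INFO,
                    format="%(levelname)s %(message)s", force=True)
log = logging.getLogger("reports")

def run_stage(report_id: str, stage: str, work) -> bool:
    """Run one stage, logging start/finish and any failure with context."""
    log.info("report=%s stage=%s starting", report_id, stage)
    try:
        work()
        log.info("report=%s stage=%s done", report_id, stage)
        return True
    except Exception:
        # log.exception records the full traceback alongside the context
        log.exception("report=%s stage=%s failed", report_id, stage)
        return False

ok = run_stage("R-42", "aggregate", lambda: None)
bad = run_stage("R-42", "render", lambda: 1 / 0)
```

The `report=… stage=…` key-value style keeps the log greppable, which is what lets analysts pinpoint issues quickly when they arise.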


The new process was substantially more stable. Thanks to the request validation process, critical issues were generally picked up as soon as a report was triggered, saving countless hours of wasted processing.

The reports themselves were optimised not just in code but also through asynchronous processing, bringing the most complex reports down to around 13 hours and the simpler reports to single-figure hours.

Reports flushed faster to the final output file because of the I/O buffering.

The ability to balance CPU and memory usage meant that the application servers never got overloaded, even whilst running complex reports.

Do you need to do complex data processing? If so, get in touch!
