New JP Morgan Supercomputer Runs Complete Risk Analysis In 12 Seconds (Instead Of 8 Hours Before)!!!

“The project took JP Morgan around three years, and the bank is now looking to push it into other areas of the business, such as high frequency trading.”


JP Morgan supercomputer offers risk analysis in near real-time (Computerworld UK, July 11, 2011):

JP Morgan is now able to run risk analysis and price its global credit portfolio in near real-time after implementing application-led, High Performance Computing (HPC) capabilities developed by Maxeler Technologies.

The investment bank worked with HPC solutions provider Maxeler Technologies to develop an application-led, HPC system based on Field-Programmable Gate Array (FPGA) technology that would allow it to run complex banking algorithms on its credit book faster.

JP Morgan uses mainly C++ for its pure analytical models, with Python for the scripting and glue around them. For the new Maxeler system, it flattened the C++ code down into Java code. The bank also supports Excel and various versions of Linux.
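
For a flavour of what “flattening C++ down to Java” means here: Maxeler’s toolchain has developers describe dataflow kernels in MaxJ, a Java dialect that adds operator overloading on hardware stream types, and generates the FPGA configuration from that description. The sketch below is a minimal illustration in the style of Maxeler’s later public MaxCompiler examples; the package paths and the discounting kernel itself are assumptions for illustration, not JP Morgan’s actual code.

    import com.maxeler.maxcompiler.v2.kernelcompiler.Kernel;
    import com.maxeler.maxcompiler.v2.kernelcompiler.KernelParameters;
    import com.maxeler.maxcompiler.v2.kernelcompiler.types.base.DFEVar;

    // Toy dataflow kernel (illustrative only): stream in a cashflow and a
    // discount factor, stream out the discounted value. Each operator
    // becomes dedicated hardware that new values flow through every cycle.
    class DiscountKernel extends Kernel {
        DiscountKernel(KernelParameters params) {
            super(params);
            DFEVar cashflow = io.input("cashflow", dfeFloat(8, 24)); // 32-bit float
            DFEVar df       = io.input("df",       dfeFloat(8, 24));
            io.output("pv", cashflow * df, dfeFloat(8, 24));         // MaxJ overloads '*' on DFEVar
        }
    }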

Prior to the implementation, JP Morgan would take eight hours to do a complete risk run, and an hour to run a present value, on its entire book. If anything went wrong with the analysis, there was no time to re-run it.

It has now reduced that to about 238 seconds, with an FPGA time of 12 seconds.
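
(For context: eight hours is 28,800 seconds, so a 238-second end-to-end run is roughly a 120x speedup. The 12 seconds is the portion spent in the FPGA hardware itself; the remainder is presumably taken up by data movement and the surrounding software stack.)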

“Being able to run the book in 12 seconds end-to-end and get a value on our multi-million dollar book within 12 seconds is a huge commercial advantage for us,” Stephen Weston, global head of the Applied Analytics group in the investment banking division of JP Morgan, said at a recent lecture to Stanford University students.

“If we can compress the space, time and energy required to do these calculations, then it has hard business value for us. It ultimately gives us a competitive edge: the ability to run our risk more frequently, and to extract more value from our books by understanding them more fully, is a real commercial advantage for us.”

The faster processing means that JP Morgan can now respond to changes in its risk position more rapidly, rather than relying on the previous day’s risk profile produced by overnight analyses.

The speed also allows the bank to identify potential problems and deal with them in advance. For example, JP Morgan can now run potential scenarios to assess its exposure to events such as the Irish and Greek banking crises, something Weston said “wouldn’t have even been thinkable” before.

“We’re very sensitive to defaults, so we need to run a lot of scenarios to find out [how sensitive we are]. It still takes a couple of hours to run, but we can get the answers back,” he said.
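
To make the idea concrete, here is a deliberately simple sketch of scenario-style default analysis in plain Java. Every number, the independent-default assumption and the flat 60 per cent loss-given-default are invented for illustration; a real credit model would correlate defaults and be vastly more involved.

    import java.util.Random;

    // Toy scenario runner: sample default scenarios for a tiny book and
    // average the losses. Illustrative assumptions throughout.
    public class DefaultScenarios {
        public static void main(String[] args) {
            double[] exposure = {5e6, 2e6, 8e6};     // exposure per name (made up)
            double[] pd       = {0.02, 0.10, 0.05};  // default probabilities (made up)
            int scenarios = 1_000_000;
            Random rng = new Random(42);
            double totalLoss = 0.0;
            for (int s = 0; s < scenarios; s++) {
                for (int i = 0; i < exposure.length; i++) {
                    if (rng.nextDouble() < pd[i]) {
                        totalLoss += exposure[i] * 0.6; // assume 60% loss given default
                    }
                }
            }
            System.out.printf("Average loss per scenario: %.0f%n", totalLoss / scenarios);
        }
    }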

Weston added that the bank was able to have a better idea of the “character” of its risk.

“Where we thought the risk in the book was, was more or less where it was, but it had a different character to it. We found that we had a particularly interesting sensitivity to the ordering of our defaults, which we had never been able to explore before.

“It gives us the ability to look at things that we couldn’t look at before and that’s extremely valuable to us,” he said.
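
Why would the ordering of defaults matter? One standard reason (our illustration, not a description of JP Morgan’s book) is first-loss protection: a cushion that absorbs early losses means the same set of defaults hits the book at different times depending on which name goes first.

    // Toy example: a 5m first-loss cushion absorbs losses as defaults arrive.
    // The same three losses, in a different order, reach us at different
    // times -- the total is identical, but the timing (and hence the
    // discounted value) is not. All numbers are made up.
    public class DefaultOrdering {
        static void run(String label, double[] lossByDefault) {
            double cushion = 5e6; // first-loss protection remaining (made up)
            for (int t = 0; t < lossByDefault.length; t++) {
                double absorbed = Math.min(cushion, lossByDefault[t]);
                cushion -= absorbed;
                double toUs = lossByDefault[t] - absorbed;
                System.out.printf("%s, default %d: %.1fm reaches us%n", label, t + 1, toUs / 1e6);
            }
        }
        public static void main(String[] args) {
            run("big-first", new double[]{6e6, 2e6, 1e6}); // 1m, 2m, 1m reach us
            run("big-last",  new double[]{1e6, 2e6, 6e6}); // 0m, 0m, 4m reach us
        }
    }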

As well as speed, JP Morgan required a system that was more energy efficient. The bank has just under a million square feet of raised-floor space in its data centres, and power consumption was a particular problem.

“Our data centres don’t run out of space, but they do run out of power, power to run the machines, to cool the machines. We needed a solution that was fast, efficient, reliable and less power-hungry,” said Weston.

‘Pipelining’

Instead of using standard multi-core machines, JP Morgan adopted FPGA technology to enable it to ‘pipeline’ instructions. Each calculation is broken down into simple components that are then chained into ‘pipelines’, so that many operations are in flight at once and the calculations execute very quickly.

“[Being super pipelined] we can do a huge volume of calculations, more than we can in a traditional CPU environment,” Weston said.
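
A rough software analogy (ours, not Maxeler’s): split a per-trade calculation into stages with a register between each pair, and on each ‘clock cycle’ every stage works on a different trade. Once the pipeline is full, one finished result emerges per cycle regardless of how many stages the calculation has.

    // Toy simulation of a 3-stage pipeline in plain Java. On an FPGA each
    // stage is separate hardware, so all three stages run simultaneously
    // on consecutive trades; null marks an empty pipeline slot.
    public class PipelineSketch {
        public static void main(String[] args) {
            double[] in = {100.0, 250.0, 75.0, 310.0};
            Double r1 = null, r2 = null; // registers between stages
            for (int cycle = 0; cycle < in.length + 2; cycle++) {
                Double out = (r2 == null) ? null : Math.max(r2, 0.0);               // stage 3: floor at zero
                r2 = (r1 == null) ? null : Double.valueOf(r1 - 0.5);                // stage 2: net a fee
                r1 = (cycle < in.length) ? Double.valueOf(in[cycle] * 1.01) : null; // stage 1: accrue
                if (out != null) System.out.println("cycle " + cycle + ": " + out);
            }
        }
    }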

In an initial proof-of-concept project on 450 trades, representative of the value of the global portfolio, JP Morgan found that it could run the calculations 30 times faster on a single node. It then built a 10-node box, with two FPGAs in each node.

“We put our whole portfolio (several hundred thousand trades) on that [and] we managed to get the book to run 130 times faster,” said Weston.

“We then built a 40-node computer, which is basically a supercomputer.”
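
(Taking those figures at face value, the scaling was useful but not linear: one node gave 30x, while ten nodes with 20 FPGAs gave 130x, a bit over four times the single-node speedup from ten times the hardware. The comparison is loose, though, since the proof of concept ran 450 trades while the 10-node box ran the full book of several hundred thousand.)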

The project took JP Morgan around three years, and the bank is now looking to push it into other areas of the business, such as high frequency trading.
