GPU Computations to Speed up @RISK?
Applies to: @RISK 5.x–7.x
Does @RISK take advantage of CUDA functionality, using the GPU (graphics processing unit) in addition to the main CPU to increase simulation speed? As I understand it, GPUs on graphics cards are very good at parallel processing, which is exactly what is needed to increase simulation speed.
CUDA is one type of GPGPU (general-purpose computing on graphics processing units), and is specific to NVIDIA GPUs. AMD GPUs instead use OpenCL, an open standard.
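To illustrate what GPGPU means in practice, here is a minimal CUDA sketch; it is a hypothetical toy example, not @RISK code. Each GPU thread performs one iteration of a Monte Carlo estimate of pi independently, which is the kind of parallelism the question has in mind. It assumes the NVIDIA CUDA toolkit and its cuRAND device library.

    // Minimal CUDA sketch: each GPU thread runs one "iteration" of a
    // toy Monte Carlo experiment (estimating pi). Hypothetical example,
    // not @RISK code -- it only illustrates GPGPU-style parallelism.
    #include <cstdio>
    #include <curand_kernel.h>

    __global__ void monteCarloPi(unsigned long long seed, int *hits, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Each thread draws its own random point in the unit square.
        curandState state;
        curand_init(seed, i, 0, &state);
        float x = curand_uniform(&state);
        float y = curand_uniform(&state);

        // Count points that fall inside the quarter circle.
        if (x * x + y * y <= 1.0f)
            atomicAdd(hits, 1);
    }

    int main()
    {
        const int n = 1 << 20;   // about one million iterations
        int *hits;
        cudaMallocManaged(&hits, sizeof(int));
        *hits = 0;

        // Launch enough 256-thread blocks to cover all n iterations.
        monteCarloPi<<<(n + 255) / 256, 256>>>(1234ULL, hits, n);
        cudaDeviceSynchronize();

        printf("pi ~ %f\n", 4.0 * *hits / n);
        cudaFree(hits);
        return 0;
    }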
In a typical simulation, most of the compute power is consumed not by @RISK but by Excel, in recalculating all open workbooks for every iteration. As of this writing (July 2017), Excel versions up through Excel 2016 do not use GPGPU. @RISK 5.x–7.x do use multiple threads, to take advantage of all available CPU resources, but not GPGPU; there are few calculations within @RISK itself that would benefit from it.
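By contrast, here is a minimal sketch of the multithreaded CPU approach, with iterations striped across worker threads (plain C++ host code, no GPU required). recalcIteration( ) is a hypothetical stand-in for the per-iteration workbook recalculation; this is not how @RISK itself is implemented.

    // Minimal sketch of CPU-side parallelism: iterations split across
    // worker threads, one per hardware core. recalcIteration() is a
    // hypothetical stand-in for per-iteration recalculation.
    #include <algorithm>
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    static std::atomic<long> total{0};

    // Hypothetical per-iteration work (here: a trivial accumulation).
    void recalcIteration(int iteration)
    {
        total += iteration % 7;
    }

    int main()
    {
        const int iterations = 100000;
        const unsigned workers =
            std::max(1u, std::thread::hardware_concurrency());

        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            // Each thread takes every workers-th iteration (static striping).
            pool.emplace_back([=] {
                for (int i = w; i < iterations; i += workers)
                    recalcIteration(i);
            });
        }
        for (auto &t : pool) t.join();

        std::printf("%u threads, result %ld\n", workers, total.load());
        return 0;
    }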
We will continue to re-evaluate this issue as technology advances.
Last edited: 2017-07-28