Palisade Knowledge Base


Techniques and Tips

How to accomplish specific tasks.

1. All Products

1.1. Palisade products are NOT impacted by the Log4J vulnerability

Applies to: All Palisade software versions

Issue: A zero-day exploit (CVE-2021-44228, "Log4Shell") was recently identified in the Apache Log4j logging library; it can potentially be used by attackers to take over entire servers via crafted log messages.

Statement: Palisade's products (@RISK, DecisionTools Suite, Palisade Server Manager) do not use the open-source Java Log4j library that has recently been identified as vulnerable, and therefore are not impacted.
 
Last Update: 2021-12-16

1.2. Do @RISK and DecisionTools Suite use TLS encryption?

Applies to: @RISK and DecisionTools Suite (All versions)

 

All Palisade software uses and supports all recent TLS versions (1.0, 1.1, and 1.2).

If you need to disable older TLS versions on your servers in favor of a newer one, the software will continue to work without any issues.

If this article doesn't answer your question, you can email Tech Support; don't forget to include your license serial number.

 

Last Update: 2021-03-05

1.3. What happens when you edit your model in Excel 365 and then open it in an older version of Excel?

Applies to: @RISK 7.x and newer

Dynamic arrays are supported in the latest versions of Excel 365. Dynamic array formulas can automatically populate or "spill" into neighboring blank cells and eliminate the need for legacy Ctrl+Shift+Enter (CSE) array formulas.

When you open a workbook that contains dynamic array formulas in an older version of Excel, they appear as legacy CSE formulas. If new dynamic array functions are used, they are prefixed with _xlfn to indicate that this functionality is not supported, and a spill range reference sign (#) is replaced with the ANCHORARRAY function.

Most dynamic array formulas (but not all!) will keep displaying their results in legacy Excel until you make any changes to them. Editing a formula immediately breaks it and displays one or more #NAME? error values.

So, if you know you will be sharing workbooks that use dynamic array formulas with someone using a non-dynamic-aware version of Excel, it's better to avoid features that aren't available to them.

What about @RISK functions?

When you open a workbook containing @RISK functions in an older version of Excel, each @RISK formula is automatically converted to a conventional array formula enclosed in {curly braces}.

So, there are two possible scenarios, depending on the @RISK version used:

  1. @RISK 8. The model will run without problems.
  2. @RISK 7.x and older. You will get an error message.

One way to solve this problem is to insert the "@" character (the implicit intersection operator) at the beginning of each @RISK function when editing the model in Excel 365; for example, =RiskNormal(100,10) becomes =@RiskNormal(100,10).

In Excel 365, all formulas are regarded as array formulas by default. The implicit intersection operator is used to prevent the array behavior if you do not want it in a specific formula. In other words, this is done to force the formula to behave the same way as it did in older versions.

So, you can do this manually, or programmatically with a VBA macro like the one sketched below.
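A minimal sketch of such a macro. It assumes the @RISK function of interest sits at the very start of the cell formula (as in =RiskNormal(100,10)); formulas with nested @RISK calls would need a fuller parser:

    Sub AddImplicitIntersection()
        ' Prepend "@" to formulas that begin with an @RISK function so
        ' legacy Excel treats them as ordinary (non-array) formulas.
        Dim ws As Worksheet, cell As Range
        For Each ws In ActiveWorkbook.Worksheets
            For Each cell In ws.UsedRange
                If cell.HasFormula Then
                    If Left$(cell.Formula, 5) = "=Risk" Then
                        cell.Formula = "=@" & Mid$(cell.Formula, 2)
                    End If
                End If
            Next cell
        Next ws
    End Sub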

There is an alternative option using the Swap-Out functionality available in @RISK; the procedure is explained below:

  1. In Excel 365, use the Swap-Out functionality of @RISK to preserve the current state of all @RISK functions. There is no need to include reports, so you may want to skip those options.
  2. Save a copy of this workbook and then open it in an older version of Excel which has @RISK v7 or below.
  3. It may immediately prompt you to swap in all @RISK functions found. If it doesn’t, close the model and open a blank workbook instead. Run the Swap-Out functionality on the blank workbook and then re-open your model.
  4. Follow the instructions on screen to complete the Swap-In process.


Last Update: 2020-09-08

1.4. What's New in the Knowledge Base?

This page lists the principal changes in the Palisade Technical Support Knowledge Base, from most recent to oldest. Each entry shows the "book", chapter, and article titles, with live links. Titles of new articles are in boldface and marked with ★.

October 5th, 2022 — February 10th, 2023

New Articles:

 

Updated Articles:

 

Deleted Articles:

 

August 5th, 2022 — October 5th, 2022

New Articles:

  • Home → Soluciones → Todos los productos: de inicio → No pudo adjuntar la copia en ejecución de Microsoft Excel ya que ésta está invisible o no responde. (Spanish version of Could not attach to the already running copy of Microsoft Excel KB)

Updated Articles:

February 11th, 2022 — August 5th, 2022

New Articles:

  • No new articles as of 08/05/2022

Updated Articles:

Home → Técnicas y Consejos → Rendimiento de @RISK → Para simulaciones más rápidas (Spanish version of the "For faster simulations" article; updated the Office versions described in the troubleshooting steps and validated that all links are still functional)

November 15th, 2021 — February 11th, 2022

New Articles:

Updated Articles:

July 29th — November 15th, 2021

New Articles:

Updated Articles:

Home → End User Setup → Further Information (Updated link for EULA to Palisade website)

April 1st, 2021 — July 29th, 2021

New Articles:

Updated Articles:

March 19th, 2021 — April 1st, 2021

New Articles:

  • Home → Troubleshooting → All Products: Startup → Running DecisionTools Add-in “as Administrator” Can Block Future Access to Registry Keys that Store Preferences and Other Information
  • Home → Soluciones → Todos los productos: de inicio → Ejecutar el complemento DecisionTools "como administrador" puede bloquear el acceso futuro a las claves de registro que almacenan preferencias y otra información

 

March 5th, 2021 — March 19th, 2021

New Articles:

Updated Articles:

 

December 8th, 2020 — March 5th, 2021

New Articles:

Updated Articles:

Additional Changes:

@RISK and DecisionTools Suite 8.1.1 were released on March 4th, 2021. Automatic updates from previous versions of the software will be turned on early next week. Release notes are published at https://help.palisade.com/v8_1/en/Release-Notes.htm

 

October 6th, 2020 — December 8th, 2020

New Articles:

Updated Articles:

Additional Changes:

On our website, www.palisade.com > Company > Contact, there is a section where customers can fill out a form to open a support ticket; entering their license and issue details helps speed up response times on their cases.

 

July 24th, 2020 — October 6th, 2020

New Articles:

Additional Changes:

In the Palisade Help Resources (help.palisade.com), customers can also find a link to our Knowledge Base, so all online support information is available in one place.

 

July 7th, 2020 — July 24th, 2020

New Articles:

Updated Articles:

 

Home → Soluciones → Todos los productos: de inicio → Nada sucede cuando inicio el Software (Spanish version of "Nothing happens when I launch the software")

 

June 5th, 2020 — July 7th, 2020

New Articles:

Updated Articles:

 

 

April 27th, 2020 — June 5th, 2020

New Articles:

Updated Articles:

 

April 17th, 2020 — April 27th, 2020

New Articles:

Updated Articles:

 

Home → Troubleshooting → All Products: Startup → "Timeout error starting ... PalFlexServer.exe"

 

April 2nd, 2020 — April 17th, 2020

New Articles:

Updated Articles:

 

 

March 12th, 2020 — April 2nd, 2020

New Articles:

Updated Articles:

 

 

Jan 30th, 2020 — March 12th, 2020

New Articles:

Updated Articles:

  • Home → End User Setup → Before You Install → Windows and Office Versions Supported by Palisade (adding Windows 10 Version 1909 to the list of Windows versions compatible with Palisade software)

 

Jan 10th, 2020 — Jan 30th, 2020

New Articles:

Home → Licencias individuales → Activación (6.x/7.x) → Obtener una licencia Individual certificada ★ (Spanish: "Obtaining a certified Individual license")

 

05 Nov 2019 — 10 Jan 2020

New Articles:

 

Updated Articles:

 

29 Aug - 4 Nov 2019

New Articles:

Updated Articles:


17 Aug - 28 Aug 2019

New Article:

Updated Articles:

 

2 July - 16 Aug 2019

New Article:

 

29 June - 1 July 2019

Updated Articles:

 

18 June - 27 June 2019

Article updated:

 

07 May - 17 June 2019

Changes in Standalone EULA and Network EULA

 

02 May - 06 May 2019

New article:

 
30 April - 01 May 2019

Updates on Maintenance Policy:

 

10 March - 29 April 2019

New Articles:

Updates and Translated articles in Spanish:
 

Weekly entries from 29 May 2016 through 9 March 2019 followed the same format; the article links they carried are omitted here. Notable items from that period: release 7.5.0 of our software was issued the week of 10–16 July 2016, articles were updated for software release 7.5.2 the week of 14–20 Jan 2018, and several articles received significant updates for software release 7.6.0 the week of 7–13 Oct 2018.

★ indicates a new article.

Last edited: 2023-02-10

Additional keywords: 1479

1.5. How to Find Your Serial Number

También disponible en español: ¿Cómo encuentro mi número de serie?

Finding Version 8.x serial number

Standalone:

Please launch @RISK or your other Palisade product. In the @RISK ribbon or menu, click About @RISK. The 7-digit number next to S/N is your serial number. If you can't start the software, but you have your Activation ID, it begins with three letters (DPS, DNS, RPS, or RNS) and 7 digits; the 7 digits are your serial number. If you don't know your Activation ID, please get your serial number from the email you received when you bought the software. If all those methods fail, your Palisade sales office (not Tech Support) may be able to look up your serial number.

Concurrent Network License (Activatable or Certificate)

To find your license details, open Palisade Server Manager and refer to the "Active Licenses" section. The 7-digit serial number follows the first three letters (DNQ, DNF, DPQ, DPF, RPF, RNF, RNQ, or RPQ) and starts with the digit 8.

Example: RPQ-8XXXXXX-...

Textbook:

If you have a textbook license, the textbook details will appear instead of a serial number.

Course License (Academic)

If you have a course license through your college or university, the serial number may or may not appear. If no serial number appears, you can find it in the Palisade_Course.lic file that you received from your school; it's the 7-digit number just after "SN=". If you've already installed the DecisionTools Suite course license, the Palisade_Course.lic file will be in C:\Program Files (x86)\Palisade\System or C:\Program Files\Palisade\System.

Finding Version 6.x/7.x Serial Number

Standalone:

Please launch @RISK or your other Palisade product. In the @RISK ribbon or menu, click Help, then About. The 7-digit number next to S/N is your serial number. If you can't start the software, but you have your Activation ID, it begins with three letters and 7 digits; the 7 digits are your serial number. If you don't know your Activation ID, please get your serial number from the email you received when you bought the software. If all those methods fail, your Palisade sales office (not Tech Support) may be able to look up your serial number.

Concurrent Network License (Activatable or Certificate)

To find your license details, open Palisade Server Manager and refer to the "Active Licenses" section. The 7-digit serial number follows the first three letters (DNF, DPF, RPF, or RNF) and starts with the digit 7.

Example: RPF-7XXXXXX-...

Note: Individual DecisionTools products have their own initial letter in their Activation IDs:

Example:

  • ENF-7XXXXXX (Evolver)
  • SNF-7XXXXXX (StatTools)
  • NNF-7XXXXXX (NeuralTools)
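If you track many Activation IDs (for example, in an inventory spreadsheet), the serial can be extracted mechanically. A minimal VBA sketch, assuming the usual prefix-hyphen-serial layout shown above:

    Function SerialFromActivationID(id As String) As String
        ' "RPF-7123456-..." returns "7123456": skip the 3-letter
        ' prefix and the hyphen, then take the 7-digit serial.
        SerialFromActivationID = Mid$(id, 5, 7)
    End Function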

Textbook

If you have a textbook license, the textbook details will appear instead of a serial number.

Course License (Academic)

If you have a course license through your college or university, the serial number may or may not appear. If no serial number appears, you can find it in the Palisade_Course.lic file that you received from your school; it's the 7-digit number just after "SN=". If you've already installed the DecisionTools Suite course license, the Palisade_Course.lic file will be in C:\Program Files (x86)\Palisade\System or C:\Program Files\Palisade\System.

Finding Version 5.x Serial Number

Please launch your Palisade product. In the Excel menu, click @RISK (or the appropriate product name), then Help, then License Activation. Look at the Activation ID that appears. The serial number is the group of seven digits starting with a 5.

If no Activation ID appears, then either this is a trial version or it is a purchased version that has not yet been activated. Please check your earlier emails for the serial number.

We have a video illustrating this procedure.

Finding Version 1.x or 4.x Serial Number

Please launch your Palisade software. In the Excel or Project menu line (the one that starts File, Edit, View) click on @RISK (or another Palisade product name). A submenu will drop down. In that submenu, click Help. A third menu will open at the side. Click on About @RISK.

In the About screen, look for S/N. The number after that is your serial number.

Last edited: 2022-02-09

1.6. New user interface in Palisade License Manager

Applies to:  DTS and @RISK 8.2 and newer.

Starting with version 8.2, Palisade License Manager has slight changes in its user interface:

  • The advanced-options window has been replaced by a drop-down menu.
  • Downloads and progress reporting are faster.
  • After an update is downloaded, there is a new option to install it now or save the installer for later.

Last Update: 2021-07-29

1.7. Which Version of Excel Is Opened by Palisade Software?

Applies to:
@RISK For Excel 4.5–7.x
BigPicture 1.x(7.x)
Evolver 4.0–7.x
NeuralTools 1.0–7.x
PrecisionTree 1.0–7.x
RISKOptimizer 1.0, 5.x
StatTools 1.1–7.x
TopRank 5.0–7.x

I have multiple versions of Excel on my computer. @RISK (PrecisionTree, StatTools, ...) opens one version of Excel, but I want it to open the other version.

Or,

My Excel was recently upgraded and now when I launch @RISK it can't start Excel.

The simplest solution is to open Excel and then launch the Palisade software. If @RISK (PrecisionTree, StatTools, ...) finds a copy of Excel running, it will attach itself to that copy.

If you want a more permanent solution that will cause your Palisade software to open your desired copy of Excel, you can edit the System Registry as follows (a scripted equivalent is sketched after the steps):

  1. Close Excel if it's running. Locate the "Excel.exe" file and take note of the full file path. Caution: you need the full path, including the program name and ".exe" extension. Some examples are
    C:\Program Files (x86)\Microsoft Office\OFFICE14\Excel.exe and
    C:\Program Files\Microsoft Office\OFFICE11\Excel.exe

  2. To open the Registry Editor, click the Windows Start button, then Run. Type REGEDIT and click the OK button.

  3. When the Registry Editor window appears, navigate to
    HKEY_LOCAL_MACHINE\Software\Palisade
    in the left-hand pane, or if you have 64-bit Windows then navigate to
    HKEY_LOCAL_MACHINE\Software\WOW6432Node\Palisade
    In the right-hand pane, you will see two string values called Main Directory and System Directory, and possibly some additional values.

  4. If Excel Path appears in the right-hand pane, double-click it and edit the path to match the path you noted in step 1.

  5. If Excel Path does not appear in the right-hand pane, right-click in the right-hand pane and select New » String Value. Name the new string value Excel Path, with a space between the two words. Double-click the name Excel Path and edit in the path that you saved in Step 1.

  6. Test your edit by launching the Palisade software when no version of Excel is running. If the correct Excel does not come up, edit the value of the Excel Path string. If the correct Excel comes up, close the Registry Editor by clicking File » Exit.
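If you would rather script the edit, here is a minimal VBA sketch of steps 3 through 5. It assumes 64-bit Windows (drop WOW6432Node on 32-bit Windows), and the Office path shown is only an example; substitute the full path you noted in step 1:

    Sub SetPalisadeExcelPath()
        ' Writes the "Excel Path" value that Palisade software reads
        ' at launch. Writing to HKLM requires administrative rights.
        Dim sh As Object
        Set sh = CreateObject("WScript.Shell")
        sh.RegWrite "HKLM\Software\WOW6432Node\Palisade\Excel Path", _
            "C:\Program Files (x86)\Microsoft Office\OFFICE14\Excel.exe", _
            "REG_SZ"
    End Sub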

Reminder: This Registry setting is used only when you launch our software while Excel is not running. If Excel is already running and you click a shortcut or icon for our software, it will attach itself to the running copy of Excel, regardless of version.

Last edited: 2017-12-21

1.8. Which Version of Excel Am I Running?

Tech Support wants to know which version of Excel I'm running, and whether it's 32-bit or 64-bit Excel. How do I find that?

There are different menu selections for this in the different versions of Excel, but you can also get clues from the appearance of Excel. Just follow along with this questionnaire, or use the VBA sketch that follows it.

  1. Do you see old-style menus, as opposed to the new ribbon? Then you are running Excel 2003 or earlier, and it is 32-bit Excel. (@RISK 7 does not run in Excel 2003. @RISK 6 does, but not in earlier Excels. See the full compatibility matrix.)

  2. Is there a round "Office button" at the top left of the Excel window, as opposed to the word File? Then you are running 32-bit Excel 2007.

  3. Does the word FILE, in all capitals, appear at the top left of the Excel window? Then you are running Excel 2013. To find whether it is 32-bit or 64-bit Excel, click FILE » Account » About Excel, and look at the top line of the "About Microsoft Excel" box that opens.

  4. Otherwise, click File in the ribbon, and look at the selections that appear under File.

    • If you see Account under File, you are running Excel 2016. To find whether it is 32-bit or 64-bit Excel, click Account » About Excel, and look at the top line of the untitled box that opens.
    • If you don't have Account under File, you are running Excel 2010. Click Help under File, and look at the first line under "About Microsoft Excel" to find whether you're running 32-bit or 64-bit Excel 2010.
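Alternatively, if you can reach the VBA editor (Alt+F11), a short macro reports both facts directly. A minimal sketch:

    Sub ReportExcelVersionAndBitness()
        ' Application.Version returns the internal version number:
        ' 11.0 = 2003, 12.0 = 2007, 14.0 = 2010, 15.0 = 2013, 16.0 = 2016+.
        Dim bitness As String
        #If Win64 Then
            bitness = "64-bit"     ' compiled under 64-bit Office
        #Else
            bitness = "32-bit"
        #End If
        MsgBox "Excel version " & Application.Version & " (" & bitness & ")"
    End Sub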

See also: Microsoft's documents What version of Office am I using? and Find details for other versions of Office.

Last edited: 2016-04-26

1.9. Getting Better Performance from Excel

Disponible en español: Conseguir un mayor rendimiento de Excel
Disponível em português: Obtendo o melhor desempenho do Excel

Applies to: @RISK and other Palisade add-ins to Excel

How much of the calculation time in my model is actually spent by Excel? Can I make my Excel worksheets calculate more efficiently?

The impact varies. @RISK (particularly RISKOptimizer), Evolver, PrecisionTree with linked trees, and TopRank are most affected by Excel calculation speed.

  • @RISK and TopRank recalculate all open workbooks ("Excel recalc") once per iteration.
  • PrecisionTree does a couple of Excel recalcs while analyzing the tree. With a linked tree, PrecisionTree also does an Excel recalc for each possible path through the tree (each end node).
  • Evolver does an Excel recalc once per trial.
  • StatTools does virtually all of its calculations outside of Excel, so tuning Excel will have little effect on the speed of its operations.
  • NeuralTools does virtually all of its calculations outside of Excel, so tuning Excel will have little effect on the speed of training or testing a network.

Microsoft has a number of suggestions for how to get better performance out of your Excel model:

See also:

Last edited: 2018-01-30

1.10. Opening a Second Instance of Excel

While I'm using Palisade software, can I open a workbook and not have the Palisade software in the ribbon for that workbook?

Yes, you can, and this lets you work in that second copy of Excel while @RISK (Evolver, NeuralTools, ...) is running its analysis in the first copy of Excel. In that second copy of Excel, don't run a Palisade product.  (If you want to work on a workbook that contains @RISK functions, they will all appear in the cells as #NAME.  However, you can edit the formulas in the formula bar, copy/paste formulas, and so on.)

The terminology is important here—opening a second instance of Excel is not the same thing as opening a second workbook in Excel. If you open a second workbook, the existing copy of Excel opens it, so you have one copy of Excel running and there's one Excel line in Task Manager. You can have multiple workbooks open when running our software, but don't switch workbooks while a simulation or other analysis is running. By contrast, when you open a second instance, Windows loads a fresh second copy of Excel, and Task Manager shows two Excel lines. Our software will fail with "Object initialized twice" or another message if you try to open it in a second instance of Excel.

Confusingly, Excel 2013 and newer look like second instances when you simply open a second workbook. They show multiple taskbar icons, usually stacked. The Windows actions to switch to a different program will switch between those workbooks, even though they're open in the same program. The only way to be sure is to look at Task Manager (Ctrl+Shift+Esc) to determine whether there's one line for Excel, or more than one.
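If you prefer not to eyeball Task Manager, a small VBA function can count the instances for you. A minimal sketch using WMI:

    Function ExcelInstanceCount() As Long
        ' Counts running EXCEL.EXE processes: the same information
        ' Task Manager shows, one process per instance.
        Dim procs As Object
        Set procs = GetObject("winmgmts:").ExecQuery( _
            "SELECT * FROM Win32_Process WHERE Name = 'EXCEL.EXE'")
        ExcelInstanceCount = procs.Count
    End Function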

Only these specific actions will open a second instance of Excel:

  • With Excel 2003 through 2010, launch a second instance in the usual Windows way.  The Start menu always works. With Windows 7 and Windows 8, you can also press and hold the Shift key and click the Excel box in the taskbar.

  • With Excel 2013, 2016, 2019, and 365, press and hold the Alt key, right-click the Excel icon in the Windows taskbar and click the Excel icon above the Pin or Unpin option. You can release the mouse button right away, but continue holding down the Alt key until you get a prompt asking "Do you want to start a new instance of Excel?" Release the Alt key and click Yes.

Important: Don't attempt to run any Palisade software in that second instance of Excel.

I followed directions, but @RISK appeared in the second copy of Excel anyway.

First, press Ctrl+Shift+Esc to open Task Manager, and verify that there are two Excel lines. If not, you probably let go of the Alt key too soon. Assuming there are two Excels, @RISK opened in the second instance because you have it set to launch whenever Excel launches.

Remember, you can't have two copies of Excel both running Palisade software, even different Palisade applications. If you want to use multiple instances of Excel, you must prevent that, as follows:

  1. Close one copy of Excel.
  2. In the other copy, click File » Options » Add-Ins. (In Excel 2007, click the round Office button, then Excel Options » Add-Ins.)
  3. At the bottom of the right-hand panel, click the Go button next to Manage: Excel Add-Ins.
  4. Remove the tick marks on all Palisade software.

Last edited: 2018-11-28

1.11. Using Excel During Simulation or Optimization

Applies to:
@RISK for Excel 5.x–7.x
RISKOptimizer 5.x (6.x and newer are part of @RISK)
Evolver 5.x–7.x

My simulation or optimization takes some time to run.  During that time, I would like to work on another workbook.  Is there any way I can use Excel for something else during a simulation or optimization?

Yes, you can open a second instance of Excel and do anything in that instance, with one exception: Don't run any Palisade product in that second instance of Excel.  (If you want to work on a workbook that contains @RISK functions, they will all appear in the cells as #NAME.  However, you can edit the formulas in the formula bar, copy/paste formulas, and so on.)

To open a second instance of your version of Excel, please see Opening a Second Instance of Excel.

Last edited: 2017-11-28

1.12. Identical Settings for Multiple Computers

Disponible en español: Ajustes Iguales para Computadores Múltiples
Disponível em português: Utilizando as mesmas configurações para vários computadores

Applies to:
@RISK, Evolver, NeuralTools, PrecisionTree, and StatTools, releases 5.x–7.x
RISKOptimizer 5.x (merged in @RISK starting with 6.0)

I'm a site administrator, and I want to ensure that everyone has the same settings for @RISK or any of the applications in the DecisionTools Suite. Is there any way I can do this?

Yes, this is easy to do. This article will give you the detailed procedure for @RISK, followed by the variations for the other applications.

You can create a policy file, RiskSettings.rsf, in the RISK5, RISK6, or RISK7 folder under your Palisade installation folder. If this policy file is present when @RISK starts up, the program will silently import the Application Settings and users will not be able to change them. Application Settings actually include two types of settings:

  • Default Simulation Settings, such as number of iterations, whether distribution samples are collected, and whether multiple CPUs are enabled. These are applied automatically to any new model that the user creates. However, the user does have the ability to change the Simulation Settings and save the model with the changed settings. If the user opens an existing model, created by that user or by someone else, @RISK will use the Simulation Settings stored with that model, not the default Simulation Settings from the policy file.

  • Global options for @RISK itself, such as whether to show the welcome screen and whether to save simulation results in the workbook. These settings are "frozen" by the policy file: the user can't change them, and they're not affected by anything in a workbook.

How to create a policy file for @RISK:

  1. Run @RISK, and on the @RISK Utilities menu select Application Settings.
  2. Make your changes to the settings that are displayed. (If a particular Simulation Setting is not shown here, then this version of @RISK does not allow setting a default.)
  3. Click the Reset/File Utilities icon at the bottom of the dialog, and select Export to File. Use the suggested name of RiskSettings.rsf.
  4. Move or copy the file to the RISK5, RISK6, or RISK7 folder under the user's Palisade installation folder.

If you want to provide an optional settings file rather than a mandatory policy file, create the RiskSettings.rsf file as above but don't put it in the RISK5, RISK6, or RISK7 folder. Users can then import the settings by opening Application Settings, clicking the Reset/File Utilities icon, and selecting Import from File.
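If you deploy the policy file to many machines, step 4 can be scripted. A minimal VBA sketch; both paths are examples, so adjust them for your @RISK version and install location (writing under Program Files requires administrative rights):

    Sub DeployRiskPolicyFile()
        ' Copy an exported settings file into the @RISK program
        ' folder, where it acts as a mandatory policy file.
        FileCopy "C:\Temp\RiskSettings.rsf", _
            "C:\Program Files (x86)\Palisade\RISK7\RiskSettings.rsf"
    End Sub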

Policy files for other applications:

The procedure is the same; only the locations of the policy files change.

Settings File Name         Settings File Location
RiskSettings.rsf           RISK7, RISK6, or RISK5
EvolverSettings.rsf        Evolver7, Evolver6, or Evolver5
NeuralToolsSettings.rsf    NeuralTools7, NeuralTools6, or NeuralTools5
PTreeSettings.rsf          PrecisionTree7, PrecisionTree6, or PrecisionTree5
RISKOptimizerSettings.rsf  RISKOptimizer5 only
StatToolsSettings.rsf      StatTools7, StatTools6, or StatTools5

(RISKOptimizer 6 and 7 settings are merged in RiskSettings.rsf. BigPicture and TopRank do not support policy files.)

See also: Transferring Settings to Other Models or Other Computers

Additional keywords: RSF file, .RSF file, pre-defined settings

Last edited: 2015-06-08

1.13. Transferring Settings to Other Models or Other Computers

Applies to: All products, releases 5.x–7.x

After I adjust the Application Settings to my liking, how can I copy these settings into other models, potentially running on other PCs?

If you are concerned only with the settings for one particular model, these are stored in the workbook and there's no need to do anything special to export them. If you want to export defaults to be applied to all new models, follow this procedure:

  1. In the Utilities » Application Settings window, click on the small disk icon at the bottom of the dialog and choose Export to File. Make note of the file location and name that you choose.

  2. In another session or on another computer, first load your model and then click Utilities » Application Settings. Click that same disk icon and choose Import from File to load the file that you saved earlier.

You can also create a policy file with application settings that should be common to all users. Please see Identical Settings for Multiple Computers.

Last edited: 2015-06-08

1.14. Running "Out of Process"

Applies to:
@RISK 7.5.2 in 32-bit Excel 2013 or 2016
@RISK 7.6.x in 32-bit Excel 2013, 2016, or 2019
Evolver 7.5.2 in 32-bit Excel 2013 or 2016
Evolver 7.6.x in 32-bit Excel 2013, 2016, or 2019
NeuralTools 7.5.2 in 32-bit Excel 2013 or 2016
NeuralTools 7.6.x in 32-bit Excel 2013, 2016, or 2019
PrecisionTree 7.5.2 in 32-bit Excel 2013 or 2016
PrecisionTree 7.6.x in 32-bit Excel 2013, 2016, or 2019
StatTools 7.5.2 in 32-bit Excel 2013 or 2016
StatTools 7.6.x in 32-bit Excel 2013, 2016, or 2019

Does not apply to:
BigPicture
TopRank
64-bit Excel, or 32-bit Excel 2010 or 2007

What does it mean to run out of process or in process, and what difference does it make to me?

@RISK and our other add-ins are 32-bit code, and each product has a bridge to 64-bit Excel. Those bridges are called RiskOutOfProcessServer7.exe, EvolverOutOfProcessServer7.exe, NeuralToolsOutOfProcessServer7.exe, PrecisionTreeOutOfProcessServer7.exe, and StatToolsOutOfProcessServer7.exe. We say that @RISK (Evolver, NeuralTools, ...) is running out of process, meaning that it doesn't run directly as part of 64-bit Excel's process, but instead routes communications with Excel through the bridge.

When you're interfacing 32-bit code with 64-bit code, that's normal. However, in 7.5.1 and earlier releases, even with 32-bit Excel 2013 and 2016, @RISK and the other tools ran out of process. With 7.5.2, that changed: when running with 32-bit Excel, the tools listed above now run in process by default. This removes a layer of code and should provide better performance and greater stability, since there's no longer a separate "out of process server" layer.

If your simulations involve Microsoft Project, you'll notice a very significant speedup from running @RISK 7.6 in process.

But there are many builds of Excel 2016 out there, not to mention future Excel updates, and it's not possible to test with all of them. It's possible, though not likely, that your particular build or configuration might experience a problem with running in process, such as flashing windows or windows not appearing at all, or other issues identified by Palisade Technical Support. If this happens, you can set Palisade software to run out of process and avoid the problems.

How do I set the software to run out of process?

These settings are recorded in the current user's profile. If people log in to this computer under different Windows usernames, the others will continue running in process unless they also follow one of these two methods.

Method A: If you can launch the software, and get into Utilities » Application Settings, it's easy. In the Advanced section at the end of Application Settings, change Operating Mode to out-of-process, and click OK.

TIP: If you have the DecisionTools Suite, and some tools are working, change this setting in one of the working tools, and it will offer to change it in the others, working and non-working. Remember to close Excel before re-testing the tool that has problems.

Method B: If you can't launch the software, or the Application Settings dialog won't come up, you can use the attached OutOfProcess Registry file.

  1. Close all open instances of Excel and Project.
  2. Download the attached OutOfProcess file.
  3. Change the extension from TXT to REG. (If you can't see the .TXT extension, see Making File Extensions Visible.)
  4. Double-click the REG file.

The REG file will set Registry keys for the five products listed above. If you don't have the DecisionTools Suite but only one or more individual products, the extra keys will do no harm.

What if I want to go back to running in process?

In Utilities » Application Settings » Advanced, set Operating Mode to in-process, or use the attached InProcess file.

Are there any known issues when running in process?

Here's what we've identified so far:

Last edited: 2018-11-28

1.15. Removing Outdated References to Office from the System Registry

Disponible en español: Quitar referencias obsoletas de Office del Registro del Sistema
Disponível em português: Removendo referências ultrapassadas para o Office a partir do Editor de Registro

Removing a version of Microsoft Office can sometimes leave behind "orphan" keys in the System Registry. These references to products that are no longer installed can prevent Palisade add-ins from working correctly with Excel, Project, or both — you may see messages such as "Application-defined or object-defined error", "Automation error: Library not registered", "Error in loading DLL", "Could not contact the Microsoft Excel application", "File name or class name not found during Automation operation", or "Object variable or with block variable not set". Results graphs or other graphs may not appear as expected.

To remove the outdated references, you will need to edit the System Registry, as detailed below. If you'd rather not edit the System Registry, or you don't have sufficient privilege, you may be able to work around the problem by starting Excel first and then the Palisade software. If you'd like to make Palisade software start automatically whenever Excel starts, please see Opening Palisade Software Automatically Whenever Excel Opens. Otherwise, please proceed as follows:

  1. Close Excel and Project.

  2. Click Start » Run, type REGEDIT and click OK.

{00020813-0000-0000-C000-000000000046} Key for Excel

  3. Click on Computer at the top of the left-hand panel, then press Ctrl+F to bring up the search window. Paste this string, including the curly braces {...}, into the search window:
    {00020813-0000-0000-C000-000000000046}
    Check (tick) the Keys box and Match whole string only; clear Values and Data.

  4. Click the + sign at the left of {00020813-0000-0000-C000-000000000046} to expand it. You will see one or more subkeys:

    • 1.5 for Excel 2003.
    • 1.6 for Excel 2007.
    • 1.7 for Excel 2010.
    • 1.8 for Excel 2013.
    • 1.9 for Excel 2016.

    Identify the one(s) that do not match the version(s) of Excel you actually have installed. If all of them do match installed Excel versions, omit steps 5 and 6.

  5. You are about to delete the key(s) that correspond to versions of Microsoft Excel that you do not have. For safety's sake, you may want to back them up first. Right-click on {00020813-0000-0000-C000-000000000046}, select Export, and save the file where you'll be able to find it.

  6. Right-click the 1.something key that does not belong, select Delete, and confirm the deletion. Repeat for each 1.something key that does not belong.

  7. The {00020813-0000-0000-C000-000000000046} key can occur in more places. Usually they all have the same subkeys, but not always, so you need to examine each instance. Tap the F3 key to get to each of the others in turn. For each one, repeat steps 4 through 6 (click the + sign, export the key to a new file, and delete the orphaned 1.something entries).
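If you have many computers to clean, the delete in step 6 can be scripted. A minimal VBA sketch, assuming the orphaned subkey is 1.5 (Excel 2003) under HKEY_CLASSES_ROOT\TypeLib, one common location of this key; export a backup first, as in step 5:

    Sub DeleteOrphanExcelTypeLibKey()
        ' reg.exe is used because it removes the key together with
        ' all of its subkeys; adjust the path to the subkey you
        ' identified. Requires administrative rights.
        Dim sh As Object
        Set sh = CreateObject("WScript.Shell")
        sh.Run "reg delete ""HKCR\TypeLib\{00020813-0000-0000-C000-000000000046}\1.5"" /f", _
            0, True
    End Sub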

{2DF8D04C-5BFA-101B-BDE5-00AA0044DE52} Key for Office

  8. Click on Computer at the top of the left-hand panel, then press Ctrl+F to bring up the search window. Paste this string, including the curly braces {...}, into the search window:
    {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52}
    Check (tick) the Keys box and Match whole string only; clear Values and Data.

  9. Click the + sign to expand the key. You will see one or more subkeys:

    • 2.3 for Office 2003.
    • 2.4 for Office 2007.
    • 2.5 for Office 2010.
    • 2.6 and 2.7 for Office 2013. (2.6 and 2.7 are okay for Office 2016 as well, if there is a reference to Office16 under 2.7.)
    • 2.8 for Office 2016.

    Identify the one(s) that do not match the version(s) of Office you actually have installed. If all of them do match installed Office versions, omit steps 10 and 11.

  10. You are about to delete the key(s) that correspond to versions of Microsoft Office that you do not have. For safety's sake, you may want to back them up first. Right-click on {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52}, select Export, and save the file where you'll be able to find it. (Choose a different name for this file, such as Key2.)

  11. Right-click the 2.something key that does not belong, select Delete, and confirm the deletion. Repeat for each 2.something key that does not belong.

  12. The {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52} key can occur in more places. Usually they all have the same subkeys, but not always, so you need to examine each instance. Tap the F3 key to get to each of the others in turn. For each one, repeat steps 9 through 11 (click the + sign, export the key to a new file, and delete the orphaned 2.something entries).

  13. Close the Registry Editor.

If you run @RISK with Microsoft Project, please follow the additional steps in Removing Outdated References to Project from the System Registry to find and remove outdated references to Microsoft Project.

The software should now run normally. After verifying @RISK (PrecisionTree, etc.), and running Excel independently of our software, you can delete the saved .REG files.

Last edited: 2016-04-01

1.16. Removing Outdated References to Project from the System Registry

Removing a version of Microsoft Project can sometimes leave behind "orphan" keys in the System Registry. These references to products that are no longer installed can prevent Palisade add-ins from working correctly with Project — you may see messages such as "Application-defined or object-defined error", "Automation error: Library not registered", "Error in loading DLL", "Could not contact the Microsoft Excel application", "File name or class name not found during Automation operation", or "Object variable or with block variable not set". Results graphs or other graphs may not appear as expected.

To search for and remove outdated COM Type Library registrations relating to Microsoft Project, please follow this procedure, which requires administrative rights:

  1. Close Excel and Project.

  2. Click Start » Run, enter the command REGEDIT and click OK.

  3. Click on Computer at the top of the left-hand panel, then press Ctrl+F to bring up the search window. Paste this string, including the {...}, into the search window:
    {A7107640-94DF-1068-855E-00DD01075445}
    Check (tick) the Keys box and Match whole string only; clear Values and Data.

    If the key does not exist, Microsoft Project may be able to re-create it. Close Registry Editor and open Project. There's no need to open an .MPP file. Close Project, then reopen Registry Editor and search for the key. If the key still does not exist, see Microsoft Project Installation below.

  4. If the key does exist, click the + sign at the left to expand it.  You will see one or more subkeys:

    • 4.5 for Project 2003
    • 4.6 for Project 2007
    • 4.7 for Project 2010
    • 4.8 for Project 2013
    • 4.9 for Project 2016

    Identify the one(s) that do not match the version(s) of Microsoft Project you actually have installed. If all of them do match installed Project versions, omit steps 5 and 6.

  5. You are about to delete the key(s) that correspond to versions of Microsoft Project you do not have. For safety's sake, you may want to back them up first.  Right-click on {A7107640-94DF-1068-855E-00DD01075445}, select Export, and save the file where you'll be able to find it.

  6. Right-click the 4.something key that does not belong, select Delete, and confirm the deletion.  Repeat for each 4.something key that does not belong.

  7. The {A7107640-94DF-1068-855E-00DD01075445} key can occur in more places. Usually they all have the same subkeys, but not always, so you need to examine each instance. Tap the F3 key to get to each of the others in turn. For each one, repeat steps 4 through 6 (click the + sign, export the key to a new file, and delete the orphaned 4.something entries).

  8. Close Registry Editor.

Launch @RISK, and you should now be able to import .MPP files.  After verifying @RISK, and running Project independently of @RISK, you can delete the saved .REG file.

Microsoft Project Installation

If the {A7107640-94DF-1068-855E-00DD01075445} key does not exist, and Microsoft Project did not re-create it, you may have a problem with your installation of Project.

  1. In Project, make sure you have the latest service pack installed. If not, download it and install it.

  2. In Control Panel » Programs and Features (or Add or Remove Programs), do a Repair of Microsoft Project. (If Project, Microsoft Project, or Microsoft Office Project does not appear on a separate line, it is part of the Microsoft Office line and you should repair that.)

  3. If all else fails, uninstall and reinstall Project, or uninstall and reinstall Office if Project is part of the Office install. (We had one customer with this issue, and none of the above worked for him, but the uninstall and reinstall solved the problem.)

Excel's COM Registrations

If the above don't solve the problem, the culprit could be Excel's COM registrations rather than Project's. Please see Removing Outdated References to Office from the System Registry to check for incorrect Excel COM registrations and remove them. The procedure is similar to the procedure in the first section of this article, but the keys are different.

Last edited: 2016-04-01

1.17. Programming Languages and History of @RISK

Applies to:
All products, releases 6.x–7.x

I've always wondered: what programming language are your products written in? And when was @RISK first released?

Current versions use a mix of C++, Visual Basic 6, Visual Basic for Applications, Visual Basic .NET, and C#.

The first version of @RISK for Lotus 1-2-3 was released at the beginning of October 1987. Its predecessor product, PRISM for DOS, was released in April of 1984. Earlier versions of PRISM for Apple II were in use starting in 1982, before Palisade was organized as a company in 1984.

@RISK for Excel first appeared some time before July 1993—the earliest user manual in our archives has that date for @RISK 1.1.1—and @RISK for Project in 1994. In July 2012, @RISK 6.0 integrated support for Excel and Project.

For more Palisade history, please see About Palisade.

Last edited: 2018-06-28

1.18. "Update Available" (7.x)

Disponible en español: "Actualización Disponible"

Applies to: All products, releases 7.x
If you have a 6.x release, even if the update message references a 7.x release, see "An update is available." (6.x).

When I launch my Palisade software, I get a popup telling me a product update is available. I'd like to update, but I have to wait for my IT department to install it; or maybe I just prefer not to update at this time. Can I suppress the popup?

If you just click "Don't update", the reminder will appear again the next time you run the software.  But you can "snooze" it for about a month by clicking "Remind me in 30 days".

To disable the reminder for yourself alone, download the attached DisablePalisadeUpdateAutoCheck(user).reg and double-click it. To check for updates once, click Help » Check for Software Updates. To re-enable the automatic check, download EnablePalisadeUpdateAutoCheck(user).reg and double-click it.

To disable the reminder for all users on a given computer, follow this procedure (a scripted equivalent is sketched after the steps):

  1. Close Excel and Project.
  2. Click Start » Run, type regedit and press the Enter key.
  3. To suppress updates for all users on this machine, navigate to HKEY_LOCAL_MACHINE\Software\WOW6432Node (if it exists) or HKEY_LOCAL_MACHINE\Software.
  4. Expand that key, and under it click Palisade.
  5. In the right-hand panel, you should see a value called CheckForUpdatesDisabled — if it's not there, right-click an empty area and select New » String Value to create it.
  6. Double-click CheckForUpdatesDisabled and enter the value True to disable update notices.
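Here is a minimal VBA sketch of steps 3 through 6, assuming 64-bit Windows (drop WOW6432Node on 32-bit Windows):

    Sub DisableUpdateCheckForAllUsers()
        ' Writing to HKLM requires administrative rights. Set the
        ' value to "False", or delete it, to re-enable the check.
        Dim sh As Object
        Set sh = CreateObject("WScript.Shell")
        sh.RegWrite "HKLM\Software\WOW6432Node\Palisade\CheckForUpdatesDisabled", _
            "True", "REG_SZ"
    End Sub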

Users can still check for updates when they wish, by running the software and clicking Help » Check for Software Updates.

To re-enable the automatic check for updates, change the value to False or simply delete CheckForUpdatesDisabled.

Additional keywords: Upgrade available, Upgrade prompt, Update prompt, Upgrade message, Update message

Last edited: 2020-07-28

1.19. "An update is available." (6.x)

Disponible en español: "Hay una actualización disponible" (6.x)
Disponível em português: "Há uma atualização disponível" (6.x)

Applies to:
All products, releases 6.2.0–6.3.1

Scenario:
When you launch your Palisade software, you get a message similar to

An update is available.

A newer @RISK (version 6.3.0) was released on 2014-06-30.
Your maintenance contract entitles you to this update free of charge.

You'd like to update, but you have to wait for your IT department to install it.  Or perhaps you prefer not to update at this time.  Can you suppress the message?

Response:
If you just click "Don't update", the reminder will appear again the next time you run the software.  But you can "snooze" it for about a month by clicking "Remind me in 30 days".

If you want to suppress the reminder for a longer time, you can follow this procedure (a scripted equivalent is sketched after the steps):

  1. Click the "Remind me in 30 days" button.  (This is an easy way to create the necessary key in the System Registry. But if your Palisade software isn't currently running, you can create the key in step 5 below.)
  2. Close Excel and Project.
  3. Click Start » Run, type regedit and press the Enter key
  4. Navigate to HKEY_CURRENT_USER\Software\Palisade.
  5. In the right-hand panel, you should see a value called SuppressProductUpdateMessages — if it's not there, right-click an empty area and select New » DWORD.
  6. Double-click SuppressProductUpdateMessages and enter a Julian day, such as 41934 for 2014-10-22 or 47848 for the last day of the year 2030. (Easy way to find a Julian day: type a date in Excel and then format it as a number.)
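Steps 4 through 6 can also be scripted. A minimal VBA sketch; DateSerial computes the same serial number ("Julian day") Excel would show:

    Sub SnoozeUpdateNotice()
        ' Suppress the update notice until the end of 2030; the date
        ' serial for 2030-12-31 is 47848. Adjust the date to taste.
        Dim sh As Object
        Set sh = CreateObject("WScript.Shell")
        sh.RegWrite "HKCU\Software\Palisade\SuppressProductUpdateMessages", _
            CLng(DateSerial(2030, 12, 31)), "REG_DWORD"
    End Sub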

The update notice will not appear again until the selected day.

Last edited: 2015-06-17

1.20. Which License Gets Used?

Disponible en español: ¿Qué licencia esta siendo utilizada? (6.x/7.x)
Disponível em português: Múltiplas Licenças - Qual será utilizada? (6.x/7.x)

Applies to: All 6.x/7.x releases, standalone and Concurrent Network client

I have more than one license, possibly even mixed between network client, activated standalone, and trial. How does the software decide which license to use?

In general, each application—@RISK, Evolver, NeuralTools, PrecisionTree, StatTools, TopRank—remembers which license you used last, and tries to reuse it the next time you run that same application. That seems like a simple idea, but in particular situations the rule can work out in surprising ways. This article explains how a Palisade application decides which license to use each time.

The key concept is the "license to use". Each application remembers the license to use, and tries to use the same license that it used last time. If you go into License Manager » Select License and pick a different license, the application remembers the new license to use for you only, not for anyone else who might log in to the same computer.

Each application remembers this separately, so different components of the DecisionTools Suite can use different licenses.

But what about the first time I run @RISK? There's no "license to use" in my user profile, because I've never run the application, so how does @RISK know which license to use?

Each application actually has two "license to use" settings, one at the machine level and one at the user level. The application uses whichever one was set more recently.

The machine-level license to use is set at install time, and the user-level license to use is set at run time. Details:

  • The installer for a standalone license presents a Customer Information screen, and sets the machine-level license to use depending on your selection. If you select "I am upgrading" on the Customer Information screen, the installer doesn't change the license to use.

  • The installer for a Concurrent Network client sets the machine-level license to use to "Network:", which tells the client software to get a license dynamically from servers listed in the System Registry key HKEY_LOCAL_MACHINE\Software\WOW6432Node\FLEXlm License Manager\PALISADE_LICENSE_FILE. (Omit WOW6432Node in 32-bit Windows.)

  • The installer for a course license or a textbook license sets the machine-level license to use to that license.

  • License Manager in each application sets the user-level license to use, when you click OK in Select License or when you activate a license. If you select or activate a DecisionTools Suite license, License Manager sets the user-level license to use for all products in the Suite.

  • When you deactivate a license in License Manager, the license to use is not changed, so the next time you run the application License Manager will appear and prompt you to select or activate a license.

Technical detail: Licenses activated during install are remembered in HKEY_LOCAL_MACHINE in the System Registry, but licenses activated or selected in License Manager are remembered only in HKEY_CURRENT_USER.

If I have a DecisionTools Suite license and an @RISK license, or @RISK Industrial and @RISK Professional licenses, what determines which one is used the first time I run @RISK?

(@RISK is just an example here. All the same rules apply to Evolver, NeuralTools, PrecisionTree, and StatTools. TopRank requires a DecisionTools Suite license.)

The first time you run @RISK, when no user-level license to use has been set, @RISK looks for an available license depending on which installer was run most recently. If the latest or only install was the DecisionTools Suite, then @RISK will try, in order, to use a DecisionTools Industrial license, DecisionTools Professional, @RISK Industrial, @RISK Professional, and @RISK Standard. If the latest or only install was @RISK, then @RISK will try first for an @RISK license and then for a DecisionTools Suite license. Whichever one it finds, it records that as the user-level license to use and will use the same one next time you run @RISK.

When you run a Concurrent Network client version of @RISK for the first time, it goes through the same process if it was set up to use just one license server. If it was set up with multiple license servers, then it looks on all available servers for each type of license, before moving on to look on all available servers for the next type of license. Whichever one it finds, it records the license type but not the specific server in the user-level license to use.

Then @RISK can use a DecisionTools Suite license?

@RISK can use a DecisionTools Suite license, even if only @RISK is installed and not the whole Suite. If the Suite is installed, and you have an @RISK license, you can use @RISK on that license but the other components of the Suite can't use the @RISK license.

This gives you a lot of flexibility. For example, you might have an activated license of @RISK but decide to install the DecisionTools Suite as a trial, or on a short-term training license. Via License Manager » Select License, you can use your activated @RISK license but run the other components of the Suite on your trial or training license.

Concurrent Network "seats" for the DecisionTools Suite are not divisible. If you have a one-user Concurrent Network license, two people can't use the Suite at the same time, whether they're using the same component or different components. If you have a two-user Concurrent Network license, two people can use the Suite at the same time, but they are taking both "seats" between them, whether they're using the same component or different components.

What if the license I was using becomes unavailable—it expires, or it's a Concurrent Network license and all seats happen to be taken? Is there automatic failover if another license is available?

In a Concurrent Network client, the software will automatically fail over to any unexpired license for the same product and edition, if one exists on any server listed in PALISADE_LICENSE_FILE (above), but it won't automatically use a license for a different product or edition on any server. In the latter case, the user can still click Select License in License Manager to see if any suitable licenses are available.

For other license types, there's no automatic failover. The software will tell you that the license is no longer usable. In License Manager, you can then click Select License and select the other license. The application will remember that choice next time.

Can you give some examples?

  1. You have DecisionTools Suite Professional (activated) and you install a trial of @RISK Industrial (trial). Whenever you run Evolver, NeuralTools, PrecisionTree, StatTools, or TopRank, it will continue to use the DecisionTools Suite Professional license. The next time you run @RISK, you will get the Industrial trial license, but you can switch to the activated Professional license by clicking Help » License Manager » Select License.

  2. You have @RISK (activated), and you install a DecisionTools Suite Industrial trial. Whenever you run any application in the Suite, including @RISK, it will use the DecisionTools Suite Industrial trial license. If you are preparing a presentation and want to avoid the Trial watermarks in your graphs, you can switch @RISK to the activated license by clicking Help » License Manager » Select License. After that, @RISK will continue to use the activated license, but the other components will use the trial license.

  3. You install @RISK without activating it, so all user profiles are running on a trial license. Later, you activate the software. @RISK remembers to use the activated license for you, but it still remembers the trial for another user who previously ran on the trial license. That user can select the activated license via Select License in License Manager. See One User Doesn't Get the Activated License.

  4. A Concurrent Network client of the DecisionTools Suite was installed on your computer, and you launch @RISK. Your company's license server has an @RISK Industrial license and a DecisionTools Suite Professional license. Since the last product installed was the Suite, @RISK uses the DecisionTools Suite Professional license. However, you can open License Manager » Select License and select the @RISK Industrial license, and @RISK will remember your selection the next time.

  5. Your company has two license servers, A and B, and the Evolver install on your computer is set up to use both of them in that order. A has a Concurrent Network license for the DecisionTools Suite Professional, and B has Evolver Industrial. The first time you run Evolver, even though server A is listed first, Evolver will use the Evolver Industrial license from server B, because the software tries to choose a license that matches exactly what was installed.

  6. You are a university IT administrator, and you install standalone copies of the DecisionTools Suite in your computer lab, with the course license. A year later, you place the next year's license on all the lab computers. Each student who tries to run the software gets a message that no license is available, and must use Select License to select the new license. (This happens because the license to use is stored separately for each user. To override the old setting for all users, you must reinstall the software with the new license, or use the System Registry edit shown in Changing Standalone Workstation to Concurrent Client.)

Last edited: 2015-07-30

1.21. Automating Palisade Software

Applies to:
Palisade Custom Runtime (PCR)
Palisade's Excel add-ins, releases 5.x–7.x

How can I control @RISK or my other products through Visual Basic for Applications (VBA)?

The Excel Developer Kits (XDKs) ship with the Professional and Industrial Editions of our products. These are Visual Basic libraries that let you control @RISK and our other applications. They let you exercise maximum control with minimum development time, but your user must purchase and install the Palisade application. For an introduction to using an XDK, run the product and click Help » Developer Kit (XDK) » Automation Guide. For a complete reference to all objects, properties, and methods, see the XDK Reference in the same menu.
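
As a small illustration of what XDK automation looks like, the sketch below sets an iteration count and starts a simulation from VBA. The member names NumIterations and Simulation.Start are assumptions for illustration only; confirm the exact objects, properties, and methods in the XDK Reference for your release, and set the required VBA reference to the @RISK library first (see Setting References in Visual Basic).

Sub RunRiskFromVBA()
    ' Sketch only -- member names are assumed; check the XDK Reference.
    Risk.SimulationSettings.NumIterations = 1000   ' assumed property name
    Risk.Simulation.Start                          ' assumed method name
End Sub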

Can I include the calculations in my own applications with my own user interface instead of Palisade's?

For this requirement, we offer the Palisade Custom Runtime (PCR). The PCR contains the calculation engines of most of our Excel add-ins. This means that PCR applications can run on computers that don't have @RISK or our other add-ins, or even Excel. Applications using the PCR are developed as part of a customized development agreement with Palisade. Please visit our Custom Development page or consult your Palisade sales manager or sales@palisade.com for more information about the PCR and custom development.

Additional keywords: Automation of Palisade software

Last edited: 2017-06-19

1.22. Example Files from Palisade-Published Books

Applies to these titles:
Decision Making under Uncertainty with RISKOptimizer
Decisions Involving Uncertainty: @RISK for the Petroleum Industry
Energy Risk Modeling
Evolver Solutions for Business
Financial Models Using Simulation and Optimization, v1 or v2
Learning Statistics with StatTools
Modelos Financieros com Simulación y Optimización
RISKOptimizer for Business Applications
El Riesgo en la Empresa: Medida y control mediante @RISK
@RISK Bank Credit and Financial Analysis

I bought a book from Palisade. Where can I download the examples used in the book?

Please follow this link to download the examples from any of the listed books.

Last edited: 2019-03-05

1.23. Heartbleed Bug and Palisade Software

Applies to:
All products

Question:
I've been hearing about this Heartbleed security problem in OpenSSL code. Are @RISK and the other Palisade products vulnerable?

Response:
No. The only Internet operations are Automatic Activation (if you select it) and checking for updates, both of which connect with our server.  Our server does not use OpenSSL to support these operations.  We have checked with Flexera Software, which provides our licensing software, and they have verified that the modules that we use are clean.

See also:
Heartbleed is listed as CVE-2014-0160 by the U.S. Department of Homeland Security.

last edited: 2014-05-07

1.24. Troubleshooter for Releases 7.x: PalDiagnostics7

Applies to:
All releases 7.x
BigPicture releases 1.x and 2016

If you have earlier software, these diagnostics won't work. Use Troubleshooter for Releases 6.x: PalDiagnostics6 or Troubleshooter for Releases 5.x: PalDiagnostics5.

Exception: Enhanced testing for system DEP status and process DEP status, introduced on 2017-01-09, is available only in PalDiagnostics7. Those tests will work even if you have release 5.x or 6.x Palisade software, though many other tests will fail.

This utility will take a snapshot of the license and other settings on your computer for Palisade software release 7.x and for Excel, to help us figure out just what's wrong and how to fix it. Nothing is installed or changed on your computer. This utility simply copies the relevant settings and information to a file called PalDiagnostics7.txt in your temporary folder.

  1. Download the attached file to your desktop. (In your browser, select Save, not Open or Run.)

  2. If you have Windows 7, 8, 10, or Vista, right-click on the file and select Run As Administrator. If you have an earlier version of Windows, double-click on the file to launch the utility. (Either way, it's important to run the diagnostics within the end user's login, because some settings vary from one user profile to another. Ask an IT person to help you if you are the end user and unable to run the utility.)

  3. Click the button Run Tests. (Enable Runtime Logging is not needed.)

  4. When the utility finishes, you'll see a window on your desktop that contains a file called PalDiagnostics7.txt. Click File » Save As, and save the file to your desktop or any convenient location.

  5. Attach the saved PalDiagnostics7.txt file to your email to Tech Support; don't paste the contents of the file into the body of your email.

Please note: The diagnostics utility is meant for your use in conjunction with Palisade Technical Support. While you're certainly free to look at the output, it's not presented in a user-friendly way.

Last edited: 2018-07-26

1.25. Troubleshooter for Releases 6.x: PalDiagnostics6

Applies to: All products, releases 6.x

If you have earlier or later software, these diagnostics won't work. Use Troubleshooter for Releases 7.x: PalDiagnostics7 or Troubleshooter for Releases 5.x: PalDiagnostics5.

Please note: The diagnostics utility is meant for your use in conjunction with Palisade Technical Support. While you're certainly free to look at the output, it's not presented in a user-friendly way.

This utility will take a snapshot of the license and other settings on your computer for Palisade software release 6.x and for Excel, to help us figure out just what's wrong and how to fix it.

Nothing is installed or changed on your computer. This utility simply copies the relevant settings and information to a file called PalDiagnostics6.txt in your temporary folder.

  1. Click this link.

  2. In your browser, select Save, not Open or Run, and save the file to your desktop.

  3. If you have Windows 7, 8, 10, or Vista, right-click on the file and select Run As Administrator. If you have an earlier version of Windows, double-click on the file to launch the utility. (Either way, it's important to run the diagnostics within the end user's login, because some settings vary from one user profile to another. Ask an IT person to help you if you are the end user and unable to run the utility.)

  4. Click the button Run Tests. (Enable Runtime Logging is not needed.)

  5. When the utility finishes, you'll see a window on your desktop that contains a file called PalDiagnostics6.txt. Click File » Save As, and save the file to your desktop or any convenient location.

  6. Attach the saved PalDiagnostics6.txt file to your reply email; don't paste the contents of the file into the body of your email.

Last edited: 2018-05-21

1.26. Troubleshooter for Releases 5.x: PalDiagnostics5

Applies to: All products, releases 5.x

If you have later software, these diagnostics won't work. Use Troubleshooter for Releases 7.x: PalDiagnostics7 or Troubleshooter for Releases 6.x: PalDiagnostics6.

Please note: The diagnostics utility is meant for your use in conjunction with Palisade Technical Support. While you're certainly free to look at the output, it's not presented in a user-friendly way.

This utility will take a snapshot of the license and other settings on your computer for Palisade software release 5.x and for Excel, to help us figure out just what's wrong and how to fix it.

Nothing is installed or changed on your computer. This utility simply copies the relevant settings and information to a file called PalDiagnostics5.txt in your temporary folder.

  1. Click this link.

  2. In your browser, select Save, not Open or Run, and save the file to your desktop.

  3. If you have Windows 7, 8, 10, or Vista, right-click on the file and select Run As Administrator. If you have an earlier version of Windows, double-click on the file to launch the utility. (Either way, it's important to run the diagnostics within the end user's login, because some settings vary from one user profile to another. Ask an IT person to help you if you are the end user and unable to run the utility.)

  4. Click the button Run Tests.

  5. When the utility finishes, you'll see a window on your desktop that contains a file called PalDiagnostics5.txt. Click File » Save As, and save the file to your desktop or any convenient location.

  6. Attach the saved PalDiagnostics5.txt file to your reply email; don't paste the contents of the file into the body of your email.

Last edited: 2016-06-01

1.27. Re-Register All Libraries

Applies to:
All products, releases 7.5.x/7.6.x

This file is intended for use under guidance of Palisade Tech Support.

We have seen one or two cases where the installer ran without error, but the interfaces for our software were not registered. We don't know what prevented the registrations from being made during install, or what broke them after install, but we have a batch file that should re-register everything without running the installer. It also contains extra error checking aimed specifically at this issue. However, it doesn't diagnose missing files; it assumes those are part of products you haven't installed.

If Palisade Technical Support representatives direct you to this article, please follow the directions and report the results back to them.

  1. Save the attached file to any convenient folder.
  2. Open the folder where the file was saved. Press and hold the Shift key, right-click the saved KB1676 file, and select Copy as path. Release the Shift key.
  3. Open an administrative command prompt—see this article if you're not sure how to do it. (You must open an administrative command prompt. It's not enough to right-click the saved file and select Run as Administrator.)
  4. Click into the command prompt window, right-click and select Paste. Press the Enter key.

If the registrations all run successfully, you'll get "Success!" in the window. In that case, retry the operation that was a problem before.

If you get "FAILURE" in the window, take a screenshot of the command window and of any popup window, and send them to the Palisade representative.

Last edited: 2018-10-09

2. @RISK: General Questions

2.1. Getting Started with @RISK

I've just installed @RISK. How do I learn to use it? Where do I start?

Welcome to @RISK! If you can represent your problem as a base case in Excel, you can add @RISK to that model to analyze and model uncertainty.

We have videos for every stage of your learning:

  • Are you new to the concepts of risk analysis with Monte Carlo simulation?  View our Introduction to Risk Analysis Using @RISK.
  • Understand risk analysis in general, but new to @RISK? Watch the Quick Start from beginning to end. It's not very long, but it will show you the steps in modeling with our software, and give you some best practices.
  • Need more on particular topics? Take our Guided Tour. When it begins to play, a menu down the left side lets you jump to whatever topic you want to know more about.

@RISK also comes with numerous examples preinstalled. (In the @RISK menu, click Help » Example Spreadsheets.) These are generally small "toy" models to show you particular techniques or illustrate various applications or features of the software.

You'll find answers to a lot of frequently asked questions under Techniques and Tips in our searchable Knowledge Base. And if you see a message you don't understand, chances are good you'll find it, with a solution, in our Troubleshooting section.  Tech Support can also help you with messages that aren't clear, or with particular features of the software.

More intensive training is available in on-demand webinars and live webinars. We also offer in-person regional training.

Last edited: 2018-12-14

2.2. @RISK User Groups

Question:
I'd like to connect with other @RISK users online. Do you have any kind of user group?

Response:
Yes, there is a LinkedIn group here: "Palisade Risk and Decision Analysis". 

There are also Palisade's own blogs for all products. 

For other venues like Facebook and Twitter, please visit our corporate directory and hover your mouse on SUBSCRIPTION near the top of the page.

last edited: 2021-09-28

2.3. Iterations versus Simulations versus Trials

Applies to:
@RISK 6.x/7.x ("Trials" applies to @RISK Industrial)

What's the difference between iterations and simulations in Simulation Settings?  Which one should I set to which number?

An iteration is a smaller unit within a simulation. At each iteration, @RISK draws a new set of random numbers for the @RISK distribution functions in your model, recalculates all open workbooks or projects, and stores the values of all designated outputs. At the end of a simulation, @RISK prepares any reports you have specified.

For example, if you run 5000 iterations and 3 simulations, then at the end of the analysis you can look at three histograms for each @RISK output. Each histogram summarizes the 5000 values for the 5000 iterations of one of the three simulations.

You can set the numbers of iterations and simulations in the @RISK ribbon, or on the General tab of Simulation Settings. For most analyses, you will want N iterations and 1 simulation. If you use the same set of assumptions for all simulations, you will usually get better results with one simulation of 15000 iterations than with three simulations of 5000 iterations.

But setting simulations greater than 1 is useful in several situations, such as these examples:

  • Suppose one or more unknown quantities are under your control, such as several different prices you might charge or several different raw materials you might use.  You would like to know what the different choices would do to your bottom line.  In this case the different values of the unknown quantity(ies) would be in one or more RiskSimtable functions; see the example after this list.  See also the topic "Sensitivity Simulation" in your @RISK manual or @RISK help.

  • In a similar way, if you have several assumptions or scenarios you can embed them in one or more RiskSimtable functions and run one simulation on each, all as part of one analysis.

  • To test the stability of your model, you might run several simulations with the same model and without RiskSimtable functions.  If the simulation results are fairly close, you know that your model is stable; if they vary significantly, you know that your model is unstable or you are not running enough iterations.  For simulation settings to set the random number seed, see "Multiple @RISK Simulation Runs" in Random Number Generation, Seed Values, and Reproducibility.
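
For example, to compare three candidate prices in a single analysis, you could set Number of Simulations to 3 and enter a RiskSimtable in the price cell (the values here are hypothetical). Simulation 1 uses the first value, simulation 2 the second, and so on:

=RiskSimtable({8.95, 9.95, 10.95})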

I'm running an optimization with RISKOptimizer. How do trials relate to simulations or iterations? Why is the number of valid trials different from the number of trials?

RISKOptimizer places a set of values in the adjustable cells that you designated in the Model Definition, then runs a simulation. At the end of the simulation, RISKOptimizer looks at the result and decides whether enough progress has been made to declare the optimization finished. That is one trial. On the next trial, RISKOptimizer places a different set of values in the adjustable cells—using the results of earlier trials to decide which values—and then runs another simulation.

The difference between trials and valid trials depends on your hard constraints. A valid trial is one that meets all hard constraints. If a trial is not a valid trial, RISKOptimizer throws away the result of that simulation. If your proportion of valid trials to total trials is small, you may want to look at restructuring your model so that the optimization can make progress faster. For more, see For Faster Optimizations.

Additional keywords: Simtable

Last edited: 2018-06-11

2.4. Simulation versus Optimization

Applies to:
@RISK 5.x and newer, Industrial Edition
Evolver, all releases

What's the difference between simulation and optimization? Does a simulation just add stochasticity to an optimal value?

It kind of goes the other way, actually.

Initially, you probably have a deterministic model in mind. If you want to know what choices you should make in a deterministic setting, you use Evolver to do a deterministic optimization.

But it's more common to take that deterministic model and replace some constants with probability distributions. This reflects your best estimates of the effects of chance — events you can't control. These probability distributions are inputs to @RISK. You also identify outputs of @RISK, Excel cells that are the results of your logic, and whose values you want to track in the simulation. Then, you run an @RISK simulation to determine the range and likelihood of outcomes, taking chance effects into account. This can be done in any edition of @RISK.

See also: Risk Analysis has much more about deterministic and stochastic risk analysis.

An optimization asks a higher-level question while still keeping the probabilistic elements: what about the things you can control? What choices can you make that improve your chances of a favorable outcome? You identify in your model the constants that represent choices you can make; these are called adjustable cells. You can place constraints on those cells, and additional constraints on the model if appropriate. Your model still keeps the probability distributions mentioned above for events that are outside your control. Now you run an optimization in the RISKOptimizer menu within @RISK Industrial Edition.

RISKOptimizer starts with one possible set of choices — one set of adjustable cell values — and then runs a simulation to find out the probabilistic range of outcomes if you made those choices. It then chooses another set of adjustable cell values and runs a new simulation. The optimizer continues this process, making different sets of choices for the adjustable cells and running a full simulation on each set. Some sets of choices have a better outcome than others, as measured by the target you specified for optimization; this guides RISKOptimizer in deciding which sets of choices to try next, because they're more likely to improve the outcome. Every set of choices gets a full simulation.

At the end of the optimization, you have a set of best values for your adjustable cells. These tell you the choices to make so as to maximize your chance of getting the most favorable outcome, based on your target. And the simulation with that set of adjustable cells tells you the range and probabilities of your outcomes.

Last edited: 2016-03-14

2.5. Different Results with the Same Fixed Seed for various distributions

También disponible en Español: Resultados diferentes con la misma semilla fija para varias distribuciones

Problem:

The random number sequences generated for each distribution in a model can differ when the model is simulated in these scenarios:

  1. Simulate the same model with @RISK and with a Custom Development API run under the PCR or SDK, and compare the random number sequences.
  2. Simulate the same model simultaneously in two open workbooks and compare the random number sequences. See also Different Results with Multiple Workbook Copies.


Explanation:

The difference can occur because the distributions in the two models are defined differently. One model could have more distributions than the other, or the same number of distributions defined in a different order.

Since @RISK is sampling a different number of distributions or a list of distributions ordered differently, they are effectively distinct models, and the same results should not be expected.  The same model will always produce the same results using the same fixed seed.

For example, suppose I am using a fixed seed that happens to generate the following numbers using RiskUniform(0,1):  0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7.  (Of course, a pattern like this is enormously unlikely; it's just chosen to make the example easy to follow.)

If I use that same fixed seed and run a simulation with seven iterations, I will always get those values precisely in that order.  In other words:

        Iteration    Value of RiskUniform(0,1)

            1                 0.1
            2                 0.2
            3                 0.3
            4                 0.4
            5                 0.5
            6                 0.6
            7                 0.7

One difference is the number of distributions. So, if I add another RiskUniform(0,1) to the spreadsheet and run seven iterations, the results will be different.  The same seed list is generated and used, but now two distributions are sampled from it.  In other words:

        Iteration    Value of first RiskUniform(0,1)    Value of second RiskUniform(0,1)
            1                 0.1                                          0.2
            2                 0.3                                          0.4
            3                 0.5                                          0.6
            4                 0.7                                          [next value in seeded list]

Another reason for the difference is that distributions were defined in a different order. So if I add a RiskPoisson(5) and then another RiskUniform(0,1), or first the RiskUniform and then the RiskPoisson, and run seven iterations, the results will be different.  The same fixed seed is used, but now three distributions are sampled from it. Suppose the numbers generated for the RiskPoisson are 2, 4, 7, 5, 1, 3, 6.

In other words:
 

        Iteration    Value of first RiskUniform(0,1)    Value of second RiskUniform(0,1)    Value of RiskPoisson(5)

            1                    0.1                                0.2                               7.0
            2                    0.4                                0.5                               3.0
            3                    0.7                    [next value in seeded list]        [next value in seeded list]

Or, with a different order:

        Iteration    Value of first RiskUniform(0,1)    Value of RiskPoisson(5)    Value of second RiskUniform(0,1)

            1                    0.1                              4.0                              0.3
            2                    0.4                              1.0                              0.6
            3                    0.7                    [next value in seeded list]      [next value in seeded list]

The first model with only one distribution will always produce the same results for the same fixed seed. 

The other models, with two or three distributions, will also each produce the same random number sequence for the same fixed seed. But their samples will not match each other's, because the numbers are assigned to the distributions differently. Ultimately, all samples converge to the same desired distributions (that is, the same statistics), making their results correct and comparable.

Also, consider that if Latin Hypercube sampling is in effect, we can no longer draw purely random samples, since we need to ensure that a random sample is drawn from each Latin Hypercube bin. For this reason, samples may stop being identical after a few iterations, differing in their last digits. Read more about this in Latin Hypercube Versus Monte Carlo Sampling.

If identical answers are critical, perhaps another approach is in order. These are some possible solutions:

  1. Use Monte Carlo as your Sampling Type.
  2. Supply the variable data directly.  For instance, distribute a list of numbers saying, "Here are the monthly interest rates for the next five years."
  3. Add the RiskSeed property function to each distribution in the model. This way, each distribution has its own unique sequence of random numbers, no matter the order in which the distributions are defined. (A sketch follows this list.)
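
As a minimal sketch of option 3, the formula below attaches an independent seeded stream to one distribution. The values are illustrative: 12345 is an arbitrary seed, and the first argument is assumed here to select the generator type, so check "RiskSeed" in the @RISK function reference for the exact argument list.

=RiskNormal(100, 10, RiskSeed(1, 12345))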

Last update: 2023-02-09

2.6. Placing Number of Iterations in the Worksheet

Applies to: @RISK for Excel 4.x–7.x

@RISK puts the number of iterations into reports and windows, but how can I get it into my worksheet?  I need to use it in calculations.

With @RISK 6.x/7.x, use the formula =RiskSimulationInfo(4).

With @RISK 5.7 and below, place =RiskCurrentIter( ) in any convenient cell, say for example AB345, and then the formula =RiskMax(AB345) in any other cell will give you the number of iterations upon completion of the simulation.
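
For example, in @RISK 6.x/7.x you could estimate the standard error of the mean of an output in cell B11 (a hypothetical reference) right in the worksheet, using the iteration count in the denominator:

=RiskStdDev(B11) / SQRT(RiskSimulationInfo(4))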

Additional keywords: SimulationInfo, CurrentIter

Last edited: 2015-06-08

2.7. How Many Iterations Do I Need?

Applies to: @RISK for Excel 5.x–7.x

How many iterations do I need to run in my simulation so that the estimate of the mean is calculated within a specific confidence interval?

The answer depends on whether you're using traditional Monte Carlo sampling or the default Latin Hypercube sampling.

Monte Carlo Sampling:

(This part of this article is adapted from "How Many Trials Do We Need?" in the book Simulation Modeling Using @RISK by Wayne L. Winston [Duxbury, 2000].)

The attached example, ConfIntervalWidth2.xls, uses traditional Monte Carlo sampling. Let's suppose that we want to use simulation to estimate the mean of the output in cell B11 and be accurate within 5 units 95% of the time. The number of iterations needed to meet these requirements can be calculated using the following formula:

n = [ zα/2 S / E ] ²

In this formula,

  • n is the number of iterations needed.
  • S is the estimated standard deviation of the output.
  • E is the desired margin of error (in this case, 5 units). The width of the confidence interval is twice the margin of error.
  • zα/2 is the critical value of the normal distribution for α/2, the z value such that the area of the right-hand tail is α/2. It is the number that satisfies P(Z>zα/2) = α/2, where Z follows a normal distribution with mean 0 and standard deviation 1. α/2 can be found by setting the desired confidence level equal to 100(1-α) and solving for α.

For a 95% confidence level, as shown in the attached example, 95 = 100(1-α).  Then α is 0.05 and α/2 is 0.025. To compute zα/2 in Excel, use the NORMSINV function and enter =NORMSINV(1-α/2); for a 95% confidence level that is =NORMSINV(0.975). Cell E13 of the attached example shows a Z value of approximately 1.96 for a 95% confidence interval.

To obtain an estimate for the standard deviation of the output, the @RISK statistics function RiskStdDev was placed in cell B14 and a simulation was run with just 100 iterations. This gave us a standard deviation of approximately 53.5. If we plug the above information into our formula, we get

n = [ 1.96 × 53.5 / 5 ] ² = 440

Thus, if you use Monte Carlo sampling, you should run at least 440 iterations to be 95% sure that your estimate of the mean of the output in cell B11 is accurate within ±5 units.
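
If you prefer to let the worksheet do this arithmetic, a one-cell version of the calculation (assuming, as in the attached example, that the estimated standard deviation is in B14 and the margin of error is 5) could be:

=ROUNDUP((NORMSINV(0.975) * B14 / 5)^2, 0)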

Latin Hypercube Sampling:

The Latin Hypercube method produces sample means that are much closer together for the same number of iterations. With the Latin Hypercube method, a smaller number of iterations will be sufficient to produce means within the desired confidence interval, but there's no simple calculation to predict the necessary number. See Latin Hypercube Versus Monte Carlo Sampling, and the section "Confidence Interval with Latin Hypercube Sampling" in Confidence Intervals in @RISK.

Convergence:

Rather than try to pre-compute the necessary number of iterations, you may find it simpler just to set your convergence criteria and let @RISK run until the desired level of confidence has been reached. In Simulation Settings, on the General tab, set the number of iterations to Automatic. Then on the Convergence tab, set your convergence criteria. Notice that the margin of error ("Convergence Tolerance") is set to a percentage of the statistic being estimated, not to a number of units.

Last edited: 2015-06-08

2.8. SQL for @RISK Library on Another Computer

Applies to: @RISK Professional and Industrial Editions, releases 6.2, 6.3, and 7.x

Do I need SQL on my computer to read and write @RISK libraries on other computers?

Yes, the @RISK Library on the local computer needs compatible SQL software to talk to the remote database. If you're not storing @RISK Library databases on your local computer, you could use SQL Native Client or SQL Server to access the remote databases. A computer that hosts an @RISK Library database needs SQL Server, and it must be running a Server version of Windows. Please see SQL Versions and Installation: SQL with @RISK 6.2 and later for more.

Connecting to an existing database on a remote computer:

  1. Make the appropriate selection to open the @RISK Library window:

    • In Define Distribution, click the books icon, "Add Input to Library".
    • After a simulation, in the ribbon click Library » Add Results to Library.
    • If you want to access existing library entries rather than add new ones, in the ribbon click Library » Show @RISK Library.
  2. Once in the @RISK Library window, click the books icon near the middle of the screen. The icon displays "Connect, Create, or Attach to Databases" if you hover your mouse over it.

  3. Click Connect.

  4. On the SQL Connection screen, select an authentication method. "Microsoft Authentication" means your standard Windows login will be used; this is correct for most users. If necessary, change to "SQL Server Authentication" and enter your user name and password.

  5. @RISK will search your network for installed SQL servers; this can take some time.  The list includes all computers with SQL server software installed, whether they actually have any databases or not.

  6. When the list appears, click the name of the computer where you want to access an @RISK Library database. @RISK will then query that server for available databases.

    If no databases appear, either none exist on that server, you don't have permission to access them, or you didn't specify the correct user name and password. Please see Library Can't Connect to Networked Database for troubleshooting.

  7. Select the desired database and click Connect. The database will be added to your list of Current SQL Server Connections, and @RISK will remember the connection next time.

Creating a new database on a remote computer:

  1. Make the appropriate selection to open the @RISK Library window:

    • In Define Distribution, click the books icon, "Add Input to Library".
    • After a simulation, in the ribbon click Library » Add Results to Library.
  2. Once in the @RISK Library window, click the books icon near the middle of the screen.  The icon displays "Connect, Create, or Attach to Databases" if you hover your mouse over it.

  3. Click Create.

  4. On the SQL Connection screen, select an authentication method. "Microsoft Authentication" means your standard Windows login will be used; this is correct for most users.  If necessary, change to "SQL Server Authentication" and enter your user name and password.

  5. @RISK will search your network for installed SQL servers; this can take some time.

  6. When the list appears, click the name of the computer where you want to create an @RISK Library database.

  7. Type a database name and click Create. The database will be added to your list of Current SQL Server Connections, and @RISK will remember the connection next time.

    If the screen closes when you click Create, but the new database is not shown on the Current SQL Server Connections screen, it was not created. Either you don't have the necessary access rights on that computer, or you didn't enter the correct authentication information. Please see Library Can't Connect to Networked Database for troubleshooting.

See also: "Library" in the Guided Tour of @RISK is a short video that shows you how to save distributions and results in a library and how to make use of them in your model.

Last edited: 2015-03-26

2.9. SSAS and SSRS with @RISK Library

Applies to:
@RISK 5.x–7.x, Professional and Industrial Editions

Since the @RISK Library is an SQL database, can I use SSAS (SQL Server Analysis Services) or SSRS (SQL Server Reporting Services) with it?

The @RISK Library is an SQL database, and to use the @RISK Library you must have SQL Server installed. Since SSAS or SSRS can work with SQL databases, in principle they could work with the @RISK Library. However, it's hard to see what useful information they could obtain.

The @RISK Library is really intended to be accessed only by @RISK, and not by external programs. Therefore, we have not prepared any documentation about the organization of the @RISK Library. External programs definitely should not alter the @RISK Library databases in any way. Technical Support is unable to assist with setting up or debugging SQL database queries, but we have a Custom Development department that can assist you if there is some reason why you need read-only access to the @RISK Library outside of @RISK. Please contact your Palisade sales manager if that is of interest to you.

Last edited: 2018-08-06

2.10. Sharing @RISK Models with Colleagues Who Don't Have @RISK

Disponible en español: Compartir modelos de @RISK con Colegas que no poseen @RISK
Disponível em português: Compartilhar modelos do @RISK com colegas que não possuem @RISK

I have @RISK for Excel. I would like to ship my worksheet with results to a colleague who has Excel but not @RISK. Can I do this?

If you have @RISK 5.0 or later, your colleague doesn't need any special software. The Swap Out Functions feature makes it very easy to share workbooks with colleagues who don't have @RISK.

  • In @RISK 7.x, click "Swap Out @RISK" in the ribbon. This replaces @RISK functions with static numbers (see below). In addition, @RISK will offer to embed thumbnail graphs of functions, set color cells to show inputs and outputs, and add a new worksheet that summarizes all @RISK functions with statistics and graphs.
  • In @RISK 6.x, click Utilities » Swap Out Functions.
  • In @RISK 5.x, click on the "Swap @RISK Functions" or "Swap Functions" icon toward the end of the ribbon. (Depending on your specific 5.x version, this could be a @ with a / mark through it, or it could be the @RISK logo in front of a worksheet grid.)

After you swap out functions, the @RISK functions are all replaced with numbers.  Save your workbook, and your colleague can view it in Excel with no need for other software.  (This replaces the Spreadsheet Viewer that was used with @RISK 4.x.)

When you reopen the workbook, the functions should be swapped back in automatically. If you have any difficulties, please see @RISK Functions Don't Reappear after Swapping Out.

If you have an earlier version of @RISK and you'd like to use this feature, please contact your Palisade sales manager to obtain the current version.

Which numbers does @RISK place in the cells in place of the distribution functions?

By default, @RISK will replace functions with the displayed static values of the functions, as defined in Setting the "Return Value" of a Distribution. But when you request the swap, you can open the Swap Options dialog to override this. In Swap Options, you can specify expected values, most likely values (mode), or a percentile for all functions that don't have RiskStatic property functions defined.


Last edited: 2017-01-06

2.11. Precedent Checking (Smart Sensitivity Analysis)

Also available in Spanish: Verificación de precedentes (análisis de sensibilidad inteligente)

Applies to: @RISK for Excel version 5.x–7.x

When I run my model in @RISK, it seems to take a long time before the first iteration. The status bar shows that it is checking precedents. Is something wrong?

Precedent checking (also known as precedent tracing or Smart Sensitivity Analysis) is a new feature in 5.0 and later versions. Its purpose is to prevent @RISK inputs from incorrectly showing up in Regression/Sensitivity analysis such as tornado graphs.

For example, consider a simple model with two inputs. The two inputs are correlated – let's say to the full extent, with a correlation coefficient of 1.0. One input is used in a calculation for a RiskOutput. The other input is not involved in any calculation which impacts the RiskOutput. In earlier versions, both inputs would be displayed as having equal impact on the output. With precedent checking, @RISK determines that only one of these inputs contributes, and filters out the other one from graphs, reports, etc.

The tradeoff is that it may take quite a bit of time to go through the full precedent tree before a simulation is run. By turning off data collection for some or all inputs, that process can be sped up, though at the cost of not being able to analyze those inputs.

  • If you set Collect to None, you can still collect certain inputs by designating them as outputs. You will then be able to get statistics and iteration data on them, but they won't be available for sensitivity analysis.

  • You can disable Precedent Checking while still collecting inputs. In Simulation Settings, on the Sampling tab, change Smart Sensitivity Analysis to Disabled to disable precedent checking for a particular model. If you want to change the default for all models, open Utilities » Application Settings and look in the Default Simulation Settings section.

  • You could also use RiskMakeInput( ) functions to exclude some particular inputs from precedent tracing; a sketch follows this list. See Combining Inputs in a Sensitivity Tornado, Excluding an Input from the Sensitivity Tornado, and Same Input Appears Twice in Tornado Graph.
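
As a minimal sketch, suppose cells A1:A3 (hypothetical references) each contain @RISK distributions, and you want the tornado graph to show only their total rather than the three individual inputs. Wrapping the total in RiskMakeInput stops precedent tracing behind it:

=RiskMakeInput(SUM(A1:A3))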

Does Smart Sensitivity Analysis have any limitations?

Certain formulas are correct Excel formulas, but Smart Sensitivity Analysis cannot work with them. Either their values can change at run time in ways that @RISK can't predict, or they are too complex and would take too much time when the simulation starts. These include:

  • INDIRECT( ) functions.

  • OFFSET( ) functions.

  • INDEX( ) when used to return a reference. (When used to return a value, INDEX( ) does not interfere with Smart Sensitivity Analysis.)

  • VLOOKUP( ) and HLOOKUP( ) functions.
    These are a special case in that they don't actually prevent Smart Sensitivity Analysis from happening. However, @RISK can't know in advance which values in the lookup table Excel will return, so it considers every non-constant cell in the table to be a precedent of the output.

  • 3-D references such as a sum across multiple worksheets.

  • Structured references in tables, such as [column name]—a problem only with @RISK 6.2.1 and older.
    (@RISK 5.x cannot trace precedents through any structured references. @RISK 6.x can, except that @RISK 6.0.0–6.2.1, in Excel 2010 and 2013 only, cannot trace precedents through formulas that use @ for [#This Row].)

  • References to external workbooks.
    (@RISK 6.1.1 and later don't display a message for these.)

  • References to Internet resources, such as http links.
    (@RISK 6.1.1 and later don't display a message for these.)

In all the above cases, calculations are still done correctly during your simulation; it's just that for these cases @RISK cannot trace precedents.

If your model contains one of these formulas, when you start simulation a message will pop up: "could not be parsed", "invalid formula", or similar. To proceed with the simulation, click the Yes button in the message. If you want to prevent this message from appearing in the future, either change the formula (if you can), or disable Smart Sensitivity Analysis for this model. To disable Smart Sensitivity Analysis, click the Simulation Settings icon, select the Sampling tab, and change Smart Sensitivity Analysis to Disabled. Click OK and save the workbook.

Last edited: 2020-03-20

2.12. Excel Tables and @RISK

Applies to:
@RISK 5.7.1–7.x
TopRank 5.7.1–7.x

Can @RISK and TopRank work with Excel tables?

There are actually three types of tables in Excel: tables, data tables, and pivot tables.

Excel tables and data tables:

@RISK and TopRank can handle Excel tables and data tables without any special action on your part.

If the table contains @RISK functions, it will get re-evaluated in every iteration, which can be time consuming.  Also, if you have @RISK functions in a data table, as @RISK rewrites formulas while setting up the simulation, the data table will get re-evaluated once for each @RISK function, and therefore the simulation will take longer to start.  (For why @RISK must do this, see @RISK Changes Worksheet Formulas.)

If your model's logic really does not need @RISK functions inside a data table, removing them may speed up your simulations.

If you have release 5.7.0 or earlier, you should know about an Excel behavior that looked like a problem in @RISK.  When you enter or edit a formula in a cell adjacent to a table, Excel may expand the table to include the additional row or column.  As mentioned above, when starting a simulation or analysis, @RISK and TopRank rewrite all formulas that include @RISK or TopRank functions.  That rewrite sometimes triggers Excel to expand the table.  This doesn't affect 5.7.1 and newer releases, but if you have an earlier release, either upgrade your software or structure your models with at least one blank row or column between your table and the rest of your model.

Pivot tables:

Pivot tables are not automatically recalculated in an @RISK simulation, and in fact you don't want to recalculate a pivot table if it doesn't depend on any @RISK functions.  If you have any pivot tables that do depend on @RISK functions, create an after-iteration macro that calls Excel's RefreshTable method for each pivot table, and register that macro on the Macros tab of @RISK's Simulation Settings. A very basic example is attached.
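
Such a macro might look like the following sketch, which refreshes every pivot table in the workbook; register it as an after-iteration macro on the Macros tab:

Sub RefreshAllPivotTables()
    ' Refresh every pivot table on every worksheet, so pivots that
    ' depend on @RISK functions are updated in each iteration.
    Dim ws As Worksheet
    Dim pt As PivotTable
    For Each ws In ThisWorkbook.Worksheets
        For Each pt In ws.PivotTables
            pt.RefreshTable
        Next pt
    Next ws
End Sub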

TopRank does not provide for executing macros within an analysis.  If you have pivot tables that depend on any of your TopRank inputs, the analysis may not be correct because those pivot tables are not recalculated.  If your pivot tables don't depend on your TopRank inputs, then the analysis will be performed correctly.

Last edited: 2015-06-08

2.13. Avoiding "Do you want to change the current @RISK settings to match those stored?"

Applies to: @RISK for Excel 5.x–7.x

When I open certain Excel files, I get the message

The workbook workbookname has @RISK simulation settings stored in it. Do you want to change the current @RISK settings to match those stored in this workbook?

I try to clear all data via @RISK utilities before opening the workbook, but I still get this message. I need to get rid of this message as it makes all my VBA code stop. How can I suppress it?

The message is telling you that the workbook you're about to open has simulation settings inconsistent with the currently open workbook or with your defaults stored in Application Settings. It wants you to decide which set of settings should be in effect, since all open workbooks must have the same settings. You may be able to eliminate or at least reduce these warnings by adjusting your Application Settings, if you always make the same choices for all models. That is the simplest and safest approach.

You can suppress the warning and either accept or ignore the new workbook's settings by executing a line of Visual Basic code. Please see the @RISK for Excel Developer Kit manual for instructions on setting the required reference to @RISK to allow this code to execute.

  • To open the manual in @RISK 5.2.0 and newer, in @RISK's Help menu select Developer Kit (XDK) and then @RISK XDK Reference. (The Automation Guide is a good introduction, but to keep things simple it omits many properties and methods.)
  • To open the manual in @RISK 5.5.1–6.1.2, in @RISK's Help menu select Developer Kit.
  • To open the manual in @RISK 5.5.0 or earlier, click the Windows Start button, then Programs or All Programs » Palisade DecisionTools » Online Manuals.

(In addition to the specific code functions mentioned below, you will need to create one or more references in the Visual Basic Editor.  Please see Setting References in Visual Basic for the appropriate reference and how to set it.)

To suppress the message and load settings from the new workbook, use the Risk.SimulationSettings.LoadFromWorkbook method. For details and an example, please see "LoadFromWorkbook Method" in the @RISK for Excel Developer Kit manual referenced above.

To suppress the warning and ignore the settings in the new workbook, execute the following code in a macro before opening the workbook:

Risk.DisplayAlerts = False

After you open the workbook, we strongly recommend(*) executing this code:

Risk.DisplayAlerts = True

(*) Caution: Setting DisplayAlerts to False is potentially dangerous, because it suppresses all warnings from @RISK. Therefore, we strongly recommend that your macro set it back to True immediately after opening the workbook.
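
Putting those pieces together, a minimal macro that opens a workbook while ignoring its stored settings might look like this sketch (the file path is hypothetical):

Sub OpenIgnoringStoredSettings()
    Risk.DisplayAlerts = False                ' suppress the settings prompt
    Workbooks.Open "C:\Models\MyModel.xlsx"   ' hypothetical path
    Risk.DisplayAlerts = True                 ' restore warnings immediately
End Sub

If you want to adopt the workbook's stored settings instead, call the Risk.SimulationSettings.LoadFromWorkbook method after opening the workbook, as described above.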

My problem is similar, but I'm getting that prompt when I open workbooks that didn't have any @RISK functions in them. I don't want to insert this macro in every workbook; what can I do?

Here is how that situation can arise: When you save a workbook while @RISK is running, if the current simulation settings are different from the current Application Settings, @RISK stores the current simulation settings in a hidden sheet in the workbook. This occurs whether or not the workbook contains any @RISK functions (because for all @RISK knows you might intend to add some @RISK functions to it later). If you later open your @RISK model and change some settings, the new stored settings in the @RISK workbook are different from the old stored settings in your non-@RISK workbook, so when you open the non-@RISK workbook you get the prompt.

To solve this, you need to remove the @RISK settings from the non-@RISK workbooks and ensure that they are not written again in the future:

  1. Run @RISK, and open the workbook that contains your @RISK functions plus the workbook(s) that do not.
  2. Change Application settings (in the @RISK Utilities menu) to match simulation settings.
  3. Save the workbook that contains @RISK functions.
  4. In the @RISK Utilities menu, select Clear @RISK Data, tick all four boxes, and click OK.
  5. (This step can be skipped with @RISK 5.7 and above.) In the @RISK Utilities menu, select Unload @RISK Add-in.
  6. Save the workbooks.

If later you want to change simulation settings in your @RISK workbook, do it by changing Application Settings. Remember, when you store a non-@RISK workbook, you want Application Settings and simulation settings to be the same, so that simulation settings don't get stored in the non-@RISK workbook. As an alternative, you can unload the @RISK add-in before storing the non-@RISK workbooks.

See also: @RISK Changes Simulation Settings When Non-@RISK Workbook Is Opened (only with @RISK 6.3)

Last edited: 2015-12-04

2.14. Markov Chains

Question:
Can I use @RISK to build a model with Markov chains?

Response:
With @RISK, either alone or in combination with PrecisionTree, you can create a Markov chain. But you have to create the statefulness yourself, either in Visual Basic code or possibly in RiskData worksheet functions. Please see the attached example, Price Evolution in Markov Chain.

In the @RISK help file and user manual, the section "Reference: Time Series Functions" says "GBM processes have the Markov (memoryless) property, meaning that if the current value is known, the past is irrelevant for predicting the future." But those are unrelated Markovs, not useful in creating a Markov chain.

last edited: 2013-10-22

2.15. Stochastic Dominance in @RISK

Applies to: @RISK 5.x–7.x

Can I use @RISK to test for stochastic dominance?

Yes.  The basic technique is overlaying the two cumulative ascending curves.

In @RISK 6.x/7.x, click Help » Example Spreadsheets » Statistics/Probability and select the last highlighted example, Stochastic Dominance.  If you have @RISK 5.x, the Stochastic Dominance example is not included, but you can download the attached copy.

Last edited: 2015-06-08

2.16. Circular References

Applies to:
@RISK 5.x–7.x
TopRank 5.x–7.x

How do @RISK and TopRank deal with circular references?

@RISK and TopRank are fully able to cope with circular references, provided you have set Excel's option to perform iterative calculation (a minimal example follows this list):

  • Excel 2010 and later, File » Options » Formulas » Enable iterative calculation.
  • Excel 2007: click the round Office button and then Excel Options » Formulas » Enable iterative calculation.
  • Excel 2003: Tools » Options » Calculation » Iteration
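
As a minimal illustration (cell addresses hypothetical), the pair of formulas below forms a circular reference that Excel can resolve once iterative calculation is enabled; repeated recalculation converges toward A1 = 2 and B1 = 1:

A1:  =B1+1
B1:  =A1/2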

For more about appropriate settings for this option, see Microsoft's Knowledge Base article Make a circular reference work by changing the number of times that Excel iterates formulas (accessed 2015-06-08, part of Remove or allow a circular reference).

Microsoft provides more detailed information about circular references, including troubleshooting tips and a tutorial on iterative calculation in a different Microsoft Knowledge Base article with the same title: Remove or allow a circular reference (accessed 2015-06-08).

How @RISK and TopRank respond to circular references:

During precedent tracing: When @RISK or TopRank hits a cell that has been previously encountered, it stops tracing precedents in that particular path, to avoid an infinite loop. For example, if A1 depends on B1, and B1 on C1, and C1 on A1, and the program starts tracing at A1, it will find B1 and C1 as precedents, but stop tracing when it hits A1 again. However, when it starts tracing precedents of B1, it will find C1 and A1 as precedents, and so forth. The net result is that when there is a circular reference, @RISK and TopRank treat all members of the circle as precedents of each other.

During calculation: If there are circular references, Excel calculates the model multiple times within each @RISK or TopRank iteration, depending on your Excel settings for circular references. Each @RISK or TopRank function returns the same value through all the recalculations within any given iteration. In other words, Excel recalculations to resolve circular references all use the same samples within any one iteration of @RISK or TopRank.

Last edited: 2015-06-08

2.17. Converting Crystal Ball Models

Applies to: @RISK 6.x/7.x

I have developed a model in Crystal Ball, but I would like to run it in @RISK. Can @RISK run Crystal Ball models?

Starting with @RISK release 6.0, an automatic converter in @RISK lets you open and run risk models that were created in Crystal Ball 7.3 or later.  (You must have Crystal Ball and @RISK on the same computer.) 

When you open a Crystal Ball spreadsheet, @RISK will ask if you want to convert your model; or you can start a conversion with the Utilities » Convert Workbook command.  @RISK will convert Crystal Ball distributions and other model elements to native @RISK functions. 

Requirements:

  • @RISK 6.0 or newer.
    If you have @RISK 5.7 or earlier and you want to convert Crystal Ball models, please see Upgrading Palisade Software.

  • 32-bit Excel and 32-bit Crystal Ball installed on this computer.

  • Model was developed in Crystal Ball 7.3 or newer.
    If your model was developed in Crystal Ball 7.0 or earlier, you cannot do an automated conversion but our consultants may be able to help. Please contact your Palisade sales manager for more information.

Certain features cannot be converted automatically. See the topic "Restrictions and Limitations" in the @RISK help file (Help » Documentation » Help). If @RISK finds one of these features, it will display an error or warning message in the conversion summary.

Last edited: 2020-05-28

2.18. File Compatibility: @RISK 4.5–7.x

Applies to: @RISK 4.5–7.x

I have some models that were developed with an older version of @RISK. Will they work in @RISK 7?

@RISK 4.5, 5, 6, and 7 model files (Excel workbooks) are generally compatible. Models created in an older version of @RISK should run just fine in a later version. Simulation results between the two, on the same model, should be the same within normal statistical variability. They will typically not be identical, iteration for iteration, because of precedent checking and other features introduced in newer releases. (See Random Number Generators for more details on this point.)

There are three caveats with using older models in @RISK 5, 6, or 7:

  • @RISK 4.5 and 5.0 recomputed statistic functions like RiskMean and RiskPercentile in every iteration. @RISK 5.5 and later compute them at the end of simulation. This is more logical, and makes for better performance. But if your model depends on having partial results available in the middle of the simulation, you will want to change your model or change that setting. Please see "No values to graph" Message / All Errors in Simulation Data for more about this.
  • Linked fits (fitted distributions that update automatically when the underlying data change) from @RISK 4.5 will have to be re-run in later versions of @RISK. But the program itself will prompt you for this the first time you run a simulation.
  • Functions with Multi in their names are legacy functions and will get #NAME errors in later @RISK, unless you also have TopRank loaded. To solve the problem, simply remove Multi from the function names.

Models created in a newer version of @RISK should run fine in an older version, as long as they use only features that were available in the older version. If you define a model using new features of @RISK, you probably will not be able to use that model with older versions of @RISK. Two notes:

  • @RISK 4.5 will ignore distribution property functions that were added in later releases, such as RiskUnits. However, functions that contain those property functions will sample properly.
  • Newer functions, such as RiskCompound and RiskTheoMean, will return #NAME in older releases of @RISK.

I saved simulation results with the older version of @RISK. Can the new version read them?

@RISK 5, 6, and 7 can read each other's simulation results, whether stored in the Excel workbook or in an external .RSK5 file, but they cannot read @RISK 4.5 simulation results.

Exception: If you filtered simulation results (Define Filter command) in @RISK 5 or @RISK 6 and stored the filtered results in the Excel workbook, @RISK 7.0.0 can't read the file. This is fixed in 7.0.1, but if you still have 7.0.0 please see "Could not read data from file ...tmp." for a workaround.

@RISK 4.5 cannot read simulation results that were created by later releases, whether stored in the Excel workbook or in an .RSK5 file.

Last edited: 2016-08-17

2.19. Latin Hypercube Versus Monte Carlo Sampling

También disponible en Español: Latin Hypercube Versus Muestreo Monte Carlo

The @RISK and RISKOptimizer manuals state, "We recommend using Latin Hypercube, the default sampling type setting, unless your modeling situation specifically calls for Monte Carlo sampling."  But what's the actual difference?

About Monte Carlo sampling

Monte Carlo sampling refers to the traditional technique for using random or pseudo-random numbers to sample from a probability distribution. Monte Carlo sampling techniques are entirely random in principle — that is, any given sample value may fall anywhere within the range of the input distribution. With enough iterations, Monte Carlo sampling recreates the input distributions through sampling. A problem of clustering, however, arises when a small number of iterations are performed.

Each simulation in @RISK or RISKOptimizer represents a random sample from each input distribution. The question naturally arises, how much separation between the sample mean and the distribution mean do we expect? Or, to look at it another way, how likely are we to get a sample mean that's a given distance away from the distribution mean?

The Central Limit Theorem of statistics (CLT) answers this question with the concept of the standard error of the mean (SEM). One SEM is the standard deviation of the input distribution, divided by the square root of the number of iterations per simulation. For example, with RiskNormal(655,20) the standard deviation is 20. If you have 100 iterations, the standard error is 20/√100 = 2. The CLT tells us that about 68% of sample means should occur within one standard error above or below the distribution mean, and 95% should occur within two standard errors above or below. In practice, sampling with the Monte Carlo sampling method follows this pattern quite closely.

About Latin Hypercube sampling

By contrast, Latin Hypercube sampling stratifies the input probability distributions. With this sampling type, @RISK or RISKOptimizer divides the cumulative curve into equal intervals on the cumulative probability scale, then takes a random value from each interval of the input distribution. (The number of intervals equals the number of iterations.) We no longer have pure random samples and the CLT no longer applies. Instead, we have stratified random samples.

The effect is that each sample (the data of each simulation) is constrained to match the input distribution very closely. This is true for all iterations of a simulation, taken as a group; it is usually not true for any particular sub-sequence of iterations.

Therefore, even for modest numbers of iterations, the Latin Hypercube method makes all or nearly all sample means fall within a small fraction of the standard error. This is usually desirable, particularly in @RISK when you are performing just one simulation. And when you're performing multiple simulations, their means will be much closer together with Latin Hypercube than with Monte Carlo; this is how the Latin Hypercube method makes simulations converge faster than Monte Carlo.
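
As a rough numeric illustration of the difference (a minimal sketch of the two ideas, not a description of @RISK's internal sampler), the VBA routine below draws one plain Monte Carlo sample and one stratified, Latin-Hypercube-style sample of the same size from a Normal(655, 20) by inverting the cumulative curve, then prints the two sample means:

Sub CompareSamplingTypes()
    ' Minimal sketch: invert the normal cumulative curve at random probabilities.
    ' Monte Carlo uses a uniform random probability anywhere in (0,1);
    ' the stratified version takes one random probability from each of
    ' N equal intervals of the cumulative probability scale.
    Const N As Long = 100
    Dim i As Long, u As Double, mcSum As Double, lhSum As Double
    Randomize
    For i = 1 To N
        u = Rnd(): If u = 0 Then u = 0.5              ' guard against u = 0
        mcSum = mcSum + Application.WorksheetFunction.Norm_Inv(u, 655, 20)
        u = (i - 1 + Rnd()) / N: If u = 0 Then u = 0.5 / N
        lhSum = lhSum + Application.WorksheetFunction.Norm_Inv(u, 655, 20)
    Next i
    Debug.Print "Monte Carlo sample mean: "; mcSum / N
    Debug.Print "Stratified sample mean:  "; lhSum / N
End Sub

Run it several times: the Monte Carlo mean wanders by roughly the two-unit standard error computed above, while the stratified mean stays very close to 655.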

Comparisons

The easiest distributions for seeing the difference are those where all possibilities are equally likely. We chose five integer distributions, each with 72 possibilities, and a Uniform(0:72) continuous distribution with 72 bins. The two attached workbooks show the results of simulating with 720 iterations (72×10), one with the Monte Carlo sampling method and one with the Latin Hypercube method. For convenience, the workbooks already contain graphs, but you can run the simulations yourself too.

Of course, those are artificial cases. The other attached workbooks let you explore how the distribution of simulated means differs between the Monte Carlo and Latin Hypercube sampling methods. (Select the StandardErrorLHandMC file that matches your version of @RISK.) Select your sample size and number of simulations and click "Run Comparison". If you wish, you can change the mean and standard deviation of the input distribution, or even select a completely different distribution to explore. Under every combination we've tested, the sample means are much, much closer together with the Latin Hypercube sampling method than with the Monte Carlo method.

If you'd like to know more about the theory of Monte Carlo and Latin Hypercube sampling methods, please look at the technical appendices of the @RISK manual.

See also:

Last edited: 2023-02-10

2.20. Random Number Generators

Which random number generator does @RISK use? Can I choose a generator?

By default, @RISK/RISKOptimizer 5.0 and later use the Mersenne Twister.  Earlier versions of @RISK and RISKOptimizer used RAN3I.

In @RISK 5.0 and later, you can select a random number generator on the Sampling tab of the Simulation Settings dialog. The random number generator is not user selectable in RISKOptimizer 5.x or in earlier versions of either product.

Mersenne Twister is superior to RAN3I in that it has been more extensively studied and characterized. It has been proved that random numbers generated by the Mersenne Twister are equi-distributed up to 623 dimensions, and that its period is 2^19937 - 1, which is more than 10^6000. Please see What is Mersenne Twister (MT)? (accessed 2013-04-09) for more information.

Can I duplicate @RISK 4.5 simulation results by setting the random number generator to RAN3I?

If you run an @RISK 4.5 model in @RISK 5.x or 6.x with any random number generator, simulation results should be the same within normal statistical variability. But the simulation data will typically not be identical iteration for iteration, even with a fixed seed and the RAN3I generator, because of @RISK 5.x's new precedent checking and other features.

See also Random Number Generation, Seed Values, and Reproducibility.

What about versions within 5.x, 6.x, and 7.x?

From one version to the next, new features and improvements in our code may cause distributions to be evaluated in a different order.  Thus, you cannot count on reproducing @RISK 5.0 results iteration for iteration in 5.5, or reproducing @RISK 5.7.1 results iteration for iteration in 6.0.1, and so forth.  Of course, the results will always match within normal statistical variability.

last edited: 2015-06-16

2.21. Random Number Generation, Seed Values, and Reproducibility

Applies to:
@RISK for Excel 4 and newer
@RISK for Project 4.x
@RISK Developer's Kit 4.x

Tell me more about the algorithm that generates random numbers in @RISK. What is the difference between a fixed seed and a random seed? How does this work when executing a multiple simulation run? Why might my model not be reproducible even though I am using a fixed seed?

Generation Algorithm:

The random number generator used in @RISK is a portable random number generator based on a subtractive method, not linear congruential. The cycle time is long enough that in our testing it has had no effect on our simulations. Press et al (References, below) say that the period is effectively infinite. The starting seed (if not set manually) is clock dependent, not machine dependent. The method used to generate the random variables for all distributions is inverse transform, but the exact algorithms are proprietary.

Seed Values:

In the @RISK Simulation Settings dialog box, you can set the random number seed. The seed value may be chosen randomly in Simulation Settings by activating the Choose Randomly option, or you can specify a fixed seed by activating the Fixed option and then entering a seed value that is an integer between 1 and 2147483647. If the Fixed option is chosen, the result from your simulation will not change each time it is run (unless you have changed your model or added some random factor out of @RISK's control). If the Choose Randomly option is active, a random seed is chosen based on the computer's clock.

Why choose a fixed seed? There are two main reasons. When you are developing your model, or making changes to an existing model, a fixed random number seed lets you see clearly how any changes in your model affected the results. With a finished model, you can send the model to someone else and know that if they run a simulation they will get the same results you got. (Both of these statements assume that you're using the same release of @RISK on the identical model and that nothing in the model is volatile; see Reproducibility, below.)

You can also use a RiskSeed() property function on an input distribution to give that distribution its own sequence of random numbers, independent of the seed used for the overall simulation. (RiskSeed() is ignored when used with correlated distributions.)
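
For example (a minimal illustration, following the same two-argument form used elsewhere in this knowledge base, where the first argument selects the random number generator and the second is the seed value):

=RiskNormal(100, 10, RiskSeed(1, 12345))

This input will draw the same sequence of values in every run, regardless of the seed settings for the overall simulation.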

Multiple @RISK Simulation Runs:

  • If the Multiple Simulations Use Different Seed Values box is checked, and the Choose Randomly option is active, @RISK will use a different seed each simulation in a multiple simulation run.
  • If the Multiple Simulations Use Different Seed Values box is checked, and the Fixed option is active, each simulation in a multiple simulation run will use a different seed, but the same sequence of seed values will be used each time the run is executed.
  • If the Multiple Simulations Use Different Seed Values box is not checked, and the Choose Randomly option is active, each simulation within a multiple simulation run will use the same seed, but a different seed will be used for each run.
  • If the Multiple Simulations Use Different Seed Values box is not checked, and the Fixed option is active, the same seed will be used both within and between multiple simulation runs.

@RISK Monte Carlo vs. Latin Hypercube:

The sampling done to generate random numbers during a simulation in @RISK may be Monte Carlo, or it may be Latin Hypercube, depending on which Sampling Type is chosen in the @RISK Simulation Settings dialog. See Latin Hypercube Versus Monte Carlo Sampling or the @RISK manual for more details.

Number of Iterations:

If you change the number of iterations, you have a different model even if nothing else has changed. The overall results will be similar (within normal statistical variation) but not identical. Even the data drawn during the initial iterations may not be the same. For example, if you have a 100-iteration model and increase the number of iterations to 500, the distributions in the new model may sample different values in the first 100 iterations than they had in the 100 iterations of the old model.

If you have a RiskSeed() property function in any distributions, those will preserve the same sequence. For example, if you have a 100-iteration model and increase the number of iterations to 500, the distributions with their own RiskSeed() functions will show the same data for the first 100 iterations as they did for the 100 iterations of the original simulation. (RiskSeed() is ignored when used with correlated distributions.)

Reproducibility:

The results of a simulation are reproducible from run to run if you use a fixed seed value, if your model has not been changed between runs, and if you avoid the following pitfalls:

  • The Excel function =RAND(). The numbers it generates are controlled by the spreadsheet, which uses its own independent random number stream. Instead, consider replacing RAND() functions with RiskUniform() or, where appropriate, RiskBernoulli().
  • Other volatile Excel functions like NOW() or TODAY().
  • Macros that run during the simulation, if the macro code itself is not reproducible from run to run.
  • Adding or removing worksheets or opening additional workbooks, even if they don't contain @RISK functions. Results of the new simulation may not be identical because @RISK's order of scanning may be affected. The same applies if you move things around within a worksheet or within a workbook, even if the cells that you moved don't contain @RISK functions.
  • References between iterations, when you have Multiple CPU enabled; please see the next section.

Any @RISK inputs that have RiskSeed() property functions will be reproducible, even if the model is changed. Exception: RiskSeed() has no effect in correlated distributions.

See also: Different Results with Same Fixed Seed, Different Results with Multiple Workbook Copies, and What Was My Random Number Seed?

Single or Multiple CPU:

Assuming the model is otherwise reproducible, results should be identical whether the simulation runs with multiple CPU enabled or disabled.

There's an important exception. When you're running multiple CPUs, the master CPU parcels out iterations to one or more worker CPUs. During a simulation, one CPU doesn't know the data that were developed by another CPU. So if you have anything in your model that refers to another iteration, directly or indirectly, a simulation with multiple CPUs will not behave as expected. (It won't just be irreproducible; it will be wrong.) Examples would be RiskData() functions that are used in formulas, statistics functions like RiskMean() and RiskPercentile() that are used in formulas if you have them set to be computed at every iteration, and macro code that stores data in the workbook or in static variables. In such cases, it is necessary to disable multiple CPU in Simulation Settings.

Versions of @RISK:

Results from a given release of @RISK Standard, Professional, and Industrial should be the same, assuming the model is otherwise reproducible. Trial version versus activated version makes no difference.

Results from different versions of @RISK on the same model will typically match within normal statistical variation, if you use the same random number generator. For the relationship between @RISK 4.x and 5.x random number generation, please see Random Number Generators.

References:

  • Donald E. Knuth: Seminumerical Algorithms: Third Edition (1998, Addison-Wesley), vol. 2 of The Art of Computer Programming.
  • William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling: Numerical Recipes, The Art of Scientific Computing (1986, Cambridge University Press), pages 198 and 199.

Last edited: 2019-02-15

2.22. Different Results with Multiple Workbook Copies

Also available in Spanish: Resultados diferentes con múltiples copias de libros de trabajo

Applies to: @RISK 5.x and newer

I'm simulating multiple copies of a workbook in one @RISK session. There are no links between the workbooks, so I'd expect every workbook to get the identical results that it gets when it's simulated alone. Why doesn't that happen?

All open workbooks are part of an @RISK simulation, and @RISK draws all the random numbers for iteration 1, then all the random numbers for iteration 2, and so on. These numbers all come from a single stream, which you can specify as a fixed seed on the Sampling tab of Simulation Settings. (It's more complicated if any inputs are correlated, but the principle is the same.)

Suppose you have 100 distribution functions in your workbook. When you simulate that workbook by itself, iteration 1 gets the first 100 random numbers from the stream, iteration 2 gets random numbers 101–200, iteration 3 gets random numbers 201–300, and so on. But when you simulate two copies in the same run, those two copies together consume the first 200 random numbers in iteration 1, random numbers 201–400 in iteration 2, and so on. Thus the results will be different, though still within expected statistical variability for your number of iterations.

You can overcome this by putting RiskSeed( ) property functions in the distributions. RiskSeed( ) gives that input its own random-number sequence, separate from the single stream for the simulation. Two identical distributions with identical seeds will produce the same identical sequence of data, iteration by iteration. For instance,

=RiskTriang(15,30,70,RiskSeed(1,271828))

will always produce the same sequence of random numbers, no matter what other formulas the workbook may contain. If you have two open workbooks that contain that same function, both copies will produce an identical sequence of iterations.

RiskSeed( ) is not effective with correlated inputs.

See also: Random Number Generation, Seed Values, and Reproducibility

Last edited: 2023-02-10

2.23. Statistical Calculations in @RISK and Excel

Applies to:
@RISK, all releases

Question:
I have come across a research paper that details some problems in Excel's statistical calculations. Is there anything to this, and is @RISK affected? How can I validate the generation of random numbers in various distributions by @RISK?

Response:
The computations in all @RISK functions are done by Palisade's own program code and do not rely on Excel's numerical functions in any way. By way of example, here are some details about the two types of functions we are most often asked about:

  • Probability distributions (@RISK inputs): @RISK generates all its own random numbers, and these calculations are completely independent of Excel. During a simulation, @RISK produces the random numbers using Palisade program code. Excel's role in a simulation is simply to perform the computations in the Excel formulas in your worksheet.

    You can easily examine the random numbers produced by @RISK. After a simulation, open the Simulation Data window (x-subscript-i icon). This will give one column per input or output variable. You can copy these numbers in the usual way and perform any desired statistical tests on them.

  • Summary statistics functions such as RiskMean, RiskPercentile (also called RiskPtoX), and RiskCorrel: @RISK uses Palisade program code, not Excel, to compute all of these. Again, you can verify these calculations from the raw data in the Simulation Data window.
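
    For instance (a minimal illustration with a hypothetical layout), after pasting a column of simulation data for the input in A1 into E2:E10001, you can compare Excel's =AVERAGE(E2:E10001) against =RiskMean(A1); agreement to within rounding is a quick sanity check that the two code bases produce the same statistic from the same data.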

Microsoft has acknowledged some issues with certain statistical calculations in Excel 2007 and earlier, and has addressed these beginning in Excel 2010. Microsoft gives details in the paper Function Improvements in Microsoft Office Excel 2010 (PDF). But again, none of these issues affect @RISK in any version of Excel, because @RISK does its own statistical calculations for every @RISK function and does not use Excel functions for them.

last edited: 2013-03-20

2.24. Wilkie Investment Model

All editions of @RISK can easily model time series according to the Wilkie model, using parameters that you select. With @RISK Industrial, you also have the option to fit to historical data using Time Series. The attached prototype builds two Wilkie models, Retail Price Index (RPI) and Share Dividend Yield (SY), to illustrate those techniques.

Let's start with the RPI model. Here you can either set the parameters yourself — recommended values from the literature are shown on the 'Wilkie Models' sheet — or use @RISK to estimate them using Time Series fitting with the AR1 model.  @RISK lets you estimate the parameters for the price index model (mean, standard deviation, and autoregressive parameter), but in this case we fitted the transformed historical data set in column C of the 'Data' sheet and extracted those parameters from the AR1 fit; see the 'Parameters RPI' sheet. Notice that the Wilkie model requires a logarithmic transformation and a first-order differencing detrend.  Once you have found the parameters by running a fit, or picked them from the table, you can easily create the time series model with @RISK, as shown on the 'RPI' sheet.
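
For instance, if the raw RPI values were in column B of a hypothetical layout, the log transformation and first-order differencing could be applied in one step with a formula like

=LN(B3)-LN(B2)

copied down the column, producing the transformed series that the AR1 fit is run against.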

For the SY model, we used parameters that are recommended in the literature and constructed the model directly.  Please see the 'SY' sheet.

last edited: 2014-03-13

2.25. Geometric Mean in @RISK

Applies to:
@RISK, all releases

Question:
How can I obtain the geometric mean of an output in @RISK?

Response:

Unfortunately, @RISK doesn't have a function that returns the geometric mean of an output directly, but you can compute it as the exponential of the arithmetic mean of the logarithms of the data, as described in the attached model.
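
A minimal sketch of the idea, with hypothetical cell addresses: suppose the quantity of interest is computed in C5. Add a helper cell that takes the logarithm and collects its values as an @RISK output, then exponentiate the simulated mean of that helper. In the helper cell (say D5), enter

=RiskOutput("Log of result") + LN(C5)

and, after running a simulation,

=EXP(RiskMean(D5))

returns the geometric mean. Note this works only when the output is always positive, since LN( ) is undefined for zero and negative values.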

last edited: 2019-04-16

3. @RISK Distributions

3.1. Which Distribution Should I Use?

How do I know which probability distribution I should use?  Do you have some book you can refer me to?

Different industries tend to prefer a different selection of distributions.  We don't have any one book that directly addresses your question, but we do have a number of resources to offer, both within @RISK and externally.

One very powerful tool, assuming you have the Professional or Industrial Edition, is distribution fitting.  This lets you enter historical data; then @RISK attempts to fit every relevant distribution to the data.  You can instantly compare different distributions to see which one seems best suited to the data set.

If you're not fitting to existing data, here are some things to think about:

  • Your first decision is whether you need a continuous or discrete distribution. Continuous distributions can return any values within a specified range, but discrete distributions can return only predefined values, usually whole numbers. The Define Distributions dialog has separate tabs for discrete and continuous distributions.

  • Then ask whether your distribution should be bounded on both sides, bounded on the left and unbounded on the right, or unbounded on both sides. The thumbnails in Define Distributions will give you an idea of whether each distribution is bounded.

  • Finally, do you have a general idea of the shape of distribution you want — symmetric or skewed? strong central peak or not? The shapes in Define Distributions are just a partial guide for this, because changing the numeric parameters of some distributions can change the shape drastically.

There's more than one way to specify a distribution. Every distribution has standard parameters that you can enter explicitly, as numbers or cell references. But you might prefer to specify distributions by means of percentiles, as a way of specifying those parameters implicitly. To do this, in Define Distributions select the Alt Parameters tab, and you'll see the distributions that can be specified by means of percentiles. On the other hand, if you know the mean, standard deviation or variance, skewness, and kurtosis that you want, the RiskJohnsonMoments distribution may be a good choice.

After you select a distribution, the Define Distribution window gives you instant feedback about the shape and statistics of the distribution as you alter the parameters or even the functions themselves.

Additional resources:

Additional keywords: Johnson Moments, Choose a distribution, Picking a distribution

Last edited: 2017-06-19

3.2. Swap Overlayed Distributions

Applies to: @RISK 8.2 onward

I'm not sure which distribution would be best for my input. Is there a quick way to compare several distributions at once?

Starting in version 8.2, @RISK allows overlays to be swapped in the Define Distribution window. To do this you first need to add an overlay to the currently defined distribution, by clicking the Overlays button in the bottom left-hand corner of the window and choosing Add Overlay from the menu.

You then add the desired distribution from the window, and edit the parameters on the left-hand side of the window. The original distribution will remain as a solid image, and the overlay will appear over the top as an unfilled line. You can include many overlays if that helps you compare the shape and limits of multiple distributions.

To change the main distribution, click the three dots next to the overlay you think fits your needs best, and then choose Set as Main Distribution. This will change that overlay to the solid filled distribution on the graph. You can either close the Define Distribution window at this stage, or remove the remaining overlays by choosing Clear Overlays from the Overlays button.

Last edited: 2021-07-23

3.3. Cell References in Distributions

Applies to:  @RISK 4.x–7.x

Must I specify the X's and P's as fixed numbers in the RiskDiscrete, RiskCumul, RiskCumulD, RiskDUniform, RiskHistogrm, or RiskGeneral distribution, or can I replace them with cell references?  What about RiskSimtable — can I use cell references instead of fixed numbers?

You can replace the list of numbers with cell references without braces, but the referenced cells must be a contiguous array in a row or a column.  It's not possible to collect cells from multiple locations in the workbook.

Example 1:

=RiskCumul(0, 10, {1,5,9}, {.1,.7,.9})

If the probabilities are in cells C1, C2, C3, then you replace the second set of braces and numbers with an array reference, like this:

=RiskCumul(0, 10, {1,5,9}, C1:C3)

If the numbers (X values) are in cells D1, D2, D3, then you replace the first set of braces and numbers with an array reference, like this:

=RiskCumul(0, 10, D1:D3, {.1,.7,.9})

And you can replace both the X's and the P's with array references, like this:

=RiskCumul(0, 10, D1:D3, C1:C3)

Example 2:

=RiskSimtable({10,20,30,40})

If the scenario numbers are in cells IP201 through IS201, then you replace the braces and numbers with an array reference, like this:

=RiskSimtable(IP201:IS201)

These rules might seem arbitrary, but they're actually standard Excel.  The @RISK distribution functions mentioned above take one or two array arguments.  Excel lets you specify a constant array, which is a series of numbers enclosed in braces; or you can specify a range of cells, which is a contiguous series of cells in one row or one column. Excel doesn't have any provision for making an array out of scattered cells.

Additional keywords:  Simtable, Discrete distribution, Cumul distribution, CumulD distribution, DUniform distribution, Histogram distribution, General distribution

Last edited: 2015-06-19

3.4. Setting the "Return Value" of a Distribution

Applies to: @RISK 5.x–7.x

For cells in my model that are probabilistic (directly or indirectly), how do I change the value that is displayed in the cell when a simulation is not running?

By default, when a simulation is not running you will see static values: the displayed values of @RISK distributions won't change during an Excel recalculation. The default static value for continuous distributions is the mean value (expected value). For discrete distributions, the default static value is not the true expected value but rather the value within the distribution that is closest to the expected value: for example, RiskBinomial(9,0.7) will display 6 rather than 6.3.

Outputs, and other values computed from the inputs, generally don't display their own mean or expected value but rather the value computed from the displayed values of inputs. Please see Static Value of Output Differs from Simulated Mean.

There are several ways you can change the displayed values of input distributions, and the resulting displayed values of outputs.  None of these methods will affect a simulation in any way.

Which static values are displayed?

@RISK lets you choose to display the expected value, true expected value, mode, or a selected percentile for all distributions. You can make this choice in either of two places:

  • Simulation Settings, bottom half of the General tab: applies to all distributions in all open workbooks.

  • Utilities » Application Settings » Default Simulation Settings section » Standard Recalc: applies to all distributions in all open workbooks and to any new workbooks you create in the future when @RISK is running.

What's the difference between "expected value" and "true expected value"?

For continuous distributions, "expected value" and "true expected value" are the same, the mean of the distribution.

For discrete distributions, "true expected value" is the mean of the distribution, but "expected value" is the mean rounded to the nearest value that is a member of the distribution. For example, RiskBinomial(3, .44) has a mean = "true expected value" of 1.32, but an "expected value" of 1 because that is the nearest to the mean out of the distribution's possible values 0, 1, 2, 3. In other words, for the mean or expected value of discrete distributions, as defined in textbooks, you need to set @RISK to display the "true expected value".

You can also put a RiskStatic( ) function in an individual distribution to override the general settings, for that distribution only. Example:

=RiskNormal(100, 10, RiskStatic(25) )

Display random values instead of static values

You can suppress the static values and have @RISK generate new random values for each distribution when Excel does an automatic recalculation or when you press F9 to force a manual recalculation. To switch between random and static values for all open workbooks use either method:

  • "Rolling dice" icon (Random/Static Standard (F9) Recalculation).

  • Simulation Settings, bottom half of General tab, select Random Values or Static Values. (The "rolling dice" icon switches between these two.)

You can also change @RISK's default from static values to random values in the Application Settings dialog, as mentioned above.

Last edited: 2017-10-10

3.5. Generating Values from a Distribution

Applies to: @RISK 4.x and newer

I'd like to generate 10,000 sample values from a particular distribution, for example RiskLogLogistic(-0.0044898, 0.045333, 2.5862). Is there some way to do it?

Here is a choice of five methods.

Simulation methods:

Create an empty workbook and put that distribution in a cell. Set iterations to the desired number of values, and run a simulation. Then, do one of the following:

  • Click the x-subscript-i icon to open the Simulation Data window. Right-click anywhere in the data column, and select Copy, then paste the values into your Excel sheet.
  • Click Excel Reports » Simulation Data (Inputs) or Simulation Data. @RISK will create a new sheet with your data.
  • Click Browse Results, click on the cell, and use the drop-down arrow at upper right to change Statistics grid to Data Grid. Right-click on the column heading and select Copy, then paste the values into your Excel sheet.

Since you are running an actual simulation, these methods use the Sampling Type you have set in Simulation Settings. By default, that is Latin Hypercube, which is better than traditional Monte Carlo sampling at matching all percentiles to the theoretical cumulative probability of a distribution. Also, correlations are honored in a simulation.

Non-simulation methods:

You can also sample a distribution without running a simulation. In this case, the Sampling Type is always Monte Carlo, regardless of your Simulation Settings, and any correlations are disregarded.

Here are two methods that don't involve running a simulation:

  • If you have @RISK Industrial or Professional, you can write a simple loop in Visual Basic to perform the sampling and save the random numbers. The Risk.Sample method is explained in Sampling @RISK Distributions in VBA Code; a rough sketch follows this list.

  • Click the Random/Static "rolling dice" icon to make it active. Insert your distribution function in a cell, and then click and drag to create as many duplicates as you need random numbers. Highlight those values, and press Ctrl+C to copy, then Paste Special » Values. (You can either paste the values in the same cells to overwrite the formulas, or paste them in other cells if you want to keep the formulas.)
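
As a rough illustration of the VBA approach, here is a minimal sketch. It assumes the @RISK object library is referenced and that Risk.Sample accepts the distribution formula as a string; check Sampling @RISK Distributions in VBA Code for the exact signature before relying on it.

Sub WriteSamples()
    ' Hypothetical sketch: draw 10,000 Monte Carlo samples from one
    ' distribution and write them down column A of the active sheet.
    Dim i As Long
    For i = 1 To 10000
        ActiveSheet.Cells(i, 1).Value = _
            Risk.Sample("RiskLogLogistic(-0.0044898, 0.045333, 2.5862)")
    Next i
End Sub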

Last edited: 2015-11-09

3.6. Specifying a Descriptive Name for a Distribution

Applies to: @RISK 4.x–8.x

In the sensitivity analysis or tornado charts, I'm observing some odd descriptions for the bars. My model is built with each description in the cell to the left of the cell that holds the formula. After running the simulation, in the tornado graphs I usually see the respective cell descriptions. However, I'm currently observing that some parameters seem to be using other text from the worksheet instead of the description next to the cell formula. Is there some way to tell @RISK what descriptions to use for the bars in tornado charts?

Every @RISK input distribution has a name for use in graphs and reports as well as the @RISK Model Window. When you first define a distribution, either in the Define Distribution window or by directly entering a formula in the worksheet, @RISK assigns it a default name. This default name comes from text that @RISK finds in your worksheet and interprets as row and column headings. If the name is acceptable, you don't need to do anything. If it's not acceptable, or if it's blank because @RISK couldn't find any suitable text, you can easily change it, using any of these methods:

  • In the first box in the Define Distribution window, enter the desired name. (You can do this when first creating the distribution. For an existing distribution, click on the cell and then click Define Distribution to reopen the window.)

  • Enter or change the name in the Model Window. Click anywhere in the row for the desired input, right-click, and select Function Properties. The Name box is first in the Properties dialog.

  • Edit the formula directly, in Excel, to insert a RiskName property function.  For example,

    =RiskPoisson(.234)

    might become

    =RiskPoisson(.234, RiskName("Number of claims"))

    or, with the name in a separate cell,

    =RiskPoisson(.234, RiskName(A17))

    Avoid complex name formulas that use Excel Table Notation, as not all versions of @RISK are able to interpret it correctly. You can use regular cell references instead, as shown below:

    =RiskPoisson(.234, RiskName(A17&" "&A18))

Save your workbook after entering or editing any names. The new names will be used in subsequent graphs and reports.  (Graphs and reports of simulation results will use the new names when you run a new simulation.)

Last edited:  2021-09-30

3.7. Shift Factor in a Distribution

Applies to: @RISK 4.x–7.x

What is the shift factor of a distribution, and why is it used?

The shift factor of a distribution is shown in the RiskShift( ) property function. It moves the function toward the right on the x-axis (positive shift factor) or toward the left on the x-axis (negative shift factor). In other words, it shifts the domain of the distribution. This is equivalent to taking every point on the distribution and adding the shift factor to it, in the case of a positive shift. With a negative shift, that amount is subtracted from every point on the distribution.

Shift factor in defined distributions

When you're defining a distribution, click the down arrow next to "Parameters: Standard", and select Shift Factor on the pop-up dialog. The shift factor is now added to the Define Distributions dialog for this distribution, and you can enter various values and see how they change the distribution. If you don't want to have to do that, go into Utilities » Application Settings » Distribution Entry and change Shift Factor to Always Displayed.

You can always add a shift factor to an existing distribution by editing the Excel formula directly. For example, if you change =RiskLognorm(10,10) to =RiskLognorm(10,10,RiskShift(3.7)), the entire distribution shifts 3.7 units to the right.

In general the shift factor should only be used in cases where the distribution function itself does not contain a location parameter. For example, you shouldn't use a shift factor for a normal distribution, since the mean of the normal is already a location parameter.

Shift factor in fitted distributions

In fitting distributions to data, the purpose of the shift factor is to allow fitting a particular distribution type because it has the right shape, even though the values in the fitted distribution might actually violate the defined parameter limits for that distribution.

For example, the 2-parameter log-normal distribution defined in @RISK cannot return negative numbers. But suppose your data have a log-normal shape but contain negative numbers. @RISK inserts a negative shift factor in the fitted distribution, thus shifting it from the usual position of a log-normal to the position that best approximates your data. In effect, this makes a 3-parameter version of the log-normal distribution.

See also: Truncate and Shift in the Same Distribution

Additional keywords: log normal, Lognorm, RiskLognorm

Last edited: 2017-03-29

3.8. Cutting Off a Distribution at Left or Right

Applies to:  @RISK 5.x–7.x

I have a regular distribution, but I want to truncate one tail. For example, maybe I have a RiskNormal(50,10) and I want to ensure that it never goes below 0.

The RiskTruncate property function limits the sampling of a distribution. Specify only a lower bound, only an upper bound, or both lower and upper bounds. With any such truncation, the "lost" probability is redistributed proportionally across the remaining range of the interval. This is better than using Excel's MIN and MAX functions, which distort the distribution by taking all the probability beyond the truncation point and adding it to the truncation point.

You can set truncation limits by editing formulas, or in the Define Distribution dialog (later in this article). Either way, the limits can be fixed numbers or cell references, although the examples in this article all use fixed numbers.

  • To specify only a minimum, with maximum unbounded, just omit the maximum argument of the RiskTruncate function. For example, in the RiskTruncate function below, the minimum has been specified as 0, but the maximum is +∞:

    RiskNormal(10, 5, RiskTruncate(0, ) )

    The comma is optional when you specify only a minimum:

    RiskNormal(10, 5, RiskTruncate(0) )

  • Likewise, you can specify only a maximum, with minimum unbounded, by omitting the minimum argument. (Notice the required comma before the maximum.) This example specifies a minimum of −∞ and maximum of 15:

    RiskNormal(10, 5, RiskTruncate(, 15) )

  • Finally, you can specify both minimum and maximum, by supplying both arguments. This example specifies a minimum of 2 with a maximum of 15:

    RiskNormal(10, 5, RiskTruncate(2, 15) )

You're not limited to naturally unbounded functions; you can also truncate a bounded function like a RiskTriang. For example, if you want a RiskTriang(100,200,300) shape, but with no values above 250, code it this way:

RiskTriang(100, 200, 300, RiskTruncate(, 250) )

If you prefer not to edit property functions in formulas, you can enter one-sided or two-sided truncations in the Define Distribution window:

  1. Right-click the Excel cell with the distribution function you want to truncate, and choose @RISK » Define Distributions from the popup menu; or, left-click the cell and then click the Define Distributions icon in the ribbon. The Define Distribution dialog appears.

  2. In the left-hand section of the dialog, find the Parameters entry and click into the box that says Standard.

  3. A drop-down arrow appears at the right of that box; click the arrow.

  4. Check (tick) the "Truncation Limits" box, and select Values or Percentiles at the right. Click OK.

  5. Specify a minimum by entering it in the box labeled "Trunc. Min", or specify a maximum by entering it in the box labeled "Trunc. Max". Again, these can be fixed numbers or cell references. If you leave the minimum empty, the distribution will not be truncated at left; if you leave the maximum empty, the distribution will not be truncated at right.

    As soon as you enter either a minimum or maximum, the RiskTruncate function appears as an argument of the distribution function in the cell formula displayed at the top of the Define Distribution window. Because the default minimum parameter is −∞, and the default maximum parameter is +∞, the parameter you do not specify is automatically omitted from the RiskTruncate function.

  6. Click the OK button to write the formula to Excel.

If you often use truncation limits in your distributions, you can configure @RISK to make the "Trunc. Min" and "Trunc. Max" boxes a regular part of the Define Distributions dialog box. In @RISK, click Utilities » Application Settings » Distribution Entry and change Truncation Limits to Always Displayed (Values) or Always Displayed (Percentiles).

See also:

Last edited: 2016-09-01

3.9. Truncate and Shift in the Same Distribution

Applies to: @RISK for Excel, all releases

My @RISK distribution function is not obeying the minimum and maximum set by the truncation function. Here is the function I am using:

=RiskPearson5(47, 6018, RiskShift(-78), RiskTruncate(11,100))

But in a simulation, I am getting many values below 11. What is the problem?

We tend to think of truncation as applying limits after shifting. However, when simulating a distribution function, @RISK always truncates first and then shifts, regardless of the order of these arguments in the distribution function. In your example, after truncating at 11 and 100, @RISK shifts the distribution left 78, so that the actual min and max for your Pearson5 are –67 and 22.

You need to take this into account when figuring out how to manipulate the function to get the desired result. Subtract your desired shift factor from your desired final limits for the distribution.

For example, if you want a Pearson5 that is truncated at 11 and 100 after shifting left by 78 units, compute the pre-shift truncation limits as 11–(–78) = 89 and 100–(–78) = 178, and code your function this way:

=RiskPearson5(47, 6018, RiskShift(-78), RiskTruncate(89,178))

The truncation limits 89 to 178 before shifting become your desired limits 11 to 100 after shifting.

See also:

Last edited: 2015-06-19

3.10. Statistics for an Input Distribution

Applies to: @RISK 5.x–7.x

How can I place the mean or a given percentile of an input distribution in my workbook? Can I choose between simulation results and the perfect theoretical statistics?

@RISK has two sets of statistic functions that can be applied to inputs. The statistics of the theoretical distributions all have "Theo" in their names — RiskTheoMean( ), RiskTheoPtoX(), and so forth. You can get a list by clicking Insert Function » Statistic Functions » Theoretical. All the same statistics are available for simulated results; click Insert Function » Statistic Functions » Simulation Result.

The theoretical ("Theo") return the correct values before a simulation runs, during a simulation, and after a simulation, and they don't change unless you change the distribution parameters.

The simulation-result (non-"Theo") functions change with each simulation, within ordinary statistical variability. Before the first simulation, they don't return meaningful numbers; during a simulation, they return #N/A. You can change them to be computed at every iteration, except in @RISK 5.0 — see "No values to graph" Message / All Errors in Simulation Data — but if you need those statistics during a simulation the better approach is usually to use the "Theo" functions.

Last edited: 2015-06-26

3.11. Statistics for Just Part of a Distribution

Applies to: @RISK 5.x–7.x

I want to get the mean and standard deviation for just part of my input distribution. If I enter truncation limits in the Define Distribution window or include RiskTruncate( ) in the distribution formula, then the mean and standard deviation of my distribution change and that is not what I want. I want the regular distribution to be simulated, but then after simulation I want to consider only part of it when computing the statistics.
OR,
I have applied a filter, but the statistics functions are still computed on the whole of the output distribution. Is there a way to get the mean of the filtered data set using RiskMean?

To use the whole distribution in simulation but then get the statistics of just a portion of it, put a RiskTruncate( ) or RiskTruncateP( ) function inside the RiskMean( ).  A very minimal example is attached.

  • A1 contains: =RiskNormal(100,10).

  • A2 contains: =RiskMean(A1, RiskTruncate(95)), which computes the mean of the part of the distribution from 95 to infinity.  This is equivalent to =RiskMean(A1, RiskTruncate(95, 1E+99)). RiskTruncate( ) specifies truncation limits by values.

  • A3 contains: =RiskMean(A1, RiskTruncateP(0.8,1)), which computes the mean of the part of the distribution from the 80th to the 100th percentile, the top 20% of the distribution. RiskTruncateP( ) specifies truncation limits by percentiles.

The other statistics functions can have RiskTruncate( ) or RiskTruncateP( ) applied in the same way. Thus you can get the mean of part of a distribution, percentiles of part of a distribution, standard deviation of part of a distribution, and so on.

About accuracy of theoretical statistics: Most distributions have no closed form for the mean of a truncated distribution. Therefore, if you're using a statistic function such as RiskTheoMean( ) with RiskTruncate( ) or RiskTruncateP( ), @RISK has to do a little mini-simulation to approximate the theoretical mean of the truncated distribution. This may differ from the actual theoretical mean by a small amount, usually not more than a percent or two. With a truncated simulated distribution, using a statistic function such as RiskMean( ) with RiskTruncate( ) or RiskTruncateP( ), @RISK uses actual simulation data. Thus results are accurate with respect to that simulation, but another simulation with a different random number seed would of course give slightly different results.

See also: Cutting Off a Distribution at Left or Right for truncating an input distribution and using only the truncated distribution in simulation.

Last edited: 2018-05-09

3.12. All Articles about RiskMakeInput

Applies to:
@RISK 5.x–7.x

The RiskMakeInput( ) function seems to have a lot of capabilities. Can you give me an overview?

The short answer is: RiskMakeInput lets you treat a formula as though it were an @RISK input distribution. That seemingly simple statement has a lot of implications, which we explore in various Knowledge Base articles:

Special applications:

Limitations:

Last edited: 2019-02-15

3.13. Event or Operational Risks

Applies to:
@RISK 5.x–7.x

A risk has a certain chance of occurring, let's say 40%. If it does occur, there's a probability distribution for its severity; let's say a Triang. I've been multiplying RiskBinomial(1, 0.4) by my RiskTriang. Should I do anything in my risk register beyond just multiplying?

Caution: The technique in this article is intended only for "light-switch" risks that either happen once or don't happen. For risks that could happen multiple times in one iteration, please see Combining Frequency and Severity and use RiskCompound.

Probably you should: wrap the multiplication inside a RiskMakeInput function. If the probability is in cell C11 and the impact in C12, your function for actual impact in any given iteration would look like this:

=RiskMakeInput(C11*C12)

If you wish, you can give it a name:

=RiskMakeInput(C11*C12, RiskName("my name for this risk") )

Why introduce an extra distribution instead of just multiplying? Don't they get the same answers?

Nearly the same, though not identical. Here's why. Suppose your simulation has 10,000 iterations, and your risk has a 40% probability of occurring. There are 10,000 values of your RiskTriang for the 10,000 iterations. Only 4,000 of them (40% of the 10,000) get used, but statistics and graphs will report based on all 10,000 values. RiskMakeInput treats the product as a distribution, so that now you have 6,000 zero values and 4,000 non-zero values, and the statistics reflect that.

But using RiskMakeInput can make the greatest improvement in your tornado graphs. Without RiskMakeInput, you might get a bar in your tornado for the RiskBinomial, or for the RiskTriang, or both, or neither. With RiskMakeInput, if the risk is significant you get one bar in the tornado, and if the risk isn't significant there's no bar for it.

The attached example shows a risk register, both with plain multiplication and with improvement by way of RiskMakeInput. Run a simulation. Though the two output graphs don't look very different, the two tornado graphs show very different sets of bars. (In this particular example, most of the tornado bars in Method A come from the RiskBinomial functions, which probably isn't helpful.) Also, with plain multiplication in method A, there's no way to get an accurate graph of the impacts in all 10,000 iterations; with RiskMakeInput, just click on one and click Browse Results.

Can I correlate RiskMakeInput?

Unfortunately, no. This is a limitation of RiskMakeInput, and of the plain multiplication method also. There is a workaround in Correlating RiskMakeInput or RiskCompound, Approximately.

See also: All Articles about RiskMakeInput

Additional keywords: Event risk, operational risk, Risk register

Last edited: 2018-10-24

3.14. Combining Frequency and Severity

Applies to: @RISK 5.x–7.x

I have a risk that may or may not occur, or it might occur a variable number of times. But the impact or severity of each occurrence is a probability distribution, not a fixed number. How can I model this in @RISK?

The RiskCompound function, available in @RISK 5.0 and later, is the solution. It takes two arguments: a discrete function for frequency or probability, and a discrete or continuous function to govern severity or impact. (Two additional arguments are optional; see How are the deductible and limit applied, below.)

Suppose the impact or severity is according to RiskNormal(100,10). If you want to say that the risk may or may not occur, and has 40% probability of occurrence, code it this way:

=RiskCompound(RiskBinomial(1,0.4), RiskNormal(100,10))

(For more about a risk that can occur only zero times or one time, see Event or Operational Risks.)

If you want to say that the risk could occur a variable number of times, choose one of the discrete distributions for frequency. For example, if you choose a Poisson distribution with mean 1.4 for the distribution of possible frequencies, then your complete RiskCompound function would be

=RiskCompound(RiskPoisson(1.4), RiskNormal(100,10))

In any iteration where the frequency is greater than 1, @RISK will draw multiple random numbers from the severity distribution and add them up to get the value of the RiskCompound for that iteration. (There is no way to get at the individual severity values that were drawn within one iteration.)

Must frequency and severity be @RISK distributions, or can they be references to cells that contain formulas?

You can embed the frequency and severity distributions within RiskCompound( ), as shown above, or use cell references for frequency and severity and keep those distributions in other cells. There are two caveats:

  • Performance: If your frequency is large, or if you have many RiskCompound functions, your simulation will run faster — possibly much faster — if you embed the actual severity distribution within the RiskCompound( ). Using a cell reference for the frequency distribution doesn't hurt performance. (The attached CompoundExploration.xls uses cell references to make the discussion easier to follow, but it is a very small model and so performance is not a concern.)

  • Calculation: If the severity argument is a cell reference, and the referenced cell contains an @RISK distribution, then the severity will be evaluated multiple times in an iteration, just as if the severity were physically embedded in the RiskCompound( ) function. For instance, suppose that the severity argument points to a cell that contains a RiskTriang( ) distribution, either alone or within a larger formula. If the frequency distribution has a value of 12 in a given iteration, then the referenced formula will be re-evaluated 12 times during that iteration, and the 12 values added together will be the value of the RiskCompound( ).

    But if the referenced cell does not contain any @RISK distributions, it will be evaluated only once every iteration, even if the cell contains a formula that ultimately refers to an @RISK distribution. For example, consider the function =RiskCompound(F11,S22), and suppose that on one particular iteration the frequency value in F11 is 12. If the severity cell S22 contains a formula such as =RiskNormal(B14,B15)+B16*B17, it will be evaluated 12 times during this iteration, and the value of the RiskCompound will be the sum of those twelve values. But if the severity cell S22 contains a formula such as =LOG(B19), and B19 contains a RiskNormal( ) function, the formula will be evaluated only once in this iteration, and the value of the RiskCompound( ) for this iteration will be 12 times the value of that formula. You can think of it this way: RiskCompound( ) will drill through one level of cell referencing to find distributions, but only one level.

What if the frequency distribution is a continuous distribution? How does @RISK decide how many severity values to add up?

"Frequency" implies a number of occurrences, which implies a whole number (0 or a positive integer). Therefore we recommend a discrete distribution, returning whole numbers, for the frequency. But if you use a continuous distribution, or a discrete distribution returning non-integers, @RISK will truncate the value to an integer.

For example, if your frequency distribution returns a value of 3.7, @RISK will draw three values from the severity distribution, not four.

How are the deductible and limit applied in a RiskCompound( ) function? Is it on a per-occurrence or an aggregate basis?

RiskCompound( ) takes up to four arguments:

RiskCompound(dist1, dist2, deductible, limit)

Both deductible and limit are applied per occurrence. For example, suppose that the frequency distribution dist1 has a value of 6 in a particular iteration. Then the severity distribution dist2 will be drawn six times, and deductible and limit will be applied to each of the six.

The limit argument to RiskCompound( ) is meant to be the actual maximum payout or exposure per occurrence. If the actual maximum payout is the policy limit minus the deductible, then you should use the actual maximum payout for the fourth argument to the RiskCompound( ) function.

For each sample drawn from dist2, out of the multiple samples during an iteration, the result returned is
MIN( limit, MAX( sample - deductible, 0 ) )

In words:
  1. If sample is less than or equal to deductible, zero is returned.
  2. If sample is greater than deductible and (sample minus deductible) is less than limit, (sample minus deductible) is returned.
  3. If (sample minus deductible) is greater than or equal to limit, limit is returned.

For example, with a deductible of 10 and a limit of 50, a sample of 8 contributes 0, a sample of 35 contributes 25, and a sample of 90 contributes 50.

Again, limit and deductible are applied to each of the samples of dist2 that are drawn during a given iteration. Then the values of all the occurrences are summed, and the total is recorded as the value of the RiskCompound( ) function for that iteration. (It's not possible to get details of the individual occurrences within an iteration.)

You can download the attached workbook to try various possibilities for RiskCompound.

See also: All Articles about RiskCompound

Last edited: 2018-08-09

3.15. All Articles about RiskCompound

Applies to:
@RISK 5.x–8.x

The RiskCompound( ) function seems pretty complex. Can you give me an overview?

The short answer is: RiskCompound lets you model a risk that could occur a varying number of times, with different severities—and model it in one function. The main article is first in the list below, and the others explore specialized issues.

Special applications:

Correlation:

Last edited: 2021-11-18

3.16. Sum of Distributions Must Equal Fixed Value

Applies to: @RISK 4.x–7.x

I have several continuous distributions that vary independently, but I need them always to add up to a certain value.  Is there any way to accomplish this?

Please see the attached example.

The technique is to let the distributions vary randomly, but have an equal number of helper cells.  Each helper cell is a scaled version of the corresponding distribution.  "A scaled version" means that the first helper cell equals the first actual distribution multiplied by the desired total and divided by the actual total, and similarly for each of the helper cells.  In this way you are guaranteed that the helper cells always add up to the desired value.

Your workbook formulas should all refer to the helper cells, and not to the original distributions.  If you want to record the values of the helper cells during a simulation, you can designate them as @RISK output cells.
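
A minimal sketch of the scaling, with hypothetical cell addresses: suppose three independent distributions are in A1:A3, the desired total is in B1, and B2 contains =SUM(A1:A3). The first helper cell would then be

=A1*$B$1/$B$2

and similarly for the other two helper cells, so the three helper cells always sum to the value in B1.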

Please note that this technique is suitable for continuous distributions. If you need discrete distributions to add up to a fixed total, you can't use this technique because the scaled versions usually won't be whole numbers.

Additional keywords: Total of distributions equals constant, Fixed value for total

Last edited: 2016-12-15

3.17. Multinomial Distribution

Applies to: @RISK 5.x and newer

Does @RISK have a multinomial distribution?

The multinomial distribution is a generalized form of the binomial distribution. In a binomial, you have a fixed sample size or number of trials, n. Every member of the population falls into one of two categories, usually called "success" and "failure". The probability of success on any trial is p, and the probability of failure on any trial is 1–p. The RiskBinomial distribution takes the parameters n and p, and at each iteration it returns a number of successes. The number of failures in that iteration is implicitly n minus the number of successes.

In a multinomial, you have three or more categories, and a probability is associated with each category. The total of the probabilities is 1, since each member of the population must be a member of some category. As with the binomial, you have a fixed sample size, n. At each iteration you want the count of each category, and the total of those counts must be n.

@RISK doesn't have a multinomial distribution natively, but you can construct one using binomial distributions and some simple logic. The attached workbook shows you how to do it.
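
As a rough illustration (not necessarily the construction used in the attached workbook), the standard conditional decomposition for three categories with n = 100 and probabilities 0.5, 0.3, and 0.2 might look like this, with hypothetical cell addresses:

A1: =RiskBinomial(100, 0.5)
A2: =IF(100-A1=0, 0, RiskBinomial(100-A1, 0.3/(1-0.5)))
A3: =100-A1-A2

A1 draws the count for the first category; A2 draws the second category's count from the remaining trials, with its probability rescaled to the categories that remain; and A3 takes whatever is left. The IF( ) guards against a zero sample size in the rare iteration where the first category uses all the trials. The three counts always sum to 100.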

Last edited: 2016-03-18

3.18. Cumulative Probability

Applies to: @RISK 5.x and newer

Excel has functions like NORM.DIST (NORMDIST in older Excels) to return the cumulative probability in a normal distribution. Does @RISK have anything like that?

Yes, @RISK has functions to find the cumulative probability for any distribution. Instead of a separate cumulative-probability function for each distribution, @RISK uses the same function for cumulative probability of any distribution.

Actually, there are two functions, one to obtain simulation results and one to query the theoretical distribution.

  • Suppose you have an @RISK input or output, or even just an Excel formula, in cell AB123. To obtain the cumulative probability to the left of x = 14, for the most recent simulation, use the function =RiskXtoP(AB123,14). This function won't return a meaningful value until after a simulation has been run.

  • For @RISK distributions, you can access the theoretical distribution. For example, if you have =RiskNormal(100,10) in cell XY234, the function =RiskXtoP(XY234,120) will return 0.97725, give or take, but varying from one simulation to the next. But the "theo" function, =RiskTheoXtoP(XY234,120) will return the exact theoretical cumulative probability, limited only by the accuracy of floating point. The theoretical value is not dependent on running a simulation. With the "theo" functions, you can even embed the distribution right in the function, as for instance =RiskTheoXtoP(RiskNormal(100,10), 120).
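
The inverse direction works the same way. RiskPtoX and RiskTheoPtoX go from a cumulative probability back to an x value; for instance (a minimal illustration), =RiskTheoPtoX(RiskNormal(100,10), 0.97725) returns approximately 120.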

Instead of the probability from –∞ to an x value, how can I get the probability between two x values?

Just subtract the two cumulative probabilities. For example, the cumulative probability of cell PQ456 between x = 7 and 22 would be =RiskXtoP(PQ456,22) - RiskXtoP(PQ456,7).

How do I get the probability density, which Excel returns when the last argument of NORM.DIST is FALSE?

The probability density is simply the height of the curve at a given x value. Use RiskTheoXtoY instead of RiskTheoXtoP. (The RiskTheoXtoY function was added in @RISK 6.0 and is not available in @RISK 5.x.)
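
For example (values rounded), =RiskTheoXtoY(RiskNormal(100,10), 120) returns about 0.0054, the height of that normal curve at x = 120; Excel's =NORM.DIST(120,100,10,FALSE) gives the same number.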

Last edited: 2017-05-04

3.19. Specifying Distributions in Terms of Percentiles

Applies to: @RISK 5.x–7.x

I know what distribution I want to use, but I want to specify it in terms of percentiles rather than with the usual parameters. Is there a way?

For many distributions, you can. We use the term "Alternate Parameters" for specifying at least one percentile in place of a usual parameter like mean, most likely, alpha, and so forth.

In the Define Distributions dialog, select the Alt. Parameters tab, and you'll see the distributions that can be specified in terms of percentiles. Double-click your desired distribution to select it.

A dialog will open with some suggested percentiles, and you can edit the values in that dialog as usual. But quite possibly you'll want different percentiles from the suggested ones, such as the 10th and 90th instead of the 5th and 95th. To change which percentiles are used, click the drop-down arrow at the right of the word Alternate to open the Parameters dialog. There you can change which percentiles are used, and by selecting the radio buttons you can even define the distribution based on a mix of percentiles and standard parameters. For more, with a screen shot, please search for Alternate Parameters in @RISK's help file.

How do percentile parameters work internally? Does @RISK convert them to standard parameters?

Yes, @RISK resolves percentile parameters into standard parameters. This has to be done in every iteration in a simulation, because it's possible for your workbook's logic to change the parameters of a distribution from one iteration to the next.

In general terms, resolving alternate parameters is a kind of optimization problem. Say you have a potential candidate for the resolved (non-Alt) distribution. You can calculate an error for this candidate by computing the difference between the desired percentiles specified in the Alt function, and the percentile values your candidate actually has. Finding the correct non-Alt distribution requires juggling the parameters until that error goes to zero. That's the simplest method, and indeed you could use Palisade's Evolver or RISKOptimizer, or Excel Solver, to resolve alternate parameters yourself using this method.
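
As an illustration of that brute-force method (a sketch; the cell addresses and target percentiles are made up for the example): put candidate α1 and α2 in B1:B2 as the adjustable cells, and ask Solver, Evolver, or RISKOptimizer to minimize an error cell such as

=(RiskTheoXtoP(RiskBetaGeneral($B$1,$B$2,-3,16),2)-0.25)^2 + (RiskTheoXtoP(RiskBetaGeneral($B$1,$B$2,-3,16),12)-0.75)^2

When the error reaches zero, B1:B2 hold parameters whose 25th and 75th percentiles are 2 and 12.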

But there's a problem with this method: it's just not fast enough, especially for complicated cases like the BetaGeneralAlt with its four parameters. The time to solve an optimization problem goes way up as you increase the dimensionality. If you were resolving parameters just one time, it would probably be fine to do it in this brute-force way; but, given the possibility of different parameters in each iteration, the resolution process needs to be really fast. Fortunately, we can usually reduce the dimensionality of the optimization. For example, with BetaGeneralAlt, some tricky math reduces the problem from four dimensions to two. (The differing mathematical tricks for each distribution are proprietary. Our developers put a lot of work into making them as efficient as possible.)

Can I see the standard parameters that @RISK computes from the percentiles?

Yes, please see Distribution Parameters from "Alt" Distributions.

Last edited: 2018-07-02

3.20. Specifying Distributions in Terms of Desired Mean and Standard Deviation

Applies to:
@RISK 4.x–7.x
RISKOPTIMIZER 1.x–5.x

I want a BetaGeneral distribution with a given mean and standard deviation. What α1 and α2 (alpha1 and alpha2) should I enter in the Define Distribution window? 
 
Can I do this if I know other statistics, such as the mode and the variance? Can I do this for other types of distribution?

Let's take the easiest alternatives first. 

If you have @RISK 5.5.0 or newer, the JohnsonMoments distribution is available.  It lets you specify mean, standard deviation, skewness, and kurtosis, and it comes up with an appropriate distribution shape for those parameters.

If you have a particular distribution in mind and you want to target percentiles (including the median), you may be able to use a form of the distribution that specifies percentiles in place of one or more parameters. Select the distribution from the Alt. Parameters tab of the Define Distribution dialog, and @RISK will calculate the needed parameters automatically.

If those alternatives don't meet your needs, you may be able to solve for the distribution parameters that give the desired mean and standard deviation (or other statistics) for your desired distribution.  It all depends on whether closed forms exist for your desired statistics. (A closed form, for this purpose, is an algebraic formula that can be implemented in Excel.  If you're targeting standard deviation, use the square root of the variance formula.)

There are two places to look for these closed forms.  In @RISK 5.x and newer, and in RISKOptimizer 5.x, look in the product's help topic for the particular distribution function that is of interest.  In @RISK 4.x or RISKOptimizer 1.x, click the Windows Start button, then Programs or All Programs, then Palisade DecisionTools, then Online Manuals, then Distribution Function Summary. 

Example:

The attached example shows how to find alpha1 and alpha2 for a BetaGeneral that give a desired mean and standard deviation.  Please download FindDistributionParams.xls and open it in Excel; you can solve it with RISKOptimizer or with Excel's Solver. All constraints and options are already set as needed.

The green cells are the desired statistics for the distribution, here minimum, maximum, mean, and standard deviation. The red cells are the adjustable cells for Solver or RISKOptimizer; they're arbitrarily set to 1 at the start. The purple cells are the formulas for mean and standard deviation, in terms of the adjustable red cells.

As you see, when α1 = α2 = 1 in a BetaGeneral distribution, the mean is 57.5 and the standard deviation is 24.5. These deviate from the desired values by a total of 27.04 units, the "error to minimize" in blue. RISKOptimizer or Solver is given the blue cell as the target to minimize.

When you run RISKOptimizer or Solver, it adjusts the red α1 and α2 until it converges on parameters that give the desired mean and standard deviation, or as close as possible to them.
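
If you want to reproduce the setup yourself, here is a sketch (cell addresses are illustrative, not the attached workbook's layout): with min, max, α1, and α2 in B1:B4, the formula cells for the statistics would be

mean: =B1 + B3*(B2-B1)/(B3+B4)
standard deviation: =SQRT( B3*B4*(B2-B1)^2 / ( (B3+B4)^2 * (B3+B4+1) ) )

and the error to minimize is the total deviation of these from the desired statistics.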

Variations on the example:

If you want to target different statistics, such as kurtosis and mode or skewness and mean, change the captions A21:A24 and the formulas E27 and H27.

If you're interested in a different distribution, you may need to change the captions D21:D22 in addition to the above, and you may also need to edit the constraints in RISKOptimizer or Solver. (In a BetaGeneral distribution, α1 and α2 must be positive, but parameters for many other distributions have different constraints.)  If the distribution has three parameters or more, insert the additional parameters and add appropriate RISKOptimizer or Solver constraints.

Additional keywords: RiskJohnsonMoments distribution, Johnson Moments

Last edited: 2015-06-19

3.21. Combining Estimates from Several People

Applies to: @RISK 5.x–7.x

Several people gave me their assessments of the likely impact of a risk or a benefit, but naturally their estimates vary. Also, I have higher confidence in some opinions than others. How can I combine these assessments in @RISK?

We often say in ordinary language that we give more weight to one thing than another in making a decision, and it's the same in this situation. You want to set up a little table of weights and @RISK probability distributions, and the question then is how to give each distribution the appropriate weight. It would be easy to take the weighted average of the distributions, but that causes the extreme opinions to be under-represented. There are many possible approaches that don't have that problem, and the attached workbook shows four of them.

  • Sheet1 lets the contributors specify different distributions, not just different parameters to the same distribution. The weights are converted to percentages, and then using the number of iterations (which you specify) each distribution is sampled for the appropriate number of iterations.

  • Sheet1A is similar, and in fact it uses the exact same distributions as Sheet1. But it uses a RiskDiscrete function to sample the individual distributions in the appropriate proportions. This one does not need you to place the number of iterations in the workbook.

  • Sheet2 takes a different approach, computing weighted averages of the cumulative probabilities (the CDFs, not the PDFs). This could have been done with different distributions like Sheet1, but we also took the opportunity to show how you could set up a table of pessimistic, most likely, and optimistic cases and use the same distribution for all of them.

  • Sheet3 uses a multinomial distribution. Over the course of the simulation, each of the five distributions is sampled in the appropriate proportion, based on the weights.

In all four cases, the combined function is wrapped in a RiskMakeInput function. That ensures that only the combined distribution, not the individual assessments, will show up in sensitivity graphs and figures. See also: All Articles about RiskMakeInput.
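
As a sketch of the Sheet1A idea (the distributions and weights here are invented for the example): with weights 0.5, 0.3, and 0.2 for three contributors, a formula such as

=RiskMakeInput( CHOOSE( RiskDiscrete({1,2,3},{0.5,0.3,0.2}), RiskPert(10,20,40), RiskTriang(5,25,50), RiskNormal(22,6) ), RiskName("Combined estimate") )

samples one contributor's distribution on each iteration, in proportion to the weights, and reports the combination as a single input.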

Last edited: 2016-03-28

3.22. Algorithm Used by RiskJohnsonMoments

Applies to: @RISK 5.5.0–7.x

The @RISK documentation says, "RiskJohnsonMoments(mean,standardDeviation,skewness,kurtosis) chooses one of four distribution functions (all members of the so-called Johnson system) that matches the specified mean, standard deviation, skewness, and kurtosis. This resulting distribution is either a JohnsonSU, JohnsonSB, lognormal, or normal distribution." How does @RISK choose among the four underlying distributions?

We use Algorithm AS 99, Journal of the Royal Statistical Society Series C (Applied Statistics) vol. 25, p. 180–189 (1976).  This article is available through JSTOR.

Additional keywords: JohnsonMoments distribution, Johnson Moments

Last edited: 2015-06-19

3.23. Distribution Parameters from "Alt" Distributions

Applies to:
@RISK 5.x–7.x

When I specify a distribution in terms of percentiles or "alt parameters", how does @RISK figure out the parameters of the distribution?

That's a good question. If you have a RiskPertAlt or RiskTriangAlt, for example, @RISK finds what parameters of a standard RiskPert or RiskTriang would give the percentiles you specified. But there's no formula. Instead, @RISK has to use a process of successive approximations to find the right parameters for the RiskPert. And it's the same for all the other Alt distributions, as well as RiskTrigen, which specifies two of a triangle's three parameters as percentiles. RiskUniformAlt is the exception; see below if you want to know the theory.

How can I find out what standard parameters @RISK computes for Alt functions?

  1. In the Define Distribution window's left-hand column, click the drop-down arrow next to Parameters: Alternate.
  2. A Parameters dialog opens; clear the check box for Alternate Parameters in that dialog and click OK.
  3. In the Define Distribution dialog, click OK to write the non-Alt function to the cell, or Cancel to keep the Alt function.

For example, paste this formula into an empty cell:

=RiskBetaGeneralAlt(5%,-3,25%,2,75%,12,95%,16)

Press Enter, and then click Define Distributions. (As an alternative, you could click Define Distributions on an empty cell, select the distribution, and enter the parameters in the dialog.)

Click the drop-down arrow to the right of Alternate, remove the check mark for Alternate Parameters, and click OK just once. The display now shows the equivalent regular parameters, α1=1.295682, α2=1.121222, Min=-4.990886, Max=17.204968. (Because these are rounded values, some statistics and percentiles may be slightly different from their values in the Alt distribution.) The full non-Alt distribution is shown in the Cell Formula box near the top of the Define Distributions dialog:

=RiskBetaGeneral(1.295682,1.121222,-4.990886,17.204968)

If you now click OK, @RISK will replace the Alt distribution in your worksheet with that non-Alt distribution; if you click Cancel, the Alt distribution will remain in your worksheet.

I have to convert a number of Alt functions to standard parameters. Is there some way to do this with worksheet functions?

For some Alt functions, yes. The attached workbook gives examples.

If the standard parameters of a distribution are statistics like min, max, and mean, you can use RiskTheoMin and other "Theo" statistic functions to find those parameters. The triangular distribution, for example, has parameters of min, mode (most likely), and max, and you can get them by applying those "Theo" functions to the TriangAlt or Trigen.
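
For example (percentile values invented for illustration, and assuming your @RISK version includes the RiskTheoMode statistic function): if cell A1 contains =RiskTriangAlt(10%,50, 50%,80, 90%,120), then =RiskTheoMin(A1), =RiskTheoMode(A1), and =RiskTheoMax(A1) return the min, most likely, and max of the equivalent RiskTriang.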

If the parameters don't map directly to statistic functions, but there are formulas in the help file, you can solve those formulas to find the parameters. For instance, the help file says that the mean and variance of a BetaGeneral are

μ = min + α1(max−min)/(α1+α2)

σ² = α1α2 (max−min)² / ( (α1+α2)² (α1+α2+1) )

Solving for α1 and α2 gives

α2 = ( (μ−min)(max−μ)/σ² − 1 ) (max−μ) / (max−min)

α1 = α2 (μ−min) / (max−μ)
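
As a quick check of those formulas (numbers chosen for illustration): for min = 0, max = 10, μ = 5, and σ² = 100/12, which is the uniform case, they give α2 = (5·5/(100/12) − 1)(5/10) = 1 and α1 = 1·5/5 = 1, matching RiskBetaGeneral(1,1,0,10).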

The attached workbook shows about a dozen examples, mostly less complicated than that. Unfortunately, not all distributions have closed-form expressions for the statistics in terms of the distribution parameters; for those, the only choice is the method above using the Define Distribution window.

What about RiskUniformAlt? I'm curious how @RISK can use a formula to convert it to standard parameters, when the other Alt distributions require successive approximations.

This section shows the algebraic solution for those who are interested, although the techniques given above are quicker and simpler. Unlike all the other Alt functions, @RISK uses formulas to convert RiskUniformAlt to the equivalent non-Alt function. Consider RiskUniformAlt(C1,x1,C2,x2), where C1 and C2 are cumulative ascending percentiles >0 and <1. How is that converted to the equivalent RiskUniform(min,max)?

The CDF (cumulative distribution function) for RiskUniformAlt is a straight line passing through your desired percentiles (x1,C1) and (x2,C2). But the same straight line also passes through (min,0) and (max,1), although you don't yet know the values of min and max. Therefore the equation of the CDF is

C = (x − min) / (max − min)

Substituting your two desired percentiles (x1,C1) and (x2,C2) gives C1⋅(max − min) = x1 − min and C2⋅(max − min) = x2 − min. Solving those as simultaneous equations in min and max gives the formulas

min = (x1⋅C2 − x2⋅C1) / (C2 − C1)

max = min + (x2 − x1) / (C2 − C1)
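
For example, RiskUniformAlt(10%,2, 90%,10) resolves to min = (2⋅0.9 − 10⋅0.1)/(0.9 − 0.1) = 1 and max = 1 + (10 − 2)/0.8 = 11, that is, RiskUniform(1,11). As a check, the 10th and 90th percentiles of RiskUniform(1,11) are indeed 2 and 10.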

Additional keywords: Standard parameters, Alt parameters, Standard distributions, Alt distributions

Last edited: 2018-07-05

3.24. Turning Inputs On and Off

Applies to: @RISK 5.x–7.x

Is there an easy way to turn input variables on and off? I would like to try a simulation with some of the variables turned off, but I don't want to replace the distribution formulas with constants because I will want the distributions to vary again in later simulations.

Yes, you can lock any inputs.  To lock an input, do any of these:

  • In the Model Window, right-click the input and select Lock Input from Sampling.  (You can select multiple distributions with Shift-click or Ctrl-click, and lock all of them in one operation.)
  • In the worksheet, right-click the distribution and select Define Distributions.  In the Define Distribution dialog, click the fx icon near the top right to open the Properties dialog.  On the Sampling tab of that dialog, click Lock Input From Sampling.
  • Edit the distribution formula in the worksheet to insert a RiskLock( ) property function.
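
For example, a sketch of the third method: changing =RiskNormal(100,10) to =RiskNormal(100,10,RiskLock()) locks that input; delete the RiskLock( ) later to let it vary again.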

During a simulation, a locked input returns its static value (if one was specified) or, otherwise, the value determined by the options under When a Simulation is Not Running, Distributions Return in the Simulation Settings dialog, such as its expected value.

Last edited: 2015-06-19

3.25. RiskSixSigma Property Function

Applies to: @RISK 5.x–7.x

Where should I use the RiskSixSigma( ) property function? Should it be part of RiskOutput( ) or RiskCpk( )?

The standard way is to use RiskSixSigma( ) with RiskOutput( ). That tells @RISK to put six-sigma labels on the graphs, and extra statistics in the statistics grid. It also lets you use the six-sigma worksheet functions to calculate Cpk and many other statistic functions. (In @RISK, click Insert Function » Statistic Functions » Six Sigma.)

For example, suppose you have an output in cell A1 that looks like this:

=RiskOutput(,,,RiskSixSigma(0,1,.5))+formula

If you put the formula =RiskCpk(A1) in another cell, @RISK will do the Cpk calculation using those LSL/USL/Target values. By doing it this way, you associate the six-sigma properties with the calculated output, and all statistic functions will use the same values.

Does that mean that I should never put a RiskSixSigma( ) inside RiskCpk( )?

There are two situations where you would want to place a RiskSixSigma( ) property function inside a statistic function such as RiskCpk( ):

  • You may have an output you don't want to apply six-sigma properties to, but you still want to compute the Cpk for it with a certain set of parameters.  For example, you have a regular output in cell A2 with no six-sigma properties. In another cell, you can place the formula =RiskCpk(A2,,RiskSixSigma(0,1,.5)) to get the Cpk assuming those LSL, USL, and Target values.

  • You may want to calculate the Cpk for an output with a different set of parameters — for instance, to make a table of different Cpk values for different LSL/USL pairs. Using the sample output formula mentioned above for cell A1, you might have RiskCpk(A1,,RiskSixSigma(0,1,1.5)). This would override the six-sigma parameters the output normally has in favor of the ones embedded in the Cpk function.

How are the arguments to RiskSixSigma( ) used in computing Six Sigma statistic functions?

In @RISK, please click Help » Example Spreadsheets » Six-Sigma » Six Sigma Functions.docx. The first section of that document explains where each of the five arguments to RiskSixSigma( ) is used. Following that are technical details of each of the 19 statistic functions, including computational formulas.

Additional keywords: Cp, RiskCp, Cpk, RiskCpk, Cpm, RiskCpm, DPM, RiskDPM

Last edited: 2015-06-19

3.26. Replacing RAND with RiskBernoulli or RiskUniform

Applies to: @RISK, all releases

I use the Excel RAND function a lot in my spreadsheet, but it is causing some problems. For example, I am not getting the same results when I run my simulation a second time, even though I am using a fixed seed. And when I use shoeprint mode, the numbers in the worksheet are different from what the Data Window shows for the same iteration.

Use an @RISK distribution like RiskBernoulli or RiskUniform instead of Excel's RAND function. If you have a fixed random number seed, @RISK functions will produce a reproducible stream of random numbers. See Random Number Generation, Seed Values, and Reproducibility for more about this.

Sometimes people use RAND in an IF function to decide whether to draw a value from an @RISK distribution:

=IF( RAND()<0.4, RiskNormal(100,10), 0 )

The direct @RISK equivalent to Excel's RAND is RiskUniform(0,1):

=IF( RiskUniform(0,1)<0.4, RiskNormal(100,10), 0 )

However, RiskBernoulli is a simpler choice for IF-tests because it puts the probability right in the function:

=IF( RiskBernoulli(0.4), RiskNormal(100,10), 0 )

You can simplify this expression even further. Since RiskBernoulli returns a 0 or 1, you can replace the IF with a multiplication:

=RiskBernoulli(0.4) * RiskNormal(100,10)

All four of these formulas say that a given risk is 40% likely to occur, and if it does occur it follows the normal distribution. But the @RISK functions give you a reproducible simulation, which RAND does not.

Which of those is the recommended way to model an event risk or operational risk?

The last one is the simplest, but all four have the same problem: you're modeling one risk with two distributions. This means that sensitivity measures won't be accurate, and graphs of simulated results will either show a lot of errors or show a lot of values that weren't actually used. To solve all of these problems, you want to wrap the expression in a RiskMakeInput, like this:

=RiskMakeInput( RiskBernoulli(0.4) * RiskNormal(100, 10) )

See Event or Operational Risks for more information about using RiskMakeInput in this way.

Additional keywords: Bernoulli distribution, MakeInput distribution, Uniform distribution

Last edited: 2019-02-15

3.27. Triangular Distribution: Specify Mean or Median Instead of Most Likely

Applies to: @RISK, all releases

I'd like to use the triangular distribution in @RISK, but I don't know the mode (m.likely), only the mean. Can I specify a triangular distribution using the mean?

Yes. The mean of a triangular distribution equals (min+m.likely+max)/3. Therefore

m.likely = 3*mean - min - max

Compute m.likely using that formula, and enter it along with min and max in the Define Distribution window.
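
For example, with min = 0, max = 10, and a desired mean of 4, enter m.likely = 3*4 - 0 - 10 = 2.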

(If the formula yields a value for m.likely that is less than min or greater than max, then mathematically no triangular distribution exists with the specified min, mean, and max.)

I'd like to use the triangular distribution in @RISK, but I don't know the mode (m.likely), only the median (50th percentile). Can I specify a triangular distribution using the median?

Yes.  Many distributions, including RiskTriang( ), let you specify one or more parameters as percentiles. Here's one method:

  1. Open the Define Distribution window and select the Triang distribution.
  2. Click in the box to the right of Parameters (the box contains "Standard"), then click the drop-down arrow that appears, and check (tick) "Alternate Parameters".
  3. The window expands with a "Parameter Selection" section.  Select the radio buttons to the left of Min and Max, but to the right of M. likely select Percentiles and if necessary enter 50.
  4. Click OK.  Now in the Define Distribution window you can specify min, median (50th percentile), and max.

Here's an alternative method:

  1. Open the Define Distribution window and select the Alt. Parameters tab, then TriangAlt.
  2. At the left, next to Parameters, click on Alternate, then click the drop-down arrow that appears.
  3. In the Triang Parameters dialog, select the radio buttons next to Min and Max, then click OK.

Last edited: 2015-06-19

3.28. Double Triangular Distribution

Do @RISK distributions include the Double Triangular Distribution that has been recommended by AACE at http://www.aacei.org/resources/rp/?
(The relevant article is AACE recommendation number 41R-08, "Risk Analysis and Contingency Determination Using Range Estimating" by Dr. Kenneth K. Humphreys.)

With @RISK 6.x–7.x:

Use the RiskDoubleTriang(min,mode,max,lower_p) distribution.

For example, suppose that you have a 76% probability of underrun (0 to 4) and a 24% probability of overrun (4 to 10).   Then you want this formula:

=RiskDoubleTriang(0, 4, 10, 0.76)

With @RISK 5.x and earlier:

You can create a double triangular distribution by using a RiskGeneral distribution.

Suppose that you have a 76% probability of underrun (0 to 4) and a 24% probability of overrun (4 to 10). Then the RiskGeneral function would be

=RiskGeneral(0,10,{4,4},{0.38,0.08})

Please paste this into an empty cell and then click the Define Distribution icon to see the graph.

Where do the 0.38 and 0.08 come from?  In this example, the minimum (greatest possible underrun) is 0, maximum (greatest possible overrun) is 10, and the most likely value is 4 (common side between the two triangles, repeated in {...,...}). 0.38 is the maximum probability density of the first triangle, and 0.08 is the maximum probability density of the second triangle. These are found by

  • 2 × (probability of underrun) ÷ (most likely minus minimum) = first value
    2 × 0.76 ÷ (4 - 0) = 0.38

  • 2 × (probability of overrun) ÷ (maximum minus most likely) = second value
    2 × 0.24 ÷ (10 - 4) = 0.08

You don't necessarily have to use these formulas. @RISK will automatically adjust the probability densities proportionally so that the total probability of the double triangle is 1.

Additional keywords: DoubleTriang distribution

Last edited: 2015-06-19

3.29. Entering Parameters for Gamma Distribution

Applies to:
@RISK 6.x/7.x

I'm trying to use a RiskGamma distribution, but the parameters don't seem to match what I found in another source.

A two-parameter gamma distribution has one shape parameter and one scale parameter. @RISK specifies them in that order, as shape = α (alpha) and scale = β (beta).

But there are other ways to specify the parameters, simply by using different Greek letters or even by using different parameters. Wikipedia, for instance, lists three ways: shape k and scale θ (theta); shape α or k and rate β (β = 1/θ, rate = 1/scale); or shape k and mean μ = k/β.

Don't be confused by the different letters. Comparing to Wikipedia, @RISK α (alpha) matches Wikipedia's k, and @RISK β (beta) matches Wikipedia's θ (theta), not Wikipedia's β (beta). @RISK's β is a scale parameter, but Wikipedia's β is a different parameter called rate, which is the reciprocal of the scale parameter θ. Thus @RISK reports the mean of the gamma distribution as αβ, considering β as a scale parameter, and that matches Wikipedia's kθ. Wikipedia also gives a mean of α/β, because the rate β is 1/θ, where θ is the scale parameter corresponding to @RISK's β parameter.

I have parameters from an outside source. How do I enter them in @RISK?

Here's what to enter for the α and β parameters of the RiskGamma distribution:

  • If you have shape and scale, regardless what letters are used by your outside source, set α = shape and β = scale.
  • If you have shape and rate, set α = shape and β = 1/rate.
  • If you have shape and mean, set α = shape and β = mean/shape.
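
For example, if an outside source gives a gamma distribution with shape 2 and rate 0.5, enter =RiskGamma(2, 1/0.5), i.e. =RiskGamma(2,2); its mean is αβ = 4, in agreement with shape/rate = 2/0.5 = 4.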

Last edited: 2018-12-28

3.30. Entering Parameters for Log-normal Distribution

Applies to:
@RISK, all releases
RISKOPTIMIZER, all releases
RDK and RODK, all releases

The RiskLognorm( ) function doesn't behave like the log-normal function in the books. When I enter μ=4, σ=2, I expect the simulated distribution to have a mean of 403.4 and standard deviation of 2953.5, but instead the mean and standard deviation are very close to 4 and 2. What's wrong?

@RISK, RISKOptimizer, and the developer kits have two log-normal distributions. RiskLognorm2( ) is the traditional distribution and behaves in the way described in statistics books. We also offer RiskLognorm( ), where the μ and σ (mu and sigma) you enter are the actual mean and standard deviation of the distribution, subject to the usual sampling fluctuation. The two distributions are the same except for the way you enter parameters.

  • If you know the desired actual mean and standard deviation, use RiskLognorm( ).  For RiskLognorm(μ,σ):
    Actual mean of the distribution = μ
    Actual standard deviation of the distribution = σ

  • If you want to use parameters that match the log-normal distribution in many textbooks, use RiskLognorm2( ).  For RiskLognorm2(μ,σ):
    Actual mean of the distribution = exp(μ+σ²/2)
    Actual standard deviation of the distribution = exp(μ+σ²/2)·sqrt(exp(σ²)−1) = (actual mean)·sqrt[exp(σ²)−1]

  • Finally, if you know the desired geometric mean and standard deviation of a log-normal distribution, use RiskLognorm2( ) but set μ to the natural log of the desired geometric mean, and σ to the natural log of the desired geometric standard deviation.  For details, please see Geometric Mean and Geometric SD in Log-normal.
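
For example (values rounded), =RiskLognorm(403.4, 2953.5) and =RiskLognorm2(4, 2) describe essentially the same distribution, since exp(4+2²/2) ≈ 403.4 and 403.4·sqrt(exp(2²)−1) ≈ 2953.5.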

These three methods are illustrated in the accompanying workbook.

Additional keywords: Lognorm distribution, Lognorm2 distribution

Last edited: 2015-06-19

3.31. Log-normal Distribution with 2 Percentile Parameters

Applies to: @RISK 5.x–7.x

In @RISK, is there any other way to generate a log-normal distribution with two percentile parameters, even if the log-normal automatically generates a third percentile parameter based on the other two?

Yes, you can do this easily.

  1. In Define Distribution, select the Alt. Parameters tab, and then LognormAlt.
  2. Click in the "Alternate" box next to Parameters, in the left-hand section of the dialog, then click the drop-down arrow that appears.
  3. Under Parameter Selection, click the radio button at the left of Location.  Change the two remaining percentiles, if you need to.  Click OK.
  4. Back in the Define Distribution dialog, specify zero for Loc (location).

Additional keywords: LognormAlt, RiskLognormAlt

Last edited: 2015-06-19

3.32. Left Skewed or Negative Skewed Log-normal Distribution

Applies to: @RISK 5.0 and newer

I want to create a left skewed (negatively skewed) log-normal distribution function with @RISK, using three percentiles. Using the @RISK Define Distribution window with alt parameters, I put in 0 as my 5th percentile, .018 as my 70th percentile, and .021 as my 95th percentile. But then the Define Distribution window says "Unable to graph distribution", and the function returns a #VALUE error. Can you explain why @RISK won't allow this to be done?

This follows from the definition of skewness and the domains of μ and σ in a log-normal (both parameters must be positive). Every term in the skewness expression is therefore positive, so the skewness can never be negative.

You do have several workarounds, however:

  • Enter 100 minus each percentage, negate each percentile value, and negate the RiskLognormAlt, like this:

    =RiskMakeInput( -RiskLognormAlt(5%,-0.021, 30%,-0.018, 95%,0) )

    The 95th percentile of 0.021 becomes a 5th percentile of minus 0.021, and so on for the others.  The RiskMakeInput wrapper tells @RISK to collect data and statistics on the final formula, not just the RiskLognorm. See also: All Articles about RiskMakeInput.

    With this technique, the Define Distribution window will show the "backwards" log-normal, with the negative percentiles. But after a simulation, the Browse Results window will show the desired distribution with +0.021 in the 95th percentile.

  • Enter those three (x,p) pairs in your worksheet and then fit a log-normal distribution. @RISK comes up with RiskLognorm(0.013156,0.0022747, RiskShift(0.0038144) ).

  • You could also use a different distribution, such as the BetaGeneral distribution, which can take on the left skewed shape.

Additional keywords: RiskBetaGeneral, RiskLognorm

Last edited: 2015-06-19

3.33. Geometric Mean and Geometric SD in Log-normal

Applies to: @RISK, all releases

I want to use a log-normal distribution. I have the geometric mean and geometric standard deviation. How can I set up this distribution?

Use RiskLognorm2, but wrap each of the two parameters in a natural-log function. Example:

=RiskLognorm2( LN(2.1), LN(1.8) )

You can type the function directly in Excel's formula bar, or use @RISK's Insert Function button, or use Define Distribution and select Lognorm2 from the Continuous tab.  You can type the LN function right into the boxes in the Define Distribution dialog, as shown in the attached illustration.

Please note: In this application you want Lognorm2, not Lognorm. For the difference between them, please see Entering Parameters for Log-normal Distribution.

Last edited: 2015-06-19

3.34. Preventing Duplicates in Discrete Distributions

Applies to: @RISK 5.5 and newer

I have a RiskDiscrete distribution, and I want to ensure that each iteration gets a unique value from that distribution: no duplicates across iterations, in other words. How can I accomplish this?

The RiskResample distribution provides an easy solution for this requirement.  For the first argument of RiskResample, use sampling method 1 if you want @RISK to go through your list of values in a specified order, or sampling method 3 for random sampling from your list of values without replacement.
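
For example (cell range invented for illustration), with 25 candidate values in A1:A25, =RiskResample(3, A1:A25) draws one of them per iteration, without replacement, so no value repeats during a 25-iteration simulation.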

With either sampling method 1 or method 3, if your simulation has more iterations than the number of values in your list, the RiskResample function will return an error for those extra iterations.

Additional keywords: Discrete distribution, Resample distribution

Last edited: 2016-04-21

3.35. Delimiters and Discrete Distributions

Applies to: @RISK 5.x–7.x

I have a RiskPoisson(3) distribution, and I click Define Distributions, or Browse Results after a simulation. I set the delimiters to 0 and 6, and @RISK shows a probability of 91.7% between them. But Excel's POISSON.DIST(6,3,TRUE) shows a cumulative probability of 96.6%. Which one is right?

This seems strange at first, but there's an explanation. This is nothing special about the Poisson distribution; it applies to RiskBinomial, RiskDiscrete, and all the other discrete distributions.

@RISK and Excel are both right, but they're measuring different things. Excel is reporting the cumulative probability from x=minus infinity to x=6. @RISK reports the cumulative probability from x=0 to x=6. But the Poisson distribution doesn't extend to negative x, so why aren't those two the same?

It's clearer if you look at the cumulative distribution. It's a step function, and x=0 and x=6 are right on the steps. So how is @RISK to allocate the probability for x=0 and the probability for x=6? The answer is that if a delimiter is directly on a discrete x value, @RISK allocates all the probability for that x value to the region to the left of the delimiter. So the 5% probability of x=0 (left-hand delimiter) goes in the left-hand region, and the 5% probability of x=6 (right-hand delimiter) goes into the middle region. The probability shown for the middle region is thus P(L < x ≤ R), not P(L ≤ x ≤ R) as you might expect.

This convention avoids some anomalies. For example, suppose you set both delimiters to 3. If the rule were P(L ≤ x ≤ R), then the middle region, which has zero width, would have a probability of 22.4%, equal to P(x=3), and the visible probabilities would add up to only 77.6% instead of 100%.

Given that this is mathematically valid, it still looks odd at first glance. If you need to make the graph look "right" for a presentation, you can do it easily. Delimiters are rounded to two decimal places, so set them to −0.001 and 6.001. Then x=0 and x=6 will both be inside the center region, but @RISK will display 0.00 and 6.00 for the delimiters.

I clicked in an empty cell, clicked Define Distributions, and selected Poisson with λ=3. Initially the graph showed delimiters of 1 and 6, with probability 90% between them. I just clicked on the delimiters, without moving them, and the probability changed to 76.7%. Why?

By default, the Define Distribution graph sets delimiters at the 5th and 95th percentiles. (You can change this default in the Simulation Graph Defaults section of Application Settings.) To show the delimiters, @RISK finds the x values of those percentiles. If you change the percentages, @RISK finds new x values; and if you change the x values, @RISK finds new percentages. When you click on a delimiter, even if you don't actually change it, @RISK takes that as a signal that it should adjust the percentage to the x value instead of the other way around. So it recomputes the percentages based on the x values 1 and 6.

But why should the probability change? Once again, the explanation is in the cumulative graph. One percentile can only be one possible x value, but one x value can be any of a range of percentiles. Thus, computing a percentile from an x value may not give a consistent result with computing an x value from a percentile.  This is a feature of any discrete distribution, not just the Poisson.

Last edited: 2015-06-19

3.36. Cauchy Distribution

Applies to: @RISK 5.0 and newer

Does @RISK have a Cauchy distribution?

Yes, beginning with @RISK 7.5 you can specify a Cauchy distribution (also known as a Lorentz or Lorentzian distribution) in the regular Define Distributions dialog: RiskCauchy(γ,β) where γ is the location parameter and β is the scale parameter.

@RISK 7.0 and earlier did not have a Cauchy distribution among the pre-programmed list. If you can't upgrade to the current version of @RISK, you can easily create one yourself from a t distribution. According to Evans, Hastings, Peacock Statistical Distributions 3/e (Wiley, 2000), pages 49–50:

"The Cauchy variate C:a,b is related to the standard Cauchy variate C:0,1 by C:a,b ~ a+b(C:0,1). ... The standard Cauchy variate is a special case of the Student's t variate with one degree of freedom."

Therefore, to get a Cauchy distribution with location parameter (median) in cell A1 and scale parameter in A2, use this formula in @RISK 5.5 through 7.0:

=RiskMakeInput(A1 + A2*RiskStudent(1))

In @RISK 5.0, use:

=RiskMakeInput(A1 + A2*RiskStudent(1), RiskStatic(A1))

Notes:

  • The RiskMakeInput( ) wrapper tells @RISK that graphs, reports, and sensitivity analysis should show the Cauchy distribution from the formula, as opposed to the Student's t distribution.

  • The mean of a Cauchy distribution is undefined, so when a simulation isn't running you would normally see #VALUE in the cell. By using the RiskStatic property function, you tell @RISK to display the median in the cell when a simulation isn't running. Beginning with @RISK 5.5, the RiskStatic function is not necessary because @RISK will use 0 as the mean for RiskStudent(1).

  • The output graph may look like just a spike, because the default automatic scaling includes the few extreme values as well as the great mass in the center. If that happens, right-click on the x axis labels and select Axis Options to adjust the scaling.

The attached workbook illustrates the Cauchy distribution for @RISK 7.0 and earlier.

Last edited: 2016-07-12

3.37. Extreme Value Distributions: Gumbel and Fréchet

Applies to: @RISK 5.x and newer

@RISK doesn't include the type of Extreme Value distribution that I need. Is there any way I can get the other type of Extreme Value distribution out of @RISK?

The Extreme Value distribution falls into two major types: Type I is also called Gumbel, and Type II is also called Fréchet; both are offered in @RISK.

Gumbel Distribution (Type I Extreme Value)

There are two sub-types of Gumbel distribution.

The Maximum Extreme Value distribution is implemented in @RISK's RiskExtValue(α,β) function, which has been available since early versions of @RISK.

The Minimum Extreme Value distribution is implemented in @RISK 6.0 and newer as the RiskExtValueMin(α,β) function. In earlier versions of @RISK, use RiskExtValue( ), but put a minus sign in front of the function and another minus sign in front of the first argument. For example, for a Minimum Extreme Value distribution with α=1, β=2, use RiskExtValueMin(1,2) in @RISK 6.0 and newer, or –(RiskExtValue(–1,2)) in @RISK 5.7 and earlier.

Fréchet Distribution (Type II Extreme Value)

The Fréchet distribution is defined in @RISK 7.5 and newer.

If you have an older @RISK and can't upgrade to the latest, you can use the technique in Add Your Own Distribution to @RISK to create one. You'll need the CDF, which is exp[−z^(−α)], where z = (x−γ)/β; γ is the location parameter, β is the scale parameter, and α is the shape parameter.
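
Inverting that CDF gives x = γ + β·(−ln F)^(−1/α). So, as a sketch of the technique from that article (with γ, β, and α standing for numbers or cell references), the formula would look like

=RiskMakeInput( γ + β*(-LN(RiskUniform(0,1)))^(-1/α), RiskName("Frechet") )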

Additional keywords: ExtValue distribution, ExtValueMin distribution

Last edited: 2016-07-12

3.38. F Distribution

Applies to: @RISK 5.0 and newer

Does @RISK have an F distribution?

With @RISK 6.0 and newer:

Select Define Distributions » Continuous » F, or Insert Function » Continuous » RiskF.

With @RISK 5.x:

Before release 6.0, @RISK did not have an F distribution (Fisher-Snedecor distribution, variance ratio distribution) among the pre-programmed list. If you still have @RISK 5.x, you can easily create one yourself with a ratio of chi-squared distributions. According to Evans, Hastings, Peacock Statistical Distributions 3/e (Wiley, 2000), page 92:

"The variate F:ν,ω is related to the independent Chi-squared variates χ²:ν and χ²:ω by

F:ν,ω ~ [(χ²:ν)/ν] / [(χ²:ω)/ω]"

Therefore, to get a distribution of F(A1,A2), you can program

=RiskMakeInput( (RiskChiSq(A1)/A1) / (RiskChiSq(A2)/A2) )

The attached workbook shows this, and requires @RISK 5.0 or later. (In earlier versions of @RISK, you can still do the calculation, but the RiskMakeInput wrapper isn't available. RiskMakeInput, which lets you treat a calculation as a distribution for most purposes, was new in @RISK 5.0.)

Last edited: 2015-06-19

3.39. Generalized Pareto Distribution

Does @RISK handle a generalized Pareto distribution?

Yes, with the restriction that the shape parameter must be positive.

The generalized Pareto distribution takes three parameters: location μ (mu), scale σ (sigma), and shape k.  The RiskPareto2 distribution takes three parameters: scale b, shape q, and optionally a location shift in the RiskShift( ) property function.

Conversion between the parameters:

  • scale: b = σ/k or σ = b/q
  • shape: q = 1/k or k = 1/q
  • location: μ = RiskShift value

Conversion between the functions:

  • GPD(μ, σ, k) is equivalent to RiskPareto2(σ/k, 1/k, RiskShift(μ))
  • RiskPareto2(b, q, RiskShift(μ)) is equivalent to GPD(μ, b/q, 1/q)
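
For example, GPD(0, 2, 0.5) is equivalent to =RiskPareto2(4, 2), since b = 2/0.5 = 4 and q = 1/0.5 = 2; no RiskShift is needed because μ = 0.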

Last edited: 2017-05-02

3.40. Four-Parameter Pert Distribution

Applies to: @RISK 5.x–7.x

Does @RISK have a four-parameter Pert distribution, with a shape parameter?

The RiskPert distribution has three parameters: min, mode (most likely), and max. Some authorities, such as Wolfram, mention a four-parameter Pert distribution, the fourth parameter λ being the shape, and you may see references to a "Beta-Pert" on some Web sites. Implicitly, with RiskPert the value of λ is 4.

The Pert distribution is closely related to the Beta distribution, and in fact RiskPert is a special case of RiskBetaGeneral.

@RISK doesn't let you enter a shape parameter directly, but you get the equivalent of a four-parameter Pert distribution with a RiskBetaSubj(min, mode, μ, max) function, where

μ = (min + max + λ·mode) / (λ + 2).

To modify a regular three-parameter RiskPert(min, mode, max) by adding a shape parameter λ, change it to

RiskBetaSubj(min, mode, (min+max+λ*mode)/(λ+2), max).

If you have the min, mode, max, and λ in cells B1 through B4, then you can put the formula

=(B1 + B3 + B4*B2)/(B4 + 2)

in cell B5, and use =RiskBetaSubj(B1,B2,B5,B3). A simple example is attached.
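
As a quick check, setting λ = 4 should reproduce the ordinary Pert, and it does: for RiskPert(0,5,10), μ = (0 + 10 + 4·5)/6 = 5, and RiskBetaSubj(0,5,5,10) is the same distribution.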

Last edited: 2016-08-29

3.41. Sensitivity Simulation with RiskSimtable for Specific Values

Applies to: @RISK 5.x–7.x

As you know, @RISK sensitivity analysis lets you see the impact of uncertain model parameters on your results. But what if some of the uncertain model parameters are under your control? In that case the value a variable takes is not random, but is set by you. For example, you might need to choose among possible prices you could charge, possible raw materials you could use, or a set of possible bids or bets. To analyze your model properly, you need to run a simulation at each possible value of the "user-controlled" variables and compare the results. A Sensitivity Simulation in @RISK lets you do this quickly and easily, offering a powerful analysis technique for selecting among available alternatives.

In @RISK, any number of simulations can be included in a single Sensitivity Simulation. The RiskSimtable( ) function is used to enter lists of values, which will be used in the individual simulations, into your worksheet cells and formulas. @RISK will automatically process and display the results from each of the individual simulations together, allowing easy comparison.

To run a Sensitivity Simulation:

  1. Enter the lists of values you want used in each of the individual simulations into your cells and formulas using RiskSimtable( ). For example, possible price levels might be entered into Cell B2, like this:

    =RiskSimtable({100,200,300,400})

    This will cause simulation #1 to use a value of 100 for price, simulation #2 to use a value of 200, simulation #3 to use a value of 300 and simulation #4 to use a value of 400. (If you have too many values to place comfortably in the formula, see Cell References in Distributions.)

  2. Set the number of simulations in the Simulation Settings dialog box (in this example, 4 simulations) and run the Sensitivity Simulation using the Start Simulation command.

Each simulation executes the same number of iterations and collects data from the same specified output ranges. Each simulation, however, uses a different value from the RiskSimtable( ) functions in your worksheet.

@RISK processes Sensitivity Simulation data just as it processes data from a single simulation. Each output cell for which data was collected has a distribution for each simulation. Using the functions of @RISK, you can compare the results of the different alternatives or scenarios described by each individual simulation. The Distribution Summary graph summarizes how the results for an output range change. There is a different summary graph for each output range in each simulation, and these graphs can be compared to show the differences between individual simulations. In addition, the Simulation Summary report is useful for comparing results across multiple simulations.

The values entered in the RiskSimtable function can be distribution functions, so you can also use Sensitivity Simulation to see how different distribution functions affect your results. For example, you may wish to see how your results change if you alternately try RiskTriang( ), RiskPert( ), or RiskNormal( ) as the distribution type in a given cell. For more, see RiskSimtable with Distributions as Arguments.

Caution:
It is important to distinguish between controlled changes by simulation (which are modeled with the RiskSimtable( ) function), and random variation within a single simulation (which is modeled with distribution functions). RiskSimtable( ) should not be substituted for RiskDiscrete( ) when evaluating different possible random discrete events. Most modeling situations are a combination of random, uncertain variables and uncertain but "controllable" variables. Typically, the controllable variables will eventually be set to a specific value by the user, based on the comparison conducted with a Sensitivity Simulation.

Caution:
Each simulation executed when the number of simulations is greater than one in the Simulation Settings uses the same random number generator seed value. This isolates the differences between simulations to only the changes in the values returned by RiskSimtable( ) functions. If you wish to override this setting, select Multiple Simulations Use Different Seed Values in the Random Number Generator section of the Sampling tab prior to running multiple simulations.

Additional keywords: Simtable, Sensitivity analysis

Last edited: 2015-06-19

3.42. RiskSimtable with Distributions as Arguments

Applies to: @RISK, all releases

The @RISK manual says that RiskSimtable( ) can take distributions as arguments, but I can't get the syntax right. How should I code my RiskSimtable( ) function?

RiskSimtable( ) actually has one argument. It's an array, either a list of values in curly braces like {14,33,68,99} or an Excel range reference without curly braces like C88:C91. To use distribution functions as arguments to RiskSimtable( ), put them in a range of cells in Excel and then specify the range as the argument to RiskSimtable( ).

Please download the attached example, KB55_SimtableArguments.xlsx. It shows two methods to use RiskSimtable( ) to modify distributions from one simulation to the next. In each case, cell references are the key to making the behavior vary.  In the example, Simulation Settings » Sampling specifies that multiple simulations all use the same seed.  Thus, any differences between simulations are completely due to the different distributions chosen.

Method 1: There is only one distribution function, RiskBinomial( ), in this example. Its second argument, p, is a cell reference to a RiskSimtable( ) function that lists the value of p for each simulation. Other formulas would use the value of the distribution function, not the RiskSimtable( ) function.

Method 2: Each simulation uses a different distribution function. Those functions are defined in an array of cells, and the RiskSimtable( ) function has that array reference as its argument. Other formulas would use the value of the RiskSimtable( ) function, not the individual distribution functions.

There's one potential problem with that second method. Since the RiskSimtable( ) function refers to the three cells containing the three distribution functions, all three of them are precedents of the RiskSimtable. If one of your @RISK outputs refers to that RiskSimtable(), directly or indirectly, all three of the distributions will show as precedents of the output. Logically, in each of the three simulations, a different one of the functions is  a precedent of your output. But since the RiskSimtable( ) function argument refers to all three, all three show up as precedents in each of the three simulations.

The solution is to wrap the RiskSimtable( ) inside a RiskMakeInput( ), as was done in the last block in the example. Then @RISK will not consider the precedents of the RiskSimtable( ) as precedents of the output, and the tornado diagram for the output will show just one bar in each of the three simulations, which makes sense logically. See also: All Articles about RiskMakeInput.

Additional keywords: Simtable

Last edited: 2015-10-06

3.43. Multiple Simtables — Need All Combinations of Values

Applies to:  @RISK, all releases

I'm using several RiskSimtable functions because I want to vary multiple variables. Variable A has two values and variable B has five values. How do I set up the 2×5 = 10 simulations to use all combinations of the variables?

When you have multiple RiskSimtable functions, the first simulation uses the first value of every RiskSimtable, the second simulation uses the second value of every RiskSimtable, and so on. If the number of simulations is greater than the number of values, that RiskSimtable will return error values for the extra simulations.

This means that if you want 2×5 = 10 combinations, each RiskSimtable needs 10 values. There are two ways to accomplish this: list all ten combinations of values, and have your RiskSimtable functions access each list, or select values of the variables yourself based on the current simulation number without using RiskSimtable.  The first method is easier, especially if you have a small number of variables and they have a small number of values. The second method is more flexible and can be extended easily, but it's more complicated. The attached workbook shows both methods, using the same three variables for each.
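
As a sketch of the second method (the cell layout is invented, and this assumes your @RISK version provides the RiskCurrentSim( ) function, which returns the current simulation number): with variable A's two values in A1:A2 and variable B's five values in B1:B5, the formulas

=INDEX($A$1:$A$2, MOD(RiskCurrentSim()-1,2)+1)
=INDEX($B$1:$B$5, INT((RiskCurrentSim()-1)/2)+1)

step through all 2×5 = 10 combinations over simulations 1 through 10.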

Additional keywords:  Simtable

Last edited: 2014-10-17

3.44. Selecting Exactly Two Items (Two Numbers Guaranteed Different)

Applies to:
@RISK, all versions

Question:
I have N items, and every iteration I need to select exactly two of them. Let's say N = 25, for example. I can't just use two RiskIntUniform(1,25) because they might come up with the same number in a given iteration. I need two unique items in every iteration. (I'm not worried about repetitions between iterations, just that the two numbers I get in any particular iteration are always different from each other.)

Response:
The attached workbook shows two methods to accomplish this. Each method varies the two selections independently but guarantees that they'll never be equal in any one iteration. You can tap F9 repeatedly to see how each method selects two numbers, and each time the two are different. If you run a simulation, it will count the number of occurrences where the two numbers are the same; that is zero because they are always different, as desired.

In Method A, you use RiskIntUniform(1,25) to select the first one. Therefore, 24 items have not been selected as the first item, so you use RiskIntUniform(1,24) to help you find the second one. Specifically, to find the second one you add the two RiskIntUniform functions together and then, if the total is greater than N, you subtract N.  For example, suppose that on one iteration you get 16 from RiskIntUniform(1,25) and 19 from RiskIntUniform(1,24). The total, 16+19 = 35, is greater than 25, so your second selection is number 10 (= 35−25).

Method A is fairly straightforward, but it has a small problem: the 25 numbers are not quite equally likely to occur over the course of the whole simulation. (See the 'Method A Results' worksheet in the attached workbook.) Why does this happen? Adding two independent distributions, as Method A does, tends to lose some of the advantage you normally get from the stratified sampling method of Latin Hypercube.

Method B overcomes this problem, but at the cost of some complexity. Start with the number of ways to draw two numbers from N without replacement: that is N(N–1)/2.  For N = 25, there are 300 possibilities, which you can think of as numbered from 1 to 300. Therefore, Method B uses a RiskIntUniform(1,300). In each iteration, the integer value is "decoded" to a pair of unique integers 1–25. (If you look at the formulas, you'll see some pretty involved algebra.)

Now the bumpiness of the results of Method A is gone. On the 'Method B Results' sheet, you can see that all 25 numbers come up exactly the same number of times.

Looking at the formulas on the 'TWO METHODS' sheet, you might be suspicious of the formulas for first and second selection with Method B. Maybe they work for 25 items but not for other numbers of items? That's the purpose of the last sheet, 'Method B Verify'. It shows that, for any number of items from 3 to 100, the RiskIntUniform of Method B does cover all possible draws of two different numbers.

Last edited: 2014-05-30

3.45. Add Your Own Distribution to @RISK

Applies to: @RISK 5.0 and newer

I need a particular distribution that isn't in the Define Distributions dialog. Can I just give @RISK a formula for the CDF and have @RISK draw the random numbers?

If you have a formula for the inverse CDF, you can use it with @RISK to create your own distribution. The input to that formula is a RiskUniform(0,1), which provides a randomly selected cumulative probability; then your inverse CDF formula converts that to an x value. By enclosing the formula in RiskMakeInput( ), you tell @RISK to treat the formula as a regular distribution for purposes like sensitivity analysis and graphing.

We'll illustrate this with the Burr distribution. (Starting with release 7.5, the Burr12 distribution is built into @RISK, but you would use the same method if you need to create a distribution that's not in @RISK.) Wikipedia gives the CDF of a Burr Type XII as

F(x; c,k) = 1 − (1 + x^c)^(−k)

where c and k are positive real numbers. A little algebra gives the inverse as

x = [ (1−F)^(−1/k) − 1 ]^(1/c)

To draw random numbers for its standard distributions, @RISK first draws a random number from the uniform distribution 0 to 1, which represents a cumulative probability; then it finds the x value corresponding to that percentile. In other words, it uses that cumulative probability as input to an inverse CDF. (@RISK uses special techniques for distributions that don't have a closed form for their inverse CDFs.) Therefore, your Excel formula for a Burr distribution is the combination of RiskUniform and the inverse CDF above:

=( (1-RiskUniform(0,1))^(-1/k) - 1 )^(1/c)

Finally, you want to wrap that in a RiskMakeInput, so that @RISK will store iteration values of this formula, let you make graphs, treat it as an input in sensitivity analyses, and so on. Your final Excel formula is:

=RiskMakeInput( ( (1-RiskUniform(0,1))^(-1/k) - 1 )^(1/c), RiskName("Burr"))

You'll replace the parameters c and k with numbers, or more likely with cell references.

To see the formula in action, open the attached workbook in @RISK. The four graphs were made with the four combinations of c and k shown in the worksheet; you can compare these to the PDF curves shown in the Wikipedia article. You can also enter your desired values of c and k in any of columns A through D, and run a simulation.

See also: All Articles about RiskMakeInput

Last edited: 2016-07-12

3.46. Custom Distribution Using RiskCumul

Note: This article illustrates solutions to very specific problems, but you can modify them to create many different custom distributions.

Example 1:
I need a distribution where there's a 75% chance of a value between 0 and 8 and a 25% chance of a value between minus 12 and minus 7.  A competing product does this as a "custom distribution". Can I do it in @RISK?

Response:
Yes, the RiskCumul function can represent this distribution for you.  In RiskCumul, you specify an array of points and a second array of cumulative probabilities at those points.

Here is the function: 
            =RiskCumul(minimum, maximum, array of x, array of cum-p)
and specifically for your distribution:
            =RiskCumul(-12, 8, {-7,0}, {0.25,0.25})
Try pasting this formula into an Excel cell and then clicking Define Distribution to see the histogram.

Here's how to read the arguments:

x      cum-p   explanation
−12    0       minimum value of distribution is −12
−7     0.25    25% probability between −12 and −7
0      0.25    0% probability between −7 and 0
8      1       maximum value of distribution is 8

The first two arguments to RiskCumul are the lowest and highest possible values in your distribution. You specified minus 12 and plus 8 in your problem statement.

The array of x's and the array of cum-p's are enclosed in curly braces { }.  (Alternatively, you could put the numbers in cells of your Excel sheet, and then reference the array in the form D1:D4 without braces.)

The 0.25 cumulative probability for x=0 might seem a bit strange. The explanation is that you specified zero probability between minus 7 and 0.  If the probability in that region is zero, then the cumulative probability at every point in the region is the same as the cumulative probability at the left edge, namely 0.25 (25%).

The cumulative probability of 1 is not specified anywhere in the RiskCumul function, because it's implicit in the listing of 8 as the maximum for the distribution.
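
To see what RiskCumul is doing with these arguments, the sketch below (our illustration of the general idea, not Palisade's implementation) samples from the same cumulative curve by treating the CDF as piecewise linear through the specified points:

    import random
    from bisect import bisect_right

    def cumul_sample(x_min, x_max, xs, ps):
        # CDF is piecewise linear through (x_min, 0), the (x, cum-p) points,
        # and (x_max, 1); invert it at a random cumulative probability.
        X = [x_min] + list(xs) + [x_max]
        P = [0.0] + list(ps) + [1.0]
        u = random.random()
        i = bisect_right(P, u) - 1      # CDF segment containing u
        return X[i] + (u - P[i]) / (P[i + 1] - P[i]) * (X[i + 1] - X[i])

    # Example 1: about 25% of draws land in [-12, -7]; none land in (-7, 0).
    draws = [cumul_sample(-12, 8, [-7, 0], [0.25, 0.25]) for _ in range(100_000)]
    print(sum(d <= -7 for d in draws) / len(draws))   # close to 0.25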

Example 2:
I need to set up a probability distribution as follows:

  • 75% of the probability occurs between 0 and 5
  • 25% of the probability occurs between −15 and −5

Response:
Here's how to analyze it:

  • The lowest possible value is −15 and the highest possible value is 5.
  • The first 25% of cumulative probability occurs between the minimum and −5.
  • Values between −5 and 0 are impossible (zero probability), so the cumulative probability remains at 25% at x=0.
  • The rest of the probability, 75%, occurs between x=0 and the maximum.

Paste this formula into a cell:
        =RiskCumul(-15, 5, {-5,0}, {0.25,0.25})
and press the Enter key.

To see the distribution, click into the cell and click Define Distribution.

RiskCumul takes four arguments: the minimum x, the maximum x, an array of intermediate x's, and an array of the cumulative probabilities for those x's. (Arrays are enclosed in { } curly braces.)  The 75% probability for the region 0 to 5 doesn't appear explicitly — it's implied by the fact that cumulative probability is 0.25 at x=0 and is 1.00 at x=5.

These particular examples show three regions (divided by two x's) between the minimum and maximum, but you could have any number of regions.

Additional keywords: Cumul distribution

last edited: 2013-04-11

3.47. How does RiskSplice Work? (Technical Example)

Applies to: @RISK 5.5.0 and newer

The help text for RiskSplice( ) says

The two pieces of the distribution will be re-weighted since the total area under the (spliced) curve still has to equal 1. Thus the probability density of any given x value in the resulting spliced distribution will probably be different from what it was in the original distribution.

How exactly does RiskSplice( ) work? How are the density functions adjusted to make the new distribution?

Please see the attached Word document for the mathematical details and a complete example.
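
The attached document has the full derivation; as a rough sketch of the re-weighting idea (our reconstruction of the principle stated in the help text, not @RISK's actual code), the unnormalized splice has area F1(s) + [1 − F2(s)], so both pieces are divided by that area:

    from scipy import stats

    def spliced_pdf(x, d1, d2, s):
        # Density of d1 below the splice point s and of d2 above it,
        # re-weighted so the total area under the spliced curve is 1.
        area = d1.cdf(s) + (1 - d2.cdf(s))
        piece = d1.pdf(x) if x < s else d2.pdf(x)
        return piece / area

    # e.g. a Normal(0,1) spliced at 0.5 with an Exponential(scale=2)
    f = spliced_pdf(1.0, stats.norm(0, 1), stats.expon(scale=2), 0.5)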

Additional keywords: Splice

Last edited: 2015-04-23

3.48. Nesting RiskSplice Distributions

Applies to:
@RISK 5.5.0 and newer

Can I splice together more than two distributions with RiskSplice? I understand how to use RiskSplice( ) to combine two distributions into one, but is there a way to combine more?

It is possible to splice together three or more distributions by nesting RiskSplice functions. For example, if you had distributions in cells A1, B1, C1, and D1, you could do:

=RiskSplice(A1,B1,X) in cell A3, =RiskSplice(C1,D1,X) in cell B3, and then splice those in another cell, which would be =RiskSplice(A3,B3,X).

You can also nest the distributions without using cell references if you prefer:

=RiskSplice(RiskNormal(30,1), RiskSplice(RiskWeibull(2,10), RiskGamma(2,10), 10), 40)

However, while splicing together more than two distributions is possible, the Define Distribution window is unable to graph the nested function. It does simulate normally, and you can see the results in the Browse Results window.

If you want to graph a distribution with an unusual shape, you might be better off with the RiskGeneral or RiskCumul distributions, which let you define the dataset manually, or the Artist feature under Distribution Fitting, which creates a RiskGeneral distribution for you.

Last edited: 2018-11-02

3.49. Bimodal or Mixed Distribution

Applies to: @RISK 5.x–7.x

How can I create a bimodal distribution in @RISK? I want a distribution that is a mix of one distribution some percentage of the time, and a different distribution the rest of the time.

Although it's possible to do the whole thing in one cell, it's clearer if you use several "helper cells". That will also make it easier to find what is wrong if the final distribution doesn't behave as you expected.

Please refer to the attached example in conjunction with these steps:

  1. Place the two desired distributions in two cells (B15:B16).

  2. In a third cell (C16), place the proportion of the final mix that should come from the first distribution. The rest will come from the second distribution.

  3. In a fourth cell (B18), place a RiskBernoulli( ) to determine which distribution gets used in that iteration. (RiskBernoulli returns 1 the stated percentage of the time, and 0 the rest of the time.)

  4. Finally (B20), construct an IF(fourth cell, first cell, second cell).  That is the final mixed distribution. (A code sketch of this logic appears after this list.)

  5. Recommendation: Wrap the final distribution in a RiskMakeInput( ). That way, any sensitivity analysis that you do will treat the final distribution as an input and will not go back to the original inputs or the RiskBernoulli( ). See also: All Articles about RiskMakeInput.
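
The sketch below shows the same helper-cell logic in Python (our illustration, with arbitrary example distributions), in case you want to test the mixing behavior outside Excel:

    import random

    def mixed_sample(p_first, draw_first, draw_second):
        # The IF(Bernoulli, first, second) pattern: with probability p_first
        # take a value from the first distribution, otherwise the second.
        use_first = random.random() < p_first      # the RiskBernoulli(p) step
        return draw_first() if use_first else draw_second()

    # e.g. a 70% / 30% mix of Normal(10,1) and Normal(25,2) -- clearly bimodal
    values = [mixed_sample(0.7,
                           lambda: random.gauss(10, 1),
                           lambda: random.gauss(25, 2)) for _ in range(10_000)]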

I have a similar requirement, but instead of choosing a distribution probabilistically, I need to use one distribution below a certain x value and a different distribution above that value.

The RiskSplice( ) function is designed for this application. In the @RISK ribbon, click Insert Function and find RiskSplice( ) among the special distributions.

Last edited: 2015-06-19

3.50. Password Protecting a Worksheet or Workbook

Applies to:
@RISK 6.x/7.x

I want to password-protect one sheet in a workbook. How can I do that and still run a simulation?

@RISK can store and remember the password after you provide it once. See "Protected Workbook: This operation cannot be performed ...".

I want to protect the whole workbook, and the above technique doesn't work. Is there another option?

If you protect the workbook by File » Save As » Tools (next to the Save button) » General Options, you can write a VBA function to provide @RISK with the password. See "GetPassword cannot be found." This technique will not work if you protect the workbook with the Protect Workbook command on Excel's Review tab.

Last edited: 2018-03-09

3.51. Cell References with RiskCompound

Applies to: @RISK 5.x–8.x

When creating a RiskCompound function, I notice that the results are the same if the entire distribution is defined within the function or if one of the component distributions is defined in another cell that the RiskCompound function references. However, if the function references another cell which references a third cell, the results are different.

The RiskCompound function is not designed to work the same with references to references as it does with a single cell reference. Please see the attached worksheet for a complete explanation.

See also: All Articles about RiskCompound

Last edited: 2021-11-18

4. @RISK Distribution Fitting

4.1. Capacity of Distribution Fitting

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

How many points can be used to fit a distribution?  How many variables (columns) can be included in a batch fit?

@RISK requires at least five points, and it allows up to 10 million points in a fit. 

Batch fits can include up to 256 variables. If you are fitting more than a few variables in a batch fit, for faster performance you may want to tell @RISK not to produce detailed reports.  In the Batch Fit dialog, on the Report tab, turn off the option "Include Detailed Report Worksheet for Each Fit".

Time Series batch fits are limited to 255 variables.

Last edited: 2015-06-19

4.2. Bootstrapping for Distribution Fitting

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I am a user of @RISK, and I wonder if it might be used for a nonparametric bootstrap method for analyzing a data set.

Beginning with release 6.0, @RISK offers parametric bootstrapping. Compared to nonparametric bootstrapping, parametric bootstrapping requires less resampling and is more robust with smaller data sets. You can get parameter confidence intervals as well as goodness-of-fit statistics.

Because it is computationally intensive, parametric bootstrapping is turned off by default in @RISK. You can select it on the Bootstrapping tab of the dialog for fitting distributions. Please see "Appendix A: Distribution Fitting" in the @RISK user manual or help file. There's also a nice picture in 15.3 Bootstrapping from Penn State's Eberly College of Science.

See also: N/A in Results from Parametric Bootstrapping

Last edited: 2018-11-09

4.3. Discrepancy from Fits Performed by Other Software

Applies to: @RISK 5.x and newer, Professional and Industrial Editions

When I fit my data in @RISK, I get a very different result from the ________ software. Maybe @RISK fails to converge at all, or maybe it converges on a fit but the parameters are very different. Is there some setting I need to change?

Probably there is. Specifically, if the process that generated the data has a natural lower bound, you should specify that lower bound on the Distributions to Fit tab of the fitting dialog.

Why is this necessary? Many software packages assume a lower bound of zero for distributions that don't have a left-hand tail. Other packages, including @RISK, take a more general approach and make the lower bound subject to fitting also, as a shift factor. This allows, for instance, a distribution shaped like a log-normal but offset to left or right, if that matches the data best. But sometimes that is actually too much freedom, and @RISK fails to converge on a fit. (In general, "convergence failed" means that the numerical process of homing in on an answer for the MLE got stuck in a loop and couldn't finish.)

When the data have a natural lower bound, and you specify that lower bound to @RISK, it can do a better job of fitting more efficiently. Specifying the lower bound may even make the difference between "convergence failed" and a successful fit, as for example in some Weibull distributions with shape parameter less than 1.

On the Distributions to Fit tab of the fitting dialog, "bounded but unknown" restricts the fit to distributions that don't have left-hand tails, but it doesn't affect the fitting algorithm for those distributions. But when you specify a specific lower bound, then @RISK uses that as a fixed shift factor, and the mathematics of doing the fit are simplified.
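
You can see the effect of fixing the bound with any fitting library. The sketch below uses SciPy, not @RISK's proprietary fitter: leaving the location free fits a shift factor, while floc pins the lower bound, mirroring the choice on the Distributions to Fit tab.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = stats.weibull_min.rvs(c=0.8, scale=2.0, size=500, random_state=rng)

    # Shift factor free: location is estimated along with shape and scale,
    # which is numerically harder for shape parameters below 1.
    shape1, loc1, scale1 = stats.weibull_min.fit(data)

    # Natural lower bound known to be zero: fix it with floc, so only the
    # shape and scale are fitted -- a simpler, more stable problem.
    shape2, loc2, scale2 = stats.weibull_min.fit(data, floc=0)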

Last edited: 2015-06-01

4.4. "Distributions to Fit" Dialog Doesn't Allow Every Distribution

Applies to: @RISK 5.x–7.x

Why won't @RISK allow me to specify that I want to fit a discrete distribution (e.g. a Binomial, Geometric, HyperGeo, IntUniform, NegBin, Poisson) in the "Distributions to Fit" dialog?

@RISK lets you choose distributions that are appropriate for the type of data you specify.  On the "Data" tab, change the data type to a discrete type, and then the discrete distributions will be available to you on the "Distributions to Fit" tab.

Last edited: 2015-06-19

4.5. Discrete Density Data Treated as Continuous

Applies to: @RISK 5.x–7.x

My data set is as follows:

x      p
0      0.14
50     0.35
100    0.30
200    0.15
500    0.06

I calculate the mean in Excel by summing the product of each data value multiplied by its probability, and I get 107.5. But if I do a fit on this data, the Input column in the Fit tab shows the mean as 183.74. Have I used a correct method to calculate the mean for density data?  If not, what is the correct way to do this?

Probability works quite differently in discrete and continuous distributions. A continuous distribution contains infinitely many points (not just 0 and 1, for instance, but every value in between), so the probability of hitting any single value is infinitesimally small. That is why we always look at the probability that something will be within a certain range, not the probability that it will equal a single value. In a discrete distribution, the probability of each possible outcome is nonzero; a coin toss has only two possible values, not infinitely many, so we can talk about the probability of a single value. When you do a fit, one thing you tell @RISK is the data type, so that it can apply the proper rules for probability.

The way you have the fit set up, you are specifying 5 points on a continuous density curve, which is not the same as specifying the probability at those points. Since it doesn't have any more information, @RISK assumes a linear change in density between each of these points (it connects the dots with straight lines). In effect, it treats the data as describing a RiskGeneral distribution.

When you manually calculated the mean, however, you assumed a discrete distribution: it only has values 0, 50, 100, 200, 500, and nothing else. If this is what you intended, then you want to select a data type of "Discrete Sample Data (Counted Format)" in the fitting dialog, on the first tab. As the name "counted format" suggests, the second column must be whole numbers, so you need to multiply all your probabilities by the same number. In this case, since the probabilities are all two decimal places, multiply them all by 100 to get whole numbers in the same proportion.
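
As a quick check of the two interpretations (a sketch using the table above):

    xs = [0, 50, 100, 200, 500]
    ps = [0.14, 0.35, 0.30, 0.15, 0.06]

    # Discrete mean: sum of each value times its probability, as you computed.
    mean_discrete = sum(x * p for x, p in zip(xs, ps))   # 107.5

    # "Counted format" for the fitting dialog: scale the probabilities to
    # whole numbers in the same proportion (here, multiply by 100).
    counts = [round(p * 100) for p in ps]                # [14, 35, 30, 15, 6]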

Last edited: 2015-06-19

4.6. RMS Error Calculation in Distribution Fitting with (x,p) Pairs

Applies to: @RISK 5.x–7.x

How does @RISK calculate the RMS error that it uses for fit ranking?

For curve data—x values with associated probability densities or cumulative probabilities—@RISK computes the root-mean-square error as a measure of goodness of fit. The equation is in the help file, but it can be hard to relate that to the computations for your particular data set.

The attached example shows how @RISK computes the RMS error for (x,p) data, where p is the cumulative probability or area under the curve for all values less than or equal to that x value.
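
In its generic form, and ignoring any weighting @RISK may apply (the exact equation is in the help file), an RMS error over (x,p) pairs looks like this sketch:

    import math
    from statistics import NormalDist

    def rms_error(xs, ps, fitted_cdf):
        # Root-mean-square difference between the data's cumulative
        # probabilities and the fitted CDF evaluated at the same x values.
        sq = [(fitted_cdf(x) - p) ** 2 for x, p in zip(xs, ps)]
        return math.sqrt(sum(sq) / len(sq))

    # e.g. comparing four (x, cum-p) points against a Normal(5, 2) fit
    err = rms_error([1, 3, 5, 7], [0.02, 0.16, 0.50, 0.84], NormalDist(5, 2).cdf)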

See also: RMS Error Calculation in Distribution Fitting with (x,y) Pairs.

Last edited: 2015-06-19

4.7. RMS Error Calculation in Distribution Fitting with (x,y) Pairs

Applies to: @RISK 5.x–7.x

How does @RISK calculate the RMS error that it uses for fit ranking?

For curve data—x values with associated probability densities or cumulative probabilities—@RISK computes the root-mean-square error as a measure of goodness of fit. The equation is in the help file, but it can be hard to relate that to the computations for your particular data set.

The attached example shows how @RISK computes the RMS error for normalized or unnormalized (x,y) data, where y is the height of the probability density curve or relative frequency curve. Although the RMS calculation is the same in @RISK 5.x and 6.x, the example requires @RISK 6.0 or higher because it uses the new RiskFit functions that were introduced in @RISK 6.0.

See also: RMS Error Calculation in Distribution Fitting with (x,p) Pairs.

Last edited: 2015-06-19

4.8. P-Values and Distribution Fitting

Applies to: @RISK 5.x–7.x

How do I get p-values, critical values, and confidence intervals of parameters of fitted distributions?

In the Fit Distributions to Data dialog, on the Bootstrap tab, tick the box labeled "Run Parametric Bootstrap". You can also specify the number of resamples, and your required confidence level for the parameters. Bootstrapping will take extra time in the fitting process, particularly if you have a large data set.

Click the Fit button as usual. You'll see a pop-up window tracking the progress of the bootstrap.

When the fit has finished, you can click the Statistical Summary icon (last of the small icons at the bottom) to see an exhaustive chart. Or you can select one distribution in the list at the left and click the Bootstrap Analysis icon (second from right) to see just fit statistics and p-values, or just parameter confidence intervals, for that one distribution. If the information is not available because the bootstrapping failed, you will see a box "Unable to refit one or more bootstrap resamples."

Why doesn't @RISK give p-values for the Kolmogorov-Smirnov and Anderson-Darling tests for most fits?  Why do the ones that @RISK does give disagree with other software packages?

Basically, the p-values require knowledge of the sampling distribution of the K-S or A-D statistic.  In general this sampling distribution is not known exactly, though there are some very particular circumstances where it is.

While we don't know the exact methodology that other packages use, it is true that there are a number of ways to deal with this problem.  The method @RISK takes is very cautious.  If we cannot report the p-value, either we report a possible range of values it could be (if we can determine that) or we don't return a value at all.  Some people will choose the "no-parameters-estimated case", which can be determined in many cases, but which returns an ultra-conservative answer.  A good reference for how @RISK handles this can be found in the book Goodness-of-Fit Techniques by D'Agostino and Stephens.

Do you have any cautions for my use of p-values?

Sometimes too much stress is laid on p-values in distribution fitting.  It's really not valid to select a p-value as a "bright-line test" and say that any fit with a higher p-value is good and any fit with a lower p-value is bad.  There is no substitute for looking at the fitted distribution overlaid on the data.

We recommend against using the p-values for your primary determination of which distribution is the best one for your data set. For some guidance, see "Fit Statistics" in the @RISK help file or in Appendix A of the user manual, and Interpreting AIC Statistics in this Knowledge Base.

Last edited: 2017-06-29

4.9. Interpreting Anderson-Darling Test Statistics

Questions:
What does it mean for the inverse Gauss distribution to have an A-D test value of 1.67895 and the Loglogistic distribution to have an A-D test value of 6.78744? Does the A-D test have a unique distribution, meaning that it is not a conventional F Test or χ² (chi-squared) test? Is an A-D test value of 1.68 approximately four times better than an A-D test value of 6.79? How can the test values be compared?

Response:
The A-D test value is simply the average squared difference between the empirical cumulative function and the fitted cumulative function, with a special weighting designed to accentuate the tails of the distribution. There are many good references for this, including Simulation Modeling and Analysis by Law & Kelton. What this means is that in an absolute sense A-D values can be compared from one distribution to another. An A-D test value of 6.78744 versus one of 1.67895 implies that the average squared distance between the empirical and fitted cumulative functions (including the effects of the preferential weighting of the tails) is four times as big in one case versus another.

A potential drawback for the A-D test is that it does not have a convenient, unique test distribution, like the χ² test does. Actually, to be fair to the A-D test, even the χ² statistic only approximately follows the χ² distribution in the case where fit parameters have been estimated (see Law & Kelton). Because the A-D test doesn't have a usable test distribution, we can't calculate p-values and critical values for the test, except in special distributions under special conditions, and even in those cases only approximately. There is a very brief discussion of this in Law & Kelton as well, but most of @RISK's treatment of this is taken from the very specialized book Goodness-of-Fit Techniques by D'Agostino & Stephens.

last edited: 2012-08-04

4.10. Number of Bins in Distribution Fitting

Applies to: @RISK 5.x, Professional and Industrial Editions
(The fitting methods were changed beginning with @RISK 6.0.)

How does your software automatically determine the number of chi-squared bins to use when fitting distributions against sample data? What degrees of freedom does it use?  Is this the same method used for the "Auto" option when specifying the number of bins in a histogram?

χ² (chi-squared) binning and histogram binning are very different, and the number and position of bars on a histogram chart is almost never the same as the arrangement of the χ² bins. For starters, χ² bins are equally probable and therefore are typically not all the same width, while (at least for all Palisade products) histogram graph bars always have equal width.

For histogram binning, see Number of Bins in a Histogram.

For χ² (chi-squared) binning with n data points:

  • If n < 35, bins = nearest integer to [n/5]
  • If n >= 35, bins = largest integer below [1.88 n ^ (2/5)]

The small-n part is a rule of thumb that says you should have on average at least five data points per bin (a rule which is not always followed in practice).  The large-n part has a real basis in statistical theory.  A reference for it is in Goodness-of-Fit Tests by Ralph D'Agostino and Michael Stephens (Dekker 1986), page 70.
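
In code, the rule is a direct transcription of the two cases (a sketch):

    import math

    def chi_squared_bins(n):
        # Number of equally probable chi-squared bins for n data points,
        # per the @RISK 5.x rule quoted above.
        if n < 35:
            return round(n / 5)
        return math.floor(1.88 * n ** (2 / 5))

    # e.g. chi_squared_bins(30) == 6 and chi_squared_bins(100) == 11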

After a fit, you can find how many bins @RISK used for computing the chi-squared statistic by clicking the "Statistical Summary" icon at the bottom of the "Fit Results" graph.

The degrees of freedom for the χ² statistic is (number of points) minus 1, without regard to the number of parameters in the particular distribution. You can see this by examining the critical parameters in that same statistical summary. Law and Kelton, in Simulation Modeling and Analysis (2000), pages 359–360, say that some authors do vary degrees of freedom according to the number of parameters in the fitted distribution, but the conservative procedure is to use (number of points) minus 1, as @RISK does.

It's important to remember that the χ² binning has zero effect on which fit is actually presented to the user. In other words, when @RISK is trying to fit to (say) a triangular distribution, it chooses the parameters that make the triangle as close as possible to your data as measured by MLEs. (There is no L-M optimizer in current versions of @RISK.) Changing the binning may change the statistics that purport to measure the goodness of a fit, but will have no effect on the parameters of the fitted distribution.

For most data sets, from a glance at the overlay plot of the fitted distribution against your data it should be obvious which fit is best. If the binning is important to you, you can click that tab of the Fit dialog before performing the fit, and adjust the binning to your preference.

Last edited: 2014-01-14

4.11. Discrepancy in AIC Calculation?

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I fitted {1,2,3,4,5} to a RiskIntUniform distribution. I used the formula

AIC = 2k – 2×ln(L)

where k = 2 is the number of parameters and L is the likelihood.

For a uniform integer distribution fitted to a sample of n = 5 points, every point has probability of 1/5, and so ln(L) = 5 ln(1/5). I computed

AIC = 2k – 2×ln(L) = 2×2 – 10×ln(1/5) = about 20.0944

But @RISK gives 26.0944 in the Fit Results window. How do you reconcile this?

@RISK actually computes AICc, which includes a correction for finite sample sizes.

The formula is AICc = AIC + 2k(k+1)/(n − k − 1)

for a distribution with k parameters fitted to n data points.

With k = 2 for a RiskIntUniform and n = 5 data points,

AICc = AIC + 2×2×3/(5–2–1) = AIC + 6 = 26.0944

just as shown by @RISK.

The finite-sample correction is important for very small samples, but much less important for samples of reasonable size. For example, for fitting a 2-parameter distribution to 30 data points, the correction would be 12/27, about 0.4444.
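
The arithmetic above, reproduced in a few lines (our sketch):

    import math

    def aicc(k, n, log_likelihood):
        # AIC with the finite-sample correction that @RISK reports as "AIC".
        aic = 2 * k - 2 * log_likelihood
        return aic + 2 * k * (k + 1) / (n - k - 1)

    # RiskIntUniform fitted to {1,2,3,4,5}: k = 2, n = 5, ln L = 5 ln(1/5)
    print(aicc(2, 5, 5 * math.log(1 / 5)))   # 26.0944, matching @RISK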

Additional keywords: Distribution fitting, IntUniform

Last edited: 2015-06-19

4.12. Interpreting AIC Statistics

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

@RISK gives me several candidate distributions. How can I interpret the AIC statistics? How much of a difference in AIC is significant?

The answer uses the idea of evidence ratios, derived from David R. Anderson's Model Based Inference in the Life Sciences: A Primer on Evidence (Springer, 2008), pages 89-91. The idea is that each fit has a delta, which is the difference between its AICc and the lowest of all the AICc values. (@RISK actually displays AICc, though the column heading is AIC; see Discrepancy in AIC Calculation?)

Example: suppose that the normal fit has the lowest AICc, AICc = –110, and a triangular fit has AICc = –106. Then the delta for the triangular fit is (–106) – (–110) = 4.

The delta for a proposed fit can be converted to an evidence ratio. Anderson gives a table, which can also be found on the Web. One place is page 26 of Burnham, Anderson, Huyvaert's "AIC model selection and multimodel inference in behavioral ecology", Behav Ecol Sociobiol (2011) 65:23–35 (PDF, accessed 2014-07-11). In the table, a delta of 4 corresponds to an evidence ratio of 7.4, meaning that the normal fit is 7.4 times as likely as the triangular fit to be the right fit. If you had to choose between those two only, there's a 7.4/8.4 = 88% chance that the normal is right, and a 1/8.4 = 12% chance that the triangular is right. But of course you usually have more than two fits to choose from.

To give you a further idea, delta = 2 corresponds to an evidence ratio of 2.7, and delta = 8 to an evidence ratio of 54.6.
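
Those table values follow from the standard relationship between delta and the evidence ratio, e^(delta/2). A sketch of the arithmetic in the example above:

    import math

    def evidence_ratio(delta):
        # How many times more likely the best fit is than a fit whose
        # AICc exceeds the minimum by delta.
        return math.exp(delta / 2)

    er = evidence_ratio(4)        # ~7.4, the normal-vs-triangular example
    p_best = er / (er + 1)        # ~0.88 if you must choose between the two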

So how high does delta need to be before you reject a proposed fit as unlikely? Anderson cautions, "Evidence is continuous and arbitrary cutoff points ... should not be imposed or recognized." Yes, the higher deltas correspond to higher evidence ratios, so you can think of them as higher evidence against the lower-ranking fit, but you can never reject a fit with complete certainty. And of course if all the fits are poor then the best of them is still not a good fit. One other argument against relying solely on mechanical tests: A model that is a poorer overall fit may nonetheless be better in the region you care most about, or vice versa. It's always advisable to look at the fitted curves against the histogram of the data when making your final decision.

Last edited: 2015-06-19

4.13. Bounds of Fitted Uniform and Exponential Distributions

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

When I fit points to the continuous distributions RiskUniform and RiskExpon, the minimum of the fitted distribution is to the left of the smallest data value.  The maximum of the fitted RiskUniform is to the right of the largest data value.

This seems strange at first, but it actually makes good sense if you look deeper.  Here are two explanations:

  • The data points you're fitting are a sample from some ideal theoretical distribution that you are trying to find. How likely is it, purely by chance, that your sample data points would include both the absolute minimum and the absolute maximum of the theoretical distribution? Extremely UNlikely. Therefore, the boundaries of the theoretical distribution are almost certainly wider than the boundaries of your sample data.

    This issue is the famous German Tank Problem: given serial numbers of captured or destroyed tanks, how do you estimate the number of tanks that are being produced?

  • More formally, consider the sampling distribution of an order statistic, in this case the minimum and possibly the maximum. Let n be the number of points in your sample. If you take many, many samples of size n from the theoretical distribution, and take the minimum of each sample, you have the sampling distribution of the minimum. The mean of that distribution (μmin) should equal the minimum of the data points. Some values in the sampling distribution will be above μmin and some will be below. The minimum (the left boundary) of the sampling distribution of the minimum must be less than μmin, which means it must be less than the minimum of the sample data.

    With the uniform distribution, you can make an equivalent statement about the maximum. The exponential function is unbounded to the right, so there is no maximum.

    @RISK lets you view a simulated sampling distribution of each parameter of each fitted distribution, such as the minimum and maximum of the fitted continuous uniform distribution. While setting up the fit, on the Bootstrap tab of the dialog, select Run Parametric Bootstrap. Then, on the Fit Results window, click the Bootstrap Analysis icon, which is the next to last one in the row at the bottom of the window. Select Parameter Confidence Intervals. Select a distribution at the left, select a parameter at the top, and see the graph of the simulated sampling distribution of that parameter. To see the statistics of the distribution, click the drop-down arrow at the top right of the graph and select Legend (with Statistics) or Statistics Grid.

This issue will come up in any bounded continuous distribution, where the probability density shifts abruptly at the left from zero to a positive value, or at the right from a positive value to zero.

For example, if you fit the points {11,12,13,14,15} as a continuous uniform distribution, you get RiskUniform(10,16), not RiskUniform(11,15) as you might expect at first. (Please see attached illustration.)  To make μmin and μmax equal the minimum and maximum of the sample data, @RISK applies a bias correction of (max − min)/(n − 1) = (15 − 11)/(5 − 1) = 1, so the minimum and maximum of the RiskUniform are 1 unit left and right of the minimum and maximum of the data.  For the points {11,11.5,12,12.5,13,13.5,14,14.5,15}, the bias correction is 0.5, and the fitted uniform function is RiskUniform(10.5,15.5).

For the exponential function, the bias correction is (mean − min)/n. Again considering the points {11,12,13,14,15}, the bias correction is (13 − 11)/5 = 0.4. (Please see attached illustration.)
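
Both corrections are easy to reproduce (a sketch of the stated formulas, not @RISK's internal code):

    def uniform_fit_bounds(data):
        # Bias-corrected bounds for a fitted continuous uniform distribution.
        lo, hi, n = min(data), max(data), len(data)
        c = (hi - lo) / (n - 1)
        return lo - c, hi + c

    def expon_fit_minimum(data):
        # Bias-corrected lower bound (shift) for a fitted exponential.
        mean = sum(data) / len(data)
        return min(data) - (mean - min(data)) / len(data)

    print(uniform_fit_bounds([11, 12, 13, 14, 15]))   # (10.0, 16.0)
    print(expon_fit_minimum([11, 12, 13, 14, 15]))    # 10.6, i.e. 0.4 left of 11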

Last edited: 2015-06-19

4.14. Best Fit for Small Data Sets?

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

When I do a fit on {1,2,3,4,5} as discrete data, @RISK prefers a RiskPoisson distribution, even though the RiskIntUniform is clearly a better fit. Why is that?

In @RISK 6.x, the default statistic for measuring goodness of fit is AIC (more specifically, AICc). For small data sets, the AIC calculation strongly prefers distributions with fewer parameters. (This is an application of the principle of parsimony.) The Poisson distribution and the geometric distribution (RiskGeomet) are both one-parameter distributions, but the uniform integer distribution (RiskIntUniform) is a two-parameter distribution. With a data set of only five points, the AIC statistic's preference for distributions with fewer parameters trumps the poorer likelihood functions computed for those distributions.

There are three countermeasures:

  • For small data sets, consider changing Fit Ranking to BIC. Although BIC also favors distributions with fewer parameters, it doesn't favor them as strongly as AIC does. (Please see attached illustration.)

  • Don't just take the first listed fit, but examine the fitted distributions. Your data probably won't show the kind of dramatic difference that we got from this artificial data set, but you may find that a fit that doesn't have the best statistic actually does a better job in a particular region of the graph that you care most about.

  • Use more data points. @RISK does allow fitting to as few as five data points. But in general, the more points you have, the better the fitted distribution will match the true theoretical distribution that those points represent. Extending this made-up data set, with as few as nine points {1,2,3,4,5,6,7,8,9} @RISK computes the smallest AIC statistic for the integer uniform distribution.

Last edited: 2015-06-19

4.15. N/A in Results from Parametric Bootstrapping

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

In the dialog box for distribution fitting, I selected parametric bootstrapping. The results columns for some distributions show N/A for bootstrap results, instead of numbers. What does this mean?

N/A for bootstrapping means that the bootstrap failed for that distribution.

If the bootstrap fails for one distribution, it will not necessarily fail for all distributions. The bootstrapping process is done separately for each type of distribution.

What does @RISK consider to be a failure of bootstrapping?

In the bootstrapping process, @RISK takes each fitted distribution and generates a large number of new sample data sets from it, each with the same size as the original data set. It then refits these new data sets and tabulates information about each of the resampled fits. @RISK takes a conservative approach. If it is unable to fit a distribution to even one of the new sample data sets that it generated (meaning that the parameters of that distribution did not converge for that new data set), then @RISK considers that the bootstrap has failed for that distribution.
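
In outline, the process looks like this sketch, with SciPy standing in for @RISK's proprietary fitters and a gamma distribution as an arbitrary example:

    import numpy as np
    from scipy import stats

    def parametric_bootstrap(data, dist=stats.gamma, resamples=1000):
        # Fit once, then repeatedly: draw a same-size sample from the fitted
        # distribution, refit it, and record the refitted parameters.
        params = dist.fit(data)
        refits = []
        for _ in range(resamples):
            sample = dist.rvs(*params, size=len(data))
            try:
                refits.append(dist.fit(sample))
            except Exception:
                return None   # one failed refit fails the whole bootstrap
        return np.array(refits)   # rows: resamples; columns: parameters

    # Percentiles of each column then give parameter confidence intervals,
    # e.g. np.percentile(refits, [2.5, 97.5], axis=0) for a 95% interval.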

Does that mean that the fit itself is bad?

Fits aren't good or bad in absolute terms. Instead, you can say that one distribution is better or worse than another for your data set.

Evaluating fits is both objective and subjective. You have the guidance of the fit statistics; for example, see Interpreting AIC Statistics. But your own judgment plays a part, too. For one thing, you have to decide which statistic to use — by a different statistic, fits may rank differently. Also, as you compare your data set to the distributions that @RISK came up with, you might decide to use distribution A rather than B, even though B has a more favorable fit statistic. Maybe A is a better fit than B in a region that you feel is most important, or maybe you have some more general reason for preferring one type of distribution over another.

Last edited: 2016-01-14

4.16. "Auto Detect" Button in Time Series Fitting

Applies to: @RISK 6.x/7.x, Industrial Edition

How does @RISK try to achieve stationarity in fitting data to time series? Does the "Auto Detect" button use the Dickey-Fuller test or KPSS?

The first thing to say is that, without knowing the source of the data, it's impossible to do auto detect perfectly; by necessity, it's a heuristic. When you choose Auto Detect, you need to look at the result and correct it as necessary, based on your knowledge of the source of the data.

The KPSS test isn't appropriate for @RISK, since we don't really support trend-stationary time-series. Currently we only detrend using differences, although that may change in future versions of @RISK.

The Dickey-Fuller (DF) test—or usually augmented Dickey-Fuller—is more appropriate, but @RISK does not use that either. The Analysis of Time Series by Chris Chatfield (Chapman & Hall, 2003) p. 263 is quite negative about unit root testing. At best, it would be some additional information that could help distinguish between difference-stationarity and trend-stationarity, if we ever add the latter to @RISK. The DF tests have an additional drawback: they generally assume you have already removed other things that are making the data non-stationary. The obvious hard one is seasonality, especially when combined with a functional transform. But it's Catch-22, because every paper we found with routines for determining the periodicity of seasonality assumes you have already removed trends. Therefore, we had to take a different approach.

@RISK uses a technique adapted from electronic signal processing. There, one takes small "windows" or subsets of the data, calculates statistics of these subsets, and then applies standard statistical tests to see whether these statistics are changing as a function of time. @RISK looks through all the possible transformations (functional, detrending, and deseasonalization) to find the combination that produces acceptable test statistics. We also use proprietary techniques to address some nasty details, such as how to avoid over-differencing, how to determine the seasonal period, and so forth.

Last edited: 2015-12-21

4.17. ARIMA Model in Time Series

Applies to: @RISK 6.x/7.x, Industrial Edition

Does @RISK support ARIMA-based time series? I couldn't find anything about ARIMA in the help file.

To use ARIMA forecasting in @RISK, select ARMA(1,1) and specify trending. First-order integration gives ARIMA(1,1,1), and second-order integration gives ARIMA(1,2,1).

For all the details, step by step with illustrations, please open the attached Excel file. (You can view the file without starting @RISK.)

Last edited: 2016-05-12

4.18. Constraining Time Series to Return Positive Results

Applies to: @RISK 6.x/7.x Industrial Edition

I'm using Time Series in @RISK 6. I want to ensure that the projected results are greater than zero. I tried using RiskTruncate, but the manual says that RiskTruncate and RiskShift aren't effective with time series. Is there a way to do what I want?

Yes, assuming that the original data are greater than zero.

In the fit dialog, select Function and then Logarithmic. The data will be transformed according to that function. After the fit, the projected data are de-transformed. De-transforming a logarithm means exponentiating, and the range of the exponential function is all positive numbers.

You'll want to experiment a bit and make sure that doesn't have any undesirable side effects with your particular data set.

Last edited: 2015-06-19

4.19. Number of Periods to Forecast in Time Series

Applies to: @RISK 6.x/7.x, Industrial Edition

In time series fitting, when I click Write to Cells, @RISK gives me a default array of 24 cells. Can I change the default number of periods that time series will project into the future?

Yes, you can. In Utilities » Application Settings, expand the section "Time Series Graph Defaults". The last item, "Num. Default Data Points", tells the time series fit how many periods to forecast past the end of your historical data. You can still change this when you do any particular fit.

Last edited: 2018-05-03

4.20. Technical Details of Distribution Fitting

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

How does @RISK estimate distribution parameters? Can you give me any details?

In general, we use Maximum Likelihood Estimators (MLEs). For details, please use the Search tab in @RISK help to find the topic "Sample Data — Maximum Likelihood Estimators (MLEs)". After reading, click the Next button at the top and continue reading the subtopic "Modifications to the MLE Method".

For references to methods that we use, search Help for the term "Merran" and click on the topic "Distributions and Distribution Fitting" in the search results.

It's important to realize that not all distributions are fit in exactly the same way. In the more than 30 years we've been improving @RISK, we have developed many proprietary tweaks to the standard algorithms, to do a better job of fitting particular distributions. These let the fit proceed more efficiently, handle cases where the standard MLE algorithms break down, and so on.

Although the fine details of our fitting algorithms are proprietary, the fit results include many popular goodness-of-fit statistics, including AIC, Anderson-Darling, BIC, χ², Kolmogorov-Smirnov, and RMS. For details of these statistics, see the "Fit Statistics" topic in @RISK help, as well as several articles in the @RISK Distribution Fitting chapter of this Knowledge Base.

Last edited: 2016-08-09

4.21. Time Series with Irregular Intervals

Applies to: @RISK 6.x/7.x/8.x, Industrial Edition

Can I fit a Time Series distribution with irregularly spaced data?

The @RISK methodology for Time Series is only applicable to equally spaced data.

So, if you have enough data points, we suggest using an interpolation method to transform the series into equally spaced observations, and then using any model available in @RISK.

Last edited: 2020-08-06

5. Correlation in @RISK

5.1. How @RISK Correlates Inputs

Applies to: @RISK, all releases

How do I specify correlations?

When two or more input variables should be correlated, you can click the Define Correlations icon in the ribbon, specify correlations in the Model Definition Window, or add RiskCorrmat( ) functions directly to the distribution formulas for those variables in your Excel sheet.

The correlation coefficients you specify are Spearman rank-order correlations, not Pearson linear correlations. The rank-order correlation coefficient was developed by C. Spearman in the early 1900's, and this article explains how @RISK computes the rank-order correlation coefficient.

Pearson correlation assumes a linear relationship between variables, but the great majority of relationships among model inputs are non-linear, and Spearman rank-order correlation is usually more appropriate in those cases. A Web search for "choose Spearman or Pearson correlation" will turn up many articles about the different uses of these two forms of correlation.

During a simulation, how does @RISK draw random numbers to achieve my specified correlations?

@RISK draws all samples for correlated variables before the first iteration of the simulation. (Non-correlated variables are sampled within each iteration.) Knowing the number of iterations to be performed, @RISK adjusts the ranking and associating of samples within each iteration to yield the defined correlation values.  Again, this correlation is based on rankings of values, not actual values themselves as with the linear correlation coefficient. A value's "rank" is determined by its position within the min-max range of possible values for the variable.

@RISK generates rank-correlated pairs of sampled values in a two-step process:

  1. A set of randomly distributed "rank scores" is generated for each variable. If 100 iterations are to be run, for example, 100 scores are generated for each variable. (Rank scores are simply values of varying magnitude between a minimum and maximum. @RISK uses van der Waerden scores based on the inverse function of the normal distribution.) These rank scores are then rearranged to give pairs of scores which generate the desired rank-order correlation coefficient. For each iteration there is a pair of scores, with one score for each variable.

  2. A set of random numbers (between 0 and 1) to be used in sampling is generated for each variable. Again, if 100 iterations are to be run, 100 random numbers are generated for each variable. These random numbers are then ranked smallest to largest. For each variable, the smallest random number is then used in the iteration with the smallest rank score, the second smallest random number is used in the iteration with the second smallest rank score, and so on. This ordering based on ranking continues for all random numbers, up to the point where the largest random number is used in the iteration with the largest rank score.

This process results in a set of paired random numbers that can be used in sampling values from the correlated distributions during an iteration of the simulation.

This method of correlation is known as a "distribution-free" approach because any distribution types may be correlated. Although the samples drawn for the two distributions are correlated, the integrity of the original distributions is maintained. The resulting samples for each distribution reflect the distribution function from which they were drawn.
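
A stripped-down, two-variable version of this pairing (our sketch of the general idea; @RISK's actual implementation is more refined):

    import numpy as np
    from scipy import stats

    def correlated_uniforms(n, r, rng=None):
        rng = rng or np.random.default_rng()
        # Step 1: van der Waerden scores (inverse normal CDF of equally
        # spaced ranks), shuffled, then mixed to make a second score stream
        # whose correlation with the first is approximately r.
        base = stats.norm.ppf(np.arange(1, n + 1) / (n + 1))
        s1 = rng.permutation(base)
        s2 = r * s1 + np.sqrt(1 - r**2) * rng.permutation(base)
        # Step 2: independent Uniform(0,1) sampling numbers, reordered so each
        # stream's k-th smallest value lands in the iteration holding that
        # stream's k-th smallest rank score.
        u1 = np.sort(rng.random(n))[stats.rankdata(s1).astype(int) - 1]
        u2 = np.sort(rng.random(n))[stats.rankdata(s2).astype(int) - 1]
        return u1, u2

    u1, u2 = correlated_uniforms(10_000, 0.7)
    print(stats.spearmanr(u1, u2).correlation)   # close to 0.7

For two variables, the mixing coefficients r and sqrt(1 − r²) are the second row of the Cholesky factor of the 2×2 score correlation matrix, which is how the approach generalizes to full matrices (see the next question).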

Does @RISK use Cholesky decomposition?

Yes. If Cholesky fails, the matrix is not self-consistent, and @RISK proceeds as in How @RISK Adjusts an Invalid Correlation Matrix.

If Cholesky succeeds, @RISK proceeds as in Iman, R. L., and W. J. Conover. 1982. "A Distribution-Free Approach to Inducing Rank Correlation Among Input Variables." Commun. Statist.-Simula. Computa. 11: 311-334. Retrieved 2018-08-23 from https://www.uio.no/studier/emner/matnat/math/STK4400/v05/undervisningsmateriale/A distribution-free approach to rank correlation.pdf.

Why does Excel report a different correlation from the one I specified?

This correlation method yields a rank order correlation (Spearman coefficient) that is usually quite close (within normal statistical variability) to your specified value. However, Excel's =CORREL( ) function reports the Pearson coefficient.  The Pearson value may vary somewhat from the Spearman value, depending on the exact nature of the correlated distributions. This difference is illustrated in the attached workbook. (Beginning with @RISK 5.5, you can use the =RiskCorrel( ) worksheet function to display the Spearman or Pearson correlation for simulated data.)

Correlation of discrete distributions can be a particular problem.  For more on that, please see Correlation of Discrete Distributions.

See also: Correlation in @RISK collects over a dozen articles explaining various aspects.

Additional keywords: Corrmat property function

Last edited: 2018-08-23

5.2. Limit on Correlated Variables?

Applies to: @RISK 5.x–7.x

How many input distributions can be correlated?  Is there a limit to the size of my correlation matrix?

There is no fixed limit. However, as with all other aspects of modeling, your available system resources are a constraint.

Note on Excel 2003, with @RISK 6.3 and older:
Excel 2003 workbooks are limited to 256 columns. You can still have larger correlation matrices, but you have to use special techniques; see Correlation Matrix Exceeds Excel's Column Limit. This does not apply to newer versions of @RISK, because they require Excel 2007 or newer.

Recommendation: If possible, don't have one huge matrix, but partition your correlation into smaller matrices. For example, suppose you have 400 inputs that are correlated. A 400×400 matrix is 160,000 cells. But if those 400 inputs actually fall into four groups of about 100 each, and there's correlation within each group but not between the groups, then you should use four 100×100 matrices, for a total of 40,000 cells. @RISK can test the smaller matrices for validity and, if necessary, adjust them much faster than one large matrix. If all 400 variables really do need to be correlated with each other, you need that larger matrix. But if you can group the variables as described, it's worth having a separate correlation matrix for each group.

Recommendation: If you have several groups of variables that all need the same correlations within the group, they can all use the same smaller matrix. Follow the technique in Same Correlation Coefficients for Several Groups of Inputs. A typical example is time periods or geographical regions where a number of factors are correlated in the same way, but there's no correlation between periods or between regions.

Last edited: 2016-08-30

5.3. Same Correlation Coefficients for Several Groups of Inputs

Applies to: @RISK 4.x–7.x

I have multiple groups of inputs, and I want to use the same set of correlation coefficients for each group. But @RISK correlates all the inputs of all the groups together, which is not what I want. How do I tell @RISK that inputs A, B, C are correlated with each other, and D, E, F are correlated with each other with the same coefficients, but A, B, and C are not correlated with D, E, and F?

The short answer is to use the optional "instance" argument to RiskCorrmat( ), assigning a different instance to each group of correlated inputs. See attached example CorrelationGroups.xls. After a simulation, the worksheet CorrelationAudit_Report within that workbook shows sample correlations within a group and between groups.

You can set up the correlations by pointing and clicking (Model A below) or by formula editing (Model B below). These methods will work with any number of groups, and any number of inputs per group.

Solution details — point and click, Model A:

If the correlated groups aren't too large and there aren't too many of them, you can easily correlate separate groups of inputs through menu selections. For simplicity we'll show two groups of three inputs each. Within the attached example CorrelationGroups.xls, the two worksheets ModelA and @RISK Correlations were created by this method.

In @RISK 5.x–7.x:

  1. Highlight the first group of inputs you want to correlate, using Shift-click for a continuous range and Ctrl-click for non-adjacent cells.
  2. Right-click and select @RISK » Define Correlations, or just click the Define Correlations icon in the ribbon.
  3. A window opens into a new correlation matrix, with your selected inputs listed. Set your correlation coefficients, either above or below the diagonal.
  4. Click the icon at the bottom of the window to check matrix consistency, and correct any problems. See How @RISK Tests a Correlation Matrix for Validity.
  5. Near the top of the window, set the matrix location. If you wish, also give the matrix a name and description.
  6. Just above the matrix, click the first icon, "Rename Instance", and enter a unique identifier for this group of inputs. It can be text or numeric, such as a year number.
  7. In the same row of icons, click "Create New instance".  When prompted, enter a unique identifier for the second group of inputs that will use this correlation matrix. Click the Add Inputs button at the bottom and select the second group of inputs.
  8. Repeat step 7 for each group of inputs that will use this correlation matrix.  After entering the last group, click OK.

Special case: If the groups of inputs are in a contiguous rectangular array, either as rows or as columns, you can short-cut the above process:

  1. Click Define Correlations in the ribbon. In the dialog box, click the Create Correlated Time Series icon near the top. (Despite the name of the icon, the groups of inputs don't actually have to be a time series.)
  2. With your mouse, select the rectangle that contains all the groups of inputs that you want to correlate. Select correlation by rows or by columns.
  3. Set your correlation coefficients, either above or below the diagonal.
  4. Click the icon at the bottom of the window to check matrix consistency, and correct any problems. See How @RISK Tests a Correlation Matrix for Validity.
  5. Near the top of the window, set the matrix location. If you wish, also give the matrix a name and description. Click OK.

In @RISK 4.x:

  1. Open the Model window by clicking the icon "Display List of Outputs and Inputs". (Alternative: menu selections @RISK, Model, List Outputs and Inputs.)
  2. In the Explorer-style list at the left, click the first correlated input in the first group, then Ctrl-click the other correlated inputs in the first group. Click the icon "Define Correlation". (Alternative: menu selections Model, Correlate Distributions.)
  3. Enter your correlation coefficients, change the matrix name if you wish, and click Apply. You'll see a new Correlations category in the Explorer-style list at the left with the name of your correlation matrix, and @RISK creates a new "@RISK Correlations" worksheet in your workbook.
  4. Right-click on the name of the correlation matrix in the Explorer-style list, and select Edit Correlation Matrix. In the menu line of the Model window, select Correlation, Instance, Create New instance. Give the instance a name.
  5. Drag each input of the second group into the correlation matrix and when prompted select Replace. When you've done this with all the inputs of the second group, click Apply.
  6. Repeat steps 4 and 5 for each additional group of correlated inputs.

You can now run the simulation. Inputs within each group will be correlated, but inputs in different groups will not be correlated. The worksheet CorrelationAudit_Report, which is created automatically within the workbook, shows that the actual correlations match the requested correlations quite well.

Solution details — formula editing, Model B:

As an alternative to point-and-click, you can take advantage of Excel's ability to replicate formulas by dragging the fill handle. (Search Excel help for "fill handle" if this is unfamiliar to you.) This method scales well to larger groups of correlated inputs, or greater numbers of groups.

For this example we'll show ten groups of four inputs each, representing growth in the value of stocks and a bank account over ten years. Performances of stocks in a given year are positively correlated to each other but negatively correlated to interest rates. Within the attached example CorrelationGroups.xls, worksheet ModelB was created by this method.

  1. Create your correlation matrix; row and column heads are optional but help to document the model. Highlight just the actual coefficients and define a name for them (menu selection Insert, Name, Define). In our example the correlation matrix including headings is C18:G23, and SecondCorr is the name of the 4×4 array of coefficients in D20:G23.

  2. Set up your first group of correlated inputs as one row or one column. Create the distribution in the usual way, but add a RiskCorrmat function as an additional argument within the distribution function. The three arguments to RiskCorrmat are the name you assigned to the correlation matrix, the input number, and the instance. For reasons that will become clear in the next step, the instance argument should be a reference to the column header.

    In our example, the first group is Year 1, in column E. Growth factors are the @RISK distributions in cells E9, E11, E13, and E15; look at the formulas for those cells and see how the RiskCorrmat function is used. The new values at year end are in cells E10, E12, E14, and E16. Notice that the growth factors are correlated, but the year-end values are not.

  3. Highlight the cells of the first year, E8:E16, and drag the fill handle to create the additional groups through year 10 in column N. Notice how the instance argument changes in each group, but is the same for all the inputs within a group; this was the reason for the cell reference in step 2. Note also that the named correlation matrix does not change from one column to the next.

You can now run the simulation. Inputs within each group will be correlated, but inputs in different groups will not be correlated. The worksheet CorrelationAudit_Report, which is created automatically within the workbook, shows that the actual correlations match the requested correlations quite well.

Additional keywords: Corrmat property function

Last edited: 2016-12-12

5.4. How @RISK Tests a Correlation Matrix for Validity

Available in Spanish: Cómo prueba @RISK una matriz de correlación para determinar su validez

Applies to:
@RISK 4.x–7.x

How does @RISK decide whether my correlation matrix is valid?

The basic principle is that if two inputs are each strongly correlated to a third, they must be at least weakly correlated to each other. For example, it would be inconsistent to correlate A and B at 0.9, A and C at 0.8, but B and C at 0.0.  A valid matrix is one where the correlation coefficients are mutually consistent.

When only three inputs are involved, it's pretty easy to check for valid combinations. If the coefficient of A and B is m, and the coefficient of A and C is n, then the coefficient of B and C must be in the range of

m·n ± sqrt( (1 − m²)(1 − n²) )

Source: Two Random Variables, Each Correlated to a Third at Math Forum.

For example, if A and B correlate at 0.9, and A and C correlate at 0.8, then B and C must correlate in the range of

0.9 × 0.8 ± sqrt( (1 − 0.9²)(1 − 0.8²) ) = 0.72 ± 0.26153 = 0.458 to 0.982

Here's how @RISK generalizes this principle for a correlation matrix of any size:

If a correlation matrix is computed from a full data set, it is always at least positive semi-definite: it will be positive definite if there is no exact linear relationship among the variables, and positive semi-definite but not positive definite if there is such a relationship.

The easiest way to determine if a matrix is positive definite is to calculate its eigenvalues, and that is what @RISK does at the start of a simulation. A positive definite matrix will have all positive eigenvalues and a positive semi-definite matrix will have eigenvalues greater than or equal to zero and at least one eigenvalue equal to zero.

For @RISK, a "valid" matrix is any matrix that is positive definite or positive semi-definite, and an "invalid" matrix is any matrix that has at least one negative eigenvalue.  For details on how @RISK adjusts an invalid correlation matrix, please see How @RISK Adjusts an Invalid Correlation matrix.

How can I determine ahead of time if my matrix is invalid?

With @RISK 5.x–7.x: In the @RISK Model window, click the Correlations tab and use the Check Matrix Consistency command to have @RISK check whether the matrix is self-consistent.

With @RISK 4.5 and earlier: An "invalid" matrix has one or more negative eigenvalues.  Excel itself doesn't have a worksheet function to calculate eigenvalues, but there are many software applications and Excel add-ins with that capability. One freeware alternative is MATRIX at http://digilander.libero.it/foxes/ (accessed 2013-03-14).  (We mention this as one example, without endorsement and without prejudice to any other software for computing eigenvalues.)
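
If you'd rather not install anything, you can also test the matrix with a short VBA user-defined function. This is a minimal sketch of ours, not part of @RISK: it attempts a Cholesky factorization, which succeeds exactly when a symmetric matrix is positive definite. A borderline positive semi-definite matrix, which @RISK also accepts as valid, will fail this stricter test, so treat a False result near the boundary with care.

    Function IsPositiveDefinite(rng As Range) As Boolean
        ' Attempt a Cholesky factorization of the square, symmetric
        ' matrix in rng. The factorization exists if and only if the
        ' matrix is positive definite.
        Dim n As Long, i As Long, j As Long, k As Long, s As Double
        n = rng.Rows.Count
        Dim L() As Double
        ReDim L(1 To n, 1 To n)
        For i = 1 To n
            For j = 1 To i
                s = rng.Cells(i, j).Value
                For k = 1 To j - 1
                    s = s - L(i, k) * L(j, k)
                Next k
                If i = j Then
                    If s <= 0 Then Exit Function   ' pivot not positive: not positive definite
                    L(i, j) = Sqr(s)
                Else
                    L(i, j) = s / L(j, j)
                End If
            Next j
        Next i
        IsPositiveDefinite = True
    End Function

You can call it as a worksheet function, for example =IsPositiveDefinite(D20:G23) on the range of coefficients.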

See also:  How @RISK Adjusts an Invalid Correlation Matrix

Last edited: 2015-06-23

5.5. How @RISK Adjusts an Invalid Correlation Matrix

Applies to:  @RISK 4.x–7.x

How does @RISK decide whether my correlation matrix is valid?  If the matrix is invalid, how does @RISK adjust it to create a valid matrix?

A correlation matrix is valid if it is self-consistent, meaning that the specified coefficients are mutually compatible. Please see How @RISK Tests a Correlation Matrix for Validity.

When you click Start Simulation, @RISK checks all correlation matrices for validity. If a matrix is invalid, @RISK looks for an adjustment weight matrix (see below). If the adjustment weight matrix exists, @RISK uses it to adjust the invalid correlation matrix, and the simulation proceeds. But if there's no adjustment weight matrix, @RISK displays this message:

Warning
The correlation matrix at ... is not self-consistent. @RISK can generate the closest self-consistent matrix.  OK generates a corrected matrix and continues, Cancel stops the simulation.

If you want to adjust the matrix on your own or create an adjustment weight matrix, click Cancel. This is usually a good idea, because in the absence of an adjustment weight matrix @RISK may make quite large changes in your correlation coefficients.

How do I set up and use an adjustment weight matrix?

This feature is available in @RISK 5.5 and newer.

You can create an adjustment weight matrix to guide @RISK in adjusting the correlations. The adjustment matrix is a triangular matrix the same size as the correlation matrix; a square matrix is also acceptable as long as it is symmetric.

In your adjustment weight matrix, enter a weight 0 to 100 in each cell below the diagonal. A weight of 100 means that the corresponding coefficient must not be changed, and a weight of 0 means that you don't care how much @RISK changes the corresponding coefficient. Between 0 and 100, larger weights place greater importance on the original coefficients. In other words, larger weights cause @RISK to apply less adjustment to the corresponding correlation coefficients, and smaller weights let @RISK adjust the corresponding correlation coefficients more.

The adjustment can be done during a simulation, or in a one-time procedure before a simulation. Both possibilities are explained below.

Technical details: Your correlation matrix is not self-consistent, meaning that it has one or more negative eigenvalues. You want @RISK to find a consistent matrix that is as close as possible to your original inconsistent one, taking your adjustment weight matrix into account. This is a non-linear optimization problem. The goal is to minimize the weighted sum of squared differences between the inconsistent matrix and a candidate consistent matrix. @RISK uses the standard limited-memory BFGS algorithm to perform this optimization.

As mentioned above, weights are in the range 0 to 100, and those two endpoints are special. Weights between them are treated in an exponential fashion. The exact details are proprietary, but 50 versus 25, or 10 versus 5, means "more important", not "twice as important".

Correcting a matrix during simulation:

The name of your adjustment weights matrix must match the range name of the correlation matrix, with the suffix _Weights. For example, if your correlation matrix is named Matrix1, the associated adjustment weight matrix must be named Matrix1_Weights. If a correlation matrix is inconsistent, @RISK looks for an adjustment weights matrix with the right name, and if it finds one it will adjust the inconsistent matrix without displaying any message. You can name a matrix by highlighting its cells and then typing its name in the name box to the left of Excel's formula bar. Or, click Formulas » Define Name. (In Excel 2003 and older, click Insert » Name » Define.)

Please see the attached example, KB75_AdjustDuringEverySimulation.xlsx.

When @RISK adjusts an invalid matrix during simulation, it doesn't store the adjusted matrix in your workbook or anywhere permanent. @RISK does cache the adjusted matrix in your temporary folder, in a file called CORRMAT.MTX. It will reuse that file in future simulations if you haven't changed your original matrix.

Correcting a matrix outside of a simulation:

You can perform the adjustment up front, rather than leaving @RISK to do it in every simulation. If you have a large correlation matrix, this can make a difference in the speed of your simulation. Use the RiskCorrectCorrmat( ) array function to place the corrected matrix in your worksheet, and make all your correlated inputs refer to the corrected matrix, not the original. With this approach, you can assign any name, or no name, to the adjustment weight matrix.
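
For example, if your original matrix is named Matrix1 and your weights matrix is named Matrix1_Weights (both names hypothetical here), you could select a blank range the same size as Matrix1 and array-enter (Ctrl+Shift+Enter):

=RiskCorrectCorrmat(Matrix1, Matrix1_Weights)

then point the RiskCorrmat property functions of your inputs at that corrected range. Check Insert Function in @RISK for the exact argument list.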

Please see the attached example, KB75_RiskCorrectCorrmat.xlsx.

When the RiskCorrectCorrmat( ) function performs an adjustment on a large matrix, it may take considerable time. You'll see messages on Excel's status bar, showing the step number (the number of candidate valid matrices tried) and the residual (the sum of squared differences). @RISK keeps optimizing until the residual stops decreasing appreciably. Unfortunately, there's no way to know in advance how many steps will be necessary, so @RISK can't give you a progress indicator in the form of percent complete.

What if I don't use an adjustment weight matrix?

If you're running @RISK 4.x or 5.0, or if you're running a later version but you didn't specify an adjustment weight matrix, @RISK follows these steps to modify an invalid correlation matrix:

  1. Find the smallest eigenvalue, Eo.

  2. Shift the eigenvalues so that the smallest eigenvalue equals zero, by subtracting the product of Eo and the identity matrix (I) from the correlation matrix (C):

    C' = C − Eo·I

    Because the matrix is invalid, Eo is negative, so this step actually adds a positive multiple of the identity matrix. The eigenvectors of the matrix are not altered by this shift.

  3. Divide the new matrix by 1 − Eo so that the diagonal terms again equal 1:

    C'' = (1/(1 − Eo)) C'

The matrix that @RISK calculates by this method is positive semi-definite, and therefore valid, but in no way is it special or optimal. It's one of many possible valid matrices, and some of the coefficients in it may be quite different from your original coefficients.

@RISK stores the new matrix in file CORRMAT.MTX in your temporary folder. You can use this as a guide to modify your matrix so that @RISK won't need to adjust it every time you run a simulation. See How @RISK Tests a Correlation Matrix for Validity to ensure that your edited matrix is self-consistent.

Additional keywords:  CorrectCorrmat function

Last edited: 2017-10-20

5.6. Correlation of Discrete Distributions

Applies to: @RISK, all versions

I specified a correlation of 0.5 between two RiskBinomial(1, 0.5) distributions, and the actual correlation of simulated results was much lower. I understand that simulated results will only approximately match the requested correlation, but why such a big difference?

This issue will occur to some extent for any discrete distribution. In general, the fewer the possible values of the distribution, the greater the discrepancy you will see. Here is the explanation, using two RiskBinomial(1, 0.5) distributions as illustration.

Two of these distributions correlated at 0.5 give (0,0) 37.5% of the time, (0,1) and (1,0) each 12.5% of the time, and (1,1) 37.5% of the time. That is the same as saying that the data pairs (0,0), (0,0), (0,0), (0,1), (1,0), (1,1), (1,1), (1,1) occur with equal frequency. If you put these eight data pairs into Excel, the CORREL( ) function does indeed return 0.5.
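
You can verify this yourself (the layout is hypothetical): enter 0,0,0,0,1,1,1,1 in A1:A8 and 0,0,0,1,0,1,1,1 in B1:B8, which forms exactly those eight pairs, and then

=CORREL(A1:A8, B1:B8)

returns exactly 0.5.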

So why doesn't @RISK produce those data pairs with about those frequencies? This has to do with how @RISK generates random numbers for correlated distributions.

In order to generate correlated samples for any distributions, @RISK first generates 100 (or however many iterations are desired) pairs of random decimals between 0 and 1 that have the specified correlation coefficient. Call these the U01 numbers. The U01 numbers are then plugged into each distribution's inverse cumulative distribution function, which converts a U01 number into a sample over the range of the distribution. In the specific case of a RiskBinomial(1,0.5), any U01 number below 0.5 maps to a 0, and any at or above 0.5 maps to a 1. That all works as designed.

Note, however, that in mapping all those U01 numbers in that way we lose a lot of information. For example, 0.1 maps to a value of 0 the same as 0.49 does, and 0.51 and 0.99 both map to a value of 1. At this point, a lot of the correlation information is lost, because the 0.1 U01 number from the first distribution is more likely paired with a U01 number from the second distribution close to 0.1 than close to 0.49, but that information is lost, since they are both samples of 0.

Another way of looking at it is that at the end, when we have 100 samples from each distribution that are all either 0 or 1, there are many ways to assign ranks to those samples to calculate the Spearman rank-order correlation coefficient. Assigning them all the mid-rank (25.5 & 75.5) is just one way to do it. (For more on this, see How @RISK Computes Rank-Order Correlation.)

If we could assign the 0 sample that came from a lower U01 to a lower rank, and a 0 sample that came from a higher U01 to a higher rank, we'd get an observed correlation coefficient closer to what was asked for. But after the simulation, a 0 is a 0 and a 1 is a 1, and the information about where they came from is not available to @RISK's RiskCorrel( ) function or Excel's CORREL( ) function.

See also: How @RISK Correlates Inputs.

Additional keywords: Binomial distribution, Bernoulli distribution

Last edited: 2013-04-11

5.7. Correlating RiskMakeInput or RiskCompound, Approximately

Applies to:
@RISK 5.x–7.x

The help file says that RiskCompound or RiskMakeInput can't be correlated, but I really need to use correlation in my model. Is there any workaround available?

You can come close, and the process is the same for RiskMakeInput or RiskCompound. In brief: (1) Simulate your RiskCompound or RiskMakeInput to find its percentiles. (2) Turn that set of percentiles into a new RiskCumul distribution. (3) Replace the RiskCompound or RiskMakeInput with the new RiskCumul, which you can correlate.

This technique is workable if the parameters of the RiskMakeInput or RiskCompound don't change from one simulation to another. If they do, this technique isn't practical. (You could, however, use the @RISK XDK to automate the process, in a before-simulation macro.)

Here are details of the procedure. (Please open the attached workbook and run a simulation.)

Step 1. Get a lot of percentiles.

RiskCumul needs the minimum, the maximum, and some percentiles. The attached workbook is already set up to find every half-percentile in cells G8:H208.

  • This example finds every half-percentile: P0.5, P1, P1.5, and so on to P99.5. Depending on your distribution, you might need more percentiles, or fewer percentiles might be enough, or you might need more percentiles in one region of the distribution but fewer percentiles in another region of the distribution.
  • This example runs 100,000 iterations to find those statistics, but if your distribution is highly irregular you might need more.

At the end of this preliminary simulation, the RiskCumul functions no longer show #VALUE, because the percentiles of the RiskMakeInput and RiskCompound are now available. But you can't graph the RiskCumul functions at this stage, because the percentiles weren't available during the preliminary simulation.
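
The attached workbook is already set up, but the formulas are of roughly this shape. (All cell addresses here, and the source cell C4 holding the RiskCompound or RiskMakeInput, are hypothetical.) In H9 and below, a formula like

=RiskPtoX($C$4, G9)

returns the simulated value at the cumulative probability in G9, and

=RiskCumul($H$8, $H$208, H9:H207, G9:G207)

builds the replacement distribution, where H8 and H208 hold the simulated minimum and maximum.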

Step 2. Turn the percentiles into a RiskCumul.

After a simulation, all the percentiles are formulas. But you want to use them as values, without depending on the original RiskMakeInput or RiskCompound.

Highlight the percentiles array with your mouse. Press Ctrl+C for copy, then Alt+E, S, V, Enter for Paste Special: Values. The RiskCumul distribution is now independent of the original RiskCompound or RiskMakeInput.

 Step 3. Replace RiskMakeInput or RiskCompound in your model with the new RiskCumul.

To do this, click the cell containing the RiskCumul, highlight the formula in the formula bar with your mouse, and press Ctrl+C then Esc. Click the cell that you want to replace, and press Ctrl+V then Enter. This copies the formula without changing the receiving cell's formats.

You can now correlate the RiskCumul in the usual way.

Last edited: 2018-07-03

5.8. Correlation of RiskCompound

Applies to: @RISK 5.x–7.x

I have a risk register with columns for cost and schedule risk, and I'm using formulas like these:

=RiskCompound(RiskBinomial(1,D5), RiskTriang(F5,G5,H5), RiskName("Cost"))

=RiskCompound(RiskBinomial(1,D5), RiskTriang(I5,J5,K5), RiskName("Duration"))

The RiskBinomial with n = 1 indicates that the risk will or will not happen, with the probability given in cell D5. But here's my problem. In some iterations, the cost risk is zero but the duration risk is nonzero, or vice versa. Logically I want them to be zero or nonzero together.

The RiskCompound( ) function itself can't be correlated, because @RISK has no way to know in advance how many times it will need to draw values from the severity distribution during the simulation. However, you can correlate elements of the RiskCompound, as follows:

  • Correlate the frequency distributions, or even use the same frequency distribution for both RiskCompound functions. Both methods are illustrated in the attached workbook, Correlating RiskCompound.xlsx. Using the same frequency distribution in both RiskCompound( ) functions is simpler, and it guarantees that you'll never have a zero for one risk while the other is nonzero.

  • Unpack the severity distribution, so that you have multiple copies of the severity distribution, and use Excel's SUM( ) function to add them up. This replaces RiskCompound with a frequency distribution and multiple copies of the severity distribution. Please see the attached workbook Unpacking RiskCompound.xlsx. For an alternative method, correlating a RiskCompound by converting it to a RiskCumul, see Correlating RiskMakeInput or RiskCompound, Approximately.

By the way, @RISK 6.0 and later offer a Bernoulli distribution for events that may or may not happen. In those releases of @RISK, you could replace RiskBinomial(1,D5) with RiskBernoulli(D5).
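
Putting these pieces together, here is a sketch of the shared-frequency approach. (The helper cell C5 is hypothetical; with @RISK 5.x you would keep RiskBinomial(1,D5) instead of RiskBernoulli.) In the helper cell C5, enter =RiskBernoulli(D5). Then:

=RiskCompound(C5, RiskTriang(F5,G5,H5), RiskName("Cost"))

=RiskCompound(C5, RiskTriang(I5,J5,K5), RiskName("Duration"))

Because both RiskCompound functions read the same draw from C5 in every iteration, the two risks are zero or nonzero together.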

For more about RiskCompound( ), see All Articles about RiskCompound.

Additional keywords: Compound distribution

Last edited: 2018-06-28

5.9. Correlating Results of Calculations

Applies to:
@RISK, all releases

How do I correlate the results of a calculation?

There is no way to do that in a simulation. Only @RISK distributions can be correlated.

Results of calculations, whether they are @RISK outputs or not, cannot be correlated. You can use a RiskCorrel function to compute the correlation coefficient that actually occurred in the simulation, but there's no way to impose a desired correlation on them.

See also: How @RISK Correlates Inputs

Last edited: 2018-06-29

5.10. Correlation Matrix Exceeds Excel's Column Limit

Applies to: @RISK 4.x–7.x

I'm using Excel 2003, and I have a correlation matrix whose size exceeds the 256-column limit in Excel. How can I use a correlation matrix of this magnitude with @RISK?

Rebuilding the structure of the matrix lets you use a correlation matrix of this size with @RISK. Below is a description of how to rebuild the matrix. Also, attached are two examples that illustrate the transformation and referencing of the rebuilt matrix.

Note: @RISK 4.5.7 and earlier won't be able to run a simulation on the example models, because the matrices in the examples don't exceed the column limit in Excel. If you try a simulation, these older versions of @RISK will display the error message, "The correlation matrix [matrix reference] is not square." (See this article if you have @RISK 5.0 or newer.) When you have a matrix that does exceed the column limit, you won't get this error message. With those older versions, then, the attached examples are for illustrative purposes only.

To rebuild the matrix, take the following steps:

  1. Break your original matrix up into smaller blocks, moving from left to right. The number of columns in each block should be less than Excel's column limit, and each block should have the same number of rows as the original matrix. Make as many of the blocks the same size as possible.

    For example, if you have a 400 x 400 correlation matrix, you could break it up into two blocks, each with 400 rows and 200 columns.  A 789 x 789 correlation matrix could be broken up into three blocks with 789 rows and 250 columns each, and one block with 789 rows and 39 columns.

  2. Stack the blocks vertically to create a new matrix.

    Move from left to right, placing each block under the one before it. Place the second block under the first block, place the third block under the second block, and so on.  See Example 1 in attached file.

    The last block may have fewer columns than the others.  That last and smallest block should always be placed at the very bottom of the stack. See Example 2 in attached file.

  3. Define a range name for the rectangle that contains the rebuilt matrix.   The matrix cell range reference must be rectangular; it can't have an irregular shape. If you end up with one section of the matrix that has fewer columns than the rest, make the matrix cell range reference rectangular by including empty cells in the reference. See Example 2 in attached file.

    For example, that 789 × 789 correlation matrix is rebuilt as three blocks 789×250 and one block 789×39, so you define your range name for the resulting rectangle of 3156 rows and 250 columns.

  4. Add the RiskCorrmat function to your inputs. In your Excel workbook, add the RiskCorrmat function directly to each cell containing the input functions that you wish to correlate. The syntax for the RiskCorrmat function is:

    RiskCorrmat (matrix cell range, position, instance)
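
For instance, if you named the rebuilt 789 × 789 matrix from step 3 RebuiltMatrix (name hypothetical), the 300th correlated input might contain a formula like the one below, with an illustrative RiskNormal distribution; the position 300 is that input's row and column number in the original square matrix. The instance argument is optional and is omitted here.

=RiskNormal(0, 1, RiskCorrmat(RebuiltMatrix, 300))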

I'm running Excel 2007 or later, which allows 16,384 columns, so I don't need this technique for new models. But I've still got some older models that used this technique. Do newer versions of @RISK still support it?

Yes, this technique works in any type of Excel workbook (XLS, XLSX, XLSM, etc.), in any supported Excel, for any @RISK release 4.x–7.x. (Although Excel 2003 is not supported by @RISK 7.x, files created by Excel 2003 are supported by @RISK 7.x in later versions of Excel.)

Additional keywords: Corrmat property function

Last edited: 2018-03-08

5.11. Changing Correlation Coefficients During a Simulation

Applies to: @RISK 4.x–7.x

How does @RISK respond if the correlation matrix contains cell references or formulas whose values change in the middle of a simulation?

@RISK will apply the new coefficients from that point forward in the simulation.

Last edited: 2015-06-24

5.12. Making Correlations Conditional

Question:
Is it possible to induce correlations between @RISK distribution functions in such a way that the coefficient of correlation depends on the results in other cells?  That is, the level of correlation between variables would depend on the risk outcome in other variables.

Response:
Conditional correlations can be created by first modeling all possible cases of correlation and then using logic to control which of the correlated variables are passed into the model.  Please see the attached example.

This example has three risk variables A, B, and C.  The error terms for these three variables are correlated.  However, the correlation between A, B, and C within a given period (t) will depend on the outcome for variable A in period (t-1).
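
As a sketch of that logic (cell addresses hypothetical): suppose D12 and E12 hold two copies of variable B's error term, one correlated with A under a high-correlation matrix and one under a low-correlation matrix, and C10 holds A's outcome for period (t-1). The value actually passed into the model for period (t) is then selected with an ordinary Excel IF:

=IF(C10 > 0, D12, E12)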

Last edited: 2012-06-29

5.13. Excel Reports a Correlation Different from What I Specified

I specified a correlation coefficient, but when I apply Excel's CORREL( ) function to the simulation data a different correlation is reported. Why?

Briefly, in the correlation matrix in @RISK you supply rank-order correlation coefficients (Spearman), but Excel calculates product-moment correlation coefficients (Pearson). In @RISK 5.5.0 and later, you can use the RiskCorrel( ) function to show the Spearman coefficient after a simulation, and it should be close to what you specified.

For full details, please see How @RISK Correlates Inputs, particularly the last few paragraphs. That page contains a downloadable example to illustrate the issues.

Last edited: 2015-06-23

5.14. How @RISK Computes Rank-Order Correlation

Applies to: @RISK 5.5.0 and later

The RiskCorrel( ) function can return the Pearson product-moment or Spearman rank-order correlation coefficient.  How is the rank-order coefficient computed?

@RISK uses the method in Numerical Recipes by Press, Flannery, Teukolsky, and Vetterling (Cambridge University Press; 1986), pages 488 and following.

Each number in each of the simulated distributions is replaced with its rank within that distribution, as an integer from 1 to N (number of iterations).  If the values in a distribution are all different, as they usually are with continuous distributions, then the rank numbers will all be distinct.  If there are duplicate numbers within the distribution, as often happens with discrete distributions, then "it is conventional to assign to all these 'ties' the mean of the ranks that they would have had if their values had been slightly different.  This [is called the] midrank" (quoting from the reference book above).  For example, with 100 iterations of which half are 0 and half are 1, the 0s all receive midrank (1 + 50)/2 = 25.5 and the 1s receive (51 + 100)/2 = 75.5.

Once the ranks are obtained, the rank-order coefficient is simply the Pearson linear correlation coefficient of the ranks.

The above explains how @RISK computes rank-order correlation after a simulation is complete.  @RISK also uses rank-order correlation within a simulation, when drawing numbers for correlated distributions.  This page gives details: How @RISK Correlates Inputs.

Last edited: 2015-06-23

5.15. Create a Correlation Matrix from Historical Data

Available in Spanish: Crear una matriz de correlación a partir de datos históricos

Overview:

With @RISK, you can use correlation coefficients reported from historical data to simulate distribution functions created from the data. This is an effective way to use past observations to predict future behavior.

For example, you may have data from several years representing mortgage interest rates, mortgages sold, housing starts, and inventory of existing homes for sale. Each of these variables bears a historical relationship to the others. For example, the data may show that a rise in inventory of existing homes on the market is typically accompanied by a decrease in housing construction starts. Mortgage interest rates may exhibit a similar inverse relationship to both housing starts and mortgages sold. This web of historical relationships can be captured in correlation coefficients identified with an Excel correlation matrix. Coefficients from the Excel matrix can then be copy/pasted into an @RISK correlation matrix to control sampling of distributions that represent these variables.

To identify the correlation coefficients of your data:

  1. Make sure that your data are located in adjacent columns or rows in Excel. The Excel correlation analysis tool will not report correlations for non-adjacent selections made with the Ctrl key.
  2. In Excel 2010 or 2007, choose Data » Data Analysis; in Excel 2003 or earlier, it's Tools » Data Analysis. The Data Analysis dialog appears. (If the Data Analysis item does not appear in the Excel menu, you need to install the Analysis ToolPak add-in from your Microsoft Office CD, or place a check mark in the box beside Analysis ToolPak in your Excel Add-Ins dialog.)
  3. Select Correlation from the list box in the Data Analysis dialog and click the OK button. The Correlation dialog appears.
    1. In the area labeled Input, click the appropriate button to indicate whether your data are arranged in columns or rows, and check the box beside Labels in First Row (or Labels in First Column) if applicable.
    2. To indicate the range of data for which you want correlations reported, click the Collapse/Expand Dialog button (small red and blue square) beside the Input Range text box. This shrinks the dialog temporarily so that you can select your data from the Excel spreadsheet.
    3. Select your data. The cell reference for the selected range appears in the Input Range text box.
    4. Click the Collapse/Expand Dialog button again to expand the dialog.
    5. In the area labeled Output options, click one of the option buttons to indicate whether you want the Excel correlation matrix to appear in an Output Range in the same worksheet, a New Worksheet Ply (tab) in the same workbook, or a New Workbook. If you choose Output Range, you can enter a single cell. This cell will be the upper left cell at the beginning of the matrix location. If you choose New Worksheet Ply, you must enter a name for the new worksheet.
    6. Click the OK button. A correlation matrix appears in Excel reporting the correlation coefficients of your data.
  4. Create an @RISK distribution function from each column (or row) of data. (See "Distribution Fitting" in the @RISK manual or on-line help.) To ensure that variable and coefficient positions in the @RISK correlation matrix match positions in the Excel matrix, create the distributions down an Excel row, or across an Excel column, in the same order as the data columns from which the Excel matrix was created. (Alternatively, you can create the distribution functions and then move them so that they are arranged in the order reflected in the Excel matrix.)
  5. Look at the Excel matrix again and note carefully the order of variables and coefficients in that matrix.
  6. Select all cells in the Excel correlation matrix that contain correlation coefficients. Be sure to exclude the variable labels from the selection.
  7. Open the @RISK Model Window. If you followed step 4, the Inputs to be correlated should appear in the same order in which they were arranged in the Excel worksheet.
  8. From the Explorer pane (left-hand side) of the @RISK Model Window, use the Shift key to select the group of Input distributions you just created from your data.
  9. Right click the selection of Inputs. A popup menu appears.
  10. Choose Correlate Distributions from the popup menu. The @RISK Correlation window appears. Verify that the order of variables and coefficients in the @RISK correlation matrix matches the order of variables and coefficients in the Excel matrix.
  11. In the @RISK Correlation window, select all cells that contain correlation coefficients. Be sure to exclude the variable labels from the selection.
  12. From the menu in the @RISK Model Window choose Edit » Paste. Verify that the coefficients appear in their expected positions within the @RISK correlation matrix. Rename the matrix as desired by entering a new name in the Name text box.
  13. Click the Apply button to enter the correlation matrix in @RISK. The name of your @RISK correlation matrix now appears beneath the list of Inputs in the Explorer pane of the @RISK Model Window. In addition, correlation icons appear beside each correlated Input in the grid to the right of the Explorer pane, and the RiskCorrmat function now appears as an argument of each of the correlated distribution functions. In Excel, a new worksheet containing your @RISK correlation matrix has been added to the workbook.

Last edited: 2012-08-08

5.16. How and Why to Switch from RiskDepC to RiskCorrmat

Applies to: @RISK 5.5 and newer

I'm using RiskIndepC and RiskDepC functions to correlate my inputs, but the manual says that's the old way, and I should use RiskCorrmat. Does it really matter?

If you have only one RiskDepC for each RiskIndepC, it really doesn't matter. @RISK creates a 2 × 2 matrix for each DepC/IndepC pair.

But multiple RiskDepC functions associated with any particular RiskIndepC can cause problems. To understand the issue, you need to know that @RISK creates a separate correlation matrix for each RiskIndepC function, using the RiskDepC functions associated with that RiskIndepC. The top row of the matrix is assigned to the variable designated with RiskIndepC, and the other rows to the RiskDepC variables with the same string identifier. The correlations you specify go in the first column of that matrix. The other columns of the matrix represent correlations among the RiskDepC variables, and since the DepC/IndepC scheme gives no way to assign those correlations, @RISK uses zeroes. (See Correlation Matrix Equivalent to RiskIndepC and RiskDepC in the attached workbook.)

Now, why is this bad? First, you may unwittingly create a matrix that is not self-consistent. For example, if you have two RiskDepC functions specifying correlations of 0.9 and 0.5 with your RiskIndepC, it's not mathematically possible for those two RiskDepC variables to be correlated to each other with a coefficient of 0.0, yet that's what @RISK uses, so you get the message that the matrix is not self-consistent. If you let the simulation proceed, @RISK will adjust the matrix to be self-consistent, changing not only the zeroes but the correlations you specified, and the simulated correlations may be very different from what you specified. (See Simulated Correlations if You Click OK in the attached workbook.)

Even if your matrix is self-consistent (for example, two RiskDepC functions specifying 0.8 and 0.5), it is mathematically possible for those two dependent variables to have a correlation coefficient of zero with each other, but it's not very likely. Thus, your model may not be representing the real-world situation as accurately as possible.

What can I do to prevent these problems?

Switch to RiskCorrmat. Starting with @RISK 5.5, you can specify an adjustment weights matrix to tell @RISK to come up with a self-consistent matrix that preserves your desired correlations as far as possible, while assigning valid values to the correlations between the RiskDepC variables. The attached workbook gives an example of how to make the conversion:

  1. Construct the correlation matrix for each of your RiskIndepC variables, as described. Also construct an adjustment weights matrix with 100's in the first column and zeroes elsewhere.
  2. Use a RiskCorrectCorrmat array function to compute a self-consistent adjusted correlation matrix. (You may wish to give the new matrix a name in Excel, for convenience in editing formulas.)
  3. Change RiskIndepC and RiskDepC to RiskCorrmat. RiskCorrmat takes two arguments, the matrix and a variable number. Use 1 for the variable that used to have RiskIndepC, and 2 through n for the rest.
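
For concreteness, steps 2 and 3 might look like this, with hypothetical names OrigMatrix, Weights, and AdjMatrix, and an illustrative RiskNormal distribution. First, array-enter the correction into a blank range of the same size (Ctrl+Shift+Enter) and name that range AdjMatrix:

=RiskCorrectCorrmat(OrigMatrix, Weights)

Then rewrite the inputs, for example:

=RiskNormal(100, 10, RiskCorrmat(AdjMatrix, 1))

for the variable that used to carry RiskIndepC, using 2 through n as the second argument for the variables that carried RiskDepC.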

The end result is that the simulated correlations will match the ones you originally assigned in RiskDepC, as closely as mathematically possible.

If you wish, you can highlight the adjusted matrix and use Copy, then Paste Special: Values, to replace the formulas with numbers. Then the original matrix and the adjustment weights matrix can be deleted.

Last edited: 2017-08-08

5.17. Correlation Coefficient of Output Distributions

Please read the full article in the "Simulation Results" chapter.

5.18. Correlation of Time Series

Applies to: @RISK 6.x/7.x, Industrial Edition

How does time series correlation work? Is it different from correlating regular distributions?

Short answer: You can correlate between different time series, but that correlation may not be visible in the displayed numbers.

Let's clarify the time series process, to help in understanding correlation of time series. You create a time series by fitting real-world data. However, the fitting process usually involves transforms, such as differencing or taking the logarithm. @RISK actually fits the transformed data, not your real-world data: the time series is a series of transformed data, not real-world data.

After fitting a series, @RISK projects the series forward into the future. Each time period's prediction has two components: a formula that uses transformed data from one or more previous time periods, plus a randomly generated noise term or error value. Remember, all of the predictions are done with the underlying time series values, not with real-world data.

After computing the formula and noise term for a time period, @RISK reverses the transforms that were used in the fit, and the result is the displayed numbers that you see in your Excel worksheet. The displayed numbers differ from the underlying time series values whenever there are transforms in that particular time series.

You can correlate two or more time series functions using the @RISK Define Correlations window, or manually using RiskCorrmat( ) property functions, just as you would correlate regular @RISK distribution functions. However, correlation between time series is fundamentally different from the correlation of standard distributions. In correlating regular distributions, all the values for all iterations form one array per distribution, and the correlation is applied to those two arrays. For regular distributions, correlation is an attribute of the whole simulation, not of any particular iteration. By contrast, when two time series functions are correlated, the correlation is reapplied, from scratch, in each iteration. The two arrays that get correlated are the noise terms within the projected time periods of the two time series for that particular iteration. Again, this happens within each iteration, without reference to any other iteration.

This explains why the displayed values won't match your correlation coefficients. The displayed values aren't correlated; only the noise term parts of each underlying time series value are correlated. The formula parts can't be correlated because they are generated by the time series function, so the underlying time series values as a whole are not correlated. What you see is the real-world data, computed by reversing the transforms of that particular time series. But even if there are no transforms, so that the displayed numbers equal the underlying time series values, still the correlated noise term is just part of each displayed number. Correlations are honored but are effectively buried, so you never see the numbers that are actually being correlated.

Can I correlate the successive periods of a given time series?

There is no way to correlate the noise terms between time periods of a given time series. Time series correlation applies only between one time series and another.

And when the time series were produced by batch fit?

When you generate sets of correlated time series with the time series batch fit command, the command constructs a correlation matrix as part of its output. (See Correlated Time Series in Batch Fit.) The generated coefficients apply to the historical data that you supplied. You're free to alter the correlation coefficients, or to add or remove time series in the matrix.

Forward projections use the coefficients in that matrix just like any other correlations for time series. These correlations are applied in the same way as described above: the noise terms of the underlying time series values can be correlated, but those noise terms are not displayed separately.

Can I define a copula for two or more time series?

Copulas cannot be defined for Time Series array functions, only for regular @RISK input distributions.

Last edited: 2016-09-19

5.19. Correlated Time Series in Batch Fit

Applies to: @RISK 6.x/7.x

Can you tell me more about the correlation matrix shown in the fit summary after a time series batch fit? The coefficients in that matrix don't seem to match the results of Excel's CORREL( ) function; where do the numbers come from? How are they used?

The correlation matrix in the fit summary contains the Spearman(*) correlations of the transformed historical data. Those transformed data are not available as numbers. However, you can see a graph of the transformed data by doing a single fit rather than a batch fit, and using the same transformations.

(*)The correlation matrix was Pearson in @RISK 6.0, but was changed to Spearman (rank order) in 6.1. When distributions with very different shapes are correlated, results with Pearson can be unsatisfactory. The change to the distribution-free Spearman rank order correlation was made for this reason, and to be consistent with how other correlations work in @RISK.

That correlation matrix in turn is used to correlate the projected distributions of the time series in each time period by the rank order method. The correlation is not applied to the numbers that you can see in a worksheet; it is the "raw" time series functions that are correlated.

Conceptual summary of a batch fit:

  1. @RISK applies transforms to the historical data and then fits the transformed data. The transforms can be user selected, or if you click Auto Detect then @RISK will determine the transforms to apply.

    @RISK computes the actual Spearman correlations of the transformed historical data. This is the matrix shown in the fit summary.

  2. For projections, @RISK applies the projection functions shown in the worksheet cells. This is conceptually a two-stage process: first, "raw" numbers are developed, projecting from the transformed historical data according to the fit. Then the "raw" numbers are de-transformed by reversing the transforms that were applied to the historical data, and the de-transformed projections become the final output of the functions that you see in the worksheet cells.

    The correlation matrix is applied to the "raw" numbers, not to the final de-transformed projections that appear in the worksheet.

Last edited: 2015-06-23

5.20. Copulas

Applies to: @RISK 7.x

I need something more general than rank-order correlation. Does @RISK support copulas?

Yes, beginning with release 7.0.0, @RISK supports copulas. See "Define Copula Command" in the @RISK user manual or help file.

If you have an earlier version and would like to use copulas, please see Upgrading Palisade Software.

Last edited: 2015-08-13

6. @RISK Simulation: Numerical Results

6.1. Convergence Monitoring in @RISK

Applies to: @RISK for Excel 4.5–7.x

I know that @RISK lets me set criteria for convergence monitoring, but how does it actually do the calculations?

Answer for @RISK 5.x–7.x:

Convergence monitoring means that @RISK keeps simulating until it has stable results for the outputs. To have @RISK monitor convergence, set the number of iterations to Auto, and then go to the Convergence tab of Simulation Settings.

Convergence testing can be done on any combination of the mean, standard deviation, and a specified percentile, for any or all outputs. You specify a convergence tolerance such as 3%, and a confidence level such as 95%. The simulation stops when there is a 95% chance that the mean of the tested output is within 3% of its true value. Analogous calculations are done if you monitor the standard deviation or a percentile.

If you specify "Perform Tests on Simulated" for two or three items, @RISK considers convergence to have occurred only if all the selected measures meet the convergence test.

The setting "Calculate every ___ iterations" says how often in a simulation @RISK pauses to check whether convergence has occurred, but it has no effect on the stringency of the test.  It's simply a trade-off for efficiency: if you check convergence more often, you may converge in fewer iterations but in more time because convergence testing itself imposes some overhead. If you test convergence less often, it may take more iterations but less time for a similar reason.

With convergence testing selected, the Results Summary window will open automatically in a simulation to show progress toward convergence. The status column of that window shows OK for outputs that have converged.  But, typically, some outputs converge faster than others. If a given output has not converged, a number from 1 to 99 is shown in the status column.  That is @RISK's estimated percentage of the number of iterations done so far over the number that would be needed for this output to converge.  Example: if the number is 23, and you've done 10,000 iterations, then @RISK estimates that a total of about 10,000/23% = 43,500 iterations would be required for convergence.

Answer for @RISK 4.5:

Every N iterations (for example every 100 iterations, where N is user selectable), @RISK calculates these three statistics:

  • The relative change in the mean of the monitored output, which is
    (mean from previous test made N iterations ago - current mean) / max(abs(previous mean), abs(current mean))
  • The relative change in standard deviation.
  • The relative change in the average percentile. For this @RISK calculates the relative change in the 5th percentile, 10th, 15th, ..., 90th, 95th, then takes the average of these relative changes.

If all three of these statistics are less than or equal to the user-specified threshold, @RISK marks the simulation as converged for this test. If the simulation is marked as converged for two tests in a row, @RISK considers it converged and stops.

Last edited: 2017-06-30

6.2. More Than 50,000 Iterations to Converge

Applies to: @RISK 6.1.1 and newer

I have set up convergence, with iterations set to Auto, but @RISK stops at 50,000 iterations although not all of my outputs have converged.

@RISK 8.x

In @RISK 8, you can choose the maximum number of iterations for Auto Stop directly in the interface. The option is enabled when you select the "Auto" setting in the number-of-iterations field; the Simulation Settings dialog then shows a field labeled "Maximum of" where you can define the maximum number of iterations.

Details can be found in the online help.

@RISK 6.1.1 and 7.x

By default, @RISK stops at 50,000 iterations rather than keep iterating indefinitely.  However, some models do eventually converge, but only after more than 50,000 iterations.  You could set a higher number of iterations explicitly instead of Auto, but then you would lose convergence monitoring.

Beginning with @RISK 6.1.1, you can change that 50,000-iteration limit for convergence monitoring.  Create a workbook-level name RiskMaxItersForAutoStop with a value such as =100000.  (The leading = sign is required.)  With iterations set to Auto, @RISK will stop when outputs converge, or when the RiskMaxItersForAutoStop number of iterations is reached, whichever happens first. 

To create a workbook-level name in Excel 2007, Excel 2010, and Excel 2013, click Formulas » Name Manager » New.  Enter the name RiskMaxItersForAutoStop.  In the Refers-to box, enter your desired maximum number of iterations for convergence monitoring, preceded by the = sign.  Click OK and then Close.

To create a workbook-level name in Excel 2003, click Insert » Name » Define.  In the box at the top, enter the name RiskMaxItersForAutoStop.  In the Refers-to box, enter your desired maximum number of iterations for convergence monitoring, preceded by the = sign.  Click Add and then OK.
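
If you prefer to create the name programmatically, a short macro does the same thing as either dialog. This is a sketch; substitute your own maximum value:

    Sub SetMaxItersForAutoStop()
        ' Create (or overwrite) the workbook-level name that raises
        ' @RISK's auto-stop limit, here to 100,000 iterations.
        ActiveWorkbook.Names.Add Name:="RiskMaxItersForAutoStop", RefersTo:="=100000"
    End Sub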

During simulation, the progress window at the lower left of your screen shows a percent complete.  If iterations is Auto, @RISK doesn't know how many iterations will be needed until convergence has occurred, so it computes the percent complete on a basis of 100% = 50,000 iterations.  If you have set RiskMaxItersForAutoStop to a larger number, and more than 50,000 iterations are needed, the percent complete will go above 100%.

Last edited: 2020-06-01

6.3. Convergence by Testing Percentiles

Why do different percentiles take different numbers of iterations to converge? And why do percentiles sometimes converge more quickly than the mean, even though the mean should be more stable?

There can definitely be some surprises when you use percentiles as your criterion for convergence, and you can also get very different behavior from different distributions.

First, an explanation of how @RISK tests for convergence.  In Simulation Settings, on the Convergence tab, you can specify a convergence tolerance and a confidence level, or use the default settings of 3% tolerance and 95% confidence.  Setting 3% tolerance at 95% confidence means that @RISK keeps iterating until there is better than a 95% probability that the true percentile of the distribution is within ±3% of the corresponding percentile of the simulation data accumulated so far.  (See also: Convergence Monitoring in @RISK.)

Example: You're testing convergence on P99 (the 99th percentile).  N iterations into the simulation, the 99th percentile of those N iterations is 3872.  A 3% tolerance is 3% of 3872, about 116.  @RISK computes the chance that the true P99 of the population is within 3872±116.  If that chance is better than 95%, @RISK considers that P99 has converged.  If that chance is less than 95%, @RISK uses the sample P99 (from the N iterations so far) to estimate how many iterations will be needed to get that chance above 95%.  In the Status column of the Results Summary window, @RISK displays the percentage of the necessary iterations that it has performed so far.

Technical details: @RISK computes the probabilities by using the theory in Distribution-Free Confidence Intervals for Percentiles (accessed 2020-07-28).  The article gives an exact computation using the binomial distribution and an approximate calculation using the normal distribution; @RISK uses the binomial calculation.
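
To see the flavor of that calculation, here is a minimal VBA sketch (our illustration of the cited theory, not @RISK's internal code; requires Excel 2010 or later for Binom_Dist). It returns the probability that the true p-th quantile of the population lies between the lo-th and hi-th smallest of the N iteration values:

    Function PercentileCoverage(N As Long, p As Double, _
            lo As Long, hi As Long) As Double
        ' Distribution-free coverage probability from order statistics,
        ' assuming 1 <= lo < hi <= N:
        ' P( X(lo) <= true p-th quantile < X(hi) )
        '   = sum for k = lo to hi-1 of C(N,k) * p^k * (1-p)^(N-k)
        With Application.WorksheetFunction
            PercentileCoverage = .Binom_Dist(hi - 1, N, p, True) _
                               - .Binom_Dist(lo - 1, N, p, True)
        End With
    End Function

@RISK works this relationship in the other direction: given the desired confidence and tolerance, it estimates how many iterations N are needed.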

Now, an explanation of anomalies, including those mentioned above.

  • P1 (first percentile) takes many more iterations to converge than P99, or vice versa.

    At first thought, you might expect P1 and P99 to converge with the same number of iterations, P5 and P95 with the same, P10 and P90 with the same, and so on.  But it usually does not work out that way.  Let's take just the first and 99th percentiles as an example.

    The tolerance for declaring convergence complete is expressed as a percentage of the target. If the values are all positive, then the first percentile is a smaller number than the 99th percentile, and therefore the tolerance for P1 is a smaller number than the same percentage tolerance for P99. The difference is greater if the distribution has a wide range, or if the low end of the distribution is at or near zero.

    For an extreme example, consider a uniform continuous distribution from 0 to 100. P1 is about 1, close to zero, while P99 is about 99, so a 3% tolerance for P1 is a very small number and will take about 406,000 iterations to achieve. By contrast, a 3% tolerance for P99 is relatively much larger and is achieved in only 30 iterations.

    On the other hand, if the values are all negative then P99 will have a smaller magnitude than P1, and will therefore converge more slowly.

  • Convergence happens too quickly and is very poor.

    This occurs when the distribution has a narrow range, so that there is little difference between one percentile and another.

    Consider the uniform continuous distribution from 10,000 to 10,100. Every percentile is in the neighborhood of 10,050, so a 3% tolerance is about 10,050 ± 302, or 9,748 to 10,352. That range is actually larger than the data range.  Therefore, the very first sample value for every percentile will be within that range, so "convergence" of any percentile happens on the first iteration, but that convergence is meaningless.

  • P50 takes more iterations to converge than P1 or P99, and also more than the mean.

    This is expected for many distributions. The percentile convergence is based on a binomial distribution, with p = the percentile being tested. The binomial distribution is fairly broad for p = 50%, and so the margins of error are greater and convergence takes more iterations. But as p gets closer to 0 or to 100%, the distribution gets narrower, margins of error get smaller, and convergence happens in fewer iterations.

    As for convergence of a mean versus convergence of the 50th percentile, percentiles use the binomial distribution, but the confidence interval for the mean uses Student's t. Margins of error are usually narrower for Student's t than for the binomial, so convergence of the mean happens faster than convergence of the 50th percentile, even for a symmetric distribution.

Advice: In @RISK's simulation settings you have to set convergence tolerance as a percentage of the tested statistic (mean, standard deviation, or percentile), but the appropriate percentage is not always obvious.  To help you make the decision, run a simulation with a few iterations, say 100, just to get a sense of what the output distribution looks like. Then, if you expect the percentile value to be close to zero, specify a higher tolerance or choose a different statistic. Also check your tolerance against the expected range of the output, and if necessary specify a smaller tolerance.

Last edited: 2020-07-28

6.4. Static Value of Output Differs from Simulated Mean

Applies to:
@RISK, all releases
@RISKOptimizer, all releases
@RISK Developer's Kit (RDK), all releases

In Simulation Settings » When a Simulation is Not Running, Distributions Return, I selected Expected Value or True EV. When a simulation is not running, the expected values of my inputs and outputs are visible in my model. I run a simulation, and the simulated means for my inputs closely match the expected values, but the simulated means of my outputs are very different from the expected values. What is wrong?

This is normal behavior.

When a simulation is not running, both inputs and outputs display their static values. For inputs, the static values are the means (unless you changed that — see Setting the "Return Value" of a Distribution). For outputs, the static values are not the means, but are values calculated from the displayed values of the inputs. In general, the mean of a calculated result does not equal the value you would calculate from the mean inputs, and the same is true for percentiles and mode.

If you want to display the means of outputs in your worksheet, use the RiskMean function. For percentiles, including the median (the 50th percentile), use RiskPtoX. Please see Placing Simulation Statistics in a Worksheet. For an example, please download the attached workbook.

In the example, column C is the inputs, and column D is the outputs created from them. For this example the computation is just a square or reciprocal, but the same principle holds for the calculations in your model. Column E is the simulated mean of each output. As expected, the simulated means are quite different from the static values displayed in column D. The principle here is "the mean of the squares is different from the square of the mean."  But the simulated means are quite close to the theoretical means I calculated for those outputs in column F.

Row 4 is a standard normal distribution (mean=0) and its square (mean=1, not 0). Row 5 is a discrete uniform with two values 1 and 9 (mean=5) and its reciprocal (mean=5/9, not 1/5). Row 6 is a continuous uniform (mean=1/2) with its square (mean=1/3, not 1/4).

If you run a simulation, you can see easily that the distributions of the outputs have different shapes from the distributions of the inputs. This will always be true, to a greater or lesser extent, for a non-linear model. Since the vast majority of models are non-linear, you should expect to see the mean values of outputs be different from the static values computed from the mean inputs.

Last edited: 2017-09-07

6.5. Static Value of Input Differs from Simulated Mean

Applies to:
@RISK, all releases
RISKOptimizer, all releases
@RISK Developer's Kit (RDK), all releases

Why is the expected value displayed in the spreadsheet cell that contains an @RISK input function different from the mean of the simulation results for that input?

The simulated mean of a distribution will typically be close to the theoretical mean, but not exactly the same. This is normal statistical behavior. And it's not just the mean, but the standard deviation, median, mode, percentiles, and all other statistics.

To illustrate, set up a simulation in the following way:

  1. Start with a blank workbook.

  2. In cell A1, define an @RISK input as a normal distribution with mean of 10 and standard deviation of 1.

  3. Click the Simulation Settings icon from the @RISK toolbar. On the Iterations tab, set the number of iterations to 10,000. On the Sampling tab, select Latin Hypercube for the sampling type and a fixed random generator seed of 1.

  4. Run the simulation, click the cell, and click Browse Results in the @RISK toolbar. (In @RISK 4.x, open the @RISK–Results window.)

Statistical theory tells us that the expected distribution for the mean of the input is a normal distribution with a mean of 10 and a standard deviation (often called the standard error) of 1/√10000 = 0.01. Although the exact results will vary by version of @RISK, you should find that the mean of the simulated input is well within the interval 9.99 to 10.01, which is within one standard error of the theoretical mean. (The default Latin Hypercube sampling type does considerably better than classic Monte Carlo sampling.)

But the displayed value is not even close to the simulated mean of the input. How can that be?

Most likely this is a setting in your model. In the Simulation Settings dialog, look at "When a Simulation is Not Running, Distributions Return" and verify that it is set to Static Values, and set "Where RiskStatic is Not Defined, Use" to True Expected Values.

As the dialog box implies, a RiskStatic property function in any distribution will make that distribution display the RiskStatic value instead of the statistic you select in Static Values.

Is there any way to get the exact mean of the distribution in my worksheet?

We'd say "theoretical mean" rather than exact mean, and that's the key to the answer. Where RiskMean returns the mean of the last simulation that was run, RiskTheoMean returns the mean of the theoretical distribution. There's also RiskTheoStdDev, RiskTheoPercentile, and so forth. In @RISK, click Insert Function » Statistic Functions » Theoretical to see all of them.

Last edited: 2017-10-03

6.6. Placing Simulation Statistics in a Worksheet

Applies to:
@RISK For Excel 4.x–7.x
RISKOptimizer 1.0, 5.x

Is there a way to have result statistics from an @RISK simulation placed in a specific location of a spreadsheet automatically at the end of a simulation?

Statistics for any cell—including inputs (@RISK distributions), @RISK outputs, and plain Excel formulas—can be reported directly in the spreadsheet at the end of simulation by using the statistic functions provided with @RISK. A full list of the statistic functions is available in the @RISK menu: Insert Function » Statistic Functions » Simulation Result.

As an example, the formula =RiskMean(A1)  will return the simulated mean of cell A1 across all iterations in the simulation. For an @RISK output or an Excel formula, that's the only option. For an @RISK input distribution, you have a choice: =RiskMean(A1) to return the mean of random values drawn for that particular simulation, or =RiskTheoMean(A1) to return the theoretical mean of the distribution. RiskMean values will vary slightly from one simulation to the next, and are not available till you have run a simulation; RiskTheoMean is based on the theoretical distribution and will always return the same result, even if you have not run a simulation. See Statistics for an Input Distribution.

In @RISK 5.5 and later, by default the statistic functions are not calculated until after the last iteration of a simulation, though you can change this in Simulation Settings. In @RISK 5.0 and earlier, the statistic functions all calculate in "real time", meaning that @RISK recalculates the statistic at each iteration based on the number of samples that have been drawn. For more about the timing of calculating the statistic functions, please see "No values to graph" Message / All Errors in Simulation Data.

Additional keywords: Mean, simulated mean, percentile, simulated percentile, statistics functions

Last edited: 2017-06-30

6.7. Simulation Statistics for Output Ranges

Applies to: @RISK 5.x–7.x

Many of my @RISK outputs are in output ranges, where a group of related outputs share a name. The formulas look like this:

=RiskOutput(,"Profit by Month",1) + formula

=RiskOutput(,"Profit by Month",2) + formula

=RiskOutput(,"Profit by Month",3) + formula

and so on. How can I address these outputs by name in a statistic function like RiskMean( ) or RiskPercentile( )?

Outputs and inputs referenced by name in statistic functions must have unique names, according to the @RISK manual. When the same name is used for multiple outputs, including a range of outputs, statistic functions need to reference them by cell reference rather than by name. Example: =RiskMean(A15).

Last edited: 2015-06-26

6.8. What Was My Random Number Seed?

Applies to: @RISK 5.x–8.x

I've run a simulation, but it was with the random number seed set to "Choose Randomly". How can I find out what seed @RISK actually used, so that I can reproduce the simulation later?

If Quick Reports were run automatically at the end of the simulation, look at the Simulation Summary Information block on any of them. The Random Seed item gives you the random number seed that was actually used. If you haven't run Quick Reports, see Get a Quick Report for Just One Output, unless of course you want Quick Reports for all outputs.

In Version 8, the Quick Report is not available. The RiskSimulationInfo function can be used to determine the seed number instead. You can find more about that function here: https://help.palisade.com/v8_1/en/@RISK/Function/6-Miscellaneous/RiskSimulationInfo.htm

If you don't have any outputs, you can use a snippet of Visual Basic code without being a programmer:

  1. Press Alt-F11 to open the Visual Basic Editor, then F7 to open the code window.

  2. Paste this VBA code into the window:

    Sub StoreSeed()
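        ' Copy the seed actually used by the last simulation into
        ' Simulation Settings, as a fixed seed for future runs.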
        Risk.Simulation.Settings.RandomSeed = _
            Risk.Simulation.Results.RandomNumberSeed
    End Sub
  3. Please see Setting References in Visual Basic for the necessary references and how to set them.

  4. Click somewhere in the middle of the StoreSeed routine, and press F5 to run the code. This will change @RISK's Simulation Settings, on the Sampling tab, to a fixed random seed, and it will insert the actual seed from the latest simulation as the seed for future simulations.

  5. If you leave the code in place, Excel 2007 or above will no longer store the workbook as an .XLSX but instead will use .XLSM format. This may present you or anyone who opens your workbook with a macro security prompt. To prevent that, you can delete the pasted code before you save the workbook. The fixed random seed will remain in Simulation Settings.

  6. Save the workbook to save the fixed random seed.

See also: Random Number Generation, Seed Values, and Reproducibility

Last edited: 2020-12-03

6.9. Mode of Continuous Data

Applies to: @RISK, all versions

I'm displaying the mode of my data, and it seems to be very far from the tallest bar in the histogram. What is wrong? How does @RISK compute the mode of a continuous distribution?

The traditional definition of the mode of discrete data is the most frequently occurring value of the variable. An analogous definition works well for most theoretical continuous distributions: you have a smooth probability density curve pdf(x), and the mode is simply the value of x where the pdf(x) is highest. But for continuous data in simulation results, it's unusual to have identical data points, and therefore a new definition is needed.

Different authorities use different definitions and therefore find different modes; the way you bin the data can also change which value you call the mode. One way to come up with a mode is to divide the n simulated data points into k bins, each with n/k consecutive data points, and then look at the widths of the bins. The narrowest bin is the one where the points are clustered closest together, which means that the probability density is greatest in that bin, so the mode must be there.

@RISK uses that method. It divides the simulated data into k = 100 bins unless there are fewer than 300 data points in the simulation; in that case @RISK uses k = n/3 bins, so that a bin never has fewer than three points. @RISK then finds the narrowest bin, where the points are most closely clustered together.  Finally, it computes the mean value of the n/k points in that bin, and reports that value as the mode. (This information is current as of 2015-05-01, but may change in a future release of @RISK.)
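
If you want to experiment with the method outside of @RISK, here is a minimal VBA sketch of the same idea. It is not Palisade's internal code; it assumes the simulated values are already sorted in ascending order and, for simplicity, that n is an exact multiple of k:

    Function ApproxMode(data() As Double) As Double
        ' Equal-count binning: k bins of m = n/k consecutive sorted points each.
        Dim n As Long, k As Long, m As Long
        Dim i As Long, j As Long
        Dim w As Double, bestWidth As Double, bestStart As Long
        Dim total As Double
        n = UBound(data) - LBound(data) + 1
        If n >= 300 Then k = 100 Else k = n \ 3    ' never fewer than 3 points per bin
        m = n \ k
        bestWidth = -1
        For i = LBound(data) To LBound(data) + (k - 1) * m Step m
            w = data(i + m - 1) - data(i)          ' width of this equal-count bin
            If bestWidth < 0 Or w < bestWidth Then
                bestWidth = w                      ' narrowest bin so far
                bestStart = i
            End If
        Next i
        total = 0
        For j = bestStart To bestStart + m - 1
            total = total + data(j)
        Next j
        ApproxMode = total / m                     ' mean of the points in the narrowest bin
    End Function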

The binning for purposes of finding the mode is almost always different from the binning for a histogram of the data in Browse Results or other graphs. Even if you specify a histogram of 100 bins, they will still be different from the histogram bins. When @RISK is finding a mode, the bins all contain the same number of points and have different widths. On the histogram, the bins (bars) all have the same width and contain different numbers of points. This is how the mode can be far from the tallest bar in the histogram.

A simple example is attached. That example does show how changing the graph settings can reveal the mode, but that technique won't work on every distribution.

By the way, when you do distribution fitting, @RISK uses this same computation. That's how it gets the approximate mode of your sample data that it shows in the fit results window, for comparison with the fitted distribution. This computation is not used in any way in the process of fitting distributions to your data; it's purely for display of the comparison.

Last edited: 2015-05-01

6.10. Conditional Tail Expectation or Conditional VaR

Applies to: @RISK 5.x–7.x

How can I use @RISK to calculate the conditional tail expectation (conditional value at risk, CVaR) of a simulated output?

Beginning with @RISK 5.5, you can compute statistics for part of a distribution by value or by percentile; @RISK 5.0 can compute statistics for a distribution delimited by value only. To compute your statistics, you insert the property function RiskTruncate( ) or RiskTruncateP( ) in an @RISK statistic function such as RiskMean( ).

For example, suppose you have a simulated output in cell C11, and you want the conditional value at risk for the left-hand 5% tail. That is equivalent to the mean value of just the lowest 5% of the distribution, and you compute it like this:

@RISK 5.5 and later: =RiskMean(C11, RiskTruncateP( , 0.05) )
@RISK 5.0: =RiskMean(C11, RiskTruncate( , RiskPtoX(C11,0.05) ) )

You can also compute expected value for the upper tail. For example, the upper 5% is above the 95th percentile, so you set the 95th percentile as a lower limit and compute the expected value of the 5% right-hand tail like this:

@RISK 5.5 and later: =RiskMean(C11, RiskTruncateP(0.95, ) )
@RISK 5.0: =RiskMean(C11, RiskTruncate( RiskPtoX(C11,0.95), ) )

The value will be approximate if you're calculating conditional tail expectation on a theoretical distribution. See About accuracy of theoretical statistics in Statistics for Just Part of a Distribution.

In @RISK help or the manual, see the section "Calculating Statistics on a Subset of a Distribution".

Last edited: 2017-09-01

6.11. Probability of an Interval

Applies to: @RISK 5.x–7.x

From a simulated output or input, I'd like to find the probability of the result occurring in an interval, between a lower and an upper limit.

An easy way is to click the output, click Browse Results, and adjust the sliders to the limits you're interested in. The probability then shows in the horizontal bar between the two sliders.

I really wanted this as a worksheet function.

No problem! The probability of the interval is the cumulative probability of the upper limit, minus the cumulative probability of the lower limit. Assuming your output is in cell DF1, and the limits are 100 and 200, the formula is:

=RiskXtoP(DF1,200)-RiskXtoP(DF1,100)

If the limits are in cells DL1 and DH1, then the formula is:

=RiskXtoP(DF1,DH1)-RiskXtoP(DF1,DL1)

Last edited: 2016-05-04

6.12. Semivariance, Semideviation, Mean Absolute Deviation

Applies to: @RISK for Excel 5.x–7.x

Can @RISK compute upper and lower semivariance, semideviation, and mean absolute deviation?

Yes, beginning with @RISK 7.5 you can use @RISK statistic functions to compute these quantities automatically. In all statistic functions, datasource can be the name of an input or output, in quotes, or a cell reference.

  • Lower semivariance: RiskSemiVariance(datasource) or RiskSemiVariance(datasource, TRUE, simnumber)
  • Upper semivariance: RiskSemiVariance(datasource, FALSE) or RiskSemiVariance(datasource, FALSE, simnumber)
    (Upper semivariance plus lower semivariance equals variance.)
  • Lower semideviation: RiskSemiStdDev(datasource) or RiskSemiStdDev(datasource, TRUE, simnumber)
  • Upper semideviation: RiskSemiStdDev(datasource, FALSE) or RiskSemiStdDev(datasource, FALSE, simnumber)
    (Lower and upper semideviation are square roots of lower and upper semivariance. The sum of lower and upper semideviations doesn't equal the standard deviation.)
  • Mean absolute deviation: RiskMeanAbsDev(datasource) or RiskMeanAbsDev(datasource, simnumber)
    (Lower and upper mean absolute deviation are each half of the mean absolute deviation.)

But I have an earlier version of @RISK, and I'm required to use this version. Is there a workaround?

In earlier versions, you can do it yourself with user-defined functions in VBA, or by manipulating the iteration data with RiskData( ) functions. The attached file illustrates both approaches, and shows that they have the same result. (To call on @RISK in user-defined functions in VBA, you need @RISK Industrial or Professional. @RISK Standard Edition does not support automating @RISK.)

Lower and upper semivariance are computed in a similar way to variance: take the sum of squares of differences from the mean, and divide by number of iterations minus 1. (The minus 1 is necessary to create an unbiased estimate of variance, because the simulation is a sample, not the whole population.) However, in computing lower semivariance, use 0 in place of squared deviations above the mean; and in computing upper semivariance, use 0 in place of squared deviations below the mean. Equations might make this clearer:

SV_lower(X) = [1/(n−1)] × Σ IF(X <= Xbar) × (X − Xbar)²
SV_upper(X) = [1/(n−1)] × Σ IF(X >= Xbar) × (X − Xbar)²
SV_upper(X) = VAR(X) − SV_lower(X)

where n is the number of iterations, and IF(condition) has the value 1 if condition is true and 0 if it is false.

Notice that values on the "wrong" side of the mean are not simply omitted; rather, they are replaced by zeroes, so the denominator of the semivariance is the same as the denominator of the variance. Though some authors replace n with the number of values lower (higher) than the mean for lower (upper) semivariance, this article follows Estrada, Rohatgi, and others. Thus the sum of lower and upper semivariance is the variance.

Lower and upper semideviation are found by taking the square roots of lower and upper semivariance. The sum of lower and upper semideviations is of course different from the standard deviation of the full sample.

Lower and upper mean absolute deviation (MAD) are found by taking the sum of the absolute values of deviations from the mean, divided by the number of iterations. However, in computing lower MAD, use 0 in place of deviations above the mean; and in computing upper MAD, use 0 in place of deviations below the mean. Lower and upper mean absolute deviation are numerically equal for any simulated data set, and each is equal to half of the plain mean absolute deviation.
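
To make the recipe concrete, here is a minimal sketch of a lower-semivariance UDF of the kind described above; it is not the attached file's exact code. It assumes the iteration values have already been placed in a worksheet range, for example with RiskData( ):

    Function SemiVarLower(data As Range) As Double
        ' Sum of squared deviations at or below the mean, divided by n - 1,
        ' so the denominator matches that of the full variance.
        Dim n As Long, i As Long
        Dim xbar As Double, ss As Double
        n = data.Cells.Count
        xbar = Application.WorksheetFunction.Average(data)
        For i = 1 To n
            If data.Cells(i).Value <= xbar Then
                ss = ss + (data.Cells(i).Value - xbar) ^ 2
            End If
        Next i
        SemiVarLower = ss / (n - 1)
    End Function

Upper semivariance then follows from the identity above, as VAR(X) minus this value.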

Additional keywords: Semi-variance, Semi-deviation

Last edited: 2017-06-30

6.13. Correlation Coefficient of Output Distributions

Applies to:
@RISK for Excel, releases 5.x–7.x

I want to find the simulated correlation coefficient between two cells in my simulation. Is there any way to find this value?

In @RISK 5.5 and newer, the RiskCorrel( ) function can compute the correlation for you, either the Pearson correlation or the Spearman (rank-order) correlation. To find the simulated Pearson correlation, enter this function in a worksheet cell:

=RiskCorrel(cell1, cell2, 1)

To find the simulated Spearman (rank-order) correlation, use:

=RiskCorrel(cell1, cell2, 2)

The cell will show #VALUE! initially, replaced with the coefficient when you run a simulation.

When you specify your desired correlation for two inputs, as opposed to computing the actual correlation in a simulation, @RISK applies Spearman correlation of those inputs. See Excel Reports a Correlation Different from What I Specified for more about this.

How was this done in older releases of @RISK?

These methods continue to work in newer releases of @RISK, though the RiskCorrel( ) function is simpler to use.

In @RISK 5.0, you can display the correlation coefficient fairly easily by making a scatter plot.

In the Results Summary window, select one of the outputs and click the icon for "Create Scatter Plot" at the bottom of the dialog box. Then drag the other output from the Results Summary window onto the scatter plot. You get a scatter plot, the mean and standard deviation of both outputs, and the Pearson correlation coefficient. (Y is the first output you selected, and X is the second output you selected.)

You can even drag additional outputs to the Scatter Plot window, and they will be plotted as additional X's against the same Y, with their correlation coefficients displayed.

In @RISK 5.0, if you want the correlation coefficient to appear automatically in an Excel sheet after simulation, use the RiskData( ) function to insert data in a worksheet during simulation and use Excel's CORREL( ) function. Please see Placing Iteration Data in Worksheet with RiskData( ) and Sum of All Iteration Values for two examples using RiskData( ).

If you have a large number of iterations, RiskData( ) may slow down the simulation for some models. If this is an issue for your particular model, you can remove the RiskData( ) functions and use a macro to save the simulation data. Please see Exporting Information During Simulation for an example.

Last edited: 2018-08-29

6.14. How @RISK Calculates Percentiles

Applies to:
@RISK Excel 3.5 through 7.x
@RISK for Project 3.x, 4.x
RISKOptimizer 1.x, 5.x
@RISK Developer's Kit (RDK) 4.x
RISKOptimizer Developer's Kit (RODK) 4.x

How does @RISK calculate cumulative percentiles for simulation data?

Depending on the nature of the simulation data, @RISK will use one of two methods for calculating cumulative percentiles.

When the simulation data appear to be discrete (samples are repeated in the data), every returned percentile is chosen from the simulation data. Specifically, the software computes k = the smallest whole number greater than or equal to (your percentile target) times (number of iterations), and then the answer is the k-th smallest data value from the simulation. For a simplified example, suppose you request the 68th percentile from a simulation where there were ten iterations and the data points were {4, 7, 9, 13, 15, 19, 21, 25, 28, 30}. k = roundup(.68*10) = 7, so the 68th percentile is the 7th-lowest number, which is 21.
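
As a hedged illustration of that rule (again, not Palisade's internal code), here is a VBA sketch that assumes the iteration values are already sorted in ascending order:

    Function DiscretePercentile(sorted() As Double, p As Double) As Double
        ' k = smallest whole number >= p * n; the answer is the k-th smallest value.
        Dim n As Long, k As Long
        n = UBound(sorted) - LBound(sorted) + 1
        k = -Int(-p * n)          ' VBA idiom for rounding p * n up
        If k < 1 Then k = 1       ' guard for p = 0
        DiscretePercentile = sorted(LBound(sorted) + k - 1)
    End Function

With the ten data points from the example, DiscretePercentile returns 21 for p = 0.68.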

When the simulation data appear to be continuous (none of the samples are repeated in the data), @RISK will use linear interpolation to calculate percentiles where necessary. That is, when the desired percentile does not correspond exactly with a value in the data, @RISK will use linear interpolation between points in the data set to derive the percentile. See the attached spreadsheet demonstrating the linear interpolation.

Does @RISK's calculation correspond to the Excel function PERCENTILE.INC( ) or PERCENTILE.EXC( )?

You can specify any number from 0 to 1 inclusive as the second argument of RiskPtoX( ) and RiskTheoPtoX( ), so to that extent they are analogous to PERCENTILE.INC( ). However, it may be necessary to interpolate to find the value of a given percentile, and Excel and @RISK may not return the same values, because their interpolation methods differ. (The literature shows numerous methods of interpolation.) The larger the number of iterations, the smaller any difference between the two should be.

RiskPtoX(A1,0) and RiskPtoX(A1,1) equal the smallest and largest iteration values of cell A1 in the latest simulation. RiskTheoPtoX(A1,0) returns the theoretical minimum of the distribution in A1 if it has a lower bound, or #VALUE! if there's no lower bound. RiskTheoPtoX(A1,1) returns the theoretical maximum, or #VALUE! if the distribution has no upper bound.

Last edited: 2017-09-27

6.15. Which Iteration Produced a Given Percentile?

Applies to: @RISK for Excel, all releases

I know how to find the value of a percentile, such as the 99th percentile. But how can I find which iteration produced the 99th percentile of a given input or output? I want to look at that whole scenario.

The easiest way is to open the Simulation Data window (x-subscript-i icon in the Results section of the @RISK ribbon), highlight the column for that input or output, click the sort icon at the bottom of the window, and sort in descending order. Then count down the appropriate number of iterations and you have the one you need.

For example, if your simulation runs 1000 iterations, then your 99th percentile would be the 11th highest one, which is the highest of the bottom 990 iterations.

If this is a frequent need, you could automate the process with a RiskData( ) array function and a VBA macro. Please see Placing Iteration Data in Worksheet with RiskData( ).

Last edited: 2017-09-27

6.16. Placing Iteration Data in Worksheet with RiskData( )

Applies to:
@RISK for Excel 4.x–7.x
RISKOptimizer 1.x, 5.x

The manual and the help file say that I can get data from a range of iterations by entering RiskData( ) as an array formula. What does that mean, and can you give an example?

If you want data from all iterations of all inputs and outputs, you can use the Simulation Data window (x-subscript-i icon) or select the Simulation Data Excel report. If you want only selected variables or iterations, use the RiskData( ) worksheet function.

You cannot fill an array with RiskData( ) by typing the formula in one cell and dragging, the way you usually would. Instead, follow this procedure:

  1. Select the row or column array where you want to place the input or output value for each iteration.

  2. In the formula bar, type your formula, which involves RiskData( ). For instance, to capture the first 100 iterations of the input called The_Input, type

    =RiskData("The_Input",1)

    To capture iterations 151 through 250 of cell A4, type

    =RiskData(A4,151)

    An optional third argument to RiskData( ) lets you specify the simulation number, if you're running RISKOptimizer or multiple simulations in @RISK.

  3. Instead of Enter, press Ctrl-Shift-Enter to create an array formula for this array. Though Excel puts curly braces { } around the formula, you can't create an array formula by typing curly braces yourself.

  4. If you haven't run a simulation, you'll see lots of #N/A appear in the array. These will change to numbers when you run your simulation. To see this happen, open the attached example and run a simulation.

See also: In @RISK 6.0 and later, if you have just an occasional need, you can get all the iterations for one input or output in the Browse Results window. See All Iterations of One Input or Output.

Last edited: 2015-06-30

6.17. Exporting Information During Simulation

Applies to:
@RISK for Excel 4.x–7.x
RISKOptimizer 1.x, 5.x

While the simulation is running, how can I store intermediate results outside the Excel workbook?

All editions of @RISK offer the ability to run a user-written macro after every iteration. The Professional and Industrial editions of @RISK include the Excel Developer Kit (XDK), a complete library of commands and functions that let you control every aspect of @RISK in your spreadsheet.

You can export data to a text file during simulation by using a VBA macro. The attached sample workbooks show one way to do this. There are two workbooks, one for @RISK 4.x and one for 7.x.  (You can adapt the 7.x workbook to 6.x by changing the references.) Caution: In Excel 2007 and later, watch for a security warning when you open this workbook, and enable the macros.

To create a custom macro, use the @RISK VBA functions listed in the online manual. Check the @RISK Help File when @RISK is running, or click Windows Start » Programs » Palisade » Online Manuals » @RISK Macros. (In @RISK 5.5.1 and later, run @RISK and select @RISK's help, then Developer Kit.)

Performing a simulation that executes a macro after the recalculation of each iteration requires two steps:

  1. Create a new macro that writes the desired information to a numbered text file. In our example, this macro is named WriteToTxtFile( ); a minimal sketch appears after this list.

  2. Create a main macro that sets the simulation settings and runs the simulation. In this macro you must:

    1. Set the property for running a macro after iteration recalc to True.
    2. Store the name of the macro you created in step 1 above.
    3. Open the text file where your simulation data will be written.
    4. Execute the VBA method to start the simulation.
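
For step 1, here is a minimal sketch of such a macro, using only standard VBA file I/O. The file path, sheet name, and cell are illustrative assumptions, and for simplicity this version opens and closes the file on each iteration instead of keeping it open from the main macro:

    Sub WriteToTxtFile()
        ' Append the current iteration's value of an example cell to a text file.
        Dim f As Integer
        f = FreeFile                    ' next available file number
        Open "C:\Temp\SimData.txt" For Append As #f
        Print #f, ThisWorkbook.Worksheets("Model").Range("C10").Value
        Close #f
    End Sub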

You can either run the simulation by clicking View » Macros » View Macros » macroname » Run (in Excel 2003 and earlier, Tools » Macro » Macros » macroname » Run), or create a button as in the example, so that the macro runs when you click the button.

Last edited: 2018-01-23

6.18. All Iterations of One Input or Output

Applies to: @RISK 4.x–7.x

I want to see all the iterations of one particular input or output. Is there any way other than bringing up the Simulation Data window or generating a Simulation Data report? That produces all inputs and outputs, but I need just one or two.

Yes, there are two methods.

In @RISK 6.0 or newer, click Browse Results and select the input or output cell. In the upper right corner of the Browse Results window, click the drop-down arrow and select Data Grid. (See the attached illustration.) You will then have all the iteration data for this input or output in a column. You can copy/paste it to an Excel sheet or another program if you wish.

In @RISK 4 and later, you can place iteration data in your worksheet with RiskData( ). The RiskData functions are automatically updated at the end of a simulation.

Last edited: 2015-06-30

6.19. Sum of All Iteration Values

Applies to: @RISK for Excel 4.x–7.x

I see a RiskMean( ) function, but no RiskSum( ). I would like a sum that totals all of the values that occur in a given cell during a simulation. Is this possible within the framework of @RISK?

The accompanying workbook shows three methods:

  1. The simplest, Method 1, multiplies the simulated mean by the number of iterations, obtained from the RiskSimulationInfo( ) function available in @RISK 6.0 and later.
     
  2. For earlier versions of @RISK, you can use Method 2. It's the same technique, but with a helper cell to calculate the number of iterations.
     
  3. You can also use Method 3. Insert all the data in the worksheet, using the technique in Placing Iteration Data in Worksheet with RiskData( ), then sum the data. This method has the advantage of being a little more transparent, but it uses a lot more space in your workbook, and it fails if you change the number of iterations to more than the number you originally planned.

Last edited: 2015-06-30

6.20. How Many Iterations Were within a Certain Range?

Applies to:
@RISK 4.x–7.x

I'd like to know how many iterations, or what percent of iterations, had a particular input or output between two limits. I know I could do this through filtering, or through moving the delimiters in a Browse Results graph, but is there a worksheet function?

Yes, you can do this with worksheet formulas. The basic idea is that

(number of iterations in range) =
[ (right percentile) − (left percentile) ] × (total number of iterations)

@RISK provides the pieces you need for that formula, and the process is the same for an input or an output.

Please take a look at the attached example. The numbers you can change are in blue on white; the formulas in other cells can be viewed but not changed.

The method, starting from x-values:

Suppose you'd like to know how many iterations saw a profit (cell C9) between $22,000 and $23,000 (cells F9 and G9). Referring to the formula above, you see that you need to ask which percentiles those limits represent. RiskXtoP will tell you that.

  • In cell H9, RiskXtoP(C9,F9) asks what percentage of iterations are below the value in F9.
  • In cell I9, RiskXtoP(C9,G9) asks what percentage are below the value in G9.

The percentage between F9 and G9 is the difference of those percentiles: the part that is below G9 but not also below F9.

  • Cell J9 contains that difference, =I9-H9.

Multiply that by the total iterations from cell H2, and round to an integer. (See Placing Number of Iterations in the Worksheet.) You need to round the result so that you don't end up with fractional iterations, because the RiskXtoP functions interpolate their values between the iterations that actually occur in any particular simulation.

  • Cell K9 contains =round(J9*$H$2,0).

The formula was "exploded" into multiple cells to show the steps. But you can do all of it in one formula; see columns M–P.
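
Using the cell references from this example, the combined one-cell version would look like this:

=ROUND((RiskXtoP(C9,G9)-RiskXtoP(C9,F9))*$H$2,0)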

The method, starting from percentiles:

If you want the number of iterations between two stated percentiles, you don't need the RiskXtoP functions. Rows 11–13 show the formulas, both broken into individual steps and combined in one cell.

See also: For tracking logical values instead of computed inputs or outputs, see How Many Times Did an Event Occur?

Last edited: 2018-05-08

6.21. How Many Times Did an Event Occur?

Applies to: @RISK 6.x/7.x

We have a combined risk register that models the risks based on Monte Carlo sampling. I would like to get a table that shows how many times risk one, risk two, risks one and three, or risks one-two-three occurred.

This is a special case of a more general problem: in how many iterations did a given event or combination of events occur? Or, instead of how many iterations, you might want to know in what percentage of iterations some event occurred.

The basic technique is to construct a cell formula that is 1 when a desired event occurs and 0 when it doesn't. Constructing that formula is not hard if you know these rules:

  • Use parentheses around each condition, to avoid problems with order of operations. If you're tracking when G7 is 120 or more, for instance, code it as (G7>=120), not plain G7>=120.
  • If you're tracking a simple event, as opposed to a combination, add a 0 to it: =(G7>=120)+0, not =(G7>=120). This doesn't affect the final results, but it keeps this cell from showing as TRUE or FALSE when combinations show as 1 or 0.
  • To join conditions with AND, simply multiply them. =(G7>=120)*(P22<11) is 1 (true) when G7 is at least 120 and P22 is less than 11. If G7 is below 120 or P22 is at least 11, or both, the formula is 0 (false).
  • To join conditions with OR is a little bit more complicated. You can't just use + because the expression would then be 2, not 1, if both conditions are true. Probably easiest to read is this format: =0+OR(G7>=120,P22<11). This returns 1 (true) if G7 is at least 120 or P22 is under 11, or 0 if G7 is under 120 and P22 is at least 11. You don't need parentheses around the conditions, because the comma separator avoids problems with order of operations.
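
Once the indicator formula is in place, getting the count is easy, because the mean of a 0/1 cell over a simulation is exactly the fraction of iterations in which the event occurred. For example, if the indicator formula is in cell T7 and the number of iterations is in cell H2 (both cell references are hypothetical):

=RiskMean(T7) returns the fraction of iterations in which the event occurred, and
=ROUND(RiskMean(T7)*H2,0) returns the number of iterations.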

An example is attached to this article. It uses part of a sheet from our standard Risk Register example, in rows 1 to 9. The green box tracks seven events, showing how to compute the percentage of iterations where each event occurred, as well as the number of iterations where each event occurred.

See also: This is simpler with numeric data, as explained in How Many Iterations Were within a Certain Range?

Last edited: 2017-03-30

6.22. Which Sensitivity Measure to Use?

Applies to: @RISK 5.x–7.x

@RISK gives me a lot of options for sensitivities in my tornado graph: correlation coefficients, regression coefficients, mapped regression coefficients, change in output mean, and so on. How do I choose an appropriate measurement in my situation?

After a simulation, the Sensitivity Analysis window is your handy overview of sensitivities for all outputs. In the Results section of the @RISK ribbon, click the small tornado to open the Sensitivity Analysis window. (You can also see most of this information by clicking the tornado at the bottom of a Browse Results window for an output.)

Change in output statistic:

The change in output statistic, added in @RISK 6, is an interesting differencing approach to sensitivity. You can select the mean, the mode, or a particular percentile: click the % icon at the bottom of the Sensitivity Analysis window, or the tornado icon at the bottom of the Browse Results window and select Settings.

The Change in Output Statistic tornado displays a degree of difference for just the two extreme bins, but the spider shows more information: the direction of the relationship, and the degree of difference for every bin.

Regression or correlation coefficients:

Regression coefficients and regression mapped values are just scaled versions of each other. Correlation coefficients are rank-order (Spearman) correlations, which work for non-linear as well as linear relationships. In the Sensitivity Analysis window, when you select Display Significant Inputs Using: Regression (Coefficients), @RISK will display R² ("RSqr") in each column. You can use R² to help you decide between correlation coefficients and regression coefficients:

  • A low value of R² means that a linear regression model is not very good at predicting the output from the indicated inputs. In this case, you would focus more on correlation coefficients, because rank-order correlation doesn't depend on the two distributions having similar shape or being linearly related.
  • If R² is high, a linear regression model is a good fit mathematically. But even here, you should look at the variables to assure yourself that they are reasonable and to rule out a problem with multicollinearity. This would be signaled, for example, when @RISK reports a significant positive relationship between two variables in the regression analysis, and a significant negative correlation between those variables in the rank-order correlation analysis.

For a more detailed explanation of correlation and regression, see Correlation Tornado versus Regression Tornado and How @RISK Computes Rank-Order Correlation.

Contribution to variance:

R² is a measure of the percentage of the variance in a given output that can be traced to the inputs, as opposed to measurement errors, sampling variation, and so on. @RISK adds input variables to a regression one by one, and each variable's contribution to variance is simply how much larger R² grows as that input is added. In other words, a regression equation should predict output values from a set of input values. A variable's contribution to variance measures how much better the equation becomes as a predictor when that input is added to the regression. Unlike a regression coefficient, this measurement is unaffected by the magnitude of the input. For more about this, see Calculating Contribution to Variance.

See also: All Articles about Tornado Charts

Last edited: 2018-08-15

6.23. Regression Coefficients in Your Worksheet

Applies to: @RISK for Excel 5.x–7.x

The tornado diagram shows sensitivity of a simulated output to each input in units of standard deviation. Can I get the actual regression coefficients?

You can do a calculation from the coefficients that are displayed in the tornado, as explained in Interpreting Regression Coefficients in Tornado Graphs.

You can also use a worksheet function to obtain the regression coefficients directly, with no need for further calculation. The function is RiskSensitivity( ). In the function, set the fifth argument to 3 (result type = equation coefficient). @RISK will then return the actual coefficient that would appear in a multiple regression.

Example: Suppose you're interested in the sensitivities of the output in cell A1.  Then the function

=RiskSensitivity(A1, , 1, 1, 1)

will tell you the name (fifth argument = 1) of the input that has the largest impact or highest rank (third argument = 1), and the function

=RiskSensitivity(A1, , 1, 1, 3)

will tell you the unscaled regression coefficient (fifth argument = 3) of that input for the output in A1. For instance, if that RiskSensitivity( ) function returns 0.72, it means that a one-unit increase in that input corresponds to a 0.72-unit increase in the output.

Technical note: The rank number (third argument) can be anything from 1 to the number of @RISK inputs in the model; if it is too large the function returns #VALUE. However, @RISK only returns sensitivities for the inputs whose coefficients are significantly different from zero (to a maximum of 100 inputs). For all other inputs, @RISK returns zero as a coefficient.

Beginning with @RISK 6, you can also get the constant term of the regression equation, by setting the RiskSensitivity( ) function's fifth argument to 4 (result type = equation constant).  To find the regression constant in older versions of @RISK, please see Regression Equation from Calculated Sensitivities.

The attached example shows both types of regression tornado graphs, with (scaled) coefficients and with mapped values. It also shows how to use worksheet formulas to get those two plus the actual coefficients of the regression equation, including the constant term.

See also: All Articles about Tornado Charts

Last edited: 2017-06-09

6.24. Regression Equation from Calculated Sensitivities

Applies to:
@RISK for Excel 4.x–7.x
@RISK for Project 4.1

I know that @RISK for Excel and for Project display regression sensitivities in a tornado diagram, and @RISK for Excel calculates them in the worksheet function RiskSensitivity. But how can I assemble them into a regression equation? What's the constant term? Is the regression equation more accurate for some input values than for others?

First, make sure you have the actual regression coefficients in units of output per unit of input.

Your regression equation is

Y = b0 + b1X1 + b2X2 + b3X3 + ...

In this equation, Y is the @RISK output. b0 is the constant term (see next paragraph). The other b's are the regression coefficients, descaled if necessary (see above), and the X's are the @RISK input variables.

What is the value of the constant term, b0? In @RISK 6.0 and newer, you can get this from RiskSensitivity( ) with a result type of 4. In earlier versions of @RISK, you have to calculate it. @RISK doesn't reveal this directly, but you can compute it from the other information. The line of best fit (the regression line) is guaranteed to include the point where all the inputs and the output have their mean values. Get those mean values from the Results Summary window or the Detailed Statistics window, and substitute in the regression equation to solve for the constant term:

b0 = Ybar - b1Xbar1 - b2Xbar2 - b3Xbar3 - ...

where Ybar is the mean value of the output and the Xbar's are the mean values of the inputs.
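
For example, here is a hedged worksheet sketch with hypothetical cell references: if the output is in C1, two inputs are in A1 and A2, and their descaled coefficients b1 and b2 have been placed in E1 and E2, the constant term is

=RiskMean(C1)-E1*RiskMean(A1)-E2*RiskMean(A2)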

When you have the constant term, you have the last piece of the regression equation.

Where is this equation valid? The coefficients are global properties of the overall set of data, so the equation is valid through the entire region of these input values. That is, each regression coefficient refers to the line that fits best through all the points, weighted equally. The regression equation takes all points (iterations) of all variables equally into account.

What if data are skewed? Just as with a simple two-variable X-Y regression, that will affect the residuals. If one region of the cloud of points is markedly different from another, the regression equation does the best it can overall, which may mean less than the best for particular regions. In that case the residuals would be large in some regions and small in others.

One caveat: All of this assumes that you have captured all the inputs that have any meaningful impact on this output. If you have only some of the significant inputs, then of course the regression line will lose some of its effectiveness.

See also: All Articles about Tornado Charts

Last edited: 2015-06-30

6.25. Placing Change in Output Statistics in Worksheet

Applies to: @RISK 6.1.1 and later

Can @RISK produce a tornado graph showing change in output mean, percentile, or mode? Is there any way to write those statistics to my worksheet?

Yes, you can use the RiskSensitivityStatChange function. This is documented in @RISK help.

The attached workbook shows examples of retrieving change in output mean, change in output mode, and change in output percentile. (You will notice that inputs often rank differently depending on which measure you use.)

Last edited: 2021-03-05

6.26. Calculating Contribution to Variance

Applies to: @RISK 7.5 and newer

The help file describes Contribution to Variance this way:

These values are calculated during the regression analysis. The sequential contribution to variance technique calculates how much more of the variance in an output is explained by adding each of a sequence of inputs to the regression model. The selection of the variables and the order in which they are added is determined by the stepwise regression procedure. As with any regression technique, when input variables are correlated, the regression can pick any of the correlated variables and ascribe much of the variance to it and not inputs correlated with it. Thus, caution in interpreting the contribution to variance results is critical when inputs are correlated.

Can you expand on that?

@RISK runs a stepwise regression on an output, to find several measures of sensitivity to the input distributions in the model. Stepwise regression is an iterative process where input variables enter into the regression sequentially. From the inputs that have not yet entered the regression, the next one to enter is the one with greatest significance to the output. However, rerunning the regression with that additional input variable can change the results for inputs that entered earlier. If an input no longer contributes significantly, it will leave the regression.

After performing the stepwise regression, @RISK performs a second regression, this time a forward regression. Variables enter this regression in the same order as before, but only the ones that did not leave the original stepwise regression; and no variables leave.

@RISK records the change in R² when each input enters the second, forward regression. (R² is between 0 and 1, and is a measure of how effectively the regression predicts output values. R² is the proportion of the output's total variance that is associated with input variables; 1–R² is the proportion associated with measurement errors, sampling variation, and random variations in general.)

The change in R² when an input enters is that input's percentage contribution to the total variance of the output. It's shown in the Contribution to Variance tornado graph. You can also place those numbers in your worksheet. The total of the percentages given by the worksheet functions will equal R². Because the number of bars on a tornado graph is limited, the total in the graph will be less than R² if not all contributing inputs fit on the graph.

A word on correlated variables: Some correlated variables may leave the first, stepwise regression, because some of their contribution to the output's variance overlaps with the contribution of the other correlated variables, and thus they don't add significant predictive power to the regression. In that case, they won't be part of the second, forward regression, and their contribution to variance is zero. The stronger the correlation, the stronger the tendency to omit some correlated variables. It's not easy to predict which variable is excluded in such cases; it could depend on slight changes in samples from one simulation to the next. But in that scenario it doesn't make much difference which of the correlated variables are used.

See also: All Articles about Tornado Charts

Last edited: 2017-10-24

6.27. Placing Contribution to Variance in Worksheet

Applies to: @RISK 7.5.0 and newer

I like the Contribution to Variance tornado, but how can I get those values into my worksheet?

When the new graph was created in @RISK 7.5.0, new values were added to the arguments of the RiskSensitivity function. For contribution to variance, follow these patterns:

  • RiskSensitivity(output, , k, 4, 1) returns name of the k-th most significant input.
  • RiskSensitivity(output, , k, 4, 6) returns percentage of total variance contributed by the k-th most significant input.
  • RiskVariance(output) * RiskSensitivity(output, , k, 4, 6) returns actual variance contributed by the k-th most significant input, as opposed to percentage of variance contributed.

The first three arguments to the RiskSensitivity function are output cell reference or name, simulation number (omitted = simulation 1), and input rank (>=1, where 1 selects the input with greatest effect). The fourth argument is 4 for contribution to variance. The fifth argument is 1 for the name of the input with that rank, or 6 for the percentage of variance contributed.

Please open the attached workbook and run a simulation. The contributions to variance will appear in cells O9:Q15. The tornado graph is also shown for reference.

See also: All Articles about Tornado Charts

Last edited: 2018-08-11

6.28. Confidence Intervals in @RISK

Applies to: @RISK 5.x–7.x

How can I compute a confidence interval on a simulated input or output in @RISK?

People don't always mean the same thing by "confidence interval" in the context of a simulation. Some want to estimate the mean of a distribution, and others want to know the middle x% of values.

Prediction Interval

Some people use "confidence interval" to mean the middle x% of the simulated data values, also known as a prediction interval. For instance, a 95% confidence interval by this definition would be the 2.5 percentile through the 97.5 percentile. @RISK can find these percentiles for you directly, with the RiskPtoX function. The attached workbook PredictionInterval.xls shows the calculation.
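
For example, if the output is in cell A1, the bounds of a 95% prediction interval are:

=RiskPtoX(A1,0.025)
=RiskPtoX(A1,0.975)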

Confidence Interval about the Mean

Some people mean the confidence interval that is taught in statistics classes, an estimate of a "true population mean". The idea here is that the simulation is treated as a sample from the complete distribution, which contains infinitely many values. Your simulated result has a mean, the mean of a sample from the distribution, but if you repeated the simulation you'd get a different mean. What you want is a range that estimates the true mean of the distribution, with x% confidence in that range.

This confidence interval is the simulated mean plus or minus a margin of error. In turn, the margin of error is a critical t or z times the standard error. But the estimated standard error depends on your sampling method, Latin Hypercube or Monte Carlo.

Confidence Interval in a Worksheet Function

Beginning with @RISK 7.5, you can use the RiskCIMean( ) function to place the lower or upper bound of a confidence interval in your worksheet. =RiskCIMean(A1,.95) or =RiskCIMean(A1,.95,TRUE) gives you the lower bound for the 95% confidence interval about the mean of cell A1, and =RiskCIMean(A1,.95,FALSE) gives you the upper bound. If you prefer, you can use the name of an input or output, instead of a cell reference.

The confidence interval is computed using RiskStdErrOfMean( ), which equals the simulated standard deviation divided by the square root of the number of iterations. That's accurate if you're using Monte Carlo sampling. However, that same standard error is too large when you're using Latin Hypercube sampling. In turn, the larger standard error makes the confidence interval wider than necessary, possibly much wider than necessary. Thus, the RiskCIMean( ) function makes a conservative estimate under Latin Hypercube sampling. A truer estimate would require running multiple simulations, as explained below, which is not practical in a worksheet function.

Confidence Interval with Monte Carlo Sampling

The standard error is the simulated standard deviation divided by the square root of the number of iterations. The bounds of the confidence interval are therefore

sample_mean ± z_critical × standard_dev / sqrt(sample_size)

(Critical z is easier to compute and is often used instead of critical t. For 100 iterations or more, critical t and critical z are virtually equal.)

To find this type of confidence interval, @RISK offers several auxiliary functions but no single "confidence interval" function. The attached workbook ConfidenceInterval_MC.xlsx shows how to calculate this confidence interval using the @RISK statistic functions. The worksheet is a proof of concept, so the calculations are spread over several cells to show every step. In production, you would probably combine the calculations into a couple of cells, or put them into a user-defined function.
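
For instance, assuming the output is in cell A1 and you want 95% confidence, a compact sketch of the same calculation (using Excel's NORM.S.INV for critical z) gives the lower and upper bounds:

=RiskMean(A1)-NORM.S.INV(0.975)*RiskStdErrOfMean(A1)
=RiskMean(A1)+NORM.S.INV(0.975)*RiskStdErrOfMean(A1)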

To predict how many iterations will be needed to restrict the confidence interval to a specified width, please see How Many Iterations Do I Need?

Confidence Interval with Latin Hypercube Sampling

(For computing confidence intervals based on Latin Hypercube sampling, we rely on Michael Stein, "Large Sample Properties of Simulations Using Latin Hypercube Sampling", Technometrics 29:2 [May 1987], pages 143-151, accessed 2016-06-28 from https://r-forge.r-project.org/scm/viewvc.php/*checkout*/doc/Stein1987.pdf?revision=56&root=lhs.)

The simulated sample means are much less variable with Latin Hypercube than with Monte Carlo sampling. (See Latin Hypercube Versus Monte Carlo Sampling.) Therefore:

  • standard_dev/sqrt(sample_size) over-estimates the standard error of the mean, quite possibly by a large amount.
  • A confidence interval using that standard error will therefore be very conservative: the interval and the margin of error will be much wider than necessary.
  • The RiskStdErrOfMean( ) and RiskCIMean( ) worksheet functions, as mentioned above, use that traditional calculation, and therefore they also overstate the standard error and produce an overly-wide confidence interval.

We recommend Latin Hypercube sampling, and it's the default in @RISK, because it does a better job of simulating your model than traditional Monte Carlo sampling. Just be aware that the confidence intervals that you or @RISK compute don't take the increased accuracy of Latin Hypercube into account. It may be enough just to bear in mind that the confidence intervals are bigger than necessary. But if you need confidence intervals that accurately reflect Latin Hypercube sampling, here is how you can compute them.

If the number of iterations is large relative to the number of input variables, and certain other conditions are met, the distribution of simulated sample means for each output will be approximately normal. Then you can find the standard error, margin of error, and confidence interval by this procedure:

  1. In Simulation Settings » Sampling » Multiple Simulations, set "Use different seeds". Set a number of iterations in each simulation that is large relative to the number of input variables.

  2. Run several simulations.

  3. Each simulation will have a mean, which we can call x-bar. Collect the simulated means, and take the mean of those x-bars. This is your estimate for the true mean, and will be the center of your confidence interval.

  4. Compute the standard deviation of the group of x-bars, and divide by the square root of the number of simulations (not iterations). This is the estimated standard error of the mean for Latin Hypercube sampling. Since the standard deviation of those simulated means is much less than the standard deviation of the iterations within any one simulation, this standard error will be much less than the standard error for Monte Carlo sampling.

  5. Compute your critical t in the usual way, with degrees of freedom set to number of simulations minus 1, not number of iterations minus 1. For instance, with 10 simulations, critical t is 2.26 for a 95% confidence interval. (Since the degrees of freedom is low, use t and not z.)

  6. Multiply critical t from step 5 by the standard error from step 4. This is the margin of error. Your final confidence interval is

    (mean of x-bars) ± t_critical × standard_error

The attached workbook ConfidenceInterval_LH.xlsx shows the calculation. The model is the same one that was presented above for Monte Carlo sampling. In the Monte Carlo example, there were 10,000 iterations in one simulation, and the standard error was on the order of $550,000. In the Latin Hypercube example, there are 1000 iterations in each of 10 simulations, totaling the same 10,000 iterations, but the standard error is much smaller, on the order of $5,000 instead of $550,000.
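
As a worksheet sketch of steps 3 through 6, assume 10 simulations, the output in cell A1, and the per-simulation means placed in cells B1:B10 (the cell layout is hypothetical; RiskMean takes an optional simulation number, like the other statistic functions shown earlier):

  • In B1 through B10: =RiskMean($A$1,1) through =RiskMean($A$1,10)
  • Center of the interval: =AVERAGE(B1:B10)
  • Standard error: =STDEV.S(B1:B10)/SQRT(10)
  • Critical t: =T.INV.2T(0.05,9)
  • Margin of error: the critical t times the standard error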

Last edited: 2017-08-02

6.29. Customizing the Quick Reports

Applies to: @RISK 5.x–7.x

I would like to make some changes in the layout or contents of the Quick Reports; how can I do that? What is the best way to reproduce the Quick Report? Is there any way to access the template file that generates it? Can I create customized forms of other graphs and reports?

By design, the Quick Reports are not very customizable. (You can change the type of tornado graph; see Tornado Graph in Quick Reports.) The idea is that they should always be in the same layout, to make them as quick to read as possible. But there are several ways you can get the same information in customized graphs.

Option A (@RISK 7): Custom Reports

New in @RISK 7.0, Custom Reports let you mix and match graphs and statistics tables. By default, one Custom Report is produced for each output, but the Custom Reports tab in the Excel reports dialog lets you choose to report only particular outputs.

For more, see the "Custom Reports" and "Custom Report Outputs" topics in @RISK help.

Option B: Copy/Paste

This is fastest if you have a one-time need. You can store some customizations in Application Settings, but some can only be done manually for each graph.

  1. Open the Browse Results window for the desired input or output, and select the type of graph you want.
  2. Right-click the graph area and set your distribution format and other options.
  3. Size the Browse Results window to your preference.
  4. Right-click the graph area again, and select one of the Copy commands.
  5. Click into your worksheet and press Ctrl-V for Paste.

Option C: Report Templates

You can create one or more templates for your own customized reports and use them instead of the Quick Reports, or in addition to the Quick Reports. Set up a template on a dedicated tab (worksheet) within your workbook. The worksheet name must have the form RiskTemplate_reportname, where reportname is the desired name of the report sheet. See Creating and Using Report Templates for more.

Any Excel or @RISK formulas can be part of your template sheet. You can easily include statistics like means and percentiles through Insert Function » Statistic Functions » Simulation Results, and include graphs through Insert Function » Other Functions » Miscellaneous » RiskResultsGraph.

This is an automated solution, and it's quick to set up, but when you use RiskResultsGraph only a few customizations are available.

Option D (@RISK 6.2 and Newer): Visual Basic for Applications

Beginning with @RISK 6.2.0, there is a more flexible alternative for placing graphs in your worksheet. The new RiskGraph object gives you many customizations, but you do need to write Visual Basic code to use it. A new Automation Guide (Help » Developer Kit (XDK) » Automation Guide) explains how to create some basic graphs. For many more options, with a listing of every property and method, see the XDK help file (Help » Developer Kit (XDK) » @RISK XDK Reference).

A small example is attached, showing a RiskResultsGraph tornado and a RiskGraph tornado created through Visual Basic. (Run a simulation to see both of them. Depending on your screen resolution, one of them may hide part of the other, so that you'll need to move it.)

For another example, see Placing Graphs in an Existing Worksheet with VBA.

VBA automation is available in @RISK Professional and Industrial Editions only.

Last edited: 2015-06-30

6.30. Status Column of Output Results Summary Report

Applies to: @RISK for Excel 5.x–7.x

In the Results Summary window, and in the Output Results Summary report in Excel, a column is labeled "Status". What do the numbers mean?

If you have enabled convergence monitoring, this column tells you whether each output has converged by showing OK or a number. OK means the output has converged; a number is the estimated percent complete to convergence for that output.

For further information, please see Convergence Monitoring in @RISK.

Last edited: 2015-07-01

6.31. Changing Columns in Results Summary

Applies to: @RISK 5.x–7.x

I'd like to change the statistics that are displayed in the Results Summary window, or the Excel reports Input Results Summary and Output Results Summary. For example, I'd like to see the 10th and 90th percentiles rather than the 5th and 95th. Or I'd like to add the median or standard deviation to the columns. How can I do it?

Follow this procedure:

  1. Run a simulation and in the Results section of the @RISK ribbon click Summary.
  2. Right-click in the column headings and select Columns for Table.
  3. Make whatever changes you wish, by adding or removing check marks (tick marks). To change the percentiles, click the "..." next to the 5% and 95%, and change them to whatever you wish.
  4. Close the Results Summary window.

These changes will apply only to the Results Summary window and the Excel reports Input Results Summary and Output Results Summary created during the current session. To set these columns as defaults for all @RISK workbooks, both new and existing workbooks, follow this additional step:

  1. Click Utilities » Application Settings. In the Windows section, you'll see that "Results Window Settings" is set to Automatic. Click on the word Automatic to make a down arrow visible. Click on that arrow and select "Set to Current Window Columns". Click OK.

Last edited: 2015-07-01

6.32. Detailed Statistics with More Than Seven Significant Digits?

Applies to: @RISK 5.x–7.x

The Detailed Statistics window shows only seven significant digits, and if I choose Report in Excel I again get only seven significant digits. Is there any way to get more precision?

First, consider that these statistics are the result of a stochastic process, and it's highly unlikely that so many significant digits are meaningful. This is why @RISK rounds its results even though it actually does the calculations in full double precision.

But if you truly want to see more significant digits, you can get them from the @RISK statistics functions. These functions, including RiskMean( ) and RiskStdDev( ), return full double precision. You can put them in your worksheet. If you have a whole lot of them, for greater efficiency you could call them from a macro that you set to execute automatically at the end of simulation.

Last edited: 2015-07-06

6.33. Detailed Statistics: Live or Static?

Applies to: @RISK 4.x–7.x

On the Detailed Statistics sheet, I enter a target percent (P) and the target value (X) doesn't update, or I enter an X and the P doesn't update.

There are two Detailed Statistics sheets in @RISK: the Detailed Statistics report, which is prepared as an Excel worksheet, and the Detailed Statistics window, which is part of @RISK. Only the @RISK window is "live", meaning that when you enter an X or a P the other member of the pair changes automatically. The report in Excel is static and does not update.

Last edited: 2015-07-06

6.34. Detailed Statistics: Setting Default Targets

Applies to: @RISK 6.x/7.x

In the Detailed Statistics window after a simulation, @RISK gives me the 5th, 10th, 15th, ..., 95th percentiles. I can get additional percentiles in the Target section below that, but is there a way to make them appear automatically?

Open Utilities » Application Settings. In the Windows section, find the Detailed Stats Window Targets line. Click to the right of Automatic, click the drop-down arrow, and enter your desired target percentiles in the form

1, 2.5, 97.5, 99

You can specify up to ten percentiles in this way, with or without % signs. @RISK will display these percentiles, in addition to its standard ones, in the Detailed Statistics window and on the Detailed Statistics report.

Last edited: 2015-09-30

6.35. Some Iterations Show Error in Data Window. What Can I Do?

Applies to: @RISK for Excel 5.x–7.x

When I run my simulation and click the x-subscript-i icon to check the @RISK Data window, I see "Error" for some iterations in one or more outputs. What does that mean? There are no #N/A or #VALUE errors in my workbook.

In the @RISK Data window, each row is an iteration and each column is an @RISK input or output. "Error" in the @RISK Data window means that the formula in that cell (column heading) has an Excel error in that iteration (row heading). But the problem may or may not be in the formula in that cell itself; it might be in a formula in a cell that that cell references. (In Excel, when any cell has an error status, all the cells that use it in formulas share that error status.)

How can you have errors in particular iterations when there are no errors in the worksheet as displayed when a simulation is not running? For example, suppose you have RiskNormal(10,3) in one cell, and in another cell you take the square root of the first cell. The static value of the RiskNormal( ) is 10, so when a simulation is not running you won't see any error. But during an iteration, occasionally the RiskNormal( ) will return a negative value, and the square root of a negative value returns a #NUM error. If the cell that contains the square-root function, or any cell that depends on it, is an @RISK output, then you will see an error in the Simulation Data window for that iteration.
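
In worksheet terms, that example looks like this (cell addresses are illustrative):

A1:  =RiskNormal(10,3)
A2:  =RiskOutput() + SQRT(A1)

In any iteration where A1 samples a negative value, A2 evaluates to #NUM!, and the @RISK Data window records "Error" for that iteration.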

To find the source of the error:

In the Results section of the @RISK ribbon, click the Simulation Data icon, the small icon showing x-subscript-i. The @RISK Data window opens, showing all outputs in columns, and then all inputs. Locate your output, then an Error indication. Click on it, and then click the footprint or shoeprint icon at the bottom of the window. (The tool tip, if you hover your mouse over the icon, is "Update Excel with values from the selected iteration". If the button is grayed out, see Footprint Button Grayed Out.)

@RISK will put your workbook into the state it was in during that iteration. Then you can check the error cell and trace back through the formulas till you find the source of the error. (You may need to minimize the @RISK Data window, or grab its title bar with your mouse and move it out of your way.) You can click on other iterations in the @RISK Data window to display other iterations of the workbook.

(Actually, @RISK sets all inputs to the values they had during that iteration, then recalculates the workbook to let Excel fill in the outputs. If shoeprint mode shows different output values from the ones shown in the Data Window for the same iteration, see Random Number Generation, Seed Values, and Reproducibility.)

When you have found the problem, click the footprint icon again to return the workbook to its normal state, or simply close the Data Window.

Additional keywords: Shoeprint mode, footprint mode

Last edited: 2019-02-15

6.36. Additional Export Data options

Applies to: @RISK 8.2 and newer

Our @RISK 8.2 release includes two new ways to get simulation data: exporting simulation data from graph windows, and the new Simulation Data report.

Export Simulation Data from Graph Windows

The Browse Results, Scatter Plot, and Summary Graphs windows now have the option to export or copy the simulation data for the displayed inputs or outputs.

Simulation Data Report

The new Simulation Data report lets you select inputs and/or outputs and generates a report with their simulation data in an Excel worksheet.

Last update: 2021-07-29

6.37. Writing Simulation Data to Excel

Applies to: @RISK 8.2 onward

I want to get all the simulation data for a specific output or input. How can I do that?

You have two options for getting this information.

The first is the Browse Results window. In the bottom right-hand corner there is an Export button. From there you can either copy the data and paste the values into Excel, or export the values to Excel. Copying gives you only the values; exporting also includes the distribution name and location.

The second option is a built-in report. Release 8.2 adds a new report called Simulation Data. In this report you can either specify which distributions you are interested in or select all of them. You can also choose whether to include thumbnail graphs or filters.

Last edited: 2021-07-23

7. @RISK Simulation: Graphical Results

7.1. Interpreting or Changing the Y Axis of a Histogram

Applies to:
@RISK for Excel 4.5–7.x
@RISK for Project 4.0 and 4.1

How do I interpret the y axis of the histogram that is created from the results of my simulation?

@RISK can show the histogram of your result data in two different formats, probability density or relative frequency. This is just a matter of different scaling for the y axis; the shape of the histogram doesn't change. The default histogram format is probability density for continuous data and relative frequency for discrete data.

TIP: Most people find relative frequency easier to understand than probability density. Especially for presentations, you may want to use the relative frequency format, or simply suppress the y axis. (See "How do I select the y axis format," below.)

How do I interpret relative frequency numbers on the y axis?

If a bar is as high as the 2% mark, for example, you know that 2% of all iterations fell within that bar. In other words, the height of each bar represents the proportion of the data (the fraction of all the iterations) in that bar. Since every data point must be in some bar of the histogram, the heights of all the bars add up to 100%.

(Before @RISK 6.2, relative frequencies were shown as decimals, for example 0.02 rather than 2%, but you read them the same way.)

How do I interpret probability density numbers on the y axis?

This is harder. Unlike the case of relative frequency, the height of a histogram bar isn't meaningful on its own. What matters is the area of the bar.

Consider the example histogram. Look at the bar for $74,000 to $76,000. Its width on the x axis is $2,000, and its height on the y axis is about 4.9×10⁻⁵. As with any rectangle, you find its area by multiplying width and height: $2,000 × 4.9×10⁻⁵ = 0.098, or 9.8%. The height of that bar by itself doesn't tell you anything, but in conjunction with the width it tells you that 9.8% of the iteration data for this input fell between $74,000 and $76,000. The total area of all the bars is 1 (or 100%).

When you're looking at a theoretical probability curve for an input, or in a fitted distribution, it will be presented as probability density. Again, the height of the curve doesn't tell you anything useful on its own. But the area under part of the probability density curve tells you what percentage of the data should fall within that region, theoretically. For example, the area under the curve to the left of $72,104 is 5.0% according to the bar at the top of the graph. This tells you that theory says 5% of the data for a Normal(80000,4800) should be less than $72,104.

Technically, the area under a part of the curve is the integral of the height of the curve, from the left edge of the region to the right edge. Thus, the 5% was found by integrating the height of the density curve from minus infinity to $72,104. Just as the total area of the bars in a histogram is 1, the total area under a probability density curve is 1.

How does @RISK create the y axis for a probability density histogram?

  1. Divide the data into intervals — see Number of Bins in a Histogram.
  2. Count the number of data points in each interval.
  3. Divide the counts by the total number of data points.
  4. Divide that result by the interval width as shown on the x axis, to obtain the height of the bar along the y axis.

In a probability density histogram or curve, the larger the numbers on the x axis, the smaller the numbers on the y axis must be to keep the total area at 1.
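
If you want to see the arithmetic in action, here is a minimal VBA sketch of the four steps above for equal-width bins. This is an illustration only, not @RISK's actual code; it assumes data is a 1-D array of iteration values:

Function DensityHeights(data As Variant, numBins As Long) As Double()
    Dim mn As Double, mx As Double, binWidth As Double
    Dim heights() As Double, i As Long, k As Long, n As Long
    n = UBound(data) - LBound(data) + 1
    mn = WorksheetFunction.Min(data)
    mx = WorksheetFunction.Max(data)
    binWidth = (mx - mn) / numBins              ' Step 1: equal-width intervals
    ReDim heights(1 To numBins)
    For i = LBound(data) To UBound(data)        ' Step 2: count points per bin
        k = Int((data(i) - mn) / binWidth) + 1
        If k > numBins Then k = numBins         ' the maximum goes in the last bin
        heights(k) = heights(k) + 1
    Next i
    For k = 1 To numBins                        ' Steps 3 and 4: fraction of points,
        heights(k) = heights(k) / n / binWidth  ' divided by the bin width
    Next k
    DensityHeights = heights
End Function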

How do I select the y axis format?

In @RISK 5.x–7.x, click the histogram icon at the bottom of the Browse Results window and select Relative Frequency or Probability Density.  If you prefer, you can suppress the numbers on the y-axis entirely: right-click on any of the numbers on the vertical axis and select Axis Options. Then on the Y-Axis tab, under Display, remove the check mark by Axis.

TIP: If you find yourself changing the y axis often, you might want to change the default. In Utilities » Application Settings » Simulation Graph Defaults, change Preferred Distribution Format to Relative Frequency, or whatever you prefer.

In @RISK 4.x, right-click the histogram and select Format Graph...; then select the Type tab. In the Histogram Options section, click the drop-down arrow next to the Format field and choose Density or Relative Frequency.

Last edited: 2018-09-21

7.2. Number of Bins in a Histogram

Applies to: @RISK 5.x–7.x

When it makes a histogram, how does @RISK choose a number of bars? In other words, how many bins or intervals does @RISK divide the data range into? Can I change this?

By default, @RISK determines the number of bins from the number of iterations or data points n, as follows:

n               Number of bins
Less than 25    5
25 to 100       Nearest integer to n / 5
More than 100   Largest integer below 10 × log(n)
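
In VBA terms, the default rule works out to something like this sketch (an illustration, not @RISK's actual code; it assumes the logarithm is base 10):

Function DefaultBins(n As Long) As Long
    If n < 25 Then
        DefaultBins = 5
    ElseIf n <= 100 Then
        DefaultBins = CLng(n / 5)                   ' nearest integer to n / 5
    Else
        DefaultBins = Int(10 * Log(n) / Log(10#))   ' largest integer below 10 × log10(n)
    End If
End Function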

If you want to change this for a particular histogram, right-click the graph and select Graph Options. On the Distribution tab, the lower section lets you specify a number of bins (bars), as well as a minimum (left edge of the first bar) and maximum (right edge of the last bar). "Automatic" uses the calculation shown above, but you can specify a number from 2 to 200.

Last edited: 2015-07-15

7.3. Setting Y Axis Maximum Not to Exceed 1

Applies to: @RISK 4.5–7.x
@RISK for Project 4.x

I want the y axis values on my histogram to be between 0 and 1. But when I change the default maximum from 2 to 1, the top of the histogram is chopped off. Shouldn't probabilities be greater than 0 and less than or equal to 1?

You are probably looking at the default histogram format, which is probability density. With probability density, the heights of the bars are adjusted so that the total area of all the bars is 1. If your data range is small, then the heights of the bars may be greater than 1. If you change the graph format to relative frequency, the y axis will have a maximum of 1 or less.

Beginning with @RISK 6.2, relative frequencies are shown as percentages.  This gives a visual indication of whether you're looking at probability density or relative frequency.

For more detailed information, please see Interpreting or Changing the Y Axis of a Histogram.

Last edited: 2015-07-06

7.4. Log Scale in Output Graphs

Applies to: @RISK 6.2 and newer

Beginning with @RISK 6.2, you can display the x and y axes of most graphs in logarithmic scales, using any of these methods:

  • Tick the "Log" box on the X-Axis or Y-Axis tab of the Graph Options dialog.
  • Right-click the graph and on the context menu select Log Scale X-Axis, Log Scale Y-Axis, or both.
  • In VBA, you can use the RiskGraph.XAxis.LogScale and RiskGraph.YAxis.LogScale properties.
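
For the VBA route, here is a minimal sketch, assuming the @RISK XDK library reference is set and you have already obtained a RiskGraph object (the Automation Guide covers both):

Sub UseLogScales(g As RiskGraph)
    g.XAxis.LogScale = True    ' switch the x axis to logarithmic
    g.YAxis.LogScale = True    ' and the y axis
End Sub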

Generally, graphs with numeric scaling support log scaling.  Here are the major exceptions:

  • Histograms with some data values less than or equal to 0.  However, you can switch to a cumulative display (S-curve) and get logarithmic scaling.
  • Histograms in probability density format.  (Interpretation of these would be confusing.)  However, histograms in relative-frequency format can be displayed on a log scale.  If you are using the default Automatic formatting, and you select a log scale, @RISK will automatically change the histogram to relative-frequency format.
  • The x axes of tornado graphs and summary graphs.  (A summary graph may appear to have a numeric x axis, but actually those numbers are just treated as labels.)

Last edited: 2013-09-25

7.5. Area Graphs

Applies to: @RISK 5.x–7.x

Can I smooth out a histogram to create an area graph, as I could in older versions of @RISK?

Yes, you can, though you have to go through a dialog box because you get more choices.

When you have the results graph displayed, follow this procedure:

  1. Right-click the graph and select Graph Options.
  2. Select the Curves tab.
  3. In the list at the left, select the histogram that you want to smooth if it's not already selected.
  4. Change Style to either Line or Solid. (The Automatic box will uncheck itself.)
  5. For Interpolation, select Spline Fit for a smooth curve or Linked Midpoints for a polygon. If you wish, you can also change color and style.
  6. Click OK and you will see your smoothed graph displayed as you wish.

Last edited: 2015-07-06

7.6. Copy/Pasting Thumbnails

Applies to: @RISK 7.x

I like the new Thumbnails feature of @RISK (in the Utilities menu). I'd like to paste a thumbnail into my Excel sheet, a PowerPoint slide, a Word document, or my graphics program. How can I copy a thumbnail to the clipboard?

Just hover your mouse over the input or output cell, then slide the mouse pointer over the thumbnail. Right-click and select Copy. That places a copy of the thumbnail on your clipboard, and you can then paste it anywhere with the usual commands (typically Ctrl+V).

 

Last edited: 2015-07-06

7.7. All Articles about Tornado Charts

Applies to: @RISK 6.x/7.x

The Knowledge Base has many articles about various aspects of tornado graphs. In Technical Support, we sometimes get multiple tornado-related questions at the same time, and it seems useful to collect all the links in one place.

Interpreting tornado graphs:

Which variables are in a tornado?

Putting numbers from tornado graphs into your worksheet:

Troubles:

Other articles:

Additional keywords: Tornado chart, tornado charts, tornado graph, tornado graphs, sensitivity tornado, sensitivity tornados, sensitivity tornadoes, sensitivity coefficients, sensitivity chart, sensitivity charts, sensitivity graph, sensitivity graphs

Last edited: 2018-08-15

7.8. Interpreting Regression Coefficients in Tornado Graphs

Applies to:
@RISK for Excel 5.x–8.x

How can I interpret the regression coefficients on the tornado diagram or sensitivity report produced by @RISK?

The regression coefficients are calculated by a process called stepwise multiple regression.

The main idea is that the longer the bar or the larger the coefficient, the greater the impact that particular input has on the output that you are analyzing. A positive coefficient, with bar extending to the right, indicates that this input has a positive impact: increasing this input will increase the output. A negative coefficient, with bar extending to the left, indicates that this input has a negative impact: increasing this input will decrease the output.

In Browse Results and with the RiskResultsGraph function, you can get "regression coefficients" or "regression coefficients—mapped values". With the RiskSensitivity function, you can get either of those measures and also the unscaled coefficients that would be used in a regression equation. Please open the attached workbook and click Start Simulation. It shows both types of regression tornados and all three types of coefficients.

Regression Coefficients

The graph labeled simply "regression coefficients" does not express them in terms of actual dollars or other units. Rather, they are scaled or "normalized" by the standard deviation of the output and the standard deviation of that input.

For the output, Input A has a regression coefficient (standard b) of 0.78. That means that for every k fraction of a standard deviation increase in Input A, the output will increase by 0.78k standard deviations (SD). To get from that coefficient to the actual coefficient in terms of units of input and units of output, multiply by the SD of the output and divide by the SD of the input: 0.78 × 12,784 / 1,000 = about 10, and therefore a 1-unit increase of Input A corresponds to a 10-unit increase of the output.
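
In worksheet terms, the same conversion can be written with the @RISK statistic functions (cell references are illustrative):

=0.78 * RiskStdDev(OutputCell) / RiskStdDev(InputACell)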

Regression – Mapped Values

The mapped regression values are scaled versions of the regression coefficients.

Mapped values are in units of output per standard deviation of input. For example, if Input A has a mapped coefficient of 10,023.53, an increase of k standard deviations in Input A produces an increase of 10,023.53 × k units (not standard deviations) in the output.

If the standard deviation of Input A is 1,000 and k = 2, increasing Input A by two standard deviations (1,000 × 2 = 2,000) increases the output by 20,047.06 (10,023.53 × 2) units.

Actual (Unscaled) Regression Coefficients

There is no option to show these on the graph, but you can get them from the worksheet function RiskSensitivity. The attached workbook shows examples in row 26; for more information please see Regression Coefficients in Your Worksheet.

Additional keywords: Sensitivity analysis

See also: All Articles about Tornado Charts

Last edited: 2021-04-19

7.9. Correlation Tornado versus Regression Tornado

Applies to: @RISK, all releases

Why do we have both? Why can a correlation tornado sometimes show bars that aren't on the regression tornado, or vice versa?

Regression and correlation both indicate the direction of the relationship. A positive coefficient means that as that input increases, the output increases; a negative coefficient means that as that input increases, the output decreases. You can say that a regression coefficient shows the strength of the relationship, and a correlation coefficient shows the consistency of the relationship.

Correlation first. Imagine a scatter plot of just this input (horizontal axis) and output (vertical axis). Each point represents the value of that input and output in one iteration. As you sweep from left to right, you are going from low to high values of the input, in order. Now, consider two consecutive points in that sweep. The second point is to the right of the first, so it has a higher input value. But is the second point higher on the graph than the first (larger output value) or lower? In almost any simulation, the points will show some ups and some downs, but let's suppose that for every single pair of points, the point to the right is also higher. In this case you have a perfectly consistent relationship: increasing the input always increases the output. The correlation coefficient is +1 (maximum possible correlation).

Now suppose that the relationship is a little more realistic: usually when you go from left to right, the points are rising, but sometimes the right-hand point is lower than the nearest point to its left. Now the relationship is not perfectly consistent. Usually increasing the input increases the output, but not always. The higher the correlation coefficient, the more consistently increasing the input increases the output; the lower the correlation coefficient, the less often increasing the input increases the output. A lower correlation coefficient means the relationship is less consistent.

Take a situation where, moving from left to right, half the time the second point is higher than the first and half the time it's lower. Increasing the input is just as likely to decrease the output as increase it. Your correlation coefficient is zero.

It works the same with negative correlations. A coefficient of –1 (the lowest possible) means that every single pair of points has the second output lower than the first. The relationship is perfectly consistent: every time you increase the input, the output decreases. As the correlation coefficient gets further from –1 and closer to 0, there is less and less consistency. The output still decreases with increasing input, more often than not, but the lower the coefficient the closer you get to 50-50 increase or decrease and zero correlation.

So the correlation coefficient tells you whether increasing the input generally increases or decreases the output, and how consistent that trend is, but it tells you nothing about the strength of the influence.

So much for correlation. What about regression coefficients? Regression coefficients tell you the size of the effect each input has on the output. For example, a regression coefficient of 6 means that the output increases 6 units for a 1-unit increase in the input; a coefficient of –4 means that the output decreases 4 units for each one-unit increase in the input.

(It's a little more complicated than that in @RISK, because you can get only scaled regression coefficients on a tornado; see Interpreting Regression Coefficients in Tornado Graphs. But you can get the actual regression coefficients in a worksheet; see Regression Coefficients in Your Worksheet.)

For more, see Which Sensitivity Measure to Use?. Also see the "Regression and Correlation" topic in the @RISK Help file.

See also: All Articles about Tornado Charts

Last edited: 2016-04-20

7.10. Interpreting Change in Output Statistic in Tornado Graphs

Applies to: @RISK 6.x/7.x

How do I interpret the double-sided tornado graphs in Quick Reports, Browse Results, and Sensitivity Analysis? What's the default behavior, and how can I change it?

Let's talk first about the default behavior for Change in Output Mean, which is the default statistic, and then we can go into the variations. We'll suppose that you have 2500 iterations in your simulation.

The baseline is the overall simulated mean of that output.

The double-sided tornado has one bar for each selected input, and each bar has numbers at its edges. Each bar is prepared by considering one input and ignoring everything else but the output. (The other inputs are not held constant; their values from the simulation are simply not used in the computation.)

The iterations are first sorted in ascending order of the input's values and binned in that order; then an output mean is computed for the iterations in each bin and shown on the bar in the tornado chart. Details for Change in Output Mean:

  1. @RISK puts all the iterations in order by ascending values of that input. (If an input value occurs multiple times, @RISK sub-sorts by ascending iteration number.)
  2. @RISK divides those ordered iterations into 10 bins or "scenarios". With 2500 iterations, the first bin contains the 250 iterations with the 250 lowest values of this input; the second bin contains the 250 iterations with the 251st to 500th lowest values of this input; and so on to the last bin, which contains the 250 iterations with the 250 highest values of this input.
    Note: The bins all have the same number of iterations. For a uniform distribution that means they all have the same width, but for most distributions the bins will have different widths so that they all have the same number of iterations. Another way to look at it is that the bins have equal probability and the same number of iterations, but most likely not equal width based on the shape of the distribution.
  3. @RISK computes the mean of the output values within each bin.
    Exception for discrete inputs: If every iteration in two or more bins has the same input value, @RISK pools the iterations for those bins, computes the output mean, and assigns the same output mean to each of those bins.
  4. @RISK looks at the ten output means from the ten bins. The lowest of the ten output means becomes the number at the left edge of the bar for this input, and the highest of the ten output means becomes the number at the right edge of the bar.
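
To make the recipe concrete, here is a VBA sketch of steps 1 through 4 for a single input. It is an illustration only, not @RISK's actual code, and it omits the discrete-input exception in step 3. inputVals and outputVals are 1-based parallel arrays, one element per iteration:

Function ChangeInMeanBar(inputVals() As Double, outputVals() As Double, _
                         numBins As Long) As Variant
    Dim n As Long, i As Long, j As Long, k As Long, tmp As Long
    n = UBound(inputVals)
    ' Step 1: order iteration indices by ascending input value
    ' (a stable insertion sort keeps ties in iteration order)
    Dim order() As Long
    ReDim order(1 To n)
    For i = 1 To n: order(i) = i: Next i
    For i = 2 To n
        tmp = order(i): j = i - 1
        Do While j >= 1
            If inputVals(order(j)) <= inputVals(tmp) Then Exit Do
            order(j + 1) = order(j): j = j - 1
        Loop
        order(j + 1) = tmp
    Next i
    ' Step 2: equal-count bins; Step 3: mean of output values in each bin
    Dim sums() As Double, counts() As Long
    ReDim sums(1 To numBins): ReDim counts(1 To numBins)
    For i = 1 To n
        k = Int((i - 1) * numBins / n) + 1
        sums(k) = sums(k) + outputVals(order(i))
        counts(k) = counts(k) + 1
    Next i
    ' Step 4: the bar edges are the lowest and highest bin means
    Dim lo As Double, hi As Double, m As Double
    lo = sums(1) / counts(1): hi = lo
    For k = 2 To numBins
        m = sums(k) / counts(k)
        If m < lo Then lo = m
        If m > hi Then hi = m
    Next k
    ChangeInMeanBar = Array(lo, hi)
End Function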

Different shading, beginning with @RISK 7.5, shows you which end of each bar represents high input values and which represents low input values. Thus you can easily tell which inputs have positive impact on this output (high inputs at the right) and which have negative impact (high inputs at the left).

In @RISK 6.0–7.0, there's no way to see from the graph which bin produced which output mean. For instance, if a Change in Output Mean bar goes from 1500 to 4980, you don't know whether that output mean of 1500 came from the bin with the 250 lowest input values, or the bin with the 250 highest input values, or a bin with intermediate input values. This is where correlation sensitivities or a scatter plot can help, to tell you whether increasing values of an input tend to associate with increasing values of the output, with decreasing values of the output, or with some more complicated trend.

Note: The change in output values does not necessarily indicate any influence of that input on the output. For more, see Change in Output Mean Inconsistent with Sensitivity Tornado.

Variation: number of scenarios (bins or divisions)
The default number of bins is 10, but you can change that. While displaying a tornado graph, click the tornado icon in the row at the bottom, and choose Settings. The first setting, "Divide input samples into ____ scenarios", controls the number of bins (number of divisions, number of scenarios) that @RISK uses to construct the tornado. If you increase the number of bins, @RISK will have more output means, each representing a smaller number of iterations. For most models, that translates to a greater range of output means. For a very simplified example, please have a look at the attached workbook. (You don't want so many bins that each one has only a few iterations; see the next paragraph.)

Variation: number of iterations
With more iterations, from one simulation to the next you'll see less variability in Change in Output Mean, just as with any other output statistic. In other words, output statistics are more stable with more iterations. With fewer iterations, you'll see more variability in all your output statistics. However, from one simulation to the next, output statistics should vary only within normal statistical variability for the number of iterations.

Variation: choice of statistic
You can display a change in output percentile rather than a change in output mean. In this case, the computation is similar but instead of an output mean for each bin @RISK computes an output percentile for each bin. For example, with 2500 iterations and 10 bins, if you select Change in 90th Percentile then @RISK will compute the 90th percentile of the output values within each of the 10 bins (within each group of 250 iterations sorted by input values), and the edges of the bar will be the smallest and largest of those 90th percentiles. The baseline of the output becomes the overall 90th percentile of the simulation.

Setting preferences for double-sided tornado
To set default number of bars for double-sided tornado in Browse Results, Sensitivity Analysis, and Quick Reports, select Utilities » Application Settings » Sensitivity Defaults. If necessary, set Preferred Calculation Method to Change in Output Statistic. You can then change the number of bars and your preferred statistic. (The Change in Output Statistic tornado will never show more than 10 bars. If you set a maximum greater than 10, it applies only to the correlation and regression sensitivity tornado graphs. However, you can still use a worksheet function to retrieve the change in output statistic for lower-ranked inputs.)

To set the preferences for a particular Browse Results graph, click the tornado icon at the bottom of the graph window and select Settings.

See also: All Articles about Tornado Charts

Last edited: 2018-08-13

7.11. Variable Selection in Tornado Graphs

Applies to:
@RISK for Excel 4.x–7.x
@RISK for Project 4.x

How does @RISK decide which variables to include when I create a tornado diagram with regression coefficients?

To choose inputs to include in the tornado diagram, @RISK uses a stepwise multiple linear regression procedure. By default, each variable is accepted or rejected for the regression procedure at the critical value of 3.29 in the F distribution. For a technical reference, please see "The Stepwise Regression Procedure" in Draper and Smith, Applied Regression Analysis (Wiley, 1966).

An example is attached. For comparison, the Sensitivity sheet shows regression coefficients computed by @RISK, and the Regression sheet shows regression coefficients computed by StatTools. The order of variables is different between the two, because the two products read and store data in different ways. However, both sheets show the same coefficients, because they're doing the same type of analysis.

To set the maximum number of tornado bars in @RISK 6.x/7.x, please see Tornado Graph — How to Set Defaults. In @RISK 5.x, use Utilities » Application Settings » Simulation Graph Defaults.

See also: All Articles about Tornado Charts

Additional keywords: Sensitivity analysis

Last edited: 2016-10-20

7.12. Combining Inputs in a Sensitivity Tornado

Applies to: @RISK 5.x–7.x

How can I aggregate multiple inputs in the tornado graph, so that I see the output's sensitivity to the combination instead of sensitivities to the individual inputs?

I might be combining countries in a region, or I might want an NPV instead of individual cash flows, or ...

The RiskMakeInput( ) function lets you do exactly this. If you already have a formula in your workbook that computes the aggregate you're interested in, just wrap it in a RiskMakeInput( ), like this:

=RiskMakeInput(formula, RiskName("name to appear in tornado") )

If you don't already have such a formula, you can create one in an empty cell and wrap it there.
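
For instance, to see the output's sensitivity to an NPV instead of the individual cash flows (the 10% rate and cell references are illustrative):

=RiskMakeInput(NPV(0.1,C5:C14), RiskName("NPV of cash flows"))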

RiskMakeInput( ) tells @RISK that its contents should be treated as an @RISK input distribution for purposes of sensitivity analysis, and RiskMakeInput( )'s precedents should be ignored in sensitivity analysis. Implications:

  • Precedent tracing stops with the RiskMakeInput( ), effectively. @RISK looks at the cells that the formula in RiskMakeInput( ) refers to, and all precedents of those cells. Any distributions that are direct or indirect precedents of any RiskMakeInput( ) function are excluded from all sensitivity calculations for all outputs. Even a RiskMakeInput( ) among those precedents is excluded from sensitivities.

  • The RiskMakeInput need not be a precedent of the output. For example, suppose you have =RiskMakeInput(A1+A2) in cell A3, and an @RISK output in cell A4. If the formula in A4 refers to A1 or A2 or to any of their precedents, then @RISK will treat the RiskMakeInput( ) in A3 as a precedent of the output in A4, even if the formula in A4 doesn't refer to A3 directly or indirectly. In this respect, @RISK treats RiskMakeInput( ) as a precedent even though Excel may not. Another way to look at it is that @RISK treats a RiskMakeInput( ) function as a precedent of an output if the two have any precedents in common.

  • The RiskMakeInput( ) affects all sensitivity measures, including all graphs and the RiskSensitivity( ) and RiskSensitivityStatChange( ) worksheet functions.

For an example, in @RISK 6 or 7 click Help » Example Spreadsheets » Statistics/Probability »  Using RiskMakeInput Function. In @RISK 5, click Help » Example Spreadsheets » RiskMakeInput.xls.

Last edited: 2015-06-21

7.13. Excluding an Input from the Sensitivity Tornado

Applies to: @RISK 5.x–7.x

How can I tell @RISK not to include one or more inputs in tornado charts and other sensitivity results, including the spider graph and the RiskSensitivity( ) and RiskSensitivityStatChange( ) functions?

You might want to do this if you have two inputs that are very highly correlated. This creates multicollinearity, which adds a redundant bar to the Change in Output Statistic tornado and distorts the Regression Coefficients tornado.

The key is the RiskMakeInput( ) function. @RISK excludes all the precedents of a RiskMakeInput( ) from sensitivity analysis, whether or not that RiskMakeInput( ) is a precedent of any output. Thus, all you have to do to exclude P11 and J15 from sensitivity measurements is to put them in a simple RiskMakeInput( ) in a previously empty cell:

=RiskMakeInput(P11+J15)

A nice feature of this approach is that you don't have to make any changes to the formulas in your actual model. Also, if you add or remove rows or move cells, Excel will automatically update the cell references, just as with any other formula. However, the RiskMakeInput( ) itself will now appear as an input in sensitivity functions and graphs. To prevent that from happening, multiply the included expression by 0, so that every iteration value is the same:

=RiskMakeInput(0*(P11+J15))

RiskMakeInput( ) will work as described here, whether Smart Sensitivity Analysis is enabled or disabled in Simulation Settings.

I don't want to recalculate results; I just want to suppress one or more bars of the tornado.

Right-click each bar you want to suppress, and click Hide Bar. If you want to bring the hidden bars back, right-click the graph and select Restore Hidden Bars.

(The Hide command isn't available with the spider graph.)

Last edited: 2017-07-10

7.14. Missing Labels in Tornado Graphs

Applies to: @RISK for Excel 5.x–7.x

@RISK generated a tornado graph, but only some of the inputs are labeled.

By design, all tornado graphs are created at the same size. If there are too many input labels to fit on the y axis, @RISK will show only every second (third, fourth, etc.) label. (This is also true with numeric labels, for instance in histograms.)

To display all labels, simply make the window larger vertically.

See also: All Articles about Tornado Charts

Last edited: 2015-07-06

7.15. Tornado Graph — How to Set Defaults

Applies to: @RISK 6.x/7.x

How do I set the defaults for tornado graphs?  I looked in Application Settings but I couldn't find that section.

To set defaults for the tornado graphs in Browse Results and Quick Reports, click Utilities » Application Settings » Sensitivity Defaults.

Tornado Maximum # Bars can be any whole number from 1 to 16. However, a value greater than 10 will apply only to the correlation, regression, and contribution to variance tornado graphs. The Change in Output Statistic tornado never displays more than 10 bars, even if you specify a higher number. If the limit of 16 (or 10) is too small, you can still use a worksheet function to retrieve correlation coefficients, regression coefficients, change in output statistic, or contribution to variance for lower-ranked inputs.

See also: All Articles about Tornado Charts

Additional keywords: Number of bars in tornado

Last edited: 2017-10-24

7.16. Which Tornado Graph for Quick Reports?

Applies to: @RISK 6.x/7.x

My Quick Reports have three graphs: a histogram, a cumulative S-curve, and a change in output mean (tornado plot). I'd like that third graph to be correlation or regression sensitivities rather than change in output mean. How can I do it?

The format of the tornado graph in Quick Reports is controlled in Application Settings. In @RISK, click Utilities » Application Settings » Sensitivity Defaults.  Change Preferred Calculation Method to Regression Coefficients, Regression Mapped Values, or Correlation Coefficients.  Click OK, then Yes to the confirming prompt. Any Quick Reports you generate after this will use the new format, both in the graph itself and in the table to the right.

Beginning with @RISK 7.0, as an alternative you can simply create a Custom Report.  Before a simulation, click Simulation Settings » View » Automatically Generate Reports at End of Simulation; after a simulation click the large Excel Reports icon in the ribbon. Either way, select Custom Reports in the list, then go to the Custom Reports Settings tab of that dialog. Click Sensitivity Graph, then Edit, and change the type of sensitivity.

Can I limit the number of bars that appear in the tornado charts in Quick Reports?

Yes, this setting is also in Application Settings » Sensitivity Defaults. It's called Tornado Maximum # Bars. See Tornado Graph — How to Set Defaults.

See also: All Articles about Tornado Charts

Last edited: 2015-10-02

7.17. Quick Report for Just One Output

Applies to: @RISK 5.x–7.x

I have quite a few outputs. When I generate Quick Reports, @RISK produces a separate worksheet for each of them. This takes a long time, and really I want to report on just a few outputs. Is there a way to create Quick Reports for only one or a few selected outputs?

You can create a Quick Report for a single output from the Browse Results screen. While browsing that output, click the Edit and Export icon, next to the help icon at the bottom of the window. Quick Report (singular) is the first selection.

If you want a Quick Report for another output, click on that output and then the Edit and Export icon. TIP: You can use the Tab key to move the Browse Results window to the next output, and the next, and the next. Shift+Tab does the same, but in the opposite order.

Last edited: 2017-09-27

7.18. Interpreting Scenario Graphs

Applies to:
@RISK 6.x/7.x

I clicked the "%" icon in Browse Results and selected a scenario. How do I interpret the numbers in the bars?

Let's work with this graph, which was produced by the first scenario in the attached workbook. (The workbook has a fixed random number seed, so that you can run a simulation and get the same results we're using here.)

[Scenario graph: Profit scenario. The Revenue bar, extending to the right, shows 83.84% and 0.99; the Cost bar, extending to the left, shows minus 0.77 and 22.1%.]

"Scenario" is just a name for a subset of the iterations. In this graph, the title tells you that the subset is iterations where the Profit output is above its 75th percentile; in other words, it's the most profitable 25% of the iterations.

Where do the numbers 83.84% and 0.99 in the first bar come from? They are two different measures of the median of the Revenue input in the filtered subset, versus the median of the Revenue input in the whole simulation. Looking at Browse Results for the Revenue input for the whole simulation, we see that the median is very close to $100,000, and the standard deviation is very close to $6,000. To find the median Revenue value in the scenario, we apply an iteration filter for Profit output greater than its 75th percentile. When we do that, the median of the Revenue input for that filtered subset is $105,923.

The decimal measure in the graph is 0.99. It says that the median Revenue in the subset is 0.99 standard deviation above the median Revenue in the whole simulation. Let's check that. The median in the subset is $105,923, which is 5,923 above the $100,000 median in the whole simulation. $5,923 is about 0.99 of the $6,000 standard deviation of the Revenue input in the whole simulation, so that checks with the scenario graph.

The percentage in the graph is 83.84%. It says that the median Revenue in the subset is at the 83.84 percentile of the median Revenue input in the whole simulation. Let's check that. The median in the subset is $105,923. If we disable filtering and type 105,923 in the right delimiter in the Browse Results window for revenue, the percentage shown to the right is 16.2%. 100% minus 16.2% is 83.8%, so the subset median of $105,923 is at about the 83.8 percentile of the Revenue distribution for the whole simulation, and that too agrees with the scenario graph.

Both numbers in the bar are derived from the median of the Revenue input within the subset of iterations where the Profit output is above its 75th percentile within the whole simulation. The decimal 0.99 says that the median Revenue within the subset is 0.99 of a standard deviation above the median Revenue of the whole simulation, and the percentage 83.84% says that the median revenue within the subset equals the 83.84 percentile of Revenue within the whole simulation.
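
If you want to reproduce those two measures in your worksheet, the @RISK statistic functions (Insert Function » Statistic Functions) can do it. A sketch, assuming the Revenue distribution is in cell B2 and using the subset median of 105,923 found above:

=(105923 - RiskPercentile(B2,0.5)) / RiskStdDev(B2)    returns about 0.99
=RiskTarget(B2,105923)                                 returns about 83.84%

RiskTarget returns the cumulative percentile of a value, and RiskPercentile(...,0.5) returns the median of the whole simulation.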

With that under your belt, you can interpret the other bar. The median of Cost within the subset is 0.77 of a standard deviation below the median Cost of the whole simulation, and it's also equal to the 22.1 percentile of Cost in the whole simulation.

That's fine for the graph. What about the numbers in the Output Scenarios window, if I click the "%" icon in the Results section of the @RISK ribbon?

Those are exactly the same deal, though the dropdown boxes use slightly different words. Select Display Inputs ... using: All, and you'll see the same numbers we've just discussed.

Last edited: 2018-09-26

7.19. Creating and Using Report Templates

Applies to: @RISK for Excel 5.x–7.x

New in @RISK 7.0: If you have @RISK 7.0 or newer, and you're just trying to customize the Quick Reports, take a look at the Custom Reports option in Excel Reports. Items there can be edited, deleted, and rearranged; you can also add further items. That may meet your needs, but if not then read on ...

You can use RiskResultsGraph( ) and the @RISK Statistics Functions to place simulation results in any worksheet. They are filled in automatically when you run a simulation. However, the next time you run a simulation the results will be overwritten with new results.

If you want to create a set of custom-formatted results that do not get overwritten, place them in a report template. To create a report template:

  1. Within this workbook, create a new worksheet.

  2. Give the sheet a name that begins with RiskTemplate_, such as RiskTemplate_Projections. (The underscore is required.)

  3. Set up a worksheet the way you want your results, using any combination of @RISK and Excel functions. You can put any valid Excel formulas and @RISK functions in the template sheet, but these are especially useful in reports:

    • To embed means, percentiles, or other statistical results, use the statistic functions. In @RISK, click Insert Function » Statistic Functions » Simulation Results.

    • To create graphs of several types, use the RiskResultsGraph function. Click Insert Function » Other Functions » Miscellaneous; or click into an empty cell, type =RiskResultsGraph( including punctuation, and press Ctrl+A.  The RiskResultsGraph function provides limited customizations for the supported graph types; please see the help text for details.  (If you want to do more customization, use the RiskGraph object in Visual Basic. For more information, please click Help » Developer Kit (XDK). In the submenu, Automation Guide gives a brief introduction in the topic "Displaying Graphical Results of a Simulation"; @RISK XDK Reference documents all objects and methods in detail, and the Examples include several on creating graphs and reports. You need the Professional or Industrial Edition, release 6.2 or later, for the RiskGraph object.)

  4. In Simulation Settings » View, check (tick) "Automatically generate reports at end of simulation". On the selection dialog that opens, check (tick) "Template Reports".

Each time you run a simulation, @RISK will create a copy of any template sheets and will put that simulation's results in the copy. The original template, and any previous results created from the template, are undisturbed.
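
As a simple illustration, a template sheet might contain formulas like these (the sheet name and cell reference are ours, not from the example workbook):

=RiskMean(Model!C10)
=RiskPercentile(Model!C10, 0.9)

After each simulation, the copy of the template shows the mean and 90th percentile of the output in Model!C10 for that run.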

@RISK includes an example showing how to use a report template. Click Help » Example Spreadsheets » Other @RISK Features and select RiskTemplate.xlsx.

See also: Template Report Contains Formulas in Place of Numbers

Last edited: 2015-08-07

7.20. Excel Themes in @RISK Graphs and Reports

Applies to: @RISK 7.5 or newer

How do I get @RISK graphs and reports to use Excel themes?

@RISK's own windows are formatted by @RISK and don't use themes. When you use the Chart in Excel command to place a graph in an Excel worksheet, you can choose Excel format, or Image (Picture in some dialogs). Choose Excel format, and then the graph in the Excel sheet will update automatically when you change themes.

Here are some hints for particular reports:

Custom Reports: When choosing each custom report, choose Edit and change the format from Image to Excel Format. @RISK will remember this as a default, so you won't have to change this for the same report in other workbooks. Tables in Custom Reports always use Excel themes for fonts and colors; graphs in Custom Reports will use Excel themes if the graph is in Excel format.

Quick Reports: Tables always use Excel themes for fonts and colors. Graphs are always images (static pictures) and will not respond to Excel theme changes.

RiskResultsGraph: The fourth argument (Excel format) must be TRUE, and in addition you must set a System Registry key. Under HKEY_CURRENT_USER\Software\Palisade\@RISK for Excel\7.0\Application Settings\Reports, create a string value GraphThemeOrStandardColor if it doesn't already exist. If the data for that string value is Theme, and you've specified Excel format in RiskResultsGraph, then the generated graph will use Excel themes. If the string value is set to StandardColor (or the GraphThemeOrStandardColor string value doesn't exist), then the generated graph will use standard colors and will not respond to Excel theme changes. Even without that string value in the System Registry, if you selected Excel format in RiskResultsGraph then you can use all of Excel's graph editing tools on the generated graph. If you didn't select Excel format in RiskResultsGraph, then @RISK generates a static image that can't be edited.
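
If you prefer to set that registry value from VBA rather than with the Registry Editor, here is a short sketch; it writes only the value described above, once per user profile:

Sub SetGraphThemeValue()
    Dim sh As Object
    Set sh = CreateObject("WScript.Shell")
    sh.RegWrite "HKCU\Software\Palisade\@RISK for Excel\7.0\Application Settings\Reports\GraphThemeOrStandardColor", _
                "Theme", "REG_SZ"
End Sub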

Graphs you create in VBA: Use the ChartInExcelFormatEx or ChartInExcelFormatEx2 method of the RiskGraph object. These new functions are not yet documented, but a simple example workbook is attached to this article. When writing your own code, use Visual Basic Editor's auto-complete to help you fill in the function arguments.

Last edited: 2017-02-02

7.21. Cell References in Tornado Graphs (RiskResultsGraph)

Applies to: @RISK 5.x/6.x

I used RiskResultsGraph( ) to make a tornado graph in my worksheet. The bars were labeled with both the names of the inputs and the worksheet names and cell references. How can I tell RiskResultsGraph( ) to show just the name of each input, not its location?

RiskResultsGraph( ) does place both input names and locations at the left of the bars in some releases of @RISK, and there's no way to change this.

If this is a one-time need, the easy way is to use Browse Results for the output in question. In tornado graphs in the Browse Results window, only the input names appear, not the input locations. Perform any customizations you want, then right-click the graph and select Copy Graph. Click at the desired location in your worksheet and press Ctrl+V, or right-click and select Paste Special.

If you want an automated solution, the Quick Reports don't show cell references in the tornado graphs. The same is true of the Custom Reports introduced in @RISK 7.0.0.

Beginning in @RISK 6.2.0, you can use Visual Basic for Applications to create your tornado and place it in your worksheet. These tornado graphs label each bar with the name of the input, not its location, and you can do many types of customization. A very simple example is attached.

  • For an introduction to making @RISK graphs in VBA, click Help » Developer Kit (XDK) » Automation Guide and look at the section "Displaying Graphical Results of a Simulation".
  • You need to set references when you have code that automates @RISK. See "Setting Library References", earlier in the Automation Guide.
  • You can place the code that generates the graph in a macro that @RISK will execute at the end of simulation, and register the macro in Simulation Settings » Macros.

See also: All Articles about Tornado Charts

Last edited: 2015-07-06

7.22. Custom Color Selection in Graphs

Applies to: @RISK 6.2/6.3/7.x, Professional and Industrial Editions

I right-click a graph and select the Curves tab. When I try to change the color, the Define Custom Color button is grayed out. How do I unlock it?

That button is grayed out by design, but if you want a color different from the available colors you can create the graph in VBA. Please click Help » Developer Kit (XDK) » Automation Guide for a friendly introduction to controlling @RISK with VBA. The Automation Guide contains sample code for generating reports, though not for setting colors.

For setting colors specifically, you need the CurveColor method of the RiskGraph object. It's in the @RISK XDK Reference in the Help » Developer Kit (XDK) menu, but probably it's easier just to look at an example. Please take a look at the attached file, which is a modified form of our example spreadsheet found at Help » Developer Kit (XDK) » Examples » RISK XDK – Creating Graphs 2.xlsx. In this example, on the Model worksheet, click Run Simulation and then Distribution Graphs of Outputs. All of the graphs are placed on the Graphs1 worksheet, in a 5×4 array.

Press Alt+F11 to view the code, and look at the Graphs1 subroutine. Before calling the ImageToWorksheet method, use CurveColor to set the color of the curve. (CurveColor takes an index argument to let you use different colors when a graph contains more than one curve.) RGB is a built-in VBA function that lets you set red, green, and blue, in that order, to any value from 0 to 255 inclusive.

Last edited: 2015-08-25

7.23. Multiple Browse Results Windows

Applies to:  @RISK 5.x–7.x

I'd like to compare several inputs and outputs by having several Browse Results windows on screen at the same time. Can I do that?

Here are two techniques.

METHOD A: Change the regular callout window for Browse Results to a floating window by clicking the icon at the lower right; see attached screen shot.  You can then click Browse Results again and select another output or input.

METHOD B: Paste graphs into an Excel sheet.  Chart in Excel creates a new worksheet for each graph, but you can place more than one graph — Browse Results or other graphs — in the same worksheet as follows:

  1. When you have the graph the way you want it, Edit and Export (third icon in the row of small icons at the bottom) and select Copy Graph, or Copy Graph and Grid if you want the statistics grid also.
  2. Click into the worksheet and press Ctrl-V to paste the graph or graph and grid.  (You don't need to close the Browse Results window first.)

Caution with @RISK 5.x/6.x: The pasted graphs will be correct in the Excel worksheet, but if you try to convert them to another format, such as PDF, you may lose details. See Pasted Graph Loses Some Details. @RISK 7.x uses a different technique and does not have this issue.

Last edited: 2017-03-30

7.24. "OnScreen Control" app causing display issues in @RISK

Applies to: @RISK 7.6

Issue: Using @RISK while the OnScreen Control app is running causes the error: "Run-time error '91': Object variable or With block variable not set"

After extensive testing on our end, we confirmed that this application causes display issues in @RISK version 7.6 and crashes if the Browsing feature is enabled in the Browse Results window. These issues do not appear in version 8.x. To fix the problem, you have three options:

  1. Upgrade to version 8.
  2. Turn off the OnScreen Control app before running v7.6.
  3. Run @RISK v7.6 as administrator (right-click and select Run as administrator) so that the OnScreen Control app doesn't interfere with @RISK's windowing, which will then run with higher privileges.


Last Update: 2020-04-27

7.25. Setting DPI for Images Generated by @RISK

Applies to: @RISK 6.x/7.x

I need a particular DPI (dots per inch) setting for publication. When I click Chart in Excel, the DPI doesn't seem to be one of the options. How can I set it?

Really this question is not so much about customizing the DPI or PPI setting, as it is about specifying the size of the image in pixels. Once you generate an image that's big in terms of the pixel width and height (and still sharp), you can change the DPI with a tool like Photoshop or the free Irfanview. If you don't want to install software you can use a Web page like Change DPI of Image. The DPI setting will tell the printer or publisher how big the printed image will be. But of course most pieces of software have other ways of specifying the printed size, without changing DPI. So DPI may not be the primary concern here.

The @RISK GUI (Graphical User Interface) doesn't have an option for specifying the sizes of images in pixels. This shouldn't be a big problem when handling one graph or a handful of them, since you can resize a picture in Excel manually and get one with a bigger pixel size that is as sharp as the original. This is generally true when you insert a raster/bitmap image into Excel—you resize it and it stays sharp, as if it were a vector image.

Attached is a sample @RISK graph that we resized in Excel, by dragging the corner, to get an image with about 2800x1900 pixel size. We then changed the DPI from 96 to 300 using the Web page mentioned above. You can see that the image is quite sharp.

If I can't specify image size in @RISK, can I do it in VBA?

Yes, you can do it in VBA (Visual Basic for Applications), if you have @RISK Professional or Industrial. Click Help » Developer Kit (XDK) » XDK Reference, and search for the ImageToFile method. The optional third and fourth arguments specify the width and height in pixels, with 600 × 400 as default.
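
A hedged sketch of that call, assuming you have a RiskGraph object g (see the Automation Guide) and that the second argument can be left at its default; the file path and the assumption about the second argument are ours, so check the ImageToFile entry in the XDK Reference:

g.ImageToFile "C:\Temp\MyGraph.png", , 2800, 1900    ' third and fourth arguments: width and height in pixels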

Last edited: 2017-02-22

7.26. More Than 10 Overlays?

Applies to:
@RISK 5 and 6

Question:
It seems that @RISK will only let me overlay 10 variables on my results graph. Is there any way to get more?

Response:
No, not as overlays. If you want more than 10 variables on one graph, you can do a Summary Trend graph to get key numbers from each distribution.

Last edited: 2014-06-20

7.27. Automatic Overlays from Multiple Worksheets

Applies to: @RISK for Excel 5.x–7.x

I want to have several inputs or outputs overlaid on one graph, to be displayed automatically at the end of a simulation. How can I do it?

You can embed overlay graphs in your worksheet with the RiskResultsGraph( ) function. The graphs will be updated automatically when you run a simulation, and the latest versions will be stored with your workbook even if you don't store simulation results. Please download the attached example (about 130 KB).

To pull together multiple distributions in one overlaid graph, you need a contiguous set of cells (row or column), defined as an output range with a common name. If the cells you want to graph are in different parts of a worksheet, or even in different worksheets, you can create a range of contiguous cells that are set equal to the cells you actually want to graph, and then designate the range as an @RISK output. The example has four such ranges, in column C.

Some additional points about this example:

  • The RiskResultsGraph( ) functions are hidden behind the graphs. To make them visible, delete the graphs, use Excel's search to find RiskResultsGraph in formulas, or press F5 (Go To) and enter F3, O3, F24, or O24.
  • The four graphs illustrate four possible formats for the results. (Graphs 1 and 4 are both histograms, but the first is probability density and the last is relative frequency.)
  • RiskResultsGraph( ) uses default graph titles, but you can set a title as in cell F3 in the example. Some limited customizations are available; search RiskResultsGraph in the help file.
  • The arguments to RiskOutput, in C3:C14, make all of them part of an output range. Note the comma immediately after the opening parenthesis.
  • The INDIRECT( ) functions in C3:C14 make the example more general by using worksheet names that are in worksheet cells rather than embedded in the formula. This inhibits Smart Sensitivity Analysis. Therefore, for your specific model, you probably want to replace the INDIRECT( ) functions with plain cell references, like =Sheet11!C45. If you do use INDIRECT( ) functions, you will want to disable Smart Sensitivity Analysis in Simulation Settings » Sampling, which has been done in this example. More about this issue is in the article Found invalid formula ... Continue without Smart Sensitivity Analysis?
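
For instance, the first cell of such a contiguous range might contain a formula like this (the sheet, cell, and range name are illustrative; note the empty first argument, as mentioned above):

=RiskOutput(,"Overlay group",1) + Sheet11!C45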

An alternative is available in @RISK Professional and Industrial releases 6.2 and newer, if you're willing to use Visual Basic for Applications. The GraphDistribution method takes an Array-type argument that lets you specify non-contiguous cells for overlays. The Automation Guide, in the Help » Developer Kit (XDK) menu, introduces you to VBA programming and gives a couple of examples of that function; complete documentation is in the @RISK XDK Reference in the same menu.

Last edited: 2015-07-14

7.28. Sharing @RISK Graphs with Colleagues Who Don't Have @RISK

Applies to: @RISK 7.x
(Several options are also available in earlier @RISK releases.)

How can I output an @RISK histogram and/or tornado graph so that someone without @RISK can see them?

There are several possibilities, but the basic idea is that if you put a graph in the workbook, it is permanently there and independent of @RISK, so that a colleague can see it even if they don't have @RISK. The graph will be static, in that changing numbers in the workbook won't change the graph. Most graphs give you a choice of Excel format or a picture. An Excel-format graph can respond to themes, and you can edit its axes or titles or change colors. A picture is just that, a static image.

Here are some methods to place graphs in your workbook:

  • Swap Out @RISK. In @RISK 7.0 and newer, you have the option to embed thumbnail graphs in the workbook.

  • Use VBA (requires @RISK 6.2 or newer, Professional or Industrial Edition). To get started, see the Automation Guide under Help » Developer Kit (XDK) in the @RISK menu. For an example, see Placing Graphs in an Existing Worksheet with VBA.

  • Use the RiskResultsGraph function. To access the function, click into an empty cell, type =RiskResultsGraph and press Ctrl+A. The function has many arguments, so use the scroll bar at the right to see all of them. (Only the first two arguments are required.) For more complete help text, click the link at lower left, Help on this function.

  • Create your graph in Define Distributions, Browse Results, or another menu selection. Click the Edit and Export icon, which is near the left end of the row of tiny icons at the bottom of the graph window. Select Chart in Excel. In the Chart Setup dialog, choose either Excel Chart or Picture. The picture will be more faithful to the graph, but the Excel Chart option lets you use Excel themes and edit the properties of the generated graph.

  • Create your graph as before, then click Edit and Export and either Copy Graph or Copy Graph and Grid. That places the graph on the Windows clipboard as a picture, and you can paste it with Ctrl+V anywhere you like—into an Excel sheet, email, Word document, etc.

  • If you want the graph as a separate file, rather than embedded in an Excel sheet, click Edit and Export » Save Image File. You can choose from several image formats: BMP, JPG, PNG, and EMF.

See also: Sharing @RISK Models with Colleagues Who Don't Have @RISK

Last edited: 2017-06-06

7.29. Creating Scatter Plots with Inputs from Multiple Sheets

Applies to: @RISK 8.x

How can I create a scatter plot in @RISK using inputs that appear in different sheets of the same workbook?

In @RISK version 8, you can quickly create a scatter plot of two or more inputs by clicking Explore > Scatter Plot and then adding cells in the dialog box. Note that the dialog comes prefilled with the cells that contain @RISK functions.

There is another option for creating scatter plots that allows you to choose inputs from different sheets. Click Explore > Results Summary to bring up a window that summarizes all the inputs and outputs in the model. Then you can select multiple inputs simultaneously from anywhere in the workbook and click the Explore button at the bottom of the window to create a scatter plot. The same method can also be used to create other graph types, such as a summary box plot.

Last edited: 2021-07-29

 

7.30. Reproducing the Data from Spider Graphs

Applies to: @RISK 6.x–8.x

@RISK has a graphing option known as a spider graph that shows how various inputs affect a given output. How can I reproduce this data in Excel?

The attached workbook gives an example of how to do this. Here are the basic steps for the calculations that create the shape of the graph.

  1. Run the simulation.
  2. Use the RiskData function to extract the simulation samples (a formula sketch follows this list). For more on this function, see Placing Iteration Data in Worksheet with RiskData().
  3. Use Excel sorting to order the samples.
  4. Calculate the percentiles for each input distribution and assign a bin number to each sample, depending on the number of scenarios in the spider (e.g., 10). For more on bins, see Interpreting Change in Output Statistic in Tornado Graphs.
  5. Finally, calculate the average for each bin in the samples and assign it to the Change in Output column, which is the Y axis of the Spider Graph.
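
For step 2, a minimal sketch; the cell address, range size, and the two-argument form are assumptions, so check the linked RiskData() article for the exact syntax:

=RiskData(A2, 1)

entered as an array formula over a range with one cell per iteration, where A2 holds the input distribution and 1 is the simulation number.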

Last edited: 2021-03-04

8. Advanced Analyses in @RISK

8.1. Fix Distribution to Base Value When Not Stepping

Applies to: @RISK 5.x–7.x

In Advanced Sensitivity Analysis, on the Input Definition dialog, there is a check box, "Fix distribution to base value when not stepping". What does this mean?

You specify one output and one or more inputs in the Advanced Sensitivity Analysis dialog box. This check box matters only when you specify more than one input.

When multiple inputs are specified, Advanced Sensitivity Analysis begins with a set of simulations to determine the impact of the first input on the output. In each of those simulations, the first input has a particular value, which is one of the steps that you specified on the Input Definition screen. So for this first set of simulations as a group, we say that @RISK is stepping the first input.

Then Advanced Sensitivity Analysis continues with another set of simulations to determine the impact of the second input on the output. In this set, @RISK is stepping the second input. For each input, @RISK runs one simulation for each step you specified on the Input Definition screen for that input. After all the specified inputs have been stepped independently, @RISK prepares the sensitivity reports.

The question is, while @RISK is stepping a given input, what are all the other inputs doing? There are two possibilities:

  • If you leave the box "Fix distribution to base value when not stepping" empty, then when @RISK is stepping another input this one will take on a different random value at each iteration, just as it does during a regular simulation.
  • If the box is checked (ticked), then when @RISK is stepping other inputs this one is held constant at its base value. What is the base value? Usually it is the expected value (mean) of the distribution. But if you specified a static value on the Define Distribution dialog or in a RiskStatic( ) property function within the distribution, then the base value is that specified static value. (The setting "Where RiskStatic is not defined, use" in Simulation Settings does not determine the base value of a distribution during Advanced Sensitivity Analysis.)

The above applies only to inputs that are selected in the Advanced Sensitivity Analysis. During the simulations for that analysis, any @RISK inputs that are not selected as inputs in the Advanced Sensitivity Analysis dialog will vary randomly, just as they do during a regular simulation. You do have the option of locking any of them to prevent them from varying, but then you are doing an analysis without taking into account the uncertainties that you programmed into your model.

Last edited: 2015-07-14

8.2. Stressing Each Input in Its Own Simulation

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I need to run a Stress Analysis in @RISK. In the Option dialog, I have a choice of "Stress Each Input in Its Own Simulation" or "Stress All Inputs in a Single Simulation". If I choose the first option, what are the other inputs doing while @RISK is running those separate simulations? Do the other inputs run the full range of their distributions, or are they fixed to single numbers?

Either way, you get one simulation called the baseline, where all variables vary according to their distributions.

  • With "Stress Each Input in Its Own Simulation", you also get one simulation per designated stress input, where one input is stressed while the others vary according to their distributions.

  • With "Stress All Inputs in a Single Simulation", you get a total of two simulations, the baseline plus a second simulation where all inputs are stressed simultaneously. The two results are then compared.

Take a look at the attached example. A stress analysis is run with two stress ranges: Revenue (cell C3) is restricted to the lower 5% of its distribution, and Cost (C4) is restricted to the upper 5% of its distribution.

  • With "Stress Each Input in Its Own Simulation", you get three simulations:

    1. Baseline — Revenue varies through its full range, and so does Cost (not shown).
    2. C3 0% to 5% — Revenue varies only through the lower 5% of its distribution, while Cost (not shown) varies through its full distribution.
    3. C4 95% to 100% — Cost (not shown) is restricted to the upper 5% of its distribution, while Revenue again varies through its full range.
  • With "Stress All Inputs in a Single Simulation", you get two simulations:

    1. Baseline — Both inputs vary through their full range. (Only Revenue is shown.)
    2. Stress Analysis — In this single simulation all inputs are restricted to their stress ranges. (Again, only Revenue is shown.)

By the way, to prepare these graphs after running the Stress Analysis, we clicked on Revenue (cell C3) and then the Browse Results icon. Then we clicked the # icon in the row at the bottom to select each simulation in turn. Right-clicking on the graph and selecting Copy, we then pasted each graph into the worksheet.

Last edited: 2015-07-14

9. @RISK Performance

9.1. How Long Did My Simulation Run?

Applies to: @RISK 4.x–7.x

Is there an @RISK function that will show the duration of a simulation?

All releases of @RISK show the simulation run time in the Quick Reports as "simulation duration".

In @RISK 6.x/7.x, place the function RiskSimulationInfo(2) in a worksheet cell, and after the simulation the cell will contain the simulation run time in seconds.

In @RISK 4.x/5.x, there's no specific @RISK function, but you can capture the simulation run time in your worksheet as follows:

  1. Put =NOW( ) in one cell, such as E10.
  2. Put this formula in another cell:
    =24*60*60*(RiskMax(E10)-RiskMin(E10))

At the end of simulation, that cell will contain the number of seconds that the simulation took. (The 24*60*60 converts fractional days to seconds by multiplying by the number of hours in a day, minutes in an hour, and seconds in a minute.)

Additional keywords: SimulationInfo, simulation info

Last edited: 2015-07-14

9.2. For Faster Simulations

Available in Spanish: Para simulaciones más rápidas
Available in Portuguese: Para simulações mais rápidas

Applies to: @RISK 5.x–8.x

What can I do to speed up @RISK's performance?

Here's our list of things you can do within @RISK, and things you can do outside @RISK. The ones that make the biggest difference are marked in bold face.

How should I set up Excel?

  • For really big simulations, switch to 64-bit Excel. All 32-bit Excels are limited to 2 GB of RAM and virtual memory combined. 64-bit Excel doesn't have that memory limit. (You need at least @RISK 5.7 if you have 64-bit Excel. Please contact your Palisade sales manager if you need to upgrade.) Note: 64-bit Excel lets you run larger simulations, but it is not intrinsically faster than 32-bit Excel; please see Should I Install 64-bit Excel? for more information about the trade-offs.

  • Enable multi-threaded calculations (Excel 2007 and newer). In Excel 2010, File » Options » Advanced » Formulas » Enable multi-threaded calculations. In Excel 2007, click the round Office button and then Excel Options » Advanced » Formulas » Enable multi-threaded calculations.
    Note: Though this is a global Excel option, it can also be changed by opening a workbook where the option was set differently. Check the status of the option while your particular workbook is open.

  • If your computer is running Excel from a network, install Excel locally instead. This eliminates slow-down due to network traffic. (If you're running @RISK on a terminal server, this is not a problem because everything happens on the remote computer.)

  • To make Excel start faster, remove any unnecessary add-ins.

    • In Excel 2010 and newer, click File » Options » Add-Ins. At the top of the right-hand panel, notice whether the unneeded add-ins are Excel add-ins or COM add-ins. Then at the bottom of the right-hand panel, after Manage select Excel or COM and click Go.
    • In Excel 2007, click the Office button then Excel Options » Add-Ins. At the top of the right-hand panel, notice whether the unneeded add-ins are Excel add-ins or COM add-ins. Then at the bottom of the right-hand panel, after Manage select Excel or COM and click Go.
    • In Excel 2003 or older, click Tools » Add-Ins. (Only Excel add-ins will be displayed; COM add-ins will not be visible.)

    Remove the check mark from any add-ins that will not be needed during your @RISK session. The next time you load Excel and @RISK after doing this, any slow-down due to loading extra add-ins should be eliminated.

  • If you have Excel 2007, install the latest service pack, or upgrade to Office 2010 or later. Office 2007 Service Pack 1 improved Excel's speed and fixed some bugs; Service Pack 2 fixed further bugs and improved Excel's stability. After you install an Office 2007 Service Pack, run a repair by following these instructions: Repair of Excel or Project.

  • Follow Palisade's and Microsoft's suggestions in Getting Better Performance from Excel and Recommended Option Settings for Excel.

Any hardware suggestions?

  • Add RAM, unless your computer already has plenty. Insufficient RAM is probably the biggest single bottleneck on a simulation. Watch your hard-drive usage light while a simulation is running. If the computer is constantly reading or writing disk during the simulation, you should consider increasing your system's memory. To estimate memory needs, see Memory Used by @RISK Simulations and Hardware Requirements or Recommendations.

  • Enable Turbo Boost, Turbo Core, Power Tune, or similar if available on your computer. This is not overclocking (which we don't recommend). Turbo Boost and the others are technologies built into some CPUs by Intel, AMD, and others to adjust processor speed dynamically, depending on your computing needs from moment to moment. Please consult your computer's documentation to learn whether you have this technology, how to find its current status, and how to enable it if it's not currently enabled.

What can I do in Windows?

  • For larger simulations, you may want to override the default page file size. See Virtual Memory Settings.

  • The temporary folder (%TEMP% environment variable) should be on the local computer, not in a network location. If you're not sure where it is, see Opening Your Temp Folder.

  • Clean out your temporary folder. See Cleaning Your Temp Folder. (This may also solve some problems with Excel crashing.)

  • Make sure you have plenty of free space on your disk. Applications, and Windows itself, get very slow without enough disk space. (Defragmenting your disk can't hurt, but in recent versions of Windows it's unlikely to have enough of an effect that you'd notice.)

  • Close other applications and background services, such as Windows Indexing Service. Other programs take CPU cycles from @RISK. Also, by taking up physical memory they may force Excel and @RISK to swap more information out to disk, which can really slow down a simulation.

  • Tell your antivirus program not to scan .XLS or .XLSX files. (Use this setting with caution if you run .XLS files that come to you from someone else.)

How should I structure my @RISK model?

In our experience, poorly structured models are the most common cause of poor performance. So it's worth spending time to structure your model efficiently.

  • If your RiskCompound( ) distributions contain only cell references, with the actual distributions in other cells, the simulation can run noticeably faster if you embed the actual severity distributions within RiskCompound( ). (This is not important for the frequency distributions, only the severity distributions.) The more RiskCompound( ) functions in your model, the more difference this will make; the same is true if you have large frequencies in even a small number of RiskCompound( ) functions. A before-and-after sketch appears at the end of this list. See Combining Probability and Impact (Frequency and Severity) for more on RiskCompound( ).

  • Fix all invalid correlation matrices (non-self-consistent matrices). If your @RISK distributions reference any invalid matrices, you'll have to answer a pop-up every time you simulate, and @RISK will have to take time to find valid matrices every time. The time to do this increases as a power of the number of rows in the matrix, so your simulation will take a lot of extra time if you have any medium to large correlation matrices that aren't valid. See How @RISK Tests a Correlation Matrix for Validity for how you can check matrix consistency, and How @RISK Adjusts an Invalid Correlation Matrix for how you can adjust an invalid matrix once and for all.

  • Remove extraneous elements from your model:

    • Consider removing unnecessary graphs and tables from your model. These may take significant time to calculate and update.
    • Eliminate external links if possible, particularly links to a network resource.
  • Eliminate linked pictures. If a workbook contains linked pictures, Excel's performance in updating cells can slow to a crawl. @RISK may appear to crash or hang, but actually it is just waiting for Excel to finish the cell updates.

  • If you have @RISK functions inside Excel tables, move them outside. For details, please see Excel Tables and @RISK.

  • Avoid unneeded INDIRECT, VLOOKUP, HLOOKUP, and similar functions. In our experience, these are rather slow, and if your model contains a lot of them it will definitely run slowly. VLOOKUP and HLOOKUP can be replaced with INDEX+MATCH functions. There are great resources on the Web, and you'll find them with this Web search:

    excel "index function" "match function"

  • Don't save simulation results in your workbook, or if you do then clear them before starting the simulation. Saved results will cause Excel to take longer to recalculate each iteration; how much difference this makes depends on the size of the results. See Excel Files with @RISK Grow Too Large.

  • Open only the workbook(s) that are part of the simulation. During a simulation, in every iteration Excel recalculates all open workbooks. If you have extraneous workbooks open, it can slow down your simulation unnecessarily.

  • See also: Microsoft's article How to clean up an Excel workbook so that it uses less memory (applies to Excel 2013 and 2016).
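
Here is the RiskCompound( ) before-and-after sketch mentioned above; the cell addresses and the lognormal severity distribution are illustrative assumptions:

Slower, with the severity distribution in its own cell:

    C2:  =RiskLognorm(10000,5000)
    D2:  =RiskCompound(B2,C2)

Faster, with the severity distribution embedded (the frequency in B2 can stay a cell reference, since only the severity matters for this speed-up):

    D2:  =RiskCompound(B2,RiskLognorm(10000,5000))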

What do you recommend for @RISK simulation settings?

  • General tab: Set Multiple CPU Support to "Automatic" or "Enabled" if you have dual core, quad core, etc. Starting with @RISK 5.5, this is available in all editions of @RISK, not just Industrial. If you have @RISK Professional or Standard, and you have a quad core or better, you will probably see significant speed improvements if you upgrade to Industrial.

    In rare situations, if you have a large number of CPUs, the overhead of parallel processing might exceed the time it saves. Or, with all those CPUs sharing a fixed amount of RAM, you may find that virtual memory gets used much more and the disk swaps slow down the simulation. In this case, reducing the number of CPUs available to @RISK may help. Unfortunately, there's no way to predict this, and you just have to experiment after you've tried the other tips. For instructions, please see CPUs Used by @RISK 7.x or CPUs Used by @RISK 4.x–6.x.

    Simulations with Microsoft Project cannot use multiple CPUs. If your simulation settings have Multiple CPU Support: Enabled, it is automatically changed to Disabled when you click Start Simulation on a Project simulation.

  • View tab: Deselect Demo Mode. Uncheck Update Windows During Simulation. Uncheck Show Excel Recalculations.

  • Sampling tab:

    • Set Sampling Type to "Latin Hypercube" (the default). Particularly if you are testing for convergence, this will make the simulation faster. Exception: if you select iterations in the millions, Latin Hypercube will slow down dramatically and Monte Carlo will be faster. However, it would be an extremely rare model that would need millions of iterations in Latin Hypercube.
    • Set Collect Distribution Samples to "None" or "Inputs Marked with Collect". For the implications, see Collecting Input Distributions in the article Out of Memory.
    • Disable Smart Sensitivity Analysis. This won't make the iterations any faster, but it can significantly speed up the start of the simulation. For the meaning of Smart Sensitivity Analysis and the implications of disabling it, see Precedent Checking (Smart Sensitivity Analysis).
    • Set Update Statistic Functions to "At the End of Each Simulation". (This is the default in @RISK 5.5 and above, but is not available in @RISK 5.0.) This will greatly increase the speed of your simulation, if you have a lot of statistics functions such as RiskMean and RiskPercentile. Apart from the speed increase, there may be good logical reasons to choose this setting; however, your simulation results may differ from earlier versions of @RISK. Please see "No values to graph" Message / All Errors in Simulation Data.
  • Convergence tab: Consider enabling convergence testing. If you are running more iterations than necessary, you're just wasting simulation time. On the other hand, convergence testing itself involves some minor overhead. Try convergence testing and see whether the simulation converges in significantly less time than your fixed number of iterations took. If so, leave convergence testing turned on; otherwise, go back to your fixed number of iterations. (When you enable convergence testing, also set the number of iterations to "Auto" on the General tab and select "Latin Hypercube" on the Sampling tab.)
    With convergence monitoring, by default, the simulation will stop after 50,000 iterations even if not all outputs have converged. More Than 50,000 Iterations to Converge explains how you can override that limit.

What can the progress window tell me during simulation?

Take a look at the number of iterations per second. It should increase during the first part of the simulation, and then stay steady, assuming no other heavyweight Windows programs start up.

But sometimes, if Excel doesn't have focus, the number of iterations per second will gradually fall, as the simulation runs slower and slower. In this case, give focus to Excel by clicking once in the title bar of the Excel window. You should see the number of iterations gradually rise to its former level.

This doesn't always happen, and it's not clear exactly what interaction between Excel and Windows causes it when it does happen, but giving focus to Excel usually reverses a falling iteration rate. (If that doesn't work, try giving focus to the simulation progress window by clicking in its title bar.)

What about using @RISK with projects?

This applies to @RISK 6.x/7.x only, Professional and Industrial Editions.

  • Upgrade to the latest @RISK if you have @RISK 6.0. The accelerated engine introduced in @RISK 6.1 makes many simulations with projects run dramatically faster, and there were further improvements in later versions.

  • Use the accelerated engine. In Project » Project Settings » Simulation, ensure that the simulation engine is set to automatic, and @RISK will then use the accelerated engine if your model is compatible with it. See the topic "Simulation Engine" in the @RISK help file for a list of fields that are compatible with the accelerated engine.

    • If you see that @RISK still uses the standard engine, your model contains features that are not compatible with the accelerated engine. Click the Check Engine button on the same dialog, and @RISK will list the problem features in your model. (Also see the topic "Check Engine Command" in the @RISK help file.) If you can change those without losing essential functionality, your simulation should run much faster.
    • If @RISK still uses the standard engine, click View » Simulation Settings and turn off "Demo Mode" and "Show Excel Recalculations".
  • If you have experience with @RISK 4, you may have used probabilistic branching. This is intrinsically time consuming because of the changes that have to be made to the predecessor/successor relationships each iteration, and reset prior to the next iteration. In @RISK 6.x/7.x, these issues are magnified by the communication between Microsoft Excel and Microsoft Project. To incorporate risk events, consider a risk register rather than probabilistic branching. For examples, click Help » Example Spreadsheets » Project Management.

  • If you have Project 2007, switch to Project 2010 or newer. Project recalculations are slowest in Project 2007; see Simulation Speed of @RISK with Microsoft Project. Project 2003 is fastest if you have @RISK 6.x, but @RISK 7.x requires Office 2007 or newer.
  • On the Project Settings » Simulation tab, if you don't need the information for Calculate Critical Indices, Calculate Statistics for Probabilistic Gantt Chart, and Collect Timescaled Data, uncheck those boxes.

  • On the Project Settings » Simulation tab, set Date Range for Simulation to "Activities After Current Project Date" or "Activities After Project Status Date". This will make your simulation run faster because @RISK won't simulate tasks that have already completed.

  • Don't re-import .MPP files. You only need to import the .MPP file once, and store the Excel workbook when @RISK prompts you. After that, in @RISK don't open the .MPP file directly. When you open the Excel workbook associated with your project, @RISK will automatically connect to the linked .MPP file and use any changes to update the workbook. This takes much less time than re-importing from scratch.

What settings do you recommend in Microsoft Project?

  • If the project is on a network drive, copy it to your C: drive or another local drive (optimally, a local SSD drive) before opening it.

  • Zero out margin spans.

  • Set future constraints to ASAP.

  • Remove all deadline dates.

  • Check for negative slack and unstatused tasks, and correct any issues.

  • Create a table that contains just the fields you will want to see in @RISK, and apply it before importing the project.

See also: For Faster Optimizations

Last edited: 2020-07-28

9.3. Simulation Speed of @RISK with Microsoft Project

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions
@RISK for Project, all releases

I recently upgraded Microsoft Project 2003 to a newer version, and my simulations seem to take longer to run. Do I need to change some setting?

Recalculation speed has changed between versions of Microsoft Project, and this impacts the run times of @RISK simulations. Why? Because for each iteration of a simulation @RISK must fully recalculate Microsoft Project.

Recalculations are fastest in Microsoft Project 2003 and slowest in Microsoft Project 2007. Microsoft Project 2010 is an improvement over 2007, but still is substantially slower than Microsoft Project 2003. However, Project 2010 offers many new features over Project 2003, and Project 2003 can't support @RISK 7.x. If you have large projects in which simulation run time is an issue, use the fastest possible hardware configuration.

How do Excel and Project 2013 and 2016 compare to 2010? Benchmarking Windows programs is problematic, because there are so many variables, not only different hardware but different Windows configurations, different programs running in background, and so forth — not to mention different @RISK models. We ran tests with 10,000 iterations of our Parameter Entry Table example from Help » Example Spreadsheets. We used @RISK 7.5.2 in 32-bit Excel and Project 2010, 2013, and 2016, on 64-bit Windows 8, with a 2.8 GHz i7 chip and 8 GB of RAM. We offer our results as anecdotal evidence; they may or may not apply to your system, or your model. And obviously the Parameter Entry Table example is a small one, only eight tasks, so any real project is going to take significantly longer to run.

With those caveats, here is what we found in that example with that system:

Average Times in Seconds

                         2010 (32-bit)    2013/2016 (32-bit)
  Standard Engine        255 s            199 s
  Accelerated Engine     34 s             56 s

Multiple runs of one Excel/Project version showed little variation. Differences between Excel/Project 2013 and Excel/Project 2016 were not significant.

The accelerated engine is available when @RISK distributions and outputs are in just a few commonly used fields of Project; the standard engine allows distributions and outputs in any Project field.

How does our test system compare to your system? Almost everyone has 64-bit Windows. There's more of a split between 32-bit and 64-bit Office, but the majority have 32-bit Office. Switching to 64-bit Office will not increase simulation speed for most @RISK models.

See also: For Faster Simulations

Last edited: 2018-03-05

9.4. CPUs Used by @RISK 7.x

Applies to: @RISK 7.x
(If you have an older @RISK, see CPUs Used by @RISK 4.x–6.x.)

How many CPUs (cores or processors) do @RISK and RISKOptimizer use?

When you click Start Simulation, by default @RISK estimates how long a simulation will take and uses one or more CPUs to complete the simulation as quickly as possible.

  • @RISK 7.x Industrial will use anywhere between one core and all the cores in your computer, depending on its estimate of the tradeoff between overhead of starting and managing multiple copies of Excel versus the savings from parallel processing.
  • @RISK 7.x Standard and Professional will use no more than two cores, no matter how many you have.
  • Simulations and optimizations with RISKOptimizer 7.5 use multiple cores. This lets the optimization run multiple simulations in parallel, to make progress faster.
  • Simulations and optimizations with RISKOptimizer 7.0 use only one CPU.
  • Simulations with Project use only one CPU.

If for any reason you want to limit @RISK to only one core when simulating or optimizing a given workbook, open Simulation Settings and, on the General tab, change Multiple CPU to Disabled.

@RISK recognizes as a "CPU" anything that Windows recognizes as a CPU. To find the number of CPUs in your computer, press Ctrl-Shift-Esc to open Task Manager, then select the Performance tab. Real CPUs should make a major improvement in speed of large simulations, but hyperthreaded CPUs will give only modest speed improvement.

Multithreading, as opposed to multiple CPUs, is an Excel option, and you should generally turn it on in any edition of @RISK. See For Faster Simulations and Recommended Option Settings for Excel.

Can I limit the number used by @RISK, thus leaving some CPUs (cores) available for other programs? If @RISK decides to use only some cores, can I tell it to use more?

The default simulation setting of Multiple CPU — "Automatic" beginning with 7.5, "Enabled" in 7.0 — tells @RISK to decide the optimum number of CPUs. To tell @RISK to use only one CPU when simulating this workbook, go into Simulation Settings and change Multiple CPU Support to Disabled. To specify a number of CPUs greater than 1, the mechanism is different between @RISK 7.5 and @RISK 7.0.

Number of CPUs in @RISK 7.5 and newer:

Click Simulation Settings. On the General tab, look at the third setting, Multiple CPU Support. You have three options:

  • The default is "Automatic" (equivalent to "Enabled" from earlier releases of @RISK). @RISK will decide how many cores to use — between 1 and the number in your computer in @RISK Industrial, 1 or 2 cores in @RISK Professional and @RISK Standard.
  • "Enabled" has a new meaning. You specify a number in the #CPUs box, and then @RISK will always create that number of copies of Excel, even if the number is greater than the number of cores on your computer. If you use this setting, don't specify a number so high that the extra Excels bog down your computer. (Regardless of the number you specify, @RISK Professional and Standard won't create more than one "worker" Excel, for a total of two.)
  • "Disabled" means that @RISK always uses just one core.

In earlier releases of @RISK, your setting for Multiple CPU Support applied only to simulations. Beginning with @RISK 7.5, it also applies to optimizations with RISKOptimizer.

The System Registry values RiskUseMultipleCores, ForceMultiCore, and NumCPU, and the Excel name _AtRisk_SimSetting_MaxCores, are no longer used in @RISK 7.5, and will be ignored if they are set.

Number of CPUs in @RISK 7.0:

To tell @RISK to use a certain number of CPUs, define a workbook-level name, RiskUseMultipleCores.

On Excel's Formulas tab, click Name Manager. If the name RiskUseMultipleCores already exists, click it and click Edit; otherwise click New and enter that name. The value can be any of the following:

  • A specific number of cores that you want @RISK to use. If you specify more than the computer has, @RISK will use as many as you have but won't display an error message. If you specify a number greater than 2 in @RISK Professional or Standard, @RISK will use two cores but won't display an error message.
  • The keyword all.
  • The keyword off (equivalent to 1).
  • The keyword auto (tells @RISK to decide the optimum number of CPUs).
  • An absolute cell reference with leading equal sign, such as =$B$12. This lets you place the setting in the workbook in case you want to change it later without going through Name Manager, for instance if you're testing simulation speed with various numbers of cores.
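
If you prefer to create the name programmatically, here is a one-line sketch using standard Excel VBA (the value 4 is just an example):

ActiveWorkbook.Names.Add Name:="RiskUseMultipleCores", RefersTo:="=4"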

If you define the name RiskUseMultipleCores in a workbook, it overrides the Multiple CPU setting (Enabled or Disabled) in Simulation Settings when that workbook is open.

The System Registry values ForceMultiCore and NumCPU, and the Excel name _AtRisk_SimSetting_MaxCores, are no longer used in @RISK 7.0, and will be ignored if they are set.

Additional keywords: Number of cores, multiple cores, how many cores, how many CPUs

Last edited: 2017-10-05

9.5. CPUs Used by @RISK 4.x–6.x

Applies to:
@RISK 5.5, 5.7, and 6.x, all editions
@RISK 4.x and 5.0, Industrial Edition only
RISKOptimizer, releases 1.x and 5.x

(If you have @RISK 7, see CPUs Used by @RISK 7.x.)

What is the maximum number of CPUs (cores or processors) that @RISK and RISKOptimizer will use? Can I limit the number used by @RISK, thus leaving some CPUs (cores) available for other programs? Is there any other reason to limit the number of CPUs used?

@RISK recognizes as a "CPU" anything that Windows recognizes as a CPU. To find the number of CPUs in your computer, press Ctrl-Shift-Esc to open Task Manager, then select the Performance tab. Real CPUs should make a major improvement in speed of large simulations, but hyperthreaded CPUs will give only modest speed improvement.

Multithreading, as opposed to multiple CPUs, is an Excel option, and you should generally turn it on in any edition of @RISK. See For Faster Simulations and Recommended Option Settings for Excel.

@RISK uses a heuristic to guess how long a simulation will take. If @RISK judges that the overhead of starting multiple copies of Excel would outweigh the time saved through parallel processing, it will use only one core even if you have enabled Multiple CPU. For much more about this, please see Multiple CPU — Only One CPU Runs.

With larger simulations,

  • @RISK Industrial can use all CPUs that exist in your computer. You can enable or disable multiple CPUs on the first tab of the Simulation Settings dialog.
  • @RISK Standard and @RISK Professional 5.5 and later can use up to two CPUs if present. You can enable or disable multiple CPUs on the first tab of the Simulation Settings dialog. (@RISK Standard and @RISK Professional 5.0 and earlier run all simulations with one CPU.)
  • Simulations and optimizations with RISKOptimizer use only one CPU. (There is significant overhead to communicating between the master CPU and the workers. In the great majority of optimizations that we have seen, this overhead would eliminate all or nearly all the savings from multiple CPUs.)
  • Simulations with Project use only one CPU.

Optimum number of CPUs

Up to around four, more CPUs are almost always better. Beyond that, at some point you can actually have too many CPUs. You can reach a point where CPUs are starved for RAM and have to use virtual memory, which means relatively slow disk operations instead of fast operations in real memory. Or if you use a lot of CPUs in a simulation, the overhead can swamp the savings and the simulation can actually take longer. To some extent, determining the optimum number is a matter of experimentation, because it depends on the size of your model, the Memory Used by @RISK Simulations, and the available RAM in your computer.

You can estimate an appropriate number of CPUs from the memory needs of your model and the amount of RAM in your computer. Follow the process in Memory Used by @RISK Simulations to determine how much memory your simulation needs. Take the amount of RAM in your computer, subtract what's used by Windows and other programs, and divide by the amount of RAM used in a single-CPU simulation. You don't want to use more CPUs than that, though it might still be more efficient to use fewer.

Limiting the number of CPUs used

You can do this via a System Registry key or by defining a special name in Excel. If you do both, in @RISK 6.2 or 6.3, @RISK will use the workbook name and ignore the Registry key.

Either way, if your Simulation Settings have Multiple CPU set to Disabled, @RISK will use just one CPU.

Registry setting (all releases 4.x–6.x)

If you want to have @RISK Industrial use multiple CPUs, but not all the CPUs in your computer, you can do this by editing the System Registry:

  1. With Excel not running, click Start » Run and enter the command REGEDIT. Click OK.
  2. Navigate to the key HKEY_LOCAL_MACHINE\Software\Palisade, or HKEY_LOCAL_MACHINE\Software\WOW6432Node\Palisade in 64-bit Windows.
  3. In the right-hand panel, right-click and select New » DWORD Value and type the name NumCPUs — note the "s" at the end.
  4. Double-click NumCPUs, enter your desired maximum in the Value Data box, and click OK.
  5. Select File » Exit to close the Registry Editor.

When you enable Multiple CPU in Simulation Settings, @RISK will not use more than the number of CPUs specified in the System Registry. (If you actually have fewer CPUs in your computer, @RISK will just use the ones it finds.)

Workbook setting (@RISK 6.2 and 6.3)

If you can't edit the System Registry or prefer not to, create a name _AtRisk_SimSetting_MaxCores and set it to the desired maximum.

Notes:

  • To create the name in Excel 2007 and later, click Formulas » Name Manager » New; in Excel 2003, Insert » Name » Define.
  • The name begins with an underscore.
  • The "Refers to" value must be preceded by an = sign, as shown in the illustration.
  • If you actually have fewer CPUs in your computer, @RISK will just use the ones it finds.
  • If you set a limit with NumCPUs in the System Registry (above), @RISK will honor a lower number specified in _AtRisk_SimSetting_MaxCores, but will ignore a higher number.
  • If you have several workbooks open, and more than one of them defines this name, @RISK will use the lowest number.

Additional keywords: Number of cores, multiple cores, how many cores, how many CPUs

Last edited: 2018-10-26

9.6. Memory Used by @RISK Simulations

Applies to: @RISK for Excel 5.x–7.x

How much memory is used during a simulation?

@RISK saves the values of each output, each input (unless you have changed the default on the Sampling tab of Simulation Settings), and each cell referred to by a statistics function such as RiskMean( ) or RiskPtoX( ). The memory required is 8 bytes per value per iteration per simulation. However, to avoid overflowing 32-bit Excel's limited memory space (below), @RISK pages data to disk as needed.
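
As an illustrative worked example (the counts are assumptions): a model that collects 50 inputs and outputs over 10,000 iterations in a single simulation stores 50 × 10,000 × 8 bytes = 4 MB of iteration data; five such simulations would need about 20 MB, before the additional memory described next.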

@RISK needs additional memory for its own code and for data other than the iterations of simulation inputs and outputs. To get an idea of overall memory requirements for your simulation:

  1. In Simulation Settings » General, change Multiple CPU to Disabled and run a simulation with the number of iterations unchanged.

  2. When the @RISK progress window shows iterations being run, open Task Manager (Ctrl+Shift+Esc) and look in the Commit Size column to see how much memory Excel.exe is using.

    • In Windows 7, Vista, or XP, look at the Processes tab. If you don't see the Commit Size column, click View » Select Columns » Memory–Commit Size.
    • In Windows 8 or 10, look at the Details tab. If you don't see the Commit Size column, right-click on any column head and click Select Columns » Commit Size.
  3. You can then shut down the simulation with the "stop" button in the progress window.

When you re-enable Multiple CPU, the master CPU will use about this much and each worker CPU will use somewhat less.

When I disable Smart Sensitivity Analysis, my simulation starts faster, but does it also reduce memory use?

Yes and no. After running a Smart Sensitivity Analysis, @RISK saves the results of the precedent tracing but frees the memory used for the trace. So there is no appreciable memory saving once the simulation starts.

However, if your model is large and complicated enough, @RISK could run out of memory during the process of tracing precedents. In that case, turning off Smart Sensitivity Analysis will bypass precedent tracing and the associated out-of-memory condition.

I have heard that Excel has a memory limit of 2 GB. Does @RISK have such a limit?

Well, sort of. 64-bit Excel has effectively no limit on memory space. The part of @RISK that runs in the Excel process shares in this. The parts of @RISK that are separate executables, such as the model window and progress window, used to be subject to the 2 GB limit, but as of @RISK 7.5.2 those executables are Large Address Aware (next paragraph).

As for 32-bit Excel, it's complicated. Historically, every 32-bit process, including 32-bit Excel, was limited to 2 GB of address space. However, during the year 2017, updates to Excel 2013 and 2016 gave 32-bit Excel the ability to access 4 GB of memory space when running in 64-bit Windows, or 3 GB in 32-bit Windows. See Large Address Aware in Should I Install 64-bit Excel?

  • Beginning with 7.5.1, the operations of @RISK that share 32-bit Excel's memory space are Large Address Aware.
  • Beginning with 7.5.2, all parts of @RISK are Large Address Aware.
  • All versions of @RISK will page data to disk during a simulation to avoid overrunning Excel's memory limit.

If you are using multiple processors, then each Excel process has a separate memory limit, so in 32-bit Excel the overall simulation can use up to 2 GB (or 3 or 4 GB) times the number of processors. Add to that whatever is used by executables whose names start with Pal or Risk. If you want to limit the number of processors used by a simulation, please see CPUs Used by @RISK.

All the above is subject to additional constraints. Not all the RAM in your computer is available to Excel and @RISK: the operating system and other running applications need some as well. You should make certain that you've allocated enough virtual memory. On the Processes tab of Task Manager, you can see how much memory is in use by which processes.

Does @RISK take advantage of 64-bit Excel?

The great majority of simulations run just fine in 32-bit Excel and @RISK and do not see significant benefit from switching to a 64-bit platform. If your simulation generates gigabytes of data, and you have enough RAM to hold it all, you may see some benefit. Please see Should I Install 64-bit Excel? for more information.

See also: "Out of Memory" and "Not enough memory to run simulation" for techniques to reduce the memory used.

Last edited: 2018-02-12

9.7. GPU Computations to Speed up @RISK?

Applies to: @RISK 5.x–7.x

Does @RISK take advantage of CUDA functionality, using the GPU (graphics processing unit) in addition to the main CPU to increase simulation speed? As I understand, the graphic card CPUs are very good at parallel processing, which is what is needed to increase simulation speed.

CUDA is one type of GPGPU (general-purpose computation on graphics processing units), and is specific to NVidia GPUs. AMD has a different scheme, called OpenCL.

In a typical simulation, most of the compute power is used not by @RISK but by Excel, in recalculating all open workbooks for each iteration. And as of this writing (July 2017), Excel versions up through Excel 2016 don't use GPGPU. @RISK 5.x–7.x do use multiple threads, to try to use all CPU resources available, but not GPGPU. There are few calculations within @RISK itself that could benefit from GPGPU.

We will continue to re-evaluate this issue as technology advances.

Last edited: 2017-07-28

10. VBA Programming with @RISK

10.1. Automating Time Series Fitting in VBA

Applies to:
@RISK 6.x/7.x, Industrial Edition

What are the VBA objects and methods for Time Series fitting? I'd like to automate my fitting process.

Unfortunately, there is no VBA interface in the @RISK XDK for Time Series. This may be added in a future release, but for now Time Series can be done only through the user interface.

Last edited: 2016-10-03

10.2. Setting References in Visual Basic

Applies to:
@RISK 5.x–8.x (Professional and Industrial Editions)
Evolver 5.x–8.x
NeuralTools 5.x–8.x
PrecisionTree 5.x–8.x
StatTools 5.x–8.x

You can set up VBA macros (macros written in Visual Basic for Applications) to automate these programs or to access their object model without depending on worksheet functions. To do this, you must tell the Visual Basic editor where to find the definitions of objects; this is known as setting references.

Therefore, if your VBA code needs to access objects, properties, and methods that are part of Palisade software, you must set references to one version of whichever Palisade tool contains the objects you need. Typically this comes up when you want to control @RISK or another application, for instance by setting simulation options, running a simulation, or fitting a distribution. On the other hand, if you just want @RISK to execute your code before or after every iteration or simulation, and your code doesn't directly access any @RISK objects, you don't need to set references in VBA.

To set references:

  1. In Visual Basic editor, click Tools » References.
  2. Remove check marks from any outdated libraries, such as AtRisk, RiskXL, Risk, and Risk5.
  3. In the References window, select the appropriate item or items for your program and release number, as listed below. (You will see many Palisade entries. Select the ones listed in this article, and no others. Select only one version; you cannot have both versions 6 and 7 checked, for instance.)
  4. Click OK to close the References window.

References are stored in the workbook when you click Save.

When you double-click a workbook that has references set, or open such a workbook through File » Open in Excel, the indicated Palisade software will open automatically, if it's not already running.

Release 8.x (using "8.x" as an abbreviation for 8.0, 8.1, or 8.5 as appropriate):

If you share a workbook with someone who has a different 8.x release number, the reference will adjust automatically on that person's computer. If they edit the workbook and send it back to you, the reference will again adjust automatically to match your computer. This works within 8.x versions, but between 5.x, 6.x, 7.x, and 8.x you must change the reference manually.

  • For @RISK 8.x: both RiskXLA and Palisade_RISK_XDK8.
  • For Evolver 8.x: both EvolverXLA and Palisade Evolver 8.x for Excel Developer Kit.
  • For NeuralTools 8.x: NeuralTools only (without "Palisade").
  • For PrecisionTree 8.x: both PtreeXLA and Palisade PrecisionTree 8.x Object Library.
  • For StatTools 8.x: Palisade StatTools 8.x Object Library only.

Automation Guides are included with the Professional and Industrial Editions of @RISK, Evolver, NeuralTools, and PrecisionTree. The Automation Guides introduce you to VBA programming in general and automating Palisade software in particular.

In @RISK, to access the Automation Guide, click Resources » Automating @RISK (XDK) » XDK Automation Guide.

In other applications to access the Automation Guide, click Help » Developer Kit (XDK) » Automation Guide.

Release 7.x (using "7.x" as an abbreviation for 7.0, 7.5, or 7.6 as appropriate):

If you share a workbook with someone who has a different 7.x release number, the reference will adjust automatically on that person's computer. If they edit the workbook and send it back to you, the reference will again adjust automatically to match your computer. This works within 7.x versions, but between 5.x, 6.x, and 7.x you must change the reference manually.

  • For @RISK 7.x: both RiskXLA and Palisade @RISK 7.x for Excel Object Library. If you have RISK Industrial and you want to use the RISKOptimizer part of the object model, select Palisade RISKOptimizer 7.x for Excel Developer Kit also.
  • For Evolver 7.x: both EvolverXLA and Palisade Evolver 7.x for Excel Developer Kit.
  • For NeuralTools 7.x: NeuralTools only (without "Palisade").
  • For PrecisionTree 7.x: both PtreeXLA and Palisade PrecisionTree 7.x Object Library.
  • For StatTools 7.x: Palisade StatTools 7.x Object Library only.

Automation Guides are included with the Professional and Industrial Editions of @RISK, Evolver, NeuralTools, and PrecisionTree. The Automation Guides introduce you to VBA programming in general and automating Palisade software in particular. To access an Automation Guide, click Help » Developer Kit (XDK) » Automation Guide.

Release 6.x (using "6.x" as an abbreviation for 6.0, 6.1, 6.2, or 6.3 as appropriate):

If you share a workbook with someone who has a different 6.x release number, the reference will adjust automatically on that person's computer. If they edit the workbook and send it back to you, the reference will again adjust automatically to match your computer. This works within 6.x versions, but between 5.x, 6.x, and 7.x you must change the reference manually.

  • For @RISK 6.x: both RiskXLA and Palisade @RISK 6.x for Excel Object Library. If you want to use the RISKOptimizer part of the object model, select Palisade RISKOptimizer 6.x for Excel Developer Kit also.
  • For Evolver 6.x: both EvolverXLA and Palisade Evolver 6.x for Excel Developer Kit.
  • For NeuralTools 6.x: NeuralTools (without "Palisade").
  • For PrecisionTree 6.x: both PtreeXLA and Palisade PrecisionTree 6.x Object Library.
  • For StatTools 6.x: Palisade StatTools 6.x Object Library.

Beginning with release 6.2, Automation Guides are included with the Professional and Industrial Editions of @RISK, Evolver, NeuralTools, and PrecisionTree. The Automation Guides introduce you to VBA programming in general and automating Palisade software in particular. To access an Automation Guide, click Help » Developer Kit (XDK) » Automation Guide.

Release 5.x (using "5.x" as an abbreviation for 5.0, 5.5, or 5.7 as appropriate):

  • For @RISK 5.x: Palisade @RISK 5.x for Excel Object Library
  • For RISKOptimizer: Palisade RISKOptimizer 5.x for Excel Developer Kit.
  • For Evolver 5.x: Palisade Evolver 5.x for Excel Developer Kit.
  • For NeuralTools 5.5 and 5.7: NeuralTools (without "Palisade").
    (There was no NeuralTools 5.0 automation interface.)
  • For PrecisionTree 5.x: Palisade PrecisionTree 5.x Object Library.
  • For StatTools 5.x: Palisade StatTools 5.x Object Library.

See also: Using VBA to Change References to @RISK.

Last edited: 2017-02-08

10.3. Using VBA to Change References to @RISK

Applies to:
@RISK 5.x–7.x, Professional and Industrial Editions
Evolver 5.x–7.x
NeuralTools 5.x–7.x
PrecisionTree 5.x–7.x
StatTools 5.x–7.x

We wrote a bunch of automation code for @RISK (Evolver, NeuralTools, PrecisionTree, or StatTools) release 5 or 6. Now we've upgraded to release 7, and all the references in all our workbooks need to be updated. Is there any kind of automated solution, or do we have to make a lot of mouse clicks in every single workbook?

For the problem at hand, you could write a macro to delete the obsolete references and add the new ones; a sketch follows. You could put that macro in a separate workbook, and then have it available for people to run when references in @RISK model workbooks need to be updated. The problem is that there are two prerequisites for executing such code: you need to tick "Trust access to the VBA project object model" in Excel's Trust Center settings, and you need to set a reference to Microsoft Visual Basic for Applications Extensibility. These can't be done programmatically and must be done by hand.
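
A minimal sketch of such a macro, assuming the two prerequisites above are in place; the file path is an assumption, so point it at the add-in or type library for your installed release:

Sub UpdatePalisadeReferences(wb As Workbook)
    Dim refs As Object, i As Long
    Set refs = wb.VBProject.References
    ' Iterate backward, because removing items renumbers the collection.
    For i = refs.Count To 1 Step -1
        If refs.Item(i).IsBroken Then refs.Remove refs.Item(i)
    Next i
    ' Add the current release's reference by file path (path is an assumption).
    refs.AddFromFile "C:\Program Files (x86)\Palisade\RISK7\RISK.xla"
End Sub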

There's significant risk with "Trust access to the VBA project object model". That lets workbook macro code do pretty much anything, and if you unknowingly download and open a malicious workbook you'll have a serious security breach on your hands. (See also Enable or disable macros in Office documents.) This kind of risk is one reason why we don't offer automatic code to adjust the references.

Are there any programming practices we can follow so that we're not in this position again, when we upgrade from version 7 to version 8?

We have a couple of suggestions. One possibility is late binding, where you don't have references set in the workbooks but instead connect to the @RISK (Evolver, ...) object model at run time. While this preserves maximum flexibility, you lose the benefit of Intellisense (auto-complete of properties, tool tips for function arguments, and so forth) during code development. To learn more about late binding, in your Palisade software click Help » Developer Kit (XDK) » Automation Guide. Look near the end for the topic "Demand Loading @RISK" ("Demand Loading Evolver", ...).

If the workbooks have mostly the same macro code, another possibility is to move your macros to one workbook. In effect, you write your own add-in to @RISK. Then when references have to be updated you can do it only once, and redistribute the updated workbook. If you have enough commonalities, this will also reduce your maintenance burden — if any kind of problem is discovered in macro code, it can be fixed once, with no need to try to find all the workbooks that contain the problem code.

Last edited: 2015-12-29

10.4. Sampling @RISK Distributions in VBA Code

Applies to: @RISK 5.0 and newer, Professional and Industrial Editions

How can I generate a random sample within a VBA macro or function?

Use the Sample method with the Risk object. Here's an example:

x = Risk.Sample("RiskBinomial(10,0.2)")

The Sample method normally returns a numeric value, but if there's an error in the definition of the distribution then the method returns an error variant in the usual way for Excel.
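
For example, a minimal check (the standard deviation below is deliberately invalid to force the error case):

Dim v As Variant
v = Risk.Sample("RiskNormal(0,-1)")
If IsError(v) Then MsgBox "Error in the distribution definition"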

The sampled values are not the same numbers you would see from that function in a worksheet. They always use the Monte Carlo method, as opposed to Latin Hypercube; RiskCorrmat and RiskSeed are ignored. If you want to access simulation data, use members of the Risk.Simulation.Results object after the simulation finishes.

To call @RISK functions from Visual Basic, you must set up a reference from Visual Basic Editor to @RISK via Tools » References in the editor. Setting References in Visual Basic gives the appropriate reference(s) and how to set them.

Please see the XDK or Developer Kit manual for details on the objects and methods mentioned in this article, as well as alternative methods. (Beginning with @RISK 6.2, start with the Automation Guide for a high-level introduction: Help » Developer Kit (XDK) » Automation Guide.)

Am I restricted to just numeric arguments, or can I use cell references?

Yes, you can use cell references:

x = Risk.Sample("RiskBinomial(A1,B1)")

The cell references must be in A1 format, not R1C1, and they are taken to refer to the active worksheet. If you don't want to worry about which sheet is active, specify the worksheet or use defined names:

x = Risk.Sample("RiskBinomial('My Sheet'!A1,'My Sheet'!B1)")

x = Risk.Sample("RiskBinomial(BinomialN,BinomialP)")

I want to sample a RiskDiscrete with a long list of x and p. How can I use cell references?

It follows the pattern of Cell References in Distributions. Here's an example:

x = Risk.Sample("RiskDiscrete('My Sheet'!A1:A10,'My Sheet'!B1:B10)")

As an alternative, in the worksheet you can define names for the arrays, and then use the names in the Risk.Sample function:

x = Risk.Sample("RiskDiscrete(Xarray,Parray)")

Can I write the sampled value to my workbook?

Yes, just use Excel's Value property. You can apply it to a specific cell or to a defined range name:

Range("B1").Value = Risk.Sample("RiskBinomial(A1,A2)")

Range("myKeyLocation").Value = Risk.Sample("RiskBinomial(A1,A2)")

If the Risk.Sample method returns an error such as #VALUE, that will be written to the worksheet. This will not register as a VBA error that interrupts execution of your macro.
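
Putting these pieces together, here's a minimal sketch of a complete macro (the cell addresses are hypothetical, and it assumes the VBA reference described above is set):

Sub SampleToWorksheet()
    ' Draw one Monte Carlo sample from a binomial distribution whose
    ' parameters are in A1 and A2 of the active sheet.
    Dim x As Variant
    x = Risk.Sample("RiskBinomial(A1,A2)")
    If IsError(x) Then
        ' An error variant means there's a problem in the distribution definition.
        MsgBox "Check the distribution definition.", , "SampleToWorksheet( )"
    Else
        Range("B1").Value = x
    End If
End Sub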

Last edited: 2018-02-28

10.5. Automating @RISK Simulations in VBA

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions
(@RISK Standard Edition does not support automation.)

How can I write Visual Basic for Applications code to automate several independent simulations? I want to simulate the files one at a time, not all at once.

This is easy to do with the @RISK and Excel object models. Open each workbook in turn, call Risk.Simulation.Start, and close the workbook before opening the next. An example is attached. It is set up for @RISK 7.x, but you can change the references in the Visual Basic Editor and run it in @RISK 6.x as well.
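
For illustration, here's a minimal sketch of the looping logic (the file names are hypothetical; the attached example is more complete):

Sub RunSimulationsOneAtATime()
    Dim files As Variant
    Dim i As Long
    Dim wb As Workbook
    files = Array("C:\Models\Model1.xlsx", "C:\Models\Model2.xlsx")
    For i = LBound(files) To UBound(files)
        Set wb = Workbooks.Open(files(i))
        Risk.Simulation.Start          ' simulate this workbook
        wb.Close SaveChanges:=False    ' close it before opening the next
    Next i
End Sub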

The Automation Guide that was introduced in 6.2.0 is a good introduction to automating @RISK with VBA, but it's not intended to document the complete object model. For methods and properties not mentioned in the Automation Guide, consult the XDK Reference, which is also found in the @RISK help menu.

Last edited: 2015-12-18

10.6. Accessing Simulation Data in VBA Code

Applies to: @RISK 5.x–7.x

Can I access @RISK worksheet functions in Visual Basic?

To call @RISK functions from Visual Basic, you must set up a reference from Visual Basic Editor to @RISK via Tools » References in the editor. Please see Setting References in Visual Basic for the appropriate reference and how to set it.

Please see the XDK or Developer Kit manual for details on the methods mentioned here, as well as alternative methods. (Beginning with @RISK 6.2, start with the Automation Guide for a high-level introduction: Help » Developer Kit (XDK) » Automation Guide.)

How can I retrieve simulation data, in a way similar to the RiskData( ) worksheet function?

Use the GetSampleData method to fill an array with the simulated data. Here's an example:

numSamples = _
    Risk.Simulation.Results.GetSimulatedOutput("MyOutput"). _
    GetSampleData(sampleData, True)

This fills the VBA array sampleData with all the data from the named output, and returns the number of samples. (Although this example shows getting data from an output, you can also use GetSampleData with GetSimulatedInput.)
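
For context, here's the same call as a minimal sketch with the declarations filled in:

    Dim sampleData As Variant
    Dim numSamples As Long
    numSamples = _
        Risk.Simulation.Results.GetSimulatedOutput("MyOutput"). _
        GetSampleData(sampleData, True)
    ' sampleData now holds one value per iteration; numSamples is the count.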

How can I get the statistics of a simulated input or output, such as a simulated mean or percentile?

Use Mean, Percentile, or a similar property of the RiskSimulatedResult object. Here's an example:

MsgBox "The mean of MyOutput is " & _

Risk.Simulation.Results.GetSimulatedOutput("MyOutput").Mean

(Again, you could also use this technique with GetSimulatedInput to get statistics of a simulated input.)
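
For example, assuming the Percentile property in your version takes a cumulative probability between 0 and 1 (check the XDK Reference to confirm the argument convention), a sketch for the 95th percentile would be:

MsgBox "The 95th percentile of MyOutput is " & _
    Risk.Simulation.Results.GetSimulatedOutput("MyOutput").Percentile(0.95)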

See also: Sampling @RISK Distributions in VBA Code to get random numbers from an @RISK distribution without running a simulation.

Last edited: 2018-02-28

10.7. Alternatives to =IF for Picking Distributions

Applies to:
@RISK 6.2 and newer, Professional and Industrial Editions

I want to set up my worksheet so that I can use different distributions depending on a code. I've got a bunch of formulas like this:

=IF(C47=1,RiskBinomial(D47,E47), IF(C47=2,RiskPert(F47,G47,H47), IF(C47=3,RiskLognorm(I47,J47), RiskTriang(K47,L47,M47))))

Is there any reason not to do it this way? Is there an alternative that might be more efficient?

There are several reasons why IFs in the worksheet are not the best way to model a choice of distribution. You'll have spurious entries in your Model Window, your Simulation Data window and report, etc. You'll also see error values for each distribution in all the iterations where it's not selected. Also, having four times as many distributions will definitely slow down your simulation, but whether it will slow it down by enough to matter depends on how many distributions there are, what the rest of your model looks like, and how many iterations you've chosen.

A quick-and-dirty possibility is to wrap such formulas inside a RiskMakeInput function. It's quick to do, though it does add another layer and it doesn't address the efficiency issue. But at least it gets rid of the spurious data collection. For the formula above, a RiskMakeInput would look like this:

=RiskMakeInput( IF(C47=1,RiskBinomial(D47,E47), IF(C47=2,RiskPert(F47,G47,H47), IF(C47=3,RiskLognorm(I47,J47), RiskTriang(K47,L47,M47)))))

RiskMakeInput is very powerful and has many uses. For some examples, see Combining Inputs in a Sensitivity Tornado; Excluding an Input from the Sensitivity Tornado; Same Input Appears Twice in Tornado Graph. See also: All Articles about RiskMakeInput.

Probably a cleaner approach is to move that logic into a macro, where it is executed once only. The attached example contains such a macro, linked from a button in the worksheet.

For the sake of illustration, this worksheet is set up with a choice of seven distributions: triangular (RiskTriang and RiskTrigen), Pert (RiskPert), uniform (RiskUniform), normal (RiskNormal), log-normal (RiskLognorm), and Johnson (RiskJohnsonMoments). In column K, you specify which distribution to use for each risk. The parameters are in columns B through J, and cells N2:O2 give the row numbers to be handled by the macro.

When you click the worksheet button, the macro looks at each entry in column K and writes the appropriate distribution and parameters in column L. The macro includes a RiskName property function referring to the risk name given in column A. If any of the cells in column K contain incorrect distribution names, the macro displays an error message; otherwise, it runs a simulation. If you want to run further simulations without changing distributions, click the Start Simulation button in @RISK or click the button in the worksheet, but you must use the worksheet button after changing any distributions in column K.
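
The attached macro isn't reproduced here, but a simplified sketch of the approach might look like this (the distribution names, columns, and row range are hypothetical and simpler than the attached example's):

Sub WriteDistributions()
    Dim r As Long
    Dim f As String
    For r = 2 To 10                       ' rows to process
        Select Case Cells(r, "K").Value   ' distribution name for this risk
            Case "Triang"
                f = "=RiskTriang(B" & r & ",C" & r & ",D" & r & _
                    ",RiskName(A" & r & "))"
            Case "Normal"
                f = "=RiskNormal(B" & r & ",C" & r & _
                    ",RiskName(A" & r & "))"
            Case Else
                MsgBox "Unrecognized distribution name in K" & r
                Exit Sub
        End Select
        Cells(r, "L").Formula = f         ' write the distribution formula
    Next r
End Sub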

Last edited: 2018-02-20

10.8. Distribution Functions as Arguments to User-Written Functions

Applies to: @RISK 5.x and newer

I have written a function in Visual Basic code, and I use that function in formulas in my Excel sheet. When a simulation is running the function seems to work, but when a simulation is not running my worksheet displays #VALUE. What is wrong?

In @RISK 5.0 and above, during a simulation the @RISK distribution functions return a single number of type Double. But when a simulation isn't running, the @RISK distribution functions return an array, of which the first element is the random number drawn by the function. (This change from 4.x was made to support the RiskTheo statistics functions, among other reasons.)

Therefore, your own function needs to declare the argument as a Variant, not a Double, and it needs to test the type of the argument at run time. Please see the accompanying example.
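
For instance, a user-written function along these lines handles both cases (a sketch; the doubling is just a placeholder calculation):

Function DoubleTheSample(arg As Variant) As Double
    Dim v As Double
    If IsArray(arg) Then
        ' No simulation is running: @RISK returned an array whose
        ' first element is the random number drawn by the function.
        v = arg(LBound(arg))
    Else
        ' A simulation is running: @RISK returned a plain Double.
        v = arg
    End If
    DoubleTheSample = v * 2
End Function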

Last edited: 2015-08-12

10.9. Solver and Other Excel Recalculations within Your Macros

Applies to: @RISK For Excel, all releases

I have a macro set to run between iterations, on the Macros tab of @RISK Simulation Settings. That macro uses Excel Solver or does something else that triggers an Excel recalculation, but then the @RISK functions all get resampled. How can I hold the @RISK functions constant while my macro runs?

Solution (beginning with @RISK 7.5):

You may be able to use a point-and-click interface in Simulation Settings instead of writing VBA code. On the Macros tab of Simulation Settings, select "Excel Tool" instead of "VBA Macros". This will let you choose to run Excel Goal Seek, Excel Solver, or Palisade Evolver during each iteration. For Evolver or Solver, you have to set up the model in advance, but for Goal Seek you can enter the settings right in the Macros tab. If you prefer, or if your situation is more complicated, you can still use VBA and set the recalculation option, as described in the next section.

Solution (beginning with @RISK 6.2):

There's an important option in Simulation Settings. If you have any macros that trigger Excel recalculations, and those macros get executed during a simulation, open Simulation Settings and on the Macros tab select the option "If Excel Recalculations Occur during Macros, Distributions Return" » "Fixed Samples". Click OK and save the workbook. @RISK will remember this setting with all the other simulation settings in this workbook.

See the attached example KB187 for newer @RISK.

Caution: When you select Fixed Samples, @RISK will return the same value from a given distribution function every time it is called within one iteration — provided that your macro code doesn't change the parameters of the distribution. If the macro does change the argument values for an @RISK distribution function, @RISK will return a new sample for that distribution function.

Solution (@RISK 6.1 and earlier):

This is exactly the requirement: The @RISK functions must not change their values during the extra worksheet recalculations. If the @RISK functions resample, the model will not remain static during the processing of the macro, and Solver will be trying to hit a moving target. To demonstrate this problem, try running Excel Solver on a model where the objective calculation is dependent on a "=RAND()" Excel function. You will see that the objective calculation is a moving target and the Excel Solver optimization will not converge on an optimum solution.

To prevent @RISK functions from resampling when your macro triggers a recalc, ensure that no @RISK functions are precedents of any cells that are affected by your Excel Solver optimization. For example, suppose you have an @RISK function in cell A145. None of your Excel formulas will reference A145. Instead, you establish a second cell, say A146, and all your formulas reference A146. When your macro runs, it will read the value of the @RISK function from cell A145 and write that as a plain numeric value (not a formula) to cell A146. This technique takes the @RISK functions out of Excel's precedent tracing, so that Excel doesn't call those functions when it does a worksheet recalculation.
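
As a sketch, using the cell addresses from the preceding paragraph (a real macro, like the attached example, would also need a VBA reference to Solver and a Solver model already defined):

Sub PlaceSampleAndRunSolver()
    ' Freeze the current @RISK sample as a plain value...
    Range("A146").Value = Range("A145").Value
    ' ...then run Solver without showing its results dialog.
    SolverSolve UserFinish:=True
End Sub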

Please have a look at the attached example, KB187 for older @RISK. In this example, there is a macro called "PlaceSampleAndRunSolver", which first places the static copy of the @RISK function sample into the model and then starts the Excel Solver optimization. The following events occur with each iteration:

  1. @RISK generates a random sample for the @RISK function in cell K41
  2. @RISK calls the "PlaceSampleAndRunSolver" macro, which (a) copies the value of the sample in cell K41 to cell E41 and (b) starts the Excel Solver routine to find the optimum solution.
  3. After Excel Solver finishes, @RISK saves the outputs of this iteration, which are the cells tagged with RiskOutput().

Try running a simulation. The resulting population will be a distribution of optimal results.

Last edited: 2016-07-12

10.10. Placing Markers in Graphs

Applies to:
@RISK 6.2 and newer

I'd like to place a histogram in my worksheet with markers on it, such as the mean or a percentile. How can I do that? Do you have an example?

You can do this by using VBA (Visual Basic for Applications). Specifically, you'll need these members of the RiskGraph object: MarkerEnabled and possibly MarkerRedefine.

To get started in controlling @RISK with VBA, see the Automation Guide, Help » Developer Kit (XDK) in the menu. The Automation Guide includes a basic graph example. For customizations, see the XDK Reference in the same menu.

The attached example, based on the "Risk.XDK First program" example in the Automation Guide, shows how to add several of the markers to a histogram. Some extra code in the example displays all of the available markers in the Immediate Window. To run the example, download it and open it in @RISK, then press Alt+F11 to display the VBA Editor. The FirstProgram subroutine should be displayed. If it is, scroll to the bottom to see the marker code, then click anywhere in the code and press F5 to run it.

You may need to change one or two things in the example, depending on your situation:

  • Running @RISK 6.2 or 6.3? Follow Setting References in Visual Basic to set the references for your version of @RISK.
  • Code window not visible? Press F7.
  • Sub FirstProgram not in the code window? In the Project Explorer window, find VBA Project (KB1733_GraphMarkers.xlsm), expand it if necessary, and double-click Module 1.
  • Project Explorer not visible? Press Ctrl+R.

A couple of the marker names might be a bit obscure:

  • Off (used in the example) is actually two markers, for the mean plus or minus n standard deviations; you specify n in MarkerRedefine.
  • SP is the split point in a RiskSplice distribution.

Last edited: 2019-01-10

10.11. Placing Delimiters in Graphs

Applies to:
@RISK 6.2 and newer, Professional and Industrial Editions

How can I place delimiters in a scatter plot? I want them in locations different from the defaults.

Use the Risk.Simulation.Results.GraphDistribution method to create your graph, and then before placing it in the worksheet use the DelimitersChangePosition method to specify delimiters by data value or by cumulative percentile.

There are two forms of the DelimitersChangePosition method for histograms:

  • .DelimitersChangePosition RiskDelimiterXValues, 1800, 2500
    places delimiters at data values 1800 and 2500. The parameters are actually Doubles, which means that you can use decimals if you wish.
  • .DelimitersChangePosition RiskDelimiterPValues, 0.01, 0.99
    places delimiters at the 1st and 99th percentiles.

Last edited: 2019-02-01

10.12. Placing Graphs in an Existing Worksheet with VBA

Applies to:
@RISK 6.2.0 and later, Professional and Industrial Editions

I'm using the new RiskGraph object in VBA to create Excel-format graphs, but each one is on a separate sheet. How can I place them at a desired location in an existing sheet, and resize them as I wish?

Please see the attached model, which contains the appropriate VBA code. The "Graph Distribution JPG" button creates graphs as pictures; the "Graph Risk Result" button creates a real Excel graph.

The Visual Basic for Applications references are customized for @RISK 7. For @RISK 6, you'll need to change them. See Setting References in Visual Basic.

Last edited: 2017-05-31

10.13. Launching @RISK from a Visual Basic Macro in Excel

Applies to: @RISK 5.x–7.x, Professional and Industrial Editions
(@RISK Standard Edition cannot be automated with VBA macros.)

I'm writing an elaborate workbook that will use @RISK. Users will have @RISK installed, but I'd like to start @RISK through VBA in my workbook instead of having them click a desktop icon or use the Windows Start button. How can I do it?

To test whether @RISK is already loaded:

If workbook RISK.XLA is open, @RISK is loaded. If that workbook isn't open, @RISK is not loaded. (This is tested in the sample code below for opening @RISK.)

To load @RISK if it is not already loaded:

The Shell method is simplest and will load @RISK asynchronously. This means that you launch @RISK and immediately return control to Excel, as opposed to waiting in the code till @RISK has loaded.

It's a good idea not to hard-code the path to the Risk.exe executable, but instead read it from the System Registry. Here's some sample code:

Option Explicit

' Check whether @RISK is running, and load it if it's not.
Sub loadAtRisk()
    ' This is the folder for the version of @RISK that should be loaded.
    ' If you want to load @RISK 6 or 5 instead, change the 7 to 6 or 5.
    Const AtRiskFolder = "Risk7"
    
    ' If the @RISK add-in is already open, there's no need to open it again.
    Dim wb As Workbook
    On Error Resume Next
    Set wb = Workbooks("Risk.xla")
    On Error GoTo 0
    If Not (wb Is Nothing) Then Exit Sub
    
    ' Risk.xla isn't open, so open @RISK by using the Risk.exe launcher.
    ' It will be in a sub-folder under the Palisade main folder.
    Dim sPath As String
    If Palisade_MainDirectory() = "" Then
        MsgBox "No Palisade key found in System Registry - @RISK isn't intalled.", , _
            "loadAtRisk( )"
        Exit Sub
    End If
    sPath = Palisade_MainDirectory() & AtRiskFolder & "\Risk.exe"
    If Dir(sPath) = "" Then
        MsgBox "@RISK not found at " & Chr(13) & sPath, , "loadAtRisk( )"
    Else
        Shell sPath
        ' Control must pass immediately to Excel.
        Exit Sub
    End If
End Sub

Function Palisade_MainDirectory() As String
    ' Adapted 2015-08-31 from
    ' http://www.jpsoftwaretech.com/vba/grab-registry-settings-through-vba-using-wmi/
    Const HKEY_LOCAL_MACHINE = &H80000002
    Dim temp As Object
    Dim sKey As String
    Dim sValue As String
    Dim sData As String

    Set temp = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & _
        ".\root\default:StdRegProv")

    ' This retrieves the key if Windows is 32-bit.
    ' If the key's not there, sData is set to a zero-length string.
    sKey = "Software\Palisade"
    sValue = "Main Directory"
    temp.getstringvalue HKEY_LOCAL_MACHINE, sKey, sValue, sData
    
    ' This retrieves the key if Windows is 64-bit.
    ' If the key's not there, sData is set to a zero-length string.
    If sData = "" Then
        sKey = Replace(sKey, "Palisade", "WOW6432Node\Palisade")
        temp.getstringvalue HKEY_LOCAL_MACHINE, sKey, sValue, sData
    End If
    
    ' If Palisade's Main Directory key exists, it looks like some Palisade
    ' software is installed. Some versions have a trailing \ in this key and
    ' others do not, so make it uniform.
    If Len(sData) > 0 And Right(sData, 1) <> "\" Then sData = sData & "\"
    
    Palisade_MainDirectory = sData
End Function

See also: Shutting Down @RISK from VBA Code.

Additional keywords: Run or open @RISK programmatically

Last edited: 2015-11-20

10.14. Shutting Down @RISK from VBA Code

Applies to: @RISK for Excel 5.x–7.x, Professional and Industrial Editions
(@RISK Standard cannot be automated with VBA macros.)

I have written an application using VBA to control @RISK. How can I unload @RISK from within my Visual Basic?

The base method is Risk.UnloadAddIn. However, there are some differences between @RISK 5 and later @RISK in how you use this method.

In @RISK 6.x/7.x:

In the Visual Basic Editor, click Tools » References and select Palisade @RISK n.n for Excel Object Library, whichever one appears. Don't select Risk.xla, even though you normally would when automating @RISK 6. Shutting down @RISK includes closing Risk.xla, but that's impossible if it's checked in Tools » References.

All the class definitions are in the @RISK n.n for Excel Object Library. The only thing VBA gets from Risk.xla is the object called Risk. So you just create that object (actually a function) yourself, using a technique called demand loading.

Here's some sample code:

Dim Risk As AtRiskOL6.Risk

Set Risk = Application.Run("Risk.xla!Risk")

...

Risk.UnloadAddin

After calling the UnloadAddIn method, you must immediately return control to Excel (Exit Sub or end of macro). The unload process is asynchronous.

For a fuller explanation of demand loading, please open @RISK Help » Developer Kit (XDK) » Automation Guide and see "Demand-Loading @RISK", near the end of the PDF. (The Automation Guide is available in @RISK 6.2 and later.)

In @RISK 5.x:

In the Visual Basic Editor, click Tools » References and select Palisade @RISK 5.0 for Excel Object Library, Palisade @RISK 5.5 for Excel Object Library, or Palisade @RISK 5.7 for Excel Object Library, whichever one appears.

Use the

Risk.UnloadAddin

method, and then immediately return control to Excel (Exit Sub or end of macro). The unload process is asynchronous.

Can I unload @RISK synchronously?

Instead of unloading @RISK, you can hide its tab of the ribbon (in Excel 2007 and later) or its toolbars and menu (in earlier Excels). Use

Risk.InterfaceHidden = True

That leaves @RISK loaded but invisible. You can turn the interface back on by setting that property to False. When you start @RISK, the interface is always visible, regardless of any previous setting of the Risk.InterfaceHidden property.

See also: Launching @RISK from a Visual Basic Macro in Excel.

Last edited: 2015-08-31

10.15. Which Version of @RISK Is Running?

Applies to: @RISK 4.x–7.x

Can I use Visual Basic code to determine which version of @RISK is running?

Here is a VBA function from our developers:

'Determine the version number of the running copy of @RISK.
'If @RISK 5.x–7.x is running, the exact version number is returned.
'If @RISK 4.x is running, the string '4.x' is returned.
'If no copy of @RISK 4.x–7.x is running, a blank string is returned.

Public Function GetAtRiskVersion() As String
   Dim rc As String
   Dim addinWorkbook As Workbook
   Dim dummyValue As Long

   'Make sure the Risk.xla add-in workbook is open.  If it isn't, @RISK 4.x–7.x
   'isn't loaded:

   On Error Resume Next
   Set addinWorkbook = Application.Workbooks("Risk.xla")
   If Err <> 0 Then Set addinWorkbook = Nothing
   On Error GoTo 0
   If (addinWorkbook Is Nothing) Then rc = "": GoTo exitPoint

   'If @RISK 5.0.1 or higher is loaded, the version number is easy to get.  I
   'get it here using a late-bound method (using Application.Run) simply because
   'it is unlikely this code will have a reference to the @RISK 5.0 object
   'library if it needs to call this routine!  (In @RISK 5.0.0, itself, the call
   'RiskGetAutomationObject didn't exist, so I must look especially for @RISK
   '5.0.0 below.)

   On Error Resume Next
   rc = Application.Run("Risk.xla!RiskGetAutomationObject").ProductInformation.Version()
   On Error GoTo 0
   If rc <> "" Then GoTo exitPoint

   'Now we need to distinguish between @RISK 5.0.0 and @RISK 4.x.
   'In the former case the routine RiskGetInterfaceMode should exist.  In the
   'latter calling that routine will raise an error.

   On Error Resume Next
   dummyValue = Application.Run("Risk.xla!RiskGetInterfaceMode")
   If (Err <> 0) Then rc = "4.x" Else rc = "5.0.0"
   On Error GoTo 0

exitPoint:

   On Error GoTo 0
   GetAtRiskVersion = rc

End Function

Additional keywords: XDK

Last edited: 2015-08-31

10.16. @RISK 6.x VBA Macro Compatibility with 7.x

Applies to: @RISK for Excel 6.x Professional or Industrial Edition, upgrading to @RISK 7.x.
For compatibility of other releases, see Upgrading Palisade Software.

I wrote some macros that call @RISK functions listed in the XDK documentation in @RISK 6.x. Will they work in @RISK 7.x?

All @RISK 6.x macros will work in @RISK 7.x, but you will need to update the reference in Visual Basic Editor » Tools » References. Do this for each workbook where your VBA code calls @RISK functions:

  1. Launch @RISK and open your workbook.

  2. Press Alt-F11 to launch Visual Basic Editor.

  3. Remove check marks for all Palisade or Risk references, and tick (check) RiskXLA and Palisade @RISK 7.x for Excel Object Library only. If you have RISK Industrial and you want to use the RISKOptimizer part of the object model, select Palisade RISKOptimizer 7.x for Excel Developer Kit also. Don't select any others for @RISK, beyond these two or three.

  4. Click OK, close Visual Basic Editor, and save your workbook.

You need to update the reference when transitioning to @RISK 7 from an earlier major version, but not when transitioning between two @RISK 7.x version numbers.

What about the other products: StatTools, PrecisionTree, NeuralTools, and Evolver?

Update the references from 6.x to 7.x in the same way.  Please see Setting References in Visual Basic for these products.

See also: Using VBA to Change References to @RISK.

Last edited: 2018-01-08

10.17. @RISK 5.x VBA Macro Compatibility with 6.x/7.x

Applies to: @RISK for Excel 5.x, upgrading to @RISK 6.x/7.x Professional or Industrial Edition.
(@RISK Standard does not support automation.)
For compatibility of other releases, see Upgrading Palisade Software.

I wrote some macros that call @RISK functions listed in the Developer Kit documentation in @RISK 5.x. Will they work in @RISK 6.x/7.x?

@RISK 5.x macros will work in @RISK 6.x/7.x, with the exception mentioned below, but you will need to update the reference in Visual Basic Editor » Tools » References.

You need to update the reference when transitioning to @RISK 6 or @RISK 7, but not when transitioning between two @RISK 6.x version numbers or between two 7.x version numbers.

If you have a workbook with @RISK 5 automation code, follow this procedure to convert it to @RISK 6.x/7.x:

  1. Launch @RISK and open your workbook.

  2. Press Alt-F11 to launch Visual Basic Editor.

  3. Please see Setting References in Visual Basic for the appropriate references and how to set them.

  4. If you have 64-bit Excel, macro code in @RISK 5.7 required some special code beginning with #If Win64 Then. That code is no longer necessary in @RISK 6.x/7.x, so remove it.

  5. Click OK, close Visual Basic Editor, and save your workbook.

Repeat these steps for each workbook where your VBA code calls @RISK functions.

Exception: Some macros from @RISK 4.5 were implemented in @RISK 5.0.1 through 5.7.1 as "wrappers" for 4.5 macros. Those wrappers no longer exist in @RISK 6.x/7.x because they were incompatible with Excel 2010. If you have working @RISK 5.x macro code that fails in @RISK 6.x/7.x, even after setting the correct references, the problem is probably those legacy 4.5 features that are no longer supported.

Additional keywords: XDK

Last edited: 2015-06-18

10.18. @RISK 4.5 VBA Macro Compatibility with 6.x/7.x

Applies to: @RISK for Excel 4.5, upgrading to @RISK 6.x/7.x Professional or Industrial Edition.
(@RISK Standard does not support automation.)
For compatibility of other releases, see Upgrading Palisade Software.

I wrote some macros to control my model in @RISK 4.5.  Will they work in @RISK 6.x/7.x?

Macros that are called by @RISK (listed on the Macros tab of the Simulation Settings dialog) will probably be fine. But macros that call @RISK may need to be rewritten. The good news is that you get many new capabilities, notably a much richer set of methods to produce customized graphs.

The macro language changed substantially when @RISK 5.0 was released in late 2007. The 4.5 interface was largely oriented toward stand-alone functions, but the current interface is much more object oriented. Because the overall architecture changed, the current VBA interface does not contain a one-to-one replacement for some of the functions from the 4.5 interface. This means, unfortunately, that some macros written for 4.5, such as macros that run fits and simulations, must be modified for @RISK 6.x/7.x. You will have to analyze the logic of those macros and rewrite them for the new object model. This cannot be done mechanically by any sort of automatic conversion program. It was a difficult decision to make changes that would invalidate existing VBA macros, and we do understand and regret the inconvenience. But on balance we felt that the new object model offers so many advantages that the change was justified.

In Visual Basic Editor, you will need to remove the references to @RISK 4.5 and select references to the new version. Please see Setting References in Visual Basic for the appropriate references and how to set them.

Some help is available. Beginning with release 6.2, Automation Guides are included with the Professional and Industrial Editions of @RISK, Evolver, NeuralTools, and PrecisionTree. The Automation Guides introduce you to VBA programming in general and automating Palisade software in particular. To access an Automation Guide, click Help » Developer Kit (XDK) » Automation Guide. In addition to the Automation Guide, the same menu lets you access the XDK reference, with the complete object model and all methods and properties.

Last edited: 2015-06-18

10.19. @RISK 4.5 VBA Macro Compatibility with 5.x

Applies to: @RISK for Excel 4.5, upgrading to @RISK 5.x Professional or Industrial Edition.
(@RISK Standard does not support automation.)
For compatibility of other releases, see Upgrading Palisade Software.

I wrote some macros to control my model in @RISK 4.5.  Will they work in @RISK 5.x?

Macros that are called by @RISK (listed on the Macros tab of the Simulation Settings dialog) will probably be fine. But macros that call @RISK may need to be rewritten.

The macro language in @RISK 5.x is substantially different from the macro language in 4.5.  The 4.5 interface was largely oriented toward stand-alone functions, but the 5.x interface is much more object oriented.  This means, unfortunately, that some macros written for 4.5, such as macros that run fits and simulations, will no longer work in 5.x.

Because the overall architecture of the macro language has changed, the 5.x interface does not contain a one-to-one replacement for some of the functions from the 4.5 interface. For some macros written for 4.5, you will have to analyze the logic of the macro and rewrite it for the new object model. This cannot be done mechanically by any sort of automatic conversion program.

It was a difficult decision to make changes that would invalidate existing VBA macros, and we do understand and regret the inconvenience. But on balance we felt that the new object model offers so many advantages that the change was justified.

@RISK 5.0.1 improved compatibility

Some additional macros from @RISK 4.5 were implemented in 5.0.1 and later releases as "wrappers" for 4.5 macros, while others have stubs that give explanatory error messages. There's also a new manual in PDF, in addition to the help file. You will still need to rewrite some macros written for @RISK 4.5, but the new release makes the process easier.

@RISK 5.0.1 is included in the initial release of The DecisionTools Suite 5.0. If you have @RISK 5.0.0 and not the Suite, contact Palisade about updating to 5.0.1 or later. You won't need a new Activation ID, but you will need a new installer.

Documentation on the new macro language is available in different places depending on your version of @RISK:

  • @RISK 5.5.1, 5.7.0, and 5.7.1: run @RISK, then in the @RISK Help menu select Developer Kit, then Manual.

  • @RISK 5.5.0 and 5.0.1 (including DecisionTools Suite 5.0): click the Windows Start button, then Programs or All Programs, then Palisade DecisionTools, then Online Manuals, then @RISK for Excel Developer Kit.

  • @RISK 5.0.0: upgrade to at least 5.0.1 if you can, to get the macro compatibility features mentioned above.  If you're unable to upgrade, click the Windows Start button, then Programs or All Programs, then Palisade DecisionTools, then Help, then @RISK for Excel Developer Kit Help.

We recommend you start with the Project Overview topic in the online manual or help file. That gives an overview of the new object model, with clickable links to various objects.  Also check the Getting Started topic for the reference you must insert in your VBA module.

Additional keywords: XDK

Last edited: 2015-06-18

10.20. Adding Outputs from VBA

Applies to:  @RISK 5.x–7.x

When I wrote VBA macro code for @RISK 4.x, I used the RiskAddOutput( ) function; but I can't find it in the macro interface for the new version. Which function should I use?

There is no longer a dedicated VBA method to designate an @RISK output. Instead, your macro code should insert a RiskOutput( ) function at the beginning of the formula in the worksheet cell, using normal Excel VBA methods such as the Formula member. For example, suppose you have a cell containing this formula:

=NPV(.1,G1:G10)

To make it an output, change the formula to

=RiskOutput( )+NPV(.1,G1:G10)

Optional arguments to the RiskOutput( ) function let you designate a name for the output, or specify multiple outputs as an output range. For details on usage, please see the RiskOutput topic in the @RISK for Excel Help file or in the Reference section of the @RISK for Excel user manual.
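
For instance, here's a sketch of macro code that converts a cell into an output (the cell address and output name are hypothetical):

Sub MakeCellAnOutput()
    Dim c As Range
    Set c = ActiveSheet.Range("B12")
    If InStr(c.Formula, "RiskOutput") = 0 Then
        ' Prepend RiskOutput() to the existing formula, dropping its "=".
        c.Formula = "=RiskOutput(""My NPV"")+" & Mid(c.Formula, 2)
    End If
End Sub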

Last edited:  2015-08-31

10.21. Name Conflicts with @RISK VBA Code?

Applies to: @RISK 6.x/7.x

I'm writing my own VBA code, and I'm concerned my variable names may inadvertently conflict with @RISK's name space. Do you have a list of names I should consider reserved?

If you stay away from names that begin with "Risk", you should be fine.

There are a few exceptions, all legacy global names: GridData, GridStatistics, LegendNotDisplayed, LegendWithoutStatistics, LegendWithStatistics.

Last edited: 2015-08-31

11. @RISK 6.x/7.x with Projects

11.1. How to perform Resource Leveling in MS Project during a simulation with @RISK

Applies to: @RISK v7.x

Description: One of the features of MS Project is Resource Leveling, which tries to distribute the project work evenly. For more information on Resource Leveling, see Microsoft's documentation; one reference link: https://support.microsoft.com/en-us/office/distribute-project-work-evenly-level-resource-assignments-59ee715d-4446-42c9-8756-4ea2a5a7e4a0

To perform Resource Leveling in each iteration of a simulation, add some VBA code that calls the leveling process, and use the Macros tab in @RISK's Simulation Settings to run that code during the simulation.
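
As a sketch, the VBA might look something like this (it assumes the linked MPP file is already open in Microsoft Project, that Project's LevelNow method performs the leveling, and that the macro is listed on the Macros tab of Simulation Settings to run in each iteration; the attached model contains the working version):

Sub LevelEachIteration()
    Dim pj As Object
    On Error Resume Next
    Set pj = GetObject(, "MSProject.Application")  ' attach to the running Project instance
    On Error GoTo 0
    If pj Is Nothing Then Exit Sub
    pj.LevelNow   ' level resources in the active project
End Sub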

A small model is attached to this article; it performs the Resource Leveling automatically, once per iteration. You can enable or disable the macro and compare runs with and without leveling. The attached model also shows how the Macros tab of Simulation Settings is configured.

Expect a simulation that includes Resource Leveling to take longer than a regular simulation, since the leveling process runs in every iteration.



Last Update: 2020-06-25


11.2. Overview of @RISK with Microsoft Project

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions

I have my project in an MPP file, but I'm not sure what to do with it in @RISK. Do you have any guidebooks or videos that can help me?

Great question! Here are some materials to get you started.

  • In @RISK, click Help » Videos. Our Guided Tour has a section on @RISK with projects—you can select it from the menu at the left as soon as the video starts to play.
  • On our project management Web page, you'll find an illustrated step-by-step PDF guide to projects in @RISK. Look for "Step-by-Step Schedule Risk Analysis example", or use this direct link.
  • In @RISK, in Help » Example Spreadsheets, look for the group of simulations with Project. You'll see a number of easy examples there, showing various features.

Last edited: 2018-08-22

11.3. @RISK for Project not in v8

Applies to:
@RISK 8.x, Professional and Industrial Editions

Where can I find @RISK for Project in v8?

The link between @RISK for Excel and Microsoft Project schedules that was offered in previous versions of @RISK is not included in @RISK 8.0.  This means that Monte Carlo simulation of Microsoft Project schedules is not available in @RISK 8.0.  However, cost risk analysis and schedules built in Excel can still be simulated as always.  Palisade is actively evaluating how to best integrate leading-edge schedule risk analysis going forward.  Please note, you may install both @RISK 7.x and @RISK 8.0 on the same machine (although they cannot be run simultaneously).  This enables you to keep performing schedule risk analysis in Microsoft Project using an older version of @RISK, which Palisade will continue to support.


11.4. How Are Tasks Scheduled in a Project?

Applies to:
@RISK 6.7/7.x, Professional and Industrial Editions

We sometimes get questions like, "If I edit the _____ field, what will @RISK do with the project schedule?"

The answer is that all schedule computations are done by Microsoft Project. In each iteration, @RISK computes the random numbers from your input distributions, ships them to Project, and retrieves the values for @RISK output fields after Project does all the schedule computations. So the schedule computations are exactly the same as if you put those random numbers into the project fields yourself. There are no special rules for computing project schedules, just because @RISK is involved.

For details of project computations, see the Microsoft article How Project schedules tasks: Behind the scenes.

See also: Common Mistakes in Scheduling.

Last edited: 2016-09-15

11.5. Schedule Audit: Powerful Debugging Tool

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I'm using @RISK with a project, but it doesn't seem to be simulating correctly. I've entered my distributions, probabilistic branching, and/or risk register, but the dates aren't varying as I would expect.

When you open the Excel workbook in @RISK, does @RISK then open Microsoft Project and open your MPP file in Project? If not, the link between the Excel workbook and the MPP file has been broken and must be re-established. To fix this, see Project Not Linked in @RISK 6.x.

Assuming that the project is linked, there may be a problem in your project itself. Missing predecessor-successor relationships are a common problem, along with some kinds of relationships such as finish-to-finish. Fortunately, you don't have to search for these by examining all your tasks. In @RISK's Project menu, select Schedule Audit. Schedule Audit will present you with a list of issues in your project, identified by task. Each of them is worth looking at carefully, even if you end up rejecting some of them as not applying in your situation. But if you fix all the issues that do apply, you will probably find that your simulation is now behaving much more as expected.

Last edited: 2015-09-01

11.6. Common Mistakes in Scheduling

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions
@RISK for Project 4.x

Do you have any guidelines for best practices in scheduling? What are some pitfalls I should avoid?

The attached slide presentation by one of our consultants shows some of the issues. This article presents a summary, for quick reference.

Most importantly, avoid dangling activities. You create a dangling activity when changes in a predecessor, such as longer duration, aren't properly transmitted to the successor. One way to do this is with Start-to-Start constraints or Finish-to-Finish constraints: if an earlier task runs long, the later task still starts or finishes on the original date. With dangling activities, you can't trust the dates, float, or critical path, and risks don't have their proper effect on the schedule.

One solution to this is Finish-to-Start constraints. These are best where two tasks really can't be done in parallel, but one must finish before the other one can start. However, if your tasks really can run in parallel, you can still prevent them from dangling by using Start-to-Start and Finish-to-Finish constraints. You need to create three milestones — one for the start of the first task, and one for the end of each task — to persuade Microsoft Project to accept both constraints on the same pair of tasks; see pages 7 and 8 of the attached slides.

Another common mistake is activities with no predecessors or no successors. Every activity, except the first and the last, must have at least one Finish-to-Start or Start-to-Start predecessor relationship and one Finish-to-Start or Finish-to-Finish successor relationship, like this:

Predecessor Task → F-S or S-S → This Activity → F-S or F-F → Successor

Be wary of "Must Finish on" constraints on important finish dates. These can frustrate risk analysis of the very items you care about. You'll sometimes get messages from Project to the effect that a scheduling conflict prevents finishing this task in time. Pay heed to those messages, and don't ignore them or turn them off.

See also: How Are Tasks Scheduled in a Project?

Last edited: 2016-09-15

11.7. Which Version of Project is Opened by @RISK?

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions
@RISK for Project 4.x

I have multiple versions of Microsoft Project on my computer. @RISK opens one version of Project, but I want it to open the other version.

Or,

@RISK works if I open Project first, but when Project isn't already running @RISK is unable to open it.

If you are comfortable editing the System Registry, you can create or change a registry key that tells @RISK which version of Project to open. (If you prefer not to edit the System Registry, and you're using @RISK for Project release 4.x, simply open your preferred version of Project before launching @RISK.)

Registry edits for @RISK up through 4.1, and for @RISK 6.1 and later:

  1. Make sure that Project is not running, then locate the Winproj.exe file and take note of the full file path. Examples:
    C:\Program Files\Microsoft Office\OFFICE11\Winproj.exe
    C:\Program Files (x86)\Microsoft Office\OFFICE14\Winproj.exe

  2. To open the Registry Editor, click the Windows Start button, then Run. Type REGEDIT and click the OK button. (In some versions of Windows, you can do a search for the REGEDIT application.)

  3. When the Registry Editor window appears, navigate to
    HKEY_LOCAL_MACHINE\Software\WOW6432Node\Palisade
    in the left-hand pane, or in 32-bit Windows navigate to
    HKEY_LOCAL_MACHINE\Software\Palisade
    In the right-hand pane you will see two string values called Main Directory and System Directory, and possibly some additional values.

  4. If Project Path appears in the right-hand pane, double-click it and edit the path to match the path you noted in step 1.

  5. If Project Path does not appear in the right-hand pane, right-click an empty spot in the right-hand pane and select New » String Value. Name the new string value Project Path, with a space between the two words. Double-click the name Project Path and edit in the path that you saved in Step 1.

  6. Test your edit by launching @RISK. If the correct version of Project does not come up, edit the value of the Project Path string. If the correct Project comes up, close the Registry Editor by clicking File » Exit.

Special note for @RISK 6.0:

In @RISK 6.0, the Registry key is HKEY_CURRENT_USER\Software\Palisade\@RISK for Excel\6.0\Project Version and not HKEY_LOCAL_MACHINE\Software\Palisade\Project Path. Beginning with 6.1, HKEY_LOCAL_MACHINE\Software\Palisade\Project Path is preferred, but if it is not found then HKEY_CURRENT_USER\Software\Palisade\@RISK for Excel\6.0\Project Version will be checked.

Last edited: 2015-12-24

11.8. Auto Schedule or Manual Schedule?

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions
@RISK for Project 4.x

If tasks in Microsoft Project are "Manually Scheduled" as opposed to "Auto Scheduled", does it make any difference? Does this impact the simulation results in any way?

Auto Schedule or Manual Schedule is an option in Project 2010 and newer; in older versions of Project it's Calculation options for Microsoft Project. Under either name, it's the same idea as Automatic or Manual Calculation in Excel.

In @RISK 6.0 and later, manually scheduled tasks are temporarily switched to auto scheduled during a simulation, so that the start and finish dates are calculated by Project, and they are reset to manual at the end. So it makes no difference to a simulation whether tasks are set to auto or manual schedule.

In @RISK for Project 4.x, you need to set calculation to Automatic. If you have Microsoft Project 2010 with @RISK for Project 4.x, make sure that no tasks are marked as manually scheduled.

Last edited: 2015-12-24

11.9. @RISK with Projects in Progress

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions
@RISK for Project 4.x

I used @RISK to set up my project initially, but now the project has started. Some tasks are complete, some are partly complete, and some have not yet started. How can I best use @RISK with a project in progress?

There are several approaches to choose from:

  • (Requires @RISK 6.0 or newer)
    In the @RISK » Project » Project Settings dialog, set Date Range for Simulation to either Activities after Current Project Date or Activities after Project Status Date. There is no need to remove distributions from tasks that are complete or edit distributions in tasks that are partly complete. Then, in a simulation @RISK will not vary any tasks that are complete, and it will prorate the variation in tasks that are partly complete. (The Current Date and Status Date can be set in Microsoft Project 2010–2016 by clicking the Project Information icon on the Project tab.)

    You can illustrate this with one of our examples. In @RISK, click Help » Example Spreadsheets » Project » Probabilistic Branching. After @RISK opens the MPP file, click into MS Project and set the project status date to some date in the middle of task 4. Then, in @RISK » Project » Settings set Date Range to Project Status Date. Run a simulation, and then Browse Results in the durations. You'll see that the durations of tasks 2 and 3 don't change because they were complete before your project status date; the variability of task 4 is now less (prorated, as the documentation says), and tasks 5 to 8 (which start after the status date) still vary according to the original distributions.

  • A simple and practical approach is to apply distributions to the Remaining Duration field rather than Duration. That way, if an activity is 100% complete the remaining duration will be zero; and for an activity that is partially complete you can set a range on the remaining duration. This avoids the default behavior that prorates the uncertainty over the unfinished work, which in many cases needs to be completely reassessed to model the project in light of the actual conditions.

  • Or, you can remove the distributions from completed tasks and fill in either the actual finish or the actual duration; when you fill in one, Project will calculate the other. For tasks that have started but are not yet complete, you may want to remove the distribution from Duration and put a distribution on Remaining Duration. Or, if you don't have much confidence in your estimate of the percent complete, you might want to leave the distribution on duration as it is. This decision may need to be made task by task.

Last edited: 2017-07-06

11.10. Linked Subprojects in @RISK

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions

I'm planning to simulate a project with @RISK. It will be complex, so I'd like to have subprojects as separate files. Does @RISK support that?

Yes! Make the sub-projects linked projects within your main MPP file. Import only your main MPP file; Project will tell @RISK about the sub-projects automatically. While running a simulation, you don't need to have the sub-projects open.

A very simple example is attached to this article. Download all five files to the same folder, launch @RISK, and open KB1647_MasterProject.xlsx. Most durations contain @RISK distributions, and the finish dates for the three main phases are already set up as outputs. Of course, you can add other outputs if you wish.

Last edited: 2018-06-14

11.11. Transferring Edits in MPP File to Excel Workbook via ProjectFieldVal

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I'm using @RISK to simulate a project. I distribute the MPP file to our consultants, and they update Duration and other fields. When I select Project » Sync Now in @RISK, isolated numbers in the Excel workbook are updated, but numbers that are part of formulas are not updated. How can I structure my project so that all changes in the MPP file are reflected in the workbook?

Use the ProjectFieldVal property in your @RISK distributions. It tells @RISK to pick up the current value for this field in the MPP file and use that value in your formula. The field value can be changed in the MPP file while @RISK is not running, or even on a PC where @RISK isn't installed. Later, when the Excel file is reopened in @RISK, it gets the new value from the changed MPP file. @RISK will use that new value when simulating.

Here are three examples of Excel formulas using ProjectFieldVal.

Example 1. Sample values between 10% below and 10% above the value in the MPP file. (By default, RiskVary( ) uses a triangular distribution.)

=RiskVary(ProjectFieldVal,-10,10)

Example 2. Create a triangular distribution with the most likely value in the MPP file. The minimum possible is 10% below that (100%–10% = 90% = 0.9), and the maximum is 50% above the most likely value (100%+50% = 150% = 1.5). When a simulation is not running, display the field value from the MPP file, not the expected value of the triangular distribution.

=RiskTriang(ProjectFieldVal*0.9, ProjectFieldVal, ProjectFieldVal*1.5, RiskStatic(ProjectFieldVal))

Example 3. The task duration is the duration in the MPP file, plus an additional duration from the risk register in the Excel file.

=ProjectFieldVal + 'RiskRegister'!L6

Last edited: 2016-03-10

11.12. Global Setting of Probability

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I am using @Risk with Microsoft Project. I want to set distributions on durations for many tasks. I can see how to do it one activity at a time, but I can't figure out how to set them on durations globally. I have very large schedules, and it would take a lot of time to set the duration distributions one at a time. Is there a better way?

Yes—two better ways, in fact.

  • In @RISK, click Project » Model Tools » Parameter Entry Table. (You can see an example at Help » Example Spreadsheets » Project Management » Parameter Entry Table.)

    TIP: Select the box labeled "Also Add Entry Table to .MPP in Microsoft Project". This gives you the option of editing the distribution parameters directly in the project file, without running @RISK. When you later reopen the Excel workbook in @RISK — or when you use Sync Now — @RISK will copy the values from the table in the MPP file into the Excel workbook.

  • Alternatively, set one distribution in @RISK and then use Excel editing to propagate that to other tasks, either by copy/paste or by click-and-drag.

Last edited: 2015-09-01

11.13. Gantt Chart: Adding or Removing Arrows

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

Is there a quick way to turn the display of dependency arrows on or off? I have a project with defined predecessors and successors, but none of the dependency arrows appear when I import the project to @RISK 6.

Yes, this is easy to do. In the @RISK ribbon or menu in Excel, click Project » Charts and Reports » Standard Gantt Chart. Then select or deselect "Display Links/Connectors Between Tasks".

Last edited: 2015-09-01

11.14. Gantt Chart: Number of Tasks Imported

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I imported an MPP file that was created in @RISK for Project 4.x or in Microsoft Project, and the Gantt chart in Excel is not showing all my tasks. Is there some limitation on importing tasks to the Gantt chart?

There is no built-in limit to the number of tasks that can be imported, but @RISK 6.1 and newer show only 3000 bars on the Gantt chart (1500 bars in @RISK 6.0).

This is only visual, and does not affect the simulation in any way. All tasks are still simulated.

However, if the project has hundreds or thousands of activities it is usually better not to create the Gantt chart. This can be set as default in Application Settings or manually controlled in the Charts and Reports » Standard Gantt Chart dialog in @RISK.

Additional keywords: 3000 tasks, limit on Gantt chart

Last edited: 2015-07-08

11.15. What Happened to Conditional Branching?

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I am familiar with @RISK for Project 4.1, and I like the feature of conditional branching.  I see that @RISK 6 has probabilistic branching, but I want to branch based on logic, not chance.  How can I do it?

In @RISK for Project 4.1, there was a wizard to set up conditional branching.  Now it is much simpler: you just use an IF( ) function in Excel.

  1. Make the Predecessors column visible in Project before importing your MPP file.  That will cause @RISK to include it on the Tasks sheet at the time of importing.

    If you have already imported your MPP file, you can add the Predecessors column without re-importing.

  2. Apply an IF( ) function to the Predecessors cell for the task where you want to branch conditionally.

A minimal example is attached, to show the logic.
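
For instance, a formula along these lines in the Predecessors cell of the branching task (the cell reference and task IDs are hypothetical) chooses between task 3 and task 4 as the predecessor, based on a value elsewhere in the workbook:

=IF(Tasks!E5>10,"3","4")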

See also: Using a Risk Register is an alternative approach. In @RISK, click Help » Example Spreadsheets. Under Simulation with Microsoft Project, you'll find several examples with risk registers.

Additional keywords: If-Then branching

Last edited: 2015-09-01

11.16. Task Numbers in the Risk Categories Dialog

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions, when used with an MPP file

In @RISK, I clicked Project » Model Tools » Risk Categories, but the task numbers shown there don't match the task numbers in my MPP and Excel files.  What's wrong?

In Microsoft Project, every task has two numbers: the task ID shown at the left, and a Unique ID that is normally not shown. The Unique ID is assigned when the task is created, and it never changes. The visible task IDs can change as you insert and remove tasks. The task IDs shown in the Risk Categories dialog are the Unique IDs.

You can make the Unique IDs visible in Excel or Project or both.  In Project, right-click a column and select Insert Column » Unique ID.  In @RISK in Excel, click Project » Project Link » Insert or Hide Field, and under Field to Insert select Unique ID.

Last edited: 2015-09-01

11.17. Parameter Entry Table versus Risk Categories

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions

A parameter entry table and a risk category both seem like a way to select a group of tasks and apply variation to them. How do I choose between them?

Choose a parameter entry table when you want to show parameters — such as min, most likely, and max — in dedicated columns for easy editing. Choose a risk category when you want a group of tasks all to vary by the same percentages or same amounts. There can be only one parameter entry table, but there can be multiple categories, with different variation for each category.

With either tool, most people choose to vary the Duration field. But you can choose any numeric or date field.

Creating a parameter entry table gives you dedicated columns for minimum, most likely, and maximum, or other appropriate distribution parameters. You can then edit any of those numbers without having to edit formulas. A project can have only one parameter entry table, so plan for this before you create it. You can have multiple risk categories.

Can I use both in the same project?

Yes, but you should not apply both of them to any of the same tasks.

How do I create a parameter entry table or a set of risk categories?

The two dialogs are similar, so we'll explain the Parameter Entry Table dialog in detail, and then gloss over the Risk Categories dialog.

  1. Click Project » Model Tools » Parameter Entry Table to open the dialog box.
  2. Choose the field (column) for which you're creating the parameter entry table.
  3. Set your desired type of variation.
  4. Under Build Entry Table For, select All Tasks to apply the table to every task in the project, or Selected Tasks to apply the table to just the tasks that meet some criterion. Either setting will automatically ignore milestones and summary tasks, which cannot have distributions.
  5. If you chose Selected Tasks, click Add if you want to choose tasks by clicking, or Add Marked if you want to choose them based on some characteristic. For example, if you choose Add Marked and then click in a Duration field with a value of 1 day, @RISK will select every task that has a Duration of 1 day; if you click on a text field that has the value Construction, @RISK will select every task that has the value Construction in that same column. (This includes any tasks that are invisible because of filtering.)
  6. If you plan to have project managers update the MPP file, check (tick) the box Also Add Entry Table to MPP. If everyone who will be updating the project has @RISK, there's no need to duplicate the parameter entry table in the MPP file.

Risk categories are a similar idea to the parameter entry table, but you can designate different categories with different distribution functions. You select distributions the same way as for a parameter entry table, and you select tasks using Add or Add Marked in the same way, but @RISK writes your chosen variation directly into the field you selected, for every selected task. For example, if you selected Duration to vary between 5% below base and 20% above base, @RISK will write RiskVary functions with those numbers into the Duration field of the tasks you selected. You can create additional categories when you want, or change the variation method of an existing category.

Risk categories and parameter entry tables are similar, but they do have a few differences. Here's a summary:

  • Number allowed: multiple risk categories, but only one parameter entry table.
  • Same distribution? Risk categories: yes within one category, though different categories can use different distributions. Parameter entry table: yes.
  • Same parameters? Risk categories: yes within one category, though different categories can have different parameters. Parameter entry table: you can easily edit the parameters individually.
  • Parameters visible in reserved columns in the worksheet? Risk categories: no. Parameter entry table: yes.

Last edited: 2016-09-21

11.18. Scheduling Rework in a Project

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

When a certain task, call it Task 3, is finished, there's a 20% probability that the work may not be acceptable, in which case Task 3 will have to be redone. (The rework might include one or more predecessors of Task 3 too.) How do I set this up with probabilistic branching in @RISK?

Probabilistic branching can go only forward, not backward. You could have a rework task, as a successor to Task 3, and branch around that rework task with probability 100%–20% = 80%. However, there's a more straightforward solution: use a RiskProjectAddDelay function in your Risk Register. 

In the attached example Rework, Tasks 2 and 3 get reworked (or not) as a group, with probability 20%. If the RiskBernoulli in the Risk Register returns 1 in a particular iteration, then for that iteration the RiskProjectAddDelay adds a task sandwiched between Tasks 3 and 4. The duration of that rework task is the sum of the durations of Tasks 2 and 3, which is 19 days. Without reworking, the overall duration of the project is 21 days; with reworking Tasks 2 and 3 it's 21+19 = 40 days. If you run a simulation and look at the total duration in C2 on the Tasks sheet, you'll see that 80% of the time it's 21 days, and the other 20% it's 40 days.

In the example, almost everything is deterministic, just to make it easier to see how it all works. (A real project would most likely have probability distributions for the durations of Tasks 2, 3, and 4.) But you could get more elaborate. For example, maybe under some circumstances a second rework would be needed. In that case, replace the RiskBernoulli in C2 of the Risk Register with a RiskBinomial or other suitable discrete distribution.
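
For instance, if at most two independent rework cycles are possible and each is 20% likely (illustrative numbers, not taken from the example file), C2 might hold:

=RiskBinomial(2,0.2)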

Or maybe a rework doesn't take the same time as the original, but it could be more or less. For that, replace the duration in A2 of the Risk Register with a continuous distribution, perhaps something like

=RiskVary(Tasks!C3+Tasks!C4,-10,15,0,,"Triang")

as shown in the example Rework2. This is a triangular distribution varying between 10% below and 15% above the 19 days for Tasks 2 and 3 combined. Run a simulation. You'll see that the project duration is still 80% likely to be 21 days, but instead of a single value at 40 days there's now a distribution of possible durations around 40 days.

Last edited: 2015-09-01

11.19. Adding a Project Column to Excel

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

I want to add a column from my MPP file to my Tasks or Resources sheet in Excel. I know I could change the default view in Project and re-import the MPP file, but I don't want to lose all the editing I've already done in Excel. Is there another way?

Yes, there is.

  1. Have your Excel file and linked project open in @RISK.

  2. Click the sheet to which you want to add a column. It doesn't matter where on that sheet you click.

  3. In the @RISK ribbon, click Project » Project Link » Insert or Hide Field. @RISK lists the available Project fields, and also lets you specify where the column should be added.

Last edited: 2015-09-01

11.20. Filtering Tasks in Projects

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions

I'm using @RISK to simulate a project. How do I filter tasks on the Tasks sheet in @RISK's Excel workbook?

To filter tasks in a project, set your filter in MS Project, not in Excel, then update (not sync) the filter in @RISK. Details:

  1. In Project, Filter settings are on the View tab of the ribbon. You can use a built-in filter, a filter you created and saved previously, or a new filter.
  2. If you created a new filter that you may want to reuse, save your project at this point.
  3. In Excel, in the @RISK ribbon, click Project » Project Link » Update Project Filters.

Will @RISK simulate only the visible tasks?

No, rows that are "filtered out" are still in the Excel and Project files. Computations are not affected, and that includes simulations. Even if you select a range of rows (tasks), your selection will include the invisible tasks in that range. This is not a peculiarity of @RISK; Excel filtering works exactly the same way.

I'm trying to set up a filter because I want to apply similar variation to a group of tasks.

You probably want a parameter entry table or risk categories. See Parameter Entry Table versus Risk Categories.

Last edited: 2016-09-21

11.21. Critical Index of Summary Tasks

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions

In the Probabilistic Gantt chart, I see critical indices for summary tasks. How are they computed?

When using the Standard Engine, @RISK relies on Microsoft Project to determine critical paths. The critical path found by Project may include regular tasks and summary tasks. @RISK counts the number of iterations where a given task of either type is on the critical path, divides by the total number of iterations, and reports the resulting percentage as the critical index for that task.

When using the Accelerated Engine, @RISK determines the critical path based on individual tasks. Summary tasks are never considered part of the critical path, and therefore @RISK reports "n/a" (not applicable) as the critical index for summary tasks.

You can select an engine in the Project Settings dialog in @RISK. Use Check Engine in that dialog to determine whether your project can use the Accelerated Engine, or what issues require it to use the Standard Engine.

Last edited: 2015-09-01

11.22. Unlinking .MPP file from @RISK

Applies to: @RISK 6.x/7.x, Professional and Industrial Editions, when used with projects

I imported my .MPP file into @RISK. I want to continue simulating costs and other items, but I'm no longer interested in simulating the schedule. Can I unlink the MPP file so that Microsoft Project is no longer involved in my simulation?

Yes, this is easily done:

  1. Open Excel's name manager. (In Excel 2007 or above, click Formulas » Name Manager. In Excel 2003 or below, click Insert » Name » Define.)

  2. Delete the name RiskMPPPath and save the workbook.

This will remove the link with Project, and you can move or delete the .MPP file. Please be aware that the Gantt chart and related information in the Excel workbook will no longer be updated.
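
If you prefer to do this programmatically, here's a minimal VBA sketch (assuming the workbook you want to unlink is the active workbook):

Sub UnlinkMPP()
    ' Delete the defined name that links this workbook to its .MPP file.
    ' Caution: once the link is removed, the Gantt chart and related
    ' information in the workbook will no longer be updated.
    On Error Resume Next   ' ignore the error if the name doesn't exist
    ActiveWorkbook.Names("RiskMPPPath").Delete
    ActiveWorkbook.Save
End Sub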

Last edited: 2015-09-01

11.23. Sending Confidential Projects to Palisade (Scrambler)

Applies to:
@RISK 6.x/7.x
@RISK for Project 4.x

The technical support representative needs to see my project to solve a problem I reported, but I can't send it because it's sensitive. What can we do? Is there some way to obscure or hide the task names?

Yes, you can save the project under a new name and scramble all the task names. Because of the co-ordination with Excel, instructions for @RISK 6.x/7.x are a little different from instructions for @RISK for Project 4.x.

Caution: Once scrambled, the task names cannot be unscrambled. Therefore, save the project(s) under different name(s) now.

For @RISK 6.x/7.x:

You'll scramble the task names in Microsoft Project, but you need a few extra steps to synchronize the scrambled names with your Excel file.

  1. Open @RISK, and open your Excel model. @RISK will open your project for you, as usual.
  2. Click into Microsoft Project and click File » Save As to save the file under a new name.
  3. @RISK will sense that you changed the MPP file name, and will prompt you to save the Excel file. Use File » Save As, not plain File » Save, and use a name consistent with the new name of the MPP file.
  4. Close Excel. @RISK will close Microsoft Project for you.
     
  5. Double-click your project(s) to open them in Microsoft Project, not @RISK.
  6. Press Alt-F11 to bring up the VBA editor window.
  7. Insert a module in one of the open projects (right-click on Microsoft Project Objects and click Insert » Module).
  8. Paste in the macro code below, from Sub to End Sub inclusive.
  9. Press F5. The task names in all open projects will be scrambled.
  10. Right-click on the module you created and select Remove Module. When asked whether to export before deleting, answer No.
  11. Close the Visual Basic Editor window. Save the project(s) and close Microsoft Project.
     
  12. Reopen @RISK and open your latest Excel file. After it finishes linking with your project, in @RISK click Project » Sync Now to bring in the scrambled task names.
  13. Save the Excel file, and close @RISK.  Send the Excel file and the MPP file to Palisade.

For @RISK for Project 4.x:

  1. In Microsoft Project, not @RISK, open the project(s) where you want to scramble task names.
  2. Save the project(s) under different name(s).
  3. Press Alt+F11 to bring up the VBA editor window.
  4. Insert a module in one of the open projects (right-click on Microsoft Project Objects and click Insert » Module).
  5. Paste in the macro code below, from Sub to End Sub inclusive.
  6. Press F5. The code should run and the task names in all open projects will be scrambled.
  7. Right-click on the module you created and select Remove Module. When asked whether to export before deleting, answer No.
  8. Close the Visual Basic Editor window. Save and close the project.

Here is the code of the macro:

Sub scrambleNames()

    ' This macro scrambles all task names in all open projects to make them
    ' unidentifiable.  Caution! The process cannot be reversed.

    On Error Resume Next

    Dim thisProject As Project, thisTask As Task
    Dim i As Long, randomName As String

    Randomize   ' seed the random-number generator so each run gives different names

    For Each thisProject In Application.Projects
      ' Rename the project summary task, if the project has one.
      If Not thisProject.ProjectSummaryTask Is Nothing Then _
          thisProject.ProjectSummaryTask.Name = String(10, Chr$(65 + Int(Rnd * 26)))
      ' Replace every task name with ten random capital letters.
      For Each thisTask In thisProject.Tasks
        randomName = ""
        For i = 1 To 10
          randomName = randomName & Chr$(65 + Int(Rnd * 26))
        Next
        thisTask.Name = randomName
      Next
    Next

End Sub

Additional keywords: Scrambler, scrambler utility

Last edited: 2015-09-01

11.24. Importing from Primavera

Applies to:
@RISK 6.x/7.x
@RISK for Project 4.x

Can I import Primavera files into @RISK?

Yes. Export the Primavera file into a format that Microsoft Project can read. When you import the file into MS-Project, it is ready for @RISK 6.x/7.x or @RISK for Project 4.x.

Instructions for P6:

  1. Choose File » Export.
  2. Choose the MPX/MPP or XML option.
  3. Choose the export type: Project or Resource.
  4. Select the project(s) to export, and specify a file location and name.
  5. If you chose the XML option in step 2, open the XML file in Microsoft Project, save it as an MPP file, and close Project.

Instructions for P3:

  1. Choose Tools » MPX Conversion Utility.
  2. Choose File » Convert a P3 project to an MPX project...
  3. Select the P3 file that you want to export, and click OK.
  4. Specify an output file location and name, and click OK.

Last edited: 2016-03-14

12. @RISK for Project 4.x

12.1. Displaying the @RISK Functions Column in @RISK for Project

Applies to:  @RISK 4.x For Project

How do I get the column labeled @RISK Functions to appear in my Project file?

The easiest way to display the column is to go to the menu in Project and choose View » Table » @RISK. (In Project 2010, View » Other Views » More Views » @RISK.)

You can also add the @RISK column to other views and tables. (By default, @RISK uses the Text1 column for distributions, but you can change that as described in Designating Columns for @RISK Functions.)

In Project 2007 and earlier, you can display the @RISK Functions column in any view as follows:

  1. Select the column to the right of the position where you want the @RISK Functions column to appear.
  2. From the menu in Project, choose Insert » Column. Project's Column Definition dialog appears.
  3. In the Field name area, choose Text1.
  4. In the Title area, type: @RISK Functions or any other name you wish.
  5. Set the Width and Header Text Wrapping Options as desired and click OK. The column now appears in your Project, along with any @RISK functions it may already contain.

In Project 2010 and later, you can display the @RISK Functions column in any view as follows:

  1. Select the column to the right of the position where you want the @RISK Functions column to appear.
  2. Right-click the column header and select Insert Column.
  3. A new column appears, with all available field names. Choose Text1. The column now appears in your Project, along with any @RISK functions it may already contain.
  4. Right-click the column heading and select Field Settings. In the Field name box, type @RISK Functions or any other name you wish.
  5. Set the Width and Header Text Wrapping options as desired and click OK.

See also:  Designating Columns for @RISK for Project 4.x Functions

Last edited: 2012-11-09

12.2. Designating Columns to use for Functions in @RISK for Project

Applies to:  @RISK 4.x For Project

I don't want to use the Text1 column for my @RISK functions, because I use Text1 for another purpose. How can I specify a different column to contain my @RISK functions?

You can tell @RISK to use any of columns Text1 through Text20. You must have an open project with at least one task or resource in it when you issue this command; otherwise, you will get a message asking you to "Please open a project for use with @RISK."

  1. From the menu in Project, choose @RISK » Model » Columns for @RISK Functions.
     
  2. In either the Task Tables area or the Resource Tables area, click the field that will contain the @RISK data to place a check mark beside that field. (If desired, you may specify additional fields to use by holding down the Ctrl key and clicking additional fields.)
     
  3. Click the OK button. A dialog appears asking:

    "Update this project to read @RISK function from selected task and resource fields? (Note: @RISK functions in other fields will be ignored and should be moved to the selected fields!)"

  4. Click the OK button.

If the columns you chose in step 2 don't display automatically, please see this article: Displaying the @RISK Functions Column.

Last edited: 2013-04-25

12.3. Distribution's Sampling Depends on the Occurrence of an Event

Applies to:
@RISK for Project 4.x
(If you have @RISK 6.0 or newer, use RiskCompound; see Combining Probability and Impact.)

@RISK distributions can also include optional arguments. A useful optional argument is "EnableWhen( )". Conditions can be specified as the argument of EnableWhen( ); the condition determines whether the distribution will be sampled. Example:

Duration = RiskNormal(10,2, EnableWhen(Variable[RiskOccurred]=1) )

The above function will only sample the defined RiskNormal( ) when the global variable called "RiskOccurred" is equal to 1. Conditional statements with EnableWhen( ) can only refer to global variables. However, you can also use the special argument prob=, like this:

Duration = RiskNormal(10,2, EnableWhen(prob=0.5) )

The above function will only sample the defined RiskNormal( ) variable for 50% of the iterations during the simulation. In other words, the RiskNormal( ) has a 50% chance of occurring.

Whenever the EnableWhen( ) evaluates such that the distribution is not sampled, the schedule will be calculated based on the initial value that is in the corresponding field prior to starting the simulation.

Using the EnableWhen( ) function, Risk impact variables can be made dependent on probability-of-occurrence parameters. This is very useful for modeling partially-mitigated and event-based risks.

For example, it is sometimes the case that a mitigation strategy cannot alter the impact of a risk if it occurs. Rather, the mitigation strategy is designed to reduce the likelihood of the risk occurring, so that the probability of occurrence is reduced. The effect of reducing the probability of occurrence for a risk can easily be modeled with EnableWhen( ).
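
For instance, if mitigation cuts a risk's likelihood from 50% to 20% while leaving its impact unchanged, only the probability argument changes (a sketch using the prob= syntax shown above; the distribution parameters are illustrative):

Duration = RiskNormal(10,2, EnableWhen(prob=0.2) )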

See the attached file for an example using the "EnableWhen()" argument.

last edited: 2012-09-11

12.4. Parameter Entry Table for Fields Other Than Duration

Applies to:
@RISK for Project 4.x, Professional Edition only

Question:
I have columns in my project where I type in min, most likely, and max for the Duration field. I would like to do the same thing for another project field such as Work. Is there any way?

Response:
Yes. Click on the Create Parameter Entry Table icon, or in the menu select @RISK » Model » Create Parameter Entry Table. At the top of the dialog, in the "Assign Uncertainty to" section, click the drop-down arrow in the Field box and select your desired field.

If your desired field doesn't appear, click Cancel. Use Project's Insert Column command to add the desired column to the current view. Then reopen the Create Parameter Entry Table dialog and the desired field will now be in the list.

last edited: 2012-09-11

12.5. Ranking Sensitivities by Correlation instead of Regression Coefficients

Applies to:
@RISK for Project 4.x

Question:
The default sensitivity analysis is based on regression coefficients, but I would like the tasks ranked by correlation coefficients. How can I accomplish that?

Response:
Tornado graphs display a ranking of the tasks. The input distributions that have the largest impact on the output will have the longest bars in the graph. By default, the length of the bar shown for each input distribution is based on the regression coefficients.

If you just want the coefficients as numbers, you get them automatically in Quick Reports and also in the Sensitivity report. But if you want @RISK to arrange the inputs in order of impact as measured by correlation coefficient, you have two options:

  • Generate the Sensitivity Report.  (In the Results window, click Results » Report Settings.  Select Sensitivities, and click Generate Reports Now.)  This report shows two sets of tables. The left-hand set shows the ranking of the tasks using regression coefficients, and the right-hand set shows the ranking of the tasks using correlation coefficients.

  • Open the Sensitivity Analysis window. (In the Results window, click Insert » Sensitivities.) In this window, a drop-down list lets you select which calculation method to use to rank the tasks.

last edited: 2013-01-15

12.6. How to Open an .RPJ File

Applies To:
@RISK for Project 4.x

Problem:
I double-click on an @RISK for Project results file (.RPJ), but a message box appears saying that Windows cannot open the file. How can I open an .RPJ file?

Solution:
To open an .rpj file:

  1. Launch @RISK for Project.
  2. Click on the "Show @RISK – Results Window" button in the @RISK toolbar.
  3. Select File » Open from the "@RISK – Results Window" menu.
  4. In the "Look in" list, click the drive or folder that contains the file you want to open.
  5. In the folder list, locate and open the folder that contains the file.
  6. Click the file, and then click 'Open'.

last edited: 2013-03-13

12.7. @RISK 4.x with Non-English Microsoft Project

Applies to:
@RISK for Project 4.1

Question:
My copy of Microsoft Project is in a language other than English. Can I run @RISK for Project successfully?

Response:
@RISK for Project 4.1 is not fully internationalized. However, the English version may be run using the corresponding Microsoft Project in

  • Danish
  • French
  • German
  • Italian
  • Norwegian
  • Portuguese
  • Spanish
  • Swedish

(All @RISK for Project text will still be in English.)

If the language version of Microsoft Project is not in the list above, then @RISK for Project will not work. This is because MS Project localizes all column names (Task, Successor, etc.), and @RISK for Project must be modified to recognize and handle each language separately.

last edited: 2006-06-22

13. BigPicture

13.1. Saving a BigPicture Map as PDF

Applies to:
BigPicture, all releases

How can I create a PDF of my BigPicture map? Can I print just part of it, zoomed to take up a whole sheet?

Please see Can I save a map as a PDF file?

Later in that article, you'll also find instructions to Print a zoomed-in portion of your PDF.

Last edited: 2017-07-10

13.2. User Guide for BigPicture

Where are the help file and user manual for BigPicture?

The BigPicture documentation is not installed with the product, but is available online. The help icon in the BigPicture menu takes you to that page; the help icon in a BigPicture dialog box takes you to the specific help page for that dialog.

If you don't have Internet access, please contact Palisade Technical Support and we will send you a PDF of the user documentation. For fastest service, please include your software serial number in your email.

Can you send me the example spreadsheets also?

Those are installed as part of the product. In BigPicture, click Help » Example Spreadsheets.

Last edited: 2015-12-24

13.3. Using Auto Arrange in BigPicture

Do you have any tips for using Auto Arrange in BigPicture?

If you want tight spacing, turn Auto Arrange on, expand and collapse topics individually, and then turn Auto Arrange off. When Auto Arrange is turned on and then off in this way, it remembers the positions of expanded topics. Here's an example:

  1. In BigPicture, click Help » Example Spreadsheets, look in the Basic Maps group, and select TravelCosts.xlsx.
  2. Turn on Auto Arrange. Click Collapse or Expand to » No Topics Collapsed, then immediately click Collapse or Expand to » Root Topic. Turn off Auto Arrange.
  3. Now expand the root topic. You'll see that the spacing of the first-level topics is very large, just as it would be for a fully expanded map.
  4. To return to closer spacing, turn on Auto Arrange, then expand and arrange all topics while Auto Arrange is on. Finally, click Collapse or Expand to » Root Topic and turn off Auto Arrange.

Auto Arrange doesn't work well when there are groups of topics with different parents. Auto Arrange works on single contiguous groups of attached topics, but does not arrange all separate contiguous groups together.

Last edited: 2015-12-24

13.4. Undoing a Change in a BigPicture Map

Applies to: BigPicture, all releases

I made a change in my BigPicture map, but then I realized I had made a mistake. I clicked Excel's Undo icon in the ribbon, but the change seems to be permanent. Does BigPicture break Excel's undo?

BigPicture and Excel have separate Undo stacks. Excel's Undo is in the Quick Access Toolbar at the top left of the Excel ribbon. BigPicture's Undo is on the BigPicture tab of the ribbon, in the Edit section near the right-hand end of the ribbon.

Use Excel's Undo to undo Excel operations, and BigPicture's Undo to undo BigPicture operations.

Last edited: 2015-09-18

13.5. Updating a Data Map with New Data

If I create a Data Map, then make a change to the data, I don't see an Update button, as I do for Org Charts and Linked Maps. How can I update a Data Map?

Go to the Data Map dialog and immediately click the Create Map button. BigPicture will display this message:

A map created from this data is open.
Select Yes to overwrite this map, No to create a new map.

Click Yes, and BigPicture will re-create the map with the new data, using the same options.

Last edited: 2015-12-24

13.6. Number Formats in a Linked Map

Applies to:
BigPicture, all releases

How do I change number of decimal places and other number formatting for the calculated numbers in a linked map?

The calculated statistics follow the format of the source data. Change the format for a column, run or re-run your map, and the calculations based on that column will use the new format.

Note: You need to change the format of the whole column, not just the cells containing numbers but also the cells containing asterisks (*).

Last edited: 2018-01-05

14. Evolver and RISKOptimizer

14.1. For Faster Optimizations

Available in Spanish: Para optimizaciones más rápidas

Applies to:
@RISK 5.x–7.x, Industrial Edition
Evolver 4.x–7.x
RISKOptimizer 1.x
RISKOptimizer Developer's Kit (RODK) 4.1
Evolver Developer's Kit (EDK) 4.1

My optimization seems to take a long time to execute. Is there anything I can do to speed it up?

Here is our checklist. (The OptQuest engine mentioned in some of these hints is available in @RISK Industrial 6.0 and newer, and Evolver 6.0 and newer.)

  • If you have an older release of Evolver or RISKOptimizer, upgrade to the current release. The optimization engine in 6.x is significantly faster than earlier releases, even more so for linear problems, and 7.x is faster still.

  • Choose the most appropriate solving method, and limit the adjustable cells to as small a range as possible. This improves the proportion of valid (feasible) trials to invalid trials. For instance, if you have numbers 1 to 20 to assign in an optimal way to 20 cells, don't choose Recipe and try to set constraints that weed out duplicate assignments. Instead, choose Order and the duplicates will never be generated in the first place.

  • Set hard constraints where hard constraints are appropriate. Advice is sometimes given to users of evolutionary solvers to replace hard constraints with soft constraints and a penalty function, but Evolver and RISKOptimizer do just fine with hard constraints. Their OptQuest engine and Genetic Algorithm handle hard constraints intelligently, using methods that quickly find solutions that meet the hard constraints. (The Genetic Algorithm uses the method of "backtracking"; it is explained in the software manuals.)

  • Make constraints linear if you can. If all constraints are linear, the OptQuest engine (available beginning with release 6.0) can avoid generating solutions that violate constraints, so all trials will be valid trials. Eliminating these invalid trials can make some optimizations reach a solution much faster. Also, if you use an equality (=) constraint with the OptQuest engine, only a linear constraint will find valid trials within any reasonable time.

    Hint: MAX and MIN are not linear functions. Instead of constraining the maximum or minimum of a cell range to be less or greater than a certain amount, constrain the cell range directly. For example, instead of constraining =MAX(C1:C10) to be at most 120, enter the range constraint C1:C10 <= 120.

  • For adjustable cells, use discrete or integer rather than "any", if you can. When adjustable cells are discrete, the OptQuest engine may be able to enumerate them, thus generating only valid trials. (See Defining Decision Variables, accessed 2015-07-22.)

  • If you use the Genetic engine (optional in 6.x/7.x, standard in 1.x–5.x), start with a feasible solution, meaning a state in which all the constraints are met. If you start off with some constraints violated, the software's genetic algorithm must take time to find a feasible solution as a base for the optimization. If your model is complicated and you need help getting to an initial feasible solution, please see Debugging RISKOptimizer and Evolver Models.

  • Optimize on a continuous value that conveys meaningful information. The idea is that small changes in the adjustable cells should make small changes in the target value. Sometimes a customer model is essentially binary: the target cell is essentially a yes/no. It is always better to use a target cell that is a continuous number, so that the optimizer can tell when it is making progress. If your target cell is a 1/0, all infeasible solutions are equally bad and the optimizer has no way to choose one over another. Use constraints, not the target cell, to rule out unacceptable solutions.

  • Constrain on a continuous value when that is natural in the model. Suppose you need cell C5 to be no more than 120. Set your constraint as C5<=120. Sometimes people try to "help" an optimizer by putting the formula =IF(C5<=120,1,0) in a separate cell and constraining that cell to equal 1. But doing that deprives the algorithms of the information about how far or how close the constraint is to being met. When you use the real constraint, C5<=120, the algorithm can determine that a solution with C5=150 is better than one with C5=200.

  • If you have Excel 2007 or later, enable multi-threaded calculations. In Excel 2010–2016, File » Options » Advanced » Formulas » Enable multi-threaded calculations. In Excel 2007, click the round Office button and then Excel Options » Advanced » Formulas » Enable multi-threaded calculations.

  • Use the optimization stopping conditions on the RISKOptimizer or Evolver options screen. Sometimes the last little bit of convergence isn't needed or provides little improvement, but accounts for a large chunk of the optimization time (the 80-20 rule).

  • In RISKOptimizer, set the separate simulation stopping conditions in addition to the optimization stopping conditions. In RISKOptimizer 6.x/7.x, the simulation stopping conditions are on the Convergence tab of the @RISK Simulation Settings dialog; in RISKOptimizer 1.x and 5.x they are on the RISKOptimizer Options screen.

  • With RISKOptimizer, you can do some things to speed up the simulation portion of the optimization. Generally, good advice for @RISK is good advice for the simulation part of RISKOptimizer. Please see For Faster Simulations.

  • With RISKOptimizer, if you don't have any @RISK distribution functions in your model, set the number of iterations to 1, or use Evolver if you have it. For more information, please see Running RISKOptimizer Deterministically.

  • RISKOptimizer 7.5.0 and newer will split the optimization among multiple CPUs (cores). Look at the General tab of Simulation Settings to be sure that Multiple CPU is set to Automatic or Enabled. If your computer has only a few cores, try the optimization on a more powerful machine, with more cores and plenty of RAM.

Last edited: 2020-07-28

14.2. Multiple Goals for Optimization

Applies to:
@RISK Industrial Edition 6.x, 7.x
Evolver 4.x–7.x
RISKOptimizer 1.x, 5.x
Evolver Developer Kit (EDK) 4.1
RISKOptimizer Developer Kit (RODK) 4.1

Evolver and RISKOptimizer let me specify just one goal, but I need to maximize or minimize multiple cells. What can I do?

The Evolver and RISKOptimizer settings dialogs let you specify only one cell as a target. But you can still solve for multiple goals by creating a function that combines two or more goals into one goal.

For example, if you want to maximize (minimize) two cells then you would put their sum as a formula in a third cell and maximize (minimize) that cell as a target. To get two cells as close as possible to zero, put the sum of their absolute values as a formula in a third cell, and minimize that.

If the goals don't have equal importance, you can attach weights to them. For example, if getting K72 close to zero is ten times as important as K71, your goal would be to minimize abs(K72)*10+abs(K71).
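
Expressed as a formula in the goal cell, that weighted combination is the following, and you would minimize this cell as the single optimization target:

=ABS(K72)*10+ABS(K71)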

Sample workbook

The attached workbook shows a goal of getting two numbers (in purple) as close as possible to two target numbers (in green). That means the discrepancies, cells I17:I18, must be as close as possible to zero, so the single goal in I22, the sum of abs(I17) and abs(I18), must be minimized.

The example is set up to run as it is, but you can change things if you wish. The targets (green cells) and the starting values of the adjustable (red) cells are editable, and you can also change any of the Evolver or RISKOptimizer options.

The example is protected so that you don't accidentally overwrite any formulas. You can remove protection in Excel 2003 by clicking Tools » Protection » Unprotect Sheet, or in Excel 2007 and above by right-clicking the tab and selecting Unprotect Sheet.

(In case you're interested in the background for this example, a user had a target mean and standard deviation for a Beta distribution, with fixed min and max, and wanted to find the necessary alpha1 and alpha2 parameters for the distribution. The mean and standard deviation are easy to find from min, max, alpha1, and alpha2, so it was just a matter of working backward by adjusting alpha1 and alpha2 in the optimization.)

Efficient Frontier Analysis

Beginning with release 7.0, @RISK Industrial Edition's and Evolver's efficient frontier analysis can simplify optimization when you have two competing goals. You choose one of them as the target for an optimization and constrain the other to be no worse than a specified limit. Then the software performs a sequence of optimizations, each time changing the limit value for the constraint. There's a full description in the @RISK and Evolver help files, and examples are included when you install @RISK Industrial 7.0 or newer or Evolver 7.0 or newer.

Additional keywords: Multi-objective optimization, Multi-goal optimization, Multi-target optimization, Multiple objectives for optimization, Multiple targets for optimization

Last edited: 2017-04-06

14.3. Efficient Frontier Analysis

Applies to:
@RISK 7.x, Industrial Edition
Evolver 7.x

I need to run an optimization with two competing goals, for instance to maximize profit while minimizing risk. Can I do this?

Yes, in releases 7.0 and newer, Evolver and RISKOptimizer can do this. (RISKOptimizer is part of @RISK Industrial Edition and is not available in the Professional or Standard Edition.) Select Efficient Frontier Analysis on the Model Definition screen.

Examples are installed on your computer with either product:

  • @RISK (RISKOptimizer): Help » Example Spreadsheets » RISKOptimizer and scroll down to Efficient Frontier Examples.

  • Evolver: Help » Example Spreadsheets and scroll down to Efficient Frontier Examples.

Last edited: 2015-07-23

14.4. Efficient Surface

Applies to:
Evolver 7.x
@RISK 7.x, Industrial Edition (in RISKOptimizer)

Is Efficient Surface available in RISKOptimizer and Evolver?

The concept of "Efficient Surface" ("Efficient Frontier Surface", "Efficient Plane") generalizes the concept of Efficient Frontier to 3 dimensions.  It answers the question about the optimal value of one of the three quantities, given fixed bounds on the values of the two other quantities.

This functionality is not available out of the box, but it can be obtained using the programming interface of @RISK for Excel.  Say we start with a regular efficient frontier that analyzes the tradeoff between the standard deviation and the mean, except we add an additional constraint that skewness <= x1.  We get one efficient frontier curve.  Then we do the same with the constraint that skewness <= x2, getting another curve.  We repeat it and get a number of curves that define our efficient surface.  Using VBA, an Excel 3D graph can be generated to represent the surface.

Last edited: 2015-08-11

14.5. "Value" Choice for Optimization Target

Applies to: RISKOptimizer in @RISK 6.x/7.x Industrial Edition

In setting up my optimization goal in RISKOptimizer, I see that I can optimize a cell's simulation mean, standard deviation, percentile and so on, but what does Value mean? What does it mean to optimize the target cell's value?

Think of Value as "final value" or "value at end of simulation". Mostly Value duplicates functions that you can reach through the other statistics, but there's one case where only Value will do what you want.

Suppose rather than optimizing a particular simulation statistic of one cell, you want to optimize an expression involving simulation statistics. For example, maybe you have expressions in A1 and A2 and you want to optimize the difference of their means.  In this case, put =RiskMean(A1)-RiskMean(A2) — or maybe =ABS(RiskMean(A1)-RiskMean(A2)) — in another cell, such as A3. Your optimization target is cell A3, and the statistic is Value.

Last edited: 2015-07-23

14.6. Limited Number of Adjustable Cells?

Applies to:
Evolver 5.x–7.x
RISKOptimizer in @RISK 5.x–7.x Industrial Edition

How many adjustable cells do Evolver and RISKOptimizer allow? How many groups?

The following applies to commercial, academic, and student versions sold by Palisade. Textbook versions may have lower limits.

Evolver Professional Edition allows up to 250 adjustable cells per model; Evolver Industrial Edition is unlimited.  Evolver doesn't impose any limit on the number of adjustable cell groups.

RISKOptimizer, which is included with @RISK Industrial exclusively, does not limit the number of adjustable cells or groups.

Although Evolver and RISKOptimizer don't impose fixed limits, your own system RAM and other resources may create performance issues, practical limits, or both. The optimizer needs to read and write to the adjustable cells many, many times; if there are a lot of adjustable cells, it may look like it is stuck.

See also: Some authors use the term decision variables for what Evolver and RISKOptimizer call adjustable cells. If you searched for "decision variables" and landed on this article, please retry your search using "adjustable cells".

Last edited: 2017-09-07

14.7. Running RISKOptimizer Deterministically

I would like to use RISKOptimizer's optimization, but without running Monte Carlo simulations. How can I run RISKOptimizer deterministically rather than stochastically?

We recommend using Evolver rather than RISKOptimizer for deterministic optimization, for these reasons:

  • Linear Programming is not available in RISKOptimizer.
  • Evolver will solve deterministic problems much faster than RISKOptimizer.
  • Evolver settings are presented in a manner appropriate for deterministic optimization, and RISKOptimizer's settings are integrated with @RISK settings, as appropriate for simulation optimization.

Evolver is part of the DecisionTools Suite and is also available as a separate product.  If you don't have access to Evolver, you can still run RISKOptimizer deterministically, as described below.

While RISKOptimizer usually runs a simulation optimization, you can also run it deterministically. For an overview, please refer to the diagram in the user manual, in the section "Traditional Optimization vs. Simulation Optimization".

In RISKOptimizer, there are actually two types of variables:

  • Probability distribution functions (PDFs or inputs), which change values with every iteration within every simulation.
  • Adjustable cells, which change once in each simulation within your constraints. (These would be somewhat like the generation changes in Evolver.)

If you don't have any PDFs, you have a deterministic model. You still have the adjustable cells, and RISKOptimizer will try different values of them in each simulation. Since there are no PDFs, every iteration within a given simulation would produce the same result. Therefore, you want to set one iteration per simulation, and optimize for value.

To run RISKOptimizer deterministically:

  • In 6.x/7.x, on the toolbar or ribbon, set Iterations to 1. Also, in the Model Definition, set Optimize to Value.
    Note: The linear programming features included in Evolver 6.x/7.x are not available in RISKOptimizer. If you have a deterministic LP problem, we recommend using Evolver to solve it.

  • In 5.x, open Optimization Settings and go to the Runtime tab. Near the bottom, under "Simulation Runtime", click the radio button next to Iterations and set the iteration count to 1.

  • In 1.x, open RISKOptimizer Settings and click Options. Near the bottom, under Simulation Stopping Conditions, select Run and 1 iteration.

If your RISKOptimizer model contains @RISK probability distribution functions, you can lock them to their static values during the optimization, if you wish. See Turning Inputs On and Off.

Last edited: 2018-04-02

14.8. Choosing Evolver versus RISKOptimizer

Applies to:
@RISK Industrial Edition 6.x/7.x
Evolver 6.x/7.x

How should I decide whether to optimize with RISKOptimizer or Evolver? Are there any differences between the two?

Optimize with Evolver when you need to choose the best alternatives, and the effects of your choices are predictable. It's appropriate for models where there is no element of chance. It's also appropriate where chance is involved but doesn't have a major effect on the outcome, so that you just build the likely effects of chance into your model as fixed numbers. The traveling salesman problem is a classic example: you know where all the customers are, and you want to plan a route that minimizes travel time. Evolver tells you the one best route.

RISKOptimizer, part of @RISK Industrial Edition, does optimization with simulation. You use it when your model involves both choices you must make and chance events that you can't control. In the traveling salesman example, RISKOptimizer lets you account for not knowing which customers will actually be getting deliveries on a given day. RISKOptimizer tells you the route that performs best across the range of possible scenarios.

If you want to use ten-dollar words, you can say that Evolver does deterministic optimization and RISKOptimizer does stochastic or probabilistic optimization.

See also: Running RISKOptimizer Deterministically

Last edited: 2018-08-10

14.9. Random Number Seed in RISKOptimizer

Applies to: @RISK 6.x/7.x Industrial Edition

Can I use a fixed initial seed for RISKOptimizer, to make an optimization repeatable?

Yes, you can set this in Simulation Settings, on the Sampling tab.

Is there any guidance on using a fixed seed across multiple simulations with RISKOptimizer?

You're asking about this setting in @RISK: Simulation Settings » Sampling » Multiple Simulations. By default it's set to "All Use Same Seed". We recommend that setting, because it makes it easier to interpret the optimization log. For example, let's say B4 and B5 are your adjustable cells. You see in the log that the simulation results are the same for (B4=1, B5=1), (B4=1, B5=2), (B4=1, B5=3), but are different as soon as the value of B4 changes. You can interpret this as a hint that the value of B5 doesn't matter for simulation results. On the other hand, with Multiple Simulations set to "Use Different Seeds", you will probably never see identical simulation results.

Depending on details of the model, the optimization algorithm may make a similar observation that B5 doesn't seem to make a difference to the results, and allocate less time to attempts to improve the results by changing B5.

When the Genetic Algorithm is used — either because you selected it, or because you selected Automatic and RISKOptimizer selected the GA — there's a second reason to select "All Use Same Seed" in Simulation Settings. The GA can sometimes backtrack and reuse the same set of adjustable cells. If this happens, but the seed is different, the simulation results will be different also, and this may make the GA converge more slowly than with "All Use Same Seed".

Last edited: 2015-07-23

14.10. Percentile Constraints and Target Constraints

Applies to: RISKOptimizer in @RISK 5.5–7.x Industrial Edition

While editing constraints in RISKOptimizer, I see two new types of Statistic to Constrain: "Percentile (X for given P)" and "Target (P for given X)". How do these work?

When you choose the "Percentile (X for given P)" statistic, RISKOptimizer constrains the value of the variable (X value) corresponding to a specified cumulative probability value (P value). For example, if P value is set as 0.1 (10%), the 10th percentile is constrained to be within the specified range.

With the "Target (P for given X)" statistic, RISKOptimizer constrains the cumulative probability value (P value) corresponding to a specified value of the variable (X value). For example, if X value is set as 5, and the range between 0.2 and 0.3 is specified, RISKOptimizer requires that the value 5 be between the 20th and 30th percentiles.

Last edited: 2015-07-23

14.11. Preventing Duplicates in Adjustable Cells

Applies to:
Evolver 4.x-7.x
RISKOptimizer in @RISK 5.x–7.x Industrial Edition
Evolver Developer Kit 4.1
RISKOptimizer Developer Kit 4.1

I have a set of adjustable cells that must vary as integers, but I need the values to be unique in each trial—there must be no duplicate values. How do I set up the constraint?

TIP: If the number of adjustable cells equals the number of possible values, so that each trial uses all possible values, all you have to do is choose the Order method in your model definition.

Set up a range of "helper" cells to count the duplicates. Then sum the helper cells and constrain the sum to equal zero.

Please download the attached example and use it to follow the technique below. The model is set up for Evolver or RISKOptimizer.

Details:
Let's suppose that the function to be maximized is in D14, and the adjustable cells are A14:A28. Try setting one of those values to match another one, and notice how the count of duplicates is updated automatically in the "helper" cells B14:B28. How is this accomplished?

In cell B14, type the formula

=COUNTIF($A$14:$A$28,A14)-1

The first argument is the range of adjustable cells, as an absolute reference with dollar signs. The second argument is the first adjustable cell, as a relative reference without dollar signs. Why subtract 1? The COUNTIF function counts all occurrences of the value in A14, including A14 itself. But we want the number of duplicates, which is one less than the number of occurrences.

Grab the fill handle at the lower right corner of B14, extend it through cell B28, and release the mouse button. Then click in one of the other cells, such as B21, and look at the formula. Notice that the first argument is the same because of the absolute reference, but the second argument has changed because of the relative reference.

You have now created the "helper" cells in column B. Each helper cell counts how many times the value to the left of it is duplicated. Sum them in B31 with an =SUM formula. (Try creating some duplicates in column A, to see how they are counted in column B. After experimenting, change the values so that there are no duplicates. It's good practice to start off Evolver or RISKOptimizer with a feasible solution, one where all constraints are met.)
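
For example, the formula in B31 would be:

=SUM(B14:B28)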

Now look at the Evolver or RISKOptimizer settings. You see that A14:A28 are the adjustable cells, constrained to be integers 1 to 800. The total number of duplicates in B31 is set to equal 0 as a hard constraint. This makes it unnecessary to place constraints on cells B14:B28.

Close the Settings window and click the Start Optimization icon. You'll see the worksheet update quite rapidly as Evolver or RISKOptimizer finds the optimal solution while rejecting as invalid any solution with duplicates. When the optimization has converged, it will stop automatically and report the results.

(NOTE: For this small example, "Update Display" is checked in the Evolver and RISKOptimizer options. For greatest speed on larger models, you would not check this setting.)

Last edited: 2018-09-18

14.12. Adjustable Cell Matrix of Ones and Zeroes

Applies to:
Evolver 5.x–7.x
RISKOptimizer in @RISK 5.x–7.x Industrial Edition

My adjustable cells are a square matrix of 200 rows and 200 columns. Each row must have one 1 and the rest 0, and the same for each column. (Perhaps each row represents an object and each column represents a recipient.) I could make each row a Budget group, and constrain the column totals to equal 1, but that still leaves me with 40,000 adjustable cells. Is there a better way?

Yes, you can reformulate this as a permutation problem. You have exactly one 1 in each row, so it's just a question of which column the 1 for that row gets placed in. Instead of having each row contain 200 adjustable cells with 1s and 0s, have each row contain one adjustable cell with a value 1–200. Then the 200 rows need only 200 adjustable cells, one per row of your existing matrix model. Each permutation of these values represents one solution that meets the constraints, and all possible permutations give all solutions that meet the constraints. Use the Order method for that group of 200 adjustable cells.

What's nice about this is that you don't have to specify any constraints. Just set the initial values of those 200 adjustable cells to 1, 2, 3, 4, ... 200. The Order method takes the starting values and rearranges them for every trial, so there will never be an invalid trial. Every trial will have one 1 per row and one 1 per column. It's much faster to generate only valid trials than to generate a lot of trials, test each one against constraints, and throw out the invalid trials. With this structure, both the OptQuest engine and the Genetic Algorithm will generate only valid trials. (OptQuest won't generate a given order more than once; the Genetic Algorithm may, because it uses backtracking.)

How do you interpret the optimum set of adjustable cells at the end of optimization? Suppose cell 55 contains the value 187. That means that object 55 is assigned to recipient 187, or in other words row 55 contains a 1 in column 187, and all the other cells in row 55 and column 187 are 0.
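
If you need the explicit matrix of 1s and 0s back at the end, for reporting or for another part of the model, a short macro can expand the permutation. This is a sketch under assumed ranges: the 200 adjustable cells are in A1:A200 of the active sheet, and the matrix is written to the 200 columns starting at column C.

Sub ExpandAssignments()
    ' Rebuild the 200x200 matrix of 0s and 1s from the permutation in A1:A200.
    ' Ranges are illustrative; adjust them to your own layout.
    Dim r As Long
    Range("C1:GT200").Value = 0          ' clear: 200 columns, C through GT
    For r = 1 To 200
        ' Row r gets its single 1 in the column named by the value in column A.
        Cells(r, 2 + Cells(r, 1).Value).Value = 1
    Next r
End Sub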

My problem is similar, but I have a rectangular matrix, not a square one. Can I use the Order method if I have 200 rows and 60 columns? Each row must still have only one 1, but each column can have multiple 1's.

You still need just one adjustable cell per row, and each adjustable cell still contains the column number where that row's only 1 appears. But you can't use the Order method because you don't know in advance which columns will have multiple 1's in them.

Instead, use the Grouping method. Start with each of the 200 adjustable cells (one per row) containing a number from 1 to 60 (the number of columns). It doesn't matter how many of each number you have in the adjustable cells, but each number from 1 to 60 must occur at least once, and each adjustable cell must contain a number: no blanks or zeroes. During optimization, each trial will use only numbers 1 to 60, but both the order and the frequencies will vary. You'll interpret the final result the same way as above.

You don't need any explicit constraints with the Grouping method, because they're implicit in the method itself. But one option is important. If you check (tick) "All Groups Must Be Used", on the same screen where you select the Grouping method, then each number from 1 to 60 will occur at least once — every trial will have one or more 1's in each column.  If you don't select "All Groups Must Be Used", then each number 1 to 60 can occur any number of times, including none. If you have more complicated constraints — for example, rows 8 and 127 must be assigned to the same column — you may be able to use the Schedule method.

Last edited: 2015-07-23

14.13. Logging Non-Variable Cells

Applies to:
Evolver 6.x/7.x
@RISK Industrial 6.x/7.x (RISKOptimizer)

I have some calculated cells that depend on the adjustable cells, and I would like to see their values in each trial.  Is there any way to do this?

Yes, you can do this by creating a dummy constraint.  For example, suppose you want to log the values of cell X5.  You can add a constraint saying "X5 >= -1000000000", if you're sure the values of X5 will always be greater than this number.  The log generated after optimization will show the values of X5 during each trial.

Last edited: 2015-07-23

14.14. Interrupting and Resuming an Optimization

Applies to:
Evolver 5.x–7.x
RISKOptimizer 5.x
RISKOptimizer in @RISK 6.x/7.x Industrial Edition

My optimization takes a long time, and I need to take my laptop home in the middle of the optimization. How can I do this without losing several hours' work?

There are two methods.

Option A: Pause the optimization with the yellow bars. Don't close Excel, don't log off, just hibernate the laptop. (Default power settings probably hibernate the laptop when you close the lid.) When you get to your new location, enter your password to unlock the session, and click the yellow bars to resume from the same point.

Option B: Stop the optimization with the red square. When the dialog pops up, ensure that "Best Values" is checked (ticked). Save the workbook, possibly under a new name. Close Excel. When you get to your new location, open Evolver, open the workbook, and start the optimization. It will resume from that point, but the history before that point will not be available.

Last edited: 2015-07-23

14.15. Iteration Data in RISKOptimizer

Applies to:
RISKOptimizer, all releases

I know that I can open the optimization log to get summaries of the results of the simulations, but is there any way to get the individual iteration data?

Yes, you can use the RiskData( ) function in your spreadsheet to get iteration data. Please see the help file or the online manual for details of using the RiskData( ) function. There's also a worked-out example in Placing Iteration Data in Worksheet with RiskData( ).

You can also export iteration data using a macro. Please see Exporting Information During Simulation.

Last edited: 2015-07-23

14.16. Branch Bounding and Cut

Applies to: Evolver 6.x/7.x

Does your software support branch and bound with cutting planes, also known as branch-and-cut, for optimization?

Yes, beginning with our release 6.0, it is supported in the OptQuest engine.

The linear programming algorithms include several versions of the simplex method, which are variations on three main approaches: a primal method, a dual method, and a linked primal-dual method. Problems with integer variables are addressed using a branch-and-bound algorithm with penalties based on an advanced form of probing (look-ahead analysis). Cutting plane analysis is incorporated within the probing, and so in this sense the method could also be classified as a branch-and-cut procedure.

(RISKOptimizer doesn't include linear programming functionality. RISKOptimizer is designed for optimization involving Monte Carlo simulations, and those optimization problems are never linear. If you have RISKOptimizer as part of @RISK Industrial Edition, and you want to solve linear optimizations with no stochastic element, please contact your Palisade sales manager to add Evolver on its own or as part of an upgrade to The DecisionTools Suite.)

See also: Linear Programming in Evolver.

Last edited: 2015-07-24

14.17. Starting Values of Adjustable Cells

Applies to:
Evolver 1.x–7.x
RISKOptimizer 1.x
@RISK Industrial 5.x–7.x (RISKOptimizer)

Does it matter what values are in the adjustable cells when I click the start button for an optimization?

With the genetic algorithm, it matters very much. With the OptQuest engine, it matters very little.

The OptQuest engine was added in Evolver 6.0 and in RISKOptimizer in @RISK Industrial 6.0. With those and later versions, you can select the engine on the Engine screen of the Settings dialog.  If you leave the default setting of Automatic, the software will select the engine that seems more appropriate.

With OptQuest, the starting solution is really just a suggested solution, and the algorithm includes a method for generating feasible solutions (all constraints met) even if the initial values don't meet constraints.

The genetic algorithm, which was the only algorithm in earlier releases and is still an option in 6.x/7.x, includes backtracking as an important strategy. When Evolver or RISKOptimizer strikes out in some direction and that doesn't bring the target cell closer to the goal, the optimizer backtracks to its previous position and strikes out in a new direction. This has two consequences for the genetic algorithm:

  • It's extremely important to have an initial feasible solution (all constraints met) before you click the start button. If the initial values of the adjustable cells don't represent a feasible solution, the optimizer may have to strike out in many directions to find a feasible solution before it can then really begin the main optimization. For more on this, and help in finding an initial feasible solution, please see Debugging RISKOptimizer and Evolver Models.

  • Perhaps your problem has one or more local optimum points as well as a global optimum. If the initial values in the adjustable cells are close to a local optimum, Evolver or RISKOptimizer may home in on the local optimum instead of the global optimum. If this happens, starting from a very different set of values (though still meeting all constraints) may let the optimizer find a better solution.

Last edited: 2015-07-23

14.18. Large Models in Evolver and RISKOptimizer

Applies to:
@RISK Industrial 5.x–7.x, Evolver 5.x–7.x

Can the OptQuest engine in Evolver and RISKOptimizer handle large problems, involving thousands of variables and thousands of constraints?

A key algorithm in OptQuest is tabu search, invented by Fred W. Glover, an accomplished and recognized researcher in the field of optimization. Tabu search is regarded as a method well suited to large problems. Here is a paper in which it was used to solve very large traveling salesman problems, a classic optimization problem: A Parallel Tabu Search Algorithm for Large Traveling Salesman Problems (accessed 2015-07-23). Another paper is Tabu Search-Based Algorithm for Large Scale Crew Scheduling Problems (PDF, accessed 2015-07-23).


Last edited: 2015-07-23

14.19. Linear Programming in Evolver

Applies to: Evolver 6.x/7.x

In linear programming (LP) or linear optimization problems, all the constrained cells and the target cell or "objective function" are linear functions of the adjustable cells. 

Evolver's LP functionality corresponds to the "Simplex LP" algorithm in Excel 2010 Solver, and to the "Assume Linear Model" option in Solver in earlier versions of Excel.  Here are some advantages of the LP functionality in Evolver, as compared to Solver:

  • With Solver the user needs to select "Simplex LP" or "Assume Linear Model"; Evolver auto-detects when to use linear programming.

  • Solver is limited to 200 variables and 100 constraints.  In the Industrial Edition, Evolver's LP has no limit on the number of variables ("adjustable cells") or constraints. (In Evolver Help » Example Spreadsheets, the "Transportation - Large" and "Product Mix - Large" Evolver examples have 250 variables each.)

  • Evolver offers LP with discrete variables in addition to continuous and integer variables.  For example, you might want to say that the permissible values of a variable are 10, 20, 30, 40, ... as in the "Product Mix - Large" example.  Discrete variables are not supported in Solver.

Linear Programming is not listed as a separate option in the Engine tab of the Optimization Settings dialog, where available optimization methods are listed.  If the engine settings are left as Automatic, Evolver will apply Linear Programming, as long as the optimization problem is linear.
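
For reference, a linear model takes a shape something like this (a hypothetical product-mix sketch; the cell addresses are illustrative): unit profits in B2:B4, resource usage per unit in D2:D4, and quantities to produce in C2:C4 as the adjustable cells.

    B7: =SUMPRODUCT(B2:B4,C2:C4)    target cell: total profit, to be maximized
    B8: =SUMPRODUCT(D2:D4,C2:C4)    constraint cell: total resource usage, <= 500

Because the target and the constraint are both linear functions of the adjustable cells (SUMPRODUCT with constant coefficients), Evolver can detect the problem as linear and apply Linear Programming when the engine is set to Automatic.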

(RISKOptimizer doesn't include linear programming functionality. RISKOptimizer is designed for optimization involving Monte Carlo simulations, and those optimization problems are never linear. If you have RISKOptimizer as part of @RISK Industrial Edition, and you want to solve linear optimizations with no stochastic element, please contact your Palisade sales manager to add Evolver on its own or as part of an upgrade to The DecisionTools Suite.)

Last edited: 2015-07-24

14.20. Did My Optimization Use OptQuest or the Genetic Algorithm?

Applies to:
Evolver 6.x/7.x
RISKOptimizer in @RISK 6.x/7.x Industrial Edition

I have my optimization engine set to Automatic.  How can I tell whether a given optimization used the Genetic Algorithm or OptQuest?

Look in the Optimization Summary report, near the end, to find the "Optimization Engine" entry. This will tell you whether OptQuest or the Genetic Algorithm was chosen.  If OptQuest was chosen, there's no way within Evolver or RISKOptimizer (@RISK) to determine which sub-algorithm was used.

"The OptQuest Engine combines Tabu search, scatter search, integer programming, and neural networks into a single, composite search algorithm." (source, accessed 2013-11-04)  This standard algorithm does not include the OptTek genetic algorithm, because Evolver and RISKOptimizer (@RISK Industrial) have Palisade's genetic algorithm built in.

Last edited: 2015-07-23

14.21. Technical Details of Genetic Algorithm

Applies to:
Evolver, all releases
@RISK 5.x–7.x, Industrial Edition
RISKOptimizer, releases 1.x

How does the Genetic Algorithm work, internally?

  • In Evolver 7.x, see "How Evolver's Genetic Algorithm Optimization is Implemented" in the help file. The same topic is part of Appendix A of the Evolver user manual.

  • In @RISK 7.x Industrial Edition, see "How RISKOptimizer's Genetic Algorithm Optimization is Implemented" in the help file. The same topic is part of Appendix B of the @RISK user manual.

  • Earlier software versions have similar topics in the help file and user manual.

Additional details are proprietary to Palisade.

Last edited: 2017-08-25

14.22. Technical Details of OptQuest

Applies to: @RISK Industrial 5.x–7.x (RISKOptimizer)
Evolver 5.x–7.x

How does the OptQuest engine work, internally?

Please see Optimization of Complex Systems (PDF), by OptTek Systems (accessed 2015-07-23).

Last edited: 2015-07-23

14.23. Evolver and Solver Compared

Applies to: Evolver 6.x/7.x

How does Evolver compare to Excel's Solver?  Can Evolver do anything that Excel Solver cannot?

Evolver offers functionality similar to that of the Solver add-in included with Excel 2010 and higher, but Evolver has a few important advantages.  Both products offer local optimization (single "hill" in the function graph) and global optimization (multiple "hills"), with smooth and non-smooth functions.  For the special case of linear optimization problems (constraints and target/objective function linear), they both offer Linear Programming methods.  (Excel 2007 and lower also include the Solver add-in, but the Solver that comes with those versions does not handle global optimization or non-smooth functions.)

Even though Solver is available to every Excel user, Evolver is more convenient to use and will find solutions to many problems that Solver will not be able to solve:

  • With Solver, you need to select the right optimization method or algorithm for the type of optimization problem. This can be "Simplex LP", "GRG Nonlinear" or "Evolutionary", depending on whether the problem is linear or non-linear, smooth or non-smooth, and local or global.  Evolver will automatically select the algorithm that matches the type of problem, as long as the "Engine" selection in the Optimization Settings dialog is left as the default "Automatic".

  • Solver is limited to 200 variables and 100 constraints.  Evolver Industrial has no limit on the number of variables ("adjustable cells") or constraints. (The example spreadsheets "Transportation - Large" and "Product Mix - Large" have 250 variables each.  You can access them through Evolver's Help » Example Spreadsheets menu.)

  • Evolver supports discrete variables along with integer and continuous.  For example, you may want to say that the permissible values of a variable are 10, 20, 30, 40, ..., as in the "Product Mix - Large" example.  Discrete variables are not supported in Solver.

  • Even when Solver can find a solution, Evolver may be able to find a better one.  One such example is shown in the attached workbook: Evolver vs. Solver.xlsx.  The example is intended to be used with Excel 2010 or higher and Evolver 6.0 or higher.

Last edited: 2015-07-23

14.24. Clearing RISKOptimizer Settings without Clearing @RISK Settings

Applies to: RISKOptimizer in @RISK 6.x/7.x Industrial Edition

How can I clear the model definition and other RISKOptimizer settings?

The easy way is Utilities » Clear @RISK Data.  The Settings check box will remove the RISKOptimizer information. 

However, it will also remove all the @RISK simulation settings. If you don't want to do this, you can create and run a Visual Basic macro to delete just the RISKOptimizer model, without affecting @RISK's simulation settings.

  1. Press Alt-F11 to open the Visual Basic Editor, then F7 to open the code window.

  2. Paste this code into the window:

    Sub ClearOptimizerModel()
        ' Remove the adjustable cell groups and constraints that define the model
        RISKOptimizer.ModelWorkbook.AdjustableCellGroups.RemoveAll
        RISKOptimizer.ModelWorkbook.Constraints.RemoveAll
        ' Reset the engine selection to Automatic
        RISKOptimizer.ModelWorkbook.OptimizationSettings.Engine.OptimizationEngine = OptEngineAutomatic
        ' Turn off all runtime stopping conditions
        RISKOptimizer.ModelWorkbook.OptimizationSettings.Runtime.TrialCountStoppingCondition = False
        RISKOptimizer.ModelWorkbook.OptimizationSettings.Runtime.ProgressStoppingCondition = False
        RISKOptimizer.ModelWorkbook.OptimizationSettings.Runtime.TimeSpanStoppingCondition = False
        RISKOptimizer.ModelWorkbook.OptimizationSettings.Runtime.StopOnErrors = False
        RISKOptimizer.ModelWorkbook.OptimizationSettings.Runtime.FormulaStoppingCondition = False
    End Sub
  3. Please see Setting References in Visual Basic for the appropriate references and how to set them.

  4. Click somewhere in the middle of the ClearOptimizerModel routine, and press F5 to run the code.  This will remove the RISKOptimizer model without affecting the @RISK simulation settings.

  5. If you leave the code in place, Excel 2007 or above will no longer store the workbook as an .XLSX but instead will use the .XLSM format.  This may present you, or anyone who opens your workbook, with a macro security prompt.  To prevent that, you can delete the pasted code before you save the workbook.  Deleting the code does not bring the RISKOptimizer model back.

Last edited: 2015-07-23

14.25. Debugging RISKOptimizer and Evolver Models

Applies to:
RISKOptimizer in @RISK 5.x–7.x Industrial Edition
Evolver 4.x–7.x
Evolver Developer Kit (EDK)
RISKOptimizer Developer Kit (RODK)

What are some things I can do to debug my RISKOptimizer or Evolver model?

Here are some suggestions:

  • Look for redundant constraints, or constraints that further constrain the Adjustable Cell Range. A good rule is that an adjustable cell range should be limited only by its own constraint. When there are constraints layered on a constraint, RISKOptimizer wastes a lot of time finding solutions that abide by the main constraint, only to find that they violate a "subconstraint".

  • Consider the magnitude of the deviation when assigning a penalty function. For example, if the numbers in your model deviate by thousands or millions, then instead of the default penalty formula, 100*(EXP(deviation/100)-1), which exponentiates the deviation, you can make the formula simply "deviation".

  • This bullet applies only to the Genetic Algorithm.  (All optimizations in 5.7 and earlier used the Genetic Algorithm, but beginning in 6.0 you can choose between the Genetic Algorithm and the OptQuest engine.)

    For the Genetic Algorithm, it's crucial that all constraints are satisfied in the initial state of the model before you start the optimization. This gives the software a good "launch pad" from which to work so that it does not waste a lot of time trying solutions that are totally in left field.  Please see Starting Values of Adjustable Cells.

    Evolver and RISKOptimizer 5.5 and above have Constraint Solver in the Utilities menu. This tool will find an initial feasible solution for you.  In 5.0 and earlier versions, you can get the model started with an initial feasible solution by running a preliminary optimization whose only goal is to reach a state in which no constraints are violated. Express each constraint individually in Excel as a statement that evaluates to either TRUE or FALSE. Next to each, place an IF formula that returns 0 if the constraint cell is TRUE (satisfied), and some arbitrary non-zero number, such as 1, if it is FALSE (violated). Then have a cell at the bottom of the column of IF formulas that sums the entire column. Make that sum the target cell of the preliminary optimization (leaving the adjustable cells as they are), with the objective of getting as close as possible to 0 (see the sketch just below).
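
Here is a minimal sketch of that preliminary-optimization setup, with hypothetical cell addresses:

    B2: =(C10<=100)        each constraint expressed as TRUE/FALSE
    B3: =(SUM(D2:D8)>=50)
    E2: =IF(B2,0,1)        0 if the constraint is satisfied, 1 if violated
    E3: =IF(B3,0,1)
    E4: =SUM(E2:E3)        target cell of the preliminary optimization

When the preliminary optimization drives E4 to 0, every constraint is satisfied, and the adjustable cells then hold an initial feasible solution for the real optimization.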

See also: Evolver or RISKOptimizer Doesn't Find Best Solution

Last edited: 2015-07-23

15. NeuralTools

15.1. Recommended Amount of Data

Applies to: NeuralTools 5.x–7.x

I have about 700 rows of 100 columns to train my network. Is this sufficient? How can I determine in advance the number of rows necessary for a given number of columns?

There's no minimum number of columns (variables). As for the number of rows (cases), this can be determined only by experimentation: training and testing neural nets. Some authors in the literature have suggested 300 records, but that guideline probably assumes something like 10 variables; with 100 variables, 300 rows would be far too few. Again, experimentation with your particular data set is the only way to determine whether you have enough cases.

Additional keywords: How many cases

Last edited: 2015-09-03

15.2. Preparing Data for Neural Tools

Applies to: NeuralTools, all releases

I've worked through the Quick Start and Guided Tour videos, as well as the manual. Can you give me some more specific advice about preparing my data set? How do I select variables for training? How do I decide whether to transform them? What are best practices for data preparation?

Here are recommendations from our lead developer of NeuralTools:

  • Timothy Masters, Practical Neural Network Recipes in C++ (1993). Chapter 16, "Preparing Input Data".
  • Timothy Masters, Signal and Image Processing with Neural Networks (1994). Chapter 3, "Data Preparation for Neural Networks".
  • Jeannette Lawrence, Data Preparation for a Neural Network (undated PDF, via www.archive.org).
  • Lean Yu, Shouyang Wang, and K.K. Lai, "An Integrated Data Preparation Scheme for Neural Network Data Analysis", in IEEE Transactions on Knowledge and Data Engineering 18:2 (February 2006) is available in PDF. This in-depth article also contains a long list of relevant references.

See also: Data Transformation before Training? gives some brief guidelines.

Last edited: 2017-09-14

15.3. Order of Variables, Order of Cases

Does the order of my variables (columns) or cases (rows) matter, in the NeuralTools training process? It seems like I get different results if I move them around.

There's no reason to expect better results with one ordering than another. But it is possible for the algorithm to behave differently depending on the order, yielding a somewhat different neural net depending on the ordering. With some types of nets, results will differ between two training sessions with variables in the same order; see Technical Questions about the Training Process.

The question of time series sometimes comes up. NeuralTools doesn't "understand" time series, or the idea of one case being related to a case in any particular other row. If your data form a time series, you represent that to NeuralTools by making the dates one of your independent variables.

Last edited: 2017-05-11

15.4. Dependent Variable Is a Date

Applies to: NeuralTools, all releases

Does NeuralTools treat dates in the dependent column as Category or as Numeric? Also, are there any restrictions to consider when using dates in the dependent column?

Excel treats dates as numbers, just applying special formats to them. The part before the decimal point counts days, with 1900-01-01 as day 1, and the part after the decimal point is the time of day. If you change the cell formatting to a regular numeric format, you can see the underlying representation of the date as a number.

NeuralTools treats dates as numbers, because Excel does. NeuralTools doesn't know anything about these numeric values being displayed as dates. But NeuralTools does standardize numeric variables, so there's no need to worry about those date values being incommensurate with the independent variables.

There are no restrictions associated with using dates in NeuralTools cases. Of course, it might make sense in a particular data set to transform the date variable, just like any other numeric variable, for example by applying differencing.
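
For example, with hypothetical cell addresses, you can see the serial numbers and compute a simple difference with ordinary formulas:

    A2: =DATE(2020,1,1)     displays 43831 when formatted as General
    A3: =DATE(2020,1,15)    displays 43845
    B3: =A3-A2              14, the number of days between the two dates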

Last edited: 2016-06-13

15.5. Dependent Variable Is a Category

The dependent variable in my NeuralTools data set is a category. Does that mean that the independent variables need to be categories also?

With either type of dependent variable, numeric or category, you can have numeric independent variables, category independent variables, or a mix.

Last edited: 2017-05-11

15.6. Excluding Variables from Training

I want to train the net without using all of the variables in my NeuralTools data set. Do I need to delete the unwanted columns?

That's not necessary. Instead, just go into Data Set Manager and change the Variable Type of the unwanted variables to Unused. NeuralTools will ignore them, just as if they were not there at all.

Last edited: 2017-05-11

15.7. Working in Excel during Training and Testing

Applies to:
NeuralTools 5.x–7.x

Training and testing my network takes a long time, and it takes even longer to run the Testing Sensitivity command.  During that time, I would like to work on another workbook.  Is there any way I can use Excel for something else while NeuralTools is running an analysis?

Yes, you can open a second instance of Excel and do anything in that instance, with one exception: Don't run any Palisade product in that second instance of Excel.

To open a second instance of your version of Excel, please see Opening a Second Instance of Excel.

Last edited: 2017-11-29

15.8. Multiple CPUs in NeuralTools?

Applies to:
NeuralTools 5.x–7.x

Training a network, or testing sensitivity, can take a long time. I've got 8 cores, but Task Manager shows NeuralTools using only one of them (12% of the total). Is there a way to have NeuralTools use multiple cores, as @RISK does?

We're sorry, but NeuralTools uses only one core (one CPU) at present. We have noted this as a suggestion for a future release of NeuralTools.

You can still work on other workbooks in Excel while NeuralTools runs. Please see Opening a Second Instance of Excel.

Last edited: 2017-11-29

15.9. More Than One Dependent Variable in NeuralTools?

Applies to: NeuralTools 5.x–7.x

I want to use a set of data to predict two dependent variables.  Is there a way to do it, or am I limited to one dependent variable?

You can have more than one dependent variable in your data set, but not during any one training session.

The solution is to set up your independent and dependent variables in one data set.  Then train one net with your independent variables and one dependent variable, and train a second net with your independent variables and the other dependent variable.  This gives you two Live Prediction dependent variables in the same data set, each using a different net.  Between the training sessions, in the Data Set Manager dialog you need to change the specification of the dependent variable; in other words, you cannot have two variables defined as dependent at a given time, since then NeuralTools wouldn't know where to put the live predictions.

You could also train those nets with two different data sets, and then use them both for Live Prediction in a third data set.

Last edited: 2015-09-03

15.10. Trained Network of Networks

Applies to: NeuralTools 6.x/7.x

Can I train a collection of neural nets and have a supervisory net choose which one to invoke depending upon the inputs?

There is no such function in NeuralTools for Excel as shipped. You might possibly construct this using the programming interfaces that are available for accessing Palisade neural net functionality (though it sounds rather complex). You might also consider using these interfaces for another scenario, namely training a neural net, and then using it in the context of your own VBA code.

  • The Excel Developer Kit (XDK) is included with NeuralTools. This lets you write VBA code to train a neural net based on Excel data, and store this net either in an Excel workbook or a file external to Excel (in Palisade's format). One limitation with this option is that the data for training and predicting must be in Excel, in NeuralTools data sets. You define data sets through the Data Set Manager command or through the programming interface. Workbooks with sample XDK code are available through the Help menu of NeuralTools.

  • We offer another product, a set of libraries named Palisade Custom Runtime. PCR makes Palisade's math available outside Excel, including neural net functionality. PCR can be used to train nets, and it can also predict using nets trained in Excel and stored in files. PCR could also be used from VBA for Excel, providing greater flexibility than the XDK. To use PCR from VBA, some development platform compatibility issues would have to be resolved. For example, PCR has a .NET interface, which is not directly accessible from VBA; the problem could be resolved with some programming effort.

Last edited: 2016-08-02

15.11. Automating NeuralTools

Applies to: NeuralTools 5.5 and newer

Can I run NeuralTools from Excel VBA (Visual Basic for Applications) or programmatically?

Yes, this has been part of NeuralTools since release 5.5. For details, in NeuralTools click Help » Developer Kit (XDK).

NeuralTools 1.0 and 5.0 did not provide a VBA programming API, so with those releases the definition and training of a neural net had to be done interactively. However, once a neural net has been trained, Live Prediction (available in NeuralTools Professional) can be used in combination with Excel VBA to make use of the neural net programmatically.

Once the neural net has been trained, values in the data set can be changed and the corresponding outputs can be used the same as any other Excel cells which have values computed by formulas. In other words, if you are used to working with a formula like

A2 = A1 * 10

where you change the value of A1 programmatically and then work with the value of A2, you are already working with the same principles. In the case of NeuralTools, changing A1 would correspond to changing one of the independent values in a particular data case, and A2 would correspond to the output computed by the neural network.

Since trained neural nets can be saved with Excel workbooks, the requirement of defining and training a neural network interactively will not impact the ability to make use of a trained neural net programmatically.
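
Here's a minimal VBA sketch of that principle; the sheet name and cell addresses are hypothetical, and it assumes a trained net with a Live Prediction cell in D2 fed by an independent value in B2:

    Sub ReadLivePrediction()
        With ThisWorkbook.Worksheets("Data")
            .Range("B2").Value = 42       ' change an independent value
            Application.Calculate         ' let the Live Prediction cell update
            MsgBox .Range("D2").Value     ' use the prediction like any formula result
        End With
    End Sub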

Last edited: 2015-09-03

15.12. Data Transformation before Training?

Applies to: NeuralTools 5.x–7.x

Is it advisable to transform some or all variables mathematically before inputting the dataset, for instance by a log transformation?

NeuralTools automatically scales numeric input variables linearly, to reduce differences in the order of magnitude of the variables; this is described in the manual.

Depending on your data, you may improve your results by performing additional non-linear transformations of data before training and prediction. For example, if the distribution of a variable has a long tail with most of the data points clustered together, the log transformation may be useful. The objective is for the neural net to "learn" how different patterns in inputs relate to the outputs, and that process is less likely to succeed if differences in patterns are obscured by data points being clustered together.

NeuralTools itself is not set up to do any user-selected data transformations. However, if you also have StatTools you can use it to perform several common types of non-linear transformations and then paste the results into your NeuralTools input data.
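
For example, this short macro (with a hypothetical sheet name and range) writes natural-log copies of column A into column B, which you could then define as a variable in your NeuralTools data set:

    Sub LogTransformColumn()
        Dim c As Range
        For Each c In ThisWorkbook.Worksheets("Data").Range("A2:A701")
            If IsNumeric(c.Value) And c.Value > 0 Then
                c.Offset(0, 1).Value = Log(c.Value)   ' VBA's Log is the natural logarithm
            End If
        Next c
    End Sub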

See also: Preparing Data for Neural Tools

Last edited: 2017-09-14

15.13. Signal Strength in Trained Networks

Applies to: NeuralTools, all releases

Neural networks "learn" by modifying the signal strength among neurons. "Threshold" or "activation" mathematical functions serve the purpose of measuring and adjusting signal strength. Do these functions within NeuralTools produce continuous values (say between 0 and 1), or are the values limited to 0 or 1 in an "all or nothing" situation?

Continuous values. For the purposes of classification, they're continuous values under the hood, but the result displayed to the user is the predicted category, say "timely payments" or "late payments" with regard to a loan applicant.

Last edited: 2015-09-03

15.14. Hidden Layers of Neurons

Applies to: NeuralTools, all releases

How many "hidden layers" of neurons exist within the software?

Generally speaking, too many layers result in an over-parameterized model (over-fitting), and too few result in a poorly fitted model. Therefore, does the software have the ability to determine the optimal number of hidden layers?

By default, NeuralTools will train a Probabilistic Neural Net (PN net); these are considered better as classifiers, and also provide probabilities of predictions. PN nets are not prone to over-fitting.

Using an MLF net is also an option. This is the standard type of neural net, called "Multiple-layer feedforward/MLF nets" in NeuralTools. With an MLF net, it's not so much the number of hidden layers that is the issue. NeuralTools allows up to 2 layers, but few applications require more than one layer. The question is the number of nodes in that one layer. NeuralTools will suggest a number of nodes based on the data, but that is not necessarily the optimal number. The way to find the optimal number is to run Best Net Search: the best configuration is found based on results on the data set aside for testing.

Last edited: 2015-09-03

15.15. Validation Method

Applies to: NeuralTools, all releases

Does NeuralTools use K-fold Cross Validation?

"K-fold Cross Validation" is a method of assessing the accuracy of predictions of a trained net. NeuralTools relies on the more standard "holdout validation" to accomplish this — it sets aside part of the data for testing before training.

Added in NeuralTools 6.0.0, the Testing Sensitivity Analysis feature is also a form of cross validation.

Last edited: 2017-03-08

15.16. Technical Questions about the Training Process

Applies to: NeuralTools, all releases

Is it fair to say that NeuralTools only processes whichever information you feed it and internally adjusts the thresholds and weights of the node connections?

Adjusting or optimizing the weights of the node connections is what happens when training Multi-Layer Feedforward Networks.  We offer those, but by default we train Probabilistic Neural Nets (category prediction) or Generalized Regression Neural Nets (numeric prediction).  With PNNs/GRNNs, the training process consists of adjusting or optimizing "smoothing factors".  Smoothing factors decide how far we look when making a prediction for a given case, where "how far" refers to the distance between this case and each case found in our historical data.  Training of PNNs/GRNNs is much faster; that's why we use them by default.
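
To give the flavor of the idea, here is a generic one-variable kernel-regression sketch, not Palisade's implementation: a GRNN-style prediction is a weighted average of the historical outputs, and the smoothing factor controls how quickly a training case's weight falls off with distance.

    Function GRNNPredict(x As Double, xs() As Double, ys() As Double, sigma As Double) As Double
        Dim i As Long, w As Double, num As Double, den As Double
        For i = LBound(xs) To UBound(xs)
            ' Nearby historical cases get weights close to 1; distant ones near 0
            w = Exp(-((x - xs(i)) ^ 2) / (2 * sigma ^ 2))
            num = num + w * ys(i)
            den = den + w
        Next i
        GRNNPredict = num / den
    End Function

A small sigma makes the prediction depend only on very close cases; a large sigma averages over nearly the whole training set.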

Does NeuralTools include a process to generate samples or random numbers of any type?

Random numbers are involved in the training of MLF nets.  This is a much harder optimization problem than with PNNs/GRNNs.  Consequently, complex optimization methods are used.  We use a hybrid of deterministic and stochastic optimization methods, and the latter involve random numbers.  No random numbers are involved when training PNNs/GRNNs.

Last edited: 2015-09-03

15.17. GPU in NeuralTools

Applies to:
NeuralTools 5.x–7.x

For NeuralTools, is it better to have a powerful GPU for training and testing the neural network?

NeuralTools doesn't use the GPU at all for neural net computations. The GPU is involved only for displaying things on screen, just as with any other piece of software.

Last edited: 2018-03-16

15.18. Tradeoffs: Accuracy, Flexibility, Training Speed

Applies to: NeuralTools 5.x–7.x

Is there a way to reduce accuracy of the neural network but extend its flexibility?

Here the key is the distinction between different neural net types available in NeuralTools: Probabilistic Neural Nets versus Multi-Layer Feedforward Neural Nets.

MLF nets are more flexible, in the sense that they have greater capacity for generalizing beyond the range of training data. By default NeuralTools uses PN nets, which train much faster. For greater flexibility, and capability to generalize, try MLF nets.

Last edited: 2016-08-02

15.19. Training Time for PN/GRN Nets

Applies to: NeuralTools, all releases

Can I increase the training time for PN/GRN neural nets?

No. Training of PN/GRN nets stops automatically when NeuralTools decides that further training would no longer significantly decrease the training error. If a PN/GRN net trained longer, the error on the training set might decrease -- but the error on the testing set might well increase!

For PN/GRN nets, it is not true that the longer a net is trained, the better.

For MLF nets, an increase in training time generally improves the accuracy of the net. However, if there are too many hidden nodes, the MLF net might overtrain with long training times. For that reason long training of MLF nets is only recommended in the context of Best Net Search.

Last edited: 2005-10-25

15.20. Overfitting during Training

Applies to: NeuralTools 5.x–7.x

How does NeuralTools address the problem of overfitting?

Overfitting occurs when a neural net "memorizes" the training data instead of finding general patterns. With an overfitted net, the testing results on the training set are very good, but they are poor when the net is tested on data that was set aside for testing.

This potential problem is addressed in NeuralTools as follows:

  • PNN nets are not prone to overfitting.
  • For MLF nets, we recommend Best Net Search to avoid overfitting. It starts with a small net with two nodes, which will probably not overfit. It then tries bigger nets, to get to a point where the net is big enough to train properly, but not so big as to be capable of overfitting.

NeuralTools 6.0 added Sensitivity Analysis, which should help in diagnosing overfitting. If we get overfitting with a given architecture, that will probably show in unstable results from Sensitivity Analysis.

Last edited: 2015-09-03

15.21. Dependent Variable is Binary 1/0 or Yes/No

Applies to: NeuralTools 5.x–7.x

My dependent variable is all 1's and 0's, with a strong preponderance of 1's and few 0's.  After training, NeuralTools simply gets all the 1's correct, and all the 0's wrong.  What's wrong?

Basically, NeuralTools doesn't have enough 0 cases to learn from. It has decided that most cases result in a 1, and it just predicts a 1 every time.  It's right most of the time, but the predictions aren't useful.

In NeuralTools, click Help » Example Spreadsheets » Other Category Prediction Examples » Advertising Responses - Oversampling.  This example illustrates oversampling to make a better balance between the 0 and 1 cases.  (The example has a preponderance of 0's rather than a preponderance of 1's, but the principle is the same.)

As an alternative, you might try a Best Net Search on your original data.  The search may find an MLF with good testing results.  (PNN nets predict by interpolation from training data, so they don't work too well if the training set is very unbalanced, with the vast majority of the training cases falling into one of the dependent categories. MLF nets are capable of finding general patterns, so it is more likely that a good MLF net can be found with an unbalanced training set.  But keep in mind MLF nets don't return probabilities of predictions.)

Last edited: 2015-09-03

15.22. Correlated Variables and Interactions between Variables

Applies to:  NeuralTools, all releases

Does the software create an optimally "simple" model by combining correlated variables (dimension reduction)?

Correlations between variables are much less of an issue with neural networks than they are with linear regression. Therefore, at this point NeuralTools does not address them. "Multiple-layer feedforward networks generally handle even massive amounts of correlation without complaints. Probabilistic neural networks can be hindered to some degree, as groups of redundant variables exert undue influence on the decision process" (T. Masters, Advanced Algorithms for Neural Networks, 1995, p. 294).

The first step would be to determine if correlations are present, say by using StatTools Correlation and Covariance analysis. If there is a significant amount of correlation in the data, that would point towards the use of MLF nets. However, a PN net could also be a viable option, if it makes accurate predictions on the testing data despite the presence of correlations.

Interactions between variables are often present within data. This situation exists when the relationships between the values of one variable vary when measured against the values of another variable. For example, the driving experience of males vs. females does not remain constant across all ages.

How does NeuralTools adjust the impact of any given variable for any interactions that are present? Does the software provide graphs (sometimes referred to as panel graphs) that would help the user visualize and explain any interactions?

Variable interactions present a problem with linear regression, but do not present a problem with neural networks. Therefore NeuralTools does not provide tools for analyzing variable interactions.

Last edited: 2015-09-03

15.23. Quantity and Selection of Test Cases

Applies to: NeuralTools 5.x and newer

I'm using NeuralTools for time series predictions. Should I allow a random % for testing? Is it preferable to use the tag feature to test on sequential data, maybe using 1995-2014 data in training and 2015 in testing? What is the recommended amount of data to test?

The answer depends on the type of neural net. GRN nets are used by default for numeric prediction, and for these nets it makes a difference whether we use randomly selected data points for testing, or rather the final time period. GRN nets interpolate from known data, and it's easier to interpolate inside a gap, where we have known data on both sides of the gap. Therefore we recommend tagging the final period for testing with GRN nets.

MLF nets try to figure out the underlying function, and there it's OK to use randomly selected cases for testing.

For the amount of testing data, 20% is a good rule of thumb.  When using randomly selected cases, the Testing Sensitivity Analysis can help figure out the number of cases to use.

Last edited: 2016-04-19

15.24. Interpreting Incorrect% in Testing Report

My question is about the Detailed Report that NeuralTools produces when testing category predictions in a PNN. With a Good result, the Incorrect% is 100 minus the Prediction%, which makes sense to me. But with a Bad result, there doesn't seem to be a relationship between the Incorrect% and the Prediction%. What does the Incorrect% mean?

The Incorrect% is the sum of the probabilities for all the incorrect categories.

This is easier to understand if you "change Detailed Report settings to show the probabilities assigned by a Probabilistic Neural Net to every possible category for the dependent variable." Click Utilities » Application Settings. In the Reports section, click the drop-down menu at the end of the Columns in Detailed Reports row. A dialog box titled NeuralTools — Columns to Display in Detailed Reports will open. "In that dialog, select Probabilities of All Categories (for PNN) for Testing. Then train a PN Net on a data set with at least 3 categories in the dependent variable." (Source: NeuralTools 7 user manual, page 51, or topic "Training Reports" in the help file.)

Attached is part of the results from our standard example "Auto Loans 1a" (Help » Example Spreadsheets) with that application setting in effect.

Detailed Testing Report (image)

Row 5 is probably the clearest: The correct answer was "late payments", but NeuralTools had computed only a 43.12% chance for it. NeuralTools picked "timely payments" because it computed a 54.69% chance that was correct; it computed a 2.19% probability for the third category, "default". The Incorrect% is 100% minus the probability of the correct answer: 100 - 43.12 = 56.88%. That same 56.88% is the sum of the probabilities of the incorrect answers: 54.69% + 2.19%. By contrast, for a Good result like row 6, Incorrect% is always 100% minus Prediction%, since there the predicted answer was the correct answer.

For Good and Bad answers alike, Incorrect% is the total probability of all incorrect answers, which equals 100 minus the correct%. However, NeuralTools displays the Prediction%, not the correct%. If the answer was Good, the predicted answer was the correct answer, so the probability shown in Prediction% is the correct%. But for Bad answers, the prediction was not the correct answer; therefore, the correct% is not shown, the Prediction% is the probability of the one incorrect answer that had highest probability, and the Incorrect% is the total probability of all the incorrect answers.
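
The arithmetic can be captured in a small, purely illustrative VBA function:

    Function IncorrectPct(probs As Variant, correctIndex As Long) As Double
        ' Sum the probabilities of every category except the correct one
        Dim i As Long
        For i = LBound(probs) To UBound(probs)
            If i <> correctIndex Then IncorrectPct = IncorrectPct + probs(i)
        Next i
    End Function

For row 5 above, IncorrectPct(Array(54.69, 43.12, 2.19), 1) returns 56.88, matching 100 minus 43.12.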

Last edited: 2018-06-07

15.25. Time Needed for Testing Sensitivity

Applies to:
NeuralTools 6.7/7.x

I clicked on the Testing Sensitivity command, and it estimated a very long time to complete the analysis. Is there some way to shorten it?

The analysis has to run a great many training sessions to assess the stability of the testing process. The end result is some knowledge of the range of error measures when various percentages of cases are held out for testing. In deciding whether to assess testing sensitivity, you're deciding whether it's enough to know that one testing session had an error of X, or whether you need to know that another training session might have an error of 2X or X/2.

The analysis time depends on training time (which could be fixed time or variable, as set in the training dialog), number of different % values, and the setting Number to train for each % value. If you reduce either of the last two, you will reduce the analysis time proportionately.

Last edited: 2018-11-28

15.26. Testing Results in Training Set Versus Testing Set

Applies to: NeuralTools, all releases

Why are testing results for the training set so much better than the testing results for the testing set with PN/GRN nets?

The summary training report includes testing results for the training set, but they do not provide useful information as to how well the training went. Good testing results on the training set may result from the training process overfitting neural net parameters to the specific cases included in training; we can think of this as the neural net memorizing the training set. An overfitted net will generate inaccurate predictions for cases not included in training. (For more on overfitting, see Overfitting During Training.)

With PN/GRN nets there is another reason for a low testing error reported for the training set. PNNs predict by interpolating from the entire training set, with emphasis on the training cases that are in the neighborhood of the one for which we are making the prediction. So the error reported for the training set is based on a procedure in which we make a prediction for a data point by interpolation from a set that includes that data point. That means we'll almost always get the correct answer in the case of category prediction (PN nets), or close to the correct answer in the case of numeric prediction (GRN nets).

Last edited: 2015-09-03

15.27. When NeuralTools Gives Unsatisfactory Test Results

Applies to:
NeuralTools 6.x/7.x

NeuralTools finished training and testing the net, but I'm not happy with the test results. Is there anything I can do to get a better network, so that I'll have higher confidence in predictions?

Yes, there are several possible improvements. Choose from the suggestions below, based on your situation.

  • Gather more cases. The more cases you have, assuming they're not duplicates, the better network NeuralTools can construct.
  • Train longer. If you have specified a fixed amount of time for training, and NeuralTools didn't finish training within that limit, consider increasing it.
  • Try Best Net Search. If you've specified a particular type of network, a different type might provide better results. The extra time for a Best Net Search may reward you with a better network. (In the training dialog, on the Net Configuration tab, change Type of Net to "Best Net Search" and re-train.)
  • Exclude some variables. That may make training run faster and may provide better results. See Eliminate Variables Based on Impact Analysis?
  • Change your percentage of training cases that are reserved for testing. If you have just a training data set, and you're telling NeuralTools to hold out a certain percentage of cases for testing, try a different percentage. Testing Sensitivity, in the Utilities menu, can help you make that decision.

Last edited: 2019-01-28

15.28. "None of the Above" Category

Applies to: NeuralTools, all releases

Can a neural net decide that a case does not belong to any of the categories found in the training data?

Yes, we can interpret the output of a Probabilistic Neural Network in this manner. (PNNs are the default type of net used for category prediction.)

Let's say we train a net to determine the vehicle type, based on some characteristics of a satellite image of a vehicle. In our training data set we have images for which we know the actual type of vehicle: sedan, station wagon, or van. When we ask a PNN to assign an image to one of these categories, we obtain 3 probabilities as our output, adding up to 100%. For example, the probability of a sedan may be 70%, a station wagon 20%, and a van 10%. In this case NeuralTools declares that we have an image of a sedan, with 70% probability (selecting the category with the highest probability).

When making a prediction for another image, it may turn out that none of the probabilities are very high. For example, our output may specify that the probability of a sedan is 30%, a station wagon 40%, and a van 30%. In this case NeuralTools will declare that the image is one of a station wagon. However, the user may choose to interpret the probabilities differently. We may decide that if none of the probabilities exceeds 50%, we treat the item as belonging to some category not found in the training data. This approach makes sense if we know that the list of categories in the training data is incomplete, as it is in our example: for example, the vehicle may be a truck.

To implement this approach to interpreting the output probabilities, you need to specify that all the probabilities should be included in the Detailed Report. Open the Application Settings dialog and click in the Columns in Detailed Reports row; in the "NeuralTools – Columns to Display in Detailed Reports" dialog that appears, select the check box in the "Probabilities of All Categories (PNN)" row, in the "Prediction" column.
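
A purely illustrative function implementing that interpretation might look like this:

    Function CategoryOrUnknown(predicted As String, maxProbability As Double) As String
        ' Treat any prediction whose highest category probability doesn't exceed 50%
        ' as belonging to some category not found in the training data
        If maxProbability > 0.5 Then
            CategoryOrUnknown = predicted
        Else
            CategoryOrUnknown = "Unknown"
        End If
    End Function

With the second image example above (sedan 30%, station wagon 40%, van 30%), CategoryOrUnknown("station wagon", 0.4) returns "Unknown".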

Additional keywords: Unknown category

Last edited: 2014-04-10

15.29. Structure of the Trained Network

Applies to:  NeuralTools, all releases

Is there some way to find the architecture (number of layers and neurons) of the neural network that I trained on the NeuralTools software?

It depends on the type of network you train.

If you train a GRN/PN network (which is the default), there's no flexibility as to the structure of the net.  The structure is as described in the "More on Neural Networks" section of the NeuralTools user manual. 

If you train an MLF net, you can specify the structure (layers and nodes). NeuralTools allows up to two layers in an MLF net. If the setting is left as Automatic, the automatically selected structure is described in the training report.  While you know the number of MLF layers and nodes, we don't provide the specific mathematical formula by which the net makes predictions, as mentioned in Internals of the Trained Network.

Last edited: 2014-11-17

15.30. "Linear Predictor" in Training Report

Applies to: NeuralTools 6.x/7.x

In my training report, I see "Best Configuration: Linear Predictor". What does that mean?

By default, NeuralTools will try linear regression on numeric variables. (You can inhibit this in the Training dialog, if you wish.) Linear Predictor means that, for this data set, a linear regression did a better job than a regular neural network. In this situation, the intercept and coefficients are shown in a separate section on the same report.

One of the supplied examples shows this. Click Help » Example Spreadsheets » House Prices with Linear Regression.

What method does the linear regression model use? OLS, GLS, other?

NeuralTools uses Ordinary Least Squares (OLS).

Last edited: 2018-03-01

15.31. Internals of the Trained Network

Applies to: NeuralTools 5.5 and newer

Can I see the internals of the trained network or the actual equation that NeuralTools develops?

For a neural net, no "equation" is reported. The details of how a neural net computes the predictions would often be very complex, especially for PN/GRN nets.

Beginning with 5.5, NeuralTools attempts linear regression in addition to training a neural net. If the linear function makes better predictions on the testing set, then the linear function is used instead of a neural net. In this case, the linear function intercept and coefficients are listed in the report. An example installs with NeuralTools:

  • NeuralTools 6.x/7.x: Click Help » Example Spreadsheets » Other Numeric Prediction Examples » House Prices with Linear Regression
  • NeuralTools 5.5/5.7: Click Help » Example Spreadsheets » House Prices (Linear Regression)

See also: Structure of the Trained Network

Last edited: 2015-09-03

15.32. Accessing the Trained Network from Outside Excel

Applies to: NeuralTools 5.x–7.x

Can I use the trained network from outside of Excel?

Yes, but only while Excel and NeuralTools are running.

Method 1 — Live Prediction (requires NeuralTools Industrial Edition)

You can get NeuralTools predictions by having your non-Excel application communicate with Excel, using the Live Prediction feature. Here's how:

  1. In Excel, train and test a neural net, and store that net in a workbook. Let's call it PredictionWorkbook.xls.

  2. PredictionWorkbook.xls will contain a Live Prediction cell; the value of that cell will change automatically as soon as input/independent values change.

  3. When the non-Excel custom application wants to get a prediction, it starts Excel, opens NeuralTools.xla and opens PredictionWorkbook.xls (all of which is done by code and doesn't need to be visible on the screen).

  4. Using the Excel API, the custom application writes input/independent values into the appropriate cells in PredictionWorkbook.xls, and reads the prediction from the Live Prediction cell.
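
In outline, from any COM-capable language the automation might look like the following VBA sketch; the file paths, sheet name, and cell addresses are all hypothetical:

    Sub GetPredictionFromOutside()
        Dim xl As Object, wb As Object
        Set xl = CreateObject("Excel.Application")          ' start Excel, invisible by default
        xl.Workbooks.Open "C:\Path\To\NeuralTools.xla"      ' load the NeuralTools add-in
        Set wb = xl.Workbooks.Open("C:\Models\PredictionWorkbook.xls")
        wb.Worksheets("Data").Range("B2").Value = 42        ' write an independent value
        xl.Calculate                                        ' let the Live Prediction cell update
        Debug.Print wb.Worksheets("Data").Range("D2").Value ' read the prediction
        xl.Quit                                             ' close Excel when done
    End Sub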

Method 2 — NeuralTools Excel Developer Kit

There's also a NeuralTools Excel Developer Kit beginning with NeuralTools 5.5. Using that, you could trigger the Predict command from code (for new independent data), without a need for Live Prediction. If you want predictions outside of Excel, your non-Excel application would need to communicate with the copy of Excel that is running NeuralTools.

Last edited: 2015-09-03

15.33. Combining Two Trained Networks

Applies to: NeuralTools 6.x/7.x

I have two different data sets with the same dependent variable (output), and I have trained them separately. Can I combine them to provide even better predictions?

There is some literature in the field of machine learning about combining multiple models to arrive at a single prediction. You want to use search terms like "ensemble methods" and "combining classifiers".

This paper uses an ensemble model in NeuralTools for credit evaluation: "On the Use of Ensemble Models for Credit Evaluation" by Albert Fensterstock, Jim Salters and Ryan Willging, published in The Credit and Financial Management Review's issue dated Fourth Quarter 2013. The paper gives some additional references:

  • Dietterich, Thomas G. (2000) Ensemble Methods in Machine Learning
  • Kuncheva, L. and Whitaker, C. (2003) Measures of Diversity in Classifier Ensembles
  • Mitchell, Tom M. (1997) Machine Learning
  • Schwenk, Holger and Bengio, Yoshua (2000) Neural Computation
  • Sollich, Peter and Krogh, Anders. (1996) Learning with Ensembles: How Overfitting Can Be Useful
  • Specht, Donald F. (1989) Probabilistic Neural Networks
  • Zenko, Bernard (2004) Is Combining Classifiers Better Than Selecting the Best One?

Last edited: 2015-09-03

15.34. Where Is the Detailed Report?

Applies to: NeuralTools 5.x–7.x

I requested detailed and summary reports. I see the summary report all right, as a worksheet tab, but I didn't get a detailed report. What is wrong?

The detailed report is in the form of additional columns to the right of your data columns, rather than a separate worksheet tab.

I get a detailed report in testing and in prediction, but I never seem to get one in training. What is wrong?

The default settings for NeuralTools give detailed reports for testing and prediction, but not for training. If you would like to see a detailed report for training as well, open Application Settings in the NeuralTools Utilities Menu, change Reports to Generate to Custom, make your selections, and click OK.

Last edited: 2015-09-03

15.35. Inputs Correlated with Target

Applies to: NeuralTools 5.x–7.x

Some of my training reports showed surprisingly low association with inputs which were quite visibly correlated with the target itself. Is there a way to bias the training to take advantage of this?

This refers to the Variable Impact values that are included in training reports. It's important to understand the purpose of the Impact values, and how they're calculated. Calculation and Use of Variable Impacts has a lot to say about this.

The purpose is only heuristic, and the idea is that it's worth dropping variables with low impact values to see if the results improve.

Last edited: 2016-08-02

15.36. Calculation and Use of Variable Impacts

Applies to: NeuralTools, all releases

How does NeuralTools calculate variable impacts, and how can I use these results?

The following has been in the user manual since release 5.5.0, under "What is Variable Impact Analysis?":

The purpose of Variable Impact Analysis is to measure the sensitivity of net predictions to changes in independent variables. This analysis is only done on training data. As a result of the analysis, every independent variable is assigned a "Relative Variable Impact" value; these are percent values and add to 100%. The lower the percent value for a given variable, the less that variable affects the predictions. The results of the analysis can help in the selection of a new set of independent variables, one that will allow more accurate predictions. For example, a variable with a low impact value can be eliminated in favor of some new variable.

However, one needs to keep in mind that the results of the Impact Analysis are relative to a given net. The fact that one net "learned" to disregard a given variable makes it likely that another net will also "learn" to disregard it; but then again, another training session with a different type of net might "discover" how the variable can make a significant contribution to accurate predictions. In data sets with smaller numbers of cases and/or larger numbers of variables, the differences in the relative impact of the variables between trained nets may be more pronounced. Also, it is important to remember that these values are "relative". Suppose that with two independent variables one is assigned 99%, and the other 1%. This means that the latter is much less important than the former, but does not mean that it is unimportant, particularly if high accuracy of predictions is desired.

Only the training data set is included in the analysis. (If Auto-Testing or Auto-Prediction are used, those cases are not included. The reason is that they might have numeric values outside the training range, which could make analysis results more unpredictable.)

For a given category independent variable, for every case the analysis steps through all the valid categories for that variable, and measures the change to the predicted value. (With category prediction there is no numeric predicted value, but there are raw numeric net outputs on which the category prediction is based; those numeric outputs are used by the analysis.)

For a given numeric independent variable, for every case the analysis steps through the range from the minimum to the maximum training value for that variable, measuring the change to the predicted value (or, in the case of category prediction, change to the raw numeric outputs).

The internal details of the method are not crucial, because the purpose of that analysis is limited. It's not meant to support firm conclusions, like stating with high confidence that a given variable is irrelevant. Instead, it's meant to help in a search for the best set of independent variables: the results of the analysis may be telling us that a given variable looks irrelevant, sufficiently so that it's worth trying to train a net without this variable.

I understand the above caveats, but still I'd like to know exactly how the impacts are calculated.

We take the first case in our training set, and we step through the values of the first independent variable (while keeping other variables fixed), make predictions with our neural net, and record the values we get for the dependent variable. Delta is the difference between max and min dependent value.

We do that for every case in the training set. Let's use MeanDelta1 to represent the mean Delta value for the first variable. We get deltas for our n independent variables, MeanDelta1, MeanDelta2, ..., MeanDeltan.

Then the impact of the first variable is MeanDelta1 / (MeanDelta1 + MeanDelta2 + ... + MeanDeltan), expressed as a percentage, and similarly for the others. The total impact is always 100%.
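
As a sketch of that final normalization step (generic code, not Palisade's implementation):

    Function RelativeImpact(meanDeltas() As Double, i As Long) As Double
        ' Impact of variable i = its mean delta as a share of the total, as a percent
        Dim j As Long, total As Double
        For j = LBound(meanDeltas) To UBound(meanDeltas)
            total = total + meanDeltas(j)
        Next j
        RelativeImpact = 100 * meanDeltas(i) / total
    End Function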


Last edited: 2015-09-03

15.37. Interpreting the Percentages in Variable Impact Analysis

Applies to: NeuralTools 5.x–7.x

What if var1 = 25%, var2 = 15% and var3 = 10% in the variable impact analysis? The help file indicates that this number could be used as a ranking number and the lowest ones could be eliminated from the model. But, using the numbers above, can I say that these three represent 50% of samples?

As you see in the manual, we make limited claims for the results of the variable impact analysis: these percentages can help when trying to eliminate unnecessary variables from training. (For more about that, please see Eliminate Variables Based on Impact Analysis?)

These percentage values refer to percent of total variability in the dependent variable that comes from a given independent variable. They refer to the overall network and cannot be interpreted in terms of percent of samples.

Last edited: 2015-09-03

15.38. Eliminate Variables Based on Impact Analysis?

Applies to: NeuralTools, all releases

Should I eliminate low ranking variables as indicated by the Variable Impact Analysis, and then re-run/re-test the model without these variables, in effect creating a new model? What criteria should I use to choose the variables to eliminate?

If results on the testing data are not satisfactory, it makes sense to try eliminating variables using the Variable Impact Analysis tool. You can decide whether to use a net trained with a smaller number of variables, based on the results on the testing data.

See also: Interpreting the Percentages in Variable Impact Analysis.

Trying different subsets of independent variables could be a tedious process. If you have NeuralTools 5.5 or later, you can automate it by using the NeuralTools Developer Kit (XDK).

Last edited: 2019-01-16

15.39. Categorical Predictions with Probit or Logit?

Applies to: NeuralTools, all releases

For categorical prediction, does NeuralTools use a probit, logit, or some type of multinomial function added to the model?

These are terms from traditional statistical methods for category prediction, like logistic regression.  PNNs are a more recent invention, but have a better foundation in terms of statistical theory.  Regarding logit, it's used because it works well in practice, but there's no statistical argument to show why it should work better than say probit.  The PNN methodology, on the other hand, is essentially the statistics for deriving probability density functions from data (one probability density function per category).  Probabilities that an item is in this or that category come directly from probability density functions, with those functions constructed from the available data.

Another way to compare say logistic regression to PNNs is to note that logistic regression tries to compress all of the information contained in the historical data in a rather simple function.  With PNNs, we keep all of the historical data to be used during the prediction step.  During the prediction step we interpolate from all of our historical data. This interpolation is done with the smoothing factors determining how far we look to get the interpolated value; see Technical Questions about the Training Process.

Because PNNs have a better theoretical foundation, and don't try to compress the information in the historical data into a simple function, they often end up making better predictions than the more traditional methods.

Last edited: 2013-06-24

15.40. Live Prediction

Applies to: NeuralTools 7.x

Can I save a trained net to an .NTF file with Neural Net Manager and then use it for live prediction?

No, it must be in the workbook with the data set. However, you can use Neural Net Manager to move a network to a new workbook. This lets you analyze data sets in other workbooks using the existing trained network, with no need to access the original data set that was used in training.

As an alternative, with some custom programming the Palisade Custom Runtime can make predictions directly with nets stored in .NTF files. The greater the number or complexity of your nets, the more benefit there is to this approach.

When the trained net is in the workbook, can I use live prediction in any cell?

The prediction formula can only be generated by our software using the Predict command on the ribbon. This should not be a major inconvenience. You can create a one-row data set with one live prediction cell. Then in the cells for independent variables you can add references to the cells you want to use for live prediction.

How many live prediction cells can I have in one workbook?

There's no fixed limit; it depends on the number and complexity of your nets and the available Excel and Windows resources.

Last edited: 2017-05-31

15.41. Extrapolation with a Trained Network

Applies to: NeuralTools 5.x–7.x

Let's say that we have 10 variables in an historical log of 10,000 records, and all of them were used to train the net. Let's assume the net is predicting values with high accuracy.

Now, what happens if the network receives information that is outside the original ranges it was trained with? In effect, it's being asked for an extrapolation. How will it perform?

Attempts to make predictions outside the range of the training data can be problematic. For now, some sort of custom coding would be needed to check for these cases, but a future release of NeuralTools will likely automate these warnings.

There are differences between neural net types in their ability to extrapolate beyond training data: MLF nets do a better job in this regard than GRN/PN nets. (See attached example.) GRN/PN nets use sophisticated statistical techniques to interpolate from training data, and are very good at it; MLF nets have the capability to discern general patterns that will probably extend outside the training range.

It might also be possible to set up training/testing to stress test this particular scenario: deliberately split the data set so there's a lot of out-of-range data in the testing set. The NeuralTools XDK could be used to perform multiple tests with one click (analogous to the Testing Sensitivity analysis added in v6).

Last edited: 2015-09-03

15.42. Data treatment before training a Neural Network

Applies to: NeuralTools 7.x/8.x

Main steps to follow:

1. Data Quality

The first step in building any prediction or classification model is to evaluate data quality. It is important to identify and fix problems such as the following in the data set (a scripted version of these checks is sketched after the list):

  • Repeated records. The same record should not appear more than once in the data set, because if one copy is selected during training and another during testing, the testing results will be distorted.

  • Values out of range. Numerical values that the variable shouldn't take. Example: Age = -3

  • Invalid values. Categorical variables with categories that don't make sense. Example: Marital Status = Bachelor

  • Inconsistencies. This type of problem occurs when the values of two or more variables do not agree with each other.

    Example: Age = 20 years, Employment seniority = 25 years.

  • Missing data. If a variable has more than X% missing data, it is removed from the analysis; otherwise the missing values can be estimated using a data imputation technique.
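
Purely as an illustration of how these checks might be scripted before the data reaches NeuralTools, here is a minimal pandas sketch; the file name, column names, valid categories, and the 30% missing-data threshold are all hypothetical:

    import pandas as pd

    df = pd.read_excel("training_data.xlsx")    # hypothetical file name

    # Repeated records: keep only one copy of each duplicated row.
    df = df.drop_duplicates()

    # Values out of range: e.g., Age must be non-negative.
    bad_age = df[df["Age"] < 0]

    # Invalid values: Marital Status must be one of the known categories.
    valid_status = {"Single", "Married", "Divorced", "Widowed"}
    bad_status = df[~df["MaritalStatus"].isin(valid_status)]

    # Missing data: drop variables with more than the chosen threshold missing;
    # the remaining gaps could be filled by an imputation technique.
    threshold = 0.30                             # illustrative X% = 30%
    missing_frac = df.isna().mean()
    df = df.drop(columns=missing_frac[missing_frac > threshold].index)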


2. Univariate Analysis

Once the data issues described above have been fixed, the next step is to run a Univariate Analysis.

  • Categorical variables. Build a bar chart or a frequency table of the variable.

    - If the variable has only one value, it should be removed from the analysis.

    - If any category has a frequency lower than 5%, the variable should be re-categorized so that every category has a frequency greater than 5%.

    - If there are only two categories and one of them has a frequency lower than 5%, the variable should be removed from the analysis.

  • Numerical variables. Compute descriptive statistics; build a histogram and a boxplot of the variable.

    - If the variable is highly skewed, it may be convenient to apply a log transformation during the neural network training.

    - If there are outliers, it is important to determine whether or not they are measurement errors before making the decision to exclude them.

3. Bivariate Analysis


If there is a large number of independent variables, it is convenient to run a Bivariate Analysis, in which each independent variable is analyzed against the dependent variable.

  • Categorical variable vs. categorical variable: If both variables are categorical, run a Chi-square Independence test.
    - If the p-value of the test is low, the independent variable is included in the neural net training; otherwise, it is omitted.

  • Numerical variable vs numerical variable: If both variables are numerical, compute the Spearman correlation coefficient.

    - If the absolute value of the correlation coefficient is greater than 0.75, the independent variable is included in the neural net training; otherwise, it is omitted.

  • Numerical variable vs. categorical variable: If one of the variables is numerical and the other one is categorical, run a t test or a Mann-Whitney test (both available in StatTools).

    - t test. These results are based on the assumption that the variables are approximately normally distributed. If this is not the case, the results might not be valid, especially if the sample size is small; use the Mann-Whitney test in those cases. If the p-value of the test is low, the independent variable is included in the neural net training; otherwise, it is omitted. You can run this analysis through the menu Statistical Inference » Hypothesis Test » Mean/Std. Deviation… of StatTools. Be sure to select the Two-Sample Analysis type.

    - Mann-Whitney test. If the p-value of the test is low, the independent variable is included in the neural net training; otherwise, it is omitted. You can run this analysis through the menu Nonparametric Tests » Mann-Whitney Test… of StatTools.
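
StatTools runs these tests through its menus; purely as an illustration of the same decision rules, here is a minimal Python sketch using scipy (the contingency table, samples, and significance cutoff are hypothetical):

    import numpy as np
    from scipy import stats

    alpha = 0.05   # assumed significance level

    # Categorical vs. categorical: chi-square independence test.
    table = np.array([[30, 10], [20, 40]])           # hypothetical contingency table
    chi2, p, dof, expected = stats.chi2_contingency(table)
    include_cat = p < alpha

    # Numerical vs. numerical: Spearman correlation, include if |rho| > 0.75.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 2 * x + rng.normal(size=100)
    rho, p_rho = stats.spearmanr(x, y)
    include_num = abs(rho) > 0.75

    # Numerical vs. categorical (two groups): t test or Mann-Whitney test.
    group_a, group_b = x[:50], x[50:]                # hypothetical grouping
    t, p_t = stats.ttest_ind(group_a, group_b)       # assumes approximate normality
    u, p_u = stats.mannwhitneyu(group_a, group_b)    # nonparametric alternative
    include_mixed = p_u < alpha                      # low p-value -> include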


Last Update: 2020-06-04



16. PrecisionTree

16.1. Size Limits in PrecisionTree

Applies to: PrecisionTree 5.x–7.x

It seems my computer really drags when I create a larger decision tree or influence diagram. I know that PrecisionTree Professional Edition is limited to 1000 nodes, but are there any limits to the number of nodes in PrecisionTree Industrial Edition?

PrecisionTree Industrial does not have a limit to the total number of nodes, but large trees must be split into subtrees of no more than 5000 nodes each. You can link them into the main tree with reference nodes. Particularly if you have some repetitive subtrees, this can simplify your logic and make your tree easier to understand.  Depending on the details of the main tree and the common subtrees, you may or may not observe a speedup in calculations.

Though there's no fixed upper limit on number of nodes, there is a practical limit that varies from one computer to the next. The computational complexity of the influence diagram is proportional to the product of the numbers of branches in all the nodes. That means that you get slower and slower performance as you have more and more nodes. A thousand, for instance, would almost certainly be too many. But there's no one number where we can say, "no more than this", because it depends on the characteristics of the computer.

Similar considerations apply with a decision tree. And, even though a decision tree lets you split a portion off into a subtree, this does not reduce the computational complexity.

Last edited: 2015-10-20

16.2. Sharing a Tree with Colleagues Who Don't Have PrecisionTree

Applies to: PrecisionTree 6.x/7.x

I want to send the Excel file containing my tree to a colleague who doesn't have PrecisionTree. How can I do this?

If they open the Excel file without PrecisionTree running, all the numbers will be replaced with #NAME? errors.

PrecisionTree doesn't have anything that corresponds to @RISK "swap out functions". Instead, you can use PrecisionTree to capture all or part of the tree to an image and paste it into Word, Excel, PowerPoint, Acrobat, or any program that can accept images. Please see Decision Tree on a PowerPoint Slide.

Beginning with release 7.0, you also have the option to click the View in BigPicture button in PrecisionTree, if you have BigPicture. That will load BigPicture and make a copy of your tree. In BigPicture, you can expand individual branches, after turning Single Expand off if necessary; or you can click Collapse or Expand to » and select a level of expansion, such as No Topics Collapsed. The resulting tree will be self-contained in Excel, and all the numbers can be viewed by someone who has Excel but doesn't have PrecisionTree or BigPicture. (If they have BigPicture but not PrecisionTree, they'll be able to collapse and expand branches of the tree.)

Last edited: 2015-09-04

16.3. Payoff Formula after Converting Influence Diagram to Decision Tree

Applies to: PrecisionTree, all releases

I converted an influence diagram to a decision tree. In the converted tree, the path payoff calculation is Payoff Formula, but there's no default formula. But when I create a tree myself, the calculation is Cumulative Payoff. Is something wrong?

Cumulative payoff is indeed the default formula for new trees. But when converting an influence diagram to a tree, PrecisionTree has to honor the values in that diagram, which are specific to each node, so the method is Payoff Formula. Each node has its own formula, but there is no default formula because there's no one formula that would be suitable for all the converted nodes.

Last edited: 2013-01-04

16.4. Converting Decision Tree to Influence Diagram ?

Applies to: PrecisionTree, all releases

PrecisionTree can convert an influence diagram to a decision tree. Can it go the other way?

No, PrecisionTree cannot convert a decision tree to an influence diagram, because there is not a unique solution to that type of conversion.

Last edited: 2013-01-04

16.5. Fault Trees in PrecisionTree?

Applies to: PrecisionTree 1.x, 5.x–7.x

Can a decision tree be turned into a fault tree?  Can PrecisionTree handle fault trees?

PrecisionTree is not suitable for fault trees.  PrecisionTree will not do reverse calculation and does not have AND/OR gates.

Our custom developers may be able to create a fault tree application for you, using @RISK for uncertainty assessment.  Please contact your Palisade sales manager if this is of interest to you.

Last edited: 2013-07-22

16.6. Utility Functions

Applies to: PrecisionTree 5.x and newer

What is a utility function? Why would I use one?

Most decision trees are about possible gains and losses in money units like dollars. They assume that every dollar is as good as every other dollar, that your happiness from gaining $100 and your unhappiness from losing $100 would be equal. When the amounts involved are small relative to your wealth (or your company's wealth), that's not a bad assumption.

But suppose your net worth is $200,000, including your home equity. Losing $200,000 would wipe you out and put you on the street, whereas gaining $200,000 would make you somewhat more comfortable. The negative utility of losing $200,000 is much greater than the positive utility of gaining $200,000. In this example, it makes sense to be risk averse. You might pass up a decision with potential large gains and losses in favor of a lower-risk decision with smaller potential gains but also smaller potential losses.

A utility function translates branch values, such as money amounts, into utility. In effect, you customize the values to you of potential gains and losses, and PrecisionTree then takes those into account. You set this up on the Utility Function tab of Model Settings. PrecisionTree lets you choose whether to display Expected Value (the original branch values, and the resulting rollup values), Expected Utility (the computed utility function), or Certainty Equivalent (see "Oil Drilling 6 - Model with Utility Function.xlsx" in Help » Example Spreadsheets). No matter what is displayed, once you select Use Utility Function, PrecisionTree will do all its computations to maximize your utility rather than your return based on original branch values.

At this point, you may want to open the two attached examples in PrecisionTree. (Watch for Excel's macro security prompt, and enable macros in both workbooks.) Then, in Excel, click View » Arrange All » Vertical to display them side by side. The two files have the same three trees, just displayed differently. The Expected Value one shows original branch values, and the Expected Utility one shows the computed utility function.

Tree #1 doesn't use a utility function, so it's identical between the two files. The decision node selects the top branch because its expected dollar value, $10, is greater than the $0 expected value of the other branch of the decision node.

Tree #2 is the same as Tree #1, except that it uses the predefined Exponential utility function; you can change the R value to see the effect of differing risk tolerance. With R = 100, just as one possible value, the optimum path through the tree is different from what it was without the utility function. Although the expected dollar value of the upper branch is greater, it's also more risky, so PrecisionTree chooses the lower branch. This is not at all obvious from expected values, but if you look at Expected Utility you can see that the expected utility of the lower branch is greater (less negative) than the upper branch.

Tree #3 is also the same as Tree #1, except that it uses a custom utility function. Just to show how custom utility functions are created, it uses the simple function U = sqrt(R+$). You can experiment with different R values, but they must all be at least 100, since the most negative value of any branch is –$100.
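
To see the arithmetic behind a utility function like Tree #3's, here is a minimal Python sketch; the branch values and probabilities are hypothetical, not the ones in the attached workbooks:

    import math

    R = 100.0                              # risk-tolerance parameter

    def utility(payoff):
        # Custom utility U = sqrt(R + $); requires R + payoff >= 0.
        return math.sqrt(R + payoff)

    # Hypothetical risky branch: 50/50 chance of +$100 or -$100 (expected value $0).
    risky = [(0.5, 100.0), (0.5, -100.0)]
    # Hypothetical safe branch: a certain $0 (expected value $0, too).
    safe = [(1.0, 0.0)]

    def expected(branch, f=lambda v: v):
        return sum(p * f(v) for p, v in branch)

    print(expected(risky, utility))   # 0.5*sqrt(200) + 0.5*sqrt(0) ~= 7.07
    print(expected(safe, utility))    # sqrt(100) = 10: higher expected utility,
                                      # so a risk-averse chooser takes the safe branch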

Last edited: 2015-11-24

16.7. Calculation Algorithm for Decision Trees

Applies to: PrecisionTree 5.x–7.x

How does PrecisionTree calculate the values and probabilities it shows next to each node? Why are some nodes marked TRUE and others FALSE?

Short answer:

  • The value of an end node is the sum of the values of each branch in the path from the root. The percentage (likelihood) of an end node is the product of the probabilities of each branch in the path from the root, counting a TRUE decision branch as 1 and a FALSE decision branch as 0.
  • Working from right to left, the value of a chance node is the probability-weighted average of the values of the nodes that that chance node branches to, and the value of a decision node is the value of the node on the TRUE branch of that decision node. (A minimal sketch of this rollback appears below.)
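
Purely to illustrate that right-to-left rollback (this is not PrecisionTree's internal code), here is a minimal Python sketch; the tree, its encoding, and its values are hypothetical:

    # A node is ("end", value), ("chance", [(prob, branch_value, child), ...]),
    # or ("decision", [(branch_value, child), ...]).
    def rollup(node):
        kind, data = node
        if kind == "end":
            return data
        if kind == "chance":
            # Weighted average of branch value plus rolled-up child value.
            return sum(p * (v + rollup(child)) for p, v, child in data)
        # Decision node: take the best branch; PrecisionTree marks it TRUE.
        return max(v + rollup(child) for v, child in data)

    tree = ("decision", [
        (0, ("chance", [(0.5, 100, ("end", 0)), (0.5, -80, ("end", 0))])),  # EV = 10
        (0, ("end", 0)),                                                    # EV = 0
    ])
    print(rollup(tree))   # 10 -> the chance branch would be marked TRUE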

There's a much more thorough explanation in the PrecisionTree user manual. Click Help » Documentation » Manual and look at Appendix A.

For convenience, those two pages are attached to this article. The information is valid for PrecisionTree up through 7.6. If you have a later release of PrecisionTree, please look at the manual that is installed with your software.

Last edited: 2018-11-30

16.8. Copying a Decision Tree

Applies to: PrecisionTree 5.x–7.x

How can I copy a decision tree to a new worksheet, or to an existing worksheet?

To duplicate an existing worksheet that contains a tree, right-click the tab that holds the sheet name and select Move or Copy. In the Move or Copy dialog, put a check mark (tick mark) in the box labeled Create a copy.

To duplicate a tree onto an existing worksheet:

  1. Create a new tree at the spot where you want to create the copy.
  2. Right-click the root node of the existing tree and select Copy Subtree.
  3. Right-click the terminal node of the new tree and select Paste Subtree. Select Yes in the confirmation prompt that pops up.

Last edited: 2015-06-16

16.9. Insert a Node in a Decision Tree

Applies to: PrecisionTree 5.x–7.x

How can I add a node within a branch when using PrecisionTree—in other words, how do I create another decision level in the middle of a tree?

In PrecisionTree 6.x and newer, right-click on any node and select Insert Node. A new node will be inserted to the left of the selected node.

In PrecisionTree 5.x, follow this procedure:

  1. Copy the portion of the tree which will be to the right of the new node by right-clicking on the node and selecting Copy SubTree.
  2. Create a new tree to save the contents of the copied sub-tree by clicking on the Create New Decision Tree button and selecting a blank cell. When the tree options dialog appears, click OK.
  3. Right-click on the end node of the tree you just created and select Paste SubTree.
  4. Delete the original subtree by right-clicking on the node and selecting Delete SubTree.
  5. Create your new node where the subtree was just deleted.
  6. Right-click on the end-node of the new node and select Paste SubTree.
  7. Delete the decision tree that you created to save the subtree by right-clicking on the tree name and selecting Model » Delete.

Last edited: 2015-09-04

16.10. Merging Workbooks that Contain Decision Trees

Applies to: PrecisionTree 6.x/7.x

I have one or two trees in each of several workbooks, and I'd like to consolidate them into one workbook for convenience.  How can this be done?

Yes, assuming the trees are all valid and recognized by PrecisionTree.  First open PrecisionTree, then open the workbooks in question.  Use Excel's "Move or copy sheet" command on the sheets that contain the trees.

Last edited: 2015-09-04

16.11. Decision Tree on a PowerPoint Slide

Applies to: PrecisionTree 1.x, 5.x–7.x

How can I transfer my Decision Tree into PowerPoint for a presentation?

If you have PrecisionTree 6.0 or newer, right-click on a node and select Copy Image to Clipboard. Then paste the image into Word, PowerPoint, etc.

With PrecisionTree 7.0 or newer, if you also have BigPicture, you can click View in BigPicture for an alternative format. See Sharing a Tree with Colleagues Who Don't Have PrecisionTree for more about using BigPicture to display a decision tree.

If you have PrecisionTree 5.7 or an older release, follow these steps:

  1. Open PrecisionTree, and open the file that contains your tree.
  2. Adjust the view in Excel so that you can see all or most of your tree.
  3. Open PowerPoint.
  4. From the menu in PowerPoint, choose Insert » Object.
  5. Click the Create from File option button.
  6. Click the Browse button and browse to the file that contains your PrecisionTree model.
  7. Click OK.
  8. Adjust the magnification of the picture of your tree using the Zoom box in the PowerPoint menu.
  9. Make other PowerPoint adjustments as needed.

Last edited: 2015-09-04

16.12. PrecisionTree Creates an External Link

Applies to: PrecisionTree 7.x

When I try to move an entire sheet that contains a tree to a different workbook, an external link to the first workbook is created in the formulas.

This happens when you copy the tree's cells from one workbook to another. To avoid it, right-click the sheet tab, select Move or Copy, and copy the sheet into the second workbook as a new sheet.

Last edited: 2019-03-25

17. StatTools

17.1. Data Limits and Number of Variables in StatTools

Applies to: StatTools 6.x/7.x/8.x

How much data can StatTools process? How many variables? How many cases?

With StatTools Professional, there is a limit of 10,000 rows of data. With StatTools Industrial, there is no fixed limit, only the limit imposed by available memory. (Excel 2003 itself limits a workbook to 255 worksheets of 65,536 rows, about 16.7 million cases in all; later Excels have no such limit.)

The maximum number of variables for every StatTools analysis can be found in the help file or the user manual. In StatTools, click Help » Documentation » Help and select the Index tab. The StatTools 6 topic is Command Listing, and the StatTools 7 topic is Table of StatTools Procedures.

Additional keywords: Number of variables, independent variables, limit on variables

Last edited: 2015-09-04

17.2. Managing Reports in StatTools

Applies to: StatTools 5.x and newer

It seems like getting basic information about my data out of StatTools takes a lot of reports, and they're all in separate workbooks. Isn't there some way to consolidate them?

Yes, there is. The default with StatTools is to put each report in a new workbook, but you can easily change that. Click Utilities » Application Settings. In the Reports section, change Placement to After Last Used Column in Active Sheet or to Query for Starting Cell. (You can also get to Application Settings by clicking the icon at the bottom of most StatTools dialogs.)

Beginning with StatTools 7.0, there's a new alternative way to get statistics and summary graphs for multiple variables, all in one window — and this includes overlay graphs, box-whisker plots, scatter plots, and correlation coefficients. Click the new Data Viewer icon, and use the row of icons at the bottom of the dialog to select which information you want to see.

Last edited: 2016-04-20

17.3. Sharing StatTools Results with Colleagues Who Don't Have StatTools

Applies to:  StatTools 5.x–7.x

I ran a StatTools analysis, saved the workbook, and sent it to a colleague. She opened it in Excel, and instead of the numbers she just saw #NAME? errors. How can I share my results?

When you run an analysis, it can be static or live. A static number is not linked to the original data, so it doesn't change if you change the original numbers. A live number is linked, and therefore the result changes as you change your data. (If you have set Excel calculation to manual, you'll have to press F9 to trigger a recalculation.) All of this applies to graphs as well as numerical results. If the analysis creates a new sheet — which happens if Reports: Placement in Application Settings is set to New Workbook or Active Workbook — the header will show whether updating is live or static.

The default in StatTools is to create live results, which are linked to the original data. If you're planning to send results to a colleague who doesn't have StatTools, go into Utilities » Application Settings, and in the Reports section change Updating Preference to Static. This has no effect on any analysis done previously, so you need to change the setting before generating the report.

Setting Updating Preference to Static is also useful if you want to keep a historical record even though the data may change later.

Last edited: 2015-09-04

17.4. Number of Independent Variables in Regression

Applies to: StatTools 1.x, 5.x–7.x

How many independent variables can I have in a regression in StatTools?

The limit varies by StatTools version:

  • 1000 in StatTools 7 (250 for logistic regression and discriminant analysis).
  • 250 in StatTools 5 and 6.
  • 25 in StatTools 1.

See also: Data Limits and Number of Variables in StatTools for limits that apply to all StatTools analyses.

Last edited: 2015-09-04

17.5. Forcing Regression through the Origin

Applies to: StatTools 1.x, 5.x–7.x

What is the meaning of the box "Set constant to zero (origin)" in the StatTools regression dialog? How should I decide whether to use it?

A regression with m independent variables actually computes m+1 values: a coefficient for each variable and a constant term. In a cost model, for example, the coefficients would correspond to variable costs and the constant term to fixed cost. StatTools finds the combination of those m+1 values that best fits the data.

If you check the box "set constant to zero", you are forcing the regression line to pass through the origin. In other words, you are saying that you want the fixed cost to be zero no matter what, and the coefficients for the independent variables should be fitted within that constraint. This is a controversial procedure. By definition, the residual sum of squares will be at least as large as it would be if you let StatTools fit the constant term in addition to the coefficients. And when the regression is forced through the origin, R² is computed differently and can even be negative.

If you have sound theoretical reasons for rejecting the constant term, you might want to run the regression both ways and compare results, not just R² but the plots. If you don't have a strong reason to reject the constant term, you probably want to leave "set constant to zero" unchecked.
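
If you want to run that comparison outside StatTools, here is a minimal numpy sketch on hypothetical data (the true constant term here is 5, so forcing the origin is the wrong model):

    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.uniform(0, 10, 50)
    y = 5.0 + 2.0 * x + rng.normal(0, 1, 50)     # true constant term is 5

    # Ordinary fit: coefficient plus constant term.
    X = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)

    # Forced through the origin: coefficient only.
    (slope0,), *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

    # The constrained fit's residual sum of squares is at least as large.
    rss = np.sum((y - X @ [slope, intercept]) ** 2)
    rss0 = np.sum((y - slope0 * x) ** 2)
    print(slope, intercept, rss, rss0)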

For more on this topic, try a Web search for regression force origin (without quotes).

Last edited: 2015-09-04

17.6. Methodology of Logistic Regression

Applies to: StatTools 7.x

The user manual says, "The StatTools logistic regression procedure relies on optimization to find the regression equation. This optimization uses a complex nonlinear algorithm." Can you tell me anything more about the methodology?

The detailed code is proprietary, but we can tell you that it's based on Maximum Likelihood Estimators. StatTools uses an optimization process based on the conjugate gradient method to solve a system of nonlinear equations and find these estimators.

Last edited: 2017-03-14

17.7. Multicollinearity

Applies to:  StatTools; @RISK for Excel; @RISK For Project; @RISK Developer Kit

What is multicollinearity, and how can I use StatTools to test for it?

Short version: In StatTools 7.0 and newer, on the Options tab of the Regression dialog, tick the box for "Check Multicollinearity" and the box to show the correlation matrix. StatTools will calculate a Variance Inflation Factor (VIF) for each independent variable. Large VIFs indicate multicollinearity. Look in the correlation matrix to see which pairs of candidate variables are highly correlated. When a pair of variables have large VIFs and are highly correlated, you may want to exclude one of the pair from the regression.

What is VIF, and what do we mean by "large VIF"? Wikipedia says, "The square root of the VIF indicates how much larger the standard error is, compared with what it would be if that variable were uncorrelated with the other predictor variables in the model.  Example: If the VIF of a predictor variable were 5.27 (√5.27 = 2.3) this means that the standard error for the coefficient of that predictor variable is 2.3 times as large as it would be if that predictor variable were uncorrelated with the other predictor variables."  In Detecting Multicollinearity Using Variance Inflation Factors, Penn State says, "The general rule of thumb is that VIFs exceeding 4 warrant further investigation, while VIFs exceeding 10 are signs of serious multicollinearity requiring correction."
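
The VIF for a predictor is 1/(1 − R²j), where R²j comes from regressing that predictor on all the other predictors. A minimal numpy sketch of that computation, on hypothetical, nearly collinear data:

    import numpy as np

    def vif(X):
        # Variance Inflation Factor for each column of data matrix X.
        n, k = X.shape
        out = []
        for j in range(k):
            others = np.column_stack([np.delete(X, j, axis=1), np.ones(n)])
            coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
            resid = X[:, j] - others @ coef
            r2 = 1.0 - resid.var() / X[:, j].var()
            out.append(1.0 / (1.0 - r2))
        return out

    rng = np.random.default_rng(0)
    right = rng.normal(11, 1, 100)
    left = right + rng.normal(0, 0.05, 100)      # nearly collinear with right
    print(vif(np.column_stack([right, left])))   # both VIFs come out very large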

Long version:

Each coefficient in a regression equation indicates the effect of one independent variable (explanatory variable) on the dependent variable (response variable), provided that the other independent variables in the equation remain constant. You could say that the coefficient represents the effect of this independent variable on the dependent variable in addition to the effects of the other variables in the equation. Therefore, the relationship between an independent variable Xj and the dependent variable Y depends on which other X's are included or not included in the equation.

This is especially true when there is a linear relationship between two or more independent variables, in which case we have multicollinearity. Multicollinearity is defined as "the presence of a fairly strong linear relationship between two or more explanatory variables", and it can make estimation difficult.

Example: This example and text have been adapted for this article from Managerial Statistics by Albright, Winston, Zappe, published by Duxbury Thomson Learning. Contact Palisade Corporation for ordering information, if you like this explanation of multicollinearity.

Consider the attached file. It is a very simple example, but it serves to demonstrate the warning signs of multicollinearity and how to recognize and deal with it. (You need to open the file in StatTools to see all features.)

We want to explain a person's height by means of foot length. The response variable is Height, and the explanatory variables are Right and Left, the lengths of the right foot and the left foot. The question is, "What can occur when we regress Height on both Right and Left?"

To show what can happen numerically, we generated a hypothetical data set of heights and left and right foot lengths. We will use StatTools for the regression analysis, though @RISK can also do regression on input distributions (independent) and outputs (dependent).

On first inspection of this problem, common sense dictates that there is no need to include both Right and Left in an equation for Height. One or the other would be sufficient. In this example, however, we include them to make a point about the dangers of multicollinearity.

After creating a correlation matrix in StatTools with Summary Statistics » Correlation and Covariance, we notice a large correlation between height and foot size. Therefore, we would expect this regression equation to do a good job.  And our intuition is correct; the R² value is 0.817. This R² value is relatively large and would probably cause us to believe the relationship is very strong.

But what about the coefficients of Right and Left? Here is where the problem begins. The coefficient of Right indicates the right foot's effect on Height in addition to the effect of the left foot. That is, after the effect of Left on Height has already been taken into account, the extra information provided by Right is probably minimal. This can go both ways regarding Left and Right.

We created the data set so that except for random error, height is approximately 32 plus 3.2 times foot length (all expressed in inches). As shown in our correlation matrix using StatTools in Height.xls, the correlation between Height and either Right or Left in our data set is quite large, and the correlation between Right and Left is very close to 1.

The regression output when both Right and Left are entered in the equation for Height appears in Heights.xls. This tells a somewhat confusing story. The multiple R and the corresponding R² are about what we would expect, given the correlations between Height and either Right or Left in Height.xls. In particular, the multiple R is close to the correlation between Height and either Right or Left. Also, the Standard Error value is quite good. It implies that predictions of height from this regression equation will typically be off by only about 2 inches.

However, the coefficients of Right and Left are not at all what we might expect, given that we generated heights as approximately 32 plus 3.2 times foot length. In fact, the coefficient of Left is the wrong sign—it is negative! Besides this "wrong" sign, the tip-off that there is a problem is that the t-value of Left is quite small and the corresponding p-value is quite large. We might conclude that Height and Left are either not related or are related negatively. But we know from Height.xls that both of these conclusions are false.

In contrast, the coefficient of Right has the "correct" sign, and its t-value and associated p-value do imply statistical significance, at least at the 5% level. However, this happened mostly by chance. Slight changes in the data could change the results completely—the coefficient of Right could become negative and insignificant, or both coefficients could become insignificant. The problem is that although both Right and Left are clearly related to Height, it is impossible for the least squares method to distinguish their separate effects. Note that the regression equation does estimate the combined effect fairly well—the sum of the coefficients of Right and Left is 6.823 + (-3.645) = 3.178. This is close to the coefficient 3.2 that we used to generate the data. Also, the estimated intercept 31.760 is close to the intercept 32 we used to generate the data. Therefore, the estimated equation will work well for predicting heights. It just does not have reliable estimates of the individual coefficients of Right and Left.

When Right is the only variable in the equation as seen in Heights.xls, it becomes

Predicted Height = 31.546 + 3.195*Right

R² is 81.6%, Standard Error is 2.005, and the t-value and p-value for the coefficient of Right are now 21.34 and 0.0000—very significant. Similarly, when Left is the only variable in the equation, it becomes

Predicted Height = 31.526 + 3.197*Left

R² is 81.1% and Standard Error is 2.033; the t-value and p-value for the coefficient of Left are 20.99 and 0.0000—again, very significant. Clearly, these two equations tell almost identical stories, and they are much easier to interpret than the equation with both Right and Left included.

This example illustrates an extreme form of multicollinearity, where two explanatory variables are very highly correlated. In general, there are various degrees of multicollinearity. In each of them, there is a linear relationship between two or more explanatory variables, and this relationship makes it difficult to estimate the individual effect of the X's on the response variable.

Some common symptoms of multicollinearity can be:

  • wrong signs of the coefficients
  • smaller-than-expected t-values
  • larger-than-expected (insignificant) p-values.

In other words, variables that are really related to the response variable can look like they aren't related, based on their p-values. The reason is that their effects on Y are already explained by other X's in the equation.

Sometimes multicollinearity is easy to spot and treat. For example, it would be silly to include both Right and Left foot length in the equation for Height as seen in our example. They are obviously very highly correlated and only one is needed in the equation for Height. The solution then is to exclude one of them and re-estimate the equation.

However, multicollinearity is not usually this easy to treat or even diagnose. Suppose, for example, that we want to use regression to explain variations in salary. Three potentially useful explanatory variables are age, years of experience in the company, and years of experience in the industry. It is very likely that each of these is positively related to salary, and it is also very likely that they are very closely related to each other. However, it isn't clear which, if any, we should exclude from the regression equation. If we include all three, we are likely to find that at least one of them is insignificant (high p-value), in which case we might consider excluding it from the equation. If we do so, the R-squared and Standard Error values will probably not change very much—the equation will provide equally good predicted values—but the coefficients of the variables that remain in the equation could change considerably.
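
You can reproduce the instability described above with simulated data. Here is a minimal numpy sketch using the same generating rule (height ≈ 32 + 3.2 × foot length); the noise levels are assumptions, not the ones used for the attached file:

    import numpy as np

    rng = np.random.default_rng(1)
    foot = rng.normal(11, 1, 100)
    right = foot + rng.normal(0, 0.1, 100)            # two noisy measurements
    left = foot + rng.normal(0, 0.1, 100)             # of the same foot length
    height = 32 + 3.2 * foot + rng.normal(0, 2, 100)

    X = np.column_stack([right, left, np.ones(100)])
    coef, *_ = np.linalg.lstsq(X, height, rcond=None)
    # The individual coefficients are unstable (one may even come out negative),
    # but their sum stays close to 3.2 and the intercept close to 32.
    print(coef, coef[0] + coef[1])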

Last edited: 2017-05-12

17.8. Autocorrelation in StatTools Time Series

Applies to: StatTools 6.x/7.x

How does StatTools compute autocorrelation? I tried creating a lagged copy of the time series myself and computing correlation with Excel's CORREL( ) function, but that gave different answers.

Autocorrelation is computed differently, as you discovered. You can find the correlation formula on this page from the US National Institute of Standards and Technology (NIST), and the autocorrelation formula on this page from NIST's Engineering Statistics Handbook.
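
For reference, the handbook's sample autocorrelation uses the full-series mean and variance, whereas CORREL( ) on a lagged copy recentres each series around its own mean over the overlap, which is why the numbers differ. A minimal Python sketch of the handbook's formula:

    import numpy as np

    def autocorr(x, k):
        # Lag-k sample autocorrelation: pair each value with the value k steps
        # later, deviations taken from the full-series mean; divide by the
        # full sum of squared deviations.
        x = np.asarray(x, dtype=float)
        d = x - x.mean()
        return np.sum(d[:-k] * d[k:]) / np.sum(d * d)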

Last edited: 2018-11-27

17.9. Cross Correlation with Lag for Two Variables

Applies to: StatTools 5.x–7.x

I need to run cross-correlation with lag for two variables. In StatTools, I was able to generate lags and then use the correlation procedure. However, the lags are positive only, not negative. To run the correlation for negative lags, I have to generate lags for the other variable and then correlate, which makes it a somewhat cumbersome two-step process. Is there a better way?

It's true that negative lags are not available in the Lag procedure of StatTools. The only way to accomplish the task in StatTools, if you limit yourself to menu commands, involves running the Lag procedure twice, separately for each variable (X and Y), as follows:

  1. Use the StatTools Lag procedure to create shifted versions of Y (Y1, Y2, Y3, ...).
  2. Use the StatTools Correlation and Covariance procedure to correlate X with Y, Y1, Y2, Y3, .... That gives you positive lags, which is only half of the task.
  3. To get negative lags, create lagged versions of X (X1, X2, X3, ...) and run Correlation and Covariance between Y and all the X's. 

Instead of doing that, you might prefer to have more functionality available in the Lag procedure to begin with. You would like to get both positive and negative lags in the Lag procedure, say -Y3, -Y2, -Y1, Y1, Y2, Y3.  Then you could get correlations between X and this list, in one operation.

A good option, if you will have to do this more than once, is to use the automation interface of StatTools, as described in Help » Developer Kit » Help.  Using that interface, you can write VBA code that will perform the cross-correlation with lags in one step.  Your code would automate the numbered steps above, creating lagged variables with positive and negative lags and then running the Correlation and Covariance procedure.  An item can be added to the ribbon to launch the code.  If you have programming expertise, you can set this up yourself; or our Custom Development department can assist you. For Custom Development, please contact your Palisade sales office.
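
Independent of StatTools' automation interface, the underlying computation is straightforward. Here is a minimal Python sketch of cross-correlation at both positive and negative lags, using ordinary Pearson correlation on the overlapping portions (the data are hypothetical):

    import numpy as np

    def lagged_corr(x, y, lag):
        # Correlate x(t) with y(t + lag); negative lags shift the other way.
        x, y = np.asarray(x, float), np.asarray(y, float)
        if lag > 0:
            x, y = x[:-lag], y[lag:]
        elif lag < 0:
            x, y = x[-lag:], y[:lag]
        return np.corrcoef(x, y)[0, 1]

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = np.roll(x, 3) + rng.normal(0, 0.5, 200)      # y trails x by 3 steps
    for lag in range(-5, 6):
        print(lag, round(lagged_corr(x, y, lag), 3))  # peaks near lag = 3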

Last edited: 2016-03-11

17.10. Cluster Analysis Methodology

Applies to:
StatTools 7.x

What methodology is used by StatTools cluster analysis?

StatTools provides Hierarchical Agglomerative Clustering (HAC).

This procedure starts with each object representing an individual cluster; these clusters are then sequentially merged according to their similarity. Similarity is measured using an appropriate metric (a measure of distance between pairs of observations) and a linkage criterion, which specifies the similarity of clusters as a function of the pairwise distances of observations in the clusters. The similarity sij between two clusters is given by

sij = 100 · (1 − dij/dmax)

where dmax is the maximum value in the original distance matrix D.
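
A minimal Python sketch of that conversion, with a hypothetical distance matrix:

    import numpy as np

    # Hypothetical pairwise distance matrix D.
    D = np.array([[0.0, 2.0, 6.0],
                  [2.0, 0.0, 5.0],
                  [6.0, 5.0, 0.0]])

    S = 100.0 * (1.0 - D / D.max())
    print(S)   # identical objects score 100; the most distant pair scores 0

If you want to experiment outside StatTools, scipy's scipy.cluster.hierarchy.linkage function implements several of the same linkage methods (single, complete, average, centroid, median, ward).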

StatTools offers these linkage methods and metrics:

  • Linkage methods (labeled Agglomerative Method in the dialog): Single (Nearest Neighbor), Complete (Farthest Neighbor), Average, Centroid, Median, Ward.
    See the StatTools help topic "Cluster Analysis Dialog—Clustering Settings Tab" for definitions of these.
  • Metrics (labeled Distance Measure):
    • For observations: Euclidean, Squared Euclidean, Mahalanobis, Manhattan.
    • For variables: Correlation, Absolute Correlation.

    For details on the distance measures, please see the attached Word document.

The choice of metric or linkage method will influence the final number of clusters, so you may need to spend some time looking at your data set and choosing an appropriate metric and linkage. If in doubt, try different approaches and compare the results.

Last edited: 2018-05-08

17.11. "Equal Variances" and "Unequal Variances" in Two-Sample Inferences

Applies to:
StatTools 6.x/7.x

In StatTools, I'm selecting a confidence interval or hypothesis test about the difference in means of two independent samples. StatTools gives two columns of results, headed "Equal Variances" and "Unequal Variances". What do those mean?

Here's the short answer: just use the Unequal Variances column. Unless you want more details, you can stop reading now.

More details:

The sampling distribution of the difference of sample means follows a Student's t distribution. As you know, there are an infinite number of t distributions, each one determined by its degrees of freedom. For one-sample inferences, df = n − 1. For two-sample inferences, the general (Welch–Satterthwaite) formula for degrees of freedom is

df = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1−1) + (s2²/n2)²/(n2−1) ]

where s1² and s2² are the sample variances and n1 and n2 are the sample sizes.

However, if you know that the population variances are equal, you can use df = n1 + n2 − 2. (Note: population variances, not sample variances.) That is usually (not always) a bit higher than the degrees of freedom computed by the general formula. Higher degrees of freedom translate to a lower critical t and a lower p-value. In turn, that means your confidence interval is usually a bit narrower and you are more likely to be able to reject the null hypothesis.

Some books and calculators use the term "pooling"—if the variances are equal then you can "pool the data sets", treating them as coming from one population.

But how can you know if the population variances are equal? Well, there's the rub: in the vast majority of cases you can't know. You can perform an F test, but even if you get a large p-value in the F test you have only failed to reject the hypothesis that the population variances are equal; you haven't proved it. Also, an F test requires that both populations be normally distributed, not just approximately normal as with a t test, and you virtually never know for sure that the populations are normal. For these reasons, the whole idea of pooling is controversial, and some textbooks don't even mention it as a possibility.

Finally, even after you go through all that, pooling or not ("Equal Variances" column or "Unequal Variances" column in StatTools results) usually makes only a minor difference. The conservative choice is to use the "Unequal Variances" column, meaning that the data sets are not pooled. This doesn't require you to make assumptions that you can't really be sure of, and it almost never makes much of a change in your results.
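
If you'd like to see how much difference the pooling choice makes on your own data, scipy exposes it as a flag. A minimal sketch on hypothetical samples:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(10.0, 2.0, 30)
    b = rng.normal(11.0, 4.0, 25)

    t_pooled, p_pooled = stats.ttest_ind(a, b, equal_var=True)   # "Equal Variances"
    t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)    # "Unequal Variances"
    print(p_pooled, p_welch)   # usually close; Welch is the conservative default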

Last edited: 2018-10-22

17.12. "Learning Statistics with StatTools" Book

Applies to:
StatTools 5.x and 6.x

Question:
Is Albright's Learning Statistics with StatTools included with StatTools and the DecisionTools Suite?

StatTools 6.x:
The PDF is not included with the software, but can be purchased from our Web page.

StatTools 5.5.1, 5.7, and 5.7.1:
The StatTools Help menu contains a link to the PDF.

StatTools 5.0 and 5.5.0:
The book is installed with StatTools and The DecisionTools Suite but is not linked from any of the menus. To access the PDF, please follow these steps:

  1. Open My Computer and navigate to your Palisade installation folder. (By default that is C:\Program Files\Palisade or C:\Program Files (x86)\Palisade, but you may have installed the software to another location.)
  2. Within the Palisade folder, double-click StatTools5, then Examples, then English, then Albright.

last edited: 2012-11-06

17.13. StatTools Pareto Chart Limits

Applies to: StatTools 6.x/7.x

There is a limit on the number of categories (bins) in a StatTools Pareto chart: at most 100 categories can be shown.

If you exceed 100 categories, then you will receive an error message with the phrase "Subscript out of range".

Additional keywords: Subscript out of range, Pareto charts

Last edited: 2020-07-16

17.14. Box and Whisker Outliers

Applies to: StatTools 6.x/7.x/8.x

The whiskers and the outliers are determined by the interquartile range (IQR), which is the 3rd quartile minus the 1st quartile.

Whiskers extend to the farthest observations that are no more than 1.5 IQR from the edges of the box. Mild outliers are observations between 1.5 IQR and 3 IQR from the edges of the box. Extreme outliers are more than 3 IQR from the edges of the box.
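
A minimal Python sketch of those cutoffs on hypothetical data (numpy's default percentile interpolation may differ slightly from StatTools' quartile method):

    import numpy as np

    data = np.array([3, 5, 7, 8, 9, 10, 11, 12, 20, 55], dtype=float)

    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1

    inner_low, inner_high = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # whisker limits
    outer_low, outer_high = q1 - 3.0 * iqr, q3 + 3.0 * iqr

    mild = data[((data < inner_low) & (data >= outer_low)) |
                ((data > inner_high) & (data <= outer_high))]
    extreme = data[(data < outer_low) | (data > outer_high)]
    print(mild, extreme)   # here: mild = [20], extreme = [55]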

Last edited: 2020-12-03

18. TopRank

18.1. INDIRECT and OFFSET with TopRank

Applies to: TopRank 6.x/7.x

When I let TopRank discover inputs, can it interpret Excel's INDIRECT( ) and OFFSET( ) functions?

Yes, TopRank interprets those functions when you add the output or begin analysis, depending on the checkboxes in Analysis Settings » Find Inputs, and can go through those functions to find inputs. 

The functions are evaluated once only.  If their values change later — for instance, if an INDIRECT( ) points to different cells during the analysis — TopRank will not update its list of inputs dynamically.

Last edited: 2015-09-11

19. Developer Kits—BDK, EDK, RDK, RODK

19.1. Support Policy for RDK, BDK, EDK, and RODK

Applies to:
@RISK Developer Kit (RDK)
BestFit Developer Kit (BDK)
Evolver Developer Kit (EDK)
RISKOptimizer Developer Kit (RODK)

Overview

The @RISK Developer Kit (RDK) and related technologies (BDK, RODK, and EDK) have been discontinued by Palisade. (For brevity, "RDK" will be used to refer to all the developer kit products: RDK, BDK, RODK, and EDK.)

In 2015, Palisade released the Palisade Custom Runtime (PCR), which replaces the RDK. PCR licenses are not sold "off the shelf", as the RDK licenses were, but instead are delivered as part of a customized development agreement with Palisade. Please visit our Custom Development page, or contact your Palisade sales manager or sales@palisade.com, for more information concerning the new PCR technology, including questions about upgrading your older RDK programs to the newer platform.

This document describes the policies governing the support of the discontinued RDK technologies.

Technical Support

Support for Current Maintenance Holders: If you have current maintenance for your RDK product, Palisade will continue to support you until the maintenance period has expired. However, please note: There is no technical support at all for the RDK running on Windows 10, Windows Server 2016, or later. You will be unable to renew your maintenance once it expires.

Support for License Reauthorizations: Palisade will continue to allow all RDK licenses to be reauthorized (including on new machines) regardless of maintenance status, using the automated authorization system. Problems with authorization will be handled at a very basic level by Technical Support. Use of the RDK on the Windows 10, Windows Server 2016, or later operating systems is not recommended, and is not supported by Palisade Technical Support. Codes for such systems may be generated by the automated system, but any authorization problems encountered will not be handled.

Sales

Since the RDK has been discontinued, Palisade will no longer sell new RDK licenses, sell additional licenses for existing deployments, or collect maintenance fees for existing licenses.

See also: Automating Palisade Software.

Last edited: 2017-08-11

19.2. Developer Kit License Types

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
BestFit Developer Kit 4.1 (BDK)
Evolver Developer Kit 4.1 (EDK)
@RISK Developer Kit 4.1 (RDK)
RISKOptimizer Developer Kit 4.1 (RODK)

What are the license types for the RDK, BDK, EDK, and RODK?

The RDK includes the BDK, and the RODK includes BDK, EDK, and RDK.

A DK license is specified by 3 pieces of information:

  • Edition: End-User, Server, Developer, or Demo
  • Network vs. Non-Network license type (End-User Edition and Server Edition)
  • Maximum number of simultaneous users (for Server Edition and Network license types)

The Developer Edition lets you run simulations/optimizations like the other editions, but it also contains examples and documentation needed to build an application with one of the DKs. It puts up daily nag screens to keep it from being used for deployment.

The Demo Edition lets you demo or test an application on a machine with no need to purchase a DK license for that machine. The Demo Edition will only function for 30 days on a given PC. The Demo Edition installer is available in the Redistribution Demo folder after you install the Developer Edition.

The End-User and Server Editions are purchased to deploy an application that uses one of the DKs. The End-User Edition is for desktop applications, while the Server Edition is for Server-Client applications such as Web applications. In Server-Client applications you install the DK on one machine, and all simulations/optimizations run on that machine, say a Web server. Multiple users issue requests to run those applications, say through their Web browsers. In the desktop application model, the End-User Edition DK is installed on the machine of every user who wants to run simulations/optimizations, and those applications run on that user's machine.

When there are a great many end-user desktop licenses at the same company, you might choose to make the application available via a Citrix server/Terminal Services.  For this, you set up everything on the server: the Network license type of Server Edition, with network license manager and client components. For more details, see RDK Deployment on Citrix/Terminal Services. (Another option is the Network type of End-User license, with the license manager installed on the server and the clients installed on end-user machines.  Between the two, the Citrix deployment is much more convenient to manage, and it's used much more frequently.)

Last edited: 2016-01-05

19.3. 64-bit Versions of Developer Kits?

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer Kit
BestFit Developer Kit
Evolver Developer Kit
RISKOptimizer Developer Kit

Do you have 64-bit versions of the RDK, BDK, EDK, or RODK?

We have no 64-bit versions of the developer kits at this time, but you can use the 32-bit versions in a 64-bit environment by following these rules.

The preferred method is to build a 32-bit application. (For example, in Visual Studio 2008 you want Build » Configuration Manager » Platform » x86; if x86 isn't in the list, select New and then x86. Alternatively, select Project » Project Properties » Compile » Advanced Compile Options » Target CPU » x86.) You don't want "Any CPU", because then 64-bit Windows will attempt to run your application as a 64-bit application, and an attempt to run the RDK from within a 64-bit process will fail. 64-bit Windows installs 32-bit applications in Program Files (x86) and runs them automatically through the "Windows on Windows" (WOW64) feature.

If you have a 64-bit application, you should be able to use any of the DKs by employing some advanced programming techniques. You can set up a separate 32-bit process to run RDK simulations, BDK fits, or EDK/RODK optimizations, and have the 64-bit application communicate with this server process. There are techniques for setting up such external server processes. We have seen that done with the COM/VB6 programming platform, and there they are referred to as "ActiveX exe" components. The same might possibly also be accomplished with the .NET platform using the ".NET Remoting" technology.

Another possibility: use version 5.7 or later of our Excel add-ins, and build your application in Visual Basic for Applications. The VBA interfaces for @RISK, Evolver, and RISKOptimizer are documented within the applications; just click Help » Developer Kit » Manual. (There is no longer a separate BestFit product, because its functionality is integrated within @RISK Professional and Industrial Editions. You can therefore automate the process of fitting distributions by using the VBA interface to @RISK.)

Last edited: 2012-09-02

19.4. Developer Kits, Visual Studio, and .NET

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
BestFit Developer Kit (BDK) 4.x
Evolver Developer Kit (EDK) 4.x
@RISK Developer Kit (RDK) 4.x
RISKOptimizer Developer Kit (RODK) 4.x

Question:
Which versions of Visual Studio and .NET can I use for developing with the DKs?

Response:
All developer kits are compatible with .NET Framework 1.0 and all later versions, as of March 2014. Visual Studio 2002 and all later versions, as of February 2012, will convert and run the .NET examples we ship.

Our DKs are written not in .NET but in C++, so .NET is not required. They have an object-oriented COM interface written in VB6 that can be used directly from COM (say, in VB6) without .NET being involved. The DKs make that COM interface available for .NET programming using COM-to-.NET Interop.

last edited: 2014-03-12

19.5. Developer's Kit Redistribution for .NET

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
Developer Kits (BDK, EDK, RDK, RODK), release 4.1

This information is adapted from the ".NET Deployment" section of the Developer's Kit (DK) Version 4.1.2 Redistribution Instructions, which is attached as DK Redistribution Instructions.rtf and also ships with the Developer Editions of the DKs.

All of Palisade's Developer Kits (DK) provide ActiveX interfaces.  For .NET developers, we provide a primary interoperability assembly (PIA) with each DK to allow you to use the ActiveX interface from the .NET environment.

We do not install anything to the Global Assembly Cache (GAC) by default; however, we provide the files that should go there. 

Option 1:

Using your own installer, you can place the relevant PIA and policy files into the .NET Global Assembly Cache on the target system. On the computer where you developed the application, the PIA file is in the Palisade\System folder — for example, the @RISK Developer's Kit (RDK) uses Palisade.RdkPia.dll — and the publisher policy files are in the folder "Redistribution Demo\GACDeployment\PublisherPolicyFiles\4.1.2".

Option 2:

As an alternative, we supply a small MSI file that will install the necessary components, assuming that Windows Installer and .NET are already on the target system. Since the current version of the .NET framework installs Windows Installer, this is a reasonable expectation.  The MSI file is in the "Redistribution Demo\GACDeployment" folder on your development computer.  For convenience, RDKPIA.msi for the @RISK Developer's Kit (RDK) and RODKPIA.msi for the RISKOptimizer Developer's Kit (RODK) are also attached to this article.

It's possible to install MSI packages via the command line.  For example, this command installs the RDK PIA silently:
msiexec /I "C:\TEMP\RDKPIA.msi" /q

For more information on installing MSI packages from the command line, see Command-Line Options (accessed 2014-03-12).

You can integrate either of these methods into your own installer.

See also:
To distribute your application in a Citrix, Terminal Services, or Remote Desktop configuration, please see RDK Deployment on Citrix/Terminal Services.

Additional keywords: End User Edition, Demo Edition, .NET deployment of Developer's Kits

last edited: 2014-03-12

19.6. RDK Deployment on Citrix/Terminal Services

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer's Kit (RDK), release 4.1

Question:
I have written an application using the RDK Developer Edition, and I want to deploy the application to end users on a Citrix server or via Terminal Services or Remote desktop Services.  How do I deploy the RDK files with my application?

Response:
You will need the network license type of the RDK Server Edition, which is a special license type compatible with Citrix.  This license type comes with a specified maximum number of concurrent users, or it can be generated for an unlimited number of users.  Your Palisade sales manager can help you obtain this license.

Please see the instructions in Installing Palisade Developer Kits on Windows 2003 Terminal Services (HowToInstallPalisadeDeveloperKitsOnWin2003TerminalServices.doc, attached). Although the instructions reference Windows 2003, they'll work for other versions of Windows.

If you already have another edition of the RDK on the target machine, uninstall it first.

Additional keywords: RDK Server Edition

last edited: 2014-03-12

19.7. Multiple CPUs with the RDK?

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer's Kit (RDK) 4.x

Question:
Can the RDK use multiple cores on a computer?

Response:
We're sorry, but the RDK has no direct support for multi-core simulation.  An alternative is @RISK 6, which does support multiple CPUs.  You can automate @RISK with Microsoft Office VBA.  Beginning with @RISK 6.2, an Automation Guide provides a quick start to automating @RISK.
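
As a rough illustration only, a VBA macro that runs an @RISK simulation might look like the sketch below; the object and property names are assumptions to be checked against the Automation Guide, which is the authority on the actual object model.

    ' Minimal sketch, assuming the @RISK automation library is referenced in VBA;
    ' the names below are illustrative, not guaranteed.
    Sub RunRiskSimulation()
        Risk.Simulation.Settings.NumIterations = 10000  ' assumed property name
        Risk.Simulation.Start                           ' assumed method name
    End Sub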

See also: CPUs used by @RISK

last edited: 2014-03-31

19.8. RiskMakeInput Equivalent in the RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer Kit 4.1

Question:
I have defined my distribution functions in the RDK, but I want to run sensitivities using an expression as an input. I know I could do this in @RISK with the RiskMakeInput( ) function, but I don't see anything like that in the @RISK Developer Kit.

Response:
It's not exactly straightforward, but it can be done. Please have a look at the attached example, which was created by the developer of the RDK.

The example defines Discrete and General distributions as inputs, and gets sensitivity results on an expression involving the two distributions.

last edited: 2012-09-02

19.9. RiskCompound Equivalent in the RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer's Kit 4.1

Question:
I'm writing an application where a given risk occurs multiple times, based on a probability distribution. Each occurrence has a severity that is also a probability distribution, so it's not just a matter of multiplying a frequency distribution by a severity distribution.  In Combining Probability and Impact (Frequency and Severity), I read about @RISK's RiskCompound function, which seems perfect for my application, but I can't find it in the @RISK Developer's Kit.  Does the RDK have any equivalent to RiskCompound?

Response:
Although RDK 4.1 does not include RiskCompound as a single function, you can build an equivalent out of other distribution functions.

Please see the attached example. This example is flexible in terms of the number of severity distributions. It also uses a special technique to include the RiskCompound equivalent in the sensitivity analysis.
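
For orientation, the pattern is: draw a frequency sample, then sum that many severity samples. A minimal VBA sketch follows; SampleFrequency and SampleSeverity are hypothetical wrappers around RDK 4.1 distribution-sampling calls, whose real names are in the RDK manual.

    ' Hedged sketch of a RiskCompound equivalent, not the attached example's code.
    Function CompoundSample() As Double
        Dim n As Long, i As Long, total As Double
        n = SampleFrequency()                  ' e.g. a Poisson draw: occurrences this iteration
        For i = 1 To n
            total = total + SampleSeverity()   ' severity draw for each occurrence
        Next i
        CompoundSample = total
    End Function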

last edited: 2013-09-12

19.10. Sensitivity Analysis in RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer Kit 4.1

Question:
How can I define the General and Discrete distributions using the RDK, and then get sensitivity analysis results?

Response:
The attached example shows how to do that with RDK and C#.

last edited: 2012-09-02

19.11. Silently Adjusting Correlation Matrix in RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer Kit 4.1

Question:
How can I repair an inconsistent correlation matrix without having prompts presented to the user?

Response:
Call the CheckConsistency method, passing True for the parameter value. If the values in the matrix are not consistent, this call will change them to make the matrix consistent without displaying any message boxes.
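
For example, in VBA (corrMatrix here is just an illustrative name for your RDK correlation-matrix object):

    corrMatrix.CheckConsistency True   ' True = adjust an inconsistent matrix silently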

See also: How @RISK Adjusts an Invalid Correlation Matrix

last edited: 2012-09-02

19.12. Combining the RDK with @RISK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies To:
@RISK for Excel
@RISK Developer Kit

Question:
Can the @RISK Developer Kit (RDK) be combined with interactive @RISK for Excel?

Response:
Yes. Some @RISK models require hundreds of thousands of input functions and can thus take a long time to simulate. In many of these models, there's no need to represent each risk variable as a function in the spreadsheet. Rather, the necessary model values can be generated more efficiently using Visual Basic for Applications (VBA) and the @RISK Developer Kit. Sampling distributions in VBA, instead of placing samples in cells and performing spreadsheet calculations the traditional way with @RISK for Excel, can reduce the time and resources required.

Please download the accompanying example file. (To run this example, you will need to have the RDK installed.)

Example description:
An insurance provider sells three lines of insurance and is interested in assessing risk for the total aggregate loss across all lines. The number of claims for each line is known to follow a Poisson distribution. Each claim's loss amount is known to follow a log-normal distribution.

Traditionally, to model this example the developer would need a single RiskPoisson( ) distribution to represent each line's "Total # Claims", and would also need to list each RiskLognorm( ) loss distribution individually so the losses could be combined with a SUM( ) formula. A model of this nature could require hundreds of thousands of RiskLognorm( ) functions, depending on the maximum total number of claims that are possible. Simulating a model of this size could take several minutes or even hours.

The @RISK Developer Kit offers an alternative approach that reduces the time required for calculating each line's total aggregate loss. For example, the developer could create a custom VBA spreadsheet function that will calculate the total aggregate loss using a FOR...NEXT loop. The distribution samples required for calculating the aggregate loss will be generated using VBA and the RDK rather than @RISK For Excel.

Use the VBA Editor (Alt+F11) to review the code in Module1 of this workbook. You will see three routines (a rough skeleton appears after this list):

  • A custom spreadsheet function called CalcAggregateLoss( ). This function is placed in spreadsheet cells to perform the aggregate summation without having to list each loss distribution individually; see cells C6:E6.
  • An event handler for the WorkbookOpen event, which initializes the RDK.
  • An event handler for the WorkbookClose event, which frees the RDK libraries.
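
The following is only a rough skeleton of those three routines, not the code in the attached workbook; InitRdk, FreeRdk, RdkPoissonSample, and RdkLognormSample are hypothetical placeholders for the actual RDK 4.1 calls documented in its manual.

    ' In Module1: the custom spreadsheet function placed in cells C6:E6.
    Function CalcAggregateLoss(lambda As Double, mu As Double, sigma As Double) As Double
        Dim n As Long, i As Long, total As Double
        n = RdkPoissonSample(lambda)                      ' number of claims for this line
        For i = 1 To n
            total = total + RdkLognormSample(mu, sigma)   ' loss amount for one claim
        Next i
        CalcAggregateLoss = total
    End Function

    ' In ThisWorkbook: initialize and free the RDK with the workbook.
    Private Sub Workbook_Open()
        InitRdk        ' placeholder: initialize the RDK libraries
    End Sub
    Private Sub Workbook_BeforeClose(Cancel As Boolean)
        FreeRdk        ' placeholder: free the RDK libraries
    End Sub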

To test this model, simply run a simulation using @RISK For Excel.

Note: This example was written before the RiskCompound( ) function was available in @RISK for Excel; see Combining Probability and Impact (Frequency and Severity).  However, the technique shown here is still valid for situations where you have a great many distributions and the simulation runs too slowly.

last edited: 2016-01-05

19.13. Converting an @RISK Model to RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer's Kit 4.1

Question:
I have a model that works with @RISK in Excel. I would like to build this into a custom application using the RDK. Do you have any assistance for the conversion?

Response:
We don't have any kind of automated tool, but our developers have created four examples. The @RISK for Excel models, and the Visual Studio code, are included in the attached file. If you familiarize yourself with them, you should see how to do the conversion.

Another option is our Custom Development department. These consultants work with you directly to develop customized applications that build on our product mix to meet your needs most effectively. If you like this option, your Palisade sales manager can discuss strategies with you.

last edited: 2013-09-10

19.14. C# with the RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
@RISK Developer Kit 4.1

I'm building an application using the @RISK Developer Kit. Should I use the ActiveX interface or the DLL interface?

Use the object-oriented interface, which is meant for COM/ActiveX and .NET. The title of the manual refers to ActiveX, but there's a section in it that talks about using that interface with .NET. We don't support the DLL interface with .NET.

Do you have any C# examples of the @RISK Developer Kit?

Yes, after you install the Developer Edition of the RDK, you can find two examples in
C:\Program Files\Palisade\RDK4\DotNet\COMInteropAssembly\Examples\SimpleSimCSharp
and
C:\Program Files\Palisade\BDK4\DotNet\COMInteropAssembly\Examples\FitDensityCSharp

Those examples don't show graphing. Do you have any graphing examples in C#?

We have a Visual Basic .NET example that does:
C:\Program Files\Palisade\RDK4\DotNet\COMInteropAssembly\Examples\GraphVB
The structure of the code in C# would be very similar.

An improved version of that example (still with Visual Basic), with better-looking graphs, is available as file GraphVB_PixelToTwipConversion.zip, accompanying this article.

last edited: 2012-09-02

19.15. Java with the RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
All Developer's Kits: BDK, EDK, RDK, RODK

Question:
Can I include the RDK or another Developer's Kit in a Java application?

Response:
We don't support Java directly in the Developer's Kits, because Java is designed to run on multiple platforms (Windows, Mac, Unix, Android), while our core mathematical libraries run on Windows only.  The RDK has COM and .NET interfaces, and there are bridge technologies that make it possible to use COM/.NET components from Java.  So it may be possible to use the RDK in Java, but the resulting program will run on Windows only.

last edited: 2014-05-07

19.16. Distributions fitted by BDK and RDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
BDK 4.1 and RDK 4.1

Question:
Can the RDK fit distributions?  Which ones?

Response:
The BDK ships as part of the RDK, so the RDK has the same capacity to fit distributions as the BDK.

The BDK and RDK cannot fit all of the distributions that the RDK can use in a simulation. The following continuous distributions can be fitted:

  • Beta general
  • Chi-squared
  • Erf (error function)
  • Erlang
  • Exponential
  • Extreme value
  • Gamma
  • Inverse Gaussian
  • Logistic
  • Log-logistic
  • Log-normal (both forms)
  • Normal
  • Pareto (both forms)
  • Pearson 5 and 6
  • Rayleigh
  • Student's t
  • Triangular
  • Uniform
  • Weibull

The following discrete distributions can be fitted:

  • Binomial
  • Geometric
  • Hypergeometric
  • Integer uniform
  • Negative binomial
  • Poisson

last edited: 2014-04-17

19.17. Capacity of the BDK

This article relates to discontinued products, but is retained for the benefit of our customers with existing licenses. For current information, please see Support Policy for RDK, BDK, EDK, and RODK.

Applies to:
BestFit Developer Kit 4.1

Question:
How many samples can I fit with the BestFit Developer Kit?

Response:
BDK 4.1.2 will fit distributions to up to 1 000 000 (one million) samples (BDKDataTypeSamples). If you want to fit a distribution to a set of values describing a curve (BDKDataTypeCumulative, BDKDataTypeDensity), then the limit is 30 000 points.

last edited: 2012-08-22

20. Windows Operations

20.1. Finding Your Display Settings

Occasionally we'll run across a problem that is related to particular display settings. To reproduce the problem here, we need to know your display settings. This article explains how to find them in various versions of Windows. Please be sure also to tell us which version of Windows you have, and whether it's a real machine, a virtual machine hosted on a Mac, or a virtual machine hosted in Windows.

If you have multiple screens, please do the following for each screen on which you run our software:

Windows 10:

Right-click the desktop and select Display Settings. (a) What's the percentage next to Change the size of text, apps, and other items? And (b) is the orientation portrait or landscape?

On the same panel, click Advanced display settings near the bottom—you may need to scroll down to make that visible. (c) What's the Resolution shown?

Windows 8.1:

Right-click the desktop and select Screen Resolution. (a) What is the Resolution shown? (b) What is the Orientation?

Click Make text and other items larger or smaller. You'll see a slider control with limits Smaller and Larger. (c) Tell us how many tick marks you see under the slider, and where the indicator is pointing; or just make a screen shot and attach it to your reply.

Windows 8:

Right-click the desktop and select Screen Resolution. (a) What is the Resolution shown?

Click Make text and other items larger or smaller. (b) Which is selected, Smaller–100%, Medium–125%, or Larger–150%?

Windows 7:

Right-click the desktop and select Screen Resolution. (a) What is the Resolution shown?

Click Make text and other items larger or smaller. (b) Which is selected, Smaller–100%, Medium–125%, or Larger–150%?

Windows XP:

Right-click the desktop and select Properties. (a) What is the Screen resolution shown, _____ by _____ pixels? (b) What is the Color quality shown?

Click Advanced at the lower right. (c) What is the DPI setting shown? If it's Other, click the down arrow and re-select Other to make the Custom DPI Setting dialog pop up. (d) What percentage is shown?

Last edited: 2017-02-21

20.2. Opening an Administrative Command Prompt

Also available in Spanish: Abrir el Símbolo del Sistema cmd (Command Prompt)

Certain tasks need to be done in an administrative command prompt (sometimes called an elevated command prompt). This article explains how to open the command prompt with administrative privilege in various versions of Windows.

In all versions of Windows, there are multiple ways to open a command prompt. If you already have a favorite method, feel free to use that instead of our suggestions here; just verify that the word Administrator appears in the title bar of the new command-prompt window.

Windows 10:

Right-click on the Start button, and select Command Prompt (Admin). If you get a permission prompt, click Yes.

If your right-click menu doesn't include Command Prompt (Admin), then LEFT-click the Start button and type "cmd" (without quotes). The results should include "Command Prompt". Right-click that result and select Run as Administrator.

Windows 8.1:

Right-click on the Start button, and select Command Prompt (Admin). If you get a permission prompt, click Yes.

Windows 8:

Press Ctrl+Shift+Esc to open Task Manager. Click File » Run New Task. In the dialog, type cmd and check (tick) the box "Create this task with administrative privileges", then click OK.

Windows 7 (or Classic Shell in later Windows):

Click the Start button. In the Search programs and files box that appears just above it, type cmd. In the results, above the search window, right-click cmd and select Run as administrator. If a User Account Control prompt pops up, click Yes.

Windows XP:

Click the Start button, then select All Programs » Accessories. In the resulting program list, right-click Command Prompt and select Run As .... The Run As window will be set up differently depending on your account privileges, but it should be clear what you need to do to run as an administrator. Remember, verify that the word Administrator appears in the title bar of the new command-prompt window.

Last edited: 2018-07-16

20.3. Opening Your Temp Folder

What is a "temp folder", and why do I care?

The temporary folder or "temp folder" is where most programs write temporary files, files that will not be needed after the program is closed.

Over time, these orphaned files accumulate, because there's no provision for purging them automatically. At best, they're taking up space on your hard drive and making backups take longer. At worst, they can actually make Windows run slower and cause Excel to behave erratically. Therefore, it's good to purge them periodically. See Cleaning Your Temp Folder.

Also, as part of troubleshooting a problem, Palisade Technical Support may ask you to send us some files from your temp folder.

How do I find that folder and open it?

The simplest way is to let Windows find and open it for you, as follows:

  • Windows 10: In the box in your taskbar that reads Search the Web and Windows, type %TEMP% including the percent signs, and press Enter. Your temp folder will open.

  • Windows 8: If a key on your keyboard has a Windows logo, press that key and R together. Type %TEMP% including the percent signs, and press Enter.

    If you don't have the Windows key, then right-click on the Start screen and select All Apps. Find the Windows System header, which will be at or near the right-hand edge of the list, and click Run. A Run box appears. Type %TEMP% including the percent signs, and press Enter.

  • Windows 7, Vista, and XP: Click Start » Run. Type %TEMP% including percent signs, and press Enter.
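
Alternatively, in any version of Windows, you can open the same folder from a command prompt by typing start "" "%TEMP%" and pressing Enter.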

Additional keywords: Temp directory, temporary directory

Last edited: 2017-02-15

20.4. Cleaning Your Temp Folder

Also available in Spanish: "Limpieza de la carpeta temporal"

Why is it a good idea to clean my temp folder?

Most programs on your computer create files in this folder, and few, if any, delete those files when they're finished with them. Thus, the folder grows and grows. The wasted space is less important than it used to be, because disks have grown larger. But if you're doing regular backups, as you should, they will take longer than they need to. If the folder has enough files in it, Windows can actually run slower and Excel may be unstable. Finally, the temporary files may include copies of sensitive documents, which would be compromised if your computer is attacked by malware or is lost or stolen.

Okay, how do I clean my temp folder?

Windows 10, 8, 7, and Vista: Basically you're going to try to delete the entire contents. This is safe, because Windows won't let you delete a file or folder that's in use, and any file that's not in use won't be needed again.

  • Open your temp folder.
  • Click anywhere inside the folder and press Ctrl+A.
  • Press the Delete key. Windows will delete everything that's not in use. You will probably get one or two warning messages about files or folders that are in use. Check (tick) the box Do this for all, and click Skip.
  • Empty the Recycle Bin to recover the disk space occupied by deleted files.
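
If you're comfortable at a command prompt, the top-level files can also be deleted with a single command; files that are in use simply produce error messages and are skipped:

    del /q "%TEMP%\*"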

Additional keywords: Clear temp folder, clear temporary folder, clear temporary directory

Last edited: 2019-11-18

20.5. Making Filename Extensions Visible

There are two "Risk" files in my C:\Program Files (x86)\Palisade\Risk7 folder. How can two files have the same name?

They don't. One is actually Risk.exe and the other is Risk.xla. By default, Windows hides the filename "extension" from you. You can find it by right-clicking the file and selecting Properties. If you're in Details view, you can look at the File Type column, though it's not obvious how most file types correspond to extensions.
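
You can also see the full names from a command prompt; for example, assuming the default install path:

    dir "C:\Program Files (x86)\Palisade\Risk7\Risk.*"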

Another approach is just to tell Windows that you always want to see the extension right next to the file name. Here's how to do that in various versions of Windows.

Windows 10, Windows 8.1, Windows 8:

Click the folder icon in the taskbar, or double-click any folder, or press the Windows and E keys together.

On the View tab, check (tick) the box File name extensions.

Windows 7 (or Classic Shell in later Windows), Windows Vista, Windows XP:

Click the Start button and select Control Panel. Select Folder Options. (If you are in category view, click Appearance and Personalization and then click Folder Options.)

Clear the check box "Hide extensions for known file types", and click OK.

Last edited: 2018-07-18

20.6. Screen Capture

I'd like to send a screen shot to make my question clearer. How can I do that?

To create a screen shot:

  1. Click the title bar of the window to ensure that it is active.
  2. Hold down the Alt key, press the Print Screen key, and release both keys. This puts a copy of the window on your Windows clipboard.
  3. Many email programs, including Outlook, will let you paste an image directly in your message. Try clicking into the appropriate spot in your reply and then clicking Edit » Paste or pressing Ctrl+V.
  4. If that doesn't work, open an application that can store pictures, then paste the screen capture into that application.
    • Microsoft Paint is available on most versions of Windows.
    • Microsoft Word is another good choice because it can hold multiple screen caps in a single document.

    Save the document containing the screen image(s) and attach it to your reply email.

Last edited: 2018-07-23