
15.36. Extrapolation with a Trained Network

Applies to: NeuralTools 5.x–7.x

Suppose we have 10 variables in a historical log of 10,000 records, and all of them were used to train the net. Assume the net is predicting values with high accuracy.

Now, what happens if the network receives input values outside the ranges it was trained on? In effect, it is being asked to extrapolate. How will it perform?

Predictions outside the range of the training data can be unreliable. For now, some custom coding is needed to detect these cases; a future release of NeuralTools will likely generate these warnings automatically.
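As a minimal sketch of what such custom coding might look like, the check below records the minimum and maximum of each variable in the training data and flags any new case that falls outside those bounds. The function names and sample data are illustrative, not part of the NeuralTools API.

```python
# Hypothetical out-of-range check for prediction inputs.
# Variable names and data are illustrative only.

def training_ranges(training_rows):
    """Record the (min, max) observed for each variable during training."""
    cols = list(zip(*training_rows))
    return [(min(c), max(c)) for c in cols]

def out_of_range(row, ranges, tol=0.0):
    """Return the indices of variables that fall outside the training range."""
    return [i for i, (x, (lo, hi)) in enumerate(zip(row, ranges))
            if x < lo - tol or x > hi + tol]

# Example: two variables trained on these rows
train = [(1.0, 50.0), (2.0, 60.0), (3.0, 55.0)]
ranges = training_ranges(train)

print(out_of_range((2.5, 57.0), ranges))  # []  -> within training ranges
print(out_of_range((9.0, 57.0), ranges))  # [0] -> variable 0 needs extrapolation
```

Any case for which the list is non-empty would be asking the net to extrapolate, and its prediction should be treated with extra caution.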

Neural net types differ in their ability to extrapolate beyond the training data: MLF nets generally do a better job than GRN/PN nets. (See the attached example.) GRN/PN nets use sophisticated statistical techniques to interpolate from the training data, and are very good at it; MLF nets can discern general patterns that are more likely to extend outside the training range.
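The interpolation-versus-trend distinction can be seen in a toy example. Below, a least-squares line stands in for a net that learns a global trend (MLF-like behavior), while Nadaraya-Watson kernel regression stands in for a GRN-style interpolator; neither is NeuralTools' actual implementation, and the data and bandwidth are made up.

```python
import math

# Toy illustration only: training data follows the trend y = 2x on x in [0, 10].
xs = [float(i) for i in range(11)]
ys = [2.0 * x for x in xs]

def linear_fit(xs, ys):
    """Ordinary least-squares line, as a stand-in for a trend-learning net."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def kernel_predict(x, xs, ys, h=1.0):
    """Nadaraya-Watson kernel regression, as a stand-in for a GRN-style net."""
    w = [math.exp(-((x - xi) ** 2) / (2 * h * h)) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

line = linear_fit(xs, ys)
print(line(20.0))                    # ~40: the learned trend is extended
print(kernel_predict(20.0, xs, ys))  # ~20: the prediction saturates near the
                                     #      edge of the training data
```

Asked to predict at x = 20, well outside the training range, the trend-based fit extends the pattern, while the interpolator can only return values close to those of the nearest training points.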

It is also possible to set up training/testing to stress-test this particular scenario: deliberately split the data set so that the testing set contains a lot of out-of-range data. The NeuralTools XDK can then be used to run multiple tests with one click (analogous to the Testing Sensitivity analysis added in version 6).
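A deliberate out-of-range split might look like the following sketch: the records with the most extreme values of one chosen variable are routed into the testing set, so testing forces the net to extrapolate. The column index and test fraction are illustrative choices, not NeuralTools settings.

```python
# Hypothetical "stress test" split: the rows with the largest values of one
# variable become the testing set, guaranteeing out-of-range test cases.

def extrapolation_split(rows, col, test_frac=0.2):
    """Return (training, testing) with the top `test_frac` of `col` held out."""
    ordered = sorted(rows, key=lambda r: r[col])
    cut = int(len(rows) * (1 - test_frac))
    return ordered[:cut], ordered[cut:]

# Example data: 100 records of (input, output)
rows = [(float(x), 2.0 * x + 1.0) for x in range(100)]
train, test = extrapolation_split(rows, col=0)

print(len(train), len(test))                               # 80 20
print(max(r[0] for r in train) < min(r[0] for r in test))  # True: every test
                                                           # input is out of range
```

Repeating the split on each variable in turn (and in both directions, high and low) would give a battery of extrapolation tests that a script built on the XDK could run in one pass.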

Last edited: 2015-09-03

Downloads
