Palisade Knowledge Base


15.26. Testing Results in Training Set Versus Testing Set

Applies to: NeuralTools, all releases

Why are the testing results for the training set so much better than those for the testing set with PN/GRN nets?

The summary training report includes testing results for the training set, but these do not provide useful information about how well the training went. Good testing results on the training set may simply mean that the training process has overfit the neural net parameters to the specific cases included in training; we can think of this as the neural net memorizing the training set. An overfitted net will generate inaccurate predictions for cases not included in training. (For more on overfitting, see Overfitting During Training.)
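
As a general illustration (not NeuralTools itself), the following Python sketch shows the pattern described above: a model with enough flexibility can memorize its training cases, reaching near-zero error on the points used for fitting while doing much worse on held-out points. The data and the polynomial model are purely hypothetical stand-ins.

# Illustrative sketch (not NeuralTools): why low error on the training set
# can simply mean the model has memorized (overfit) the training cases.
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying linear relationship y = 2x + noise.
x_train = np.linspace(0.0, 1.0, 15)
y_train = 2.0 * x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0.03, 0.97, 15)          # held-out cases
y_test = 2.0 * x_test + rng.normal(scale=0.2, size=x_test.size)

# A degree-14 polynomial has enough freedom to pass through every
# training point, i.e. to memorize the training set.
model = np.polynomial.Polynomial.fit(x_train, y_train, deg=14)

def rmse(x, y):
    return float(np.sqrt(np.mean((model(x) - y) ** 2)))

print("RMSE on training set:", rmse(x_train, y_train))  # near zero
print("RMSE on testing set: ", rmse(x_test, y_test))    # substantially larger

The low training-set error here says nothing about predictive quality; only the error on the held-out cases does.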

With PN/GRN nets there is an additional reason for the low testing error reported for the training set. These nets predict by interpolating from the entire training set, with emphasis on the training cases in the neighborhood of the case for which the prediction is being made. The error reported for the training set is therefore based on a procedure in which the prediction for a data point is interpolated from a set that includes that very data point. That means we will almost always get the correct answer in the case of category prediction (PN nets), or an answer close to the correct one in the case of numeric prediction (GRN nets).
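
The minimal numpy sketch below shows the kernel-weighted interpolation idea in its general textbook GRNN form (an assumption for illustration, not Palisade's exact implementation): each prediction is a weighted average of all training outputs, with weights that fall off with distance from the query point. When the query point is itself a training case, its own weight is exp(0) = 1 and dominates the average, so "testing" on that case largely reproduces its stored output.

# Minimal GRNN-style sketch (general textbook form, not Palisade's
# implementation): predict by kernel-weighted interpolation over the
# whole training set.
import numpy as np

def grnn_predict(x_query, x_train, y_train, sigma=0.03):
    """Gaussian-kernel weighted average of the training outputs."""
    # Squared distances from the query point to every training case.
    d2 = np.sum((x_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # closer cases get more weight
    return np.sum(w * y_train) / np.sum(w)

rng = np.random.default_rng(1)
x_train = rng.uniform(size=(50, 2))
y_train = x_train[:, 0] + x_train[:, 1] + rng.normal(scale=0.1, size=50)

# "Testing" on a training case: its own weight is exp(0) = 1, which
# dominates the average, so the prediction stays close to y_train[0].
print(grnn_predict(x_train[0], x_train, y_train), "vs", y_train[0])

# A genuinely new case must be interpolated from its neighbors, so its
# error reflects how well the net generalizes rather than what it stored.
print(grnn_predict(np.array([0.5, 0.5]), x_train, y_train))

This is why testing results for the training set are not a meaningful measure of how well a PN/GRN net will predict new cases; only the results for the separate testing set are.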

Last edited: 2015-09-03
