Palisade Knowledge Base


15.39. Categorical Predictions with Probit or Logit?

Applies to: NeuralTools, all releases

For categorical prediction, does NeuralTools use a probit, logit, or some type of multinomial function added to the model?

These are terms from traditional statistical methods for category prediction, like logistic regression. PNNs are a more recent invention, but they have a stronger foundation in statistical theory. The logit function is used because it works well in practice, but there is no statistical argument showing why it should work better than, say, probit. The PNN methodology, on the other hand, is essentially the statistics of deriving probability density functions from data (one density function per category). The probability that an item belongs to one category or another comes directly from those density functions, which are constructed from the available data.

Another way to compare, say, logistic regression to PNNs is to note that logistic regression tries to compress all of the information contained in the historical data into a rather simple function. With PNNs, all of the historical data is kept and used during the prediction step, where we interpolate from it. This interpolation is governed by smoothing factors, which determine how far we look to get the interpolated value; see Technical Questions about the Training Process.
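NeuralTools' internal implementation is not shown here, but the idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of PNN-style classification: a Gaussian kernel density estimate is built per category from all of the stored historical examples, and the smoothing factor (called `sigma` here, an assumed name) controls how far the interpolation reaches.

```python
import math

def pnn_scores(x, training_data, sigma=0.5):
    """Score each category for input x using a Gaussian-kernel density
    estimate built from ALL stored historical examples.

    training_data maps category -> list of historical inputs (tuples of
    numeric features). sigma is the smoothing factor: larger values
    average over more distant examples."""
    scores = {}
    for category, examples in training_data.items():
        total = 0.0
        for ex in examples:
            sq_dist = sum((a - b) ** 2 for a, b in zip(x, ex))
            total += math.exp(-sq_dist / (2 * sigma ** 2))
        # Average kernel contribution = estimated density for this category
        scores[category] = total / len(examples)
    return scores

def pnn_predict(x, training_data, sigma=0.5):
    """Pick the category whose estimated density at x is highest."""
    scores = pnn_scores(x, training_data, sigma)
    return max(scores, key=scores.get)

# Every historical point is retained; prediction interpolates over them all.
data = {
    "A": [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3)],
    "B": [(1.0, 1.0), (0.9, 1.2), (1.1, 0.8)],
}
print(pnn_predict((0.15, 0.2), data))  # near category A's examples
```

Note that, unlike a fitted logistic regression, nothing is discarded after "training": the historical points themselves are the model, and `sigma` is the only parameter tuned.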

Because PNNs have a better theoretical foundation and don't try to compress the information in the historical data into a simple function, they often make better predictions than the more traditional methods.

Last edited: 2013-06-24
