Palisade Knowledge Base


15.16. Technical Questions about the Training Process

Applies to: NeuralTools, all releases

Is it fair to say that NeuralTools only processes whichever information you feed it and internally adjusts the thresholds and weights of the node connections?

Adjusting or optimizing the weights of the node connections is what happens when training Multi-Layer Feedforward (MLF) Networks. We offer those, but by default we train Probabilistic Neural Nets (category prediction) or Generalized Regression Neural Nets (numeric prediction). With PNNs/GRNNs, the training process consists of adjusting or optimizing "smoothing factors". Smoothing factors determine how far the net looks when making a prediction for a given case, where "how far" refers to the distance between that case and each case found in the historical data. Training of PNNs/GRNNs is much faster; that's why we use them by default.
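To illustrate the role of a smoothing factor, here is a minimal sketch of a GRNN-style prediction in the standard form (a distance-weighted average of historical outputs using a Gaussian kernel). This is an illustrative sketch, not NeuralTools' actual implementation; the function and variable names are invented for the example.

```python
import math

def grnn_predict(x, train_x, train_y, sigma):
    """Predict a numeric value for x as a kernel-weighted average
    of the historical outputs. sigma is the smoothing factor."""
    # Each historical case gets a weight that shrinks with its distance from x
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Tiny historical data set (hypothetical)
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [1.0, 4.0, 9.0, 16.0]

prediction = grnn_predict(2.5, train_x, train_y, sigma=0.5)
print(round(prediction, 3))
```

A small smoothing factor makes the prediction depend almost entirely on the nearest historical cases; a large one averages over many cases. Training a GRNN amounts to searching for the sigma value(s) that minimize prediction error on the historical data, which is a far simpler optimization than fitting all the connection weights of an MLF net.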

Does NeuralTools include a process to generate samples or random numbers of any type?

Random numbers are involved in the training of MLF nets. That is a much harder optimization problem than training PNNs/GRNNs, so more complex optimization methods are required. We use a hybrid of deterministic and stochastic optimization methods, and the stochastic methods involve random numbers. No random numbers are involved when training PNNs/GRNNs.
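One common way to combine deterministic and stochastic optimization is to draw random starting points (the stochastic part) and refine each with a deterministic local search such as gradient descent. The sketch below shows that pattern on a toy one-dimensional objective; it is an assumed illustration of the general technique, not a description of NeuralTools' internal optimizer.

```python
import random

def refine(w, lr=0.1, steps=100):
    """Deterministic local refinement: gradient descent on
    the toy objective f(w) = (w - 3)^2, whose gradient is 2*(w - 3)."""
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

random.seed(0)  # seed for reproducibility

# Stochastic part: several random starting points, to avoid
# getting stuck in a poor region of the search space
starts = [random.uniform(-10, 10) for _ in range(5)]

# Keep the refined result with the lowest objective value
best = min((refine(w) for w in starts), key=lambda w: (w - 3) ** 2)
print(round(best, 4))  # → 3.0, the minimizer of the toy objective
```

For PNNs/GRNNs the search is over a handful of smoothing factors with a comparatively well-behaved error surface, so a purely deterministic search suffices and no random numbers are needed.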

Last edited: 2015-09-03
