
Lesson 2: Probabilistic Neural Networks

In this video, we present the algorithm for probabilistic neural networks (PNNs). PNNs were created by Donald Specht around 1990. The basic idea of a PNN is that each training element adds to the likelihood that nearby data has the same classification; this contribution is represented by a Gaussian pattern unit. So, each category unit is represented by a sum of Gaussians, which looks like this:
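$$f_k(\mathbf{x}) \;=\; \sum_{i=1}^{n_k} \exp\!\left(-\,\frac{\lVert \mathbf{x} - \mathbf{x}_{k,i} \rVert^{2}}{2\sigma^{2}}\right)$$

This is the standard simple form: $\mathbf{x}_{k,i}$ is the $i$-th training point stored for category $k$, $n_k$ is the number of pattern units in that category, and $\sigma$ sets the width of each Gaussian.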

To classify a data point, we compute the point's response for every category; the category with the highest response is selected as the classification. In the video, we used a simple Gaussian for our pattern units. However, for practical use, we would probably want a more sophisticated formula. The equation below gives a much more generalized form, written here in one standard notation:
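$$f_k(\mathbf{x}) \;=\; \sum_{i=1}^{n_k} w_{k,i}\, \exp\!\left(-\,\frac{D(\mathbf{x}, \mathbf{x}_{k,i})^{2}}{2\,\sigma_{k,i}^{2}}\right), \qquad \lVert \mathbf{x} \rVert = 1$$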

In the formula above, we have added weighting coefficients $w_{k,i}$, parameterized our sigma values as $\sigma_{k,i}$, used a more generalized distance function $D$, and enforced a normalization condition on our data ($\lVert \mathbf{x} \rVert = 1$). These variants, and others, can be found in the literature; the simple Optical Character Recognition (OCR) example presented in the video is intended only to convey the basic concept.

The diagram above depicts the classification algorithm for classifying two-dimensional data with three classes, or categories. The categories have 2, 2, and 3 pattern units, respectively; however, we can have any number of pattern units. We can add new pattern units, and even new categories, at any time without needing to retrain the network. This is one of the advantages of probabilistic neural networks. The downside is that, as the network gets more complex, classification gets slower and accuracy can degrade. As a practical matter, we might also want to decrease sigma to make our patterns narrower as the number of patterns grows.
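To make the algorithm concrete, here is a minimal C++ sketch of PNN classification for two-dimensional data. It assumes a single shared sigma for every pattern unit, and the names (Point, PatternResponse, CategoryResponse, Classify) are illustrative rather than code from the video:

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct Point { double x, y; };

// One category is just the list of its stored training points (pattern units).
using Category = std::vector<Point>;

// Gaussian pattern unit: response decays with squared distance from the center.
double PatternResponse(const Point& input, const Point& center, double sigma) {
    double dx = input.x - center.x;
    double dy = input.y - center.y;
    return std::exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma));
}

// Category response: the sum of the Gaussian responses of its pattern units.
double CategoryResponse(const Point& input, const Category& category, double sigma) {
    double sum = 0.0;
    for (const Point& center : category) {
        sum += PatternResponse(input, center, sigma);
    }
    return sum;
}

// Classification: return the index of the category with the highest response.
std::size_t Classify(const Point& input, const std::vector<Category>& categories, double sigma) {
    std::size_t best = 0;
    double bestResponse = -1.0;
    for (std::size_t k = 0; k < categories.size(); ++k) {
        double response = CategoryResponse(input, categories[k], sigma);
        if (response > bestResponse) {
            bestResponse = response;
            best = k;
        }
    }
    return best;
}

int main() {
    // Three categories with 2, 2, and 3 pattern units, as in the diagram.
    std::vector<Category> categories = {
        { {0.0, 0.0}, {1.0, 0.0} },
        { {4.0, 4.0}, {5.0, 4.0} },
        { {0.0, 5.0}, {1.0, 5.0}, {0.5, 6.0} }
    };
    Point query{0.8, 4.8};
    std::cout << "Classified as category " << Classify(query, categories, 1.0) << std::endl;
    return 0;
}

Because a pattern unit is just a stored training point, adding a training example to a category is a single push_back, and adding a category is a single new vector; nothing has to be retrained, which is exactly the advantage described above.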
