







Algorithms for constructing an RBF network

1) The K-means algorithm. This algorithm seeks an optimal set of points to serve as the centroids of clusters in the training data. With K radial elements, the centres are positioned so that:

each training point "belongs" to exactly one centre and lies closer to it than to any other centre;

each cluster centre is the centroid of the set of training points belonging to that cluster.
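The two conditions above are exactly the alternating steps of K-means. A minimal sketch (all function and variable names are my own, and the deterministic initialisation from the first K points is a simplification; random initialisation is more common in practice):

```python
import numpy as np

def kmeans_centres(X, k, iters=100):
    """Return k cluster centroids of the rows of X (plain K-means)."""
    # Initialise from the first k points for simplicity; random
    # initialisation is more common in practice.
    centres = X[:k].astype(float).copy()
    for _ in range(iters):
        # Step 1: assign each training point to its nearest centre.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 2: move each centre to the centroid of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):   # converged: both conditions hold
            break
        centres = new
    return centres
```

The returned centroids are then used as the centres of the K radial elements.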

2) Isotropic deviation. The same deviation (width) is used for all elements; it is chosen heuristically, taking into account the number of radial elements and the volume of the space to be covered.
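The text does not give the heuristic itself; one commonly quoted rule (an assumption on my part, not stated in the source) sets the shared width from the largest distance between centres:

```python
import numpy as np

def isotropic_width(centres):
    """One common heuristic: sigma = d_max / sqrt(2K), where d_max is the
    largest distance between any two of the K centres. The exact formula
    is an illustrative assumption; the text only says 'heuristically'."""
    c = np.asarray(centres, dtype=float)
    k = len(c)
    d_max = max(np.linalg.norm(a - b) for a in c for b in c)
    return d_max / np.sqrt(2 * k)
```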

3) K nearest neighbours. The deviation of each element is set individually, equal to the average distance to its K nearest neighbours. Deviations are therefore smaller in parts of the space where points lie densely, preserving fine detail, and larger where points are sparse, giving smoother interpolation.
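The K-nearest-neighbour width rule can be sketched as follows (names are illustrative):

```python
import numpy as np

def knn_widths(centres, k=2):
    """Per-element deviation = average distance from each centre to its
    k nearest neighbouring centres: small where centres are dense,
    large where they are sparse."""
    c = np.asarray(centres, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    # Sort each row; column 0 is the zero self-distance, so average
    # columns 1..k.
    d.sort(axis=1)
    return d[:, 1:k + 1].mean(axis=1)
```

For centres at 0, 1, 2 and 10 with k = 1, the first three elements get width 1 while the isolated one gets width 8, matching the dense-vs-sparse behaviour described above.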

Problems solved with RBFNN

Using the RBFNN approach, the weight matrix is initialised with random values in the range of the input data x, and these weights are used for forward propagation. The differences between the expected outputs (the y values) and those produced, i.e. the errors, are then used to adjust the weights by back-propagation. The weight matrix is thus learnt over a number of epochs, avoiding the need for matrix inversion. This becomes increasingly valuable for larger problems, or where the structure of the problem means that the matrix D cannot be inverted.
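The iterative scheme can be sketched as follows. The centres, widths, learning rate and number of epochs are illustrative assumptions, not values from the text:

```python
import numpy as np

def rbf_design(X, centres, sigma):
    """D[i, j] = Gaussian response of hidden unit j to input i."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_output_weights(X, y, centres, sigma, lr=0.5, epochs=500, seed=0):
    """Learn output weights by gradient descent on the squared error,
    avoiding inversion of the design matrix D."""
    D = rbf_design(X, centres, sigma)
    rng = np.random.default_rng(seed)
    # Random initialisation in the range of the input data, as described.
    w = rng.uniform(X.min(), X.max(), size=D.shape[1])
    for _ in range(epochs):
        err = D @ w - y                 # forward pass, then error
        w -= lr * D.T @ err / len(X)    # delta-rule weight update
    return w
```

After enough epochs the weights approach the least-squares solution that exact inversion of D would give, without ever inverting D.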

Conclusion

RBF nets can learn arbitrary mappings; the primary difference from multilayer perceptrons is in the hidden layer.

RBF hidden layer units have a receptive field which has a centre: that is, a particular input value at which they have a maximal output. Their output tails off as the input moves away from this point.
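A single such unit is just a Gaussian of the distance from the centre; the width value below is an illustrative choice:

```python
import numpy as np

def rbf_unit(x, centre, sigma=1.0):
    """Gaussian receptive field: maximal (1.0) at the centre, tailing
    off as the input moves away from it."""
    return np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))
```

For a unit centred at 2, the output is exactly 1 at x = 2 and decreases monotonically with distance, e.g. the response at x = 3 exceeds the response at x = 4.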

RBF networks are trained by

• deciding on how many hidden units there should be

• deciding on their centres and the sharpnesses (standard deviations) of their Gaussians

• training up the output layer.

Generally, the centres and SDs are decided on first by examining the vectors in the training data. The output layer weights are then trained using the Delta rule. BP is the most widely applied neural network technique. RBFs are gaining in popularity.

Nets can be

• trained on classification data (each output represents one class), and then used directly as classifiers of new data.

• trained on (x, f(x)) points of an unknown function f, and then used to interpolate.
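The second use can be sketched as follows. Here the centres are placed at the training points and the output weights are found by solving the linear system exactly, so the net interpolates the samples; all names and parameter values are illustrative:

```python
import numpy as np

def fit_rbf_interpolator(x, y, sigma=1.0):
    """Fit an RBF net to 1-D samples (x, f(x)) with centres at the
    training points, and return a predictor that interpolates them."""
    c = np.asarray(x, dtype=float)
    d2 = (c[:, None] - c[None, :]) ** 2
    D = np.exp(-d2 / (2 * sigma ** 2))
    w = np.linalg.solve(D, y)   # exact interpolation weights

    def predict(xq):
        q = np.asarray(xq, dtype=float)
        phi = np.exp(-((q[:, None] - c[None, :]) ** 2) / (2 * sigma ** 2))
        return phi @ w

    return predict

# Example: fit three samples of an unknown function (here x**2).
f = fit_rbf_interpolator([0., 1., 2.], [0., 1., 4.])
```

At the training points the predictor reproduces the samples exactly; between them it interpolates smoothly.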

RBFs have the advantage that one can add extra units with centres near parts of the input space which are difficult to classify. Both BP and RBF networks can also be used for processing time-varying data: one can consider a window on the data:
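The windowing idea can be sketched as follows: each training input is a window of w consecutive samples and the target is the sample that follows it (the window length is an illustrative choice):

```python
import numpy as np

def make_windows(series, w):
    """Turn a time series into (window, next-value) training pairs."""
    s = np.asarray(series, dtype=float)
    X = np.array([s[i:i + w] for i in range(len(s) - w)])  # sliding windows
    y = s[w:]                                              # next sample each
    return X, y
```

For the series 1, 2, 3, 4, 5 with w = 2 this yields inputs [1, 2], [2, 3], [3, 4] with targets 3, 4, 5, which can then be fed to a BP or RBF network.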

Networks of this form (finite-impulse response) have been used in many applications.

There are also networks whose architectures are specialised for processing time-series.

 


Date: 2016-06-12

