Convergence Rate in a Destructive Neural Network With/Without Thresholds in the Output Layer

Document Type : Original Article

Author

Department of Computer Science, Faculty of Science, Assiut University

Abstract

Neural networks are a practical solution for extracting accurate knowledge from data that may be very noisy, owing to their relatively good noise tolerance and generalization ability. The performance of a neural network is directly related to its parameters and architecture.
The degree of complexity of an ANN grows exponentially with the number of input and hidden nodes. This complexity problem can be mitigated by building the structure of the network through constructive learning or destructive learning.
The objective of the network is thus to learn, or to discover, some relationship between input and output patterns and thereby find the structure of the input patterns. Learning is achieved by modifying the connection weights between units, while the network's topology determines its final behavior.
The goal of finding an optimal topology is to minimize an error function while preserving generalization capability.
In our work, the destructive approach is used: the algorithm first trains a large neural network on the data and then prunes it to increase its generalization capability while preserving its accuracy.
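As a minimal sketch of the destructive (pruning) step described above, assuming simple magnitude-based pruning of an already-trained weight matrix (the paper's actual pruning criterion may differ):

```python
import numpy as np

def magnitude_prune(weights, keep_fraction=0.5):
    """Zero all but the largest-|w| connections; return pruned copy and mask.

    This is an illustrative pruning rule, not necessarily the one
    used in the paper: connections whose trained magnitude is small
    are assumed to contribute little and are removed.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))   # how many weights survive
    threshold = np.sort(flat)[-k]                # smallest magnitude kept
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Example: prune a 4x4 trained weight matrix down to a quarter of its
# connections, leaving the 4 strongest weights in place.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, keep_fraction=0.25)
print(int(mask.sum()))   # surviving connections
```

In a full destructive-learning loop, such a pruning pass would alternate with retraining of the surviving weights until accuracy starts to degrade.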
The present paper introduces destructive neural-network learning techniques and analyzes the convergence rate of the error in a neural network with and without thresholds in the output layer.
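To illustrate why a threshold (bias) in the output layer can affect the error's convergence, the following sketch, which is a toy experiment and not the paper's setup, trains a single sigmoid output unit by gradient descent on the same data with and without a trainable threshold, then compares the final sum-squared error:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, y, use_threshold, steps=2000, lr=0.5):
    """Gradient descent on sum-squared error for one sigmoid unit.

    With use_threshold=False the output is sigmoid(w*x), so the unit
    cannot shift its activation away from 0.5 at x = 0; with a
    threshold b it computes sigmoid(w*x + b) and can place the
    decision point anywhere.
    """
    w, b = 0.0, 0.0
    for _ in range(steps):
        out = sigmoid(w * x + (b if use_threshold else 0.0))
        grad = (out - y) * out * (1.0 - out)   # dE/d(pre-activation)
        w -= lr * np.sum(grad * x)
        if use_threshold:
            b -= lr * np.sum(grad)
    out = sigmoid(w * x + (b if use_threshold else 0.0))
    return float(np.sum((out - y) ** 2))       # final sum-squared error

# Targets whose decision point lies between x = 0 and x = 1: without a
# threshold the output at x = 0 is pinned at 0.5, so the error plateaus
# higher than in the thresholded unit.
x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(train(x, y, use_threshold=True) < train(x, y, use_threshold=False))
```

The hypothetical data and learning rate here are chosen only to make the difference visible; the paper's analysis concerns the convergence rate itself, which this toy run does not measure.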
