Solving Numerical Optimization Model of Neural Network


Weam Abbas Obaid, Dr. Ahmed Sabah Ahmed Aljilawi

Abstract

Neural networks are usually trained with gradient-based methods. In this paper we attempt to minimize the cost function so that the model fits the observations it is given: the model adjusts its weights and biases, guided by the cost function and the learning procedure, until it reaches a convergence point or a local minimum. Gradient descent is the process by which the algorithm adjusts the weights, moving the model in the direction that reduces the error (i.e., reduces the cost function); with each training pass, the model's coefficients are updated so that they gradually converge to a minimum.
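The gradient-descent update described above can be sketched as follows. This is an illustrative example, not the paper's model: a single linear neuron `y = w*x + b` trained by stepping the weight `w` and bias `b` against the gradient of a mean-squared-error cost; the learning rate `lr` and the toy data are assumptions chosen for the demonstration.

```python
import numpy as np

def train(x, y, lr=0.05, epochs=500):
    """Gradient descent on the MSE cost of a single linear neuron."""
    w, b = 0.0, 0.0  # initial weight and bias
    for _ in range(epochs):
        pred = w * x + b
        err = pred - y
        # Gradients of the cost J = mean(err^2) with respect to w and b
        grad_w = 2.0 * np.mean(err * x)
        grad_b = 2.0 * np.mean(err)
        # Step against the gradient to reduce the cost
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated from the target relation y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
w, b = train(x, y)
print(round(w, 2), round(b, 2))  # the coefficients converge toward 2 and 1
```

With each epoch the cost shrinks and the coefficients approach the values that generated the data, which is the convergence behavior the abstract refers to.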
