Comparison of Backpropagation and Backpropagation with Stochastic Gradient Descent

Authors

  • Iqbal Giffari Ritonga, Universitas Potensi Utama
  • Hartono, Universitas Potensi Utama
  • Rika Rosnelly, Universitas Potensi Utama

DOI:

https://doi.org/10.35842/icostec.v3i1.64

Keywords:

Neural Network, Multi-Layer Perceptron, Backpropagation, Stochastic Gradient Descent

Abstract

Backpropagation has a known weakness: the choice of initial weights affects the error in the training process, the training time, and the accuracy of the resulting model. Choosing good initial weights is difficult precisely because of this influence on error rate, training time, and accuracy. Rather than searching for ideal initial weights, it is preferable to optimize the weights during training using stochastic gradient descent, which is expected to reduce error, shorten training time, and increase accuracy. This method updates the weights at each backpropagation iteration based on the error value from the previous iteration. In the comparison, backpropagation with stochastic gradient descent achieved an MSE of 0.0932, a training time of 141 seconds, and an accuracy of 80%, whereas plain backpropagation achieved an MSE of 0.1784, a training time of 158 seconds, and an accuracy of 64%. Backpropagation with stochastic gradient descent is thus shown to improve on plain backpropagation.
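
To illustrate the per-iteration update described in the abstract, the following is a minimal Python sketch of SGD weight updates applied after each backpropagation pass. The paper's exact network architecture, learning rate, and dataset are not given here, so the toy data, the learning rate eta, and the single-layer setup below are illustrative assumptions, not the authors' configuration.

    import numpy as np

    # Sketch of per-sample SGD: after each backpropagation pass, the weights
    # are immediately adjusted using the gradient of that iteration's error.
    # All sizes and hyperparameters below are assumptions for illustration.

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))          # toy inputs
    y = (X.sum(axis=1) > 0).astype(float)  # toy binary targets

    w = rng.normal(scale=0.1, size=4)      # initial weights (single-layer case)
    b = 0.0
    eta = 0.1                              # assumed learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(50):
        for i in rng.permutation(len(X)):          # stochastic: one sample at a time
            out = sigmoid(X[i] @ w + b)
            err = out - y[i]                       # error from this iteration
            grad_w = err * out * (1 - out) * X[i]  # backpropagated gradient (MSE loss)
            grad_b = err * out * (1 - out)
            w -= eta * grad_w                      # SGD update: w <- w - eta * dE/dw
            b -= eta * grad_b

    mse = np.mean((sigmoid(X @ w + b) - y) ** 2)
    print(f"final MSE: {mse:.4f}")

The stochastic element is that each sample's error triggers an immediate update w <- w - eta * dE/dw, rather than accumulating the gradient over the full training set before updating, which is what distinguishes this scheme from batch backpropagation.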

Published

2024-02-17