Accelerated Gradient Learning Algorithm for Neural Network Weights Update (CROSBI ID 541770)
Conference paper in proceedings | original scientific paper | international peer review
Authorship information
Hocenski, Željko ; Antunović, Mladen ; Filko, Damir
English
Accelerated Gradient Learning Algorithm for Neural Network Weights Update
This work proposes a decomposition of the gradient learning algorithm for neural network weight updates. The decomposition enables parallel execution, which is convenient for implementation on a computer grid. The improvement is an accelerated learning rate, which may be essential for time-critical decision processes. The proposed solution is tested and verified in an MLP neural network case study by varying a wide range of parameters, such as the number of inputs/outputs, the length of input/output data, and the number of neurons and layers. Experimental results show time savings under multi-threaded execution.
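The decomposition idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the batch gradient of a squared-error loss is a sum over training samples, so partial sums can be computed in parallel threads and then combined before the weight update. The function names (`partial_gradient`, `parallel_update`) and the single linear neuron are illustrative assumptions.

```python
# Hypothetical sketch of data-parallel gradient learning: each thread
# accumulates the gradient over its chunk of samples; partial gradients
# are summed and applied in one weight update.
from concurrent.futures import ThreadPoolExecutor

def partial_gradient(samples, weights):
    """Squared-error gradient for a single linear neuron over one data chunk."""
    grad = [0.0] * len(weights)
    for x, target in samples:
        y = sum(w * xi for w, xi in zip(weights, x))
        err = y - target
        for i, xi in enumerate(x):
            grad[i] += err * xi
    return grad

def parallel_update(data, weights, lr=0.05, n_threads=4):
    """One batch-gradient step with the per-sample sum split across threads."""
    chunks = [data[i::n_threads] for i in range(n_threads)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        partials = list(pool.map(lambda c: partial_gradient(c, weights), chunks))
    total = [sum(g[i] for g in partials) for i in range(len(weights))]
    return [w - lr * g for w, g in zip(weights, total)]

# Toy usage: learn y = 2*x from four samples.
data = [([x], 2.0 * x) for x in (0.0, 1.0, 2.0, 3.0)]
w = [0.0]
for _ in range(50):
    w = parallel_update(data, w)
```

Because the update combines all partial gradients before modifying the weights, the parallel version computes exactly the same step as the sequential batch algorithm; only the per-sample accumulation is distributed.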
time-critical processes; algorithm decomposition; neural network; weights update; gradient learning method; parallel processing
not recorded
not recorded
not recorded
not recorded
not recorded
not recorded
Publication details
49-56.
2008.
not recorded
published
978-3-540-85562-0
Parent publication details
Knowledge-Based Intelligent Information and Engineering Systems
Ignac Lovrek and Robert J. Howlett and Lakhmi C. Jain
Heidelberg: Springer
Conference details
Knowledge-Based Intelligent Information and Engineering Systems, KES 2008
lecture
03.09.2008-05.09.2008
Zagreb, Croatia