Overview of bibliographic record number: 13625
A Cascade-Correlation Learning Network with Smoothing
A Cascade-Correlation Learning Network with Smoothing // Proceedings on the International ICSC/IFAC Symposium on Neural Computation - NC'98 / Heiss, M. (ed.).
Vienna, Austria: Academic Press, 1998. pp. 1023-1029 (lecture, international peer review, full paper (in extenso), scientific)
CROSBI ID: 13625
Title
A Cascade-Correlation Learning Network with Smoothing
Authors
Petrović, Ivan ; Baotić, Mato ; Perić, Nedjeljko
Type, subtype and category of work
Conference proceedings papers, full paper (in extenso), scientific
Source
Proceedings on the International ICSC/IFAC Symposium on Neural Computation - NC'98
/ Heiss, M. (ed.). Academic Press, 1998, pp. 1023-1029
Conference
International ICSC/IFAC Symposium on Neural Computation - NC'98
Place and date
Vienna, Austria, 23.09.1998 - 25.09.1998
Type of participation
Lecture
Type of peer review
International peer review
Keywords
Cascade Correlation; Neural Network; Learning Network; Smoothing
Abstract
A cascade-correlation learning network (CCLN) is a popular supervised learning architecture that gradually grows a set of hidden neurons with fixed nonlinear activation functions, adding neurons to the network one by one during training. Because the activation functions are fixed, the cascaded connections from the existing neurons to each new candidate neuron are required to approximate high-order nonlinearities. The major drawback of a CCLN is that its error surface is jagged and unsmooth, because the maximum-correlation criterion consistently pushes the hidden neurons to their saturated extreme values instead of keeping them in their active region. To alleviate this drawback of the original CCLN, two new cascade-correlation learning networks (CCLNS1 and CCLNS2) are proposed, which enable smoothing of the error surface. Smoothing is performed by (re)training the gains of the hidden neurons' activation functions. In CCLNS1, smoothing is enabled by using the sign functions of the neurons' outputs in the cascaded connections, while in CCLNS2 each hidden neuron has two activation functions: a fixed one for the cascaded connections and a trainable one for the connections to the neurons in the output layer. The performance of the network structures is tested by training them to approximate three nonlinear functions. Both proposed structures perform much better than the original CCLN, while CCLNS1 gives slightly better results than CCLNS2.
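To make the mechanism in the abstract concrete, below is a minimal sketch of cascade-correlation training with a trainable activation gain, assuming a toy 1-D regression task, tanh hidden units, plain gradient ascent on the correlation objective, and least-squares retraining of the output weights. The trainable gain g stands in for the paper's smoothing idea; all names, hyperparameters, and the test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: approximate a nonlinear 1-D function (illustrative only).
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 0] ** 2

def train_candidate(features, residual, lr=0.05, steps=2000):
    """Train one candidate unit v = tanh(g * (features @ w + b)) to maximize
    the correlation S = sum((v - mean(v)) * (e - mean(e))) with the residual e.
    Training the gain g as well is the 'smoothing' idea: a free gain lets the
    optimizer keep the unit in its active region instead of saturating it."""
    n, d = features.shape
    w = rng.normal(scale=0.5, size=d)
    b, g = 0.0, 1.0
    ec = residual - residual.mean()          # centered residual error
    for _ in range(steps):
        z = features @ w + b
        v = np.tanh(g * z)
        S = np.sum((v - v.mean()) * ec)
        s = 1.0 if S >= 0.0 else -1.0        # ascend the magnitude |S|
        dv_dz = g * (1.0 - v ** 2)           # d tanh(g z) / dz
        w += lr * s * (features.T @ (ec * dv_dz)) / n
        b += lr * s * np.sum(ec * dv_dz) / n
        g += lr * s * np.sum(ec * (1.0 - v ** 2) * z) / n   # gain update
    return w, b, g

features = X.copy()                          # each new unit sees inputs + all earlier units
output = np.zeros_like(y)
for k in range(5):
    w, b, g = train_candidate(features, y - output)
    h = np.tanh(g * (features @ w + b)).reshape(-1, 1)
    features = np.hstack([features, h])      # freeze the unit and cascade it
    A = np.hstack([features, np.ones((len(y), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # retrain linear output weights
    output = A @ coef
    rmse = np.sqrt(np.mean((y - output) ** 2))
    print(f"unit {k + 1}: gain g = {g:+.2f}, rmse = {rmse:.4f}")
```

Each iteration should reduce the RMSE as a new frozen unit is cascaded in; watching the learned gains drift away from 1.0 illustrates why leaving them trainable keeps the units out of saturation.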
Original language
English
Research fields
Electrical engineering
ASSOCIATED WORK
Institutions:
Fakultet elektrotehnike i računarstva, Zagreb