DAILY PAPER REVIEW

0828_Predicting flux decline using GA-ANN

Title: Predicting flux decline in crossflow membrane using artificial neural networks and genetic algorithm
Journal: Journal of Membrane Science
Authors: Goloka Behari Sahoo (a) and Chittaranjan Ray (b)
Corresponding author: Goloka Behari Sahoo

Institute: 
a Department of Civil and Environmental Engineering, University of California at Davis, One Shield Avenue, Davis, CA 95616, USA
b Department of Civil and Environmental Engineering & Water Resources Research Center, University of Hawaii at Manoa,
2540 Dole Street, Honolulu, HI 96822, USA

The originality and creativity of the paper: The prediction performance of ANNs depends on network geometry and internal parameters. Most ANNs are designed and calibrated by trial and error, which wastes a great deal of time and is not a systematic approach. This paper therefore applies a genetic algorithm (GA) to search for the optimum geometry and internal parameter values, shortening design time and improving accuracy.

Summary: 

In this study, two types of ANN models were applied: a back-propagation neural network (BPNN) and a radial basis function network (RBFN). A genetic algorithm (GA), a global search and optimization method, was also applied to improve the performance of the ANNs.
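To make the distinction between the two models concrete, here is a minimal sketch (not from the paper) of the activation functions each network type is built on: a BPNN neuron uses a monotonic sigmoid of its weighted input, while an RBFN neuron responds most strongly near a center, with a width controlled by the spread parameter discussed later.

```python
import math

def sigmoid(x):
    """Sigmoid activation, typical of a back-propagation network (BPNN)."""
    return 1.0 / (1.0 + math.exp(-x))

def gaussian_rbf(x, center, spread):
    """Gaussian basis function, typical of a radial basis function
    network (RBFN); 'spread' sets the width of the bell around 'center'."""
    return math.exp(-((x - center) ** 2) / (2.0 * spread ** 2))

# A sigmoid neuron increases monotonically with its input,
# while an RBF neuron peaks at its center and decays away from it.
print(sigmoid(0.0))                 # 0.5
print(gaussian_rbf(0.5, 0.5, 0.2))  # 1.0 (input exactly at the center)
```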

GA is found to be a good alternative to the trial-and-error approach for determining ANN geometry and internal parameters quickly and efficiently. This is because GA is a robust, global method suited to problems where there is little or no a priori knowledge about the process to be controlled. Moreover, GA requires neither derivative information nor a formal initial estimate of the solution.
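The GA search described above can be sketched as follows. This is a toy illustration, not the paper's implementation: in the study the fitness would be the ANN's prediction error on flux-decline data, whereas here a hypothetical error surface with a known best geometry (8 hidden neurons, learning rate 0.3) stands in for it.

```python
import random

random.seed(0)

def fitness(n_neurons, learning_rate):
    # Hypothetical stand-in for (negative) ANN prediction error;
    # the true fitness in the paper comes from training/testing the network.
    return -((n_neurons - 8) ** 2 + 10.0 * (learning_rate - 0.3) ** 2)

def random_individual():
    # Candidate ANN design: (hidden-neuron count, learning rate).
    return (random.randint(1, 20), random.uniform(0.01, 1.0))

def mutate(ind):
    n, lr = ind
    n = min(20, max(1, n + random.choice([-1, 0, 1])))
    lr = min(1.0, max(0.01, lr + random.gauss(0.0, 0.05)))
    return (n, lr)

def crossover(a, b):
    # One-point crossover: neuron count from one parent, rate from the other.
    return (a[0], b[1])

def genetic_search(pop_size=20, generations=40):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection of the fitter half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=lambda ind: fitness(*ind))

best = genetic_search()
print(best)  # converges near the assumed optimum (8, 0.3)
```

Note that no derivative of the fitness function is ever taken and no initial estimate is supplied, only random individuals, which is exactly the property that makes GA attractive for opaque processes.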

The results show that the prediction performance of the GA-ANN combinations improved over the original ANNs. The two ANN models (RBFN and BPNN) achieved almost the same R values (above 0.99), so it is hard to say that one model is better than the other; this indicates that each model's geometry and internal parameters were optimized and the networks were optimally trained. Another advantage of GA is that it makes ANNs applicable to small training datasets, which the original ANNs handle poorly.

Moreover, applying a scaling technique to the data increased the R value of the BPNN trained on the small dataset (from 0.9915 to 0.9935), but not on the large dataset. When the data are scaled and trained on the range 0-1, the solution range for the spread always lies within 0.1-1.0; and since the maximum number of neurons cannot exceed the number of data samples, a practical search range can easily be set with this approach. In contrast, if the training data are not scaled, the spread can vary over a wide range, making it difficult to present a solution range to the GA.
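The scaling step above is ordinary min-max normalization to 0-1, sketched here with hypothetical flux readings (the values are illustrative, not from the paper):

```python
def minmax_scale(values, lo=0.0, hi=1.0):
    """Min-max scaling to [lo, hi]. Scaling training data to 0-1 lets the
    GA search the RBFN spread over a fixed, known range (roughly 0.1-1.0)
    instead of an unbounded one."""
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

flux = [120.0, 95.0, 80.0, 60.0, 55.0]  # hypothetical flux measurements
scaled = minmax_scale(flux)
print(scaled)  # largest reading maps to 1.0, smallest to 0.0
```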


Application & further study: We can apply this algorithm to improve the efficiency of ANNs. Other researchers have instead sought better ANNs by applying self-organizing maps, which also produce very satisfactory results. Given these two approaches, we could try both and compare their efficiency.

By Monruedee Moonkhum
Email: moon@gist.ac.kr
