Please use this identifier to cite or link to this item: http://117.252.14.250:8080/jspui/handle/123456789/3545
Full metadata record
dc.contributor.author: Rath, Sagarika
dc.contributor.author: Nayak, P. C.
dc.contributor.author: Chatterjee, Chandranath
dc.date.accessioned: 2019-09-12T11:24:37Z
dc.date.available: 2019-09-12T11:24:37Z
dc.date.issued: 2013
dc.identifier.uri: http://117.252.14.250:8080/jspui/handle/123456789/3545
dc.description.abstract: The current study employs a hierarchical adaptive network-based fuzzy inference system for flood forecasting by developing a rainfall–runoff model for the Narmada basin in India. A hybrid learning algorithm, which combines the least-squares method and a backpropagation algorithm, is used to identify the parameters of the network. A subtractive clustering algorithm is used for input-space partitioning in the fuzzy and neurofuzzy models. The model architectures are trained incrementally at each time step, and different models are developed to produce one-step and multi-step ahead forecasts. The number of input variables is determined using a standard statistical method. An artificial neural network (ANN) model that uses a Levenberg–Marquardt (LM) backpropagation training algorithm has been developed for the same basin. The results of this study indicate that the hierarchical neurofuzzy model performs better than the ANN and the standard fuzzy model in estimating hydrograph characteristics, especially at longer forecast time horizons. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Taylor & Francis (en_US)
dc.subject: Hierarchical neurofuzzy model (en_US)
dc.subject: Takagi–Sugeno fuzzy model (en_US)
dc.subject: Subtractive clustering (en_US)
dc.subject: Flood forecasting (en_US)
dc.title: Hierarchical neurofuzzy model for real-time flood forecasting (en_US)
dc.type: Article (en_US)
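The abstract above notes that a subtractive clustering algorithm partitions the input space for the fuzzy and neurofuzzy models. As a minimal illustration of that technique (Chiu-style subtractive clustering), here is a Python sketch; the function name, parameter defaults, and test data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, reject=0.15):
    """Chiu-style subtractive clustering (sketch, not the paper's code).

    Every data point is a candidate cluster centre, scored by a density
    "potential" over its neighbours.  The highest-potential point becomes
    a centre, its influence is subtracted from all potentials, and the
    process repeats until the best remaining potential falls below
    `reject` times the first one.  `ra` is the centre influence radius;
    data are assumed scaled to the unit hypercube.
    """
    rb = 1.5 * ra                       # penalty radius, as in Chiu (1994)
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    # pairwise squared distances between all points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    potential = np.exp(-alpha * d2).sum(axis=1)
    p_first = potential.max()
    centers = []
    for _ in range(len(X)):             # at most one centre per point
        i = int(potential.argmax())
        if potential[i] < reject * p_first:
            break
        centers.append(X[i])
        # subtract the new centre's influence from every point's potential
        potential = potential - potential[i] * np.exp(-beta * d2[:, i])
    return np.array(centers)
```

In a neurofuzzy setting, each returned centre would seed one fuzzy rule, so the number of rules is determined by the data rather than fixed in advance.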
Appears in Collections:Research papers in International Journals

Files in This Item:
File: Restricted Acess.pdf
Size: 411.81 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.