dc.contributor.author |
Osunmakinde, I
|
|
dc.contributor.author |
Bagula, A
|
|
dc.date.accessioned |
2010-02-26T14:50:15Z |
|
dc.date.available |
2010-02-26T14:50:15Z |
|
dc.date.issued |
2009-04 |
|
dc.identifier.citation |
Osunmakinde, I and Bagula, A. 2009. Supporting scalable Bayesian networks using configurable discretizer actuators. ICANNGA'09: International Conference on Adaptive and Natural Computing Algorithms, Kuopio, Finland, 23-25 April 2009, pp 323-332 |
en |
dc.identifier.isbn |
978-3-642-04920-0 |
|
dc.identifier.uri |
http://www.springerlink.com/content/j814q25454163pw8/
|
|
dc.identifier.uri |
http://hdl.handle.net/10204/3957
|
|
dc.description |
Copyright: Springer-Verlag Berlin Heidelberg 2009. This is the authors' version of the work; it is posted here by permission of Springer-Verlag. The article is published in Lecture Notes in Computer Science, Vol. 5495 (2009), pp 323-332 |
en |
dc.description.abstract |
The authors propose a generalized model with configurable discretizer actuators as a solution to the problem of discretizing massive numerical datasets. Their solution is based on a concurrent distribution of the actuators and uses dynamic memory management schemes to provide a completely scalable basis for the optimization strategy. This prevents limited memory from halting the process while minimizing the discretization time and adapting to new observations without re-scanning the entire old data. Using different discretization algorithms on publicly available massive datasets, the authors conducted a number of experiments which showed that using their discretizer actuators with Hellinger's algorithm results in better performance, in terms of memory and computational resources, than using the conventional discretization algorithms implemented in Hugin and Weka. By showing that massive numerical datasets can be discretized within limited memory and time, these results suggest integrating the configurable actuators into the learning process to reduce the computational complexity of modelling Bayesian networks to a minimum acceptable level. |
en |
dc.language.iso |
en |
en |
dc.publisher |
Springer-Verlag Berlin Heidelberg 2009 |
en |
dc.subject |
Intelligent systems |
en |
dc.subject |
Massive datasets |
en |
dc.subject |
Bayesian networks |
en |
dc.subject |
Discretization |
en |
dc.subject |
Scalability |
en |
dc.subject |
Natural computing algorithms |
en |
dc.title |
Supporting scalable Bayesian networks using configurable discretizer actuators |
en |
dc.type |
Book Chapter |
en |
dc.identifier.apacitation |
Osunmakinde, I., & Bagula, A. (2009). Supporting scalable Bayesian networks using configurable discretizer actuators. Springer-Verlag Berlin Heidelberg. http://hdl.handle.net/10204/3957 |
en_ZA |
dc.identifier.chicagocitation |
Osunmakinde, I, and A Bagula. "Supporting scalable Bayesian networks using configurable discretizer actuators." N.p.: Springer-Verlag Berlin Heidelberg, 2009. http://hdl.handle.net/10204/3957. |
en_ZA |
dc.identifier.vancouvercitation |
Osunmakinde I, Bagula A. Supporting scalable Bayesian networks using configurable discretizer actuators. [place unknown]: Springer-Verlag Berlin Heidelberg; 2009. [cited yyyy month dd]. http://hdl.handle.net/10204/3957. |
en_ZA |
dc.identifier.ris |
TY - Book Chapter
AU - Osunmakinde, I
AU - Bagula, A
AB - The authors propose a generalized model with configurable discretizer actuators as a solution to the problem of discretizing massive numerical datasets. Their solution is based on a concurrent distribution of the actuators and uses dynamic memory management schemes to provide a completely scalable basis for the optimization strategy. This prevents limited memory from halting the process while minimizing the discretization time and adapting to new observations without re-scanning the entire old data. Using different discretization algorithms on publicly available massive datasets, the authors conducted a number of experiments which showed that using their discretizer actuators with Hellinger's algorithm results in better performance, in terms of memory and computational resources, than using the conventional discretization algorithms implemented in Hugin and Weka. By showing that massive numerical datasets can be discretized within limited memory and time, these results suggest integrating the configurable actuators into the learning process to reduce the computational complexity of modelling Bayesian networks to a minimum acceptable level.
DA - 2009-04
DB - ResearchSpace
DP - CSIR
KW - Intelligent systems
KW - Massive datasets
KW - Bayesian networks
KW - Discretization
KW - Scalability
KW - Natural computing algorithms
LK - https://researchspace.csir.co.za
PY - 2009
SM - 978-3-642-04920-0
T1 - Supporting scalable Bayesian networks using configurable discretizer actuators
TI - Supporting scalable Bayesian networks using configurable discretizer actuators
UR - http://hdl.handle.net/10204/3957
ER -
|
en_ZA |