ABSTRACT

In this thesis we study the concepts of machine learning and adaptation in contrast to preprogrammed intelligent behavior. Using an appropriate nomenclature, the field of Learning Automata is reviewed, as well as the paradigms and controversies of machine learning. A new model for learning is discussed, based on a hierarchical approach to stochastic learning. A parametric computer simulation is used to test several hypotheses based on the model. Results indicate that a mixed Bottom-Up/Top-Down coordination mode is adequate for learning.

ACKNOWLEDGEMENTS


I would like to express my gratitude to all the people who made this research possible. Among them, my advisor, Peter Bock, for his constant effort to move me toward new ideas in this academic field. To my brother Guillermo, for his full and unconditional support. To Dr. James Albus, for his innovative ideas on hierarchical control. To my sister Gilda, for her proofreading and typing. Finally, to my wife Mary Alcocer, for her immense love and patience.

This work was made possible by grant No. 18100 from the National Council of Science and Technology of México (CONACYT).

I. INTRODUCTION


Is machine learning feasible?

What sort of structure is required to support it?



These two fundamental questions are addressed in the present work. The motivation behind this dissertation lies in the idea that learning models can be improved if new data structures and hierarchical algorithms are incorporated into them. So far, excellent results have been obtained with preprogrammed models of intelligence; it is felt, however, that to continue only with this line of research in artificial intelligence would be quite limiting.

Learning models lay dormant for a long period of computing history because they required new hardware and more advanced software techniques. Today we have VLSI circuits, complex operating systems, and well-developed theories of algorithms and data structures. We strongly feel that it is time to re-evaluate learning and adaptation in computers.

The main disadvantage of learning models lies in the fact that adaptation and learning are slow, resource-consuming processes. Preprogrammed models, on the other hand, are highly inflexible, requiring the modification of their own structures whenever a change in the environment takes place.

As a compromise, we present a model that is adaptable but requires some basic initial structure. We feel that this compromise yields improved learning models that are still able to modify their internal structure whenever an environmental change occurs. We begin our study with a complete review of the literature on machine learning, centered chiefly in the areas of artificial intelligence and learning automata; in our review we also briefly touch on the fields of psychology and robotics.

We provide an adequate framework for the formal study of machine learning. Based upon that framework, we hypothesize a new model that is both hierarchical and collective; it is therefore named the Hierarchical Collective Learning Stochastic Automaton (HCLSA). The model is then tested through simulation studies and the results are analyzed.

The HCLSA model is based on a continuous trial-and-error process with the added characteristic that the evaluation is made on a single collective basis and only at the root level of the hierarchy. The scientific method (analysis, hypothesis, synthesis, and validation) was used to investigate the properties and viability of the HCLSA model. The results of the parametric simulation indicate that a mixed mode of Bottom-Up/Top-Down coordination is the most adequate mode for learning. A simple example of hierarchical learning is presented in the appendices; it provides a clear view of the concepts discussed in this thesis.
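
To make this trial-and-error scheme concrete, the following sketch shows a two-level hierarchy of stochastic learning automata in which a single binary evaluation is delivered only at the root and then propagated downward. It is an illustrative sketch, not the thesis implementation: the linear reward-inaction update, the two-level topology, the learning rate, and the toy environment function are all assumptions chosen for brevity.

    # Illustrative sketch only (not the thesis implementation): a two-level
    # hierarchy of linear reward-inaction (L_R-I) stochastic automata in which
    # one binary reward, returned at the root, drives every update.
    import random

    class StochasticAutomaton:
        """Keeps a probability vector over its actions and learns by L_R-I."""
        def __init__(self, n_actions, learning_rate=0.1):
            self.p = [1.0 / n_actions] * n_actions   # action probabilities
            self.a = learning_rate

        def choose(self):
            r, acc = random.random(), 0.0
            for i, pi in enumerate(self.p):
                acc += pi
                if r <= acc:
                    return i
            return len(self.p) - 1                   # guard against rounding

        def reinforce(self, action, reward):
            # Reward: move probability mass toward the chosen action.
            # Penalty: leave the vector unchanged (reward-inaction).
            if reward:
                for i in range(len(self.p)):
                    if i == action:
                        self.p[i] += self.a * (1.0 - self.p[i])
                    else:
                        self.p[i] *= (1.0 - self.a)

    # Hypothetical hierarchy: the root chooses one of two subordinate automata,
    # the chosen subordinate picks one of three primitive actions, and the
    # environment judges the pair as a whole with a single stochastic reward.
    root = StochasticAutomaton(n_actions=2)
    children = [StochasticAutomaton(n_actions=3) for _ in range(2)]

    def environment(child_index, action):
        # Toy environment: only the pair (child 1, action 2) is favourable.
        return random.random() < (0.9 if (child_index, action) == (1, 2) else 0.1)

    for _ in range(5000):
        c = root.choose()
        a = children[c].choose()
        reward = environment(c, a)        # single collective evaluation at the root
        root.reinforce(c, reward)         # the same reward is propagated downward
        children[c].reinforce(a, reward)

    print("root probabilities:   ", [round(p, 2) for p in root.p])
    print("child 1 probabilities:", [round(p, 2) for p in children[1].p])

The point of the example is the reward path: the environment never evaluates the subordinate automata directly; it judges the collective output at the root, and that single reward signal is then used to adjust the action probabilities at every level that participated in the trial.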