EPIA'03 - 11th Portuguese Conference on Artificial Intelligence
ALEA -- Workshop on Artificial Life and Evolutionary Algorithms
Session: December 6, 11:00-12:00, Room B
Title:
Yerkes-Dodson Law in Intelligent Agents' Training
|
Sarunas Raudys and Viktoras Justickis |
Abstract: |
We consider the training of intelligent agents from the point of view of the well-known Yerkes-Dodson Law (YDL), which states that stimulation of medium intensity produces the fastest learning. A single-layer perceptron trained by gradient descent on a classification task serves as a simplified model of a trainable agent, with the perceptron's desired outputs playing the role of stimulation strength. We found that very weak stimulation causes slow learning, while strong stimulation drives the weights to large values; large weights in turn diminish the gradient and slow learning down. Understanding this interaction between learning and stimulation may serve as a sound example of how computational studies can shed light on problems that are difficult to investigate with the research tools of psychology, education and biology. From an engineering point of view, the most relevant result is that knowledge of the relation between adaptation and stimulation indicates a potential direction for building functional intelligent agents.
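The mechanism the abstract describes can be illustrated with a minimal sketch: a single-layer tanh perceptron trained by gradient descent on a mean-squared-error loss, where the magnitude of the desired outputs stands in for stimulation strength. The toy two-class Gaussian data and all parameter values below are our own illustrative assumptions, not the authors' exact experimental setup.

```python
import numpy as np

def train(target, epochs=300, lr=0.5, seed=0):
    """Single-layer tanh perceptron trained by gradient descent (MSE loss).

    `target` is the magnitude of the desired outputs (+-target), used here
    as a stand-in for stimulation strength.  Data: two 2-D Gaussian classes
    (an illustrative choice, not the paper's setup).
    """
    rng = np.random.default_rng(seed)
    X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
                   rng.normal(+1.0, 1.0, (100, 2))])
    y = np.hstack([-target * np.ones(100), +target * np.ones(100)])
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        out = np.tanh(X @ w + b)
        # MSE gradient through tanh: the (1 - out**2) factor shrinks as the
        # weights (and hence |out|) grow -- the gradient diminution the
        # abstract attributes to strong stimulation.
        delta = (out - y) * (1.0 - out ** 2)
        w -= lr * X.T @ delta / len(X)
        b -= lr * delta.mean()
    err = np.mean(np.sign(X @ w + b) != np.sign(y))
    return err, np.linalg.norm(w)
```

Sweeping `target` over, say, 0.1, 0.5 and 0.99 lets one compare the resulting weight norms: larger desired outputs force larger weights, which is the precondition for the gradient shrinkage described above.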