Published on 08/04/11 at 14:00

Creative Commons CC-BY license

The peripheral auditory system of lizards is highly directional. It is symmetrical about the midline and relatively simple in design. Because the lizard's head is small relative to the wavelengths of the sounds to which it responds, incident sound waves diffract around the head, producing approximately equal sound pressure at the two ears. There is, however, a phase difference between the sound waves arriving at the two sides, whose magnitude depends on the direction of the sound source relative to the animal. The system converts this small phase-difference cue into a relatively larger difference in the perceived amplitude of the sound at either ear.
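The phase cue described above can be illustrated with a minimal far-field model: a plane wave arriving from azimuth θ reaches one ear slightly before the other, and that arrival-time difference corresponds to a frequency-dependent phase difference. The ear separation (15 mm) and the sound speed used below are illustrative assumptions, not values taken from this work, and the sketch models only the phase cue itself, not the phase-to-amplitude conversion performed by the coupled ears.

```python
import math

def interaural_phase_difference(theta_deg, freq_hz, ear_sep_m=0.015, c=343.0):
    """Phase difference (radians) between the two ears for a plane wave
    arriving from azimuth theta_deg (0 = straight ahead).

    ear_sep_m and c are assumed illustrative values (15 mm ear separation,
    343 m/s speed of sound in air), not parameters from the paper."""
    # Arrival-time difference due to the extra path length between the ears
    itd = (ear_sep_m / c) * math.sin(math.radians(theta_deg))
    # Convert the time difference to a phase difference at this frequency
    return 2.0 * math.pi * freq_hz * itd
```

For a source straight ahead the phase difference is zero, and it grows monotonically toward the side, which is the directional cue the peripheral system works with.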
In real lizards, the auditory cues produced by the peripheral auditory system are interpreted by the nervous system. This raises the question of how well the peripheral auditory system can represent sound direction. The perceived sound amplitudes cannot be mapped linearly to sound direction, because they vary non-linearly over the frontal region; a map between the two could, however, be built by neural processes. It has previously been demonstrated with robotic models that sound can be localized from such auditory cues, using a hand-designed decision model that maps the cues to motor control outputs [6]. To automate the construction of such a decision model, unsupervised learning algorithms can be used. We employ reinforcement learning in simulation to train a Cerebellar Model Articulation Controller (CMAC) as a decision model that maps the auditory cues to sound direction.
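The CMAC itself is a simple tile-coding function approximator: several coarse grids are overlaid on the input space with small offsets, each input activates one cell per grid, and the output is the sum of the active cells' weights, trained with a delta rule. The sketch below shows the mechanism on a 2-D input; the tiling counts, learning rate, input range, and the linear toy target are illustrative assumptions, not the parameters or auditory cues used in this work.

```python
import numpy as np

def make_cmac(n_tilings=8, n_bins=10, lo=0.0, hi=1.0):
    """CMAC over a 2-D input in [lo, hi]^2: n_tilings overlapping grids,
    each shifted by a fraction of one cell; an input activates exactly
    one cell per tiling. All parameter values are illustrative."""
    weights = np.zeros((n_tilings, n_bins + 1, n_bins + 1))
    offsets = np.linspace(0.0, 1.0, n_tilings, endpoint=False)
    width = (hi - lo) / n_bins

    def active_cells(x):
        # One (tiling, row, col) index per tiling, clipped to the grid edge
        return [(t, min(int((x[0] - lo) / width + off), n_bins),
                    min(int((x[1] - lo) / width + off), n_bins))
                for t, off in enumerate(offsets)]

    def predict(x):
        # Output is the sum of the weights of all active cells
        return sum(weights[c] for c in active_cells(x))

    def update(x, target, lr=0.3):
        # Delta rule: spread the prediction error evenly over active cells
        err = target - predict(x)
        for c in active_cells(x):
            weights[c] += lr * err / n_tilings
        return err

    return predict, update

# Toy demonstration: learn a simple mapping from a pair of cue values to a
# direction-like scalar (here the difference of the two inputs, an assumption).
predict, update = make_cmac()
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.random(2)
    update(x, x[0] - x[1])
```

The overlapping shifted tilings are what give the CMAC its local generalization: nearby inputs share most of their active cells, so training on one point also improves its neighbors.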
The simulation results show that the CMAC is able to learn such a map, in which the characteristics of the data points in the input space are correctly reflected. Furthermore, the results of the supervised and unsupervised approaches agree with each other. The implementation and real-world trials of the unsupervised approach