Department of Psychology, Tulane University, USA.
Individuals possess functional maps of the body. They are able to localize targets on their bodies with their hands regardless of where the target is positioned on the body and regardless of where the hands are positioned in space. This type of functional body map has great adaptive value. It helps individuals engage in activities centered on the body, including feeding, grooming, removing stimuli, tool use, and many other basic skills of daily living. Little, however, is known about when or how functional maps of the body develop in humans. Additionally, such developmental information might inform the design of artificial agents, for which knowledge of body layout is crucial for effective functioning in an environment.
In this presentation, we focus on the origins of body maps in infants. For this purpose, we have developed new procedures to assess how infants manually locate targets on their bodies. We place a small vibrating buzzer on different parts of the body and examine whether infants can localize that buzzer by reaching to it. In our first study, we tested infants cross-sectionally from 7 to 24 months of age (N=60) and placed buzzers at different locations on the face (forehead, ears, mouth) or on the arm and hand (shoulder, elbow, and hand). In our second study, we studied infants longitudinally from 2 to 7 months of age (N=15) and placed the buzzers at different locations on the face (forehead, mouth, chin, ears) to examine how face maps develop. Results across studies suggest that body maps emerge gradually in infants. Infants succeed at locating targets in the mouth region earlier than they do in other facial regions. Additionally, infants succeed at localizing targets on the hand before they succeed at localizing targets at other regions on the arm. We also found that infants show strong patterns of lateralization when they localize targets: they reach ipsilaterally for targets on the face, but contralaterally for targets on the arm (where ipsilateral reaches are not biomechanically possible). We conclude by considering mechanisms for the emergence of body maps in human infants based on experience and self-touch.
Research scientist, Laboratoire Psychologie de la Perception, Paris Descartes University, Paris, France.
We studied 2- to 6-month-old infants' responses to vibrotactile stimuli presented at five locations: the forehead, right hand, left hand, right foot, and left foot (five conditions). Vibrotactile stimulation was provided by small vibrators that were attached to the infant's body, one at a time in counterbalanced order. Each trial lasted 35 s, after which the experimenter removed the vibrator and attached it to the next location. In a (sixth) baseline condition no vibrator was attached to the infant's body and spontaneous movements were recorded. Four age groups were compared cross-sectionally (3-, 4-, 5-, and 6-month-olds), and a group of infants was followed longitudinally from 2 to 6 months of age.
In order to compare limb activity across conditions, we analyzed video recordings of infants' responses using movement analysis software that calculated the distance travelled by each limb in the two-dimensional plane of the video display. We also performed qualitative analyses of the infants' movements, coding the direction of gaze, the cases where the infant touched its own body, and movements towards the vibrator.
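The quantitative measure described above can be illustrated with a short sketch: assuming the movement-analysis software exports per-frame (x, y) coordinates for each tracked limb, the distance travelled is the sum of frame-to-frame displacements in the plane of the video display. The function names and data layout below are illustrative and are not those of the actual software used.

```python
import numpy as np

def path_length_2d(positions: np.ndarray) -> float:
    """Total distance travelled by one limb in the 2D video plane.

    positions: array of shape (n_frames, 2) holding the limb's tracked
    (x, y) coordinates in each frame (layout assumed for illustration).
    """
    steps = np.diff(positions, axis=0)                # frame-to-frame displacement
    return float(np.linalg.norm(steps, axis=1).sum())

def limb_activity(tracks: dict) -> dict:
    """Distance travelled per limb, e.g. {'left_hand': ..., 'right_foot': ...}."""
    return {limb: path_length_2d(xy) for limb, xy in tracks.items()}
```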
Our hypothesis was that before they actually reach for the vibrating target, which, according to previous studies, occurs around 6 months of age, infants would demonstrate emerging knowledge about their body's configuration by producing specific movement patterns associated with the stimulated body area. Furthermore, based on earlier studies that used conjugate reinforcement, we hypothesized that at 3 months infants would produce general whole-body movement patterns upon stimulation, and that more localized movements would gradually emerge with age.
Results showed that at 3 months, infants responded with an increase in general activity when the vibrator was placed on the body, independently of the vibrator's location. Topographical awareness of the body seemed to appear around 5 months, with specific responses to stimulation of the upper body and hands emerging first, followed by the differentiation of movement patterns associated with stimulation of the feet. Qualitative analyses revealed specific movement types reliably associated with each stimulated location by 6 months of age, possibly preparing infants' ability to actually reach for the vibrating target.
We discuss this result in relation to newborns' ability to learn specific movement patterns through intersensory contingency, as well as in relation to studies that proposed a different sequential order for the emergence of awareness of different body locations.
Researcher, iCub Facility, Istituto Italiano di Tecnologia, Italy.
Humans and animals seamlessly command their complex bodies in space and concurrently integrate multimodal sensory information. To support these capabilities, some representation of the body seems necessary, and a number of concepts, such as body schema and body image, have been proposed. However, while the field is rich in experimental observations, it largely lacks mechanisms that explain them: computational models are scarce and at most address isolated subsystems. Humanoid robots possess morphologies (physical characteristics as well as sensory and motor apparatus) that are in some respects akin to human bodies and can thus be used to expand the domain of computational modeling by anchoring it to the physical environment and a physical body, and by allowing complete sensorimotor loops to be instantiated.

We present our modeling endeavor on the iCub, a baby humanoid robot with 53 degrees of freedom, an anthropomorphic design, two cameras, encoders in every joint, and whole-body artificial skin. The developmental trajectory capitalizes on learning about the body through self-stimulation, or self-touch. This behavior was instantiated on the robot, and the data thus collected are fed into biologically motivated learning algorithms (such as self-organizing maps) in order to first obtain analogs of primary tactile and proprioceptive maps (areas 3b and 3a). Later, we explore the contingencies induced in tactile and proprioceptive afference by self-touch configurations and study how they may give rise to first body models with spatial properties.
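As a concrete illustration of the first stage (obtaining analogs of primary maps), the following minimal sketch trains a Kohonen self-organizing map on proprioceptive samples such as joint-angle vectors recorded during self-touch. The grid size, input dimensionality, learning schedule, and the random placeholder data are assumptions made for illustration and do not reflect the actual iCub implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 10          # 10 x 10 map of units (illustrative size)
DIM = 7            # e.g. 7 arm joint angles (illustrative dimensionality)
EPOCHS = 2000

# Map weights: one DIM-dimensional prototype per grid unit.
weights = rng.uniform(-1.0, 1.0, size=(GRID, GRID, DIM))

# Grid coordinates of every unit, used by the neighbourhood function.
ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")

# Placeholder "self-touch" data: joint configurations scaled to [-1, 1].
samples = rng.uniform(-1.0, 1.0, size=(5000, DIM))

for t in range(EPOCHS):
    x = samples[rng.integers(len(samples))]

    # Best-matching unit: the prototype closest to the input sample.
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)

    # Learning rate and neighbourhood radius decay over training.
    frac = t / EPOCHS
    lr = 0.5 * (1.0 - frac)
    sigma = GRID / 2.0 * (1.0 - frac) + 1.0

    # Gaussian neighbourhood around the winning unit on the grid.
    grid_dist2 = (ii - bi) ** 2 + (jj - bj) ** 2
    h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))

    # Pull each prototype toward the input, weighted by the neighbourhood.
    weights += lr * h[..., None] * (x - weights)
```

On the robot, the same kind of procedure would be applied to tactile afference from the artificial skin, and the resulting maps would serve as the substrate for learning the tactile-proprioceptive contingencies mentioned above.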
Research director, Division of Cognitive Neuroscience Robotics, Institute for Academic Initiatives, Osaka University, Japan.
We have implemented a child-like body with a 22-DOF upper body and 10-DOF legs, both compliant owing to a pneumatic drive system. The whole body could serve as a platform for developmental studies. In the talk, we introduce preliminary experiments and discuss several issues for future applications.
Andrew Bremner, PhD. Reader in psychology and head of department, Goldsmiths, University of London, UK.
We still know relatively little about how human infants and children come to perceive their own bodies and the relationship between external events and the body. In the first part of this talk I will report on recent findings from my lab pertaining to how infants and young children come to be able to process the multisensory relationships which specify their own bodies and an embodied environment. I will focus particularly on the origins of representations of the position of one's own hand and the location of touches on the body and in external space. I will then go on to describe another programme of research investigating the development of an ability to tailor movements of the body to achieve specific goals in external space. These latter studies demonstrate developmental trajectories in human infancy whereby purposeful actions become more specialised.
Researcher, Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy.
Newborn children develop progressively more complex sensorimotor skills by exploring their own bodies and the external stimuli surrounding them. It has been proposed that this development is guided by the detection of contingencies between their own movements and the consequent multimodal sensory effects. These processes are investigated, both with empirical experiments involving babies and with computational models, within the recently funded European project "GOAL-Robots -- Goal-based Open-ended Autonomous Learning Robots". This work focuses on a preliminary computational model of those processes. The progressive enhancement of the model is expected to support the interpretation of the empirical experiments and to suggest new ones based on its predictions. Moreover, it is expected to support the design of new open-ended learning robotic controllers.

The model is based on three components implementing three learning processes: (a) a component, formed by stacked Kohonen neural networks, supporting the acquisition of increasingly abstract goals based on experienced changes in the environment; (b) a component, formed by an echo-state neural network, supporting the acquisition of the skills needed to accomplish the goals; and (c) a component, based on a predictor of the accomplishment of the pursued goal, used to measure the improvement of each skill and hence to intrinsically motivate the selection of its goal. The computational model is used as the controller of a simulated agent, composed of two kinematic 3-DoF arms, acting in a 2D environment. Multimodal sensory information from proprioception, touch, and vision is used by the system to form goals and guide skill learning. Results of the initial tests of the model are presented, together with their possible implications for the empirical experiments.
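As a rough sketch of component (c), the following illustrative code selects goals according to a competence-improvement signal: the recent change in how reliably a goal is accomplished, as scored in the full model by the goal-accomplishment predictor. Class and parameter names are hypothetical, and the other two components are not modeled here.

```python
import random
from collections import defaultdict, deque

class CompetenceBasedSelector:
    """Intrinsically motivated goal selection based on competence improvement
    (an illustrative stand-in for component (c), not the project's actual code)."""

    def __init__(self, n_goals, window=20, eps=0.1):
        self.n_goals = n_goals
        self.eps = eps                                  # small exploration probability
        # Recent accomplishment scores per goal (e.g. predictor outputs in [0, 1]).
        self.history = defaultdict(lambda: deque(maxlen=window))

    def _improvement(self, goal):
        h = list(self.history[goal])
        if len(h) < 2:
            return 1.0                                  # unexplored goals look interesting
        half = len(h) // 2
        older = sum(h[:half]) / half
        newer = sum(h[half:]) / (len(h) - half)
        return abs(newer - older)                       # competence change drives selection

    def select_goal(self):
        if random.random() < self.eps:
            return random.randrange(self.n_goals)       # occasional random exploration
        return max(range(self.n_goals), key=self._improvement)

    def update(self, goal, accomplishment):
        """Record how well the pursued goal was accomplished (0..1)."""
        self.history[goal].append(accomplishment)
```

In the full architecture, the selected goal would condition the echo-state skill component, and the accomplishment score passed to update() would come from the goal predictor, closing the intrinsic-motivation loop described above.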