Early gesture selectively predicts later language learning
Dev Sci. 2009 January; 12(1): 182–187.
Meredith L. Rowe and Susan Goldin-Meadow
Department of Psychology, University of Chicago, USA
Address for correspondence: Meredith L. Rowe, Department of Psychology, University of Chicago, 5848 S. University Ave, Chicago, IL 60637, USA; e-mail: firstname.lastname@example.org
The gestures children produce predict the early stages of spoken language development. Here we ask whether gesture is a global predictor of language learning, or whether particular gestures predict particular language outcomes. We observed 52 children interacting with their caregivers at home, and found that gesture use at 18 months selectively predicted lexical versus syntactic skills at 42 months, even with early child speech controlled. Specifically, number of different meanings conveyed in gesture at 18 months predicted vocabulary at 42 months, but number of gesture+speech combinations did not. In contrast, number of gesture+speech combinations, particularly those conveying sentence-like ideas, produced at 18 months predicted sentence complexity at 42 months, but meanings conveyed in gesture did not. We can thus predict particular milestones in vocabulary and sentence complexity at age 3 1/2 by watching how children move their hands two years earlier.
The average 10-month-old child does not yet produce intelligible speech but does communicate – through gesture (Bates, 1976; Bates, Benigni, Bretherton, Camaioni & Volterra, 1979). Moreover, early gesture use is linked to later word learning – the more a child gestures early on, the larger the child’s vocabulary later in development (Acredolo & Goodwyn, 1988; Rowe, Özçaliskan & Goldin-Meadow, 2006). In fact, we can predict which lexical items will enter a child’s verbal vocabulary by looking at the objects that child indicated in gesture several months earlier (Iverson & Goldin-Meadow, 2005).
Gesture use continues to precede, and to predict, children’s language development as they enter the two-word stage. Children who cannot yet combine two words within a single utterance can nevertheless express a two-word idea using gesture and speech together (e.g. point at cup+‘mommy’, referring to mommy’s cup; Butcher & Goldin-Meadow, 2000; Capirci, Iverson, Pizzuto & Volterra, 1996; Greenfield & Smith, 1976). Interestingly, the age at which children first produce this type of gesture+speech combination reliably predicts the age at which they first produce two-word utterances (Goldin-Meadow & Butcher, 2003; Iverson & Goldin-Meadow, 2005; Iverson, Capirci, Volterra & Goldin-Meadow, 2008).
Gesture thus forecasts the earliest stages of language learning. But why? Early gesture use might be an early index of global communicative skill. For example, children who convey a large number of different meanings in their early gestures might be generally verbally facile. If so, not only should these children have large vocabularies later in development, but their sentences ought to be relatively complex as well. Alternatively, particular types of early gesture use could be specifically related to particular aspects of later spoken language use. For example, a child who conveys a large number of different meanings via gesture early in development might be expected to have a relatively large vocabulary several years later, but the child might not necessarily produce complex sentences. In contrast, a child who frequently combines gesture and speech to create sentence-like meanings (e.g. point at hat+‘dada’ = ‘that’s dada’s hat’) early in development might be expected to produce relatively complex spoken sentences several years later, but not necessarily to have a large vocabulary.
Our goal in this study was to test gesture’s ability to selectively predict later language learning. We calculated two distinct gesture measures early in development (18 months) and explored how well each measure predicted two different language measures – vocabulary size and sentence complexity – later in development (42 months).
Participants
Fifty-two typically developing children (27 males, 25 females) participated in the study. Children were drawn from a larger sample of families in a longitudinal study of language development. These families were selected to be representative of the greater Chicago area in terms of ethnicity and income levels. All children were being raised as monolingual English speakers.
Procedure and measures
Parent–child dyads were visited in the home every 4 months beginning when the children were 14 months, and videotaped for 90 minutes engaging in their ordinary activities. A typical session included toy play, book reading, and meal or snack time. Our analyses were based on naturalistic data from observations at 18 and 42 months. In addition, children were given a standardized language assessment at 42 months, which was included in the analyses.
All speech and gestures on the videotapes were transcribed. The unit of transcription was the utterance, defined as any sequence of words and/or gestures preceded and followed by a pause, a change in conversational turn, or a change in intonational pattern. Transcription reliability was established by having a second coder transcribe 20% of the videotapes; reliability was assessed at the utterance level and was achieved when coders agreed on 95% of transcription decisions.
All dictionary words, as well as onomatopoeic sounds (e.g. woof-woof) and evaluative sounds (e.g. uh-oh), were counted as words. The number of word types (number of different intelligible word roots) served as a control measure of spoken vocabulary. At 18 months children produced an average of 40 different vocabulary words (SD = 33.4). Mean length of utterance (MLU) in words served as a control measure of spoken syntactic skill. At 18 months children produced an average MLU in words of 1.16 (SD = 0.16). Child word types and MLU at 18 months were positively related to one another (r = .24, p = .08).
Children produced gestures indicating objects, people, or locations in the surrounding context (e.g. point at dog), gestures depicting attributes or actions of concrete or abstract referents via hand or body movements (e.g. flapping arms to represent flying), and gestures having pre-established meanings associated with particular gesture forms (e.g. shaking the head ‘no’). Other actions or hand movements that involved manipulating objects (e.g. turning the page of a book) or were part of a ritualized game (e.g. itsy-bitsy spider) were not considered gestures.
We focused on two measures of gesture use during the early stages of language learning. (1) Gesture vocabulary: the number of different meanings the child conveyed via gesture (e.g. point at dog = dog; shake head = no). At 18 months, children produced an average of 33.6 different vocabulary items via gesture (SD = 21.8). (2) Gesture+speech combinations conveying sentence-like ideas: the number of gesture+speech combinations in which gesture conveyed one idea and speech another (e.g. point at cup+‘mommy’). The onset of these types of gesture+speech combinations has been found to precede, and predict, the onset of two-word utterances in both English-learning (Goldin-Meadow & Butcher, 2003; Iverson & Goldin-Meadow, 2005) and Italian-learning (Iverson et al., 2008) children. At 18 months, children produced an average of 11.0 combinations of this type (SD = 11.4). The two gesture measures were positively associated with one another (r = .60, p < .001). We used 18-month (rather than 14-month) measures as predictors because some children did not produce any words at all at 14 months, and fewer than half were producing gesture+speech combinations; in contrast, by 18 months, all of the children produced words (which allowed us to control for the size of the children’s early spoken vocabularies in our analyses), and 85% produced at least one gesture+speech combination.
Later measures of vocabulary size and sentence complexity
Children’s scores on the Peabody Picture Vocabulary Test (PPVT III; Dunn & Dunn, 1997), administered at 42 months, served as the outcome measure for later vocabulary skill. The PPVT is a widely used measure of vocabulary comprehension with published norms. The average normed PPVT score for our sample was 106 (SD = 17.12).
Children’s scores on the Index of Productive Syntax (IPSyn; Scarborough, 1990) were used as the outcome measure for later sentence complexity. IPSyn scores were calculated based on the spoken language each child produced during the 42-month videotaped session. The IPSyn gives children credit for producing different types of noun phrases, verb phrases, and sentences and does not measure tokens (i.e. it does not calculate the number of times each type is produced). It is therefore a measure of the range of structures the child is able to produce at a particular point in time (see Scarborough, 1990, for a description of scoring procedures). The average IPSyn score for our sample was 72 (SD = 10.6). Scores on the PPVT and IPSyn were positively related to one another (r = .37, p < .01).
We conducted a series of multiple regression analyses using children’s early gesture vocabulary and gesture+speech combinations at 18 months to predict vocabulary (PPVT) and sentence complexity (IPSyn) at 42 months. In order to determine whether including gesture improves our ability to predict later language above and beyond early speech predictors, we controlled for number of spoken word types produced at 18 months in our analyses of vocabulary size, and MLU at 18 months in our analyses of sentence complexity.
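The logic of these analyses – testing whether an 18-month gesture measure predicts a 42-month outcome above and beyond an 18-month speech control – can be sketched as an ordinary least squares regression. The following is a minimal illustration with simulated data; the variable names, coefficients, and noise levels are hypothetical and are not drawn from the study's data set.

```python
# Hypothetical sketch of the analytic approach: regress a 42-month
# outcome on an 18-month gesture measure while controlling for an
# 18-month speech measure. Pure-Python OLS via the normal equations.
import random

random.seed(1)
n = 52  # sample size matching the study

# Simulated 18-month predictors (means/SDs loosely echo the paper).
word_types = [random.gauss(40, 33) for _ in range(n)]
gesture_vocab = [random.gauss(34, 22) for _ in range(n)]
# Simulated outcome built so gesture adds predictive power beyond speech.
ppvt = [100 + 0.1 * w + 0.3 * g + random.gauss(0, 5)
        for w, g in zip(word_types, gesture_vocab)]

def ols(y, *xs):
    """OLS with intercept; returns [b0, b1, ...] by solving the
    normal equations X'X b = X'y with Gaussian elimination."""
    cols = [[1.0] * len(y)] + [list(x) for x in xs]
    k = len(cols)
    # Build X'X and X'y.
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j]))
          for j in range(k)] for i in range(k)]
    b = [sum(ci * yi for ci, yi in zip(cols[i], y)) for i in range(k)]
    # Forward elimination with partial pivoting.
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    # Back substitution.
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

b0, b_speech, b_gesture = ols(ppvt, word_types, gesture_vocab)
print(f"intercept={b0:.1f}, word types b={b_speech:.2f}, "
      f"gesture vocab b={b_gesture:.2f}")
```

Because the speech control enters the model alongside the gesture predictor, a reliably nonzero gesture coefficient indicates variance in the outcome explained by early gesture over and above early speech – the key inferential move in the analyses reported here.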