Laura F. Meyers, Ph.D., Research Linguist, UCLA
13900 Fiji Way, #310, Marina del Rey, CA 90292
(310) 306-4789
Handouts reprinted with the permission of the author and of Frank Murphy.
Workshop #22, presented at the 27th Annual National Down Syndrome Congress Convention, Pittsburgh, PA, August 7, 1999.
Abstract: Learning and using spoken and written language can be very difficult for toddlers, school-aged children and adults with Down syndrome. The disabilities associated with the syndrome can make speech and writing hard to understand and produce. Technology (computers and communication devices) can help bypass the disabilities. Effective methods of using technology with toddlers, school-aged children and adults are described and illustrated with videotapes. Information about specific technology, ways of getting the technology into the classroom, and writing IEP and therapy goals is discussed.

"Teach me my language, not your language." (Herbert Brun, 1986)
Speech and handwriting make access to language difficult for many children and adults with Down syndrome. It's hard for them to understand the speech of other people. Their own speech is often unintelligible. Motor problems make handwriting something to dread. Technology can make access to language easier, providing a clear intelligible speaking voice and printed output. Technology is only as effective as the teaching methods used to implement it. This workshop presents communication devices, computers, software, IEP/IPP goals and effective teaching methods for implementing technology.
What is language? Language is a system of rules, or grammar, that children and adults build in their heads to link the speech and writing around them to their meaningful understanding of the world. The grammar lets them produce and understand new meaningful speech and writing of their own. Language is generative: by combining a limited set of words and grammatical markers in different orders, a child or adult can come up with an unlimited number of new utterances to express new meanings. No set of messages can ever match the generative power of language. Technology is a success only when it can be used to combine words and grammatical markers in speech output and/or text to come up with any new utterance needed.
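To make the idea of generativity concrete, here is a minimal sketch (an invented illustration with a made-up toy vocabulary, not software from the workshop): even four subjects, four verbs and four objects combine into 64 distinct sentences, and a recursive frame removes any upper bound.

```python
from itertools import product

# A toy grammar, invented for illustration only.
# Subjects are chosen so every combination stays grammatical.
subjects = ["I", "you", "we", "they"]
verbs = ["want", "see", "like", "have"]
objects = ["a hamburger", "the book", "my turn", "a picnic"]

# A limited word set combined in a fixed order already yields
# 4 * 4 * 4 = 64 distinct utterances.
utterances = [f"{s} {v} {o}." for s, v, o in product(subjects, verbs, objects)]
print(len(utterances))   # 64
print(utterances[0])     # I want a hamburger.

def embed(utterance: str, depth: int) -> str:
    """Wrap an utterance in a reporting frame `depth` times,
    e.g. 'Dad said that I want a hamburger.'"""
    for _ in range(depth):
        utterance = f"Dad said that {utterance.rstrip('.')}."
    return utterance

print(embed("I want a hamburger.", 2))
# Dad said that Dad said that I want a hamburger.
```

The point of the sketch is the arithmetic: word lists grow one item at a time, while the utterances built from them grow multiplicatively, and recursion removes the bound entirely. That is why a fixed set of recorded messages can never substitute for grammar.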
How are spoken and written language learned? People learn their own language. They listen to speech and look at writing and figure out how it can mean something, based on their understanding of their own worlds. They guess what the speech and text might mean in order to "make sense" of it. This happens during active participation in conversations and in looking at and producing writing.
The function of language is communication of meaning through speech and writing. The most important rules for implementing technology to help children learn language are:
Some successful strategies for introducing a communication device:
- Ask the family what is most important to the student.
- Talk with the student about possible messages.
- Let the student choose the messages.
- Ask the student's permission before recording on the device.
- Always label each message clearly in writing in front of the student.
- Begin by recording whole messages under each switch.
- Move on to showing the student how to combine messages to make a whole utterance (I WANT + A HAMBURGER).
- Generalize use of the device to all settings: home, school, work, community.
- Train people in these settings to help the student use the device.
- Keep a notebook with the student for communication among the staff. Note successes, problems, new messages needed, etc.
- Expect generalization from using the device to the student's own speech and writing.
Why use assistive technology? Assistive technology can help children bypass the processing problems that make the speech signal and written text inaccessible to them. By functioning as an access tool, technology can allow children or adults to participate in the normal processes of spoken and written language learning. This can best be done by linking the child's or adult's understanding of the world, their personal meaning systems, with technology-enhanced speech output, computer monitor graphics and text, during real interactions with significant peers and adults.

Text and graphics on keyboards, monitors and paper provide learners with control of a visual signal for language. Unlike the speech signal (synthesized or human), text and graphics remain in the environment. For learners with problems with auditory perception, auditory processing or auditory memory, text and graphics can support the speech signal, showing the learners the segments, words and language structure that have been said, so that these can be linked to speech sounds and to meaning. For learners with strengths in the visual modality, text and graphics can scaffold learning of the auditory-vocal signal of speech. Text on monitors and printouts can support the development of speech skills by providing visual support for the articulation of segments, words and sentences.

Synthesized speech output is a powerful tool for learning language. To be effective, the speech output must be under the control of the learners. Learners can use the speech signal to develop comprehension skills by repeatedly activating the speech output to learn the sounds of words, phrases and sentences, hearing the same signal as many times as needed. When learners have control of speech output, they can use the signal as a voice. For learners with problems with visual skills, speech output can be linked to text and graphics, providing meaningful auditory support for visual processing. For learners with motor problems (mouths, tongues, hands, eyes), computers can provide access to both speech and text.
Beginning language goals: understanding words for people, objects and actions; producing first words; learning to use words to talk about people, objects and actions; producing first word combinations; learning simple sentence structure; learning how to understand sentences and how to use sentences to communicate with another person.
Recommended activities: have hearing evaluated fully and often; use similar language on a daily basis during play and daily living routines at home; augment speech with toys, objects, actions, sign language, picture books with text, flash cards of favorite words, communication devices with speech output, and computer-generated speech output and monitor graphics; read favorite books daily; enroll your student in reading programs at the local library; creative writing on computers with speech output; fully included preschool/community with good peer language models; preschool/school/therapy (special) with enriched language.
Implementation of technology: Technology should be incorporated into natural beginning language learning contexts, augmented with objects and actions, with slow, repeated, predictable speech output under the control of the learner, and with pictures, written language and signs. Play with toys is as important to language development as use of technology, since beginning language is learned during play. During a research project, toddlers with Down syndrome learned and used significantly more language when they controlled both speech output and graphics. They learned less when they controlled speech output only (Meyers, 1990).
Videos: Rachel (15 yrs, DS + autism, AlphaTalker, "She got me"; Mac PowerBook 180, IntelliTalk, facilitated communication, 19 yrs, "You look like friend").
IEP or therapy goals for communication device use:
Why work on written language?
Videos: Courtney (8 yrs, writing with younger sister and brothers, IntelliTalk, IntelliKeys, "My turn, Sarah, Jeffrey"); Catherine (33 yrs, "best friend-love").
How do people normally learn written language? Meaning comes first: the process of understanding written language starts with understanding entire stories or statements and then goes on to understanding sentences, words, and finally letters, the reverse of the way most children are expected to "read" in school (Smith, 1986, p. 33). The impetus for reading and writing is a functional one, just as the impetus for learning to speak and listen was in the first place. We learn to speak because we want to do things we cannot do otherwise, and we learn to read and write for the same reasons (Halliday). Children probably begin to read from the moment they become aware of print in any meaningful way. They strive to make sense of print before they are able to recognize many of the actual words. Children begin to spontaneously assign meaning to the print in their environment (Smith). You have to be a reader and writer to get control of form. Control of form is not a prerequisite to language learning; rather, the ability to participate provides a growing control of form (Smith).
Videos: Geoffrey (Macintosh PowerBook 180, IntelliTalk, 5 yrs, "I love you"; reading the monitor, "Geoffrey, I love you, Sarah"; watching video of himself on TV; reading the printout on his dad's lap, "Cheeseburger, Dollie likes picnics, Sarah is under the bed").
Teaching speaking through writing (Meyers, 1994). Hypothesis: children with Down syndrome can learn to speak in grammatical sentences by writing on computers with speech output about personally meaningful topics. Each subject wrote a 30-page book, one page per session: ten sessions writing on a computer with speech output, ten sessions writing on a computer without speech output, and ten sessions writing with pens. The methods combined a whole language approach with phonics. The students chose the topic and what they wanted to say about it. The researchers helped them get their meaning into grammatical sentences by providing help with language structure and vocabulary. The children were shown how to sound out the segments of the words they were typing. The children were encouraged to use their books as reference books, to look up language structure, vocabulary and spelling.
Results: The children used significantly more spontaneous grammatical intermediate and complex sentences during a videotaped language sample elicited after 10 sessions of creative writing on a computer with speech output than they did at baseline, after 10 sessions of creative writing without speech output, or after 10 sessions with pencils and paper. The combination of being freed of the attention requirements of handwriting, seeing their personally meaningful text perfectly produced on the monitor screen and on the printout, and hearing the text in speech output was clearly the most effective intervention.
Videos: James (Apple IIc, IIe or IIGS, Echo speech synthesizer, Keytalk software; 16 yrs; baseline, "her nice"; 5th speech output session, "next week on Tuesday").
IEP or therapy goals for writing:
Provide the auditory, visual, motor and social support (scaffolding) needed to access the technology. Some forms of scaffolding include:
- Visual scaffolding: eye exams, corrective lenses, pictures, toys and other objects, text on and off of the computer.
- Auditory scaffolding: hearing evaluations, sounding out letters, phonic prompts, models of language structure and vocabulary, memory prompts, speech output.
- Motor scaffolding: special seating, keyguards and expanded keyboards, facilitated communication (Crossley, 1997).
- Social scaffolding: demonstrating how to participate in conversations, convey meaning to others, and clear up problems with comprehension.
Technology can function as cognitive scaffolding, providing access to speech and text.
Children and adults with Down syndrome need long-term teaching, just like people without disabilities. They learn over time. They keep on developing new skills. Teaching is not prescribing what should be learned, imposing information upon the learner, testing, or labeling. The teacher must have knowledge of language and language development and be available for a long-term, consistent commitment. The teacher must know the person, be aware of what is immediately personally meaningful to that person, and have the skill to teach the person how to use technology to link that meaning to spoken and written language.
Video: Mia Peterson (23 yrs, "I wanted to be that researcher and I am!").
Recommended technology and software:
Revised: November 8, 1999.