
If The TactilEar Can Bring Hearing To The Deaf, What Can It Do For Those Learning A New Language?

Abstract:
Can the phoneme sequences of speech, when presented via the TactilEar (a multi-stimulus, vibrotactile device worn on the wrist), be understood as language? The eight prelingually deaf children in an oralist program participating in the first field test of the TactilEar received between two and four and one-half hours of instruction. Those 13-15 years old attained 50% accuracy with one hour of instruction on six-item tests given from a field of nine items. Children 3-6 years old attained 50% test accuracy within two hours of instruction. For these tests the coded speech carried the entire speech message; no lipreading was possible.
The TactilEar, a multi-stimulus, vibrotactile device worn on the wrist, offers a possible second means of speech reception, one that may short-circuit the auditory reception parameters established in adulthood. This means of tactile speech reception defines the phonemes of speech as sound-specific holographic tactile patterns and presents these patterns at frequencies proportional to the pitch of the voice. Its first field test with prelingually deaf children is reported in the following text. Its application in language acquisition is unexplored; however, the TactilEar adds a fresh input factor in that it can make the adult a child again in oral language learning and speech development.
This material is based upon work supported by the National Science Foundation under award number ECS-8260593 in the Small Business Innovative Research Program. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation.

KEYWORDS: hearing, language acquisition, handicapped, tactile sense, vibrotactile device, TactilEar, prelingual deafness, research, experiment results
Viewing deafness as a condition of inaccessibility to speech and to sounds in the environment, Ear Three Systems is developing a means of enabling those with hearing loss to receive identified sounds tactilely. The resultant product is the TactilEar. Results of the first field test of the TactilEar, research sponsored by the National Science Foundation Small Business Innovative Research program, are reported here.
BACKGROUND
The common sensory substitution for hearing loss is visual. Lipreading, Total Communication, sign language, and Cued Speech all require a visual substitution for the auditory loss. The phonemic, syllabic coding device of R. Orin Cornett, the AutoCuer, requires visual reception of code increments and the speaker's lips to interpret the speech message. This puts multiple constraints on the AutoCuer listener; however, the reception provides complete phoneme definitions (Cornett, 1975; Nicholls, 1979).
In child development, the tactile, visual, and auditory senses all serve major learning functions for the young child. Formal education tends to deemphasize the role of the tactile sense in favor of the visual sense, substituting the picture and text for the object and touch experience. Children with hearing loss are then operating nearly 100% visually under these conditions.
With the communication role of the visual sense emphasized greatly in language learning, some of the observational experiences of childhood can be missed. This deprivation was obvious to the researcher while teaching high school basic math at the Model Secondary School for the Deaf (Gallaudet), after long experience with 3-12 year olds in the Montessori environment.
Timing increments are far less well resolved visually than with the auditory or tactile senses. The sound couplets (a consonant and a vowel sound) signaled in Cued Speech enable the hand movement and its visual recognition to occur in speech cadence. Movies (16mm with sound track) operate at 24 frames per second, or 41.7 milliseconds (ms) per frame. The duration of the "t" sound in speech is 2-3 ms, and the "b" and "p" are about 5 ms. Vowels and sustained consonants such as "m," "s," and "r" can drag out to 200 ms (Marley, 1982). When single phonemes are signaled, the eyes miss some of the phoneme components of speech. When the TactilEar code was animated as real-time speech coding, using white spots on a black background on 16mm film, viewers could not resolve the short, one-frame sound codes. Frequency presentation fares no better: observing light-emitting diodes (LEDs) driven at frequencies varying between 100 and 300 cycles per second (cps) (periods of 10-3 ms), subjects perceived only continuous light, even subjects bothered by the 60 cps (16.7 ms) flicker of fluorescent lighting.
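The frame-rate arithmetic above can be checked directly. A minimal sketch (the phoneme durations are the representative values quoted above):

```python
# Sketch of the timing arithmetic above: 16mm sound film runs at
# 24 frames per second, so each frame lasts about 41.7 ms, while
# phoneme durations (representative values from Marley, 1982) range
# from a few ms to 200 ms.
frame_rate = 24                       # frames per second
frame_ms = 1000 / frame_rate          # duration of one frame in ms

phoneme_ms = {"t": 2.5, "b": 5, "p": 5, "m": 200}  # representative durations

for sound, dur in phoneme_ms.items():
    relation = "longer" if dur >= frame_ms else "shorter"
    print(f"'{sound}' ({dur} ms) is {relation} than one {frame_ms:.1f} ms frame")
```

A sound shorter than one frame cannot occupy even a single full frame of film, which is consistent with the unresolvable one-frame sound codes reported above.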
Tactile presentation of the speech phoneme code is preferred for three reasons: it spreads the reception mode for speech more fairly among the senses, it enables reception by people with both vision and hearing impairment, and it gets more information to the person's perception processes. When the tactile sense receives the signals in a properly designed device, the animated patterns of sequential phonemes in speech can be distinguished, as can frequency variation between 100 and 300 cps, which can trace the pitch variations in speech and music.
Georg von Bekesy (1967) reports that nerves in the cochlea of the ear receive stimuli at the same frequencies as the tactile reporting nerves in the skin, at between 10 and 400 cps. The mechanism of the ear translates the audio signal frequencies to location-specific areas in the cochlea, thus indicating the sound frequencies by location. This concept of hearing is used in the Saunder's Belt and the Vocoder (Engelmann and Rosov, 1975). However, resolution of speech using only 16-32 stimuli does not match that of ear hearing where stimuli (cochlear hairs) number in the thousands.
In the design of a tactile device, one must take into account von Bekesy's work on inhibition factors in multiple stimulation. This critical spacing of tactile stimuli can easily be verified with two sharpened lead pencils. Our work agrees with von Bekesy's requirement of 1/2 inch spacing between tactile stimulators for two distinct stimuli to be felt. In the TactilEar used in this study, the minimum spacing between proximal stimulators was 7/8 inch.
THE TACTILEAR PHONEME CODE
The code presents two types of information, the sound identification and the related mouthform, on opposing sides of the wrist. This allows spatial holographic tactile patterns with configurations unique to each phoneme in human speech. To add to its information richness, these patterns can be displayed with vibrational frequencies proportional to the pitch or tone of speech or song.
On the watch side of the wrist, two columns of four stimulators each distinguish the vowels in one column and the consonants in the other. Eight patterns are used in each column: each stimulator singly, 1-3, 1-4, 2-4, and 1-2-3-4. The vowel-consonant groupings are distinguished by column change patterns, and the particular phonemes by location along the column and by the total intensity of the stimulation, which is proportional to the number of stimulators in action. Mentally, it is best interpreted in an analogue rather than a digital fashion.
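The eight-pattern scheme for a column can be sketched as follows; this is a reconstruction from the description above for illustration, not the device's actual encoding:

```python
# The eight per-column patterns described above, with stimulators
# numbered 1-4 down a column: each stimulator singly, then 1-3, 1-4,
# 2-4, and all four together. (A reconstruction for illustration,
# not the TactilEar's actual firmware.)
PATTERNS = [
    {1}, {2}, {3}, {4},      # each stimulator singly
    {1, 3}, {1, 4}, {2, 4},  # paired patterns
    {1, 2, 3, 4},            # all four at once
]

# Total intensity is proportional to the number of active stimulators,
# so patterns separate by position along the column and by intensity.
for pattern in PATTERNS:
    print(sorted(pattern), "intensity level:", len(pattern))
```

With eight patterns in each of the two columns, one column for vowels and one for consonants, sixteen distinct single-column signals are available.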
On the inside of the wrist, a 3x3 square matrix defines the mouthform related to the phoneme being presented. This the AutoCuer does not do; the Tadoma method (Braida, 1982), however, does it elegantly. Considering the square matrix as a tic-tac-toe board, winning through the center gives four alternatives. Each direction equates to a consonant-specific mouthform, separating, for example, "s," "m," "t," and "f." The single corners and the center represent mouthforms for "e," "a," "r," "k," and "o," respectively. The consonants give more intense signals with a vector direction to the pattern; the single signals give an imbalancing or centering of the square. With these two matrices aligned parallel on the wrist, the animated signal is believed to be clearly readable with experience.
Drivers for the TactilEar include manually presented speech code using a keyboard; animated speech using 16mm film with a screen embedded with photodiodes that drive the tactile stimulators; and the Marley sound and speech analyzer, presently in the last stages of development. The experiment reported here employed manually coded speech.
THE EXPERIMENT
Eight children who were enrolled in the Oralist Program in the Fairfax County (Virginia) Public Schools and classified as prelingually deaf by their parents served as subjects for the experiment. Four were in the intermediate school (7th and 8th grades), three were in preschool, and one was in first grade. One instructor served all subjects.
Instructional sessions were given on 18 consecutive school days in small rooms provided for the purpose within the two schools. Each child was individually instructed, and tested before and after each instructional session. The 13-15 year olds had 25-minute sessions, allowing five minutes per test and fifteen minutes for instruction. The 3-6 year olds had twenty-minute sessions, reducing the instructional time to ten minutes.
Instruction consisted of presenting vocabulary words. Initial word groups had single-syllable words represented by pictures. Each group had a single vowel with varying consonant patterns. The three groups had the vowel sounds of cat, toe, and key. Eight or nine words were in a group, with the o group containing "boat, bow, bowl, door, four, go, hoe, toad, and toe." Each group was presented twice, once tactilely only and once with a visually coded speech display added to the presentation.
To make double-word groups, the numbers one to six were presented in combination with any of the 26 pictured vocabulary words. For triple-word groups, prepositions were added and illustrated by the placement of the number in relation to the pictured object, as in five on the cat (italics show required choices). This series comprised the first twelve lessons and was used with both groups of subjects. The younger children had great difficulty, so the two groups were given different experiences for the last six sessions.
The last six sessions for the intermediate school youngsters provided a change of vocabulary to colors and geometric shapes. For the younger children, seven real objects were used including "ball, bow, box, car, doll, pen, and pencil." Illness and field trips reduced the number of sessions for these younger children.
Vocabulary words were presented by picture, number placement, or object: the lipread word presentation came first, followed by the phonemically coded word using either the tactile-only or the tactile-and-visual display.
TESTING
ONLY THE MANUALLY CODED WORD SERVED AS THE STIMULI IN TESTING. During the first twelve sessions some of the tests were tactile coding only, others tactile and visual. The last six lessons had tactile only testing. All subjects used the same TactilEar armband.
Testing apparatus, besides the manually driven TactilEar, included a floormat divided lengthwise by a ribbon; card markers identifying subject, date, and tactile or tactile-visual display; nine selected pictures; and, when needed, cards bearing the numbers one through six. To record test performance for delayed grading, an instant film flash camera was used.
As the six test items were sequentially coded to the subject, the subject would choose the appropriate picture and place it on the right side of the ribbon on the floormat below the date-subject information. With multiple-word work (with numbers), the number and its placement would be added to the picture. After the test items were complete, the test item lists presented on cards with large print would be placed beside the subject's answers and the picture taken.
The test content differed between the pretest and posttest.
Pretests—The first three pretests tested the subject's ability to read, read lips, and interpret pictures by matching associated items such as socks and shoes. By the fourth session, each subject had been presented with all 26 pictured words, so nine were selected from the field of 26 items and each test consisted of six of those. These tests tapped long-term retention.
Posttests—Posttests included items from the immediately preceding instructional session and tapped short-term recall. The 13-15 year olds consistently had six items per test, including multiple-word groups such as four over the bowl as one test item. The 3-6 year olds were overwhelmed by nine items, so in the second six sessions they were given fewer items to choose from and fewer test items. In the third six sessions, seven objects were used and tests had four or five items.
RESULTS
Subject performances on tests are compiled in Table 1 for the intermediate school youngsters. Scores are given as one-place decimals, with a perfect score being 6.0. The younger children's scores are given as percentages because the number of test items varied. The first three pretests of related skills were 100% on each test for all intermediate school youngsters but varied among the younger children, so they are provided in Table 2 with other test information. The younger children's performance with pictorial cards was so poor that their number of correct answers out of the total number of test items is given as a raw score. Total time of instruction is included with the score data in both tables.
Two external factors caused variation in the intermediate youngsters' scores: refusal to work with the color series, though there was no color-blindness among the subjects; and lower performance in the last session, because the subjects were kept from a school event by having to participate in the session.
To compare the performance of the younger children fairly with that of the intermediate school youngsters, the latter's scores, given as percentage correct for between 1:15 and 1:45 hours of instruction, are 51%, 78%, 70%, and 63% respectively, in order of each youngster's listing in Table 1. This is the equivalent of the younger children's instruction time before they started using the real objects as test items.
The children had difficulty working with fields of eight or nine items, and they could not complete a six-item test. For three of them, numbers were something to order, not to identify, and the prepositions were beyond their understanding. For the second week and the first half of the third, the research design was in shambles, but with the use of objects to represent the words, adherence to the research design returned. For this reason, the reported test data are limited compared to those available for the intermediate school youngsters.
Scoring of all tests was done from the instant photographs by the instructor and the principal investigator. Agreement between scorers was 96.2%; that is, ten of the 262 tests scored contained a discrepancy. The discrepancies were resolved before the data were analyzed.
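The reported agreement figure is easy to verify, assuming "agreement" means the fraction of the 262 tests with no discrepancy:

```python
# Inter-scorer agreement arithmetic from the paragraph above:
# 10 of 262 scored tests contained a discrepancy.
total_tests = 262
discrepant_tests = 10
agreement = (total_tests - discrepant_tests) / total_tests
print(f"agreement = {agreement:.1%}")  # matches the reported 96.2%
```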
A paired t-test on early and late sums for all eight subjects gives means of 1.88 (SD = 1.40) for early and 3.04 (SD = 1.29) for late, yielding a t-value of 3.26 (df = 7), p < .01. Therefore, learning to listen with the TactilEar code did occur.
DISCUSSION
Eight prelingually deaf children, ages 3-6 and 13-15 years, did use the phonemic code of the TactilEar successfully to listen to and understand the coded single word and, for the older youngsters, up to triple-word series. That scores were higher for multi-syllable items (4.4) than for single-syllable items (3.1) shows promise for the TactilEar in running speech reception.
The manually coded speech can be somewhat slower than normal speech, but with so much variation in pronunciation speed in the United States among individuals, among dialects, and among circumstances, one could fairly say that manually coded speech could be as extended as twice the time of relaxed, normal speech and still be acceptable.
Compared to reported vocabulary learning times for other methods of deaf communication, this meager exposure to a new communication form produced remarkably fast learning, especially in recognizing words and word groups with meaning. The most comparable time-factor study is Oller (1980), who, in one to two hours of instruction with teenage deaf subjects using the Vocoder and lipreading, had all subjects reach 75% accuracy in distinguishing hard-to-lipread word pairs. The probability of one correct answer per item is 50% in Oller's case.
In this study, using the TactilEar code only—NO LIPREADING—the average test score for the total-time interval 1:00-1:45 is 59% for these teenage subjects, and for 2:00-2:45 it is 67%; these figures include 32 tests per increment. The probability of one correct answer on the six-item test with a diminishing field of nine answers is 0.6, and these tests drew on a 37-word vocabulary during those time increments.
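The chance level can be examined with a quick Monte Carlo sketch. The exact chance model behind the reported 0.6 figure is not stated, so the simulation below assumes one random guess per item with pictures removed from the field as they are used; it is an illustrative reconstruction, not the authors' calculation.

```python
import random

# Monte Carlo sketch of ONE possible chance model for the six-item
# test: a subject guesses at random from nine pictures, each picture
# usable only once (a "diminishing field"). This is an assumed
# reconstruction of the chance level, not the authors' own model.
def simulate_test(items=6, field=9):
    choices = list(range(field))
    correct = 0
    for target in range(items):  # targets 0..5 are the coded words
        guess = random.choice(choices)
        if guess == target:
            correct += 1
        choices.remove(guess)    # a picture leaves the field once placed
    return correct

random.seed(1)
trials = 100_000
mean_correct = sum(simulate_test() for _ in range(trials)) / trials
print(f"expected correct answers by chance per test: {mean_correct:.2f}")
```

Under this particular model the chance expectation comes out near the reported level, well below the subjects' observed scores of 3 to 4 correct per test.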
Score entries are pretest/posttest averages; (n) = number of tests.

Age  | Sex | Instr. Hours | All Tests  | Single Word    | Double Word    | Triple Word | Color     | Shape  | Color & Shape
13.5 | F   | 4:00         | 2.62(30)   | 1.8(8)/2.8(5)  | 2.6(3)/3.4(5)  | 0/3.9(2)    | 2(2)/3(2) | -/3(1) | 1.5(1)/3.5(1)
13.5 | F   | 3:45         | 5.07(278)  | 4.5(6)/4.5(5)  | 5.6(3)/5.1(4)  | -/5.2(2)    | 6(2)/6(2) | -/6(1) | 4.5(1)/5.5(1)
14.5 | M   | 4:00         | 2.98(29)   | 2.9(8)/3.0(7)  | 2.8(2)/5.0(4)  | -/4.5(1)    | 1(2)/1(2) | -/6(1) | 1.0(1)/0.5(1)
13.0 | M   | 4:30         | 2.89(33)   | 2.6(9)/3.9(7)  | 4.2(3)/4.6(5)  | -/4.0(2)    | 0(2)/2(2) | -/6(1) | 1.0(1)/2.0(1)
Weighted Averages:        | 3.39       | 2.8/3.5        | 3.9/4.5        | -/4.4       | 2.3/3.0   | -/5.3  | 2.0/2.9

Mono-syllable and multi-syllable test performance, excluding Color and Color & Shape tests:
Mono-syllable test average: 3.1
Multi-syllable test average: 4.4 (this category includes the Shape test)
Probability of one correct answer on a six-item test with nine choices and a diminishing field of answers is 0.6.
Table 1. Intermediate School Youngsters' Test Scores, with 6.0 a Perfect Score
Objects scores are percentage correct with (number of tests); Speech Reading, Picture Matching, and Reading are the three related-skills pretests; the last two columns give early-session raw scores (total correct answers and number of tests).

Age | Sex | Instr. Hours | Time with Objects | Objects Pretest | Objects Posttest | Speech Reading | Picture Matching | Reading | Total Correct | Tests
4   | M   | 2:00         | 0:40              | 31% (3)         | 50% (4)          | 100%           | 100%             | 33%     | 11            | 13
6   | F   | 2:10         | 0:20              | 75% (1)         | 50% (2)          | 100%           | 100%             | 100%    | 29            | 19
4.5 | M   | 2:30         | 0:40              | 33% (3)         | 40% (4)          | 100%           | 100%             | 33%     | 16            | 18
3   | F   | 2:10         | 0:40              | 50% (3)         | 53% (4)          | 67%            | 67%              | 0%      | 23            | 17

The test-score percentages represent the proportion correct of four or five test items, with a field of seven objects to choose from and a diminishing field of choices.
Table 2. Preschool and Primary Children's Test Scores in Percentage
Why is the TactilEar code so readable? Because speech sounds are unambiguously presented and fully defined.
Similar speech recognition has been found with some subjects using Cued Speech (Nicholls, 1979), which also fully defines the speech sound sequences when the speaker's face is in view and the speaker Cues his or her speech or the prototype AutoCuer is used.
With the planned twenty-lesson series of instruction, the TactilEar may offer a new, better, and perhaps faster method of coping with prelingual deafness: it can be used by hearing-impaired, deaf, and deaf-blind individuals; its design lends itself to quantity production; and the subjects involved in this research performed promisingly.
