I’ve been doing a bit of reading to prepare for my next placement in paediatric community health.
In an intriguing paper, Ukoumunne and colleagues pushed the ELVS data through a statistical machine (a longitudinal latent class analysis) to discern discrete trajectories of language impairment and precocity. The best-fitting model had five average paths for children's language between 1 and 4 years:
- Typical: 68.5% of children stayed pretty average
- Precocious (late): These children started typical, but were more likely to outpace their peers from 24 months.
- Impaired (early): These children were impaired up to 12 months, but were typical by 2-4 years.
- Impaired (late): Typical development to 2 years, with impairment from then to 4 years.
- Precocious (early): Likely precocity early on with a return to typicality at 4 years.
The authors are keen to stress that these paths are averages, and do not predict what individual children might achieve. To apply this analysis in a clinical context would be to commit the Ecological Fallacy, “where inferences about the nature of individuals are deduced from inference for the group to which those individuals belong”.
Suppose there is a suburb called “Exampletown” with 400 residents who each earn $60,000/annum, except for one who earns $50 million/annum (an extreme example, but good for making the point!). The average income for Exampletown is $184,850/annum, while the median is obviously $60,000. Say you read a list of suburbs ranked by average income and then meet someone from Exampletown – you might assume they are quite wealthy, because you know their suburb has an extremely high average income. But they almost certainly aren't! You have committed the ecological fallacy: you have applied an inference about a group to an individual member of that group.
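The arithmetic is easy to check; a few lines of Python reproduce the mean and median from the figures above:

```python
# Exampletown: 399 residents earning $60,000 and one earning $50 million
incomes = [60_000] * 399 + [50_000_000]

mean = sum(incomes) / len(incomes)
# With 400 sorted values, both middle values are $60,000, so either is the median
median = sorted(incomes)[len(incomes) // 2]

print(f"mean:   ${mean:,.0f}")    # mean:   $184,850
print(f"median: ${median:,.0f}")  # median: $60,000
```

A single extreme earner drags the mean far above what any typical resident earns, which is exactly why the suburb-to-individual inference fails.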
Application to Language Delay
While the model above is seductive, it is also true that 6% of the Typical group will be impaired at 4 years, and that 52% of the Impaired (late) group are typical at 4 years. Because the Typical group is so much larger than the others, this 6% actually represents 55% of the total number of impaired children at age 4.
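A back-of-the-envelope calculation shows how that works (the 68.5% and 6% figures are from above; the implied overall impairment rate is my inference, not a number reported in the paper):

```python
# Share of all children on the Typical path, and the share of that
# group who are nonetheless impaired at age 4 (figures from the post)
typical_share = 0.685
impaired_within_typical = 0.06

# Fraction of the whole cohort impaired at 4 despite a Typical trajectory
from_typical = typical_share * impaired_within_typical
print(f"{from_typical:.1%}")  # 4.1% of all children

# If that group accounts for 55% of impaired 4-year-olds, the implied
# overall impairment rate at age 4 is:
overall_impaired = from_typical / 0.55
print(f"{overall_impaired:.1%}")  # about 7.5%
```

A low misclassification rate in a very large group can still outnumber the impaired children from every other trajectory combined.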
People might say that the data shows that children are too variable to allow clinicians to make the call to intervene before 4 years. After all, they might improve! Or we could be taking away resources from children who will need them when their impairment manifests. It’s a tricky problem.
I have a small issue with people using the ELVS data to make clinical decisions (if they do). The study was not designed to look at the efficacy or efficiency of early language intervention. It was an observational study: it cannot answer those questions.
It’s a rather unsatisfying conclusion, but I’ll be interested to report on what my next clinic does. All community health services suffer from chronic under-resourcing, so decisions need to be made about priorities. Are these decisions being made with reference to efficiency/efficacy data, or observational/epidemiological data?
Ukoumunne, O. C., Wake, M., Carlin, J., Bavin, E. L., Lum, J., Skeat, J., . . . Reilly, S. (2012). Profiles of language development in pre-school children: A longitudinal latent class analysis of data from the Early Language in Victoria Study. Child: Care, Health and Development, 38(3), 341-349. doi: 10.1111/j.1365-2214.2011.01234.x
I just finished an intensive fluency placement at university. We worked in pairs with adult clients who stutter, nine hours a day for five days, teaching them the Smooth Speech technique. It was intense: at a conservative estimate, I provided over two thousand verbal corrections to my assigned client (whom I'll anonymize as X, using the gender-neutral pronoun 'they'). Several of the clients spoke multiple languages, so I thought it would be interesting to take a quick look at the state of the literature on bilingual stuttering, then present an anecdote about a complication of Smooth Speech therapy in multilingual clients.
- Do bilinguals stutter more? I think it's safe to say no. An ELVS paper on stuttering (perhaps the best designed/controlled study of its type) found that neither bilingualism nor speaking a language other than English predicted stuttering (n > 1500) (Reilly et al., 2013).
- Are bilinguals less likely to recover? This is more controversial. One study of 38 children found bilingualism to be a risk factor for the persistence of stuttering. This seems suspect to me, if only because it should be easy to check whether stuttering is more prevalent among adults in linguistically rich countries where many people are bilingual (say, Switzerland) than in resolutely monolingual countries (like Australia).
- Can monolingual clinicians treat bilingual clients? Here I will defer to my anecdotal experience in the next section of this post.
It’s difficult to say too much about client X without revealing confidential information. Suffice to say that they had stuttered from early childhood, and spoke five languages: three widely spoken Indo-Aryan languages, Arabic and English.
We began by taking a detailed language history. We asked X which languages they spoke, where they had learned them, when they used them today, whether they understood/spoke/read/wrote better in any, and whether their stuttering was better in any of the languages. X reported that they spoke most fluently in English, but thought this was because they spoke English at home and work, and only used their other languages for phone calls home. X stuttered in each language, and the loci of the stuttering appeared to be common (word-initial glides, stops and fricatives). We did not take an initial rating in X's other languages, as X believed delivering monologues in these languages was not representative of their usual verbal requirements.
I did a little research into the phonology of X’s languages (Wikipedia normally has great summaries). X’s primary language distinguishes four voicing types for each stop:
- tenuis, as /p/, which is like ⟨p⟩ in English spin
- voiced, as /b/, which is like ⟨b⟩ in English bin
- aspirated, as /pʰ/, which is like ⟨p⟩ in English pin, and
- murmured, as /bʱ/. [according to Wikipedia]
English distinguishes two, which are generalised to voiced and unvoiced, although English stop allophones encompass many more possibilities.
X's stops in English were tense and explosive, contributing to the characteristic 'choppy' sound of the speech stream associated with Indo-Aryan speakers of English. This presented a problem for the Smooth Speech treatment, which relies on gentle onsets: for my client, using gentle onsets in their primary language would cause the stops to sound murmured, possibly changing the meaning of the word. Much of the week involved softening and elongating X's utterance-initial syllables, something they found quite difficult due to the cross-linguistic interference. Tasks were completed in all of X's languages, and luckily there were other clients who could converse with them in multiple languages (we clinicians felt fairly linguistically inadequate by comparison).
By the end of the week, X had found a happy medium, in both their primary language and English, between changing the meaning of the word and sounding 'explosive', and the result was rewarding for them to see.
- Reilly, S., Onslow, M., Packman, A., Cini, E., Conway, L., Ukoumunne, O. C., . . . Wake, M. (2013). Natural history of stuttering to 4 years of age: a prospective community-based study. Pediatrics, 132(3), 460-467. doi: 10.1542/peds.2012-3067
A throwaway remark by my Audiology lecturer caught me by surprise. She said that she sits behind children when training them after they receive a cochlear implant in order that they not rely on visual cues when learning to discriminate speech sounds.
Children and adults with hearing loss cannot simply be fitted with an aid or an implant and then walk away ready to hear. They need specific (and in some cases extensive) training in perceiving speech sounds, just as hearing people do as babies. However, deaf educators and audiologists are divided about the best way to train listening in this population:
- Auditory-Verbal: No sign language, no visual-cues (i.e. lip-reading) – the child must learn to listen solely through the use of the acoustic signal.
- Auditory-Oral: Children can use lip-reading and contextual cues as well as listening to crack the speech signal.
Of course there are many shades between these two approaches, and countless other approaches to deaf education. Perhaps I’ll just link to the ASHA Evidence Map…
On the other side of the SP range of practice…
I thought this was interesting because it conflicts with how we do phonological therapy in speech pathology, where children who cannot distinguish phonemes are encouraged to perceive both the articulatory and acoustic differences in the sounds. A similar 'debate' exists in aphasia rehabilitation. One approach is Constraint-Induced Language Therapy (CILT), where clients are restricted to verbal output (no gesture, writing or drawing) through the use of physical screens. A systematic review (Cherney et al., 2008) found large effect sizes, but since the therapy was also intense, it remains unknown whether 'constraint' is an important aspect of the treatment.
However, there is another school of thought, which claims clients should be able to draw on any residual communication in any form. Such an approach is found in Multi-Modality Aphasia Therapy (MMAT), which a pilot study found to be as efficacious as CILT (Rose et al., 2013). An RCT is in the works.
The danger of single-treatment studies
When your study has only one treatment, even if it has a control, it is impossible to say whether that treatment is the one that should be used. If clinicians are to choose the best treatments (which we are not presently required to do; all we are required to do is use evidence-based treatments), we need more comparative studies like the MMAT pilot and fewer single-treatment studies like the CILT review. Two other examples spring to mind:
- Literacy: Is Reading Recovery more efficacious than a Systematic Phonics program?
- Fluency: Is Demands and Capacities Therapy more efficacious than the Lidcombe Program?
(There is research on the second question finding equal efficacy (Franken et al., 2005), but the Lidcombe Program was administered for only 12 weeks, in defiance of best practice and the prescribed standards for its implementation.)
When we do therapy, we cannot point to research that proves that every component of the intervention (like constraint) is directly related to a result. Perhaps if we removed it, the treatment would still work. This is where a solid theoretical framework helps. If what I’m doing is reasonable given what I know about the body and the brain, I think I’m a lot more comfortable, even if a small aspect of my intervention hasn’t been checked by an RCT.
- Cherney, L. R., Patterson, J. P., Raymer, A., Frymark, T., & Schooling, T. (2008). Evidence-based systematic review: Effects of intensity of treatment and constraint-induced language therapy for individuals with stroke-induced aphasia. Journal of Speech, Language, and Hearing Research, 51, 1282–1299.
- Rose, M. L., Attard, M. C., et al. (2013). Multi-Modality Aphasia Therapy Is as Efficacious as a Constraint-Induced Aphasia Therapy for Chronic Aphasia: A Phase 1 Study. Aphasiology, 27(8), 938-971.
- Franken, M. C., Kielstra-Van der Schalk, C. J., & Boelens, H. (2005). Experimental treatment of early stuttering: A preliminary study. Journal of Fluency Disorders, 30(3), 189-199.